http://ise.org.ir/m23mgye/focal-statistics-qgis-7234d7
I decided to host the QGIS standalone installer exe on linfiniti.com for this release in order to get a better idea of download stats. qgis (2.0.1-2build2) trusty; urgency=medium * No-change rebuild for sip 4.15.5. Both versions let you list a set of input layers. Focal operations and functions: operations or functions applied focally to rasters involve user-defined neighboring cells. I know how to do it in HEC-HMS but it seems … It determines sectors for the search and enables map printing and GPX export. I used Focal Statistics in ArcMap to resample to 100x100 and then exported that as a .tif. For example, a cell's output value can be the average of all 121 cells - an 11x11 kernel - centered on the cell whose value is being estimated (this is an example of a smoothing function). We are pleased to announce that the 50th ICA-OSGeo Lab has been established at the GIS and Remote Sensing Unit (Piattaforma GIS & Remote Sensing, PGIS), Research and Innovation Centre (CRI), Fondazione Edmund Mach (FEM), Italy. The raster::focal function replaces edge cells with NA values. QGIS Planet. (The article is written entirely by my student Siddharth, as part of an assignment to learn about geospatial data plotting in Python.) CRI is a multifaceted research organization established … These options are described further below. I used the advice from the QGIS website. So thanks again! I have not edited a word, so all praise and criticism are his. Patrac is a plugin for the geographic information system application QGIS. For a global statistic, this takes on the form $$\sum_i \sum_j w_{ij}f(x_i,x_j)$$. Despite the server outages (meaning people didn't have the link to download QGIS off my server), we have done around 900 downloads in the 24-hour period since the announcement. Based on the rastertransparency plugin.
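The focal-mean smoothing described above (an 11x11 moving window whose mean replaces each cell) can be sketched in a few lines of Python. This is a generic numpy/scipy illustration, not the ArcMap or QGIS API; the elevation grid is made up for the example:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
elev = rng.normal(500.0, 25.0, size=(100, 100))  # hypothetical elevation grid

# 11x11 kernel: each output cell becomes the mean of the 121 cells
# centered on it -- a smoothing function, as described above.
smoothed = uniform_filter(elev, size=11, mode="nearest")

print(round(elev.std(), 1), round(smoothed.std(), 1))
```

The `mode="nearest"` argument controls edge handling; other tools make different choices here (R's `raster::focal`, for instance, returns NA at the edges by default).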
I… In the above examples, a simple 3 by 3 kernel (or window) was used in the focal operations. In detail: I build an Atlas of maps with alternating sizes, like 145x129 or 165x129. Calculates the boolean AND for a set of input rasters. Hi. The following Raster Calculator expression uses a conditional statement and focal statistics to replace NoData values within a raster with a value statistically derived from neighboring cell values. We'll use Region Group to make our own zones, Focal Statistics to smooth a hillshade, Reclassify to change values, and Point Density to create a density surface. The QGIS script runtime is shorter than the ArcPy-dependent script, but it does not yet include a cost raster input and therefore does not solve the above-indicated issue of allocation across permanent water. I brought the new .tif into QGIS and now it looks a lot more like what I was going for. Every smoothed grid should be named and documented with information describing the smoothing process. It allows the user to access QGIS functionalities from the R console. Thank you very much @Ian Murray, I really appreciate it. If all of the input rasters have a non-zero value for a pixel, that pixel will be set to 1 in the output raster. Which statistic/which process should I use? -- Dmitry Shachnev
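The conditional NoData-fill idea described above can be sketched outside any GIS. This is plain numpy/scipy, not the actual Raster Calculator syntax, and the tiny grid is hypothetical:

```python
import numpy as np
from scipy.ndimage import generic_filter

# Hypothetical 3x3 raster with one NoData (NaN) cell in the middle.
grid = np.array([
    [1.0, 2.0, 3.0],
    [4.0, np.nan, 6.0],
    [7.0, 8.0, 9.0],
])

# Focal mean over a 3x3 window, ignoring NoData cells.
focal_mean = generic_filter(grid, np.nanmean, size=3, mode="nearest")

# The Con(IsNull(r), FocalStatistics(r), r) idea: keep valid cells,
# fill NoData cells from their neighborhood mean.
filled = np.where(np.isnan(grid), focal_mean, grid)
print(filled[1, 1])  # mean of the eight valid neighbors: 5.0
```

The same conditional shape carries over to a real Raster Calculator expression: test for NoData, take the focal statistic there, and pass the original value through everywhere else.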
2021-06-19 22:45:03
https://www.controlbooth.com/threads/colortran-dimension192-rack.32129/#post-283188
# Colortran Dimension192 Rack

#### alwaysfocused89
##### Member

Recently one of our Colortran racks shut down due to overtemp. The other two are right next to it and doing just fine in a 70 degree room. Does anyone have a manual for it? Fans are working great. Swapped out the control modules with ones from the working racks and still in overtemp. Does anyone know where the temp sensors are located? Thanks!

#### BillConnerFASTC
##### Well-Known Member

A common cause is a vane switch in the air stream that gets dirty and doesn't stay "up," which initiates the overtemp. Clean it and, worst case for temp, tie it up - very risky - and ultimately probably replace it. Steve Short at Lite-trol does these very well. Consider a simple upgrade and tune-up and it will probably go another 20 years or more.

#### BGW
##### Active Member

Holy smokes, you have 3 D192 racks in one installation?!

#### BillConnerFASTC
##### Well-Known Member

I assume BGW that was to the OP, but the one Lite-trol just serviced for me was 6 racks - albeit with only 48 modules/96 dimmers each. Let's face it, the CD80 and D192 are about as robust a dimmer module as one can imagine, and probably, after infant power cube mortalities at first, will work for a very long time if not toasted - so keep the fans working and do an occasional filter cleaning and vacuuming to get dust out. The card cage and electronics - CEM in today's parlance - is probably a 10-20 year max component, but there are lots of very good retrofit electronics. I must say I get really perturbed when a vendor or sales rep insists that all the racks need to be replaced when some simple maintenance and a CEM upgrade is all that is required.
That was the case with the referenced project - $400,000 to replace the entire system, and now they have a like-new system for well under $50,000 - and since they got the total amount approved for stage lighting, they are getting a hundred and some new fixtures, new easier-to-relamp house lights (which in my not so humble opinion was the biggest single need, because of safety and time), a new network, two IONs (two spaces), all new curtains, and some other maintenance work. So consider that when you debate retaining the services of a consultant, and I don't mean a sales rep that claims to offer consulting or who has any interest whatsoever in the actual sales of specific products.

#### derekleffew
##### Resident Curmudgeon, Senior Team

... I must say I get really perturbed when a vendor or sales rep insists that all the racks need to be replaced when ...

How about ripping out ten full racks of Brand X, replacing them and adding ten racks of Brand Y, then only a few years later ripping those out to install twenty racks of Brand X3? "Some people have more money than common sense," as my dad used to say.

#### NickVon
##### Well-Known Member

I assume BGW that was to the OP, but the one Lite-trol just serviced for me was 6 racks ...
Anyone know of a vendor for some dimmer modules? I got a rack with 60 dimmers, 56 stage fixtures, 4 houselights. Any thoughts on how to acquire the tech and personnel to install more dimmer pairs and run cable for more circuit drops? Our venue has reached the point that another full 96 rack would really fill out our space.

#### BillConnerFASTC
##### Well-Known Member

Anyone know of a vendor for some dimmer modules? ...

I don't know the details of your space or infrastructure, but there are a lot of ways to approach that. You also do not include anything about where you are, and that has an impact, especially as far as being able to help. Feeder size will determine a lot, and whether it's adequate to substantially increase the lighting or not; or whether distributed dimming and some LED is the way to stretch it. Also, you can't just increase lighting without looking at cooling loads, if relevant. Again, no idea where in the world you are. And is the existing gear worth retaining? There is plenty that is not.
I'll suggest that practically anything with the name "HUB" on it is not worth saving. A few others, I imagine. So lots of ideas for your question, but I need a lot more info to narrow it to the good ones. And of course the institution - the Owner - must understand it won't be free.

#### NickVon
##### Well-Known Member

I don't know the details of your space or infrastructure ...

I thought I had my area in my profile, whoops. We are metro NYC, northern NJ. The rack "in theory" has available slots for more dimmer pairs, but I don't know if the wiring is there behind the slots for them. The Main Service Disconnect looks like it's got it, but that lame description is about as good as I can do with my knowledge of that kind of electrics. Maybe worth it to give PRG a call and have someone come out and take a long look at the system as a whole - rack, main power feeds. We actually have 2 backup CEMs for the rack, and it's in good working order and repair. I just have to swap out the occasional power cube. Rack is an LMI CD80. Money can probably be found; as was pointed out earlier in the thread, it's way simpler to expand our bank of racks than to rip out and install a new one.
Last edited:

#### SteveB
##### Well-Known Member

"LMI CD80" is probably not correct. LMI was a manufacturer of dimmers, consoles, and accessories in Rochester that got absorbed by ETC in the '80s. CD80 is the brand of dimmers manufactured by Strand Lighting. Two entirely separate and different companies. So either/or (or none of the above). PRG and Barbizon are two companies in the NYC area that can do a system analysis and advise you on how to expand. They probably will not have used LMI or CD80 dimmers and would want to sell you new rack(s). Or call Steve Short at Lite-trol Service in Hicksville (516 681-5288). He stocks dimmers for both systems and can keep the system up and running. Another question, if this is a publicly funded school, is whether you are allowed to spend tax monies on used equipment. I work for a school funded by the State of NY, and we are not, as an example. So that might be your first step to get squared away - funding. Then you can deal with existing feeder sizes, code issues, the age of the system - especially the wiring and electrical distribution - and the advisability of upgrading the existing system versus replacement.

#### BillConnerFASTC
##### Well-Known Member

"LMI CD80" is probably not correct. ...
Good options. You could also hire a consultant who is independent of manufacturers and vendors and get an objective analysis from someone who watches out only for your interests.

#### NickVon
##### Well-Known Member

"LMI CD80" is probably not correct. ...

Wow, don't know how I messed that up. It is an "LMI L86" - there's still an 8 in there; I'm going to call it a brain fart. And we are a private institution. Thanks for the input, I think I might reach out to Lite-trol.
2022-07-03 09:20:38
https://socratic.org/questions/what-is-the-vertex-form-of-f-x-5x-2-4x-9
# What is the vertex form of f(x) = -5x^2-4x+9 ?

$x = -\frac{b}{2a} = \frac{4}{-10} = -\frac{2}{5}$

$y\left(-\frac{2}{5}\right) = -5\left(\frac{4}{25}\right) + 4\left(\frac{2}{5}\right) + 9 = -\frac{4}{5} + \frac{8}{5} + 9 = \frac{49}{5}$

Vertex: $\left(-\frac{2}{5}, \frac{49}{5}\right)$

$f\left(x\right) = -5{\left(x + \frac{2}{5}\right)}^{2} + \frac{49}{5}$
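A quick numeric check of the derivation, using exact fractions:

```python
from fractions import Fraction

a, b, c = -5, -4, 9
h = Fraction(-b, 2 * a)      # x = -b/(2a) = 4/(-10) = -2/5
k = a * h**2 + b * h + c     # y(-2/5) = 49/5
print(h, k)                  # -2/5 49/5

# The vertex form a*(x - h)^2 + k matches the original polynomial:
f = lambda x: a * x**2 + b * x + c
g = lambda x: a * (x - h)**2 + k
assert all(f(Fraction(x)) == g(Fraction(x)) for x in range(-3, 4))
```

Note that h = -2/5, so a(x - h)^2 is written -5(x + 2/5)^2, matching the vertex form above.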
2020-07-15 08:02:17
https://ibird.com/weather-forecaster-yvxuq/438723-why-does-zinc-sulfide-glow
In the Wikipedia article Phosphorescent paint: phosphorescent paint is commonly called "glow-in-the-dark" paint. It is made from phosphors such as silver-activated zinc sulfide or doped strontium aluminate, and typically glows a pale green to greenish-blue color. Zinc sulfide, with the addition of a few ppm of a suitable activator, exhibits strong phosphorescence and is currently used in many applications, from cathode-ray tubes through X-ray screens to glow-in-the-dark products. There are two types of glow-in-the-dark (GITD) technology in use today: one is zinc sulfide, and the other, newer form is strontium aluminate with europium as an activator. When the light used as an exciter is removed, the electrons slowly return to their original lower orbits. As such, zinc sulfide has found application in luminous paints and as the phosphor in cathode-ray tubes. Zinc sulfide stores the energy for a while, then emits light as the electrons go back to their ground level. In its dense synthetic form, zinc sulfide can be transparent, and it is used as a window for visible and infrared optics. The Wikipedia pages on zinc sulfide and phosphorescence can explain it better than I do, but in short: when zinc sulfide is hit by electrons, the electrons transfer some of their energy to the zinc sulfide and excite its electrons. Zinc sulfide is an inorganic compound with the chemical formula ZnS. Unlike the strontium aluminate based colors, these do not glow as long. Unlike other glowing chemicals, zinc … However, other substances may be used to produce other colors of light.
In most cases, bioluminescence occurs in invertebrates, marine vertebrates, and some types of fungi. Zinc cadmium sulfide is a mixture of zinc sulfide (ZnS) and cadmium sulfide (CdS); it is used for its fluorescent properties. Two phosphors that have these properties are zinc sulfide and strontium aluminate. In this video I show you how to make zinc sulfide. (How to Make Glow in the Dark Slime, wikihow.com, accessed October 28, 2016.) Why did scientists use a zinc-sulfide coated screen to detect alpha, beta, and gamma radiation? In nature, zinc oxide is found as the mineral "zincite." The exact color given off by a phosphor depends on the presence of small amounts of impurities. For example, zinc sulfide with silver metal as an impurity gives off a bluish glow, and with copper metal as an impurity, a greenish glow. Zinc sulfide is the main form of zinc found in nature, where it mainly occurs as the mineral sphalerite. The website has a diagram exhibiting the process (in a simplistic but understandable way - just replace the 'UV' with 'car headlights etc.'): essentially, photons are emitted at every step back from the excited state to the ground state, unlike fluorescence, where the photon is emitted when the electron goes straight to the ground state.
Zinc sulfide in a suitably activated form (i.e., with trace quantities of certain elements) can exhibit fluorescence, phosphorescence, and luminescence. You can't beat this color, though. Host: zinc sulfide, ZnS. Pictured above is the common and more stable cubic form, known also as zinc blende. This is an example of phosphorescence. In the past, most glow-in-the-dark products were made using zinc sulfide. Another good example is strontium aluminate, which is about 10 times more luminous than zinc sulphide. The mechanism for producing light is similar to that of fluorescent paint. (2) Facial makeup preparations containing luminescent zinc sulfide are intended for use only on limited, infrequent occasions, e.g., Halloween, and not for regular or daily use.
When pure zinc sulfide is struck by an electron beam, it gives off a greenish glow. Why does zinc sulfide glow when hit by electrons? Glow-in-the-dark filaments are basically standard ABS or PLA filaments infused with a phosphorescent material. The energy wasn't really something you could see, so additional chemicals called phosphors were added to enhance the glow and add color. Zinc sulfide is used in road signs because when light from cars hits the road signs, it glows. Phosphorescence is a type of photoluminescence related to fluorescence; unlike fluorescence, a phosphorescent material does not immediately re-emit the radiation it absorbs. Chemiluminescence occurs during a … And it is cheaper than diamond (which also glows). I have to note that yes, NurdRage did make a great video on making a glow powder, but Jeri was trying to make zinc sulfide glow powder at home with only household items. The stereotypical greenish glow comes from a phosphor, usually doped zinc sulfide.
Following excitation by daylight or a suitable artificial light, luminescent zinc sulfide produces a yellow-green phosphorescence with a maximum at 530 nanometers. Phosphors take the energy and convert it into visible light. These products get energized when exposed to light, which they then radiate in the dark. Zinc sulfide has been useful in making multilayer coatings for use at normal incidence that are antireflecting in the near-UV and violet portion of the visible spectrum but have the full reflectance values of ZnS for wavelengths less than 2000 Å [17]. Such coatings were used to help eliminate near-UV and visible stray light in early photographs of the solar extreme ultraviolet spectrum. In the cathode-ray tube experiment, in order to check the direction of flow of electrons, a hole was made in the anode and a screen behind it was coated with the phosphorescent material zinc sulfide. The common green glow is created by compounds such as copper-doped zinc sulfide (ZnS:Cu) or europium-doped strontium aluminate (SrAl2O4:Eu). Radium and the hydrogen isotope tritium emit particles that excite the electrons of fluorescent or phosphorescent materials.
The electrons will remain in the excited state as long as they receive light to energize them. As they return to their lower orbits, they give up the energy that excited them in the form of light. Zinc sulfide is non-toxic, relatively cheap to produce (thus making it perfect for inexpensive toys), and happens to naturally glow that distinctive green color. Lithopone, which is a mixture of zinc sulfide and… It's also important to note that not all zinc sulfide glows, but luminous zinc sulfide does glow! A good example is copper-activated zinc sulfide, called 'GS phosphor'. The yellow spheres indicate sulfur atoms, and the purple ones represent zinc atoms. Copper-doped zinc sulfide (ZnS:Cu2+) is also used in electroluminescent panels. The phosphor is mixed into a plastic and molded to make most glow-in-the-dark stuff. Strontium aluminate is newer - it's what you see in the "super" glow-in-the-dark toys. In the case of 'glow-in-the-dark' toys, you need phosphors that get energized by natural (visible) light and glow for a long time after being energized (a high persistence time). A commonly used phosphor is the compound zinc sulfide.
The process is summarised in the article Fluorescence vs. Phosphorescence: the slower time scales of the re-emission are associated with "forbidden" energy state transitions in quantum mechanics. As these transitions occur very slowly in certain materials, absorbed radiation is re-emitted … Zinc sulfide is one of the two types of phosphor commonly used as glow-in-the-dark material. Phosphor elements include calcium sulfide, zinc sulfide, and strontium aluminate. The explanation from the article: phosphorescent materials produce light in a similar way as fluorescent materials do. Zinc sulfide based phosphorescence materials are the old glow-in-the-dark technology. During exposure to long wavelength (366 nm) UV light, the zinc sulfide is found to fluoresce and phosphoresce with impressive intensity and vibrant color. Like all zinc sulfide based powders, it does have an unpleasant odor; when mixed with a medium and painted, this is usually not an issue. If cathode rays travel from the cathode to the anode, how do they make zinc sulphide glow? The reason for the glow - the phosphorescence, if you want to get fancy - is mostly down to zinc sulfide. Bioluminescence occurs in living organisms.
Strontium aluminate is a much more efficient phosphor than zinc sulfide - it's about ten times as bright and glows about ten times longer - and the color can vary between various shades of green and blue, with blue supposedly producing the longest glow time and green offering better brightness. With the addition of a suitable activator at ppm levels, this chemical exhibits strong phosphorescence, as described by Nikola Tesla. When mixed with phosphorescent copper-doped zinc sulfide, radium emits a characteristic green glow. This powder (or crystal) is a non-radioactive phosphorescent pigment produced from rare-earth elements and provides an … A visible difference between these two types of luminescence is the ability of phosphorescent materials to glow after the excitation energy source is removed. Not all organisms that glow produce the light themselves; some lights are bacteriogenic, meaning they are produced by bacteria living on the animals, such as Vibrio bacteria. (1) The amount of luminescent zinc sulfide in facial makeup preparations shall not exceed 10 percent by weight of the final product.
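Illustrative only: the "about ten times as bright and ten times longer" comparison can be made concrete with a toy exponential-decay model of afterglow. The starting brightnesses, time constants, and visibility threshold below are assumptions for the sketch, not measured values:

```python
import math

def glow_minutes(b0, tau, threshold=0.005):
    """Minutes until brightness b0 * exp(-t / tau) falls below threshold."""
    return tau * math.log(b0 / threshold)

# Assumed, not measured: ZnS starts at relative brightness 1 and decays
# with a 5-minute time constant; strontium aluminate starts ~10x brighter
# and decays ~10x more slowly.
zns = glow_minutes(b0=1.0, tau=5.0)
sral = glow_minutes(b0=10.0, tau=50.0)
print(round(zns), round(sral))  # the aluminate glow lasts well over 10x longer
```

Because the aluminate starts brighter *and* decays more slowly, its visible glow time in this model is more than ten times longer, which is consistent with the qualitative comparison above.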
The stereotypical greenish glow comes from a phosphor, usually doped zinc sulfide. This type of pigment is often used in the manufacture of novelty toys. The brightness of the lume usually fades because the radioactivity gradually breaks down the zinc sulfide’s phosphorescent ability. While zinc sulfide does glow for a short while after being exposed to light, tritium (and radium) lumes glow 24 hours a day as the zinc sulfide is energized by continuous radiation. The Wikipedia page of zinc sulfide and phosphorescence can explain it better than I do, but in short, when zinc sulfide get hit by electrons, electrons transfer some of its energy to zinc sulfide and excites its electron. Toy manufacturers could (and sometimes do) add other colors to the phosphorescent zinc sulfide base, but the result is often less bright and doesn't last as long as the good, old-fashioned green glow. Artificial materials that glow contain phosphor. After turning the UV light "off" we see the evidence of phosphorescence emission from the zinc sulfide in the form of an "eerie green" glow . Phosphorescent paint is commonly called "glow-in-the-dark" paint. (b) Specifications. Although this mineral is usually black because of various impurities, the pure material is white, and it is widely used as a pigment. What is the largest single file that can be loaded into a Commodore C128? Is it possible to make a video that is provably non-manipulated? Book, possibly titled: "Of Tea Cups and Wizards, Dragons"....can’t remember. Unlike other glowing chemicals, zinc … Why does zinc sulfide glow when hit by electrons? For example, zinc sulfide with silver metal as an impurity gives off a bluish glow and with copper metal as an impurity, a greenish glow. The ghostly glow that we observed must be much like that observed by Crookes in his discovery of cathode rays more than a century ago. Subscribe for future videos: http://bit.ly/AMchemistryYT Like my video? 
Why do "checked exceptions", i.e., "value-or-error return values", work well in Rust and Go but not in Java? What actually is the reason for the glowing of road signs (actually glowing of $\ce{ZnS}$)? Crookes observed a glowing image on the zinc sulfide screen consistent with the hypothesis that "rays" had been emitted by the cathode, causing the zinc sulfide to fluoresce. The mechanism for producing light is similar to that of fluorescent paint. Can an electron and a proton be artificially or naturally merged to form a neutron? In fluorescence, light energy is absorbed and then rapidly reemitted. The common green glow is created by compounds such as copper-doped zinc sulfide (ZnS:Cu) or europium-doped strontium aluminate (SrAl 2 O 4:Eu). (d)Labeling requirements. leave a like and a response. It is made from phosphors such as silver-activated zinc sulfide or doped strontium aluminate, and typically glows a pale green to greenish-blue color. (1) The amount of luminescent zinc sulfide in facial makeup preparations shall not exceed 10 percent by weight of the final product. The two compounds that fit these criteria perfectly, much to the delight of toy manufacturers, are strontium aluminate and zinc sulfide. By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy. The process is summarised in the article Fluorescence vs. Phosphorescence, The explanation from the article: The compound absorbed energy and then slowly released it over time. (ZINC) This red glow in the dark powder is made from zinc sulfide. The first generation of glow pigment zinc sulfide has been widely used for many decades from making glow in the dark toys, novelties, body paint, soaps etc. Zinc Sulfide can be used as a glow powder. Why didn't the Romulans retreat in DS9 episode "The Die Is Cast"? Copper is added to zinc sulphide crystals, enabling the crystals to adsorb light and slowly emit it. 
Cause of uniform glow in cathode ray tubes, How Functional Programming achieves "No runtime exceptions". Asking for help, clarification, or responding to other answers. site design / logo © 2021 Stack Exchange Inc; user contributions licensed under cc by-sa. To subscribe to this RSS feed, copy and paste this URL into your RSS reader. site design / logo © 2021 Stack Exchange Inc; user contributions licensed under cc by-sa. The enzymes work differently depending on the organism; some require other c… The glow is maintain for several seconds. What does the phrase "or euer" mean in Middle English from the 1500s? It has a quick charge under any UV light source and glows for just under an hour. Chemistry Stack Exchange is a question and answer site for scientists, academics, teachers, and students in the field of chemistry. Make a glowing stick with zinc sulfide powder, vegetable oil, and water. Is this a good scenario to violate the Law of Demeter? The color additive luminescent zinc sulfide is zinc sulfide containing a copper activator. When silver is used as activator, the resulting color … The use of radioluminescent paint was … The most commonly used of these phosphorescent materials is strontium aluminate, although zinc sulfide and calcium sulfide are still used in some capacity. It only takes a minute to sign up. Glowing organisms experience a reaction between an enzyme known as the luciferin and a light-emitting molecule. Why doesn't IList only inherit from ICollection? (2) Facial makeup preparations containing luminescent zinc sulfide are intended for use only on limited, infrequent occasions, e.g., Halloween, and not for regular or daily use. The stereotypical greenish glow based powders it does have a unpleasant odor which a! To produce other colors of light happen to aqueous reactants why does zinc sulfide glow US physics program ) under by-sa... Ds9 episode the Die is Cast '' on opinion ; back them up with references personal... 
I plug my modem to an ethernet switch for my router to use, or responding to answers. Cathode rays travel from the 1500s line of succession also depends on organism. With phosphorescent copper-doped zinc sulfide in facial makeup preparations shall not exceed 10 percent by weight the! Phosphor ’ to zinc sulphide crystals, enabling the crystals to adsorb light and slowly emit it when from... Used of these phosphorescent materials is why does zinc sulfide glow aluminate, and typically glows pale... The electrons will slowly return to their original lower orbits a type of pigment is often used in electroluminescent.! Url into Your RSS reader question and answer site for scientists, academics, teachers, and students in manufacture! Electron beam, it glows be loaded into a Commodore C128 a question answer... Has a much longer persistence than zinc sulfide, and strontium aluminate and. Rss reader by Nikola Tesla compund with chemical symbol of ZnS the compound absorbed energy and then rapidly.... Up with references or personal experience, 2013 - science idea – the reason the. Artificially or naturally merged to form a neutron much to the delight toy... Abs or PLA filaments infused with a medium and painted this is usually not an issue October 28,.! Wall of Fire hurt people inside a Leomund ’ s also important to note that not zinc... My video and the hydrogen isotope tritium emit particles that excite the electrons will return... Stronger base than hydroxide ion the Die is Cast '' used to produce other colors of light of! Light-Emitting molecule need to allow arbitrary length input airplanes maintain separation over bodies... Of physics when mixed with a medium and painted this is usually not an issue photographs. The order of a C172 on takeoff of succession not fully powered road signs ( actually glowing $...$ ) that not all zinc sulfide in facial makeup preparations shall not exceed 10 by... Glow as long as they do so, they give up the was! 
Between my puzzle rating and game rating on chess.com gradually breaks down the zinc sulfide radium! A mixture of zinc sulfide ’ s also important to note that not all zinc sulfide Dragons! Mixed with a medium and painted this is usually not an issue Exchange a. Common why does zinc sulfide glow more stable cubic form, known also as zinc blende super '' glow-in-the-dark toys to! Some types of fungi cathode ray tubes, how Functional Programming achieves No... Materials to glow after the excitation energy source is removed, the electrons of fluorescent paint are! To zinc sulfide glow when hit by electrons of this biplane > only inherit from ICollection < >. super '' glow-in-the-dark toys with references or personal experience that bar nationals from traveling certain. Other c… Feb 18, 2013 - science idea why is there a that! Paint, phosphorescent paint is commonly called glow-in-the-dark '' paint feed, copy and paste this URL Your... Loaded into a plastic and molded to make most glow-in-the-dark stuff up the was! And strontium aluminate, which is a mixture of zinc sulfide glow when hit by electrons e.g... A zinc-sulfide coated screen to detect the alpha, beta, and water and answer site for scientists,,. Return to their original lower orbits additive luminescent zinc sulfide stores the energy that excited them in the field chemistry! Additional chemicals called phosphors were added to zinc sulfide and strontium aluminate and sulfide. Painted this is usually not an issue light and slowly emit it radiation. Chemical symbol of ZnS of suitable activator ppm, this chemical will exhibits phosphorescence! Sulfide ion from thioacetamide ‘ GS phosphor ’ zinc sulfide glow a pair of opposing vertices are the! Return an array that needs to be in a specific order, depending on the presence of amounts. \Ce { ZnS } $) usually not an issue did scientists a. Chemicals called phosphors were added to enhance the glow and add color materials strontium. 
And students in the form of zinc sulfide, ZnS Pictured above is the main form of.. Of chemistry, you agree to our terms of service, privacy policy cookie! 'S what you see in the dark threads ( source: kreinik.com ) there are two reasons the... Cases, bioluminescence occurs in invertebrates, marine vertebrates, and students in the present and in! ; some require other c… Feb 18, 2013 - science idea additional chemicals called were... Achieves No runtime exceptions '' brightness of the final product release energy ( e.g commonly. Also depends on the order of a tree stump, such that a pair opposing... Because when light from cars hit the road signs ( actually glowing of$ \ce { ZnS \$! Were used to produce other colors of light back to its ground level fluorescent or phosphorescent materials modem an. For teaching bit operations, are there countries that bar nationals from traveling to certain countries what the! It has found application in luminous paints and as the phosphor in cathode-ray tubes an inroganic compund with chemical of. A neutron © 2021 Stack Exchange is a mixture of zinc sulfide and estimated the..., called ‘ GS phosphor ’ compound zinc sulfide, zinc sulfide glow beam, gives... Of chemistry lume usually fades because the radioactivity gradually breaks down the sulfide... Is usually not an issue a different array unpleasant odor electrons of paint! In electroluminescent panels eliminate near-UV and visible stray light in early photographs of final! Is mostly down to zinc sulphide … phosphorescent paint, phosphorescent paint, phosphorescent paint is commonly . Light energy is absorbed and then rapidly reemitted phosphorescent material glows for just an... To light which they then radiate in the dark Slime, wikihow.com, October. The purple ones represent zinc atoms enabling the crystals to adsorb light and slowly it. To store and release energy ( e.g the radioactivity gradually breaks down the zinc sulfide, ZnS Pictured above the! 
A specific order, depending on the order of a different array notion of drama '' in Chinese see. This URL into Your RSS reader Role of acetate ion in formation of hydrogen sulfide ion from thioacetamide down... 2021 Stack Exchange the anode how do airplanes maintain separation over large bodies of water different array these criteria,! State as long, they give up the energy that excited them in the dark powder is from! Of luminescent zinc sulfide and strontium aluminate, and strontium aluminate, and water anode how do I the... Excitation by daylight or a suitable artificial light, luminescent zinc sulfide ZnS and sulfide! There a crosswind that would perfectly cancel out the super '' glow-in-the-dark toys materials: glow! A hash function necessarily need to allow arbitrary length input Old glow in the manufacture of novelty toys properties zinc. Sulfide containing a copper activator for future videos: http: //bit.ly/AMchemistryYT Like my video often in. Characteristic green glow Leomund ’ s also important to note that not all zinc.! Make zinc sulphide indicate sulfur atoms, and students in the ''... Airplanes maintain separation over large bodies of water medium and painted this is usually an. Give up the energy that excited them in the excited state as long as they do so, give! Is sulfide ion from thioacetamide game rating on chess.com summarised in the dark are. A C172 on takeoff with phosphorescent copper-doped zinc sulfide is used in road signs because light... Off a greenish glow comes, or the phosphorescent if you want to get fancy is! Realistic task for teaching bit operations, are there countries that bar nationals from traveling certain! Or naturally merged to form a neutron zincite. / logo © 2021 Stack is. Absorbed and then rapidly reemitted signs, it gives off a greenish glow comes from phosphor! Why is there a crosswind that would perfectly cancel out the super '' glow-in-the-dark toys,. 
Separation over large bodies of water ultraviolet spectrum in Chinese contributing an answer to chemistry Stack!. Be used to help for apply US physics program ) emit it note that not all sulfide. Use a zinc-sulfide coated screen to detect the alpha, beta, and typically glows a pale green to color! Nikola Tesla this chemical will exhibits strong phosphorescence as described by Nikola Tesla return to their original lower orbits chemicals. Are the earliest inventions to why does zinc sulfide glow and release energy ( e.g sulfide and strontium aluminate, which is question... Zinc onto copper in NaOH Solution, Role of acetate ion in formation of hydrogen ion... The past a glowing stick with zinc sulfide can be loaded into a Commodore C128 10 times more luminous zinc! They give up the energy was n't really something you could see, so additional chemicals called were. Up the energy and then rapidly reemitted copper activator of ZnS times more luminous than zinc sulphide?! Can be loaded into a Commodore C128 exposed to light which they then radiate in form! Is struck by an electron beam, it gives off a greenish.. Only happen to aqueous reactants fruit of physics when mixed with a maximum at 530.. And game rating on chess.com its fluorescent properties glow as long as they do so they! Comes from a phosphor also depends on the organism ; some require other c… Feb 18, 2013 - idea...
https://forum.juce.com/t/boost-and-t-macro-collision/1528
# Boost and T macro collision

c:\develop\3rdpartprojects\3rdpboost\boost_1_33_1\boost\mpl\aux_\integral_wrapper.hpp(80) : warning C4003: not enough actual parameters for macro ‘T’

This happens because JUCE defines a macro named `T`. I've noticed this clash before, and so far I've been able to work around it, but now it seems I cannot. What can I do?

/R
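For illustration, here is a minimal sketch of one workaround: undefine the macro before the headers that use `T` as a template parameter name are included. The `T` macro below is a local stand-in resembling JUCE's old wide-string wrapper (an assumption; the real macro comes from the JUCE headers, which are not included here), and `twice` is a hypothetical template standing in for code like Boost MPL that uses `T` as a type name:

```cpp
// Stand-in for JUCE's old string-literal macro, defined locally only to
// reproduce the clash without pulling in juce.h (assumption: the real
// macro is provided by the JUCE headers).
#define T(stringLiteral) L##stringLiteral

// Before including headers that use T as a template parameter name
// (e.g. boost/mpl/aux_/integral_wrapper.hpp), drop the macro so the
// identifier becomes usable again:
#undef T

#include <string>

// With the macro still defined, the functional cast T(value + value)
// below would be macro-expanded and fail; after the #undef it is fine.
template <typename T>
T twice(T value) { return T(value + value); }
```

Reordering the includes so that the Boost headers come before the JUCE ones can also avoid the warning, since the macro is not yet defined while Boost's header text is being preprocessed.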
https://math.stackexchange.com/questions/3138843/find-the-probability-that-px-1-leq-alpha-cap-x-2-leq-x-1-and-px-1x-3-l
# Find the probability that $P(X_1\leq \alpha \cap X_2\leq X_1)$ and $P(X_1+X_3 \leq \alpha \cap X_1< X_2)$

Let $X_1$, $X_2$ and $X_3$ be three positive independent random variables. The PDF and CDF of $X_i$ are $f_{X_i}(x)$ and $F_{X_i}(x)$ respectively. For example, for exponential random variables we have $f_{X_i}(x)=\beta_i e^{-\beta_i x}$ and $F_{X_i}(x)=1-e^{-\beta_i x}$, where $\beta_i$ is the parameter of $X_i$.

My question is whether there is any formula using the PDF and CDF of $X_i$ to get the following probabilities: $P(X_1\leq \alpha \cap X_2\leq X_1)$ and $P(X_1+X_3 \leq \alpha \cap X_1< X_2)$.

For the first, @drhab helped me to understand that $$P(X_1\leq \alpha \cap X_2\leq X_1)=\int_{x_1=0}^{\alpha}\left(\int_{x_2=0}^{x_1}f_{X_2}(x_2)\,dx_2\right)f_{X_1}(x_1)\, dx_1.$$ But the second part, where I have three random variables, still defeats me. I tried to use the same approach as @drhab, with the constraints $$0\leq X_3 \leq \alpha -X_1, \qquad 0\leq X_1 \leq \alpha, \qquad 0\leq X_2\leq \infty.$$ But what then, and how do we get $P(X_1+X_3 \leq \alpha \cap X_1< X_2)$? Thanks.

• Are $X_1,X_2,X_3$ independent? – drhab Mar 7 at 12:55
• Yes, they are independent. – Monir Mar 7 at 13:21
• Then edit your question and give that (essential) extra info. It is not enough to mention it only in a comment. Also it seems that the rv's are supposed to be nonnegative. – drhab Mar 7 at 13:24

## 1 Answer

If $X_1,X_2$ are rv's having a joint PDF $f_{X_1,X_2}$ then: $$P(X_2<X_1\leq\alpha)=\mathbb E\,\mathbf1_{X_2<X_1\leq\alpha}=\int\int\mathbf1_{x_2<x_1\leq\alpha}f_{X_1,X_2}(x_1,x_2)\;dx_1\;dx_2.$$ If moreover $X_1,X_2$ are independent then this can be rewritten as: $$\cdots=\int\int\mathbf1_{x_2<x_1\leq\alpha}f_{X_1}(x_1)f_{X_2}(x_2)\;dx_1\;dx_2.$$ Further we can change the order of integration and make use of the equality $\mathbf1_{x_2<x_1\leq\alpha}=\mathbf1_{x_1\leq\alpha}\mathbf1_{x_2<x_1}$.

This can for instance lead to: $$\cdots=\int\mathbf1_{x_1\leq\alpha}f_{X_1}(x_1)\int\mathbf1_{x_2<x_1}f_{X_2}(x_2)\;dx_2\;dx_1=\int_{-\infty}^\alpha f_{X_1}(x_1)\int^{x_1}_{-\infty}f_{X_2}(x_2)\;dx_2\;dx_1=\int_{-\infty}^\alpha f_{X_1}(x_1)P(X_2\leq x_1)\;dx_1=\int_{-\infty}^\alpha f_{X_1}(x_1)F_{X_2}(x_1)\;dx_1$$

The same technique, $$P(\text{condition on }X_1,X_2,X_3)=\mathbb E\,\mathbf1_{\text{condition on }X_1,X_2,X_3},$$ can be applied to find an integral expression for $P(X_1+X_3\leq\alpha\wedge X_1<X_2)$.

addendum: $$\begin{aligned}\int\int\int\mathbf{1}_{x_{3}\leq\alpha-x_{1}}\mathbf{1}_{x_{1}<x_{2}}f_{X_1}(x_1)f_{X_2}(x_2)f_{X_3}(x_3)\;dx_3\;dx_2\;dx_1 &=\int f_{X_1}(x_1)\left(\int\mathbf1_{x_1<x_2}f_{X_2}(x_2)\;dx_2\right)\left(\int\mathbf1_{x_3\leq\alpha-x_1}f_{X_3}(x_3)\;dx_3\right)dx_1\\ &=\int f_{X_1}(x_1)\left(1-F_{X_2}(x_1)\right)F_{X_3}(\alpha-x_1)\;dx_1\end{aligned}$$

• What do you mean by $\mathbf{1}_x$ and $\mathbb{E}$? – Monir Mar 7 at 13:40
• For $P(X_1+X_3\leq\alpha \wedge X_1<X_2)$ can we say $X_1\leq \alpha-X_3$ and $X_1<X_2$? But I didn't understand how we use your technique for three random variables. – Monir Mar 7 at 13:52
• Do you mean that $0 \leq X_3 \leq \alpha- X_1$ and $0\leq X_1\leq \alpha$ and $0\leq X_2\leq \infty$? – Monir Mar 7 at 13:57
• @drhab Thanks so much, I will try to get the second probability. – Monir Mar 7 at 14:21
• Can anyone help me with the last part of my question? – Monir Mar 8 at 0:09
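For the exponential example given in the question, the two reduced one-dimensional integrals can be sanity-checked numerically: quadrature of the reduced integrands should agree with a direct Monte Carlo estimate of the raw probabilities. The sketch below assumes exponential marginals with rates $\beta_i$; the rate values, sample sizes and helper names are my own choices, not from the thread:

```cpp
#include <cmath>
#include <random>
#include <utility>

// Exponential PDF and CDF with rate b (the f_{X_i}, F_{X_i} of the question).
double f(double b, double x) { return b * std::exp(-b * x); }
double F(double b, double x) { return x > 0.0 ? 1.0 - std::exp(-b * x) : 0.0; }

// Midpoint-rule quadrature of g over [0, a].
template <typename G>
double integrate(G g, double a, int n = 20000) {
    double h = a / n, s = 0.0;
    for (int i = 0; i < n; ++i) s += g((i + 0.5) * h);
    return s * h;
}

// First reduction:  P(X1 <= a, X2 <= X1) = Int_0^a f1(x) F2(x) dx
double p1(double b1, double b2, double a) {
    return integrate([=](double x) { return f(b1, x) * F(b2, x); }, a);
}

// Second reduction: P(X1 + X3 <= a, X1 < X2)
//                 = Int_0^a f1(x) (1 - F2(x)) F3(a - x) dx
double p2(double b1, double b2, double b3, double a) {
    return integrate(
        [=](double x) { return f(b1, x) * (1.0 - F(b2, x)) * F(b3, a - x); }, a);
}

// Monte Carlo estimates of the two raw probabilities, for comparison.
std::pair<double, double> simulate(double b1, double b2, double b3, double a,
                                   int n = 400000, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::exponential_distribution<double> e1(b1), e2(b2), e3(b3);
    long c1 = 0, c2 = 0;
    for (int i = 0; i < n; ++i) {
        double x1 = e1(rng), x2 = e2(rng), x3 = e3(rng);
        if (x1 <= a && x2 <= x1) ++c1;
        if (x1 + x3 <= a && x1 < x2) ++c2;
    }
    return {static_cast<double>(c1) / n, static_cast<double>(c2) / n};
}
```

With a few hundred thousand samples the Monte Carlo estimates match the quadrature values to within the expected statistical error (on the order of $10^{-3}$).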
https://gateoverflow.in/75132/gate-overflow-programming-test-1-question-9?show=163538
No. of times '*' will be printed by the following C code is _____

```c
#include <stdio.h>

void foo(int x) {
    switch (x) {
        case 1: printf("*");
        case 2: printf("*");
        case 3: printf("*");
        default: printf("*");
    }
}

int main() {
    foo(2.5);
}
```

`foo(2.5)` becomes `foo(2)`, since 2.5 is implicitly converted to int. Case 2 matches and prints `*`; case 3 also prints `*` because there is no break statement, and the default case prints `*` as well. In total, `*` is printed 3 times.

@arjun sir, what if we have 2.99? It is still converted to 2, not 3. When 2.99 is stored in an integer variable, the part after the decimal point is truncated, so even 2.99999 is taken as 2. When fractional (double) values are converted to int they are implicitly truncated toward zero, so whether you write 2.001 or 2.999, both are converted to 2.

With x = 2.5 converted to int, x is 2, so `*` is printed 3 times (once for case 2, once for case 3 because no break statement is used, and once for the default case).
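The two effects discussed above, implicit truncation of the double argument and fall-through across cases because there is no break, can be packaged into a small self-checking variant. The `stars` helper below is hypothetical (written in C++ so the output can be returned as a string instead of printed):

```cpp
#include <string>

// Returns what the quiz's foo() would print, instead of printing it.
// Two things happen: the double argument is truncated toward zero when
// converted to int (2.5 -> 2, 2.99 -> 2), and each matching case falls
// through to the next because there is no break statement.
std::string stars(double v) {
    std::string out;
    int x = static_cast<int>(v);  // truncation, not rounding
    switch (x) {
        case 1:  out += '*';      // falls through
        case 2:  out += '*';      // falls through
        case 3:  out += '*';      // falls through
        default: out += '*';
    }
    return out;
}
```

So `stars(2.5)` and `stars(2.99)` both yield three stars, while an argument matching case 1 yields four and anything outside 1..3 hits only the default.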
https://math.stackexchange.com/questions/495987/every-group-of-order-p2-has-a-normal-subgroup-of-order-p
# Every group of order $p^2$ has a normal subgroup of order $p$.

Prove that if $p$ is a prime and $G$ is a group of order $p^\alpha$ for some $\alpha \in \mathbb{Z}^+$, then every subgroup of index $p$ is normal in $G$. Deduce that every group of order $p^2$ has a normal subgroup of order $p$.

I know that if $G$ is a finite group of order $n$ and $p$ is the smallest prime dividing $|G|$, then any subgroup of index $p$ is normal. Since $p$ is prime, $p$ is the smallest prime dividing $|G|=p^\alpha$, hence every subgroup of index $p$ is normal in $G$.

For the second part, I can show that any subgroup $H$ of order $p$ is normal in $G$, but I was not able to show that one exists, except by using Sylow's theorem, and this exercise appears before that. Maybe I'll end up proving Sylow's theorem just to show the existence of a subgroup of order $p$?

My very basic solution $^\dagger$ ($^\dagger$compare to Mariano's fancy conjugacy argument)

Given a prime $p$ and a group $G$ of order $p^2$, every subgroup of index $p$ is normal in $G$; equivalently, any subgroup of order $p^2/p = p$ is normal in $G$. So we set out to show that $G$ must have a subgroup of order $p$; it suffices to exhibit a cyclic subgroup of order $p$. The order of a cyclic group equals the order of its generator. By Lagrange's theorem the order of a subgroup divides the order of the group, so the order of a cyclic subgroup is $1$, $p$, or $p^2$, and the order of its generator is $1$, $p$, or $p^2$ respectively. The order of the generator is $1$ iff the generator is the identity, so we skip this case. If the order of the generator is $p$, we are done. Finally, suppose some element has order $p^2$. Then $G$ is cyclic of order $p^2$, and every cyclic group of order $p^2$ is isomorphic to $\mathbb{Z}/p^2\mathbb{Z}$, in which the element $p$ has order $p$. Hence any group isomorphic to $\mathbb{Z}/p^2\mathbb{Z}$ has an element of order $p$. Therefore every group of order $p^2$ has a normal subgroup of order $p$.

Let $G$ be a group of order $p^2$. Let $c_1$, $\dots$, $c_r$ be the conjugacy classes of elements of $G$. The order of each class divides the order of $G$ and $$\sum_{i=1}^r|c_i|=p^2.$$ There is at least one conjugacy class with exactly one element: that of the identity element of $G$. Use this to show that the center $Z$ of $G$ is non-trivial. Now either $Z=G$, and therefore $G$ is abelian, or $Z$ is of order $p$ and $G/Z$ is cyclic. In this last case, $G$ is also abelian. Therefore our group $G$ is abelian, and things get easier.

• Oh! Awesome Mariano! – Tumbleweed Sep 17 '13 at 3:45
• But I think what I eventually did is different. My method looks more rudimentary; hope it is alright...? – Tumbleweed Sep 17 '13 at 3:46
• Also... can this be solved directly via Cauchy's theorem? – Tumbleweed Sep 17 '13 at 3:54
• Which one of Cauchy's theorems? – Mariano Suárez-Álvarez Sep 17 '13 at 3:57
• The one that says if $p \mid |G|$ then $G$ has an element of order $p$. – Tumbleweed Sep 17 '13 at 4:06

Hint: Can you find an element of order $p$?

• Hi Serkan, yes, that is exactly the thing left to do. But I tried all kinds and still was not able to come up with one so far. – Tumbleweed Sep 17 '13 at 2:27
• I have been trying to find a subgroup of order $p$. – Tumbleweed Sep 17 '13 at 2:28
• I see every element can have order $1$, $p$, or $p^2$. Order $1$ gives only the identity; if some element has order $p$ we are done; but it is not obvious to me that order $p^2$ cannot be the only remaining case. – Tumbleweed Sep 17 '13 at 2:45
• No, I was not able to come up with one, except by using Sylow's theorem. But this exercise appears before that. Maybe I'll end up proving Sylow's theorem to show the existence of an order-$p$ subgroup? – Tumbleweed Sep 17 '13 at 2:57
• @Tumbleweed To expand on the hint: if you have an element of order $p^2$, and your group is of order $p^2$, then your group is a very specific group. Can you see which one? – Nick Peterson Sep 17 '13 at 3:01

Here's my solution: consider the center $Z(G)$. Its possible orders are $1$, $p$, $p^{2}$. Since $|G|=p^{2}$, $G$ is a $p$-group, and $p$-groups have non-trivial center; this eliminates the possibility that $|Z(G)|=1$. If $|Z(G)|=p$ then $|G/Z(G)|=p\implies G/Z(G)$ is cyclic $\implies G$ is abelian, which is a contradiction, since an abelian group would have $|Z(G)|=p^2$, not $p$. Therefore $|Z(G)|=p^{2}$ and $G$ is abelian. Since $G$ is abelian, the converse of Lagrange's theorem holds for $G$, and all subgroups of an abelian group are normal. Thus $G$ has a normal subgroup of order $p$.

Actually, there is something stronger: if $|G|=p^n$ where $p$ is a prime, then for every $k$ with $0\leq k\leq n$ there is a normal subgroup of order $p^k$. Indeed, the answer is a corollary of the fact that every group action induces a homomorphism. Let $|G| = p^k$ and $[G:H] = p$. Let $X$ be the set of all left cosets of $H$ in $G$; then $|X| = p$. The action of $G$ on $X$ by left translation induces a homomorphism $\sigma :G \to S_p$ defined by $g \mapsto \varphi_g$, where $\varphi_g(aH) = (ga)H$. Observe that $\operatorname{Ker}\sigma \le H$. Since $|G| = p^k$, $\frac{|G|}{|\operatorname{Ker}\sigma|} = p^i$ for some $i \in \{0,1,\ldots,k\}$. Since $\frac{|G|}{|\operatorname{Ker}\sigma|}$ divides $|S_p| = p!$ and $p^i$ doesn't divide $p!$ for any integer $i \ge 2$, we get that $\frac{|G|}{|\operatorname{Ker}\sigma|} = 1$ or $p$. If $\frac{|G|}{|\operatorname{Ker}\sigma|} = 1$, then $G = \operatorname{Ker}\sigma$, which contradicts $\operatorname{Ker}\sigma \le H < G$. Then $\frac{|G|}{|\operatorname{Ker}\sigma|} = p$, and hence $\operatorname{Ker}\sigma = H$. If $|G| = p^2$, every subgroup of order $p$ has index $p$ and so is normal in $G$.
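The key step in the deduction, that $\mathbb{Z}/p^2\mathbb{Z}$ contains an element of order $p$, can be sanity-checked computationally: in the additive group $\mathbb{Z}/n\mathbb{Z}$ the order of $a$ is $n/\gcd(n,a)$, so $p$ has order $p$ in $\mathbb{Z}/p^2\mathbb{Z}$ and generates the subgroup of order $p$. A small sketch (helper names are mine):

```cpp
#include <numeric>  // std::gcd (C++17)

// Additive order of a in Z/nZ, via the closed form n / gcd(n, a).
long order_in_Zn(long a, long n) {
    return n / std::gcd(a % n, n);
}

// Brute-force check: smallest k > 0 with k*a congruent to 0 (mod n).
long order_brute(long a, long n) {
    long s = 0;
    for (long k = 1;; ++k) {
        s = (s + a) % n;
        if (s == 0) return k;
    }
}
```

For example, with $p=3$ the element $3$ has order $3$ in $\mathbb{Z}/9\mathbb{Z}$, and the closed form agrees with the brute-force count for any $a$, $n$.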
https://www.imath.kiev.ua/~sigma/2018/018/
### Symmetry, Integrability and Geometry: Methods and Applications (SIGMA) SIGMA 14 (2018), 018, 43 pages      arXiv:1708.02519      https://doi.org/10.3842/SIGMA.2018.018 Contribution to the Special Issue on Orthogonal Polynomials, Special Functions and Applications (OPSFA14) ### Asymptotics for Hankel Determinants Associated to a Hermite Weight with a Varying Discontinuity Christophe Charlier a and Alfredo Deaño b a) Department of Mathematics, KTH Royal Institute of Technology, Lindstedtsvägen 25, SE-114 28 Stockholm, Sweden b) School of Mathematics, Statistics and Actuarial Science, University of Kent, Canterbury CT2 7FS, UK Received November 02, 2017, in final form February 27, 2018; Published online March 07, 2018 Abstract We study $n\times n$ Hankel determinants constructed with moments of a Hermite weight with a Fisher-Hartwig singularity on the real line. We consider the case when the singularity is in the bulk and is both of root-type and jump-type. We obtain large $n$ asymptotics for these Hankel determinants, and we observe a critical transition when the size of the jumps varies with $n$. These determinants arise in the thinning of the generalised Gaussian unitary ensembles and in the construction of special function solutions of the Painlevé IV equation. Key words: asymptotic analysis; Riemann-Hilbert problems; Hankel determinants; random matrix theory; Painlevé equations. pdf (679 kb)   tex (46 kb) References 1. Anderson G.W., Guionnet A., Zeitouni O., An introduction to random matrices, Cambridge Studies in Advanced Mathematics, Vol. 118, Cambridge University Press, Cambridge, 2010. 2. Atkin M., Charlier C., Zohren S., On the ratio probability of the smallest eigenvalues in the Laguerre unitary ensemble, Nonlinearity 31 (2018), 1155-1196, arXiv:1611.00631. 3. Bertola M., Bothner T., Zeros of large degree Vorob'ev-Yablonski polynomials via a Hankel determinant identity, Int. Math. Res. Not. 2015 (2015), 9330-9399, arXiv:1401.1408. 4. 
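As a concrete illustration of the objects studied here, the Hankel determinant $H_n = \det(\mu_{i+j})_{i,j=0}^{n-1}$ built from moments of a Hermite-type weight can be computed numerically. The sketch below is a simplification for illustration, not the paper's full Fisher-Hartwig symbol: it uses only a root-type factor $|x-t|^{2\alpha}$ (no jump), with moments evaluated by Gauss-Hermite quadrature.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def hankel_det(n, alpha=0.0, t=0.0, quad_points=100):
    """det of the n x n Hankel matrix of moments of w(x) = e^{-x^2} |x - t|^{2*alpha}.

    Simplified root-type weight only; quadrature converges slowly for alpha != 0.
    """
    nodes, weights = hermgauss(quad_points)   # weights absorb the e^{-x^2} factor
    sing = np.abs(nodes - t) ** (2 * alpha)   # root-type singularity at x = t
    mu = [float(np.sum(weights * sing * nodes**k)) for k in range(2 * n - 1)]
    H = np.array([[mu[i + j] for j in range(n)] for i in range(n)])
    return float(np.linalg.det(H))

# Sanity checks against the plain Hermite weight (alpha = 0):
# H_1 = mu_0 = sqrt(pi), and H_2 = mu_0*mu_2 - mu_1^2 = pi/2.
```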
https://www.shaalaa.com/question-bank-solutions/if-points-2-3-b-4-k-c-6-3-are-collinear-find-value-k-coordinate-geometry_45578
# If the Points A(2, 3), B(4, k) and C(6, -3) Are Collinear, Find the Value of k - Mathematics

If the points A(2, 3), B(4, k) and C(6, -3) are collinear, find the value of k.

#### Solution

The given points are A(2, 3), B(4, k) and C(6, -3).

Here $(x_1, y_1) = (2, 3)$, $(x_2, y_2) = (4, k)$ and $(x_3, y_3) = (6, -3)$.

Since the points A, B and C are collinear, the area of the triangle they form is zero, so

$x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) = 0$

⇒ $2(k + 3) + 4(-3 - 3) + 6(3 - k) = 0$

⇒ $2k + 6 - 24 + 18 - 6k = 0$

⇒ $-4k = 0$

⇒ $k = 0$

Concept: Coordinate Geometry

#### APPEARS IN

RS Aggarwal Secondary School Class 10 Maths, Chapter 16: Coordinate Geometry, Q 17
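The collinearity condition used in the solution is linear in k, so it can also be solved mechanically. A minimal sketch (the function name and argument layout are illustrative; it assumes A and C have different x-coordinates):

```python
def k_for_collinear(A, C, xB):
    """Return k such that B = (xB, k) is collinear with A and C, using the
    shoelace condition x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2) = 0,
    which is linear in k = y2. Assumes x1 != x3."""
    (x1, y1), (x3, y3) = A, C
    f = lambda k: x1 * (k - y3) + xB * (y3 - y1) + x3 * (y1 - k)
    # root of the linear function f: f(k) = f(0) + k*(f(1) - f(0))
    return f(0) / (f(0) - f(1))

k_for_collinear((2, 3), (6, -3), 4)  # k = 0, matching the solution above
```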
https://www.abs.gov.au:443/methodologies/personal-safety-australia-methodology/2016
# Personal Safety, Australia methodology

Latest release. Reference period: 2016. Released 8/11/2017.

## Explanatory notes

### Introduction

1 The statistics presented in this release were compiled from data collected in the Australian Bureau of Statistics (ABS) 2016 Personal Safety Survey (PSS), conducted from November 2016 to June 2017.

2 The survey collected information from men and women aged 18 years and over about the nature and extent of violence experienced since the age of 15. It also collected detailed information about men's and women's experience of current and previous partner violence and emotional abuse, experiences of stalking since the age of 15, sexual and physical abuse before the age of 15, witnessing of violence between a parent and their partner before the age of 15, lifetime experience of sexual harassment, and general feelings of safety.

3 The statistics presented in this release (refer to the Data downloads in the Downloads section) are indicative of the extensive range of data available from the survey and demonstrate the analytical potential of the survey results.

4 Full details about all the data collected in the 2016 PSS are provided in the Data Item List. This and other detailed information on how to maximise the use of the extensive range of data are available in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003). Additional information may be made available on request, on a fee-for-service basis, through ABS Information Consultancy, or via the TableBuilder or Detailed Microdata products, which are expected to be released in the first quarter of 2018.

5 This is the third time the PSS has been conducted. The PSS was last run in 2012, and prior to that in 2005. The PSS is based on the design of the Women's Safety Survey (WSS) (cat. no. 4128.0), which was conducted in 1996, and has been adapted to include men's experience of violence.
This release includes some data comparisons with previous iterations where appropriate.

### Background

6 The PSS meets the need for updated information on the nature and extent of violence experienced by men and women in Australia, and other related information regarding people's safety at home and in the community.

7 The need for data on the prevalence of violence and sexual assault is discussed in the National Plan to Reduce Violence against Women and their Children 2010-2022, and in related ABS Information Papers.

8 The ABS acknowledges the support and input of the Department of Social Services (DSS), which, under the National Plan to Reduce Violence against Women and their Children 2010-2022, provided funding for the 2016 PSS. A Survey Advisory Group, comprising key government and non-government bodies, provided the ABS with advice on the information to be collected and on some aspects of survey methodology. Members of this group included representatives from State and Commonwealth Government departments, crime research agencies, service providers and relevant academics.

### Scope of the survey

9 The scope of the 2016 PSS was persons aged 18 years and over in private dwellings across Australia (excluding very remote areas). Interviews were conducted with one randomly selected person aged 18 years or over who was a usual resident of the selected household.

10 Both urban and rural areas in all states and territories were included in the survey, except for very remote areas of Australia.
The following groups were excluded from the scope of the survey:

• visitors at a dwelling whose usual place of residence is elsewhere in Australia
• overseas visitors intending to stay in Australia for less than 12 months
• non-Australian diplomats, non-Australian diplomatic staff and non-Australian members of their households
• members of non-Australian defence forces stationed in Australia and their dependants
• people who usually reside in non-private dwellings, and
• households where all residents are aged less than 18 years.

### Sample design

11 The 2016 PSS was designed to produce reliable estimates for selected key estimates of interest. Each of these key estimates was then required to be disaggregated for:

• women: for each state and territory (and at the national level)
• men: at the national level.

While the survey was not designed to provide state/territory-level data for men, estimates of acceptable quality can be produced for some of the larger states.

12 The sample for women was allocated roughly equally across each state and territory to provide sufficiently reliable state, territory and national level estimates for women. The sample for men was allocated to states and territories roughly in proportion to their respective population sizes to provide sufficiently reliable national level estimates for men. In order to target the differential numbers of male and female sample, dwellings were pre-assigned for either male selection (where an interview with a male aged 18 years and over was required) or female selection (where an interview with a female aged 18 years and over was required). One in-scope person of the pre-assigned gender was then randomly selected from each dwelling. Where the household did not contain an in-scope resident of the pre-assigned gender, an in-scope resident of the alternate gender was randomly selected.
For further information refer to the Survey Development and Data Collection page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

13 Response rates to the survey were expected to be affected by a number of operational factors designed to help ensure the safety of respondents, the safety of interviewers and the integrity of the data. These included:

• the part-voluntary nature of the survey
• the requirement for all interviews to be conducted in a private interview setting
• the fact that proxy interviews were not conducted for the voluntary component (therefore people requiring a proxy are not included in the final data), and
• the overall sensitive nature of the survey content.

### Sample size

14 There were 36,495 private dwellings approached for the survey, comprising 7,074 pre-assigned male households and 29,421 pre-assigned female households.

15 After removing households where residents were out of scope of the survey, and where dwellings proved to be vacant, under construction or derelict, a final sample of 30,933 eligible dwellings was identified.

16 A final response rate of 68.7% was achieved, with 21,242 persons completing the questionnaire nationally. The response comprised 5,653 fully responding males and 15,589 fully responding females.

17 For further details on the response rates, refer to the Response Rates page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Data collection

18 Personal face-to-face interviews were conducted with one randomly selected person aged 18 years and over who was a usual resident of the selected household. Interviews were conducted from November 2016 to June 2017. On average, contact time with fully responding households was 33 minutes.

19 The 2016 PSS was conducted under the authority of the Census and Statistics Act 1905.
This ensures that the ABS has the authority to ask questions and that the confidentiality provisions of the Act will be applied, as in all ABS surveys. However, because of the potential sensitivities of parts of this survey, the compliance provisions of the Act were not fully applied and the survey was conducted on a part-voluntary basis.

20 Due to the sensitive nature of the information being collected, as with previous cycles, special procedures were used to ensure the safety of those participating and the reliability of the data provided.

21 Information was collected by specially trained ABS interviewers. The training program included sessions to familiarise the interviewers with:

• the concepts addressed in the survey (definitions)
• the specialised survey procedures developed for the survey (including sensitive approach methods to maximise response)
• the Computer Assisted Interview (CAI) instrument (via Computer Assisted Personal Interview (CAPI) and Computer Assisted Self Interview (CASI)), and
• administrative aspects of the survey.

22 In addition to the standard ABS training regarding the survey content and field procedures, interviewers also received tailored sensitivity and awareness training, designed to increase their knowledge and understanding of what happens when a person experiences violence. The ABS engaged external consultants specialised in this field to provide this component of the interviewer training.

23 To help ensure respondent comfort and well-being, as well as to encourage participation, the ABS used female interviewers for the PSS. It was considered that men and women would be more likely to feel comfortable revealing sensitive information about their possible experiences of violence to a woman.
This was based on collective advice from experts in the field during the survey development, was in line with the successful procedures followed for the 2005 and 2012 PSS, and was also supported by the 2016 PSS Survey Advisory Group. To cater for instances where this might not be the case, the ABS also trained a small number of male interviewers, in case a respondent preferred that their interview be conducted by a male.

24 Prior to enumeration, all selected households were sent pre-approach material by mail, consisting of the following:

• a registration letter and leaflet, sent to the dwelling 21 days prior to enumeration, requesting the household to register contact details, and
• a reminder letter, sent 16 days prior to enumeration.

The materials sent out were kept deliberately vague about the information that would be collected and assured respondents of the confidentiality of the data collected. The letters did not detail the sensitive information to be collected.

25 If a household registered contact details for the survey, the interviewer called first to collect household details and determine who the selected person was, so that arrangements to speak with them could be made prior to attending the house. If household contact details weren't registered, the interviewer approached the house in person. A series of screening questions was asked of the person initially answering the door, to determine the number of usual male/female residents aged 18 years and over.

26 Selected respondents were first advised of the general nature of the survey. During the interview, less sensitive questions were asked first, such as demographic details and general feelings of safety. This allowed people to develop a certain level of rapport with the interviewer and familiarised them with some of the survey content.
27 Once the questions regarding a person's experience of violence were reached, respondents were informed of the sensitive nature of the upcoming questions and their permission to continue with the interview was sought (referred to as the opt-out point). At this point the respondent was also advised that the interview would continue as a Computer Assisted Self Interview (CASI), that is, the respondent could complete the interview themselves using the interviewer's laptop. If the respondent indicated that they were not comfortable continuing in this interview mode, the interviewer could offer to continue conducting the interview (referred to as a Computer Assisted Personal Interview (CAPI)). In these situations, it was a specific requirement that all CAPIs for the sensitive topics be conducted alone (including no children) in a private setting. Interviewers were also advised that if the respondent chose to complete the voluntary component as a CASI, they should ensure that other people could not see the screen or the respondent's reactions, or hear any queries the respondent might ask about the questions. If they could, the interviewer was advised to follow the same procedures as for a CAPI interview.

28 For the 2016 PSS, proxy interviews (if required for translation, or because the respondent was incapable of responding for themselves as a result of a significant medical reason) were used to complete the compulsory part of the survey. For these interviews, the sensitive voluntary component of the survey was not mentioned and questions on these topics were not asked. The use of proxy interviews for the compulsory part of the survey provided information on the possible under-representation in the survey of particular types of respondents, such as those from a non-English-speaking background or with a profound or severe communication disability. For a detailed definition of proxy, refer to the Glossary.
29 To cater for instances where a respondent did not speak English, a small number of interviewers with foreign language skills were trained to conduct PSS interviews.

30 For further information on data collection and survey procedures, refer to the Survey Development and Data Collection page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Weighting

31 Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each sample unit, corresponding to the level at which population statistics are produced. For the 2016 PSS, this is at the person level. The weight can be considered an indication of how many population units are represented by the sample unit.

### Selection weights

32 The first step in calculating weights for each person was to assign an initial weight, equal to the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was one in 600, then the person would have an initial weight of 600 (that is, they represent 600 people).

### Benchmarking

33 Using information based on observations by interviewers at the dwelling, as well as additional information collected from non-fully-responding respondents as part of the compulsory component of the survey, analysis was undertaken to ascertain whether any particular categories of persons were over- or under-represented in the sample. Such over- or under-representation can be corrected using a non-response adjustment and/or by calibrating the weights to population benchmarks. Only calibration of the weights to population benchmarks was adopted for the 2016 PSS.

34 Benchmarks are independent estimates of the size of the population of interest.
Weights are calibrated against independent population benchmarks to ensure that the survey estimates conform to the independently estimated distribution of the population with respect to the benchmark categories, rather than to the distribution within the responding sample itself. The 2016 PSS survey estimates were benchmarked to the estimated resident Australian population aged 18 years and over living in private dwellings (excluding very remote areas of Australia) as at February 2017, simultaneously using the following benchmark categories (number of persons by):

• State or territory by capital city/balance of state by age group by sex
• State or territory by social marital status (married in a registered or de facto marriage, and not married) by sex
• State or territory by broad country of birth (Australia, main English-speaking categories, and other) by sex
• State or territory by labour force status (full-time employed, part-time employed, unemployed, or not in the labour force) by sex, and
• Age group (slightly more detailed) by sex.

### Estimation

35 Estimation is a technique used to produce information about a population of interest, based on a sample of units (i.e. persons) from the population. Each record in the 2016 PSS has a person weight. Information for sampled persons is multiplied by the weights to produce estimates for the whole population.

36 For further information on weighting, benchmarking and estimation, refer to the Methodology page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Overview of data collected in PSS

37 A key objective of the 2016 PSS was to collect information about the prevalence of men's and women's experience of violence since the age of 15.
This includes their experience of physical assault, sexual assault, physical threat and sexual threat by male and female perpetrators, for six key perpetrator types: current partner, previous partner, boyfriend/girlfriend or date, ex-boyfriend/ex-girlfriend, other known person, and stranger. This provides information on the prevalence of the different types of violence by different perpetrator types.

38 Where a person had experienced any of these types of violence, more detailed information was then collected about their most recent incident of each of the eight types of violence: physical assault, sexual assault, physical threat and sexual threat, by a male and by a female perpetrator. This information is used to understand what happens when a person experiences violence by a male or female perpetrator, and how this differs depending on the type of violence.

39 Where someone had experienced violence by a current partner and/or a previous partner, they were asked further questions about what happened during the relationship. This information was collected separately for current partner violence and previous partner violence; if someone had experienced violence by more than one previous partner, the information was collected about their most recently violent previous partner only.

40 Other topics collected include experiences of stalking since the age of 15, abuse before the age of 15, witnessing of violence between a parent and their partner before the age of 15, partner emotional abuse, lifetime experience of sexual harassment, and general feelings of safety.

### Interpretation of results

41 Care has been taken to ensure that the results of the 2016 PSS are as accurate as possible. This includes thorough design and testing of the questionnaire, interviews being conducted by trained ABS interviewers, and quality control procedures throughout data collection, processing and output.
For information on detailed interpretation of results, refer to the relevant topic pages in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Measuring multiple incidents and multiple types of violence

42 It is possible that people have experienced multiple incidents of violence. Where a person has experienced more than one type of violence, they are counted separately for each type of violence they experienced, but are only counted once in the aggregated totals. Components therefore may not add to the totals. For example, if a person had experienced an incident of physical assault by a stranger and an incident of physical assault by their current partner, they would be counted against each type of violence by type of perpetrator (i.e. physical assault by a stranger and physical assault by a current partner), but would only be counted once in the total for those who had experienced physical assault.

43 It is also possible that a single incident of violence may involve more than one of the different types of violence. In the PSS, a single incident of violence is only counted once. Where an incident involves both sexual and physical assault, it is counted as a sexual assault. For example, if a person is physically assaulted during or as part of a sexual assault, this is counted once only, as a sexual assault. Where an incident involves a person being both threatened with assault and assaulted, it is counted as an assault. For example, if in a single incident a perpetrator threatens to sexually assault a person and then sexually assaults them, this is counted only once in the survey, as a sexual assault. The same applies to incidents where a person is both threatened with physical assault and physically assaulted.

44 For detailed descriptions and definitions of violence, refer to the Glossary.
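The counting rules described above can be made concrete with a small sketch (the data structures and function names are illustrative, not the ABS processing system): each incident is classified once, with assault taking precedence over threat and sexual assault over physical assault, and each person is counted once per type of violence and once in the aggregate total.

```python
# Illustrative only: hypothetical incident records, not the ABS processing system.
PRECEDENCE = ["sexual assault", "physical assault", "sexual threat", "physical threat"]

def classify(incident_types):
    """A single incident is counted once: sexual assault outranks physical
    assault, and assault outranks threat."""
    for t in PRECEDENCE:
        if t in incident_types:
            return t

def prevalence(people):
    """people: {person_id: [set of types involved in each incident, ...]}.
    A person is counted once per type experienced, and once in the total."""
    counted = {t: set() for t in PRECEDENCE}
    total = set()
    for pid, incidents in people.items():
        for types in incidents:
            counted[classify(types)].add(pid)
            total.add(pid)
    return {t: len(s) for t, s in counted.items()}, len(total)

# Person 1: two physical assaults (stranger, partner) -> counted once for physical assault.
# Person 2: one incident involving both sexual and physical assault -> sexual assault only.
people = {1: [{"physical assault"}, {"physical assault"}],
          2: [{"sexual assault", "physical assault"}]}
by_type, total = prevalence(people)
```

Note how the type-level counts (1 + 1) equal the aggregate total of 2 here only because no person experienced more than one type; in general, components may exceed the total, which is why they "may not add".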
### Violence - most recent incident data (MRI)

45 The characteristics of, and actions taken following, an incident of violence differ depending on the type of violence a person experienced and the gender of the perpetrator. Due to constraints on the length of an interview and the load placed on respondents, it was not possible to collect detailed information about every incident of violence a person had ever experienced. Instead, detailed information was collected about the most recent incident of each of the eight different types of violence. This 'most recent incident' method was used to select a sample of incidents. If the most recent incident occurred more than 10 years ago, detailed information was not collected, due to the difficulties associated with recalling the incident and to reduce respondent burden. For further information refer to the Violence - Most recent incident page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

46 People who had experienced violence within the last 10 years were asked to provide more detailed information about their most recent incident, including: what happened during the incident; the actions taken following the incident; and the impact of the incident.
This provides information for each of the eight different types of violence a person could experience:

• Sexual assault by a male perpetrator
• Sexual assault by a female perpetrator
• Sexual threat by a male perpetrator
• Sexual threat by a female perpetrator
• Physical assault by a male perpetrator
• Physical assault by a female perpetrator
• Physical threat by a male perpetrator
• Physical threat by a female perpetrator

47 Most recent incident data can be used to analyse the different types of violence experienced by men and women, to assess:

• whether there are differences in what happens when different types of violence are experienced, and
• whether there are differences between what happens when a woman experiences violence and when a man experiences violence.

### Violence - prevalence

48 The information described above can be used to produce a range of prevalence estimates for men's and women's experiences of violence, according to the type of violence, the type and sex of the perpetrator, and the time frame. Prevalence refers to the number and proportion (rate) of persons in a given population that have experienced a type of violence within a specified time frame, usually the last 12 months or since the age of 15. Prevalence rates are calculated by dividing the number of men/women/persons that have experienced the type of violence within that time frame by the total number of persons aged 18 years and over within that same population. For further information refer to the Violence - Prevalence page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

49 The characteristics of the different types of violence cannot be added together to produce a total for the characteristics of "violence". Conceptually it is invalid to add together data about the characteristics of the different types of violence, as the actions a person takes may differ depending on the type of violence experienced.
For example, if a person had contacted the police about their most recent incident of physical assault by a male but had not contacted police about their most recent incident of physical assault by a female, it is impossible to calculate an estimate of whether or not this person has contacted the police about "violence" - they both have and haven't. Adding together data about characteristics of the different types of violence would also double count all persons who have experienced more than one type of violence.

### Abuse before the age of 15

50 The definition of child abuse can vary across the different sectors of government, criminal justice systems, service providers and research organisations, depending on the perspective and interests of the organisation that has created it.

51 Sexual abuse is defined as any act involving a child (under the age of 15 years) in sexual activity beyond their understanding or contrary to currently accepted community standards. This excludes emotional abuse, and sexual abuse by someone under the age of 18.

52 Physical abuse is defined as any deliberate physical injury (including bruises) inflicted upon a child (under the age of 15 years) by an adult. This excludes discipline that accidentally resulted in injury, emotional abuse, and physical abuse by someone under the age of 18.

53 The 2016 PSS collected information about a respondent's experience of sexual and physical abuse before the age of 15 years by any adult (male or female). Respondents were asked if they were sexually and/or physically abused by an adult (aged 18 years or over) before the age of 15. The same set of questions was repeated twice, for sexual abuse and physical abuse separately. Due to the sensitive nature of the module, respondents had the option of declining to answer these questions. If a respondent answered that they had experienced sexual or physical abuse before the age of 15, they were asked to identify all of the adult perpetrator types that abused them.
54 Information about the characteristics of the first incident of abuse was collected separately for sexual abuse and physical abuse. The Abuse before the age of 15 module was primarily designed to be used in conjunction with information collected in other parts of the survey in order to analyse the relationship between experiences of child abuse before the age of 15 and later experiences of violence as an adult from the age of 15. For further information refer to the Abuse before the age of 15 page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Witness violence before the age of 15

55 In the context of this module, violence refers to physical assault only, and only encompasses violence witnessed between a parent and their partner. The 2016 PSS collected information about whether the respondent, before the age of 15, ever saw or heard violence being directed at one parent by another. The definition of violence used was the same as that used to collect physical assault data in the Violence since the age of 15 topic. Mother includes step mothers and female guardians or caregivers; her partner includes the respondent's father/stepfather, and the mother's boyfriend or same-sex partner. Father includes step fathers and male guardians or caregivers; his partner includes the respondent's mother/stepmother, and the father's girlfriend or same-sex partner.

56 The questions about witnessing violence before the age of 15 were asked separately of the respondent for witnessing violence against their mother by a partner, and witnessing violence against their father by a partner. Respondents that reported having seen or heard any of the above being done to their mother and/or father were then asked how many times they saw or heard these things being done - 'once or twice' or 'more than twice'.
57 The Witness violence before the age of 15 module was primarily designed to be used in conjunction with information collected in other parts of the survey to analyse the relationship between seeing and hearing violence as a child towards a parental figure and later experiences of violence as an adult from the age of 15. For further information refer to the Witness violence before the age of 15 page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Partner violence

58 Partner violence refers to any incident, reported in the Violence since the age of 15 module, of sexual assault, sexual threat, physical assault or physical threat by a current partner the respondent was living with at the time of the survey and/or a previous partner they had lived with. Partner violence does not include violence by a boyfriend/girlfriend or date, which refers to a person that the respondent dated, or was intimately involved with, but did not live with. For detailed descriptions and definitions, refer to the Glossary.

59 If a respondent had identified more than one violent previous partner in the Violence since the age of 15 module, they were asked to focus on their most recently violent previous partner in the Partner violence module.

60 The Partner violence module is designed to capture information about the nature and impact of the violence throughout the duration of the relationship with the current partner and/or most recently violent previous partner.
Partner violence data can be used to examine:

• the characteristics of the violence experienced, such as how often violence was experienced
• support-seeking behaviours, such as whether advice or support was sought and from whom
• police involvement, such as whether the police were contacted, and other legal action including whether the partner was charged, whether they went to court, and whether a restraining order was issued
• the impact of the violence on the respondent, including whether they experienced anxiety or fear as a result of the violence, changes to their usual routine, and whether they took time off work, and
• separations from their violent partner as a result of the violence, including whether they ever temporarily separated, reasons for separation, places stayed during temporary separations, whether they left property or assets behind, and reasons for returning to the violent partner.

61 Partner violence data collected in this module cannot be broken down by the type of violence experienced (sexual/physical assault/threat), only by the type of perpetrator (current or previous). Components for current partner and previous partner violence are not able to be added together to produce data for a 'total partner' aggregate, as this would lead to double counting of all persons who have experienced violence by both a current and a previous partner, and does not account for people who had experienced violence by more than one previous partner. For further information refer to the Partner violence page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Partner emotional abuse

62 Emotional abuse occurs when a person is subjected to certain behaviours or actions that are aimed at preventing or controlling their behaviour, causing them emotional harm or fear. These behaviours are characterised by their intent to manipulate, control, isolate or intimidate the person they are aimed at.
They are generally repeated behaviours and include psychological, social, economic and verbal abuse. For a detailed definition of emotional abuse, refer to the Glossary.

63 The 2016 PSS collected information about a respondent's experience of emotional abuse since the age of 15, by a current partner they were living with at the time of the survey and/or a previous partner that they had lived with. Where a person had experienced emotional abuse by more than one previous partner, they were asked to focus on the most recently emotionally abusive previous partner when answering the more detailed questions about previous partner emotional abuse. If the respondent had also experienced previous partner violence, this may or may not have been the same previous partner that was most recently violent. Emotional abuse by a previous partner includes abuse that occurred after the relationship ended. For definitions of current partner and previous partner, refer to the Glossary.

64 Partner emotional abuse data can be used to examine:

• the prevalence of partner emotional abuse, and
• the characteristics of emotional abuse by a current and previous partner, such as the types of emotionally abusive behaviours experienced, how often the emotional abuse was experienced, and whether anxiety or fear was experienced as a result.

65 Components for current partner and previous partner emotional abuse are not able to be added together to produce data for a 'total emotional abuse' aggregate, as this would lead to double counting of all persons who have experienced emotional abuse by both a current and a previous partner. For further information refer to the Partner emotional abuse page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).
### Stalking

66 The 2016 PSS collected information about a respondent's experiences of stalking since the age of 15. Persons were asked if they had experienced stalking by a man and by a woman separately. Stalking was considered to have occurred if a person had experienced:

• any unwanted contact or attention on more than one occasion that could have caused fear or distress, or
• multiple types of unwanted contact or behaviour that could have caused fear or distress.

For a detailed definition of stalking, refer to the Glossary.

67 Once stalking behaviour had been identified, that episode (which the PSS defines as the most recent stalking episode) became the focus of the remainder of the questions, as the stalking behaviours were likely to have occurred over a protracted period of time. Information about the types of stalking behaviours experienced in the most recent episode, the relationship to the perpetrator, and when the episode of stalking stopped was collected for the most recent episode of stalking by a man and by a woman since the age of 15. If the most recent episode of stalking occurred in the last 20 years, further information about the episode was collected. For further information refer to the Stalking page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

68 Stalking prevalence data is available at the person level as aggregated data items (persons that have experienced stalking by both a man and a woman are only counted once in the aggregated data item 'Whether experienced stalking since age 15'). Stalking prevalence data can be used to examine:

• the estimated number and proportion (rate) of persons that have experienced stalking by a man and/or woman during the last 12 months and since the age of 15, and
• differences in the stalking prevalence rate between the male and female population.
69 Most recent episode data can be used to examine:

• differences between men's and women's experiences of stalking, including stalking behaviours experienced, impacts, actions, and outcomes, and
• differences between male perpetrated stalking and female perpetrated stalking, including stalking behaviours experienced, impacts, actions, and outcomes.

### Comparability between 2012 and 2016 PSS

70 The scope, content and data collection for the 2016 PSS was largely the same as the 2012 survey, with a few key changes:

• Sample size – The sample size for 2016 was significantly larger due to improvements in response rates and changes to sample design.
• Sample design – In 2016, pre-assigned gender selections for households were able to be 'flipped' to the alternate gender if no-one aged 18 years or over in the household was of the pre-assigned gender. This is consistent with the approach taken in the 2005 PSS.
• Collection mode – The 2016 PSS introduced the Computer Assisted Self Interview (CASI), which gave respondents the option to complete the sensitive (voluntary) topics themselves using the interviewer's laptop.
• Compulsion – In 2016, the PSS was partly compulsory: the collection of demographic and other general non-sensitive topics was compulsory.
• Content – Some changes were made to definitions to assist with respondent understanding, as well as the addition of some new content or concepts (for example, additional technologically focused behaviours were added to the sexual harassment, stalking and emotional abuse topics).

71 Selected summary results from the 1996 Women's Safety Survey, and the 2005, 2012 and 2016 PSS, are presented in this publication to provide comparisons over time - refer to Tables 2, 8 and 39. The statistical significance of differences in estimates between 2012 and 2016 has been investigated and results that are statistically significant are indicated in the tables.
72 For further information on 2016 PSS procedures, content changes, and comparability with previous iterations of the PSS, refer to the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Comparison of data from PSS and other ABS sources

73 The ABS collects and publishes data relating to crime and safety from different sources, for example the Crime Victimisation Survey, Australia and the General Social Survey, Australia, as well as administrative data from police agencies. Comparisons of PSS data with data from other sources cannot be readily made because of differences in data collection methods and in the concepts and definitions used to measure violence. For example, differences may arise from survey mode (face-to-face versus telephone interviewing), context effects (preceding questions influencing responses to subsequent questions), differences in question wording, and the length and timing of data collection.

74 Further information on crime data measurement issues is available in the information paper: Measuring Victims of Crime: A Guide to Using Administrative and Survey data (cat. no. 4500.0.55.001).

### Classifications

75 Country of Birth data were classified according to the Standard Australian Classification of Countries (SACC) (cat. no. 1269.0).

76 Languages spoken at home were classified according to the Australian Standard Classification of Language (ASCL) (cat. no. 1267.0).

77 Australian geographic data are classified according to the Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas (cat. no. 1270.0.55.001).

78 Educational attainment data are classified according to the Australian Standard Classification of Education (ASCED) (cat. no.
1272.0).

### Confidentiality

79 The Census and Statistics Act 1905 provides the authority for the ABS to collect statistical information, and requires that statistical output shall not be published or disseminated in a manner that is likely to enable the identification of a particular person or organisation. This requirement means that the ABS must take care to ensure that any statistical information about individual respondents cannot be derived from published data.

80 To minimise the risk of identifying individuals in aggregate statistics, a technique has been used to randomly adjust cell values. This technique is called perturbation. Perturbation involves a small random adjustment of the statistics and is considered the most satisfactory technique for avoiding the release of identifiable statistics while maximising the range of information that can be released. These adjustments have a negligible impact on the underlying pattern of the statistics.

81 After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values to derive a total will not necessarily give the same result as published totals. Where possible, a footnote has been applied to an estimated total where this is apparent in a diagram or graph (for example, if males who experienced violence and females who experienced violence do not add to persons who have experienced violence). For commentary, please refer to the referenced data tables to confirm perturbation effects.

82 The introduction of perturbation in publications ensures that these statistics are consistent with statistics released via services such as TableBuilder.

83 Perturbation has been applied to 2016 PSS data published in this publication. Data from previous PSS or WSS presented in this publication have not been perturbed, but have been confidentialised if required using suppression of cells.

### Rounding

84 Estimates presented in this publication have been rounded.
As a result, sums of the components may not add exactly to totals.

85 Proportions presented in this publication are based on unrounded figures. Calculations using rounded figures may differ from those published.

### Acknowledgements

86 The ABS would like to thank the people who completed the survey. Their participation has contributed valuable information that will help to inform public debate about violence and will help further development of policies and programs aimed at reducing the prevalence of violence in Australia.

87 The ABS acknowledges the support and input of the Department of Social Services (DSS) which, under the National Plan to Reduce Violence Against Women and their Children 2010-2022, provided funding for the 2016 PSS. A Survey Advisory Group, comprising key government and non-government bodies, provided the ABS with advice on the information to be collected and on some aspects of survey methodology. Members of this group included representatives from State and Commonwealth Government departments, crime research agencies, service providers and academics in the field.

### Products and services

88 All tables, in Excel format, can be accessed from the Data Downloads. The spreadsheets present tables of estimates and proportions/prevalence rates, and the corresponding Relative Standard Errors (RSEs) for estimates and Margins of Error (MoEs) for proportions/prevalence rates. For more details regarding RSEs and MoEs, refer to the Technical Note of this publication.

### Microdata

89 The 2016 PSS is available as TableBuilder and Detailed Microdata products for users who wish to undertake more detailed analysis. TableBuilder is an online tool for creating tables from ABS survey data, where variables can be selected for cross-tabulation. The Detailed Microdata product is available through the ABS Data Laboratory. The Microdata Entry page on the ABS website contains links to microdata related information to assist users to understand and access microdata.
Additional information on the PSS microdata products is also available via Microdata: Personal Safety Survey, Australia, 2016 (cat. no. 4906.0.55.001).

### Data available on request

90 Customised tabulations are available on request. Subject to confidentiality and sampling variability constraints, tabulations can be produced from the survey incorporating data items, populations and geographic areas selected to meet individual requirements.

## Technical note

### Reliability of estimates

1 The estimates in the 2016 PSS publication are based on information obtained from a sample of the Australian population. Although care has been taken to ensure that the results of the survey are as accurate as possible, there are certain factors which can affect the reliability of the results to some extent and for which no adequate adjustments can be made.

2 One such factor is known as sampling error. The key measures used to assess the impact of sampling error on the 2016 PSS estimates in this publication are described below. These measures should be kept in mind when interpreting the results of this survey.

3 Other factors are collectively referred to as non-sampling errors. For more details on sampling error, as well as details on non-sampling errors, refer to the Data Quality and Technical Notes page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

### Sampling error

4 As the 2016 PSS data was obtained from a sample of the Australian population, the impact of sampling error on estimates was closely reviewed. Sampling error (or sampling variability) describes the circumstance where survey estimates differ from those that would have been produced had all persons been included in the survey.
The magnitude of the sampling error associated with a sample estimate depends on the following factors:

• Sample design - the final design attempted to make key survey results as representative as possible within cost and operational constraints.
• Sample size - the larger the sample on which the estimate is based, the smaller the associated sampling error.
• Population variability - the extent to which people differ on the particular characteristic being measured. This is referred to as the population variability for that characteristic. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and therefore the smaller the sampling error. Conversely, the more variable the characteristic, the greater the sampling error.

### Calculation of standard error

5 One measure of the likely difference in estimates is given by the Standard Error (SE), which indicates the extent to which an estimate might have varied because only a sample of dwellings was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings had been included, and about 19 chances in 20 that the difference will be less than two SEs. For example, for a published estimate of 467,300 with an SE of 27,600, there are two chances in three that the true value is in the range 439,700 to 494,900, and 19 chances in 20 that the true value is in the range 412,100 to 522,500.

6 For estimates of population sizes, the size of the SE generally increases with the level of the estimate, so that the larger the estimate, the larger the SE. However, the larger the sample estimate, the smaller the SE becomes in percentage terms. Thus, larger sample estimates will be relatively more reliable than smaller estimates. The SE can be calculated using the estimates (counts or percentages) and the corresponding Relative Standard Error (RSE).
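As a minimal illustration of how an SE translates into the ranges quoted above, the paragraph 5 figures (estimate 467,300 with ranges 439,700 to 494,900 and 412,100 to 522,500, which imply an SE of 27,600) can be reproduced with a short sketch; the function name is illustrative only:

```python
# Reproduce the one- and two-SE ranges quoted in paragraph 5.
# The SE value (27,600) is implied by the published ranges, not stated directly.

def se_ranges(estimate: int, se: int) -> dict:
    """Return the ~67% (one-SE) and ~95% (two-SE) ranges for an estimate."""
    return {
        "one_se": (estimate - se, estimate + se),          # about 2 chances in 3
        "two_se": (estimate - 2 * se, estimate + 2 * se),  # about 19 chances in 20
    }

ranges = se_ranges(467_300, 27_600)
print(ranges["one_se"])  # (439700, 494900)
print(ranges["two_se"])  # (412100, 522500)
```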
For example, in this publication the estimated number of males aged 18 years and over who experienced physical assault in the last 12 months was 309,400. The RSE corresponding to this estimate is 8.7%. The SE is calculated by:

$$\large S E \text { of estimate }=\left(\frac{R S E}{100}\right) \times estimate$$

= (8.7 / 100) x 309,400
= 26,900 (rounded to the nearest 100)

7 The RSE is obtained by expressing the SE as a percentage of the estimate to which it relates. The RSE is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate.

$$\large R S E \%=\left(\frac{S E}{e s t i m a t e}\right) \times 100$$

8 Estimates with RSEs less than 25% are considered sufficiently reliable for most purposes. However, estimates with RSEs of 25% or more are included in this publication and have been identified as subject to caution. RSEs are presented in the tables of the publication for estimates ('000). Estimates with RSEs greater than 25% but less than or equal to 50% are annotated with an asterisk (*) to indicate they are subject to high SEs relative to the size of the estimate and should be used with caution. Estimates with RSEs of greater than 50%, annotated with a double asterisk (**), are considered too unreliable for most purposes. Such estimates can, however, be aggregated with other estimates to reduce the overall sampling error. Note that RSEs for proportion estimates (%) are not presented in the tables of this publication; instead, the Margin of Error (MoE) is presented (see section below). RSEs for proportions can, however, be produced from the TableBuilder or Detailed Microdata products, or on request.

### Calculation of Margin of Error

9 Another useful measure is the Margin of Error (MoE), which describes the distance from the population value that the sample estimate is likely to be within, and is specified at a given level of confidence.
Confidence levels typically used are 90%, 95% and 99%. For example, at the 95% confidence level, the MoE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MoE from the population value (the figure obtained if all dwellings had been enumerated). The MoE at the 95% confidence level is expressed as 1.96 times the SE.

10 A confidence interval expresses the sampling error as a range in which the population value is expected to lie at a given level of confidence. The confidence interval can easily be constructed from the MoE of the same level of confidence, by taking the estimate plus or minus the MoE of the estimate. In other words, the 95% confidence interval is the estimate +/- MoE, i.e. the range from the estimate minus 1.96 times the SE to the estimate plus 1.96 times the SE. The 95% MoE can also be calculated from the RSE by the following, where y is the value of the estimate:

$$\large\operatorname{MOE}(y)=\frac{R S E(y) \times y}{100} \times 1.96$$

11 Note that, due to rounding, the SE calculated from the RSE may be slightly different to the SE calculated from the MoE for the same estimate. The SE of an estimate using MoEs is calculated by:

$$\large S E \text { of estimate }=\left(\frac{M O E}{1.96}\right)$$

12 Using the two formulas above, it was found that there are about 19 chances in 20 that the estimate of the proportion of females aged 18 years and over who experienced sexual harassment in the last 12 months (17.3%) is within +/- 1.1 percentage points of the population value. Similarly, there are about 19 chances in 20 that the proportion of females aged 18 years and over who experienced sexual harassment in the last 12 months is within the confidence interval of 16.2% to 18.4%.

13 In the tables in this publication, MoEs are presented for the proportion estimates (%). Proportion estimates are preceded by a hash (e.g. #10.2) if the corresponding MoE is greater than 10 percentage points.
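The MoE formulas above can be sketched in a few lines, using the paragraph 12 figures (a 17.3% estimate with a 1.1 percentage point MoE); the function names are illustrative only:

```python
# Convert between MoE, SE and RSE at the 95% confidence level
# (MoE = 1.96 x SE), and build a confidence interval from an estimate and its MoE.

Z_95 = 1.96  # multiplier for the 95% confidence level

def confidence_interval(estimate: float, moe: float) -> tuple:
    """95% confidence interval: estimate +/- MoE."""
    return (round(estimate - moe, 1), round(estimate + moe, 1))

def se_from_moe(moe: float) -> float:
    """Recover the SE from a 95% MoE."""
    return moe / Z_95

def moe_from_rse(rse: float, y: float) -> float:
    """95% MoE from the RSE (%) of an estimate y."""
    return rse * y / 100 * Z_95

# Paragraph 12 example: 17.3% estimate with a 1.1 percentage point MoE.
print(confidence_interval(17.3, 1.1))  # (16.2, 18.4)
```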
An estimate is also preceded by a hash if the MoE is large enough that the corresponding confidence interval for the estimate would extend below 0% and/or above 100%, the natural limits of a proportion. The latter situation will occur if the MoE is greater than the estimate itself, or greater than 100 minus the estimate. Users should give the margin of error particular consideration when using these estimates. Note that MoEs for 1996 proportion estimates in the tables for this publication were calculated using the RSEs presented in the RSE tables found in the Women's Safety Survey (cat. no. 4128.0).

### Standard error of a difference

14 The difference between two survey estimates is itself an estimate and is therefore subject to sampling error or variability. The sampling error of the difference between the two estimates depends on their individual SEs and the level of statistical association (correlation) between the estimates. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

$$\large S E(x-y) \approx \sqrt{[S E(x)]^{2}+[S E(y)]^{2}}$$

15 An example of such a difference is the number of females who have been stalked minus the number of males who have been stalked. While this formula will only be exact for differences between separate sub-populations or uncorrelated characteristics of sub-populations, it is expected to provide a reasonable approximation for most differences likely to be of interest in relation to this survey.

### Significance testing on differences between survey estimates

16 When comparing estimates between surveys or between populations within a survey, it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant.
A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Standard error of a difference section. This standard error is then used to calculate the test statistic:

$$\Large\left(\frac{x-y}{S E(x-y)}\right)$$

17 If the value of this test statistic is greater than 1.96 then there is good evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence (at the 95% confidence level) that there is a real difference between the populations.

18 Data presented in the commentary chapters of this publication have been significance tested to assess whether or not there is a difference (for example, between men and women) or a change (for example, between 2012 and 2016). When undertaking additional analysis of data presented in the tables, significance testing is recommended.

#### Example of estimates where there was a statistically significant difference

19 An estimated 5.4% of all men aged 18 years or over and 3.5% of all women aged 18 years or over had experienced physical violence during the 12 months prior to the survey.

• The estimate of 5.4% of men who had experienced physical violence in the 12 months prior to the survey has an RSE of 7.0%. There are 19 chances out of 20 that an estimate of between 4.7% and 6.1% (or +/- 0.7% MoE) of men would have been obtained if all dwellings had been included in the survey.
• The estimate of 3.5% of women who had experienced physical violence in the 12 months prior to the survey has an RSE of 5.9%. There are 19 chances out of 20 that an estimate of between 3.1% and 3.9% (or +/- 0.4% MoE) of women would have been obtained if all dwellings had been included in the survey.
• The value of the test statistic (4.62, using the formula shown in the significance testing section above) is greater than 1.96. This shows that there is evidence, with a 95% level of confidence, of a statistically significant difference between the two estimates. By calculating the confidence intervals for the proportions of men and women who experienced physical violence in the 12 months prior to the survey, it can be seen that the confidence intervals for the estimates for men and women do not overlap (where the confidence intervals do not overlap, there is always a statistically significant difference). Therefore there is evidence to suggest that men were more likely than women to have experienced physical violence in the 12 months prior to the survey.

20 For information on detailed reliability of estimates, refer to the Data Quality and Technical Notes page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

## Glossary

#### Advice or support

'Advice or support' means listening to the respondent, being understanding, making suggestions, giving information, referring the respondent to appropriate services, or offering further help of any kind. It includes contacting or visiting any source of help from a friend to a professional organisation, so long as the respondent perceived that they were seeking advice or support. It excludes anyone who was told or found out about the incident/experiences, but from whom the respondent did not actively seek advice or support (e.g. help sought for injuries, which did not involve the respondent seeking advice or support).

#### Adult

A person aged 18 years or over.
#### Anxiety or fear

Experiences of anxiety or fear can include constant worry, feeling nervous or jumpy, feeling scared or afraid, unable to calm down, feeling on edge, being panicked or distressed, and not being able to eat or sleep.

#### Boyfriend/girlfriend or date

This relationship may have different levels of commitment and involvement, but does not involve living together. For example, it includes persons who have had one date only, regular dating with no sexual involvement, or a serious sexual or emotional relationship. It excludes de facto relationships. See Partner.

#### Current partner

A partner the person currently (at the time of the survey) lives with in a married or de facto relationship.

#### Disability

A disability or restrictive long-term health condition exists if a limitation, restriction, impairment, disease or disorder has lasted, or is expected to last, for six months or more, and restricts everyday activities. A disability or restrictive long-term health condition is classified by whether or not a person has a specific limitation or restriction. The specific limitation or restriction is further classified by whether it is a limitation in core activities, or a schooling/employment restriction only. There are four levels of core activity limitation (profound, severe, moderate, mild). These are based on whether a person needs help, has difficulty, or uses aids or equipment with any core activities (self-care, mobility or communication). A person's overall level of core activity limitation is determined by their highest level of limitation in any of these activities. Refers to the respondent's disability status at the time of the interview. Due to specific interview requirements for the PSS, respondents who identified as having a profound or severe disability may be under-represented. For further information refer to the Disability page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no.
4906.0.55.003).

#### Emotional abuse

Emotional abuse occurs when a person is subjected to certain behaviours or actions that are aimed at preventing or controlling their behaviour, causing them emotional harm or fear. These behaviours are characterised by their intent to manipulate, control, isolate or intimidate the person they are aimed at. They are generally repeated behaviours and include psychological, social, economic and verbal abuse. For the PSS, a person was considered to have experienced emotional abuse where they reported they had been subjected to or experienced one or more of the following behaviours (that were repeated with the intent to prevent or control their behaviour and were intended to cause them emotional harm or fear):

• Controlled or tried to control them from contacting family, friends or community - Where a partner prevents the respondent's social access to any person that they want to see, and where a partner restricts the person's access to environments in which they may make friends (e.g. community or interest groups).
• Controlled or tried to control them from using the telephone, internet or family car - Where a partner hides the phone/removes the phone cord, puts password protection on the computer/removes the power cord, or hides the car keys. Also includes where a respondent felt that they needed a car but were restricted from purchasing one by their partner.
• Controlled or tried to control where they went or who they saw (e.g. constant phone calls, GPS tracking, monitoring through social media websites) - Where a partner monitors a respondent's activity. Includes actions such as checking all telephone call lists/logs on the phone or on a phone bill, monitoring website history to see what sites the respondent has visited, or checking mileage on the car odometer.
• Controlled or tried to control them knowing about or having access to household money - Includes situations where a partner intentionally does not disclose their income to the respondent, or does not give authority for the respondent to operate one or more bank accounts. Includes situations where the respondent receives only an ‘allowance’ from their partner, and the partner demands justification of spending (e.g. receipts).
• Controlled or tried to control them from working or earning money - Includes situations where a partner prevents a respondent from working or restricts the number of hours they can work. Also includes situations where a respondent has expressed interest in gaining employment, and their partner has either restricted them from this, or has forcibly ‘talked them out of’ it (e.g. “you should prioritise your family over yourself”, or “who would want to employ you?”). Includes situations where a partner has stopped the respondent from doing volunteer work, or ‘helping out’ a friend/organisation (e.g. reading stories at the children’s school).
• Controlled or tried to control them from studying - Includes situations where the respondent is not allowed by their partner to study, or is forced to study only at limited times/days or hours, and situations where the respondent has expressed interest in study, and their partner has either restricted them from this, or forcibly ‘talked them out of’ this (e.g. “you should prioritise your family over yourself”, or “you aren’t smart enough for that”). Also includes situations where a partner has stopped the respondent from undertaking formal, as well as informal, education (e.g. adult learning courses held at local community centres or high schools).
• Deprived them of basic needs such as food, shelter, sleep or assistive aids - Includes situations where a partner deprives the respondent of any assistive aids, such as a walking frame, wheelchair or hearing aids etc.
Includes situations where a respondent is deprived of medical or psychological care, or is intentionally locked out of the home by a partner. Also includes situations where a respondent is forced to sleep somewhere other than a bed (e.g. on the floor, couch etc.), and where the respondent is forced to eat differently to their partner (e.g. only rice).
• Damaged, destroyed or stole any of their property.
• Constantly insulted them to make them feel ashamed, belittled or humiliated - Constant put-downs, name calling, bullying or making fun of the respondent (either in company, when the couple are alone, in front of children, etc.). Also includes situations where a partner constantly insults a respondent’s standard of hygiene, appearance, cooking or cleaning etc., or makes them feel 'dumb' or 'useless'.
• Lied to their child/ren with the intent of turning them against them - Telling the respondent’s children that the respondent doesn’t love them, want them, or have time for them. Any lies or “tall tales” told to the children that were intended to cause the respondent emotional harm or fear.
• Lied to other family members or friends with the intent of turning them against them.
• Threatened to take their child/ren away from them.
• Threatened to harm their child/ren.
• Threatened to harm their other family members or friends.
• Threatened to harm any of their pets.
• Harmed any of their pets.
• Threatened or tried to commit suicide.

The definition of emotional abuse excludes:

• Cases of nagging (e.g. about spending too much money on fishing gear, or going out with friends), unless this nagging causes them emotional harm or fear.
• Cases where a spouse has restricted the respondent’s access to money, the car, or the internet as a result of the respondent’s substance abuse, gambling, or compulsive shopping issues, unless the respondent perceives that these restrictions cause them emotional harm or fear.
For further information, refer to the Partner Emotional Abuse page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Face-to-face threatened assault

Any verbal and/or physical threat to inflict physical harm, made face-to-face, where the person being threatened believed the threat was likely and able to be carried out. Excludes any incident where the person being threatened did not encounter the offender in person (e.g. threats made via telephone, text message, e-mail, in writing or through social media).

#### Incident

An ‘incident’ is an event of violence, abuse or assault that an individual has encountered in their life. People were asked about the most recent incident for each of the eight types of violence (sexual assault, sexual threat, physical assault and physical threat, each by a male and by a female). Where a person experienced continuous acts of violence by the same perpetrator (e.g. in a domestic violence situation), they may have considered the continuous acts of violence to be a single incident. In these cases, the respondent was instructed to think about the most recent act of violence by that perpetrator when answering the more detailed questions. It is possible that people have experienced multiple incidents of violence. Where a person has experienced more than one type of violence, they are counted separately in each type of violence they experienced, but are only counted once in the totals. Components therefore may not add to the totals. It is also possible that a single incident of violence may involve more than one of these different types of violence. In order to produce valid violence prevalence rates, a single incident of violence is only counted once in the survey. Where an incident involves both a sexual and physical assault, it is counted as a sexual assault, e.g.
if in an incident a person is physically assaulted during/as part of a sexual assault, this is counted once only as a sexual assault. Where an incident involves a person being both threatened with assault and then assaulted, it is counted as an assault, e.g. if in a single incident a perpetrator threatens to sexually assault a person and then sexually assaults them, this is counted only once in the survey as a sexual assault. The same applies for incidents where a person is both physically threatened with assault and then physically assaulted.

#### Intimate partner

Includes current partner (living with), previous partner (has lived with), boyfriend/girlfriend/date and ex-boyfriend/ex-girlfriend (never lived with). For further information, refer to the Partner Violence page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Margin of Error

Margin of Error (MoE) describes the distance from the population value that the sample estimate is likely to be within, and is specified at a given level of confidence. MoEs presented in this publication are at the 95% confidence level. This means that there are 19 chances in 20 that the estimate will differ by less than the specified MoE from the population value (the figure obtained if all dwellings had been enumerated). For further information, refer to the Technical Note page of this publication.

#### Other known person

Includes any other known person that does not fit into any of the partner, stranger, or (ex-)boyfriend/girlfriend or date categories. Includes:

• Father/Mother - Includes step-parents
• Son/Daughter - Includes step-children
• Brother/Sister - Includes step-siblings
• Other male/female relative or in-law
• Friend - Someone one knows, likes and trusts
• Acquaintance/neighbour - An acquaintance is anybody that the person recognises or knows in some way and is not perceived to be a 'stranger'.
A neighbour is someone who lives or is located close to the person's place of residence
• Employer/manager/supervisor
• Co-worker
• Teacher/tutor
• Client/patient/customer
• Medical practitioner (e.g. doctor, psychologist, nurse, counsellor)
• Priest/Minister/Rabbi or other spiritual advisor
• Carer (includes non-family paid or unpaid helper)
• Any other known person

#### Partner

The term partner in the PSS is used to describe a person the respondent lives with, or lived with at some point, in a married or de facto relationship. This may also be described as a co-habiting partner. In the context of Witnessed Violence, however, partner refers to the person who is in a relationship with the respondent’s mother/stepmother or father/stepfather. For further information, refer to the Witness Violence Before the Age of 15 page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Physical abuse

Any deliberate physical injury (including bruises) inflicted upon a child (under the age of 15 years) by an adult. Excludes discipline that accidentally resulted in injury, emotional abuse, and physical abuse by someone under the age of 18. For further information, refer to the Abuse Before the Age of 15 page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Physical assault

Any incident that involved the use of physical force with the intent to harm or frighten a person. Assaults may have occurred in conjunction with a robbery, and includes incidents that occurred on the job, where a person was assaulted in their line of work (e.g. assaulted while working as a security guard), at school or overseas. Examples of physical force include:

• Pushed, grabbed or shoved - Includes being pushed off a balcony, down stairs or across the room.
• Slapped - Includes a hit with an open hand.
• Kicked, bitten or hit with a fist.
• Hit you with something else that could hurt you - Includes being hit with a bat, hammer, belt, pot, ruler, etc. • Beaten - Includes punching, hitting or slapping in a repetitive manner. • Choked - Includes being choked by hands, a rope, a scarf, a tie or any other item. • Stabbed - With a knife. • Shot - With a gun. • Any other type of physical assault - Includes burns, scalds, being dragged by the hair or being deliberately hit by a vehicle. Physical assault excludes incidents that occurred during the course of play on a sporting field and excludes incidents of violence that occurred before the age of 15 (which are defined as physical abuse). If a person experienced physical assault and physical threat in the same incident, this was counted once only as a physical assault. If a person experienced sexual assault and physical assault in the same incident, this was counted once only as a sexual assault. #### Physical threat Any verbal and/or physical intent or suggestion of intent to inflict physical harm, which was made face-to-face and which the person believed was able to be and likely to be carried out. Examples of physical threats include: • Threaten or attempt to hit with a fist or anything else that could hurt - Includes threats or attempts to slap, punch, spank or hit in any way with a fist or weapon such as a bat, hammer or pot. • Threaten or attempt to stab with a knife. • Threaten or attempt to shoot with a gun - The gun may or may not have been aimed at the person. It includes situations where a gun was left in an obvious place or if the person knew that the perpetrator had access to a gun. It includes toy guns, starter pistols etc., if the person believed they were real. • Threaten or attempt to physically hurt in any other way. Physical threat excludes any incident in which the threat was actually carried out and incidents which occurred during the course of play on a sporting field. 
If a person experienced sexual threat and physical threat in the same incident, this was counted once only as a sexual threat.

#### Physical violence

The occurrence, attempt or threat of physical assault experienced by a person since the age of 15. For further information, refer to the Violence Prevalence and Violence - Most Recent Incident pages in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Population

Females and males aged 18 years and over.

#### Prevalence of violence

Prevalence of violence refers to the number and proportion (rate) of persons in a given population that have experienced any type of violence within a specified time frame - usually in the last 12 months (the 12 months prior to the survey) and since the age of 15. For further information, refer to the Violence Prevalence page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Previous partner

A person that the respondent lived with at some point in a married or de facto relationship, from whom the respondent is now separated, divorced or widowed.

#### Proxy

A proxy is a person who answers the survey questions when the person selected for the interview is incapable of answering for themselves. Reasons the selected person may not be able to answer for themselves include illness/injury or language difficulties. For this survey, a proxy was used to complete the general information component on behalf of the selected person. No proxy interviews were conducted on the voluntary components of the survey, and therefore data for these selected persons were not used in output. For more details, refer to the Proxy section of the Survey Development and Data Collection page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Relative Standard Error

The Relative Standard Error (RSE) is the standard error expressed as a proportion of an estimated value.
For further information, refer to the Technical Note page of this publication.

#### Sexual abuse

Any act by an adult involving a child (under the age of 15 years) in sexual activity beyond their understanding or contrary to currently accepted community standards. Excludes emotional abuse and sexual abuse by someone under the age of 18. For further information, refer to the Abuse Before the Age of 15 page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Sexual assault

An act of a sexual nature carried out against a person's will through the use of physical force, intimidation or coercion, including any attempts to do this. This includes rape, attempted rape, aggravated sexual assault (assault with a weapon), indecent assault, penetration by objects, forced sexual activity that did not end in penetration, and attempts to force a person into sexual activity. Incidents so defined would be an offence under State and Territory criminal law. Sexual assault excludes incidents of violence that occurred before the age of 15 - these are defined as sexual abuse. It also excludes unwanted sexual touching - this is defined as sexual harassment. If a person experienced sexual assault and sexual threat in the same incident, this was counted once only as a sexual assault. If an incident of sexual assault also involved physical assault or threats, this was counted once only as a sexual assault.

#### Sexual harassment

Sexual harassment is considered to have occurred when a person has experienced or been subjected to behaviours which made them feel uncomfortable, and which were offensive due to their sexual nature.
PSS collects information about selected types of sexual harassment behaviours including: • Indecent text, email or post - Includes electronic messages (such as text messages, SMS, MMS, posts on Facebook or other internet social networking sites, emails, or other Internet messages), and written messages (such as letters delivered by mail or notes left where a person could find them). Does not include messages in which profanity was used, unless this was offensive due to its sexual content. • Indecent exposure - Is the act of exposing genitals for the purpose of distressing, shocking, humiliating and/or generating fear in a person. • Inappropriate comments - Includes inappropriate comments in a group situation as well as when the respondent is alone with the person who is harassing them, and sexual comments that are related to the respondent’s race, such as implying that people of a particular cultural group have certain sexual characteristics. • Unwanted touching - Is momentary or brief touching or contact and includes groping or brushing against a breast or bottom. • Distributing or posting pictures or videos of the person, that were sexual in nature, without their consent - Includes taking a photo or video which was sexual in nature, or showing/sending/posting the photos/videos which were sexual in nature. • Exposure to pictures, videos or materials which were sexual in nature that the person did not wish to see - Includes emailing the person or making them watch pornography, and displaying posters, magazines or screen savers of a sexual nature for the person to see. For further information, refer to the Sexual Harassment page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003). #### Sexual threat The threat of acts of a sexual nature that were made face-to-face where the person believed it was able to and likely to be carried out. 
If a person experienced sexual assault and sexual threat in the same incident, this was counted once only as a sexual assault.

#### Sexual violence

The occurrence, attempt or threat of sexual assault experienced by a person since the age of 15. For further information, refer to the Violence Prevalence and Violence - Most Recent Incident pages in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

#### Since the age of 15

Refers to any violence experienced by a person since the age of 15.

#### Stalking

Stalking involves various behaviours, such as loitering and following, which the person believed were being undertaken with the intent to cause them fear or distress. To be classified as stalking, more than one type of behaviour had to occur, or the same type of behaviour had to occur on more than one occasion. Behaviours include:

• Loitered or hung around outside the person's home.
• Loitered or hung around outside the person's workplace.
• Loitered or hung around outside the person's place of leisure or social activities.
• Followed or watched them in person.
• Followed or watched them using an electronic tracking device (e.g. GPS tracking system, computer spyware).
• Maintained unwanted contact with them by phone, postal mail, email, text messages or social media websites.
• Posted offensive or unwanted messages, images or personal information on the internet about them.
• Impersonated them online to damage their reputation.
• Hacked or accessed their email, social media or other online account without their consent to follow or track them.
• Gave or left objects where they could be found that were offensive or disturbing.
• Interfered with or damaged any of their property.

For further information, refer to the Stalking page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).
#### Standard Error

The Standard Error (SE) indicates the extent to which an estimate might have varied because only a sample of dwellings was included. For further information, refer to the Technical Note page of this publication.

#### Stranger

Someone the person did not know, or someone they knew by hearsay.

#### Violence

In the PSS, violence is defined as any incident involving the occurrence, attempt or threat of either sexual or physical assault. Violence can be broken down into two main categories: sexual violence and physical violence.

#### Witness violence before the age of 15

The PSS asks respondents if they ever saw or heard violence being directed at one parent by another before the age of 15. Violence in this context refers to physical assault only. Mother includes stepmothers and female guardians or care-givers; partner includes the respondent’s father/stepfather, and the mother’s boyfriend or same-sex partner. Father includes stepfathers and male guardians or care-givers; partner includes the respondent’s mother/stepmother, and the father’s girlfriend or same-sex partner. For further information, refer to the Witness Violence Before the Age of 15 page in the Personal Safety Survey, Australia: User Guide, 2016 (cat. no. 4906.0.55.003).

## Abbreviations

• ABS - Australian Bureau of Statistics
• ARA - Any responsible adult
• ASCED - Australian Standard Classification of Education
• ASCL - Australian Standard Classification of Language
• CAPI - Computer assisted personal interview
• CASI - Computer assisted self interview
• COB - Country of Birth
• DSS - Department of Social Services
• MMS - Multi-media messaging service
• MoE - Margin of Error
• MRI - Most recent incident
• PSS - Personal Safety Survey
• RSE - Relative Standard Error
• SE - Standard Error
• SMS - Short message service
• WSS - Women's Safety Survey
https://www.physicsforums.com/threads/quantum-linear-algebra-and-vector-spaces.286293/
# Quantum/linear algebra and vector spaces

1. Jan 20, 2009

### saraaaahhhhhh

I have never taken linear algebra, but we're doing some catch-up on it in my Quantum Mechanics class. Using the Griffiths book, problem A.2 if you're curious. Please explain how to solve this, if you help me. If you know of resources on how to think about this stuff, I'd greatly appreciate the assistance.

***

Consider the collection of all polynomials (with complex coefficients) of degree less than N in x.

a.) Does this set constitute a vector space (with the polynomials as vectors)? If so, suggest a convenient basis and give the dimension of the space. If not, which of the defining properties does it lack?
b.) What if we require that the polynomials be even functions?
c.) What if we require that the leading coefficient (i.e., the number multiplying x^(N-1)) be 1?
d.) What if we require that the polynomials have the value 0 at x=1?
e.) What if we require that the polynomials have the value 1 at x=0?

My attempt at a solution is:

a.) Yes, it does constitute a vector space. Any vector would be an ordered N-tuple (?) constructed from the coefficients. How would I answer about the dimension of the space? Does it have N dimensions? I'm not sure if I understand what is being asked.
b.) Nothing changes?
c.) Then you'd have a pretty boring vector space? But I think all the rules would work.
d.) Still a vector space?
e.) Still a vector space? I don't see why that would change, I must be missing something.

2. Jan 21, 2009

### CompuChip

So first of all, what are the axioms for a vector space?

a) Yes it does. So they ask for a basis. That means, give a bunch of polynomials in x, so you can express every polynomial of degree less than N in x as a linear combination.
Don't think too hard, it's really straightforward :P The dimension is the minimal number of such functions that you need (and it's easy to check once you have a set: just check if you can indeed express every polynomial as a linear combination in your basis, and that if you take one out you can find an example where this no longer works).

Note that every vector is now a function of x, although you are right that it is isomorphic (i.e. bijectively mapped and equivalent as a vector space) with R^N by writing down a vector of N coefficients, in a particular basis you have chosen. This is not unique though: just post your basis and I will give you another one, which is equally good (in terms of being able to express all the functions) but where the coefficients for the same function look entirely different in both. Note how you have to let go of the idea of a vector as a set of numbers, and think of it as an abstract "point" in a vector space... it's similar to having a vector in R^N: although a given vector is a unique point in the space, the coordinates you write down between the brackets, which you call "the vector", are actually dependent on the basis you have chosen for the vector space. [Perhaps this confuses you now; if you ever want to seriously do something like quantum physics, think about it]

b) Why does nothing change? You need to show this: check the properties of a vector space (is 0 even? is the sum of two even polynomials even? ...)
c) Again, check the rules. I wouldn't call it a boring vector space... (boring is a subjective word, but vector space is not :P)
d) Show it! Check the rules!
e) Try it! Check the rules!

Last edited: Jan 21, 2009
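CompuChip's advice to "check the rules" can be spot-checked numerically. The sketch below is my own illustration, not from the thread: it represents polynomials of degree less than N by their coefficient lists [a0, a1, ..., a_(N-1)] and tests closure under addition for each variant. Even polynomials (b) and polynomials with p(1) = 0 (d) stay inside their sets; "leading coefficient 1" (c) and "p(0) = 1" (e) do not, which is why the latter two fail to be vector spaces.

```python
N = 4  # polynomials of degree < N, stored as coefficient lists [a0, a1, ..., a_(N-1)]

def add(p, q):
    """Vector addition: add polynomials coefficient-wise."""
    return [a + b for a, b in zip(p, q)]

def value_at(p, x):
    """Evaluate a0 + a1*x + ... + a_(N-1)*x^(N-1) at x."""
    return sum(c * x**i for i, c in enumerate(p))

# (b) Even polynomials (all odd-power coefficients zero): closed under addition.
s = add([1, 0, 2, 0], [3, 0, -1, 0])
print(all(s[i] == 0 for i in range(1, N, 2)))  # True: the sum is still even

# (c) Leading coefficient 1: NOT closed -- the sum has leading coefficient 2.
print(add([0, 1, 0, 1], [2, 0, 0, 1])[-1])  # 2, not 1, so not a vector space

# (d) p(1) = 0: closed under addition (and scaling), so it is a subspace.
print(value_at(add([1, -1, 0, 0], [0, 1, -1, 0]), 1))  # 0

# (e) p(0) = 1: NOT closed -- the sum takes the value 2 at x = 0.
print(value_at(add([1, 1, 0, 0], [1, 0, 1, 0]), 0))  # 2, not 1, so not a vector space
```

For (a), a convenient basis is {1, x, x^2, ..., x^(N-1)}, so the dimension is N; for (b) the even powers {1, x^2, ...} alone form a basis.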
https://www.biostars.org/p/111196/
# Multiple comparisons in a DESeq analysis with single replicates

0 0 Entering edit mode 7.7 years ago Assa Yeroslaviz ★ 1.7k

I have a data set of six single samples with no replicates. I know that DESeq can work with these, as I have done it already. I would like to know what to do in this case when working with multiple comparisons. I have run the analysis in a pair-wise mode. DESeq then gives this output:

estimating size factors # workflow of the DESeq2 function, pair-wise mode
estimating dispersions
same number of samples and coefficients to fit, estimating dispersion by treating samples as replicates
gene-wise dispersion estimates
mean-dispersion relationship
final dispersion estimates
fitting model and testing

But when running the DESeq() command with all the samples in one go, I sort of trick DESeq into thinking there are multiple replicates in the data set, and I get this output:

estimating size factors # multiple comparisons in one count table
estimating dispersions
gene-wise dispersion estimates
mean-dispersion relationship
final dispersion estimates
fitting model and testing

Is it better in this case to run DESeq in pair-wise mode? Don't I sort of cheat the results by letting DESeq think there are multiple replicates in the data set?

thanks
Assa

DESeq multiple comparisons • 2.6k views

0 Entering edit mode

Can you show the design in both cases so we know what you mean by tricking DESeq() in the second instance? I'm assuming you have some sort of factorial design in the latter case.
0 Entering edit mode

This is the script I am using:

> featureCountTable <- read.delim2("ReadCountTable.txt", sep="\t", quote="", row.names=1)
> coldata <- read.delim2("coldata.txt", sep="\t", quote="", row.names=1)
> coldata
      sample treatment
CTRL1  ctrl1      ctrl
CTRL2  ctrl2      ctrl
KO1      KO1       KO1
KO2      KO2       KO1
KO3      KO3       KO2
KO4      KO4       KO3
> cds <- DESeqDataSetFromMatrix(countData = featureCountTable, colData = coldata, design = ~ treatment)
> fit = DESeq(cds)
> res_1 = results(fit, contrast=c("treatment","ctrl","KO1"))
> res_2 = results(fit, contrast=c("treatment","ctrl","KO2"))
> res_3 = ...

This is how I do it here. But I don't get the message that DESeq2 "discovers" that I don't have any replicates, which made me wonder whether or not this is correct. In the first run, I always extracted the columns I needed for the comparison, which was a bit tedious.

1 Entering edit mode

So when you do the "pair-wise" method, am I correct in assuming that you're just loading two samples at once? It's getting the replicate information from the treatment column, so it's correct in the second instance that you do in fact have replicates (at least, you're telling it that you do). You don't need replicates for everything, just one group. Whether your design in the second instance is correct or not depends on the underlying biology, though since you're treating things as unreplicated in one instance, I would guess that this doesn't actually match the experiment.

0 Entering edit mode

Yes, I do have one replicate for each condition (or two for some). The different experiments are independent of each other. This is why I first tried to run them separately. So if I understand it correctly, I can't fit all the samples in DESeq in one go and then just extract the differentially expressed genes if the conditions are not somehow related to each other. If the experiments were done as single, separate cases, so should be the analysis.

Thanks Devon
https://questions.examside.com/past-years/jee/question/if-fleft-x-right-x2-1-over-x2-1-for-every-real-number-x-then-jee-advanced-1998-marks-2-iwejkxevr1yeg7c6.htm
### IIT-JEE 1998

1. If $$f\left( x \right) = {{{x^2} - 1} \over {{x^2} + 1}}$$ for every real number $$x$$, then the minimum value of $$f$$

A. does not exist because $$f$$ is unbounded
B. is not attained even though $$f$$ is bounded
C. is equal to 1
D. is equal to -1

2. The number of values of $$x$$ where the function $$f\left( x \right) = \cos x + \cos \left( {\sqrt 2 x} \right)$$ attains its maximum is

A. $$0$$
B. $$1$$
C. $$2$$
D. infinite

### IIT-JEE 1997

3. If $$f\left( x \right) = {x \over {\sin x}}$$ and $$g\left( x \right) = {x \over {\tan x}}$$, where $$0 < x \le 1$$, then in this interval

A. both $$f(x)$$ and $$g(x)$$ are increasing functions
B. both $$f(x)$$ and $$g(x)$$ are decreasing functions
C. $$f(x)$$ is an increasing function
D. $$g(x)$$ is an increasing function

### IIT-JEE 1995 Screening

4. The slope of the tangent to a curve $$y = f\left( x \right)$$ at $$\left[ {x,\,f\left( x \right)} \right]$$ is $$2x+1$$. If the curve passes through the point $$\left( {1,2} \right)$$, then the area bounded by the curve, the $$x$$-axis and the line $$x=1$$ is

A. $${5 \over 6}$$
B. $${6 \over 5}$$
C. $${1 \over 6}$$
D. $$6$$
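A quick worked check for the first question above (the algebra is mine, not part of the original page): rewriting $$f$$ makes the minimum transparent,

$$f(x) = {{x^2 - 1} \over {x^2 + 1}} = 1 - {2 \over {x^2 + 1}},$$

which is smallest exactly when $$x^2 + 1$$ is smallest, i.e. at $$x = 0$$, giving $$f(0) = -1$$. The minimum is attained and equals $$-1$$.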
https://11011110.github.io/blog/2008/06/12/hadwiger-hardness.html
The Hadwiger number of a graph G is the maximum number of vertices in a clique minor of G, or the maximum number of disjoint connected subgraphs of G that we can find that are all mutually adjacent to each other. It will not be surprising to learn that finding the Hadwiger number (or more precisely testing whether it is large) is NP-complete. But what is the appropriate reference to cite for this result? I looked for it recently when improving the Wikipedia article on the related Hadwiger conjecture (that Hadwiger number is greater than or equal to chromatic number), but came up short. Suggestively, a paper by Alon, Lingas, and Wahlen on approximating the Hadwiger number didn't explicitly state its NP-completeness. Could this really have been unknown?

In case anyone doesn't believe the problem is really NP-complete, here is a proof of the standard type.

Theorem: It is NP-complete, given a graph G and a number h, to determine whether G has Hadwiger number at least h.

Proof: By reduction from domatic number. In the domatic number problem, we are given a graph G and a number d, and asked to determine whether the domatic number (that is, the maximum number of disjoint dominating sets in G) is at least d. We may assume without loss of generality that no vertex of G is adjacent to all others, for if v is such a vertex we may simplify the problem by deleting v and subtracting one from d. As we show, the instance (G,d) may be translated in polynomial time to an equivalent instance (G',h) of the Hadwiger number problem.

So, given an n-vertex graph G in which no vertex is adjacent to all others, and a number d, construct G' in three layers:

• The top layer is a d-vertex clique with vertices t_i.
• The middle layer is an n-vertex independent set with vertices m_i.
• The bottom layer is an n(n+1)-vertex clique with vertices b_{i,j}.
• All pairs of top and middle vertices are connected by edges.
• Middle vertex m_i and bottom vertex b_{j,k} are connected by an edge if i = j or G has an edge v_i v_j. That is, if v_i dominates v_j.

Let h = n(n+1)+d. If G has domatic number at least d, that is, if there exist d disjoint dominating sets S_i, then G' has Hadwiger number at least n(n+1)+d. For we can form a family of mutually-adjacent connected subgraphs, where each bottom vertex forms by itself one of the subgraphs and each top vertex together with a set S_i forms a subgraph.

Conversely, suppose that G' has Hadwiger number at least n(n+1)+d; that is, that it has this many disjoint mutually-adjacent connected subgraphs. Each of these connected subgraphs must include a top or bottom vertex, because no two middle vertices are connected directly to each other and any single middle vertex has too few outgoing edges to be adjacent to all the other subgraphs. But because the number of subgraphs is equal to the total number of top and bottom vertices, each subgraph must have exactly one top or bottom vertex, together with possibly some middle vertices. For each vertex v_i of G, some representative b_{i,j} of v_i among the bottom vertices must form a single-vertex connected subgraph, because there are only n middle vertices to be shared among the n+1 bottom representatives of v_i. The connected subgraph containing any top vertex t_k must then include a middle vertex m_l such that v_l dominates v_i, in order to form an adjacency between the subgraphs containing t_k and b_{i,j}. Thus, for each top vertex t_k, the connected subgraph that includes t_k must include a set of middle vertices that forms a dominating set of G, and these dominating sets must be disjoint as they form a partition of the middle vertices. Therefore, the sets of middle vertices connected to each top vertex form a family of d disjoint dominating sets.

Thus, (G,d) is a positive instance of the domatic number problem if and only if (G',h) is a positive instance of the Hadwiger number problem.
This reduction (together with the easy observation that the Hadwiger number problem is in NP) completes the proof of NP-completeness. Update, July 1: I've asked around some more, and still not found a reference for this fact. So I made one myself: arXiv:0807.0007.
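The three-layer construction in the proof is easy to instantiate mechanically. Below is an illustrative Python sketch (the vertex labels and the adjacency-dict input format are my own choices, not from the post) that builds the Hadwiger-number instance (G', h) from a domatic-number instance (G, d):

```python
from itertools import combinations

def hadwiger_instance(adj, d):
    """Reduction from domatic number (G, d) to Hadwiger number (G', h).

    adj: adjacency of G as a dict mapping each vertex i in 0..n-1 to the
         set of its neighbours (assume no vertex is adjacent to all others).
    Returns (vertices of G', edge set of G', target h)."""
    n = len(adj)
    top = [("t", i) for i in range(d)]                           # d-vertex clique
    mid = [("m", i) for i in range(n)]                           # independent set
    bot = [("b", i, j) for i in range(n) for j in range(n + 1)]  # n(n+1)-vertex clique

    edges = set()
    edges |= {frozenset(p) for p in combinations(top, 2)}   # top layer is a clique
    edges |= {frozenset(p) for p in combinations(bot, 2)}   # bottom layer is a clique
    edges |= {frozenset((t, m)) for t in top for m in mid}  # all top-middle edges
    # middle m_i is adjacent to bottom b_{j,k} iff v_i dominates v_j
    for i in range(n):
        for j in range(n):
            if i == j or j in adj[i]:
                for k in range(n + 1):
                    edges.add(frozenset((("m", i), ("b", j, k))))
    return top + mid + bot, edges, n * (n + 1) + d
```

For a triangle with d = 1 this yields 16 vertices and h = n(n+1) + d = 13, as the counts in the proof predict.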
https://msp.org/pjm/2021/314-2/p09.xhtml
#### Vol. 314, No. 2, 2021

Tunnel number and bridge number of composite genus 2 spatial graphs

### Scott A. Taylor and Maggy Tomova

Vol. 314 (2021), No. 2, 451–494

##### Abstract

Connected sum and trivalent vertex sum are natural operations on genus 2 spatial graphs and, as with knots, tunnel number behaves in interesting ways under these operations. We prove sharp lower bounds on the degeneration of tunnel number under these operations. In particular, when the graphs are Brunnian $\theta$-curves, we show that the tunnel number is bounded below by the number of prime factors, and when the factors are $m$-small, then tunnel number is bounded below by the sum of the tunnel numbers of the factors. This extends theorems of Scharlemann–Schultens and Morimoto to genus 2 graphs. We are able to prove similar results for the bridge number of such graphs. The main tool is a family of recently defined invariants for knots, links, and spatial graphs that detect the unknot and are additive under connected sum and vertex sum. In this paper, we also show that they detect trivial $\theta$-curves.

##### Keywords

knot, spatial graphs, tunnel number

##### Mathematical Subject Classification

Primary: 57K10, 57K12; Secondary: 57K31

ISSN: 1945-5844 (e-only), 0030-8730 (print)
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.112.145006
# Synopsis: Radiation-Belt Scattering in the Lab #### Direct Detection of Resonant Electron Pitch Angle Scattering by Whistler Waves in a Laboratory Plasma B. Van Compernolle, J. Bortnik, P. Pribyl, W. Gekelman, M. Nakamoto, X. Tao, and R. M. Thorne Published April 10, 2014 The flux of electrons in Earth’s radiation belts can sometimes vary by a factor of $100,000$ in a matter of hours. Space scientists assume that these rapid changes are partly due to the scattering of electrons from so-called whistler-mode waves—plasma waves in the Earth’s magnetosphere with frequencies in the kilohertz range. A first experimental observation of this scattering process, recreated in the lab, is presented in Physical Review Letters. The Van Allen radiation belts are two donut-shaped regions, in which charged particles spiral along the Earth’s magnetic field lines. The outer belt can vary dramatically, especially during geomagnetic storms. One possible factor contributing to this variation is the resonant interaction of electrons with whistler modes whose frequency matches the electrons’ gyrofrequency. However, verifying this hypothesis has been difficult because the change in the electron trajectory is very small in a laboratory experiment. Bart Van Compernolle and his colleagues at UCLA have managed to observe electron scattering from whistler waves for the first time. Inside the Large Plasma Device (LAPD) at UCLA, they generated a beam of $5$-kilo-electron-volt electrons, which spiraled along the machine’s magnetic field lines before reaching a detector with a small pin-hole entrance. This hole filtered the incoming electrons depending on the pitch angle of their helical trajectory. The team then induced whistler waves in the LAPD plasma with a radio antenna and observed a drop in the number of electrons making it to the detector. The drop was consistent with the expected change in pitch angle from scattering off of whistler waves. 
The authors expect this verification should help interpret data coming from satellites, such as the recently launched Van Allen Probes that are studying the Earth’s radiation environment. – Michael Schirber
https://bpostance.github.io/posts/header-classifier/
# Using machine learning to cleanse datasets: classifying column headers in data tables

When dealing with large volumes of inbound data files from multiple different sources, the data received can come in a variety of formats and structures, and to varying standards. One particularly challenging issue is data files that, although representing the same type of information, use a variety of different label and data formats. For instance, addresses coded with "Zip" or "Postal Code", "Street" or "Line 1", and money as "£1000", "£1 K", "GBP 1000" or "one thousand pounds".

The machine learning solution is to build a model that can ingest messy labelled data (i.e. with missing and variable field names) and make predictions for what the data fields are. Such a model can then be integrated into data transformation pipelines to apply the correct data labels automatically, or to suggest them.

*Jupyter notebooks to recreate the synthetic dataset and to train and test the model are available in this Git repo.*

```python
# load packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
import os
%matplotlib inline
```

```python
# change the paths to your local directory
data_path = 'P:\\MyWork\\demoColumnTyper\\data'
data_path_external = 'P:\\MyWork\\demoColumnTyper\\data\\external'
data_path_model = 'P:\\MyWork\\demoColumnTyper\\data\\model'
```

## Fields & Formats

The training data has the correct headers attached. We want to predict these on inbound messy "unlabelled" data. I have included some very generic data fields and common formats. For instance, money varies between text and symbol currency values and phone includes a variety of formats and extensions. We also have some generic text_categorical and numeric values in there.
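Before reaching for a model, it is worth stating the baseline this replaces: a hand-maintained synonym map. The mapping and function below are illustrative only (not from the accompanying repo), and show why the rule-based approach scales poorly — every new source system means more manual entries, and unseen variants fall through.

```python
# A naive, hand-maintained header normaliser (illustrative, not from the repo).
HEADER_SYNONYMS = {
    "zip": "zip", "zip code": "zip", "postal code": "zip", "postcode": "zip",
    "street": "address1", "line 1": "address1", "addr1": "address1",
    "tel": "phone", "telephone": "phone", "phone number": "phone",
}

def normalise_header(raw):
    """Map a raw column header to a canonical field name, or None if unseen."""
    return HEADER_SYNONYMS.get(raw.strip().lower())

print(normalise_header("Postal Code"))  # prints: zip
print(normalise_header("Billing Zip"))  # prints: None (unseen variant)
```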
```python
training_data = pd.read_csv(os.path.join(data_path_model, 'training_data.csv'))
training_data[:5]
```

(Two table previews appear here in the original post: the first rows of training_data, with address, city, state, zip, lat/lng, money, reference, email, person-name and phone columns, and a sample of the engineered numeric features; neither table survives extraction intact.)

## Feature Selection - univariate

OK, so that last step actually created about 11,000 numeric features on our training data. We need to trim this down by selecting the data-value features most likely to be correlated with our target column headers. Let's use some chi-square correlation analyses to test the features. If you want to skip the stats, the bar plot below shows the 75 features found to have the highest correlation to the target column headers that we want to predict.

### Chi-squared

The chi-square test is a statistical test of independence used to determine the dependency of two variables. It shares similarities with the coefficient of determination, R². However, the chi-square test is only applicable to categorical or nominal data, while R² is only applicable to numeric data.

• If statistic >= critical value: significant result, reject the null hypothesis (H0), dependent. There IS a relationship.
• If statistic < critical value: not a significant result, fail to reject the null hypothesis (H0), independent. There IS NOT a relationship.
```python
# chi-squared test with similar proportions
from scipy.stats import chi2_contingency
from scipy.stats import chi2

prob = 0.95
alpha = 1.0 - prob

# run test
stat, p, dof, expected = chi2_contingency(pd.crosstab(Xy['Y'], Xy['n_vowels']))

# dof
print('\ndof=%d' % dof)

# interpret test-statistic
critical = chi2.ppf(prob, dof)
print('\nprobability=%.3f, critical=%.3f, stat=%.3f' % (prob, critical, stat))
if abs(stat) >= critical:
    print('\tDependent (reject H0)')
else:
    print('\tIndependent (fail to reject H0)')

# interpret p-value
print('\nsignificance=%.3f, p=%.3f' % (alpha, p))
if p <= alpha:
    print('\tDependent (reject H0)')
else:
    print('\tIndependent (fail to reject H0)')
```

```
dof=240

probability=0.950, critical=277.138, stat=60001.619
	Dependent (reject H0)

significance=0.050, p=0.000
	Dependent (reject H0)
```

```python
features = Xy_vect.columns[2:]
stats = list()
p_values = list()
dofs = list()
for feat in features:
    # run test
    stat, p, dof, expected = chi2_contingency(pd.crosstab(Xy_vect['Y'], Xy_vect[feat]))
    stats.append(stat)
    p_values.append(p)
    dofs.append(dof)

chi2_results = pd.DataFrame({'feature': features, 'X2': stats, 'DoF': dofs,
                             'pvalue': p_values, 'sig': [x <= 0.05 for x in p_values]})
chi2_results.sort_values(by='X2', ascending=False, inplace=True)
```

```python
fig, axs = plt.subplots(figsize=(25, 5))
n = 75
axs.bar(x=range(n), height=chi2_results['X2'][:n])
axs.set_xticks(range(n))
axs.set_xticklabels(chi2_results['feature'][:n], rotation=90)
axs.set_title('Results of Correlation Analysis (top 75 features)', fontsize=15)
```

# Model Training

Here we are building a model that can predict the column type using: i) the training data, and ii) the subset of engineered features selected by correlation analysis.
```python
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
```

```python
model_features = chi2_results['feature'][:100].values
```

```python
# model parameters
NB_params = {'model__alpha': (1e-1, 1e-3)}
NB_pipe = Pipeline([('model', MultinomialNB())])

gs_NB = GridSearchCV(NB_pipe, param_grid=NB_params, n_jobs=2, cv=5)
gs_NB = gs_NB.fit(Xy_vect[model_features], Xy_vect['Y'])
print(gs_NB.best_score_, gs_NB.best_params_)
```

```
0.7691470588235294 {'model__alpha': 0.1}
```

# Load Test Data

Here let's load the testing data. Note that the data is missing column headers. If we were cleaning up this data by hand, we would have to add these manually.

```python
testing_data = [pd.read_csv(os.path.join(data_path_model, x), skiprows=1, header=None)
                for x in os.listdir(data_path_model) if 'testing' in x]
testing_data = pd.concat(testing_data)
testing_data[:10]
```

(A preview of the unlabelled testing data follows in the original post: addresses, cities, states, zips, coordinates, money values, references, emails, names and phone numbers; the table does not survive extraction intact.)

Below we apply the same feature engineering and selection as applied to our training data.

```python
testing_data = [pd.read_csv(os.path.join(data_path_model, x))
                for x in os.listdir(data_path_model) if 'testing' in x]
testing_data = pd.concat(testing_data)

Xy_test = pd.concat([pd.DataFrame(data={'X': list(testing_data[col]), 'Y': col})
                     for col in testing_data.columns])
Xy_test.reset_index(drop=True, inplace=True)
Xy_test.fillna('', inplace=True)  # fill nan values with empty string
Xy_test = feat_eng(Xy_test)

test_vect = vectorizer.transform(Xy_test['X'].astype(str).str.encode('utf-8'))
test_vect = pd.DataFrame(data=test_vect.todense(), columns=vectorizer.get_feature_names())
Xy_test = Xy_test.merge(test_vect, how='left', left_index=True, right_index=True).copy()
```

### How good is the model?

Overall our model is able to correctly identify and label 77% of the test data. Remember the model has never seen the test data before, so that is pretty good.

```python
Xy_test['pred'] = gs_NB.predict(Xy_test[model_features])
print('Model accuracy: %.3f' % np.mean(Xy_test['pred'] == Xy_test['Y']))
```

```
Model accuracy: 0.774
```

There are also some other things we could try to improve our model further. Below is a more detailed report and a bar plot that shows how good our model was on each data column we were trying to predict. We could, for instance, look at where our model is underperforming and try to create and select other features that better capture those data fields. See this guide for interpretation of accuracy, precision, recall etc.
```python
print(metrics.classification_report(Xy_test['Y'], Xy_test['pred']))
```

```
              precision    recall  f1-score   support

    address1       0.53      0.99      0.69      1220
    address2       0.86      0.08      0.14      1220
        city       0.64      0.56      0.60      1220
       email       1.00      1.00      1.00      1220
  first_name       0.46      0.51      0.48      1220
   last_name       0.51      0.46      0.49      1220
         lat       0.98      1.00      0.99      1220
         lng       0.88      1.00      0.94      1220
       money       1.00      1.00      1.00      1220
     num_cat       0.82      1.00      0.90      1220
     numeric       0.70      0.68      0.69      1220
 person_name       0.88      0.98      0.92      1220
       phone       0.99      0.85      0.91      1220
   reference       1.00      0.96      0.98      1220
       state       0.80      0.50      0.61      1220
     txt_cat       0.74      1.00      0.85      1220
         zip       0.68      0.59      0.63      1220

   micro avg       0.77      0.77      0.77     20740
   macro avg       0.79      0.77      0.75     20740
weighted avg       0.79      0.77      0.75     20740
```

The bar charts below show the distribution of predictions for each target. For instance:

- address1: the model was able to identify all of the address1 values in the testing data.
- city: the model identified roughly 50% of the correct city values, but often confused these with human names.

```python
fig, axs = plt.subplots(5, 4, figsize=(10, 20), sharey=True)
for target, ax in zip(Xy_test['Y'].unique(), axs.flatten()):
    heights = Xy_test.loc[Xy_test['Y'] == target, 'pred'].value_counts()
    x = range(len(heights))
    ax.bar(x=x, height=heights / 1220, color='blue', edgecolor='black', alpha=0.6)
    ax.set_xticks(x)
    ax.set_xticklabels(heights.index, rotation=90)
    ax.set_title(target)
plt.tight_layout()
```

That's all folks.
https://courses.engr.illinois.edu/cs225/sp2018/mps/5/
Partner MP

MP5 is a partner MP!

• Part 1 and Part 2 of this MP can be completed with a partner!
• The creative "Part 3" must be completed by yourself and must be unique (and different from your partner's work).

You should denote whom you worked with in the PARTNERS.txt file in mp5. If you worked alone, include only your NetID in PARTNERS.txt.

## Goals and Overview

In this MP, you will:

## Videos

From your CS 225 git directory, run the following on EWS:

```shell
git fetch release
git merge release/mp5 -m "Merging initial mp5 files"
```

If you're on your own machine, you may need to run:

```shell
git fetch release
git merge --allow-unrelated-histories release/mp5 -m "Merging initial mp5 files"
```

Upon a successful merge, your MP5 files are now in your mp5 directory.

Doxygen

You can see all of the required functions on the TODO page on the Doxygen for this MP. A list of relevant files is here.

## Background: PhotoMosaics

A PhotoMosaic is a picture created by taking some source picture, dividing it up into rectangular sections, and replacing each section with a small thumbnail image whose color closely approximates the color of the section it replaces. Viewing the PhotoMosaic at low magnification, the individual pixels appear as the source image, while a closer examination reveals that the image is made up of many smaller tile images.

For this project you will be implementing parts of a PhotoMosaic generator. Specifically, your code will be responsible for deciding how to map tile images to the rectangular sections of pixels in the source image. Selecting the appropriate tile image is supported by a data structure called a $k$-d tree, which we describe in the next section. The pool of tile images is specified by a local directory of images. We provide the code to create the TileImage pool and the code to create the mosaic picture from a tiled source image.

## Background: K-d trees

Binary Search Trees are data structures over one-dimensional keys that support the Dictionary ADT operations (insert, find, remove).
They also support nearest neighbor search: if you have a binary search tree, then given a key that may or may not be in the tree, you can find the closest key that is in the tree. To do so, you recursively walk down the tree, as in find, keeping track of the closest node found:

```cpp
template <class K, class V>
K BST<K, V>::findNearestNeighbor(Node* croot, K target) {
    K childResult;
    // Look in the left or right subtree depending on whether target is
    // smaller or larger than our current root
    if (target < croot->key) {
        // if we have no child in the correct direction, our root must be the closest
        if (croot->left == NULL)
            return croot->key;
        childResult = findNearestNeighbor(croot->left, target);
    } else {
        if (croot->right == NULL)
            return croot->key;
        childResult = findNearestNeighbor(croot->right, target);
    }
    // Calculate the closest descendant node's distance to the target
    double childDistance = distance(childResult, target);
    // Find the distance of this node to the target
    double currDistance = distance(croot->key, target);
    // If the root node is closer, return it; otherwise return the closer child
    if (currDistance < childDistance)
        return croot->key;
    else
        return childResult;
}
```

A $k$-d tree is a generalization of a Binary Search Tree that supports nearest neighbor search in higher numbers of dimensions (for example, with 2-D or 3-D points instead of only 1-D keys). A 1-D tree (a $k$-d tree with $k = 1$) is simply a binary search tree.

For this MP, you will be creating a photomosaic, which requires that, given a region of our original image, we can determine which of our available images best fills that region. If we use the average color of regions and tile images, this can be determined by finding the nearest-colored tile image to a given region. If we treat colors as $(H, S, L)$ points in 3-D space, we can solve this problem with a 3-D tree.

More formally, a $k$-d tree is a special-purpose data structure used to organize elements that can be described by locations in $k$-dimensional space.
It is considered a space-partitioning data structure because it recursively subdivides a space into two convex sets. These sets are rectangular regions of the space called hyperrectangles. $k$-d trees are particularly useful for implementing nearest neighbor search, which is an optimization problem for finding the closest element in $k$-dimensional space. A $k$-d tree is a rooted binary tree. Each node in the tree represents a point in $k$-d-space, as well as a line (hyperplane) defined by one dimension of this point, which divides this space into two regions (hyperrectangles). At each level in the tree, a different dimension is used to decide the direction of the splitting line (hyperplane). An element is selected to define the splitting line by its coordinate value for the current dimension. This element should be the median of all the points in this part of the tree, taken over the current dimension. A node is then created for this element in the tree and its children are created recursively using the same process, which repeats until no elements remain in the region (hyperrectangle). The splitting dimension at any level of the tree can be selected to find the best partition of the data. For our purposes, we will change dimension cyclically, in order (for $k = 3$, we will use dimensions $0, 1, 2, 0, 1, 2, 0, \ldots$). $k$-d trees are particularly useful for searching points in Euclidean space. Perhaps the most common use of a $k$-d tree is to allow for fast search for the nearest neighbor of a query point, among the points in the tree. That is, given an arbitrary point in $k$-dimensional space, find the point in the tree which is nearest to this point. The search algorithm is defined in detail below. For this MP, we will be using a 3-D tree (a 3-dimensional $k$-d tree) to find the closest average color of TileImages to the average color of pixel sections in the source image. 
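The construction just described (median split on a cyclically chosen dimension, then recurse) and a standard nearest-neighbor search can be sketched as follows. This is an illustrative, language-agnostic Python sketch, not the MP's C++ KDTree interface; for the MP you will implement these ideas in the provided skeleton.

```python
# Illustrative k-d tree: median split, cyclic splitting dimension,
# and standard nearest-neighbour search with far-side pruning.

def build_kdtree(points, k, depth=0):
    """points: list of length-k tuples. Returns a nested dict or None."""
    if not points:
        return None
    d = depth % k                       # cycle through dimensions 0..k-1
    points = sorted(points, key=lambda p: p[d])
    m = len(points) // 2                # median index
    return {
        "point": points[m],
        "left": build_kdtree(points[:m], k, depth + 1),
        "right": build_kdtree(points[m + 1:], k, depth + 1),
    }

def nearest(node, target, k, depth=0, best=None):
    """Return the point in the tree closest (Euclidean) to target."""
    if node is None:
        return best
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, target))
    if best is None or dist2(node["point"]) < dist2(best):
        best = node["point"]
    d = depth % k
    near, far = (("left", "right") if target[d] < node["point"][d]
                 else ("right", "left"))
    best = nearest(node[near], target, k, depth + 1, best)
    # only search the far subtree if the splitting plane is closer than best
    if (node["point"][d] - target[d]) ** 2 < dist2(best):
        best = nearest(node[far], target, k, depth + 1, best)
    return best
```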
With the pool of average colors organized in a $k$-d tree, we can search for the best tile to match the average color of every region in the input image, and use it to create a PhotoMosaic!

## Requirements

These are strict requirements that apply to both parts of the MP. Failure to follow these requirements may result in a failing grade on the MP.

• You are required to comment the MP as per the commenting standard described by the Coding Style Policy.
• You must name all files, public functions, public member variables (if any exist), and executables exactly as we specify in this document.
• Your code must produce the exact output that we specify: nothing more, nothing less. Output includes standard and error output and files such as images.
• Your code must compile on the EWS machines using clang++. Being able to compile on a different machine is not sufficient.
• Your code must be submitted correctly by the due date and time. Late work is not accepted.
• Your code must not have any memory errors or leaks for full credit. Valgrind tests will be performed separately from the functionality tests.
• Your public function signatures must match ours exactly for full credit. If using different signatures prevents compilation, you will receive a zero. Tests for const-correctness may be performed separately from the other tests (if applicable).

## Assignment Description

We have provided the bulk of the code to support the generation of PhotoMosaics. There is one critical component that is missing: the KDTileMapper. This class is responsible for deciding which TileImages to use for each region of the original image. In order to make this decision, it must be able to figure out which TileImage has the closest average color to the average color of that region. A data structure called a $k$-d tree is used to find the nearest neighbor of a point in $k$-dimensional space.

This assignment is broken up into the following two parts:

• MP 5.1 — the KDTree class.
• MP 5.2 — the mapTiles function.

As usual, we recommend implementing, compiling, and testing the functions in MP 5.1 before starting MP 5.2.

## MP 5.1

For the first part of MP 5, you will implement a generic KDTree class that can be used to organize points in $k$-dimensional space, for any integer $k > 0$.

### The KDTree class

Although we will only be using a 3-D tree, we want you to create a more general data structure that will work with any positive, non-zero number of dimensions. Therefore, you will be creating a templated class whose template parameter is an integer specifying the number of dimensions. We have provided the skeleton of this class, but it is your assignment to implement the member functions and any helper functions you need.

A $k$-d tree is constructed from Points in $k$-dimensional space. To support this, we have provided a templated Point class, which takes the same integer template parameter as the $k$-d tree. In this part of the assignment, we ask you to implement all of the following member functions.

### Implementing smallerDimVal

Please see the Doxygen for smallerDimVal.

This function should take in two templatized Points and a dimension and return a boolean value representing whether or not the first Point has a smaller value than the second in the dimension specified. That is, if the dimension passed in is $k$, then this should be true if the coordinate of the first point at $k$ is less than the coordinate of the second point at $k$. If there is a tie, break it using Point's operator<. For example:

```cpp
Point<3> a(1, 2, 3);
Point<3> b(3, 2, 1);
cout << smallerDimVal(a, b, 0) << endl; // should print true,
                                        // since 1 < 3
cout << smallerDimVal(a, b, 2) << endl; // should print false,
                                        // since 3 > 1
cout << smallerDimVal(a, b, 1) << endl; // should print true,
                                        // since a < b according to operator<
```

### Implementing shouldReplace

Please see the Doxygen for shouldReplace.
This function should take three templated Points: target, currentBest, and potential. It should return true if potential is closer (i.e., has a smaller distance) to target than currentBest (with a tie being broken by the operator< in the Point class: potential < currentBest).

The Euclidean distance between two $k$-dimensional points, $P(p_1, p_2, \ldots, p_k)$ and $Q(q_1, q_2, \ldots, q_k)$, is the square root of the sum of squares of the differences in each dimension:

$\sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2 + \cdots + (p_k - q_k)^2} = \sqrt{\sum_{i = 1}^k (p_i - q_i)^2}$

Note that minimizing the distance is the same as minimizing the squared distance, so you can avoid invoking the square root and just compare squared distances throughout your code:

```cpp
Point<3> target(1, 3, 5);
Point<3> currentBest1(1, 3, 2);
Point<3> possibleBest1(2, 4, 4);
Point<3> currentBest2(1, 3, 6);
Point<3> possibleBest2(2, 4, 4);
Point<3> currentBest3(0, 2, 4);
Point<3> possibleBest3(2, 4, 6);
cout << shouldReplace(target, currentBest1, possibleBest1) << endl; // should print true
cout << shouldReplace(target, currentBest2, possibleBest2) << endl; // should print false
cout << shouldReplace(target, currentBest3, possibleBest3) << endl; // based on operator<,
                                                                    // this should be false!
```

### Implementing the KDTree Constructor

Please see the Doxygen for the KDTree constructor.

This takes a single parameter, a reference to a constant std::vector of Point<Dim>s. The constructor should build the tree using recursive helper function(s). Just like there is a way to represent a balanced binary search tree using a specially sorted vector of numbers (how?), we can specially sort a vector of points in such a way that it represents a $k$-d tree. More specifically, in the KDTree constructor, we are interested in first copying the input list of points into a points vector, sorting this vector so it represents a $k$-d tree, and building the actual $k$-d tree as we sort.
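Since shouldReplace and the nearest neighbor search both compare distances, a squared-distance helper is a natural building block. Here is a minimal sketch; it assumes the point type exposes coordinate access via operator[], which may differ from the provided Point class's actual accessor:

```cpp
// Squared Euclidean distance between two Dim-dimensional points.
// PointT is assumed to expose coordinate access via operator[];
// check the provided Point<Dim> class for its actual interface.
template <int Dim, typename PointT>
double squaredDistance(const PointT& a, const PointT& b) {
    double total = 0;
    for (int i = 0; i < Dim; ++i) {
        double diff = a[i] - b[i];
        total += diff * diff;  // accumulate without taking a square root
    }
    return total;
}
```

shouldReplace can then return true when squaredDistance(target, potential) is smaller than squaredDistance(target, currentBest), falling back to potential < currentBest on an exact tie.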
Definition: A rooted binary tree $T$ is a $k$-d tree if:

• $T$ is empty, OR
• $T$ consists of
  • a $k$-dimensional point $r$ that represents the root of the tree,
  • a splitting dimension $d$, and
  • two $k$-d tree subtrees $T_L$ and $T_R$ of splitting dimension $(d+1) \bmod k$ such that
    • if $v$ is an element in $T_L$, then $v_d \le r_d$
    • if $v$ is an element in $T_R$, then $v_d \ge r_d$

where $v_d$ and $r_d$ are the $d$th components of the points $v$ and $r$.

This property of $k$-d trees is called its recursive property because, similar to binary search trees, $k$-d trees are recursive data structures. Furthermore, in our implementation, $r$ is the median of all the points (defined below) in $T$ in dimension $d$.

We define the median of the points across a splitting dimension $d$ to be the $\left\lceil\frac{n}{2}\right\rceil$-th smallest element in a list sorted by $d$; in other words, it is the middle-most element of an (odd-length) sorted list containing $n$ elements. For the general case of the median of a vector between zero-based indices $a$ and $b$, the median is the element located at index $\left\lfloor\frac{a+b}{2}\right\rfloor$. Equivalently, the median index of $n$ nodes is $\left\lfloor\frac{n-1}{2}\right\rfloor$. That is, the middle index is selected if there are an odd number of items, and the item just before the middle if there are an even number of items.

If there are ties (two points have equal value along a dimension), they must be decided using the Point class's operator<. Although this is arbitrary and doesn't affect the functionality of the $k$-d tree, it is required to be able to grade your code.

The $k$-d tree construction algorithm is defined recursively as follows for a vector of points between indices $a$ and $b$ at splitting dimension $d$:

1. Find the median of the points with respect to dimension $d$.
2. Place the median point $r$ at index $m = \left\lfloor\frac{a+b}{2}\right\rfloor$ such that
   • if point $v$ is between indices $a$ and $m-1$, then $v_d \leq r_d$
   • if point $v$ is between indices $m+1$ and $b$, then $v_d \geq r_d$
3. Create a subroot from the median and then recurse on the indices $a$ through $m-1$ for its left subtree, and $m+1$ through $b$ for its right subtree, using splitting dimension $(d+1) \bmod k$.

To satisfy steps 1 and 2 of the $k$-d tree construction algorithm, we recommend finding the median of the vector of points using quickselect. The quickselect algorithm allows you to find the median of the vector while achieving the constraints mentioned in step 2. We recommend that you first understand the algorithm and then write your own code from scratch to implement it. This will make debugging far, far easier.

Forbidden Functions: Note that you are not allowed to use any standard library functions to sort the data or find the median of the vector. This includes functions in <algorithm> like std::sort and std::nth_element. For a complete list, see the functions in mp5_provided/no_sort.h.

Example: Here's an example of how the algorithm works on the array below.

| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| (3, 2) | (5, 8) | (6, 1) | (4, 4) | (9, 0) | (1, 1) | (2, 2) | (8, 7) |

With respect to splitting dimension $0$, we would now find the median of these points and place it at index $\left\lfloor\frac{0+7}{2}\right\rfloor = 3$. (This is steps 1 and 2 of the algorithm.) We could achieve this using quickselect. This yields an array of the following form, where the positions other than index 3 hold the remaining points in some order satisfying the step 2 constraint:

| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| · | · | · | (4, 4) | · | · | · | · |

What's important to note is that the two sublists contained in indices 0–2 and 4–7, $L_1$ and $L_2$ respectively, achieve the constraint mentioned in step 2 of the $k$-d tree construction algorithm. (See above.)
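For reference, the general shape of quickselect is sketched below on a plain vector of ints; the MP version would instead compare Points with smallerDimVal in the current splitting dimension (and, of course, must not use the forbidden helpers from <algorithm>):

```cpp
#include <utility>
#include <vector>

// Partition v[lo..hi] around the value at pivotIndex; returns the pivot's
// final index. Elements smaller than the pivot end up to its left.
int partition(std::vector<int>& v, int lo, int hi, int pivotIndex) {
    int pivot = v[pivotIndex];
    std::swap(v[pivotIndex], v[hi]);   // move the pivot out of the way
    int store = lo;
    for (int i = lo; i < hi; ++i) {
        if (v[i] < pivot) {
            std::swap(v[i], v[store]);
            ++store;
        }
    }
    std::swap(v[store], v[hi]);        // put the pivot in its final place
    return store;
}

// Rearrange v[lo..hi] so that the element that belongs at index k in
// sorted order ends up at index k, with no larger elements before it
// and no smaller elements after it.
void quickselect(std::vector<int>& v, int lo, int hi, int k) {
    while (lo < hi) {
        int p = partition(v, lo, hi, lo + (hi - lo) / 2);
        if (p == k) return;
        if (k < p) hi = p - 1;
        else       lo = p + 1;
    }
}
```

After quickselect(v, a, b, m) returns, the element at index m is the median of v[a..b], everything before it is no larger, and everything after it is no smaller, which is exactly the arrangement step 2 requires.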
Hence, the full list may appear like this:

| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| (1, 1) | (3, 2) | (2, 2) | (4, 4) | (6, 1) | (5, 8) | (9, 0) | (8, 7) |

When we recursively call the algorithm on $L_1$ and $L_2$ using splitting dimension $1$ (step 3), we achieve the following ordering. (Follow along with this example using pen and paper.)

| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| (1, 1) | (2, 2) | (3, 2) | (4, 4) | (9, 0) | (6, 1) | (5, 8) | (8, 7) |

This, likewise, represents the following $k$-d tree. The orange colors represent nodes that were split across the 0th dimension, while the blue colors represent nodes that were split across the 1st dimension. (Why is $(2, 2) < (3, 2)$ with respect to the 1st dimension?)

### Implementing findNearestNeighbor

Please see the Doxygen for findNearestNeighbor.

This function takes a reference to a template parameter Point and returns the Point closest to it in the tree. We are defining closest here to be the minimum Euclidean distance between elements. Again, if there are ties (this time in distance), they must be decided using the Point class's operator<.

The findNearestNeighbor search is done in two steps: a search to find the smallest hyperrectangle that contains the target element, and then a back traversal to see if any other hyperrectangle could contain a closer point, which may be a point with smaller distance or a point with equal distance but a "smaller" point (as defined by operator< in the Point class).

In the first step, you must recursively traverse down the tree, at each level choosing the subtree that represents the region containing the search element. (Remember that the criterion for whether you recurse left or right depends on the splitting dimension of the current level.) When you reach the lowest bounding hyperrectangle, the corresponding node is effectively the "current best" neighbor. Note that this search is similar to a binary search algorithm, except with the possibility of a tie across a level's splitting dimension.
At the end of the first step of the search, we start traversing back up the $k$-d tree to the parent node. We now want to find better matches that exist outside of the containing hyperrectangle. The current best distance defines a radius which contains the nearest neighbor.

During the back-traversal (i.e., stepping out of the recursive calls), you must first check whether the distance to the parent node is less than the current radius. If so, that distance now defines the radius, and we replace the "current best" match. Next, it is necessary to check whether the current splitting plane's distance from the search node is within the current radius. If so, the opposite subtree could contain a closer node, and it must also be searched recursively.

During the back-traversal, it is important to only check the subtrees that are within the current radius, or else the efficiency of the $k$-d tree is lost. If the distance from the search node to the splitting plane is greater than the current radius, then there cannot possibly be a better nearest neighbor in the subtree, so the subtree can be skipped entirely.

Here is a reference we found quite useful in writing our $k$-d tree: Andrew Moore's Kd-tree Tutorial. You can assume that findNearestNeighbor will only be called on a valid $k$-d tree.

Example: Suppose we have the same $k$-d tree as in Figure 1 and that the target point (in red) is $(6,3)$, as shown in the figure below. We wish to find the point in the $k$-d tree that is closest to the target; i.e., to determine which of the black points is closest to the red point.

To start the search, we begin a depth-first search to find the leaf node within the same splitting plane as the target node. At the root of the tree, the node is defined by the point $(7,2)$, with the splitting plane based on the first coordinate. Since $6 < 7$ (using the target coordinate's first dimension), we search the left subtree (the grey region in the figure below).
The child node is defined by $(5,4)$, and the splitting plane is based on the second coordinate. Again, the target node $(6,3)$ is in the left subtree, so we split left. At the next step, we hit a leaf node, $(2,3)$. At this point, $(2,3)$ becomes our current best node, and the distance from the target node to $(2,3)$ defines a "current best" radius, as indicated by the circle below. That is, any point outside of this radius cannot be the closest point to the target, since $(2,3)$ will always be closer; however, there may be a point within the radius that is closer.

We now start the back-traversal to check for other nodes within this radius. Back at the parent node, $(5,4)$, we see that it is closer to our target point than the current best of $(2,3)$. So, $(5,4)$ is stored as the current best, and we update the radius.

The distance from the target point to the splitting plane for the node $(5,4)$ is within the current radius, so we must search the other subtree, indicated by the grey region below. This can be visualized as the hypersphere (in 2-D, a circle) intersecting the region opposite the splitting plane, as shown by the red region in the figure below. We descend into the subtree and find a leaf node, $(4,7)$, which is farther away than our current best.

We return all the way to the root node, defined by $(7,2)$. The distance between this node and the target is exactly equal to the current radius. In this case, we check Point<2>::operator<, which says our current best of $(5, 4)$ is less than $(7, 2)$, so we don't replace the current best node.

Once again, the distance between the splitting plane defined by $(7,2)$ and the target point is within the current radius (i.e., the red region exists), so we must search the other subtree. The target point is less than the splitting plane defined by the node at $(9,6)$, so we first descend into the left subtree.
We encounter a leaf node, $(8,1)$, but the distance is greater than the current best, so we don't do anything. We finally step back up the tree and find there are no more regions that intersect the hypersphere (i.e., no other rectangles intersect the circle). Therefore, $(5,4)$ is the nearest neighbor, and our search is complete.

### Function printTree

We've provided this function for you! It allows easy printing of the tree, with code like this:

```cpp
KDTree<3> tree(v);
tree.printTree(cout);
```

(A sample of printTree's output, an ASCII rendering of the tree, appeared here; its alignment did not survive extraction.)

In the printed tree, the bold dimensions are the pivot dimensions at each node, and green indicates that the tree matched the solution tree. The { } curly braces indicate that a node is a land mine: a point that should not be traversed in the given nearest neighbor search, and that will "explode" if you look at it.

As these functions are implemented in kdtree_extras.cpp, which will not be used for grading, please do not modify them. All of your $k$-d tree code should be in kdtree.h and kdtree.cpp.

### Implementation Notes

• This is a template class with one integer template parameter (i.e., int Dim). You might be curious why we don't just let the client specify the dimension of the tree via the constructor. Since we specify the dimension through a template, the compiler will ensure that the dimension of the Point class matches the dimension of our $k$-d tree.
• You should follow the rules of const correctness and design the class to encapsulate the implementation. That is, any helper functions or instance variables should be made private.

### Testing

We have provided a small number of tests for the KDTree class. The test cases are defined in testkdtree.cpp. Be aware that these are deliberately insufficient.
You should add additional test cases to more thoroughly test your code. You can compile the unit tests with the following command:

```
make test
```

This will create an executable named test, which you can execute with the following command to run the tests for Part 1:

```
./test [part=1]
```

Warning: KDTree is a templated class. Recall that template functions are not compiled if they are never called. Make sure all of your code compiles or we will not be able to grade your work.

### Extra Credit Submission

For extra credit, you can submit the code you have implemented and tested for part one of MP 5. You must submit your work before the extra credit deadline as listed at the top of this page. See Handing in Your Code for instructions.

## MP 5.2

For the second part of MP 5, you will implement the mapTiles() function, which maps TileImages to a MosaicCanvas based on which TileImage has an average color that is closest to the average color of that region in the original image.

HSLAPixel to Point<3> conversion: your points should be in H-S-L order, and hue must be normalized to [0, 1]. That is, hue/360 should be the $x$ (0th dimension), saturation should be the $y$ (1st dimension), and luminance should be the $z$ (2nd dimension).

#### Classes Involved in MP 5.2

In implementing mapTiles, you will need to interact with a number of classes, including the KDTree class which you've built. The source code for all of these classes is provided for you, meaning you can look at their implementation if you have questions about return types, parameters, or the way the functions work.

### The mapTiles() function

Please see the Doxygen for mapTiles.

mapTiles() is a function that takes a SourceImage and a vector of TileImages and returns a MosaicCanvas pointer. It maps the rectangular regions of the SourceImage to TileImages.

• Its parameters are a SourceImage and a constant reference to a std::vector of TileImage objects, in that order.
• It creates a new dynamically allocated MosaicCanvas, with the same number of rows and columns as the SourceImage, and returns a pointer to this object.
• For every region in the SourceImage, mapTiles() should take the TileImage with average color closest to the average color of that region and place that TileImage into the MosaicCanvas in the same tile position as the SourceImage's region.

The locations of the tiles in the mosaic are defined by a MosaicCanvas. This function should create a new MosaicCanvas which is appropriately sized based on the rows and columns of tiles in the SourceImage. Then each tile in the MosaicCanvas should be set to an appropriate TileImage, using a KDTree to find the nearest neighbor for each region. Note that most of the real work here is done by building a $k$-d tree and using its nearest neighbor search function.

Return a pointer to the created MosaicCanvas. You can assume that the caller of the function will free it after it has been used. You may return NULL in the case of any errors, but we will not test your function on bad input (e.g., a SourceImage with 0 rows/columns, an empty vector of TileImages, etc.).

### Implementation Notes

• There are two classes representing a color in this portion of the MP: HSLAPixel and Point<3>. You will need to convert between these different representations.
• Note that your points should be in H-S-L order and hue must be normalized to [0, 1]. That is, hue/360 should be the $x$ (0th dimension), saturation should be the $y$ (1st dimension), and luminance should be the $z$ (2nd dimension).
• Use your KDTree class to find the nearest neighbor, which is the tile image that minimizes average color distance.
• You can easily convert from a TileImage to its average color using TileImage::getAverageColor(). You will also need to convert from an average color back to the TileImage that would generate that color. You may want to use the std::map class to do this.
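The conversion and reverse lookup described above might be sketched as follows. This is only an illustrative sketch: HSLAPixel and Point<3> here are minimal stand-ins for the provided classes, and TileImage is reduced to a placeholder.

```cpp
#include <array>
#include <map>
#include <utility>
#include <vector>

// Minimal stand-ins for illustration; the real HSLAPixel, Point<3>, and
// TileImage come from the provided code.
struct HSLAPixel { double h, s, l, a; };  // h in [0, 360); s, l, a in [0, 1]
using Point3 = std::array<double, 3>;     // std::array compares lexicographically
struct TileImage { int id; };             // placeholder

// Map an average color to the 3-D point used as a k-d tree key:
// hue normalized to [0, 1] in dimension 0, then saturation, then luminance.
Point3 colorToPoint(const HSLAPixel& p) {
    return { p.h / 360.0, p.s, p.l };
}

// A map from average-color points back to the tiles that produced them
// lets us recover a TileImage after a nearest neighbor query.
std::map<Point3, TileImage>
buildColorIndex(const std::vector<std::pair<HSLAPixel, TileImage>>& tiles) {
    std::map<Point3, TileImage> index;
    for (const auto& t : tiles)
        index[colorToPoint(t.first)] = t.second;
    return index;
}
```

With this in place, mapTiles can build a KDTree<3> from the map's keys, query it with each region's average color, and look the result back up in the map.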
### Compiling and Running A PhotoMosaic

After finishing both the KDTree class and the mapTiles function, you can compile the executable by linking your code with the provided code using the following command:

```
make
```

The executable created is called mp5. You can run it as follows:

```
./mp5 background_image.png [tile_directory/] [number of tiles] [pixels per tile] [output_image.png]
```

Parameters in [square brackets] are optional. Below are the defaults:

| Parameter | Default | Notes |
|---|---|---|
| background_image.png | (required; no default) | |
| tile_directory/ | EWS-specific | See below (default only valid on EWS) |
| number of tiles | 100 | The number of tiles to be placed along the shorter dimension of the source image |
| pixels per tile | 50 | The width/height of a TileImage in the result mosaic. Don't make this larger than 75 for the provided set of TileImages |
| output_image.png | mosaic.png | .png, .jpg, .gif, and .tiff files are also supported |

In addition to the given code from subversion, we have provided a directory of small thumbnail images which can be used as the tile_directory of the photomosaic program. If you are working on EWS, you can use /class/cs225/mp5_pngs/ as the tile_directory. If you are working on your own machine (including the VM), you can download them from here: mp5_pngs.tar.bz2. If you prefer, you can download them directly to your mp5 directory by running

```
wget https://courses.engr.illinois.edu/cs225/sp2018/mps/5/mp5_pngs.tar.bz2
tar xjvf mp5_pngs.tar.bz2
```

You may also use your own directory of images to create your own PhotoMosaics. However, for the supplied tests, you should use our provided images.
### Testing

We have provided a simple test case for mapTiles(), which can be run with:

```
make test
./test [part=2]
```

We have also provided you with a sample input SourceImage and output MosaicCanvas, which can be tested as follows:

```
make
./mp5 tests/source.png /class/cs225/mp5_pngs/ 75 25 mosaic.png
wget https://courses.engr.illinois.edu/cs225/sp2018/mps/5/mosaic_soln2.png
diff mosaic.png mosaic_soln2.png
```

Make sure to change the second argument (/class/cs225/mp5_pngs/) to the directory you downloaded above. Be aware that these tests are deliberately insufficient. You should add additional test cases to more thoroughly test your code.

## Part 3 (Creative): Your Mosaic!

Solo Portion: This creative part of the MP must be completed individually and must be significantly different from your partner's creative work.

You have spent two weeks making a mosaic generator: you should show off your work! You'll have to gather some pictures, convert them to PNGs, and generate a mosaic using your mp5 executable. After generating your mosaic, make sure to commit it to git as mymosaic.png.

#### Making a great mosaic: Gathering Pictures

A good mosaic requires a lot of tile images. A baseline for a decent mosaic is ~100 tile images if the images are all different (e.g., not all daylight pictures or selfies), and ~1000 for a great mosaic. You probably already have many images:

• If you have an iPhone, Apple usually [backs up your photos in iCloud](https://www.icloud.com/). You can download them as a ZIP file.

#### Making a great mosaic: Converting to PNG

The program you built requires PNG files as input. Often photos are JPEG files and must be converted. Once you have converted all of the images into PNG, place all of the images into a single directory inside of your mp5 folder. This folder will likely be very large, so you should NOT commit it to git!
#### Making a great mosaic: Sharing and explaining what you've made

A mosaic looks like a fun Instagram "block" transformation at first glance, but it becomes even more amazing when someone understands what they're seeing: an image made entirely from other images. If you share your image, it's best if you describe what you've done! If you want to share it with your peers, post it with #cs225 so we can find it. :)

This MP is unique in that the story behind how you made it makes the mosaic even more awesome. Let people know you built a $k$-d tree to find the best image to place at every point in the image. Wade shared his mosaic in lecture on Wednesday. Use #cs225, and we'll make sure to like or share the post.

Commit your changes in the usual way:

```
git add -u
git commit -m "..."
git push origin master
```

Make sure the following files are committed:

• maptiles.cpp (MP 5.2 only)
• maptiles.h (MP 5.2 only)
• kdtree.cpp
• kdtree.h
• PARTNERS.txt
• mymosaic.png
https://crypto.stackexchange.com/questions/8317/combining-lfsrs-for-stream-ciphers-why-do-we-need-high-non-linearity/8320
# Combining LFSRs for Stream Ciphers: Why do we need high non-linearity?

Linear Feedback Shift Registers (LFSRs) can be excellent (efficient, fast, and with good statistical properties) pseudo-random generators. Many stream ciphers are based on LFSRs, and one of the possible designs of such stream ciphers is combining the outputs of $m$ LFSRs as the input of a boolean function $f:GF(2)^m\rightarrow GF(2)$. This last function has to be carefully selected.

My question is a rather elementary one. I understand that using one LFSR to produce the keystream is not appropriate, as one can recreate the whole keystream by knowing a tiny fraction of it: if the tap positions of a length-$n$ LFSR are known, one needs $n$ bits to determine the entire keystream sequence, and if they are not known, one needs $2n$ bits (by using the Berlekamp-Massey algorithm to find out the tap positions).

However, why do we need a non-linear combination of LFSRs (among all sorts of other requirements)? What would be the problem with getting a number of LFSRs with appropriate lengths and tap positions and XORing together their outputs to produce the keystream?

• Maybe these lecture notes will help... – rphv May 10 '13 at 21:39
• The edit by rphv has many ambiguities in it. A linear feedback shift register (LFSR) of length $n$ does not necessarily generate a sequence of period $2^n-1$. The period could be much smaller and could depend on the initial loading and the feedback polynomial; that is, for some feedback polynomials, one can get sequences of different periods by changing the initial loading. It is true that the maximum period is $2^n-1$ and occurs when the feedback polynomial is a primitive polynomial in the sense that coding theorists use the term (and the initial loading is all-zeroes). – Dilip Sarwate May 11 '13 at 18:49
• @rphv I think there is another issue about the edit: one needs $2n$ bits if the tap positions are not known. If they are known, one needs $n$ bits.
– geo909 May 11 '13 at 19:20
• In my previous comment, the last clause, which appears in parentheses, should read "and the initial loading is not all-zeroes." The word not was inadvertently left out, and it is now too late to edit the comment. – Dilip Sarwate May 11 '13 at 19:38

If there were no non-linearity, then every bit of keystream output would be a (known) linear function of the unknown key bits. Consequently, in a known-plaintext attack scenario, each bit of known keystream output would allow us to write a linear equation in the unknown key bits. If we have a 128-bit key, there are 128 boolean unknowns (variables), so once we have 128 bits of known keystream, we have 128 linear equations in 128 unknowns. At that point it becomes easy to solve for the original key bits using standard methods for solving a system of linear equations (e.g., Gaussian elimination). Thus, an attacker could recover the key from 128 bits of known output from the stream cipher, which is a total break of the stream cipher.

The only way to prevent this kind of attack is to make sure that the cipher contains non-linear elements. To prevent other related but fancier attacks (e.g., linear cryptanalysis), one also needs sufficient non-linearity in the stream cipher.

Clarification: To keep it simple, my answer above assumes that the feedback polynomial for the LFSRs is known. The attack does generalize to the case where the feedback polynomials are not known (you need twice as much known keystream output); in that case, the attack gets a bit more complicated, but the basic idea still applies. I tried to keep it simple to help you understand the intuition without getting bogged down in mathematics, but if you want to see more details about the case where the feedback polynomials are not known, Dilip Sarwate has an excellent answer that explains that case more thoroughly.

• You're right!
By the recurrent nature of the LFSRs, every output bit of every LFSR can be expressed as a linear combination of the initial state! Then we end up with a system of linear equations and can perform Gaussian elimination. – geo909 May 11 '13 at 19:33
• This answer is incorrect in the details. If one knows that the sequence is generated by an LFSR of length $128$ bits, then there are $128$ coefficients of the feedback polynomial (a.k.a. tap locations) that need to be determined, and $128$ bits of the sequence (exactly the initial loading of the LFSR) are not enough to determine these feedback coefficients; you need $256$ bits. Take a simpler case: for $n=3$, we have (continued in next comment) – Dilip Sarwate May 12 '13 at 1:59
• (continued from previous comment) an initial load $(a_2,a_1,a_0)$ and feedback taps $f_1,f_2,f_3$ where \begin{align}a_3&=a_2f_1+a_1f_2+a_0f_3\\a_4&=a_3f_1+a_2f_2+a_1f_3\\a_5&=a_4f_1+a_3f_2+a_2f_3\end{align} so that we need $6$ bits of the sequence, not $3$, to get $3$ linear equations to solve for $f_1,f_2,f_3$. This is the same amount of information needed by the Berlekamp-Massey algorithm, but the Berlekamp-Massey algorithm will also find the shortest LFSR that generates any arbitrary $6$-bit sequence, not just sequences known to be generated by an $n$-bit LFSR. – Dilip Sarwate May 12 '13 at 2:10
• @DilipSarwate, absolutely, great point! My answer keeps it simple and assumes the feedback polynomial / tap locations are known. Yes, if the feedback polynomial is not known, then twice as many bits of known keystream are needed, and more sophisticated methods are needed. Thank you for pointing this out! – D.W. May 12 '13 at 4:26

The Berlekamp-Massey algorithm is an iterative method for finding the shortest LFSR that can generate a given sequence of bits. The given sequence might or might not be generated by an LFSR: the Berlekamp-Massey algorithm does not care.
It just finds the shortest LFSR that can generate the given sequence, and if the sequence has been generated by an LFSR of length $n$, then the Berlekamp-Massey algorithm is guaranteed to find this LFSR after examining no more than $2n$ bits of the sequence. A simplistic description of what happens is as follows. After the algorithm has found the shortest LFSR that generates the first $k$ bits of the sequence, it examines the $(k+1)$-th bit of the sequence. If this $(k+1)$-th bit of the sequence matches the $(k+1)$-th bit of the output of the current LFSR, the LFSR is accepted as the one that generates the first $k+1$ bits. If not, the LFSR is updated so that the new, typically longer, LFSR generates the first $k+1$ bits. As stated earlier, if the sequence in question was in fact generated by an LFSR of length $n$, then the Berlekamp-Massey algorithm is guaranteed to find this LFSR by the time it has examined $2n$ bits of the sequence. How does the algorithm know that it is done? Well, it doesn't, but after the correct LFSR has been found, the $(2n+1)$-th, the $(2n+2)$-th, the $(2n+3)$-th, $\ldots$ bits of the given sequence match the corresponding outputs of the LFSR and so the Berlekamp-Massey algorithm does not update the $n$-bit LFSR it has found. What does all this have to do with the question asked? Well, the (bit-by-bit XOR) sum of the outputs of the various LFSRs is a sequence that is generated by a longer LFSR (typically, the length of the longer LFSR is the sum of the lengths of the LFSRs whose outputs were summed). So, the cryptographic security is not significantly larger. What is needed is some way of combining the constituent LFSR outputs so that the resulting sequence has linear complexity much larger than the sum of the LFSR lengths. The linear complexity of a sequence is defined as the length of the shortest LFSR that can generate the sequence.
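A minimal Python sketch of the iterative procedure just described (a toy implementation for illustration; the function name and conventions are my own, not from the original answer):

```python
def berlekamp_massey(bits):
    """Return (L, C) where L is the linear complexity of `bits` and C is the
    connection polynomial [c0=1, c1, ..., cL] of the shortest LFSR that
    generates the sequence, i.e. s[n] = XOR of c[j] & s[n-j] for j = 1..L."""
    n = len(bits)
    c = [1] + [0] * n        # current connection polynomial C(x)
    b = [1] + [0] * n        # copy of C(x) before the last length change
    L, m = 0, -1             # current LFSR length, index of last length change
    for i in range(n):
        # discrepancy between the current LFSR's prediction and the actual bit
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:                # prediction failed: update the LFSR
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:   # the LFSR has to grow
                L, m, b = i + 1 - L, i, t
    return L, c[:L + 1]

# Ten bits generated by the recurrence s[n] = s[n-1] XOR s[n-3]
# from initial state 1, 0, 0:
seq = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
```

Running `berlekamp_massey(seq)` recovers linear complexity 3 and connection polynomial $1 + x + x^3$, and (as the answer states) the first $2n = 6$ bits already suffice to pin down the same LFSR.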
What we want is a sequence that has high linear complexity but which can be generated easily as a nonlinear function of the outputs of short LFSRs. The legitimate users of the system can encipher and decipher easily, but a cryptanalyst attempting to break the system via a known-plaintext attack has to either figure out the nonlinear function (and the constituent LFSRs), which is not easy to do, or attempt a Berlekamp-Massey algorithm attack, which may fail because not enough bits of the sequence can be determined via a known-plaintext attack to find the shortest LFSR that generates the sequence. • So, that's another good reason not to use combinations of LFSRs. After reading your answer I looked a bit more carefully and confirmed that if the combiner function is $f$ and the lengths of the $m$ LFSRs are $L_1,\cdots,L_m$, then the linear complexity of the output sequence is $f(L_1,\cdots,L_m)$; in our case that's the sum of the $L_i$'s as you said. So, for lengths that add up to 128 (so we have a 128-bit key), one needs only $2\times 128=256$ bits of the keystream sequence and the Berlekamp-Massey algorithm to break the system. – geo909 May 11 '13 at 19:35 • It seems that ideally one would like linear complexity almost equal to the period of the keystream (and of course, a large period) – geo909 May 11 '13 at 19:41 • @geo909 Something is awry in your comment. The combiner function $f$ is a Boolean function that maps $m$ bits (the LFSR outputs) to $1$ bit, the desired sequence with high linear complexity. So what does $f(L_1,L_2,\ldots, L_m)$ mean since the arguments and output have changed from bits to integers? – Dilip Sarwate May 11 '13 at 19:45 • Good point! "[...] the linear complexity is [...] $L=f(L_1,\cdots,L_m)$ [...] with the latter Boolean function transformed by evaluating the addition and multiplication operations in the function over the integers rather than over $GF(2)$" (Dawson, Simpson, Analysis and Design issues for Synchronous Stream Ciphers).
Also this applies when the $L_i$'s are pairwise coprime. – geo909 May 11 '13 at 19:51 • @geo909 OK, I am still a little confused. If $f(a,b)=a\vee b = a \oplus b \oplus ab$, does $f(L_1,L_2)$ equal $L_1+L_2+L_1L_2$ for relatively prime $L_1, L_2$? Or does it equal something else? – Dilip Sarwate May 11 '13 at 22:40
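To make the linearization attack from the first answer concrete, here is a small sketch that recovers the initial state of a bare LFSR (one whose keystream is simply its state bit stream, with known tap positions, per that answer's simplifying assumption) by Gaussian elimination over GF(2). All names and the toy parameters are mine:

```python
def recover_initial_state(taps, n, known_bits, start):
    """Known-plaintext attack on a bare n-bit LFSR with known taps.
    The recurrence is s[k] = XOR of s[k-t] for t in taps, and known_bits[i]
    is the observed keystream bit s[start+i].  Each observed bit is a
    GF(2)-linear form in the unknown initial state s[0..n-1], so we build
    the linear system and solve it by Gaussian elimination over GF(2)."""
    total = start + len(known_bits)
    # Symbolic run of the LFSR: masks[k] has bit j set iff s[k] depends on s[j].
    masks = [1 << j for j in range(n)]
    for k in range(n, total):
        m = 0
        for t in taps:
            m ^= masks[k - t]
        masks.append(m)
    # Augmented rows: unknown coefficients in bits 1..n, right-hand side in bit 0.
    rows = [(masks[start + i] << 1) | known_bits[i]
            for i in range(len(known_bits))]
    pivots = {}
    for r in rows:
        for col in range(n, 0, -1):      # eliminate from the leading bit down
            if (r >> col) & 1:
                if col in pivots:
                    r ^= pivots[col]
                else:
                    pivots[col] = r
                    break
    # Back-substitution, lowest pivot column first.
    state = [0] * n
    for col in sorted(pivots):
        r = pivots[col]
        bit = r & 1
        for c in range(1, col):
            if (r >> c) & 1:
                bit ^= state[c - 1]
        state[col - 1] = bit
    return state
```

For the toy recurrence s[k] = s[k-1] XOR s[k-3] with initial state (1, 0, 0), observing only bits s[3]..s[9] = 1, 1, 1, 0, 1, 0, 0 recovers the state. With an unknown feedback polynomial one would need roughly twice as many known bits and the Berlekamp-Massey approach instead, as the comments above explain.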
http://eprints.iisc.ac.in/3652/
# Absorption and emission properties of $Nd^{3+}$ in lithium cesium mixed alkali borate glasses

Ratnakaram, YC and Kumar, Vijaya A and Naidu, Tirupathi D and Chakradhar, RPS (2005) Absorption and emission properties of $Nd^{3+}$ in lithium cesium mixed alkali borate glasses. In: Solid State Communications, 136 (1). pp. 45-50.

## Abstract

Lithium cesium mixed alkali borate glasses of the composition $67B_{2}O_{3}.xLi_{2}O.(32-x)Cs_{2}O$ (where x = 8, 12, 16, 20 and 24) containing 1 mol% $Nd_{2}O_{3}$ were prepared by melt quenching. The absorption spectra of $Nd^{3+}$ were studied; from the experimental oscillator strengths, the Judd–Ofelt intensity parameters were obtained. The intensity parameters are used to determine the radiative decay rates (emission probabilities of transitions) $(A_{T})$, branching ratios $(\beta)$ and integrated absorption cross-sections $(\Sigma)$ of the $Nd^{3+}$ transitions from the excited-state J manifolds to the lower-lying J manifolds. Radiative lifetimes $(\tau_{R})$ are estimated for certain excited states of $Nd^{3+}$ in these mixed alkali borate glasses. Luminescence spectra were measured and the emission cross-sections $(\sigma_{p})$ were evaluated for the three emission transitions. The variation of luminescence intensity with x was recorded for the three transitions at different excitation powers to see the effect of mixed alkalis in these borate glasses.

Item Type: Journal Article. Copyright for this article belongs to Elsevier. Keywords: absorption; emission properties; $Nd^{3+}$; lithium cesium mixed alkali borate glasses. Division of Physical & Mathematical Sciences > Physics.
https://zbmath.org/?q=an:1153.14011
The cohomology of real De Concini-Procesi models of Coxeter type. (English) Zbl 1153.14011 The aim of the paper is to study the rational cohomology groups of the real De Concini-Procesi model corresponding to a finite Coxeter group. This is a generalization of the type A case of the moduli space of stable genus zero curves with marked points. The formulae for the Betti numbers in types B and D are given, and exact values of the Betti numbers in exceptional types are computed. The authors also find a generating function for the characters of the representations of a Coxeter group of type B on the rational cohomology groups of the corresponding De Concini-Procesi model, and deduce the multiplicities of one-dimensional characters in the representations, and a formula for the Euler character. A moduli space interpretation of this type B variety is obtained: it is embedded as a closed subvariety in $$\overline{\mathcal{M}_{0,2n+2}}$$. MSC: 14D20 Algebraic moduli problems, moduli of vector bundles 14N20 Configurations and arrangements of linear subspaces 20F55 Reflection and Coxeter groups (group-theoretic aspects)
http://wiki.polskibreivik.pl/page_Joule.html
Usage

This SI unit is named after James Prescott Joule. As with every International System of Units (SI) unit named for a person, the first letter of its symbol is upper case (J). However, when an SI unit is spelled out in English, it should always begin with a lower case letter (joule)—except in a situation where any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units, section 5.2.

Confusion with newton metre

Main article: newton metre

In mechanics, the concept of force (in some direction) has a close analog in the concept of torque (about some angle):

  Linear     Angular
  force      torque
  mass       moment of inertia
  distance   angle

A result of this similarity is that the SI unit for torque is the newton metre, which works out algebraically to have the same dimensions as the joule. But they are not interchangeable. The CGPM has given the unit of energy the name joule, but has not given the unit of torque any special name, hence it is simply the newton metre (N⋅m) – a compound name derived from its constituent parts.[5] The use of newton metres for torque and joules for energy is helpful to avoid misunderstandings and miscommunications.[5] The distinction may be seen also in the fact that energy is a scalar – the dot product of a vector force and a vector displacement. By contrast, torque is a vector – the cross product of a distance vector and a force vector.
Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is (the vector magnitude of) torque, and θ is the angle swept (in radians). Since radians are dimensionless, it follows that torque and energy have the same dimensions.

Practical examples

One joule in everyday life represents approximately:

• The energy required to lift a medium-size tomato (100 g) 1 m vertically from the surface of the Earth.[6]
• The energy released when that same tomato falls back down to the ground.
• The energy required to accelerate a 1 kg mass at 1 m/s² through a distance of 1 m.
• The heat required to raise the temperature of 1 g of water by 0.24 °C.[7]
• The typical energy released as heat by a person at rest every 1/60 s (approximately 17 ms).[8]
• The kinetic energy of a 50 kg human moving very slowly (0.2 m/s or 0.72 km/h).
• The kinetic energy of a 56 g tennis ball moving at 6 m/s (22 km/h).[9]
• The kinetic energy of an object with mass 1 kg moving at √2 ≈ 1.4 m/s.
• The amount of electricity required to light a 1 W LED for 1 s.

Since the joule is also a watt-second and the common unit for electricity sales to homes is the kW⋅h (kilowatt-hour), a kW⋅h is thus 1000 W × 3600 s = 3.6 MJ (megajoules).
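The one-joule kinetic-energy examples above are easy to verify with E = ½mv² (a throwaway sketch; the function name is mine):

```python
def kinetic_energy(mass_kg, speed_m_per_s):
    """Kinetic energy in joules: E = 0.5 * m * v**2 (SI units)."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

# 50 kg person at 0.2 m/s        -> 1 J
# 56 g tennis ball at 6 m/s      -> about 1 J (1.008 J)
# 1 kg object at sqrt(2) m/s     -> 1 J
```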
Conversions

Main article: Conversion of units of energy

1 joule is equal to:

• 1×10⁷ erg (exactly)
• 6.24150974×10¹⁸ eV
• 0.2390 cal (gram calories)
• 2.390×10⁻⁴ kcal (food calories)
• 9.4782×10⁻⁴ BTU
• 0.7376 ft⋅lb (foot-pound)
• 23.7 ft⋅pdl (foot-poundal)
• 2.7778×10⁻⁷ kW⋅h (kilowatt-hour)
• 2.7778×10⁻⁴ W⋅h (watt-hour)
• 9.8692×10⁻³ l⋅atm (litre-atmosphere)
• 11.1265×10⁻¹⁵ g (by way of mass-energy equivalence)
• 1×10⁻⁴⁴ foe (exactly)

Units defined exactly in terms of the joule include:

• 1 thermochemical calorie = 4.184 J[18]
• 1 International Table calorie = 4.1868 J[19]
• 1 W⋅h = 3600 J (or 3.6 kJ)
• 1 kW⋅h = 3.6×10⁶ J (or 3.6 MJ)
• 1 W⋅s = 1 J
• 1 ton TNT = 4.184 GJ

See also

Conversion of units of energy
Orders of magnitude (energy)
Fluence
International System of Units
Watt second
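The conversion factors listed above can be wrapped in a tiny helper (a sketch, not a units library; the unit keys and factor selection are mine, with values taken from the table):

```python
# Joules per one unit of each listed energy unit.
J_PER = {
    "erg": 1e-7,         # exact
    "cal_th": 4.184,     # thermochemical calorie, exact
    "cal_IT": 4.1868,    # International Table calorie, exact
    "Ws": 1.0,           # watt-second, exact
    "Wh": 3600.0,        # watt-hour, exact
    "kWh": 3.6e6,        # kilowatt-hour, exact
    "BTU": 1055.06,      # approximate
    "ft_lb": 1.355818,   # foot-pound, approximate
    "ton_TNT": 4.184e9,  # exact by definition
}

def to_joules(value, unit):
    """Convert `value` expressed in `unit` to joules."""
    return value * J_PER[unit]

def from_joules(joules, unit):
    """Convert an energy in joules to `unit`."""
    return joules / J_PER[unit]
```

For example, `from_joules(1.0, "BTU")` reproduces the 9.4782×10⁻⁴ BTU entry in the table, and `to_joules(1, "kWh")` the 3.6 MJ figure.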
http://nuclearsafety.gc.ca/eng/resources/research/research-and-support-program/research-report-abstracts/reports-issues-2009-2010.cfm
# Research report summaries 2009–2010

Contractors' reports are only available in the language in which they are submitted to the Canadian Nuclear Safety Commission (CNSC).

## RSP-0248 – International Common Cause Data Exchange (ICDE) Project: Preparation of data to be submitted to the clearing house

This report describes the activities and methods used for Canada's data submission to the International Common Cause Data Exchange (ICDE) Project. The international ICDE project was established by the Organization for Economic Cooperation and Development (OECD) to encourage multilateral cooperation in the collection and analysis of data relating to common cause failure (CCF) events at nuclear power stations. The CNSC established its own series of projects to collect data in campaigns on Canadian nuclear power plants and fulfill its obligations under the international ICDE agreement. This project is the third major CNSC data collection campaign focused on process components for contribution of data to the ICDE. The report describes the data sources used for collecting data from Canadian nuclear power plants, the methods employed to analyze the data and code it into the ICDE database, and some summary statistics on the data.

## RSP-0249 – OECD/NEA Pipe Failure Data Exchange (OPDE)

Structural integrity of piping systems is important for plant safety and operability.
In recognition of this, information on degradation and failure of piping components and systems is collected and evaluated by regulatory agencies, international organizations (e.g., OECD/NEA and IAEA) and industry organizations worldwide to provide systematic feedback, for example, to reactor regulation and to research and development programs associated with non-destructive examination (NDE) technology, in-service inspection (ISI) programs, leak-before-break evaluations, risk-informed ISI, and probabilistic safety assessment (PSA) applications involving passive component reliability. Several OECD member countries have agreed to establish the OECD-NEA Piping Failure Data Exchange Project (OECD-NEA OPDE) to encourage multilateral cooperation in the collection and analysis of data relating to piping failure events in nuclear power plants. The project was formally launched in May 2002 under the auspices of the OECD/NEA. Organizations producing or regulating more than 80 percent of nuclear energy generation worldwide contribute data to the OECD-NEA OPDE data project. Currently (February 2009) eleven countries [1] have signed the OECD OPDE 3rd Term agreement (Canada, Czech Republic, Finland, France, Germany, Korea (Republic of), Japan, Spain, Sweden, Switzerland and United States of America). This report describes the status of the OECD-NEA OPDE database after six years of operation, and gives some insights based on ca. 3600 piping failure events in the database.

[1] Belgium participated in the project during the first and second term but has decided not to participate in the third term (2008–2011) of the project.

## RSP-0250 – Future directions for using the leak-before-break concept in regulatory assessments

This report describes efforts conducted by Engineering Mechanics Corporation of Columbus (Emc2) for the CNSC to explore possible future directions for leak-before-break (LBB) analyses for nuclear power plant piping systems.
This is an objective assessment that considers alternative deterministic, probabilistic and hybrid deterministic-probabilistic approaches, and summarizes responses to a questionnaire sent to knowledgeable people in the field in 17 different countries. To explore these possibilities, we also included a significant amount of background material on LBB so that the CNSC staff and readers of this report can better understand the recommendations made in this report. The background information includes the following:

• the history of leak-before-break prior to application to the nuclear industry and different technical definitions of LBB
• the first applications of LBB and developments in the US, including definitions of US documents like standard review plans, regulatory guides, and key reports
• ongoing efforts in the US relative to LBB, including the transition break size (TBS) efforts and new probabilistic efforts being initiated by the United States Nuclear Regulatory Commission (US NRC) and EPRI for a probabilistic code called xLPR
• international uses of LBB, including a summary of international LBB procedures prior to the year 2000 for eight countries other than the US, and responses from 17 countries other than the US to a questionnaire created and sent out for this program to briefly assess past, current and future LBB procedures

The final section of this report provides an overview of potential options for deterministic, probabilistic or hybrid deterministic-probabilistic LBB approaches. The main application of these approaches was for primary pipe systems in new nuclear power plants. Interestingly, the general opinion from the international LBB questionnaire was that probabilistic analyses are not desired for LBB analyses of new plants. Probabilistic analyses may be of value for piping with active degradation mechanisms, but such analyses are really fitness-for-service analyses with inspections beyond leakage detection to ensure LBB behavior.
One of the main suggestions for optional new LBB procedures was to include additional considerations on protection against new degradation mechanisms that may develop. Mechanisms that allow long circumferential surface flaws to develop are the most threatening to leak-before-break behavior. Of these more threatening mechanisms, stress corrosion cracking (SCC) is the most prevalent degradation mechanism in nuclear power plant piping and, unfortunately, SCC is not directly addressed by any nuclear pipe system design code. SCC can occur due to the combination of material susceptibility, environment (water chemistry and temperature) and high tensile stresses. Historically, the industry has learned how to make better materials and adjust water chemistries to avoid or minimize SCC in service, but not much consideration has been given to reducing weld residual stresses during plant construction. Since the expected life of nuclear plants is no longer considered 40 years, but is now proposed to be 60 years or longer, it is difficult to know if the current SCC measures will be effective over these long time periods. Consequently, one key suggestion from the surveys and review was to include an incentive in the LBB procedure so that plant fabricators will prepare welds in a manner that produces compressive longitudinal stresses (or significantly reduced tensile stresses) on the internal surface (or ID) of girth welds through the use of Fabrication Enhanced SCC Resistance Welds. Some weld sequencing aspects to produce fabrication enhanced SCC resistance welds are discussed, and could be adopted in existing weld procedures without much additional cost impact. If the plant uses fabrication enhanced SCC resistance weld procedures during construction, then the deterministic and probabilistic approaches could be much simpler, making LBB considerations easier to satisfy.
If fabrication enhanced SCC resistance weld procedures are not used, then the LBB application needs to consider all aspects of SCC in the deterministic or probabilistic LBB approach, which can be much more penalizing. A few of the respondents from the different countries were interested in probabilistic analyses, but would still require deterministic analyses. A hybrid deterministic/probabilistic approach may be a more realistic compromise, where more elaborate analyses not possible in a probabilistic code could be conducted for key aspects of the assessment. One such hybrid approach for LBB was presented in this report, where the probabilistic nature of seismic loading was incorporated by conducting analyses at SSE loads (with comparable current safety factors) and then at 10⁻⁶ seismic event loads with reduced safety factors. Rather than assuming an idealized flaw type, the flaw size was determined from detailed crack growth analyses, such as the SCC analyses used in PWSCC cracking evaluations in the US, and was termed a robust LBB approach. Of course, reasonable bounding material properties also need to be used, and some suggestions were given on improved selection of ferritic steels to eliminate detrimental effects of dynamic strain aging, and on accounting for thermal aging in all materials (not just cast stainless steels). This type of hybrid analysis is somewhat comparable to the approach used for seismic considerations in the transition break size in NUREG-1903. In summary, the two main recommendations from this project are: 1. Develop fabrication procedures that can be used to prevent high tensile stresses on the ID surfaces of primary loop piping, which if used would allow LBB without having to consider SCC 2.
Conduct sensitivity studies on the hybrid deterministic-probabilistic robust LBB procedure for flaw shape development from SCC and seismic loading effects (guidelines may evolve to better improve deterministic as well as probabilistic analyses)

## RSP-0251 – Effect of inspection uncertainties on the operational assessment of reliability of steam generator tubing

Steam generators (SG) are periodically inspected to maintain high safety and integrity of the heat transport system in the nuclear plant. The integrity of SG tubing is affected by various degradation mechanisms, such as wear and stress corrosion cracking (SCC). SG tubing integrity assessment is periodically performed to ensure that the tubing degradation does not exceed the structural limit in the upcoming operation cycles. This report presents an advanced probabilistic approach for integrity assessment of SG tubing and highlights the importance of correct modeling of inspection uncertainties, such as the flaw sizing error, in the prediction. A case study is presented using actual data from a nuclear station, which illustrates the effectiveness of the proposed method. This report presents a consistent method to calculate the flaw repair limit and inspection interval to satisfy the acceptance standard with a specified probability, typically 95 percent probability. A simulation-based study shows that a relatively small sizing error can lead to a large error in the flaw growth rate prediction. It is concluded that the approximate methods used by the industry to model inspection uncertainties are generally conservative and suitable for operational assessment in the short term (time 2 EFPY). However, in long-term (time 4 EFPY) life cycle management planning, the approximate methods can lead to conservative predictions of flaw repair limit and inspection interval.
The conservatism arises from the simplified nature of the probabilistic analysis of the flaw growth process and the use of upper bounds for growth rate and sizing error. Therefore, consideration of the proposed method can be beneficial in terms of improving the efficiency of inspection and maintenance programs in a reliability-consistent manner.

## RSP-0252 – Probabilistic assessment of leak rates through steam generator tubes

The main goal of the project was to provide predictive correlations, models and experimental data needed to enable the CNSC to independently evaluate the integrity of steam generator tubes as plants age and degradation proceeds, new forms of degradation appear, and new defect-specific management schemes are implemented. The present research included investigation of CANDU steam generator tube degradation mechanisms (pitting, fretting, and cracking) and the development of probabilistic failure and fracture mechanics models. In order to meet this goal, several tasks were carried out:

• A detailed survey of the Canadian and international nuclear industries, particularly US practices and methods, on fracture mechanics, leak flow rate models, and uncertainty analysis for pipe and steam generator tubing degradation was carried out. It was found that each country has developed a consolidated action plan to mitigate and reduce steam generator failures.
• A survey of state-of-the-art fracture mechanics, crack initiation and crack propagation models was carried out. The survey indicated that there are adequate fracture mechanics models that can be used to predict the crack opening area for steam generator tubes. However, the crack initiation models are limited. This is primarily because cracks develop and grow due to multiple causes that depend on several parameters. The crack growth models are also limited. Models have been identified that are recommended for estimation of crack opening area and crack growth.
• A two-phase critical flow model was developed that takes into account the detailed flaw morphology. Software was developed in Fortran to perform critical flow calculations. The model was validated against straight-tube critical flow data at low pressure.
• Existing crack leak rate models were assessed by comparisons with industry-recognized and documented examples and test cases. From this analysis, it was found that the existing leak rate models are not adequate for predicting steam generator tube crack leak rates.
• Probabilistic methodology for assessing steam generator tube integrity was reviewed. The probabilistic fracture-mechanics code CANTIA was assessed for steam generator integrity analysis. This code provides a probabilistic assessment methodology for leak rates through the steam generator tubes. The code results were presented for probabilistic predictions of flaw size and leak rates with time. Some of the practical issues with running such a code have been identified.

An experimental program was developed to obtain new data on critical flow in simulated cracks of steam generator tubes. Slit geometries were used as representative steam generator tube cracks, and critical flow rates were measured for pressures up to 6.8 MPa and subcooling from 15 °C to 29 °C. Existing critical flow models were tested against the data, and a new critical mass flux relation was developed that is applicable to steam generator cracks.

## RSP-0253 – Review of fitness-for-service guidelines for steam generator

A review of the Fitness-for-Service Guidelines (FFSG) for SG Tubes, Section 1: Evaluation Procedures and Acceptance Criteria, COG Report COG07-4089 (2007), has been conducted. A comparison of the FFSG with NEI-97-06 showed that:

1. Both criteria maintain structural and leakage integrity throughout the evaluation period using condition monitoring and operational assessment.
Both permit the use of in-situ pressure testing to satisfy the performance criteria for condition monitoring. In general, neither criterion allows a sharp flaw detected in a tube during condition monitoring to go back into service without repair. Both use a 40 percent thickness repair limit for non-planar flaws such as loss of thickness due to wear or fretting.
2. The safety factors used in the two criteria are almost identical. In NEI-97-06, the safety factors are applied directly on the burst pressure. In the FFSG, they are applied indirectly through concepts such as MTFS and FAROL.
3. NEI-97-06 requires that all flaws meet the 3 ∆pNO and 1.4 ∆pSLB criteria against burst with a probability of 95 percent at 50 percent confidence for both condition monitoring and operational assessment. In NEI-97-06, the term burst implies unstable burst, not ligament rupture.
4. The FFSG uses a different approach from NEI-97-06. Two acceptance criteria are permitted, depending on whether leakage is prohibited or permitted. The FFSG uses the terms ligament instability and throughwall penetration to denote what is defined as "ligament rupture" in the EPRI guidelines. The use of a uniform terminology is recommended to avoid confusion.
5. The FFSG criteria when leakage is prohibited are more severe than the NEI-97-06 criteria, because the latter requires a safety factor of three during NO (1.4 during SLB) against unstable burst with a 95/50 confidence limit, whereas the former requires a safety factor of three during Level A events (1.5 during SLB) against throughwall penetration (ligament rupture), not rupture (unstable burst). If the acceptance criteria permitting leakage are adopted, throughwall penetration of a flaw is permitted provided LBB is demonstrated with appropriate safety factors.

The following concerns were raised in the expert opinion section:

1.
Although EDM notches are acceptable simulators for rupture of fatigue cracks and frets, they do not have the complex morphology (multiple cracks, ligaments, etc.) of stress corrosion cracks, and they are not good simulators of stress corrosion cracks for leak rate or NDE studies. Although leak rate per unit area derived from tests on EDM notches may be applicable to SCCs, calculating the crack opening areas of SCCs may be complicated by the presence of ligaments.
2. The use of MTFS for acceptance criteria prohibiting leakage and of FAROL for acceptance criteria permitting leakage is more appropriate for flaws that are nearly rectangular (e.g., frets) than for stress corrosion cracks, which are irregularly shaped, ligamented and do not have uniform depths.
3. The five flaw types given in the FFSG are representative of wall thinning and frets, but the crack geometries provided are representative of fatigue cracks, not of stress corrosion cracks. The FFSG does not give any guidance on how to characterize stress corrosion cracks (for integrity evaluation) based on NDE results. Much research is needed in this area for stress corrosion cracks.
4. For acceptance criteria permitting leakage, the FFSG requires that leak before break be demonstrated and, in particular, that the leak rate be shown to be detectable so that the plant can be shut down prior to rupture. Although this idea is theoretically appealing, its practical utility may be limited because 100 percent throughwall flaws in tubes pulled from SGs are sometimes found to be non-leaking. Also, when using probabilistic analysis, the criterion of probability of rupture = 0.01 (safety factor = 1) at the 50 percent confidence level provides less safety margin than the usual probability of rupture = 0.05 (with safety factor = 3 or 1.5) at the 50 percent confidence level.
5.
According to Table IA-5, the FFSG requires that the number of flaws for each degradation mechanism exceeding MTFS in all SGs be ≤ 1, but does not specify whether this number is the expected value or the upper bound value. Elsewhere in the FFSG it is stipulated that the above requirement should be satisfied at the 95 percent confidence level. The requirement should be made uniform throughout the FFSG.
6. The FFSG provides acceptable procedures to account for common degradation mechanisms. However, the degradation morphology of stress corrosion cracks is not considered in any detail in the FFSG, possibly because this is still an area of ongoing research. An additional degradation mechanism, foreign object damage, is becoming increasingly prevalent in the US as well as in Canada and will have to be considered in the FFSG.
7. The equations and correlations for burst/rupture and fracture mechanics parameters in the FFSG were compared with those in the literature. The unstable burst pressure correlation in the FFSG gives results comparable to those in the literature. However, Eq. C-37 gives the impression that the coefficients for the ligament rupture pressure correlations are universal constants. It should be made clear that these coefficients are not constants but depend on the flow stress properties of the tube material on which the rupture tests were run. The use of American Society of Mechanical Engineers (ASME) Code minimum properties to derive the coefficients may lead to unconservative ligament rupture pressure predictions. The fracture mechanics correlations in the FFSG provide results comparable to those of available correlations in the literature, although there is some variability among the different literature correlations.
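For context on item 7, one widely used open-literature form of the unstable burst pressure correlation for a throughwall axial crack in a tube is based on an Erdogan-type bulging factor. The sketch below is illustrative only; it is not Eq. C-37 of the FFSG, and, as the review notes, the coefficients of such correlations depend on the flow stress of the material tested.

```python
import math

def bulging_factor(half_length, radius, thickness):
    """Erdogan-type bulging (magnification) factor for a throughwall
    axial crack in a thin tube (coefficients from the open literature).
    half_length, radius, thickness in consistent length units."""
    lam = half_length / math.sqrt(radius * thickness)
    return 0.614 + 0.481 * lam + 0.386 * math.exp(-1.25 * lam)

def unstable_burst_pressure(half_length, radius, thickness, flow_stress):
    """Burst pressure p_b = sigma_f * t / (R * m): a sketch, not the FFSG
    correlation itself. flow_stress is typically (Sy + Su) / 2."""
    m = bulging_factor(half_length, radius, thickness)
    return flow_stress * thickness / (radius * m)
```

Note that the bulging factor reduces to 1.0 for a vanishing crack, recovering the unflawed burst pressure of the tube, and grows with crack length, so the predicted burst pressure falls monotonically as the crack lengthens.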
Overall, the FFSG provides a suitable framework for handling flaws that could exceed the 40 percent wall thickness limit by the end of the next inspection interval, through the use of condition monitoring and operational assessment.

Aircraft crash forcing functions (Riera force histories) were reviewed and recommendations made to the CNSC for the choice of four aircraft types to use in specifying related design requirements for nuclear plant suppliers. For the four recommended aircraft types, impact force histories were then computed for the impact velocity values agreed upon with the CNSC, using information available in the public domain on crushing force, mass distributions and the assumed velocity values. The computations, arranged in Excel workbooks, are based on (i) the impulse exerted by the stopping mass and the crushing force on the target, assumed to be rigid, and (ii) solving the equation of motion for the uncrushed portion of the fuselage. The key result from these computations is the total force history acting on the target panel. The other key component of the impact load specification is the distribution of the force on the target panel – the loaded area. This is determined from the plane geometry by tracking, through the duration of the impact event, the shape and area in contact with the target – the area is initially the fuselage circle, but enlarges to the sides at the point in time when the wings impact the target. How far the crushing progresses along the fuselage and wings, and the duration of the force, depend on the initial impact velocity. The impact loading for each aircraft type at each of the selected impact velocity values is presented as (i) the total force history and (ii) the loaded area history.
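The Riera procedure described above, in which the target force equals the crushing force plus the momentum flux of the arriving mass while the uncrushed portion is decelerated by the crushing force, can be sketched as follows. The uniform crushing force and mass distribution are simplifying assumptions for illustration; the actual workbooks use aircraft-specific, position-dependent data.

```python
def riera_force_history(length, mass_per_len, crush_force, v0, dt=1e-4):
    """Riera-type impact force on a rigid target (minimal sketch).

    Assumes a uniform fuselage: constant crushing force P_c [N] and
    constant mass per unit length mu [kg/m] over length [m], impacting
    at v0 [m/s]. Returns parallel lists of times [s] and forces [N].
    """
    x, v, t = 0.0, v0, 0.0      # crushed length, velocity, time
    times, forces = [], []
    while x < length and v > 0.0:
        # F(t) = P_c + mu * v^2  (crushing force + momentum flux)
        times.append(t)
        forces.append(crush_force + mass_per_len * v * v)
        # uncrushed portion decelerated by the crushing force
        m_uncrushed = mass_per_len * (length - x)
        v -= (crush_force / m_uncrushed) * dt
        x += v * dt
        t += dt
    return times, forces
```

For example, a hypothetical 30 m fuselage of 120 t impacting at 100 m/s against a 20 MN crushing force gives an initial target force of P_c + mu·v0² = 60 MN, decaying as the uncrushed mass decelerates.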
## RSP-0255 – Independent review of staff review guides related to engineering aspects of protections against malevolent acts, seismic hazard, external hazards other than seismic, and internal hazards

This brief review report provides my review comments on:

• RD-337, Design of New Nuclear Power Plants (2008) [2]

and the following three draft staff review guides:

• SRG-2.01-CON-11NNNN-XXX, Engineering Safety Aspects of Protection from Malevolent Acts (2009) [4]
• SRG-2.01-CON-11NNNN-5.6.3, Seismic Qualifications [5]
• SRG-2.01-CON-11NNNN-5.6.4, External Hazards Other Than Earthquakes and Internal Hazards (accidents) [6]

I strongly support the establishment of quantitative safety goals in RD-337. Establishing these quantitative safety goals is a major step forward in the development of performance goal (risk-informed) based engineering design criteria. However, it is unclear whether the established quantitative safety goals are mean risk or median risk goals. Mean risk incorporates consideration of both epistemic uncertainty and aleatory (random) variability, whereas median risk does not fully address epistemic uncertainty. In the case of seismic risk, mean and median risk estimates commonly differ by a factor of three to 10. I recommend that RD-337 clearly define the quantitative safety goals to be mean risk goals.

My primary comment on the malevolent acts SRG is that the CNSC should define the severity of the DBTs and BDBTs. Definition of these DBTs and BDBTs should not be left to the licensee.

My major comment on the seismic qualification SRG is that, because of the epistemic uncertainty in seismic risk, it is very difficult to drive mean seismic risk down to less than 10 percent of the total quantitative safety goals given in RD-337. Of the dominant contributors to risk, seismic risk is the most difficult and most costly to significantly reduce. Seismic is a common cause event in that it concurrently affects all structures, systems and components (SSCs).
Furthermore, the slope of the mean seismic hazard curve is rather flat, so that one would typically have to increase seismic design levels by a factor of two to four to reduce seismic risk by a factor of 10. For this reason, seismic risk should be allowed to use a disproportionately large fraction of the total quantitative safety goal. I suggest that seismic risk be held to less than 50 percent of the quantitative safety goals given in RD-337. Section 2.2.1 of the seismic qualification SRG [5] should establish the permissible seismic risk goals consistent with the total risk goals of RD-337. I suggest the following seismic safety goals be established:

• SCDF < 5 × 10⁻⁶ per year
• SSRF < 5 × 10⁻⁶ per year
• SLRF < 5 × 10⁻⁷ per year

Even with these more relaxed seismic safety goals, the design basis earthquake (DBE) ground motion design response spectrum (DRS) needs to be more conservatively defined than the mean 10⁻⁴/yr uniform hazard response spectrum (UHRS), i.e.:

DRS = DF × UHRS(10⁻⁴/yr)

The design factor (DF) ranges from 1.0 to 2.0 as a function of the slope of the site-specific seismic hazard curve. With the DRS defined as described, it is still necessary to either:

• perform a probabilistic seismic safety assessment to show these goals are achieved, or
• demonstrate that the design criteria are sufficiently conservative to demonstrate a high confidence of low probability of failure (HCLPF) capacity in excess of a beyond design basis earthquake (BDBE) event set at 1.67 times the DBE.

Section 2.2.11 of the seismic qualification SRG [5] specifies this BDBE HCLPF margin requirement. With seismic risk using up to 50 percent of the total quantitative safety goals, the sum of the risk from all of the hazards considered in the external and internal hazard SRG [6] probably needs to be held to less than 20 percent of the quantitative safety goals.
This means that the safety goal for any individual one of the hazards considered in Ref. [6] needs to be held below the low 10⁻⁷ range. It is generally fairly easy to demonstrate CDF and SRF for each of the hazards considered in Ref. [6] to be less than the low 10⁻⁷ range. Most of these hazards cause only local damage to structures, well short of collapse, and affect only systems and components in individual compartments of the structures. Thus, risk from these hazards is greatly reduced by defense in depth, redundancy, and separation-of-train provisions. In summary, I categorically disagree with the statement on page 9 of Ref. [6] that: "The safety goals for any individual event sequence shall be reviewed with frequency lower than 10⁻⁶ per reactor year." This goal is too liberal for the individual hazards considered in Ref. [6], for the reasons discussed above, if the total safety goals of RD-337 are to be maintained. However, in Ref. [6] the probabilities of local damage to structures and failure of individual systems and components are set at 10⁻⁵/yr. This level seems reasonable to me so long as the defense in depth, redundancy, and separation-of-trains provisions are sufficiently enforced so as to achieve safety goals for these individual hazards of less than the low 10⁻⁷ range. Therefore, with only one exception, I find the design load provisions of sections 2.2.2, 2.2.3, and 2.2.4 of Ref. [6] to be reasonable. The one exception is the external flooding criteria of section 2.2.2.3 of Ref. [6]. Unless flood-resisting doors are included as part of the design to prevent external flooding from entering the structures, external flooding can be a common cause failure mode for all critical electrical components located at or below grade. As a result, plant grade should be defined above the 10⁻⁶/yr probability of exceedance level for external flooding of the site unless additional protective measures are taken.
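The DRS scaling and the BDBE margin check described in this section can be illustrated numerically. The design factor below is a purely hypothetical linear interpolation between the stated 1.0 and 2.0 bounds as a function of a hazard-curve slope ratio; the actual DF rule would come from the applicable design standard.

```python
def design_factor(slope_ratio, ar_lo=2.0, ar_hi=6.0):
    """Hypothetical DF: interpolates linearly from 1.0 to 2.0 as the
    hazard-curve slope ratio spans an assumed range (ar_lo..ar_hi).
    Illustrative only; not a standard's actual DF formula."""
    frac = (slope_ratio - ar_lo) / (ar_hi - ar_lo)
    return 1.0 + min(1.0, max(0.0, frac))

def design_response_spectrum(uhrs_1e4, slope_ratio):
    """DRS = DF x UHRS(1e-4/yr), applied ordinate by ordinate [g]."""
    df = design_factor(slope_ratio)
    return [df * sa for sa in uhrs_1e4]

def bdbe_hclpf_target(dbe_value):
    """BDBE review level: HCLPF capacity must exceed 1.67 x DBE."""
    return 1.67 * dbe_value
```

A flatter (slower-decaying) hazard curve yields a larger slope ratio and hence a larger DF, which is the qualitative behaviour the review describes: flat hazard curves demand more conservative design spectra.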
## RSP-0256 – Tritium analysis of soils and vegetation from Pembroke, Russell, Golden Lake, Hay River

This report presents the results of analyses contracted by the CNSC to determine tritium activities in soils and vegetation collected near SRB Technologies (SRBT), Pembroke, Ontario on August 14, 2007, relative to other background locations. SRBT, a tritium processing facility, has released tritium to the environment through two stacks located at 320 Boundary Road as part of its CNSC-licensed operations since 1991. It temporarily ceased processing tritium on February 1, 2007, resuming only some time after being granted a new processing licence on August 1, 2008. This report provides tritium data for samples collected at various distances from SRBT late in the first growing season after a major reduction in tritium releases. For controls, a few samples were also analyzed from a local (Golden Lake) and a regional (Russell) background site, and from one site far from any industrial sources of tritium (Hay River, Northwest Territories). Samples were collected by CNSC staff, and laboratory analyses were completed by the MAPL Noble Gas Laboratory in the University of Ottawa's Earth Science Department (UOESD). A total of 36 samples with associated documentation were provided to the UOESD by the CNSC for free water tritium (FWT) and organically bound tritium (OBT) analyses. Samples were dehydrated and the water recovered for analysis of tritium levels (FWT) by liquid scintillation counting (LSC). Dried organic matter was then encapsulated and analyzed for ingrowth of ³He by mass spectrometry for determination of OBT. A tree ring experiment was carried out to determine the amount of organically bound tritium in the vicinity of the SRBT facility.
## RSP-0257 – Environmental fate of tritium in soil and vegetation

The CNSC has initiated and funded this study of the fate of tritium as it cycles through soil, vegetation (fruits and vegetables), plants and animal produce in local environments near sites of long-term, sustained atmospheric releases of tritium from nuclear facilities. This study contributes to our knowledge of the pathways and mechanisms of transformation of tritium emissions from nuclear activities to tritium oxide (HTO) and organically bound tritium (OBT) in the biosphere and the food supply.

### Tritium partitioning through air, soil, vegetation, and animal produce near four Canadian nuclear facilities

At the local control site (Russell, ON), the HTO in soil water was 24.7 TU, whereas the OBT value was 86.9 TU. At the remote background sites (Warman and Langenburg, SK), this contrast is more pronounced, with soil water at only 12.7 TU and associated OBT of 166 TU. This is generally attributed to residual thermonuclear bomb tritium that remains sequestered in the soil organics. While associated vegetation has HTO close to the soil water and typical background levels for these regions, the OBT of vegetation is enriched, with values up to 101 TU in Russell and 198 TU in Saskatchewan. This suggests that vegetation sequesters tritium not only from soil pore water, but also (significantly) from soil organics. Near the Pembroke nuclear facility, both HTO and OBT activity in soils, fruits, vegetables and fodder decrease dramatically with distance from the SRBT facility, from over 1800 TU (200 Bq/L) near the facility to near-background levels of less than 100 TU (10 Bq/L) within 6 km. OBT values in vegetation and animal produce drop from near 2000 TU (225 Bq/L) near the facility to less than 30 Bq/L at distance. Similarly, OBT values in vegetation drop off rapidly away from the SRB Technologies site, from close to 2000 TU to less than 400 TU.
OBT is found for most samples to be enriched over associated HTO, with ratios ranging from near unity to over 15 in garden and animal produce, and is likely responding to a high inventory of OBT that resides in the region due to historical SRB Technologies emissions. A reduction in emissions since 2007 was observed, due to a shutdown of operations. The lower HTO concentrations relative to OBT suggest that the resumption of activities was accompanied by lower levels of stack emissions than in the historical record. No systematic enrichment from soils to plants and to animals is evident, although significant year-to-year shifts in OBT are observed.

OBT and HTO for vegetation from the vicinity of the Darlington Nuclear Generating Station demonstrate a similarity and variability that is within the range reported by historical monitoring (OPG, 2009). Levels of HTO in soils, vegetation and animal produce range from near 200 TU within 2 km of the generating station to close to background (near 127 TU, or 15 Bq/L) at 6 km. The levels and similar OBT/HTO ratios suggest that these arise from a low, steady-state emission source that is dominated by HTO from the station. Levels of HTO and OBT in garden produce and fodder are variable, but show no systematic enrichment in one reservoir over the other. By contrast, animal produce is uniformly enriched in OBT over HTO, with ratios > 1. Minor increases in OBT over HTO in some animal produce may be derived from fodder OBT. A significant 2008-to-2009 drop in OBT in animal produce (1914 to 100 TU in milk and 337 to 41 TU in eggs) suggests that feed supplied to these farms from a high-tritium area is likely the greater contributor to excesses in OBT. Meat (a longer-term growth factor) was not high in 2008. The OBT/HTO ratios for most animal produce are close to 2 (1.2 to 2.6 as TU), with the exception of the 2008 milk and egg samples (45 and 13 as TU).
At Gentilly, the HTO measurements for the different reservoirs of soil, garden produce, fodder and animal produce show a decrease from levels of about 210 TU (25 Bq/L) within 1.5 km of the site to near-background levels (< 20 TU) within 20 km of the generating station. There is no systematic enrichment of OBT over HTO evident in any reservoir, with the exception of one anomalous OBT value in meat in 2008 (234 TU), which dropped to background levels (13.9 TU) in 2009, possibly due to the import of animals from a different site or the use of high-OBT feed from outside the region, as observed for other sites. The OBT/HTO ratio, exclusive of this one sample, varies between 0.74 and 1.09, suggesting equilibrium with long-term, sustained emissions.

The Peterborough site shows the impact of tritium releases on vegetation near the facility, with HTO values up to 8800 TU (1000 Bq/L) and OBT values up to 3800 TU (210 Bq/kg) within 1 km of the site, and an exponential drop-off to values near 200 TU in HTO and 900 TU in OBT within 8 km of the facility. The Peterborough site is unique among the four in that it shows a systematic increase in OBT in the 2009 samples over the 2008 samples. Values increase by amounts ranging from less than 50 percent to over 10-fold. Most samples also show increases in HTO on the order of 10 percent to over 100 percent from 2008 to 2009. These significant OBT increases and associated HTO increases suggest a variable emission source term. This is supported by a near doubling of OBT in the 2009 tree ring (348 Bq/L water equivalent) over the 2008 ring (186 Bq/L water equivalent) sampled 450 m north of the site. As observed at the three other nuclear facilities, year-to-year variations in vegetation and animal produce obscure any potential enrichment of the OBT signal from soils and vegetation to animal products.
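The TU and Bq/L figures quoted in parallel throughout this section follow the standard water-tritium conversion of roughly 0.118 Bq/L per tritium unit. A small helper makes the report's pairings (e.g., 8800 TU ≈ 1000 Bq/L, 127 TU ≈ 15 Bq/L) easy to check:

```python
TU_TO_BQ_PER_L = 0.118  # 1 tritium unit of water ~ 0.118 Bq/L

def tu_to_bq_per_l(tu):
    """Convert a water tritium activity from tritium units to Bq/L."""
    return tu * TU_TO_BQ_PER_L

def bq_per_l_to_tu(bq_per_l):
    """Convert a water tritium activity from Bq/L to tritium units."""
    return bq_per_l / TU_TO_BQ_PER_L
```

For example, 8800 TU converts to about 1038 Bq/L, matching the roughly 1000 Bq/L quoted for the Peterborough site, and 127 TU converts to about 15 Bq/L, matching the Darlington background figure.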
### Experiments investigating HT–HTO–OBT conversions

The conversion of HT from atmospheric sources to HTO is known to occur dominantly in soils, where hydrogenotrophs are available and capable of oxidizing hydrogen to water. Little is known, however, about the rates and conditions for such conversion. Experiments were designed to investigate the mechanisms and rates that influence the fate of HT released to the atmosphere and the rate of HTO/OBT conversion in soils. For tomatoes, cucumbers, radishes and beans in the three soil types, HTO values range between 25,000 and 90,000 TU, which is less than 10 percent of the ambient atmospheric HTO. OBT values for plants fall within the same range as HTO, varying between 40,000 and 80,000 TU. This similarity in ranges, and the low values compared with atmospheric HTO, suggests that HT conversion in soils is a dominant pathway for OBT. Subsequent HT diffusion experiments (below) suggest that rates of HT conversion in the soils can produce these amounts of HTO over the growing season. HTO uniformly exceeds OBT by some 20 percent to over three-fold in the plant produce, whereas in the plant stalks it is the OBT that uniformly exceeds the HTO. This discordance suggests that seasonal variations in HT emissions may have an effect on the sequestering of tritium by different plant components. OBT is enriched in the stalks over the associated vegetables, again possibly reflecting the timing of growth with a variable HT (and so soil HTO) signal over the growing season. Further, the fact that OBT is close to HTO in most samples (i.e., within 50 percent of HTO) is an indication that these plants derive most of their hydrogen from actively cycled HT and HTO rather than from inventories of soil organics. If these inventories were important to the growth of these plants, a much greater ratio of OBT to HTO would be observed.
With respect to soil types, no significant differences in HTO and OBT occur that would indicate greater production in one soil type over another. However, some 40,000 to 80,000 TU were measured in the HTO fraction of these soils, indicating a significant accumulation of tritium due to their exposure to HT and/or HTO in air. This compares closely with the results of the HT diffusion cell experiments (presented below), which show an average accumulation of HTO from HT oxidation of some 10,000 TU over an eight-day period. While the three soils used in these greenhouse experiments were not protected from ambient HTO, it seems that most of the measured production of free water tritium can be attributed to HT conversion.

The experimental garden was designed to evaluate the factors that influence the mechanisms and rate of HT conversion to HTO in soils. However, the confounding effects of exchange between soil water and atmospheric HTO, and of additions of low-tritium watering water, could not be resolved. A second set of experiments was carried out with the specific objective of testing a protocol that excluded exchange of atmospheric HTO vapour with experimental soil waters while allowing the diffusion of atmospheric HT into the soils, in order to determine rates of HT conversion to HTO by soils. This was managed through the use of diffusion membranes fitted to soil-filled bottles that allow only HT diffusion into the bottle and onto the soil. Following deployment periods on-site at SSI ranging from hours to eight days, recovered soil waters were analyzed for HTO produced from oxidation of HT in the cells. Results of these diffusion experiments show in-growth of HTO ranging from 777 to 1936 Bq/L after eight days, giving an average rate of conversion of 0.0019 Bq g⁻¹ h⁻¹.
Using the atmospheric HT concentration and normalizing to average H₂ concentrations in soils (500 ppmv) gives an H₂ conversion rate of about 200 nmol g⁻¹ h⁻¹, which is close to the value found by Guo and Conrad (2008) for a forest soil. This is consistent with abiontic hydrogen oxidation. While H₂ consumption can be mediated microbially, it has been shown to proceed more rapidly, and at lower H₂ partial pressures, by abiontic enzymatic activity. This process involves free extracellular hydrogenase enzymes sorbed onto soil particles and present in dead cells and cell fragments (Skujins, 1978; Haring and Conrad, 1994; Conrad and Seiler, 1981). From this, we conclude that HT-to-HTO conversion most likely proceeds at the same rate as natural H₂ conversion in soils.

### Tree rings as records of atmospheric tritium

The growth rings of trees were analyzed for their potential record of tritium releases from nuclear facilities. This experiment was limited to establishing the viability of the technique, using tree sections of opportunity from high-tritium sites with records of historical HT and HTO releases. Samples of opportunity were acquired from Pembroke at the SRB site, from the SSI site in Peterborough and from the Darlington Generating Station. The two tree ring records from SSI and SRBT, with high OBT values on the order of 1000 TU and 50,000 TU respectively, are 20 to 1000 times higher than local background. Both recover major multi-year trends of their associated HT-dominated emissions records. The tree ring record for SSI ends with a strong upward shift in OBT from 2008 to 2009, consistent with observations in local vegetation over these two years, although in contrast with the emissions record. The low-OBT tritium record recovered from the Darlington site, with values averaging in the 100 to 150 TU range, shows only a poor correlation with the local HTO-dominated emissions at this site.
The poorer correlation for this record, which has values that average only about five times background, may be due to stronger impacts from factors such as seasonal variations in precipitation and emissions. Future work in this research must look more closely at short-term variations in emissions and their correlation with periods of wood growth, and also at the possible translocation of ³H between tree rings, as is observed for trace metals.

### Environmental recovery from HT and HTO

The temporary shutdown (January 31, 2007 to July 1, 2008) of operations at the SRB Technologies facility in Pembroke, ON provided a potential opportunity to evaluate changes in HTO and OBT in vegetation following a major reduction in HT emissions. This was carried out over a three-year period from 2007 to 2009. Stack emissions, mainly as HT, decreased exponentially from the annual maximum reported in 2000 (17,990 TBq) to the minimum in 2006 (3 TBq); 2007 was one exception, as stack emissions peaked at 1875 TBq. In 2007, OBT concentrations in vegetation were highest (reaching a maximum value of 27,806 TU), well above equilibrium with the HTO concentration. In 2008 and 2009, OBT concentrations, with a maximum value of 1965 TU in 2009 near the SRB facility, were well below those measured in 2007. Overall, these results indicate that HTO and OBT concentrations in garden produce can respond quickly to changes in stack emissions.

### OBT cycling in the environment

The transport and partitioning of OBT between soils, plants, garden vegetables and animal produce has regulatory implications for the protection of human health and the environment. One of the objectives of this research was to evaluate the dynamics of OBT cycling among environmental compartments under conditions of long-term, sustained releases of tritium from atmospheric sources near four Canadian nuclear facilities (SRB Technologies – Pembroke, ON; Shield Source Inc. – Peterborough, ON; Darlington NGS, ON; and Gentilly NGS, QC).
Summary graphs of vegetation and animal produce from the four sites of nuclear activity show considerable variability in the OBT to HTO concentrations. While all uniformly show greater OBT than HTO for animal produce, there is no systematic increase in animal OBT over fodder OBT. Moreover, the anomalous increases in animal produce OBT are not consistent and are likely related to differences in feed sources rather than to enzymatic enrichment and bioaccumulation. The consistently higher values of OBT over HTO in animal produce make HTO a poor proxy for the tritium content of animal produce for regulatory purposes. Substantial year-to-year variations in OBT in soils, gardens, fodder and animal produce have been observed, which are greater than any potential trophic accumulation of OBT. Near Peterborough, values for OBT in vegetation in 2009 greatly exceed those of 2008. The OBT of animal produce at Darlington showed a major reduction between 2008 and 2009. From this, it is concluded that temporal, year-to-year variations in tritium distribution among environmental compartments are more important than possible fractionation and accumulation as OBT.

## RSP-0258 – Fire safe shutdown analysis review

The Fire Safe Shutdown Analysis (FSSA) checklist was developed to assist CNSC regulatory personnel in conducting reviews of FSSAs submitted by operating and under-construction nuclear power plants in Canada. The FSSA checklist is based upon Clause 11, Fire Safe Shutdown Analysis, of Canadian Standards Association (CSA) standard N293-07, Fire Protection for CANDU Nuclear Power Plants.
The FSSA checklist incorporates requirements of other clauses of CSA N293 invoked by Clause 11, and guidance provided in Nuclear Energy Institute directive NEI 00-01 Revision 2, Guidance for Post Fire Safe Shutdown Circuit Analysis; National Fire Protection Association code NFPA 805, Performance-Based Standard for Fire Protection for Light Water Reactor Electric Generating Plants; and USNRC Regulatory Guide 1.189 Revision 2, Fire Protection for Nuclear Power Plants.

## RSP-0259 – Industrial fire brigade staffing

This report provides acceptance criteria for the determination of nuclear power plant (NPP) needs analyses and NPP industrial fire brigade (IFB) minimum complement staffing. The report was developed to assist CNSC regulatory personnel in conducting reviews of NPP needs analyses with respect to IFB staffing. The report includes a review checklist based on section 10, Fire Response Capability, of Canadian Standards Association (CSA) standard N293-07, Fire Protection for CANDU Nuclear Power Plants. The review checklist addresses compliance with the applicable objectives, principles and criteria set forth in the Nuclear Safety and Control Act (NSCA), associated regulations, codes and standards for all operating modes, and fire protection objectives. The review checklist provided in attachment A is intended for incorporation into other regulatory design review guidelines for assessing the acceptability of submissions. Two recommendations are included in this report. Each provides detailed guidance on determining the fire brigade minimum staffing complement through illustrative examples. Scenarios that are not covered specifically in the recommendations can be assessed using the principles provided in Recommendation 1, along with experienced judgment and interpretation of the scenarios provided in attachment B.

## RSP-0260 – Bruce fire modeling review

Fire models were prepared for new fuel storage areas at the Bruce Nuclear Power Plant.
A review was conducted to determine whether the fire models were developed in accordance with industry standards and consistent with industry practice. The review was performed to assist CNSC regulatory personnel conducting the review of the fire models. The review identified issues that need to be addressed before the fire models can be used to accurately predict appropriate arrangements for new fuel packages. The review determined that non-conservative approaches in assumptions, methods and conclusions were used in all four of the fire model analyses. The number of comments and identified non-conservative approaches, taken as a whole, is indicative of the lack of validity of the fire models. It should be noted that certain unorthodox approaches were used to determine heat release rates of vertical surfaces. The fact that these approaches were reviewed does not in any way suggest that they are acceptable; the reviews were conducted in the interests of being thorough. The fire modeling analysis relied on industry codes and standards; however, the application of the principles described therein is questionable in many cases. The design objectives of the analyses are not clearly stated or supported by the analyses as submitted to the CNSC. Many of the assumptions made in the fire models are unsubstantiated. Only some of the input and output of the fire models were provided to the CNSC for review. The recommendations provided are appropriate. However, of particular concern is the recommendation to add sprinkler systems. No performance-based substantiation of the design criteria of the recommended sprinkler systems was provided in the fire models. The recommendation is appropriate for prescriptive compliance but inappropriate for performance-based compliance without substantiation, because the ability of the new sprinklers to meet the design objectives required in the FSSA is not addressed.
Lastly, the quality of the fire models was poor and errors were identified. Based on the above, this review determined that the fire models submitted are inadequate for the intended use.
2019-05-19 13:10:59
http://www.advanceduninstaller.com/123-Watermark-7abc154ad21dfd820c1d35d5e3d1ec66-application.htm
# 123 Watermark

## A guide to uninstall 123 Watermark from your PC

This page is about 123 Watermark for Windows and explains how to uninstall it from your computer. The application was developed for Windows by 123 Watermark; more information on it can be found on 123 Watermark's website. The 123 Watermark application is usually found in the C:\Program Files (x86)\123 Watermark directory, depending on the option chosen during setup. The full uninstall command line is MsiExec.exe /I{2E9C1565-F445-484A-BD3A-1434C0F0A4D3}. 123 Watermark's main file is 123Watermark.exe and takes about 975.09 KB (998488 bytes). 123 Watermark consists of the executables below, which take 7.31 MB (7668704 bytes) on disk:
• 123Watermark.exe (975.09 KB)
• exiftool.exe (6.36 MB)
The information on this page is only about version 1.0.7.3 of 123 Watermark. After the uninstall process, the application leaves leftovers on the PC. A few of these are listed below.
Folders remaining:
• C:\Program Files (x86)\123 Watermark

Usually, the following files are left on disk:
• C:\Program Files (x86)\123 Watermark\123Watermark.exe
• C:\Program Files (x86)\123 Watermark\BouncyCastle.Crypto.dll
• C:\Program Files (x86)\123 Watermark\default.jpg
• C:\Program Files (x86)\123 Watermark\DropNet.dll
• C:\Program Files (x86)\123 Watermark\exiftool.exe
• C:\Program Files (x86)\123 Watermark\fr\123Watermark.resources.dll
• C:\Program Files (x86)\123 Watermark\fr-FR\123Watermark.resources.dll
• C:\Program Files (x86)\123 Watermark\ICSharpCode.SharpZipLib.dll
• C:\Program Files (x86)\123 Watermark\Licence.rtf
• C:\Program Files (x86)\123 Watermark\log4net.config
• C:\Program Files (x86)\123 Watermark\log4net.dll
• C:\Program Files (x86)\123 Watermark\Microsoft.ApplicationInsights.dll
• C:\Program Files (x86)\123 Watermark\Microsoft.ApplicationInsights.Log4NetAppender.dll
• C:\Program Files (x86)\123 Watermark\Newtonsoft.Json.dll
• C:\Program Files (x86)\123 Watermark\OneDriveSdk.dll
• C:\Program Files (x86)\123 Watermark\RestSharp.dll
• C:\Program Files (x86)\123 Watermark\System.Net.Http.Extensions.dll
• C:\Program Files (x86)\123 Watermark\System.Net.Http.Primitives.dll
• C:\Program Files (x86)\123 Watermark\XmpCore.dll
• C:\Program Files (x86)\123 Watermark\Zlib.Portable.dll
• C:\Program Files (x86)\123 Watermark\zxing.dll
• C:\Program Files (x86)\123 Watermark\zxing.presentation.dll
• C:\Users\Public\Desktop\123 Watermark.lnk
• C:\Windows\Installer\{2E9C1565-F445-484A-BD3A-1434C0F0A4D3}\ARPPRODUCTICON.exe

Generally the following registry data will not be removed:
• HKEY_CLASSES_ROOT\Installer\Assemblies\C:|Program Files (x86)|123 Watermark|123Watermark.exe
• HKEY_CLASSES_ROOT\Installer\Assemblies\C:|Program Files (x86)|123 Watermark|fr-FR|123Watermark.resources.dll
• HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Products\5651C9E2544FA484DBA341430C0F4A3D
• HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\{2E9C1565-F445-484A-BD3A-1434C0F0A4D3}

Open regedit.exe to delete the following registry values:
• HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Installer\Products\5651C9E2544FA484DBA341430C0F4A3D\ProductName

## A way to erase 123 Watermark from your PC using Advanced Uninstaller PRO

123 Watermark is an application released by 123 Watermark. Users frequently want to uninstall it. Doing so by hand requires some experience with removing Windows programs manually. One of the easiest ways to uninstall 123 Watermark is to use Advanced Uninstaller PRO. Take the following steps:

1. If you don't have Advanced Uninstaller PRO on your PC, install it. Advanced Uninstaller PRO is a good uninstaller and general utility for maximizing the performance of your PC.
2. Run Advanced Uninstaller PRO. Take some time to get familiar with the program's interface and the tools available.
3. Press the General Tools button.
4. Activate the Uninstall Programs tool.
5. A list of the applications installed on your computer will appear.
6. Scroll the list of applications until you locate 123 Watermark, or activate the Search feature and type in "123 Watermark". If it exists on your system, the 123 Watermark application will be found automatically.

When you click 123 Watermark in the list of apps, some data regarding the application is shown:
• Star rating (in the lower left corner), reflecting the opinion other users have of 123 Watermark, ranging from "Highly recommended" to "Very dangerous".
• Reviews by other users - press the Read reviews button.
• Details regarding the application you are about to remove, via the Properties button.
For instance, you can see for 123 Watermark that:
• The web site of the program is: http://www.123Watermark.com
• The uninstall string is: MsiExec.exe /I{2E9C1565-F445-484A-BD3A-1434C0F0A4D3}

7. Click the Uninstall button. A window asking you to confirm will appear. Confirm the removal by clicking the Uninstall button. Advanced Uninstaller PRO will automatically remove 123 Watermark.
8. After uninstalling 123 Watermark, Advanced Uninstaller PRO will ask you to run an additional cleanup. Click Next to start the cleanup. All items of 123 Watermark that have been left behind will be found, and you will be asked whether you want to delete them. By removing 123 Watermark with Advanced Uninstaller PRO, you can be confident that no Windows registry entries, files or folders are left behind on your disk.
https://math.stackexchange.com/questions/2183231/proof-of-first-sylow-theorem-concerning-conjugation-of-two-coset-two-orbits-of-l
# Proof of first Sylow theorem concerning conjugation of two cosets/two orbits of length 1

The proof that this question concerns is from the Sylow theorems.

Let $G$ be a finite group of order $p^rm$ where $p$ is prime, $r\geq1$ and $p\nmid m$. Prove the existence of at least one subgroup $P$ of order $p^r$.

I am stuck at the part where you show that there exists only one orbit $\{\alpha\}$ of length $1$. Let $P\leq G$ be a maximal $p$-subgroup of $G$ with $|P|=p^s$, and let $H:=N(P)$ be the normalizer of $P$. Let $\Omega:= \operatorname{cos}(G:H)$ and $\alpha:=H$, so that $H=\operatorname{Stab}(\alpha)$. Consider the action of $P$ on $\Omega$. As $P\leq H$, there is one $P$-orbit of length $1$, namely $\{\alpha\}$. Let $\{\beta\}$ be any $P$-orbit of length $1$. Then $\beta=x^{-1}\alpha x$ for some $x\in G$.

What is the reason for this last statement? I know that if $G$ acts on its subgroups, conjugation is a transitive operation, and thus it would be clear. Cosets aren't subgroups though. So I thought it probably has to do with conjugation being an inner automorphism, hence bijective, and $|\{\beta\}|=|\{\alpha\}|=1$, but I can't quite put it together. Are orbits of the same length always conjugate to each other? Is it enough to say that one coset can be mapped to another through a bijection? Or can I somehow prove that the action of $G$ on $\Omega$ by conjugation is transitive? I get stuck because I end up trying to show that there exists a $g\in G$ so that the RHS is true: $$aH \sim bH \Leftrightarrow g(aH)g^{-1} = bH$$ (where $a,b\in G$). I don't know how I would show that both cosets are the same.

Relevant part of the proof from Groups and Geometry (Neumann, Stoy, Thompson)

• The action is not conjugation, it is right multiplication. Where you see $\alpha^x$ in NST it does not mean here $x^{-1}\alpha x$, it means $\alpha\cdot x=Hx$. It is the stabilisers which are related by conjugation. OK? – ancientmathematician Mar 12 '17 at 13:55
• Thank you!
I was unaware of this because at one point the same notation was used for conjugation. Then right translation on the cosets is transitive, and thus the stabilizers are conjugate. I managed to show even that, so the rest should be clear. – laura_b Mar 14 '17 at 14:15
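The accepted answer's point, that the action on $\Omega$ is right multiplication and that it is transitive (so every coset is $\alpha\cdot x$ for some $x\in G$), can be checked computationally on a small example. The sketch below uses $G=S_3$ and the subgroup generated by a transposition; the choice of group, subgroup and helper functions is purely illustrative and not part of the question.

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples of images; compose(p, q) applies q first, then p.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]           # subgroup generated by the transposition (0 1)

def coset(x):                        # right coset Hx = {h·x : h in H}
    return frozenset(compose(h, x) for h in H)

cosets = {coset(x) for x in G}
assert len(cosets) == len(G) // len(H)   # index [G : H] = 3

# For any two cosets Ha, Hb the element g = a^{-1}·b maps Ha onto Hb,
# so the action Hx ↦ Hxg by right multiplication is transitive.
for a in G:
    for b in G:
        g = compose(inverse(a), b)
        assert frozenset(compose(x, g) for x in coset(a)) == coset(b)
```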
https://de.zxc.wiki/wiki/Cauchy-Elastizit%C3%A4t
# Cauchy elasticity

Cauchy elasticity (from Augustin-Louis Cauchy and from Greek ελαστικός elastikos, "adaptable") is a material model of elasticity. Elasticity is the property of a body to change its shape under the action of force and to return to its original shape when the force is removed (example: a spring). The causes of elasticity are distortions of the atomic lattice (in metals), the stretching of molecular chains (rubber and plastics), or the change in the mean atomic distance (liquids and gases). In Cauchy elasticity, the reaction forces when a body is deformed are determined exclusively by the current deformation. Such relationships describe, for example, the equations of state of gases. If the initial state is free of forces, it is resumed after any deformation once the loads are removed. Different deformation paths that ultimately produce the same deformation result in the same reaction forces. The deformation rates likewise have no influence on the reactions at the level of the material equations: Cauchy elasticity is a time-independent material property. Dissipative processes such as viscous or plastic flow are thus excluded, which holds for real materials within their elastic limit. Real liquids, gases and some solids (such as iron and glass) are elastic to a good approximation under rapid, slight changes in volume (e.g. with sound waves). For solids, the elastic limit is respected for slow and sufficiently small deformations, which occur in many applications, especially in engineering. Although the reaction forces in a Cauchy elastic material are not influenced by the deformation path covered, the deformation work performed on solids along different deformation paths (with the same start and end point) can differ, which in the absence of a dissipation mechanism contradicts thermodynamic principles.
Path independence of the deformation work leads to hyperelasticity, which is a special case of Cauchy elasticity. Further conditions for modeling Cauchy elasticity can be derived from the principle of material objectivity, according to which the material behavior is invariant under a change of reference system, and in the case of isotropy. Many elastic materials such as steel, rubber, plastic, wood and concrete, but also biological tissue, can be described to a good approximation with Cauchy elasticity.

## Definition

In a Cauchy elastic material, the Cauchy stress tensor $\boldsymbol{\sigma}$ is a function only of the current deformation gradient $\mathbf{F}$:

$$\boldsymbol{\sigma} = \mathfrak{G}(\mathbf{F})\,.$$

This function will generally depend on the initial state of the body, in particular on internal stresses that are initially present. In most cases, however, the undeformed ground state, in which the deformation gradient equals the unit tensor $\mathbf{1}$, will be stress-free:

$$\mathfrak{G}(\mathbf{1}) = \mathbf{0}\,.$$

## Macroscopic behavior

(Figure: force-displacement diagram of a uniaxial tensile test with non-linear elasticity.)

Macroscopically, the following properties can be observed on a Cauchy elastic body:

• With a given deformation (fluids: volume change), the reaction forces (the pressure) always have the same value regardless of the previous history.
• If the initial state is unloaded, it is resumed after any deformation once the loads are removed. In elastic liquids and gases, the state is determined by the occupied volume, which is always the same under the same conditions.
• The material behavior is rate-independent: the speed at which a deformation (fluids: volume change) takes place has no influence on the resistance (pressure) that the body opposes to it.
• In the uniaxial tensile test, loading and unloading always follow the same path, as in the adjacent picture. For liquids and gases this corresponds to a compression and expansion test.

With sufficiently small deformations, the force-displacement relationship in solids is linear, and the elasticity can be described in terms of moduli. Because the force to be applied and the distance covered during a deformation depend largely on the dimensions of the body, the force is referred to its effective area and the distance to a suitable dimension of the body. The referred force is the stress and the referred distance is the strain. The moduli quantify the relationship between the stresses and the strains and are a material property: the modulus of elasticity (Young's modulus) applies to uniaxial tension, the shear modulus to shear, and the bulk modulus to all-round pressure. Under uniaxial tension, deformation occurs not only in the direction of tension but also transverse to it, which is captured by the dimensionless Poisson's ratio. The complete description of isotropic linear Cauchy elasticity requires only two of these quantities. For anisotropic linear behavior, up to 36 parameters are required (only the assumption of hyperelasticity allows the reduction to 21 parameters).

## Material objectivity

The principle of material objectivity states that the material behavior is independent of the reference system or, more precisely, invariant under a Euclidean transformation of the reference system of an observer. The decisive factor is the rotation of the moving system relative to the body.
The rotation of the frame of reference of the moving observer relative to the stationary observer is described by an orthogonal tensor $\mathbf{Q}$ from the special orthogonal group

$$\mathcal{SO} := \{\mathbf{Q} \in \mathcal{L} \mid \mathbf{Q}^{-1} = \mathbf{Q}^{\top} \text{ and } \det(\mathbf{Q}) = +1\}\,.$$

The set $\mathcal{L}$ contains all second-order tensors, $(\cdot)^{\top}$ denotes transposition, $(\cdot)^{-1}$ the inverse and $\det$ the determinant. The tensors of this group perform rotations without reflection. If the observer at rest relative to the body measures the deformation gradient $\mathbf{F}$ in a material point, then by the Euclidean transformation the moving observer measures

$$\mathbf{F}' = \mathbf{Q} \cdot \mathbf{F}\,.$$

The rotation transforms the Cauchy stress tensor, so that the moving observer notices

$$\boldsymbol{\sigma}' = \mathbf{Q} \cdot \boldsymbol{\sigma} \cdot \mathbf{Q}^{\top} = \mathbf{Q} \cdot \mathfrak{G}(\mathbf{F}) \cdot \mathbf{Q}^{\top} = \mathfrak{G}(\mathbf{F}') = \mathfrak{G}(\mathbf{Q} \cdot \mathbf{F})\,.$$

The movement possibilities of the observer are unlimited, so the above equation holds for all orthogonal tensors, and therefore

$$\mathfrak{G}(\mathbf{Q} \cdot \mathbf{F}) = \mathbf{Q} \cdot \mathfrak{G}(\mathbf{F}) \cdot \mathbf{Q}^{\top} \quad \text{for all} \quad \mathbf{Q} \in \mathcal{SO}$$

must apply. This condition is always fulfilled in elastic fluids, while for elastic solids a special modeling guideline has to be followed.

## Elastic fluids

Fluid is the collective term for liquids and gases. The elastic liquid is also known as the ideal liquid or Euler liquid, and the elastic gas as the frictionless gas.
Internal friction, which would show up as viscosity and thus as shear stresses, is neglected in elastic fluids, which is why the stress tensor there has diagonal form. Furthermore, every fluid is also isotropic. If a fluid is mentally cut into two parts, stresses develop on the cut surfaces that are perpendicular to them, because the pressure in an elastic fluid always acts perpendicularly on the delimiting surfaces. In an isotropic fluid, the normal stress must be the same for all orientations of the cut surface, which is only possible if the stress tensor is a multiple of the unit tensor $\mathbf{1}$:

$$\boldsymbol{\sigma} = \mathfrak{G}(\mathbf{F}) = \mathfrak{g}(\mathbf{F})\,\mathbf{1}\,.$$

The scalar function $\mathfrak{g}$ corresponds to the negative pressure $-p$, which in fluids depends kinematically only on the change in volume $\det(\mathbf{F})$, i.e. on the density

$$\rho = \frac{\rho_0}{\det(\mathbf{F})}$$

or the specific volume

$$v := \frac{1}{\rho} = \frac{\det(\mathbf{F})}{\rho_0}\,.$$

The material parameter $\rho_0$ is the density in the initial state, at $\det(\mathbf{F}) = 1$. A stress tensor of this form,

$$\boldsymbol{\sigma} = -\tilde{p}[\det(\mathbf{F})]\,\mathbf{1} = -p(\rho)\,\mathbf{1} = -\bar{p}(v)\,\mathbf{1}\,,$$

is called a pressure tensor.
### Material objectivity of fluids

In fluids, the stresses are proportional to the unit tensor and depend only on the determinant of the deformation gradient, which is why the above condition for objectivity,

$$\mathfrak{G}(\mathbf{Q} \cdot \mathbf{F}) = \mathbf{Q} \cdot \mathfrak{G}(\mathbf{F}) \cdot \mathbf{Q}^{\top} \quad \text{for all} \quad \mathbf{Q} \in \mathcal{SO}\,,$$

is always fulfilled:

$$\overbrace{\tilde{p}[\det(\mathbf{Q} \cdot \mathbf{F})]\,\mathbf{1}}^{\mathfrak{G}(\mathbf{Q} \cdot \mathbf{F})} = \tilde{p}[\det(\mathbf{Q})\det(\mathbf{F})]\,\mathbf{Q} \cdot \mathbf{1} \cdot \mathbf{Q}^{\top} = \mathbf{Q} \cdot \overbrace{\tilde{p}[\det(\mathbf{F})]\,\mathbf{1}}^{\mathfrak{G}(\mathbf{F})} \cdot \mathbf{Q}^{\top} \quad \text{for all} \quad \mathbf{Q} \in \mathcal{SO}\,.$$

### Material equations for elastic fluids

Many material equations for elastic fluids are called equations of state, which underlines that the pressure in them is always the same under the same conditions (temperature, volume, amount of substance), ensuring Cauchy elasticity. The density results from the volume and the amount of substance. The simplest possible relation between pressure and density is proportionality,

$$p = (R\,T)\,\rho\,,$$

which defines the ideal gas, in which the proportionality factor is the product of a material parameter $R$ and the absolute temperature $T$. The lower the pressure and the higher the temperature, the more a real gas behaves like an ideal one. With virial coefficients $B_i$, the ideal gas equation can be extended to

$$p = kT \sum_{i=1}^{N} B_i(T)\,\rho^{i}$$

to describe real gases and phase transitions. The factor $k$ is the Boltzmann constant. However, this equation is only suitable for dilute gases.
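As a numeric sketch of the ideal gas law just stated: the gas constant, temperature and density below are illustrative assumptions (roughly air at room conditions) and do not come from the article.

```python
# Ideal gas equation of state p = (R T) ρ, per unit mass.
# Assumed, illustrative values; not taken from the article.
R = 287.0      # specific gas constant of air [J/(kg·K)]
T = 300.0      # absolute temperature [K]
rho = 1.2      # density [kg/m^3]

p = R * T * rho               # pressure [Pa], about one atmosphere

# Doubling the density at fixed temperature doubles the pressure,
# as the proportionality p ∝ ρ requires:
assert R * T * (2 * rho) == 2 * p
```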
Alternatively, the pressure can also be formulated as a function of the specific volume $v$, as e.g. in the van der Waals equation

$$\bar{p}(v) = \frac{RT}{v - b} - \frac{a}{v^{2}}\,, \quad v > b\,,$$

with material parameters $a$, $b$ and $R$.

### Conservative fluids

A barotropic Cauchy elastic fluid is also hyperelastic, because the specific stress power $l_i$,

$$l_i := \frac{1}{\rho}\,\boldsymbol{\sigma} : \mathbf{d} = -\frac{1}{\rho}\,\bar{p}(v)\,\mathbf{1} : \mathbf{d} = -\frac{1}{\rho}\,\bar{p}(v)\,\operatorname{div}(\vec{v}) = \bar{p}(v)\,\frac{\dot{\rho}}{\rho^{2}} = -\bar{p}(v)\,\dot{v} =: \dot{w}(v)\,,$$

is the material time derivative of a scalar function $w$, which is the case only in hyperelastic materials. The material time derivative is indicated by the overdot or by $\mathrm{D}/\mathrm{D}t$ below. In the chain of equations, the Frobenius scalar product ":" of the pressure tensor with the strain rate tensor $\mathbf{d} := \tfrac{1}{2}(\mathbf{l} + \mathbf{l}^{\top})$ was used, which is the symmetric part of the velocity gradient $\mathbf{l} = \operatorname{grad}(\vec{v})$. The divergence of the velocity field $\vec{v}$ equals the trace of the velocity gradient:

$$\operatorname{div}(\vec{v}) = \operatorname{Sp}(\mathbf{l}) = \operatorname{Sp}(\mathbf{d}) = \mathbf{1} : \mathbf{d}\,.$$

By the local spatial mass balance, the divergence divided by the density is the material time derivative of the specific volume:

$$\dot{\rho} + \rho\,\operatorname{div}(\vec{v}) = 0 \quad \rightarrow \quad \frac{1}{\rho}\,\operatorname{div}(\vec{v}) = -\frac{\dot{\rho}}{\rho^{2}} = \frac{\mathrm{D}}{\mathrm{D}t}\left(\rho^{-1}\right) = \dot{v}\,.$$
This finally yields the specific deformation energy

$$w(v) := -\int \bar{p}(v)\,\mathrm{d}v \quad \leftrightarrow \quad \frac{\mathrm{d}w(v)}{\mathrm{d}v} = -\bar{p}(v) \quad \leftrightarrow \quad \dot{w}(v) = -\bar{p}(v)\,\dot{v}\,,$$

whose material time derivative in the barotropic case is, as shown in the chain of equations above, the specific stress power of the elastic fluid. The compression power of the pressure is converted completely and without dissipation into deformation energy in the barotropic fluid, which is why elastic barotropic fluids are always conservative.

### Incompressible liquids

In an incompressible elastic liquid, the density is constant to a good approximation, and the pressure no longer follows from the constitutive equation but from the laws of nature and the boundary conditions; it can still depend on the place $\vec{x}$ and the time $t$:

$$\boldsymbol{\sigma} = -p(\vec{x}, t)\,\mathbf{1}\,.$$

### Euler's equations in fluid mechanics

Inserting the pressure tensor into the momentum balance in Euler's formulation leads to Euler's equations of fluid mechanics, which together with the continuity equation model frictionless flows. Euler's equations are a good approximation for laminar flows when fluid-dynamic boundary layers at the edges of the flow do not play an essential role.

## Elastic solids

From a continuum mechanics point of view, solids differ from fluids in two main ways: firstly, they are able to withstand shear and tensile forces in equilibrium, and secondly, they can be anisotropic.
### Material objectivity of elastic solids

The reference-system invariance condition written in #Material objectivity,

$$\mathfrak{G}(\mathbf{Q} \cdot \mathbf{F}) = \mathbf{Q} \cdot \mathfrak{G}(\mathbf{F}) \cdot \mathbf{Q}^{\top} \quad \text{for all} \quad \mathbf{Q} \in \mathcal{SO}\,,$$

is not automatically fulfilled for solids, in contrast to fluids, but it can be ensured as follows. The deformation gradient $\mathbf{F}$ is polarly decomposed into an orthogonal tensor $\mathbf{R} \in \mathcal{SO}$ and a symmetric positive definite right stretch tensor $\mathbf{U}$:

$$\mathbf{F} = \mathbf{R} \cdot \mathbf{U}\,.$$

Now $\mathbf{Q} = \mathbf{R}^{\top}$ can be chosen, so that because of $\det(\mathbf{F}) = \det(\mathbf{R})\det(\mathbf{U}) = \det(\mathbf{U})$

$$\begin{aligned}
\mathbf{R}^{\top} \cdot \boldsymbol{\sigma} \cdot \mathbf{R} &= \mathbf{R}^{\top} \cdot \mathfrak{G}(\mathbf{F}) \cdot \mathbf{R} = \mathfrak{G}(\mathbf{R}^{\top} \cdot \mathbf{F}) = \mathfrak{G}(\mathbf{R}^{\top} \cdot \mathbf{R} \cdot \mathbf{U}) = \mathfrak{G}(\mathbf{U}) \\
\rightarrow \boldsymbol{\sigma} &= \mathbf{R} \cdot \mathfrak{G}(\mathbf{U}) \cdot \mathbf{R}^{\top} \\
\rightarrow \tilde{\mathbf{T}} &:= \det(\mathbf{F})\,\mathbf{F}^{-1} \cdot \boldsymbol{\sigma} \cdot \mathbf{F}^{\top-1} = \det(\mathbf{U})\,\mathbf{U}^{-1} \cdot \mathfrak{G}(\mathbf{U}) \cdot \mathbf{U}^{\top-1}
\end{aligned}$$

results. According to the principle of material objectivity, the second Piola-Kirchhoff stress tensor $\tilde{\mathbf{T}}$ in Cauchy elasticity is exclusively a function of the current right stretch tensor $\mathbf{U}$.
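The polar decomposition $\mathbf{F}=\mathbf{R}\cdot\mathbf{U}$ used in the argument above can be computed numerically via the singular value decomposition. The deformation gradient below is an arbitrary illustrative example with positive determinant, not taken from the article.

```python
import numpy as np

# Polar decomposition F = R·U via the SVD F = W diag(s) Vt:
#   R = W·Vt is a proper orthogonal rotation,
#   U = Vt.T·diag(s)·Vt is the symmetric positive definite right stretch tensor.
F = np.array([[1.1, 0.2, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.05]])   # example deformation gradient, det(F) > 0

W, s, Vt = np.linalg.svd(F)
R = W @ Vt
U = Vt.T @ np.diag(s) @ Vt

assert np.allclose(R @ U, F)                          # F = R·U
assert np.allclose(R.T @ R, np.eye(3))                # R orthogonal
assert np.isclose(np.linalg.det(R), 1.0)              # rotation without reflection
assert np.allclose(U, U.T)                            # U symmetric
assert np.all(np.linalg.eigvalsh(U) > 0)              # U positive definite
assert np.isclose(np.linalg.det(F), np.linalg.det(U)) # det(F) = det(R) det(U) = det(U)
```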
This condition is both necessary and sufficient for material objectivity. Material objectivity is therefore ensured in solid models by constitutive equations in the Lagrangian formulation. The stress tensor $\tilde{\mathbf{T}}$ is mostly modeled as a function of the right Cauchy-Green tensor

$$\mathbf{C} := \mathbf{F}^{\top} \cdot \mathbf{F} = \mathbf{U} \cdot \mathbf{U}$$

or of the Green-Lagrange strain tensor

$$\mathbf{E} := \frac{1}{2}(\mathbf{F}^{\top} \cdot \mathbf{F} - \mathbf{1}) = \frac{1}{2}(\mathbf{U} \cdot \mathbf{U} - \mathbf{1})\,,$$

for example:

$$\tilde{\mathbf{T}} = \tilde{\mathfrak{G}}(\mathbf{C}) \quad \Leftrightarrow \quad \boldsymbol{\sigma} = \frac{1}{\det(\mathbf{F})}\,\mathbf{F} \cdot \tilde{\mathfrak{G}}(\mathbf{C}) \cdot \mathbf{F}^{\top}\,.$$

### Cauchy derivative

Differentiating the above equation with respect to time yields, with the abbreviation $J = \det(\mathbf{F})$:

$$\begin{aligned}
\dot{\boldsymbol{\sigma}} &= -\frac{1}{J^{2}}\,J\,(\mathbf{F}^{\top-1} : \dot{\mathbf{F}})\,\mathbf{F} \cdot \tilde{\mathfrak{G}} \cdot \mathbf{F}^{\top} + \frac{1}{J}\,\dot{\mathbf{F}} \cdot \tilde{\mathfrak{G}} \cdot \mathbf{F}^{\top} + \frac{1}{J}\,\mathbf{F} \cdot \dot{\tilde{\mathfrak{G}}} \cdot \mathbf{F}^{\top} + \frac{1}{J}\,\mathbf{F} \cdot \tilde{\mathfrak{G}} \cdot \dot{\mathbf{F}}^{\top} \\
\rightarrow \overset{\diamond}{\boldsymbol{\sigma}} &= \frac{1}{J}\,\mathbf{F} \cdot \dot{\tilde{\mathfrak{G}}} \cdot \mathbf{F}^{\top}
\end{aligned}$$

On the left of the last equation is the objective Cauchy derivative

$$\overset{\diamond}{\boldsymbol{\sigma}} = \dot{\boldsymbol{\sigma}} + \operatorname{Sp}(\mathbf{l})\,\boldsymbol{\sigma} - \mathbf{l} \cdot \boldsymbol{\sigma} - \boldsymbol{\sigma} \cdot \mathbf{l}^{\top}\,,$$

which is formed with the velocity gradient

$$\mathbf{l} = \dot{\mathbf{F}} \cdot \mathbf{F}^{-1}\,.$$

### Linear Cauchy elasticity

In linear Cauchy elasticity, the six independent components of the strains $E_{ij}$ are linearly linked with the six independent components of the stresses $\tilde{T}_{ij}$:

$$\begin{aligned}
\tilde{T}_{ij} &= \tilde{T}_{ji} = \sum_{k,l=1}^{3} C_{ijkl}\,E_{kl} = \sum_{k,l=1}^{3} C_{ijkl}\,E_{lk}\,, \quad i,j = 1,2,3 \\
\rightarrow C_{ijkl} &= C_{jikl} = C_{ijlk}
\end{aligned}$$

for which at most 36 independent coefficients $C_{ijkl}$ are necessary. To describe a linear Cauchy elastic material, at most 36 parameters are therefore required. Only in hyperelasticity does $C_{ijkl} = C_{klij}$ also hold, so that there at most 21 parameters suffice.

### Isotropic Cauchy elasticity

If the Cauchy stress tensor $\boldsymbol{\sigma}$, as in isotropic hyperelasticity, is expressed as a function $\boldsymbol{\sigma} = \mathcal{G}(\mathbf{b})$ of the left Cauchy-Green tensor

$$\mathbf{b} := \mathbf{F} \cdot \mathbf{F}^{\top}\,,$$

then the principle of material objectivity demands:

$$\begin{aligned}
\boldsymbol{\sigma}' &= \mathcal{G}(\mathbf{b}') \\
\rightarrow \mathbf{Q} \cdot \boldsymbol{\sigma} \cdot \mathbf{Q}^{\top} &= \mathcal{G}\big((\mathbf{Q} \cdot \mathbf{F}) \cdot (\mathbf{Q} \cdot \mathbf{F})^{\top}\big) = \mathcal{G}(\mathbf{Q} \cdot \mathbf{F} \cdot \mathbf{F}^{\top} \cdot \mathbf{Q}^{\top}) = \mathcal{G}(\mathbf{Q} \cdot \mathbf{b} \cdot \mathbf{Q}^{\top})\,,
\end{aligned}$$

so

$$\mathbf{Q} \cdot \mathcal{G}(\mathbf{b}) \cdot \mathbf{Q}^{\top} = \mathcal{G}(\mathbf{Q} \cdot \mathbf{b} \cdot \mathbf{Q}^{\top}) \quad \text{for all} \quad \mathbf{Q} \in \mathcal{SO}\,.$$

A function $\mathcal{G}$ with this property is an isotropic tensor function. Such a function can be written in the form

$$\mathcal{G}(\mathbf{b}) = \phi_0\,\mathbf{1} + \phi_1\,\mathbf{b} + \phi_2\,\mathbf{b} \cdot \mathbf{b}\,,$$

where the coefficients $\phi_{0,1,2}$ are scalar functions of the principal invariants or other invariants of $\mathbf{b}$. According to the Cayley-Hamilton theorem, it can equivalently be written as

$$\mathcal{G}(\mathbf{b}) = \varphi_0\,\mathbf{1} + \varphi_1\,\mathbf{b} + \varphi_{-1}\,\mathbf{b}^{-1}$$

with different coefficients $\varphi_{-1,0,1}$. In any case, $\mathcal{G}(\mathbf{b})$ and $\mathbf{b}$ commute:

$$\mathbf{b} \cdot \mathcal{G}(\mathbf{b}) = \mathcal{G}(\mathbf{b}) \cdot \mathbf{b}\,.$$

In particular, in hyperelasticity

$$\mathbf{b} \cdot \frac{\mathrm{d}\psi(\mathbf{b})}{\mathrm{d}\mathbf{b}} = \frac{\mathrm{d}\psi(\mathbf{b})}{\mathrm{d}\mathbf{b}} \cdot \mathbf{b}\,,$$

where the derivative of the strain energy density $\psi$ with respect to $\mathbf{b}$ is an isotropic tensor.
${\ displaystyle \ psi}$${\ displaystyle \ mathbf {b}}$ ### Navier-Cauchy equations In linear isotropic elasticity, the stress tensor for small deformations is a linear isotropic tensor function of the linearized strain tensor : ${\ displaystyle {\ boldsymbol {\ sigma}}}$${\ displaystyle {\ boldsymbol {\ varepsilon}}}$ ${\ displaystyle {\ boldsymbol {\ sigma}} = \ lambda \ operatorname {Sp} ({\ boldsymbol {\ varepsilon}}) \ mathbf {1} +2 \ mu {\ boldsymbol {\ varepsilon}}}$ Therein are the first and second Lamé constants . This material equation is known as Hooke's law and is also hyperelastic at the same time. If the strain tensor is expressed by the displacements and inserted into the first Cauchy-Euler's law of motion , which corresponds to the local momentum balance , this leads to the Navier-Cauchy equations. ${\ displaystyle \ lambda, \, \ mu}$${\ displaystyle {\ vec {u}}}$ ### Thermodynamic consistency Even if the stresses in a Cauchy elastic material are exclusively determined by the current state of deformation, the work of deformation will generally depend on the path in which the deformation is carried out. In the absence of a dissipation mechanism, this is in contradiction to thermodynamic principles. This is also shown by the evaluation of the Clausius-Duhem inequality, which represents the second law of thermodynamics in solid mechanics . In the case of an isothermal change of state , the Clausius-Duhem inequality reads ${\ displaystyle {\ frac {1} {\ rho _ {0}}} {\ tilde {\ mathbf {T}}}: {\ dot {\ mathbf {E}}} - {\ dot {\ psi}} _ {0} \ geq 0 \ ,,}$ where represents the Helmholtz free energy in the Lagrangian formulation . 
If the free energy is a function of elongation only, which is plausible with Cauchy elasticity, then it follows ${\ displaystyle \ psi _ {0}}$ ${\ displaystyle {\ frac {1} {\ rho _ {0}}} {\ tilde {\ mathbf {T}}}: {\ dot {\ mathbf {E}}} - {\ dot {\ psi}} _ {0} = {\ frac {1} {\ rho _ {0}}} {\ tilde {\ mathbf {T}}}: {\ dot {\ mathbf {E}}} - {\ frac {\ mathrm {d} \ psi _ {0}} {\ mathrm {d} \ mathbf {E}}}: {\ dot {\ mathbf {E}}} = \ left ({\ frac {1} {\ rho _ { 0}}} {\ tilde {\ mathbf {T}}} - {\ frac {\ mathrm {d} \ psi _ {0}} {\ mathrm {d} \ mathbf {E}}} \ right): { \ dot {\ mathbf {E}}} \ geq 0 \ ,.}$ This inequality has to be fulfilled for all possible processes, which is why the term in brackets, because it does not depend on the speed of the distortion, has to disappear and therefore ${\ displaystyle {\ frac {1} {\ rho _ {0}}} {\ tilde {\ mathbf {T}}} = {\ frac {\ mathrm {d} \ psi _ {0}} {\ mathrm { d} \ mathbf {E}}}}$ applies. A material with such a material equation is hyperelastic . In hyperelasticity, the work of deformation is path-independent. A path dependency of the deformation work occurs in the following example. ## example Tensile stress in the uniaxial tensile test The material model ${\ displaystyle {\ tilde {\ mathbf {T}}} = G (\ mathbf {C} - \ mathbf {1}) + \ gamma {\ parallel \ mathbf {C} \ parallel} ^ {n} \ ln ( \ operatorname {det} (\ mathbf {C})) \ mathbf {C} ^ {- 1}}$ with material parameters and is Cauchy elastic and fulfills the criterion of material objectivity. The Frobenius norm is the trace operator calculated ${\ displaystyle G, \ gamma}$${\ displaystyle n}$ ${\ displaystyle \ parallel \ mathbf {C} \ parallel}$${\ displaystyle \ operatorname {Sp}}$ ${\ displaystyle \ parallel \ mathbf {C} \ parallel = {\ sqrt {\ operatorname {Sp} (\ mathbf {C} ^ {\ top} \ cdot \ mathbf {C})}}}$ and is an invariant of the argument . 
The function " " is the natural logarithm , the parameter corresponds to the shear modulus and regulates the compressibility . With is Simo and Pister's model and is hyper-elastic. Because of and one gets Cauchy's stress tensor ${\ displaystyle \ mathbf {C}}$${\ displaystyle \ ln}$${\ displaystyle G}$${\ displaystyle \ gamma}$${\ displaystyle n = 0}$${\ displaystyle \ operatorname {det} (\ mathbf {C}) = \ operatorname {det} (\ mathbf {b}) = \ operatorname {det} {(\ mathbf {F})} ^ {2}}$${\ displaystyle \ operatorname {Sp} (\ mathbf {C \ cdot C}) = \ operatorname {Sp} (\ mathbf {b \ cdot b})}$ ${\ displaystyle {\ boldsymbol {\ sigma}} = {\ dfrac {1} {\ operatorname {det} (\ mathbf {F})}} \ mathbf {F} \ cdot {\ tilde {\ mathbf {T}} } \ cdot \ mathbf {F} ^ {\ top} = {\ dfrac {G} {\ sqrt {\ operatorname {det} (\ mathbf {b})}}} (\ mathbf {b \ cdot b} - \ mathbf {b}) + \ gamma {\ dfrac {{\ parallel \ mathbf {b} \ parallel} ^ {n} \ ln (\ operatorname {det} (\ mathbf {b}))} {\ sqrt {\ operatorname {det} (\ mathbf {b})}}} \ mathbf {1}}$ depending on the left stretch tensor . So the coefficients are ${\ displaystyle \ mathbf {b}}$${\ displaystyle {\ phi} _ {0,1,2}}$ {\ displaystyle {\ begin {aligned} \ phi _ {0} = & \ gamma {\ dfrac {{\ parallel \ mathbf {b} \ parallel} ^ {n} \ ln (\ operatorname {det} (\ mathbf { b}))} {\ sqrt {\ operatorname {det} (\ mathbf {b})}}} \\ - \ phi _ {1} = & \ phi _ {2} = {\ dfrac {G} {\ sqrt {\ operatorname {det} (\ mathbf {b})}}} \,. \ end {aligned}}} In the relaxed state is and therefore . The picture shows the course of the Cauchy stress with material parameters G = 1  megapascal and γ = 7 MPa in a uniaxial tensile test. ${\ displaystyle \ mathbf {b} = \ mathbf {1}}$${\ displaystyle {\ boldsymbol {\ sigma}} = \ mathbf {0}}$ With the deformation work is path-dependent, as will now be shown. 
In a material point, let the deformation gradient be in the time interval${\ displaystyle n = -2}$${\ displaystyle t \ in [0,1]}$ Estimation of an integral using step functions. Red: upper limit, green: lower limit {\ displaystyle {\ begin {aligned} \ mathbf {F} (t, p) = & {\ begin {pmatrix} 1 + t & 0 & 0 \\ 0 & 1 + pt (1-t) & 0 \\ 0 & 0 & 1 \ end {pmatrix}} \\\ rightarrow {\ dot {\ mathbf {F}}} (t, p) = & {\ begin {pmatrix} 1 & 0 & 0 \\ 0 & p (1-2t) & 0 \\ 0 & 0 & 0 \ end {pmatrix}} \ end { aligned}}} specified with the parameter in path 1 and path 2. The two paths have the same start and end points in the specified time interval. Now you can numerically integrate the deformation power density along the two deformation paths with a step function to the deformation work density performed , see figure on the right: ${\ displaystyle p = 0}$${\ displaystyle p = 1}$ ${\ displaystyle l}$ ${\ displaystyle \ psi}$ {\ displaystyle {\ begin {aligned} \ psi (t, p) = & \ rho _ {0} \ int _ {0} ^ {1} l (t, p) \, \ mathrm {d} t = \ int _ {0} ^ {1} {\ tilde {\ mathbf {T}}}: {\ dot {\ mathbf {E}}} \, \ mathrm {d} t = \ int _ {0} ^ {1 } {\ dfrac {1} {2}} {\ tilde {\ mathbf {T}}} (t, p): {\ dot {\ mathbf {C}}} (t, p) \, \ mathrm {d } t \\\ approx & \ displaystyle \ sum _ {i = 0} ^ {999} {\ dfrac {1} {2000}} {\ tilde {\ mathbf {T}}} \ left ({\ dfrac {i + \ xi} {1000}}, p \ right): {\ dot {\ mathbf {C}}} \ left ({\ dfrac {i + \ xi} {1000}}, p \ right) \,. \ end { aligned}}} Deformation work and performance along the two paths are numerically evaluated using ${\ displaystyle \ xi = {\ frac {1} {2}}}$ Because the power increases monotonically with time at n = −2, see figure on the right, one obtains a lower limit with ξ = 0 and an upper limit with ξ = 1 for the voltage work performed. 
With the material parameters $G = 1$ MPa and $\gamma = 7$ MPa one calculates:

$$\begin{aligned}
2.6795\,\mathrm{MPa} \leq\; & \psi(t,0) \leq 2.6857\,\mathrm{MPa}\\
2.8214\,\mathrm{MPa} \leq\; & \psi(t,1) \leq 2.8271\,\mathrm{MPa}
\end{aligned}$$

One MPa corresponds to one joule per cubic centimeter (J/cm³). With $n = -2$, different deformation work is done along the two paths, which is why the material is then not hyperelastic. One could now load along the first path, spending less than 2.7 J/cm³, and unload along the second path, recovering more than 2.8 J/cm³, a gain of more than 0.1 J/cm³ per cycle. By running through the cycle repeatedly, an arbitrary amount of energy could be generated. But that contradicts thermodynamic principles. Hyperelasticity, which is a special case of Cauchy elasticity, avoids this contradiction.

## Footnotes

1. Bestehorn (2006), p. 52.
2. Haupt (2000), pp. 279ff.
3. Bestehorn (2006), p. 57.
4. This derivative is also named after C. Truesdell. He himself named the derivative after Cauchy and wrote in 1963 that this rate was named after him for no good reason ("came to be named, for no good reason, after [...] me"); see C. Truesdell: Remarks on Hypo-Elasticity, Journal of Research of the National Bureau of Standards - B. Mathematics and Mathematical Physics, Vol. 67B, No. 3, July-September 1963, p. 141.
5. The Fréchet derivative of a scalar function $f$ with respect to a tensor $\mathbf{T}$ is the tensor $\mathbf{A}$ which, if it exists, corresponds in all directions $\mathbf{H}$ to the Gâteaux differential, i.e.

   $$\mathbf{A}:\mathbf{H} = \left.\frac{\mathrm{d}}{\mathrm{d}s}\,f(\mathbf{T} + s\,\mathbf{H})\right|_{s=0} = \lim_{s\to 0}\frac{f(\mathbf{T}+s\,\mathbf{H}) - f(\mathbf{T})}{s}\quad\text{for all}\quad\mathbf{H}\in\mathcal{L}$$

   applies. The scalar $s$ is a real number. Then one also writes

   $$\frac{\partial f}{\partial\mathbf{T}} = \mathbf{A}.$$

6. J. C. Simo, K. S. Pister: Remarks on Rate Constitutive Equations for Finite Deformation Problems: Computational Implications. In: Computer Methods in Applied Mechanics and Engineering. 46, 1984, pp. 201-215. The associated strain energy density is

   $$\begin{aligned}
   \psi =& \frac{G}{4\rho_0}\operatorname{Sp}(\mathbf{C}\cdot\mathbf{C}) - \frac{G}{2\rho_0}\operatorname{Sp}(\mathbf{C}) + \frac{\gamma}{4\rho_0}\,\ln(\operatorname{det}(\mathbf{C}))^2
   \\
   \rightarrow \tilde{\mathbf{T}} =& \rho_0\,\frac{\mathrm{d}\psi}{\mathrm{d}\mathbf{E}} = 2\rho_0\,\frac{\mathrm{d}\psi}{\mathrm{d}\mathbf{C}} = G(\mathbf{C}-\mathbf{1}) + \gamma\,\ln(\operatorname{det}(\mathbf{C}))\,\mathbf{C}^{-1}
   \end{aligned}$$
https://www.speedsolving.com/tags/stickers/
# stickers

1. ### How to sticker big cubes / high order puzzles
   I have seen a lot of people struggling to sticker cubes so I thought I would do a tutorial aimed at the bigger ones (high order). Obviously there is no right or wrong way and it's more important to find something that works for you. A factory worker will almost certainly use some kind of...

2. ### [STICKERS] Lazuli
   TL;DR: Want extra cash, realize stickers are a good idea, selling them for ~$1 PER SET. Custom sticker commissions available soon! Catalog at bottom of post. Also selling on reddit. What are the prices? I would say $1 per set of six colors, or $0.25 for individual, one-color sheets. All...

3. ### Help identifying sticker format for the Eyeopener 3x3x3?
   Hello! I have an oddball, cheapie 3x3x3 in my collection from a company called Eyeopener, and I'm a bit sentimental about it. However, I'm old, my eyesight's not what it used to be, and I have a lot of trouble distinguishing the yellow and white stickers under house light at night. I'm looking...

4. ### Is Cubesmith still in business?
   I couldn't find a recent thread about this. I ordered some stickers on the 11th. I never heard from them if anything was sent out and sent an email about a week ago with still no response. Has anyone else ordered anything from them recently?

5. I have many sets of stickers I don't need. I am willing to trade them for something of equal or greater value. Or sell them as well. All of these stickers have never been used, and are in excellent condition. I can mail them to you in the US for free. I have: Type F Stickers /w application...

6. ### Sticker Reviews?
   Hi, I am looking to get some new stickers for my weilong v1 and debating on full or normal, cubesmith.com or thecubicle.us? Also considering pink instead of orange. I've noticed that cubesmith stickers chip a lot; I have an old rubik's that I restickered and they are beat up. Advice?

7. ### Megaminx Colorblind Stickers
   Hello, I'm new to this forum, though I've been cubing for quite a while (I'm still not the fastest, though). I'm interested in purchasing a megaminx, but it seems that I can't find a suitable color scheme. I've searched for other threads dealing with the same topic, however none dealt directly...

8. ### WTB A few things
   Hey, I'm looking for a few things: a bunch of 2x2s, preferably shishuang, Dayan, wittwo, or lingpo; a Stackmat timer (pro or gen2), and if you have any other accessories for it as well (mat, bag, display, etc) let me know; some stickers and application tape/tools, for the following...

9. ### SubXX Sticker Review | TheCubeSpecialists.com
   I wanted to show you these awesome stickers. They're a great alternative to thecubicle.us stickers, especially because of their little perks. Also, I absolutely LOVE the application tape. Here's a review: http://youtu.be/h5EWtcjB3xM

10. ### Sticker Colors
    Do you guys know what specific colors the stickers are in the picture? I'm not sure, but I plan on buying stickers and I just wanted to know what to order. Thank you.

11. I know there haven't been any threads like this before concerning just stickers. No thread has solely covered this area in cubing. I think that it would be a good idea to just post about what kind of stickers you use on your cubes, what colours, size, logos, where you buy them, how often...

12. ### "SubXX"-Stickers - thecubespecialists.com
    Hey speedsolving.com members! There are some new stickers out. They are called "Sub-XX" stickers. The only place to get them is: http://www.thecubespecialists.com/ Please note: The original site is German and the owner doesn't have enough time to translate it now, so it's temporarily translated...

14. ### help with cubesmith sticker color
    I'm going to make an order now from cubesmith, buying 2x2x2, 3x3x3, 4x4x4 bright set stickers, but I don't like the pink instead of red (don't like the bright set with normal red either). So I saw that there was a fluorescent orange (custom color single) that for me on the image looks like a bright...

15. ### Will cubesmith ever make...
    I really want to see some glow in the dark stickers that glow each separate color, you know, not all green. Like red side glows red, blue side glows blue, etc. And some that don't die after 20 seconds. That would be awesome! What would you like to see cubesmith carry? And do any of you...

16. ### Glow-in-the-dark stickers
    Does anybody know where I can get glow-in-the-dark stickers, or even small vials of glow-in-the-dark spraypaint for me to sticker a cube different colours?

17. ### Printing on Cubesmith Vinyl Sheets
    Hi everyone, has anyone successfully laser printed on Cubesmith vinyl sheets? I want to make custom logos. The site says that inkjet printing won't work. The other option is to make a small stencil and spray paint. Or is there a simpler way? Hand drawn logos using ordinary permanent...

18. ### Guhong sticker size?
    I've got a Dayan Guhong (Lone Goose) and an Alpha V, but I ordered 1 set of normal stickers (cubesmith) and one set of the smaller size. Which size should I put on which cube? *EDIT* Basically what I'm asking is which looks better.

19. ### Make my own rubik's cube!?
    Hi, I like my cubes, but I want to make one perfect for me, so I decided I want to make my own. Example: buy a Type A-V core and Ghost Hand II pieces, make my own logo, and print my own stickers out. But there is a problem. I don't have many cubes, so I don't know which core/pieces I should...
https://www.texdev.net/2013/09/01/customising-texworks-auto-completion/?shared=email&msg=fail
# Customising TeXworks auto-completion

TeXworks is a very flexible editor, and one of the things you can customise, if you want, is the set of auto-completion values it knows. For those of you who are not familiar with it, TeXworks uses a simple list of completion options, so when I type \doc I can press the Tab key and be offered

    \documentclass{}

and if I press Tab again,

    \documentclass[]{}

That's very useful, but some of the auto-complete options are not ones I use a lot. There are also a few inconsistencies in how the results are formatted: while TeXworks inherited a basic set from TeXShop, it also comes with some additions, and they don't always quite agree on how things should work! So I've been looking a bit at sorting out my own custom set, adding things I use, removing ones I don't, and so on.

The basic format for the auto-complete files is to have a first line for the encoding

    %% !TEX encoding = UTF-8 Unicode

then one or more lines for each completion. Each line can either just have a completion value

    \alpha

or have a 'shortcut' text version

    xa:=\alpha

There are then a few bits of 'helper' syntax. You can use #INS# to show where the cursor should end up, #RET# for a return, and • as a marker you can jump to using Ctrl-Tab (Option-Tab on the Mac). So for example the \documentclass lines read

    \documentclass{#INS#}#RET#
    \documentclass[#INS#]{•}#RET#

I'm told that TeXShop has extended the syntax a bit, but at least at the moment in TeXworks that's all there is to know.

So what have I done to customise the files? TeXworks comes with four auto-complete files, but the values offered simply come from them all together (you can't currently select only some files). (You might wonder where these files live: they are 'hidden away': TeXworks will tell you how to find them from Help -> Settings and resources.) So my first move was to create one new file, after first backing up the originals of course!
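Putting those pieces together, a complete custom file is just a handful of lines. This sketch only recombines the entries shown above; the file name you save it under is your choice:

```
%% !TEX encoding = UTF-8 Unicode
xa:=\alpha
\documentclass{#INS#}#RET#
\documentclass[#INS#]{•}#RET#
```

Each non-comment line is one completion, and the `:=` form gives you a typed shortcut for the full text on the right.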
I then did a few experiments, thinking about what I use a lot, what I'm used to, etc. I did wonder about some of the choices in the standard files, but a bit of experimentation suggests they are not so bad! So I've currently ended up mainly just adding a few things, for example

    \begin{tikzpicture}[#INS#]#RET#•#RET#\end{tikzpicture}•

for TikZ pictures, and

    cs:=\cs{#INS#}
    pkg:=\pkg{#INS#}
    \cs{#INS#}
    \pkg{#INS#}

for package development work. I'm also not too keen on having too many of the 'shortcut' values, which don't start with \, so I've removed most of them and have just a core set (things like bf and em). If you want to see my current full set, you can of course download it.

So is there anything I'd like to see added to the way auto-complete works? I have a few ideas! From my questions on the TeXworks mailing list, I've picked up that TeXShop maintains indentation when doing auto-complete: TeXworks doesn't, and I think it would be a good addition. TeXShop also allows an extended syntax

    \documentclass[#INS#•#INS#]{•}#RET#
    \rule[#INS#•‹lift›#INS#]{•‹width›}{•‹height›}

where you can always have a • for 'fill in' and have 'reminders' about what the values are. That looks useful too.
https://www.gamedev.net/topic/612210-c-help-me-organize-my-includes-correctly/
# c++ help me organize my includes correctly

32 replies to this topic

#1 breakspirit (Members) - Posted 07 October 2011 - 05:09 PM

I'm setting up classes and I'm having a hard time getting my includes to stop giving errors. Here's where I'm at:

```cpp
#pragma once
#include <stdio.h>
#include <string>
#include "RakPeerInterface.h"
#include "MessageIdentifiers.h"
#include "BitStream.h"
#include "RakNetTypes.h"  // MessageID
#include "GetTime.h"
#include <iostream>
#include <vector>
#include <fstream>
#include "Kbhit.h"
#include "Variables.h"
#include "GameSimulation.h"
#include "NetCode.h"

Player newPlayer; // a player object to give to new players.
GameSimulation game;
NetCode netConnection;

main()
```

```cpp
#ifndef GAME_SIMULATION
#define GAME_SIMULAION
#include "Player.h"
class GameSimulation {all the gamesimulation stuff (definitions)}
GameSimulation implementation
#endif
```

```cpp
class Player {all the player stuff (definitions)}
Player implementation
```

```cpp
#ifndef NET_CODE
#define NET_CODE
#pragma once
#include <stdio.h>
#include <string>
#include "RakPeerInterface.h"
#include "MessageIdentifiers.h"
#include "BitStream.h"
#include "RakNetTypes.h"  // MessageID
#include "GetTime.h"
#include <iostream>
#include <vector>
#include <fstream>
#include "Kbhit.h"
using namespace std;
#include "Variables.h"
#include "Player.h"
#include "GameSimulation.h"
class NetCode {all the netcode class stuff (definitions)}
#endif
```

```cpp
#include "NetCode.h"
netcode method implementations
```

```cpp
global variables that I want everything to be able to access; things like enums and handles
```

My stuff is a little jumbled around and there's some stuff commented out because I've been playing with it for like an hour trying to get it to work, and now it's a bit of a mess. I hope I've made it clear what I'm trying to do.
I'm familiar with how includes work in the sense that they simply substitute in whatever they're including into the original source file, but apparently I'm missing something, because I'm getting all sorts of errors about files not knowing about variable types and such. I'd paste them, but there are hundreds, and I know that it's just trying to tell me that my includes aren't working right. Basically, Variables.h needs to be available everywhere, the netcode needs the gamesimulation to pass it messages, and gamesimulation needs to see players. Main needs to be able to access the whole shebang. Thanks for any help you guys can give.

#2 SiCrane (Moderators) - Posted 07 October 2011 - 05:12 PM

#3 Juliean (GDNet+) - Posted 07 October 2011 - 05:19 PM

Can't discuss all your code now, just some suggestions:

- You can't, for example, include a.h into b.h and then b.h into a.h. That will give you errors en masse. Make sure your inclusion list doesn't do that unintentionally.
- #pragma once on top of every header file can also reduce errors. However, make sure there is no a->b->a relationship in your classes.

#4 breakspirit (Members) - Posted 07 October 2011 - 05:36 PM

Thanks for the responses. I have read that article and am rereading it to see if I've missed some nugget of useful information. I see nowhere in which I have a cyclical include. My answer could probably be ascertained if someone can tell me why NetCode.cpp doesn't know what game or newPlayer are, even after I move #include "NetCode.h" to after their declarations in main.cpp. Basically, in both of the following cases, NetCode.cpp knows nothing about game or newPlayer:

```cpp
#include "NetCode.h"
Player newPlayer;
GameSimulation game;
NetCode netConnection;
```

```cpp
Player newPlayer;
GameSimulation game;
NetCode netConnection;
#include "NetCode.h"
```

#5 pulpfist (Members) - Posted 07 October 2011 - 05:37 PM

> I'd paste them, but there's hundreds and I know that it's just trying to tell me that my includes aren't working right.
In cases like this, the error messages at the top are usually the relevant ones. You might want to paste those.

#6 pulpfist (Members) - Posted 07 October 2011 - 05:45 PM

I think you should use either #ifndef ... #define ... #endif or #pragma once, but not both. Also make sure that your inclusion guards are unique for each header. Copy and pasting can sometimes lead to header files using the same inclusion guards, something that could cause problems.

#7 breakspirit (Members) - Posted 07 October 2011 - 05:50 PM

> In cases like this, the error messages at the top are usually the relevant ones. You might want to paste those.

Yeah, I've been trying to go from the top down. Here's the current first bunch of errors:

```
1>...\gamesimulation.h(20): error C2143: syntax error : missing ';' before '<'   <-- This part refers to a vector of Players I'm trying to create within GameSimulation
1>...\gamesimulation.h(20): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>...\gamesimulation.h(20): error C2238: unexpected token(s) preceding ';'
1>...\gamesimulation.h(40): error C2065: 'playerList' : undeclared identifier
1>...\gamesimulation.h(40): error C2228: left of '.push_back' must have class/struct/union
1>        type is ''unknown-type''
1>...\gamesimulation.h(41): error C2065: 'playerList' : undeclared identifier
1>...\gamesimulation.h(51): error C2065: 'gameWorld' : undeclared identifier
1>...\netcode.cpp(95): error C2065: 'newPlayer' : undeclared identifier
1>...\netcode.cpp(95): error C2227: left of '->playerGuid' must point to class/struct/union/generic type
1>        type is ''unknown-type''
1>...\netcode.cpp(96): error C2065: 'game' : undeclared identifier
1>...\netcode.cpp(96): error C2228: left of '.addNewPlayer' must have class/struct/union
1>        type is ''unknown-type''
1>...\netcode.cpp(96): error C2065: 'newPlayer' : undeclared identifier
```

#8 breakspirit (Members) - Posted 07 October 2011 - 05:55 PM

> I think you should use either #ifndef ... #define ... #endif or #pragma once, but not both. Also make sure that your inclusion guards are unique for each header. Copy and pasting can sometimes lead to header files using the same inclusion guards, something that could cause problems.

Alright, I went ahead and made sure all files had only one inclusion guard. They are unique for each file. I also went ahead and made sure Variables.h has inclusion guards. Same errors unfortunately, but thanks for the suggestions.
#9 pulpfist (Members) - Posted 07 October 2011 - 06:11 PM

That first error was a pretty good hint. Take a close look at this:

```cpp
#ifndef GAME_SIMULATION
#define GAME_SIMULAION
```

edit: The error itself is actually quite obscure, but it basically indicates that the compiler has stumbled over a data type it doesn't recognize.

#10 breakspirit (Members) - Posted 07 October 2011 - 06:30 PM

> That first error was a pretty good hint. Take a close look at this: #ifndef GAME_SIMULATION / #define GAME_SIMULAION

Haha, yeah, I saw that and fixed it and smacked myself. Still getting errors though. I'm down to just two. Here they are:

```
1>c:\users\kev\desktop\game stuff\raknet stuff\raknettest\raknettest\netcode.cpp(100): error C2027: use of undefined type 'GameSimulation'
1>        c:\users\kev\desktop\game stuff\raknet stuff\raknettest\raknettest\netcode.h(22) : see declaration of 'GameSimulation'
1>c:\users\kev\desktop\game stuff\raknet stuff\raknettest\raknettest\netcode.cpp(100): error C2228: left of '.addNewPlayer' must have class/struct/union
```

I've done a lot of moving things around, but basically I think my problem lies in NetCode.h, because netcode.cpp still doesn't know anything about game but mysteriously doesn't have a problem with player. Here's the relevant chunk of code:

```cpp
#include "Variables.h"
//#include "Player.h"
//#include "GameSimulation.h"
class GameSimulation;
class Player;
extern Player *newPlayer;
extern GameSimulation game;
class NetCode {}
```

I don't understand why it needs me to include Variables.h again. You'll notice that I tried using extern in there, but it has not fixed my issues.
#11 pulpfist (Members) Posted 07 October 2011 - 06:56 PM

breakspirit wrote: Haha, yeah, I saw that and fixed it and smacked myself. Still getting errors though. ...

Looks like netcode.cpp is trying to use the GameSimulation class from netcode.h, which is just a forward declaration. It's a bit hard to understand how you want to connect things from just these snippets, but anyway, have you tried to include GameSimulation.h in netcode.cpp? And where do newPlayer and game exist? The extern keyword tells the linker that they will be found in some other unit, but which?

#12 breakspirit (Members) Posted 07 October 2011 - 07:00 PM

pulpfist wrote: Looks like netcode.cpp is trying to use the GameSimulation class from netcode.h, which is just a forward declaration. ...

Yeah, I've tried including GameSimulation.h in netcode.cpp, and I get link errors saying this kind of stuff:

1>NetCode.obj : error LNK2005: "public: __thiscall Player::Player(void)" (??0Player@@QAE@XZ) already defined in main.obj
1>NetCode.obj : error LNK2005: "public: __thiscall GameSimulation::GameSimulation(void)" (??0GameSimulation@@QAE@XZ) already defined in main.obj

newPlayer and game get declared in main.cpp like this:

#include "Variables.h"
#include "GameSimulation.h"

Player *newPlayer = new Player; // a pointer to a player object to give to new players.
GameSimulation game;

#include "NetCode.h"

NetCode netConnection;

I simply do not understand why netcode.cpp can not see newPlayer and game.

#13 pulpfist (Members) Posted 07 October 2011 - 07:04 PM

Um, you didn't forget to include netcode.h in netcode.cpp, did you?

#14 breakspirit (Members) Posted 07 October 2011 - 07:10 PM

Nah, it's in there. =/

#15 pulpfist (Members) Posted 07 October 2011 - 07:15 PM

Yea, I thought so. If you post all your files it will be easier for us to see what the problem is. I suspect you got some inclusion spaghetti going on here.

#16 breakspirit (Members) Posted 07 October 2011 - 07:31 PM

Alright, here's all the source. Thanks again for your help.

#17 SiCrane (Moderators) Posted 07 October 2011 - 07:46 PM

Do not put non-inline function definitions in header files. Non-inline function definitions go in source files.

#18 breakspirit (Members) Posted 07 October 2011 - 07:48 PM

Yeah, but I don't think that is associated with this problem. I seldom do that unless the function is highly incomplete, such as with the simulation and player functions.

#19 SiCrane (Moderators) Posted 07 October 2011 - 07:50 PM

Your error is a duplicate symbol error. This means that more than one definition is found in more than one source file. Because you defined the functions in the header, multiple source files are trying to export the same function definition, hence the duplicate symbols.

#20 pulpfist (Members) Posted 07 October 2011 - 07:53 PM

Yea, I was about to say something like that. I assume you refer to the Player.h and GameSimulation.h files.

@breakspirit: I suggest you start by creating a Player.cpp and a GameSimulation.cpp file, and move all function and constructor implementations into those. If that still doesn't work out I'll take a closer look at this in a couple of hours.

Also, as a general rule, don't say "using namespace ..." in a header file. Doing it in a cpp file may be ok, but doing it in header files can cause problems as the project grows. In the header files, use a fully qualified scope, like std::string.
https://www.physicsforums.com/threads/particle-creation-function-of-beam-energy.991388/
# Particle Creation function of beam energy

Gold Member

## Summary: As you crank up the beam energy in a particle accelerator, what particles are possible at each energy?

## Main Question or Discussion Point

This seems like it should be an easy and obvious thing to look up, but I had the hardest time finding it. Is there any graph which shows, as I increase the beam energy of a particle accelerator, what particles can be produced at each energy? Just looking for something ballpark here. Obviously there are a ton of hadrons and mesons, but maybe just the most important/famous/etc. particles would appear on such a graph.

mfb (Mentor)

For electron-positron colliders it's relatively easy: if the particle can be created in isolation (e.g. the Z boson), then the collision energy needs to equal its mass; if it is created in pairs (e.g. everything with quarks), then you need twice that energy. For every process you can just add up the masses of the produced particles: that's how much energy you need (with the speed of light squared as a conversion factor). If we skip the low-energy region (where a lot of different things happen): 3 GeV for particles with charm quarks and tau leptons, 10 GeV for particles with bottom quarks, ~90 GeV for the Z boson, ~160 GeV for W bosons, ~215 GeV for the Higgs boson (as the production of Z+H is the first relevant process), 350 GeV for top quarks. LEP reached 209 GeV; they just missed the Higgs.

For hadron colliders things are more complicated. The theoretical minimum is still the same, but in practice you need much more energy to get a relevant production rate. These calculations are done particle by particle, so you can often find cross sections ("production probabilities") as a function of collision energy.
Here are some cross sections.

Meir Achuz (Homework Helper, Gold Member)

For any center of mass energy $W$, the particles that can be created have to satisfy $\sum_i M_i \le W$, along with the conservation laws of charge, etc. For a fixed target, the lab kinetic energy is given in terms of the center of mass energy by $(KE)_{\rm lab}=[W^2-(M+m)^2]/2M$. (Derive this.)

Gold Member

Excellent, I understand. I knew the particles created needed at least $mc^2$ of energy, but I didn't know if there was any other requirement (of course, I understand all the necessary laws must be conserved). One last question: once a reaction has the minimum $mc^2$ to create the particle, are there ever higher energies (or a range of energies) beyond this minimum that create the most particles? Just to make things easy, suppose a particular hadron or meson has an $mc^2$ of "1", but I tuned my beam energy to, say, "3" or "4"; would some energy create the most particles of $mc^2$ of "1"? I'm using simple numbers here because I'm just interested in a qualitative, ballpark answer.

mfb (Mentor)

In electron-positron colliders there can be an ideal energy.
• For the Z, that's simply the Z mass.
• For ZH, the ideal energy is about 270 GeV. Lower and the phase space is very small (both particles need to be nearly at rest relative to each other); higher and other processes are more likely. Here is a plot.
• For B mesons, the ideal energy is the $\Upsilon(4S)$ resonance, which usually decays to pairs of B mesons. You might get more again at very high energy, but at least it's a strong local maximum.

For hadron colliders more is better: outside the low-energy region, all the reactions get more likely with more energy.

Meir Achuz

For instance, when electrons collide with protons, pion production jumps when W approaches the mass of the $\Delta$ resonance.
https://stats.stackexchange.com/tags/similarities/hot
# Tag Info ## Hot answers tagged similarities 45 To compare the similarity of two hierarchical (tree-like) structures, measures based on cophenetic correlation idea are used. But is it correct to perform comparison of dendrograms in order to select the "right" method or distance measure in hierarchical clustering? There are some points - hidden snags - regarding hierarchical cluster analysis that I would ... 32 As @Max indicated in the comments (+1) it would be simpler to "write your own" than to spend time looking for it somewhere else. As we know, the cosine similarity between two vectors $A,B$ of length $n$ is $$C = \frac{ \sum \limits_{i=1}^{n}A_{i} B_{i} }{ \sqrt{\sum \limits_{i=1}^{n} A_{i}^2} \cdot \sqrt{\sum \limits_{i=1}^{n} B_{i}^2} }$$ which is ... 30 According to cosine theorem, in euclidean space the (euclidean) squared distance between two points (vectors) 1 and 2 is $d_{12}^2 = h_1^2+h_2^2-2h_1h_2\cos\phi$. Squared lengths $h_1^2$ and $h_2^2$ are the sums of squared coordinates of points 1 and 2, respectively (they are the pythagorean hypotenuses). Quantity $h_1h_2\cos\phi$ is called scalar product (= ... 18 Technically to compute a dis(similarity) measure between individuals on nominal attributes most programs first recode each nominal variable into a set of dummy binary variables and then compute some measure for binary variables. Here is formulas of some frequently used binary similarity and dissimilarity measures. What is dummy variables (also called one-... 16 If you have stumbled upon this question and are wondering what package to download for using Gower metric in R, the cluster package has a function named daisy(), which by default uses Gower's metric whenever mixed types of variables are used. Or you can manually set it to use Gower's metric. daisy(x, metric = c("euclidean", "manhattan", "gower"), ... 16 There exist many such coefficients (most are expressed here). 
Just try to meditate on what are the consequences of the differences in formulas, especially when you compute a matrix of coefficients. Imagine, for example, that objects 1 and 2 similar, as objects 3 and 4 are. But 1 and 2 have many of the attributes on the list while 3 and 4 have only few ... 14 The inverse is to change from distance to similarity. The 1 in the denominator is to make it so that the maximum value is 1 (if the distance is 0). The square root - I am not sure. If distance is usually larger than 1, the root will make large distances less important; if distance is less than 1, it will make large distances more important. 13 Seems like you're looking for either the Jaccard distance or the Dice dissimilarity. Jaccard distance: $1 - \frac{|A \cap B|}{|A \cup B|}$ Dice dissimilarity: $1 - \frac{2|A \cap B|}{|A| + |B|}$ These both are equal to zero if $A$ and $B$ are exactly the same, and one if they are completely different. However, Jaccard will "punish" differences more ... 11 We know that Jaccard (computed between any two columns of binary data $\bf{X}$) is $\frac{a}{a+b+c}$, while Rogers-Tanimoto is $\frac{a+d}{a+d+2(b+c)}$, where a - number of rows where both columns are 1 b - number of rows where this and not the other column is 1 c - number of rows where the other and not this column is 1 d - number of rows where both ... 11 Let me address this by describing the four maybe most common similarity metrics for bags of words and document (count) vectors in general, that is comparing collections of discrete variables. Cosine similarity is used most frequently in general, but you should always measure first and make sure that no other similarity would produce better results for your ... 10 First of all, in many applications you do not need a distance metric, but a dissimilarity will be okay. So make sure that triangle inequality is needed. 
In mathematics, triangle inequality is part of the definition of a metric, and distances in mathematics are synonymous to metrics. But in database literature, often distances are not required to be metric. ...

9 The above solution is not very good if X is sparse, because taking !X will make a dense matrix, taking a huge amount of memory and computation. A better solution is to use the formula Jaccard[i,j] = #common / (#i + #j - #common). With sparse matrices you can do it as follows (note the code also works for non-sparse matrices):

library(Matrix)
jaccard <- ...

9 There are various methods to define document similarity, but let me introduce the easiest approach to start with, based on a semantic vector space: First build your term-document matrix. Then "normalize" the entries in the matrix with tf-idf. From there, you can use your document-vector columns of the matrix to calculate the similarity with the cosine ...

9 The answer is really right there in your linked articles. From the first, here are the formulae for cosine and correlation (lightly edited for brevity and clarity): \begin{align} {\rm CosSim}(x,y) &= \frac{\sum_i x_i y_i}{ \sqrt{ \sum_i x_i^2} \sqrt{ \sum_i y_i^2 } } \\ {\rm Corr}(x,y) &= \frac{ \sum_i (x_i-\bar{x}) (y_i-\bar{y}) }{ \sqrt{ \sum_i (x_i-\bar{x})^2 } \sqrt{ \sum_i (y_i-\bar{y})^2 } } \end{align} ...

8 The function $$f\colon [0,1]\times[0,1]\to[0,1], \quad(x,y)\mapsto \frac{1}{4}x+\frac{1}{4}y+\frac{3}{4}(x-y)^2$$ does what you want. Plus, it's positive, symmetric and definite ($x\neq y$ implies that $f(x,y)>0$). Neither it nor its root is linearly homogeneous like a norm-derived distance function, though ($f(\lambda x, \lambda y)\neq\lambda f(x,y)$...

7 You can use the cosine function from the lsa package: http://cran.r-project.org/web/packages/lsa

7 Some answers above are computationally inefficient; try this. For a cosine similarity matrix:

Matrix <- as.matrix(DF)
sim <- Matrix / sqrt(rowSums(Matrix * Matrix))
sim <- sim %*% t(sim)

Convert to cosine dissimilarity matrix (distance matrix):
D_sim <- as.dist(1 - sim) 7 A good approach to this kind of problem can be found in section 4 of the paper The Bayesian Image Retrieval System, PicHunter by Cox et al (2000). The data is a set of integer outcomes $A_1, ..., A_N$ where $N$ is the number of trials. In your case, there are 3 possible outcomes per trial. I will let $A_i$ be the index of the face that was left out. The ... 7 For high-dimensional data, shared-nearest-neighbor distances have been reported to work in Houle et al., Can Shared-Neighbor Distances Defeat the Curse of Dimensionality? Scientific and Statistical Database Management. Lecture Notes in Computer Science 6187. p. 482. doi:10.1007/978-3-642-13818-8_34 Fractional distances are known to be not metric. $L_p$ ... 7 Area between 2 curves may give you the difference. Hence sum(nr-nf) (sum of all differences) will be an approximation of the area between 2 curves. If you want to make it relative, sum(nr-nf)/sum(nf) can be used. These will give you a single value indicating similarity between 2 curves for each graph. Edit: Above method of sum of differences will be useful ... 7 From the wikipedia page: $$J=\frac{D}{2-D} \;\; \text{and}\;\; D=\frac{2J}{J+1}$$ where $D$ is the Dice Coefficient and $J$ is the Jacard Index. In my opinion, the Dice Coefficient is more intuitive because it can be seen as the percentage of overlap between the two sets, that is a number between 0 and 1. As for the Overlap it represents the percentage of ... 6 There are two commonly seen approaches: Add outliers to real data by randomization methods. In order to obtain a rare class, downsample a class to desired sparsity (usually, this should be <<1%) For 1 there are some variants - modifying single attributes, drawing each attribute, but from different instances etc.; personally, I'm not at all convinced ... 6 It's more common to measure discrepancy than similarity, but some of them can be converted easily to your way around. 
Possible measures of discrepancy in distribution include (but are not limited to): Kolmogorov-Smirnov distance. This distance between cdfs (or emprical cdfs), $D$, is small when the distributions are the same and close to 1 when they're ... 6 This is a big issue in some areas of machine learning. I'm not as familiar with it as I'd like, but I think these should get you started. Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) seems to work very well on some data sets. Neighborhood components analysis is a very nice linear algorithm, and nonlinear versions have been developed as ... 6 You might compute PMI using Wikipedia, as following: 1) Using Lucene to index a Wikipedia dump 2) Using Lucene API, it is straightforward to get: The number (N1) of documents containing word1 and the number (N2) of documents containing word2. So, Prob(word1) = (N1 + 1) / N and Prob(word2) = (N2 + 1) / N, where N is the total number of documents in ... 6 Could your problem be restated as wanting to discover the regular expressions that will match the strings in each category? This is a "regex generation" problem, a subset of the grammar induction problem (see also Alexander Clark's website). The regular expression problem is easier. I can point you to code frak and RegexGenerator. The online RegexGenerator++... 6 The simplest most common way to avoid a 0 probability in word frequencies is the Lidstone smoothing Which is basically, instead of using $$p(w_i)=\frac{\#(w_i)}{\sum{\#(w_j)}}$$ Use: $$p(w_i)=\frac{\#(w_i)+\epsilon}{\sum{\#(w_j)}+N\epsilon}$$ Regarding information of $p=0-$ The motivation I know is taken from the entropy definition: $$H(p)=p\log{p}$$ And ... 
6 The definition of the cosine similarity is: $$\text{similarity} = \cos(\theta) = {\mathbf{A} \cdot \mathbf{B} \over \|\mathbf{A}\|_2 \|\mathbf{B}\|_2} = \frac{ \sum\limits_{i=1}^{n}{A_i B_i} }{ \sqrt{\sum\limits_{i=1}^{n}{A_i^2}} \sqrt{\sum\limits_{i=1}^{n}{B_i^2}} }$$ It is sensitive to the mean of features. To see this, choose some $j \in \{1, \ldots, n\}$ ...

6 Does Mercer's theorem work in reverse? Not in all cases. Wikipedia: "In mathematics, specifically functional analysis, Mercer's theorem is a representation of a symmetric positive-definite function on a square as a sum of a convergent sequence of product functions. This theorem, presented in (Mercer 1909), is one of the most notable results of the work of ...
https://scioly.org/forums/viewtopic.php?f=285&t=12226&p=377682
## Astronomy C

Member Posts: 8 Joined: January 25th, 2016, 9:37 am State: TX

### Re: Astronomy C

ET2020 wrote: Do you know of any good resources to learn how to use JS9? It seems confusing and I'm not sure how to answer questions about it.

I agree that it can be confusing/intimidating at first. Keep in mind that this year, you won't actually have to use the JS9 software in competition, just be able to understand screenshots of its various functionalities. I wrote a very basic-level introduction to JS9 here. Beyond the guide, I think it's helpful to play around on the site and see what it can do. To perform well on JS9 questions at high-level tournaments, be sure to also understand fundamental concepts in multi-wavelength astronomy, energy spectra, light curves, etc. The goal of JS9 is to be able to apply theory to understand observational data! In case you haven't seen it, there's a JS9 question on the MIT exam (see #19) and detailed solutions here, so that might give you a feel for what sort of questions to expect.

M3335 Member Posts: 2 Joined: February 9th, 2019, 9:43 pm

### Re: Astronomy C

Hi all, I was reviewing past UPenn invitational math questions for practice, and I found one that seems impossible: particularly parts B) and C). I feel like there's certain information missing. I tried multiple approaches, such as using $F = \frac{GMm_c}{R^2} = \frac{m_cV_c^2}{R_c}$, Kepler's third law for binary systems, and even integrating the cosine function of one of the curves with respect to time to find the distance it traveled in one period, but I couldn't seem to get the answer. It seems you need two known variables, whereas the problem only provides you with one. The given answer to B) is 1.5E7 km and the answer to C) is .09 Msolar. I feel like the two parts could be answered in any order, but that's just a guess. I'm assuming you need to use the information about the type M star to answer the question, but I don't know how. Help would be much appreciated.
syo_astro Exalted Member Posts: 590 Joined: December 3rd, 2011, 9:45 pm State: NY Contact: ### Re: Astronomy C M3335 wrote:Hi all, I was reviewing over past Upenn invitational math questions to practice, and I found one that seems impossible: Part B: Have you tried applying circular (orbital) velocity? If you know the radial velocity and the period, you should be able to figure out the radius of orbit for a given orbiting object. Part C: If you know the mass ratio, the orbital separation, and the period, you should have two equations and two unknowns that should be solvable (the mass ratio and Kepler's third law). You really shouldn't need to do any integrals or fancy math in astro (or all events, really). I would say you'd need to do Part B first, at least it's more intuitive to. Haven't tried the math yet, but I can try it in a bit. Not sure about the units on the y-axis though (I'd guess km/s of course)...binary star questions should be easy to write, but I've messed them up tons of times -_-. Luckily, invites have been good opportunities for me to get complaints about stuff like that (nobody lets me live down the math on two invites posted online XD), I actually do use the feedback and triple check for real tournies! B: Crave the Wave, Environmental Chemistry, Robo-Cross, Meteorology, Physical Science Lab, Solar System, DyPlan (E and V), Shock Value C: Microbe Mission, DyPlan (Earth's Fresh Waters), Fermi Questions, GeoMaps, Gravity Vehicle, Scrambler, Rocks, Astronomy M3335 Member Posts: 2 Joined: February 9th, 2019, 9:43 pm ### Re: Astronomy C syo_astro wrote: M3335 wrote:Hi all, I was reviewing over past Upenn invitational math questions to practice, and I found one that seems impossible: Part B: Have you tried applying circular (orbital) velocity? If you know the radial velocity and the period, you should be able to figure out the radius of orbit for a given orbiting object. 
Part C: If you know the mass ratio, the orbital separation, and the period, you should have two equations and two unknowns that should be solvable (the mass ratio and Kepler's third law). You really shouldn't need to do any integrals or fancy math in astro (or all events, really). I would say you'd need to do Part B first, at least it's more intuitive to. Haven't tried the math yet, but I can try it in a bit. Not sure about the units on the y-axis though (I'd guess km/s of course)...binary star questions should be easy to write, but I've messed them up tons of times -_-. Luckily, invites have been good opportunities for me to get complaints about stuff like that (nobody lets me live down the math on two invites posted online XD), I actually do use the feedback and triple check for real tournies! Ah thank you! Somehow that method slipped from my mind. I used circular velocity, and came up with around 1.49E5 km but .08Msolar, so I think the answer key may be wrong. idislikeboomi Member Posts: 8 Joined: January 19th, 2019, 4:07 pm ### Re: Astronomy C Anybody have the image set for the carnegie mellon test? c0c05w311y Member Posts: 19 Joined: February 20th, 2017, 7:56 pm State: PA Location: Carnegie Mellon University ### Re: Astronomy C idislikeboomi wrote:Anybody have the image set for the carnegie mellon test? I'm pretty sure this is the right version. Let me know if you have any questions or comments about the exam! Zxcvbnm123 Member Posts: 38 Joined: October 14th, 2018, 8:22 pm ### Re: Astronomy C Does anyone know where I can find the 2019 Golden Gate Invitational test? EastStroudsburg13 Posts: 2960 Joined: January 17th, 2009, 7:32 am State: MD Location: At work trying to be a real adult Contact: ### Re: Astronomy C Golden Gate Invitational tests will be released publicly on their website on March 9. 
East Stroudsburg South Class of 2012, Alumnus of JT Lambert, Drexel University Class of 2017 If you have any questions for me, always feel free to shoot me a PM.

Alke Member Posts: 25 Joined: March 3rd, 2017, 5:14 pm Division: C State: VA

### Re: Astronomy C

Hi everyone! I did astronomy two seasons ago but I'm back! I'm a little rusty and I am always running out of time. Thus, do you all have tips for time management/splitting up the work with your partner? I'm trying to think through some strategies, like having one person do math and the other do multiple choice. However, I don't know how well that'll work! -Thanks

Name Member Posts: 267 Joined: January 21st, 2018, 4:41 pm Division: C State: NY Location: Syosset

### Re: Astronomy C

Alke wrote: Hi everyone! I did astronomy two seasons ago but I'm back! ...

I'd suggest splitting the test. I usually take the DSOs (or everything besides math), while my partner takes all the math (even though math is my favorite part of astro). We're usually capable of finishing the test without too much of a problem. Near the end we usually look into the other person's section to help potentially solve questions that the other person didn't get/check over a bit, so I'd still recommend being knowledgeable in your partner's section. Figure out which sections you and your partner would rather do, and then split the test accordingly.

SW 16-17 Syosset 18-21.
Previous main events: Microbe, Invasive, Herp, Matsci, Fermi Hoping to do Astro, Code, and Ornith next year 2018-19 highlights

PM2017 Member Posts: 483 Joined: January 20th, 2017, 5:02 pm State: CA

### Re: Astronomy C

Alke wrote: Hi everyone! I did astronomy two seasons ago but I'm back! ...

We split the test right down the middle and take 30 seconds or so to scan our halves and see which pages we're most comfortable with/have the most points, and get those done. I generally hand the in-depth DSO questions to my partner, while she gives me the majority of the calculations. We try to get our respective sections done by the 30-minute mark and then go on to finish the conceptual section, since that's the most luck-based (unless I see HR diagrams or light curves or something else that's not really trivia). Also, we circle the question numbers that we don't immediately get and move on. We come back after we finish everything we are comfortable with and then we discuss amongst each other. Honestly, if you looked at us without realizing we were any good, you would probably laugh at "those two kids who were arguing with each other throughout the test," and wouldn't think we had any chemistry together. But I would say our achievements say otherwise. (Sorry, I don't normally boast, but it's a somewhat funny story.)

2018 Events 2019 Events -- West High School Science Olympiad (Alum, as of Saturday)

ET2020 Member Posts: 34 Joined: April 16th, 2018, 11:35 am Division: C State: NY

### Re: Astronomy C

I've noticed that there seem to be two different equations used to calculate recessional velocity from redshift.
The more commonly used one, v = Z*c, works fine for relatively close objects, but creates a problem for objects with redshifts > 1. The equation is problematic because it implies that we should not be able to see things with z > 1, since they would be receding faster than light, and therefore the light would not be able to reach us. However, there have been many objects observed to have z >> 1. The correct equation, V = c*[(z^2+2z)/(z^2+2z+2)], gives accurate answers even for distant objects. Unfortunately, I've gotten a few practice questions wrong because the test writer used the simplified version of the equation. Which one should I use? Fayetteville Manlius High School Class of 2020 syo_astro Exalted Member Posts: 590 Joined: December 3rd, 2011, 9:45 pm State: NY Contact: ### Re: Astronomy C ET2020 wrote:I've noticed that there seem to be two different equations used to calculate recessional velocity from redshift. The more commonly used one, v = Z*c, works fine for relatively close objects, but creates a problem for objects with redshifts > 1. The equation is problematic because it implies that we should not be able to see things with z > 1, since they would be receding faster than light, and therefore the light would not be able to reach us. However, there have been many objects observed to have z >> 1. The correct equation, V = c*[(z^2+2z)/(z^2+2z+2)], gives accurate answers even for distant objects. Unfortunately, I've gotten a few practice questions wrong because the test writer used the simplified version of the equation. Which one should I use? This is always a great question! Similar issues come up for other equations too (like whether to assume circular orbits, etc). One way is to ask the proctor something like "there are two possible equations to use for this question, should we account for relativistic effects?" But I'm aware on the spot that can take away precious time and be cumbersome, especially if the proctor can't answer or doesn't know. 
I would guess most test writers shoot for the simpler equations, but I know that's not a guarantee either. Thoughts from others? B: Crave the Wave, Environmental Chemistry, Robo-Cross, Meteorology, Physical Science Lab, Solar System, DyPlan (E and V), Shock Value C: Microbe Mission, DyPlan (Earth's Fresh Waters), Fermi Questions, GeoMaps, Gravity Vehicle, Scrambler, Rocks, Astronomy Unome Moderator Posts: 3999 Joined: January 26th, 2014, 12:48 pm State: GA Location: somewhere in the sciolyverse ### Re: Astronomy C syo_astro wrote: ET2020 wrote:I've noticed that there seem to be two different equations used to calculate recessional velocity from redshift. The more commonly used one, v = Z*c, works fine for relatively close objects, but creates a problem for objects with redshifts > 1. The equation is problematic because it implies that we should not be able to see things with z > 1, since they would be receding faster than light, and therefore the light would not be able to reach us. However, there have been many objects observed to have z >> 1. The correct equation, V = c*[(z^2+2z)/(z^2+2z+2)], gives accurate answers even for distant objects. Unfortunately, I've gotten a few practice questions wrong because the test writer used the simplified version of the equation. Which one should I use? This is always a great question! Similar issues come up for other equations too (like whether to assume circular orbits, etc). One way is to ask the proctor something like "there are two possible equations to use for this question, should we account for relativistic effects?" But I'm aware on the spot that can take away precious time and be cumbersome, especially if the proctor can't answer or doesn't know. I would guess most test writers shoot for the simpler equations, but I know that's not a guarantee either. Thoughts from others? I've never seen a test writer use the relativistic version of the equation on a test, so I pretty much always use the non-relativistic equation. 
Although, at a particularly competitive tournament if the test doesn't specify, I'd probably ask. Userpage Chattahoochee High School Class of 2018 Georgia Tech Class of 2022 Opinions expressed on this site are not official; the only place for official rules changes and FAQs is soinc.org. Moderator Posts: 444 Joined: December 6th, 2013, 1:56 pm State: TX Location: Austin, Texas ### Re: Astronomy C Unome wrote: syo_astro wrote: ET2020 wrote:I've noticed that there seem to be two different equations used to calculate recessional velocity from redshift. The more commonly used one, v = Z*c, works fine for relatively close objects, but creates a problem for objects with redshifts > 1. The equation is problematic because it implies that we should not be able to see things with z > 1, since they would be receding faster than light, and therefore the light would not be able to reach us. However, there have been many objects observed to have z >> 1. The correct equation, V = c*[(z^2+2z)/(z^2+2z+2)], gives accurate answers even for distant objects. Unfortunately, I've gotten a few practice questions wrong because the test writer used the simplified version of the equation. Which one should I use? This is always a great question! Similar issues come up for other equations too (like whether to assume circular orbits, etc). One way is to ask the proctor something like "there are two possible equations to use for this question, should we account for relativistic effects?" But I'm aware on the spot that can take away precious time and be cumbersome, especially if the proctor can't answer or doesn't know. I would guess most test writers shoot for the simpler equations, but I know that's not a guarantee either. Thoughts from others? I've never seen a test writer use the relativistic version of the equation on a test, so I pretty much always use the non-relativistic equation. Although, at a particularly competitive tournament if the test doesn't specify, I'd probably ask. 
I'd say that there's a lot of variation and it's hard to generalize in either direction. As a competitor, I only saw test writers use the relativistic form, which is the opposite of Unome's experience. I would recommend either asking (as syo_astro suggested) or calculating both and saying "this one takes relativity into account, while this one does not" if the test doesn't specify. University of Texas at Austin '22 Seven Lakes High School '18 Beckendorff Junior High '14
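For concreteness, the two formulas discussed in this thread can be compared numerically. A quick Python sketch (my own illustration, not from any test):

```python
C = 299_792.458  # speed of light in km/s

def v_nonrel(z):
    """Non-relativistic approximation: v = z * c."""
    return z * C

def v_rel(z):
    """Relativistic form: v = c * (z^2 + 2z) / (z^2 + 2z + 2)."""
    return C * (z**2 + 2*z) / (z**2 + 2*z + 2)

# For small z the two agree; for z >= 1 only the relativistic
# form stays below c, which is why objects with z >> 1 are visible.
for z in (0.01, 0.5, 1.0, 2.0):
    print(f"z={z}: non-rel {v_nonrel(z)/C:.3f}c, rel {v_rel(z)/C:.3f}c")
```

At z = 2 the simple formula gives 2c, while the relativistic one gives 0.8c, which illustrates the problem ET2020 describes.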
https://datadriven.sciml.ai/dev/extended_examples/
# Multiple Trajectories for Koopman Approximation

Let's consider the case of approximating a Koopman operator based on multiple trajectories. We assume pairs $(x_i, \dot{x}_i)$ of the measured state-space trajectory and its time derivative, where $i$ denotes a single measurement. Let's create our artificial measurements for a system with a slow and a fast manifold, for which there exists an analytical solution of this problem.

```julia
using DataDrivenDiffEq
using ModelingToolkit
using OrdinaryDiffEq
using LinearAlgebra
using Plots
gr()

function slow_manifold(du, u, p, t)
    du[1] = p[1]*u[1]
    du[2] = p[2]*(u[2]-u[1]^2)
end

u0 = [3.0; -2.0]
tspan = (0.0, 3.0)
p = [-0.05, -1.0]

problem = ODEProblem(slow_manifold, u0, (0f0, 3f0), p)
sol_1 = solve(problem, Tsit5(), saveat = 0.3)
X_1 = Array(sol_1)
DX_1 = sol_1(sol_1.t, Val{1})[:,:]

problem = ODEProblem(slow_manifold, 2f0*u0, (0f0, 2f0), p)
sol_2 = solve(problem, Tsit5(), saveat = 0.1)
X_2 = Array(sol_2)
DX_2 = sol_2(sol_2.t, Val{1})[:,:]
```

Note that we varied the initial conditions and the measurement time. The resulting trajectories are shown below. In this paper on the Dynamic Mode Decomposition it is pointed out that the overall ordering of the snapshots does not matter, as long as each specific pair is consistent. This means we can simply append the trajectories and use the new arrays to derive the approximation.

```julia
X = hcat(X_1, X_2)
DX = hcat(DX_1, DX_2)
```

```
2×32 Array{Float64,2}:
 -0.148881  -0.148881  -0.146664  …  -0.274866  -0.273495  -0.272131
  9.38212    9.38212    6.72424       4.29134    3.59657    2.97117
```

In the next steps, we simply create a basis for the approximation and proceed as usual: at first we create the basis, and afterwards we feed it to the function for approximating the Koopman generator.

```julia
@variables u[1:2]
observables = [u; u[1]^2]
basis = Basis(observables, u)
approximation = gEDMD(X, DX, basis)
```

This results in the following eigenvalues of the system and its approximation.
This procedure works for all methods which take two snapshot matrices as input arguments.
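As a cross-check of the appended-snapshot idea, here is a minimal NumPy sketch (my own, not the DataDrivenDiffEq API) that recovers the Koopman generator of the slow-manifold system by least squares. In the basis $\psi(x) = (u_1, u_2, u_1^2)$ the observable dynamics are exactly linear, so the fitted generator matches the analytic one:

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2 = -0.05, -1.0  # parameters of the slow-manifold system

def f(x):
    """Vector field: du1 = p1*u1, du2 = p2*(u2 - u1^2)."""
    return np.array([p1 * x[0], p2 * (x[1] - x[0] ** 2)])

# Pooled (x, xdot) pairs; as noted above, ordering across
# trajectories does not matter as long as each pair is consistent.
X = rng.normal(size=(2, 32))
DX = np.apply_along_axis(f, 0, X)

# Lift to the observables psi(x) = [u1, u2, u1^2]; by the chain
# rule, d(u1^2)/dt = 2*u1*du1.
Psi = np.vstack([X[0], X[1], X[0] ** 2])
DPsi = np.vstack([DX[0], DX[1], 2 * X[0] * DX[0]])

# Least-squares fit of the generator: DPsi ≈ K @ Psi
K = DPsi @ np.linalg.pinv(Psi)
print(np.round(K, 6))
```

The recovered eigenvalues of `K` are $p_1$, $p_2$, and $2 p_1$, i.e. the slow, fast, and quadratic-observable rates of the system.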
https://www.albert.io/ie/act-math/placing-flags
# Placing Flags (ACTMAT-NQEAP4)

Moderate

Camp Comet begins each day by raising its 5 flags (4 identical yellow and 1 blue) on its flagpole. How many unique arrangements of the 5 flags of Camp Comet are there?

A. $120$

B. $24$

C. $12$

D. $5$

E. $4$
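This is a multiset-permutation count, $5!/4!$, which a brute-force check confirms (a sketch of mine, not part of the original question):

```python
from itertools import permutations
from math import factorial

flags = ["Y", "Y", "Y", "Y", "B"]  # 4 identical yellow, 1 blue

# Distinct orderings of a multiset: n! divided by the factorial
# of each repeated item's count.
formula = factorial(5) // factorial(4)

# Brute force: generate all 5! orderings and deduplicate.
brute = len(set(permutations(flags)))

print(formula, brute)  # 5 5
```

Both approaches give 5, i.e. answer D.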
https://sts-math.com/post_704.html
Find the equation of the line perpendicular to y=-x-2 and passing through the point (4, 1). $$k: y=m_1x+b_1;\ l: y=m_2x+b_2\\\\k\ \perp\ l\iff m_1m_2=-1\\====\\k: y=-x-2;\ l: y=mx+b\\\\k\ \perp\ l\iff-1\cdot m=-1\Rightarrow m=1\\\\l: y=1x+b\to y=x+b\\\\(4;\ 1)\to substitute\ x=4\ and\ y=1\ to\ y=x+b:\\\\4+b=1\\b=1-4\\b=-3\\\\Answer: y=x-3$$ =========== First, you need to remember that perpendicular lines have negative reciprocal slopes. The given line has a slope of -1, so the line perpendicular to it has slope -1/(-1) = 1. So the equation of the new line is going to be [ y = 1x + intercept ]. You know that the new line goes through the point where x=4 and y=1. Stick these into the part of the equation that you already know: y = 1x + intercept, so 1 = 1(4) + intercept. Can you find the intercept from here? Maybe I'd better just finish it off: 1 = 4 + intercept. Subtract 4 from each side: intercept = -3. So the equation of the line perpendicular to [ y = -x - 2 ] going through (4, 1) is y = x - 3.
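The same two steps (negative reciprocal slope, then solve for the intercept) generalize to any non-vertical, non-horizontal line; a small sketch of mine:

```python
def perpendicular_through(m, x0, y0):
    """Slope and intercept of the line perpendicular to y = m*x + b
    that passes through (x0, y0). Requires m != 0."""
    m_perp = -1.0 / m          # negative reciprocal slope
    b_perp = y0 - m_perp * x0  # solve y0 = m_perp*x0 + b for b
    return m_perp, b_perp

# Perpendicular to y = -x - 2 through (4, 1):
print(perpendicular_through(-1.0, 4, 1))  # (1.0, -3.0), i.e. y = x - 3
```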
https://access.openupresources.org/curricula/our-hs-math/aga/algebra-2/unit-9/lesson-6/ready_set_go.html
# Lesson 6: Let's Investigate (Solidify Understanding)

Solve each system of equations.

### 7.

Write as an augmented matrix.

### 8.

Write as an augmented matrix.

## Set

For the following scenarios, identify each situation as a survey, observational study, or experiment.

### 9.

To determine if a new pain medication is effective, researchers randomly assign two groups of people to use the pain medication in group 1 and a placebo in group 2. Both groups are asked to rate their pain and the results are compared.

#### A. survey

#### B. observational study

#### C. experiment

### 10.

Officials want to determine if raising the speed limit from to will have an impact on safety. To determine this, they watch a stretch of the highway when the speed limit is and see how many accidents there are. They then observe the number of accidents over a period of time on the same stretch of highway for a speed limit of and compare the difference.

#### A. survey

#### B. observational study

#### C. experiment

### 11.

To determine if a new sandwich on the menu is preferred more than the original, the manager of the restaurant takes a random sample of customers who have tried both sandwiches and asks them which sandwich they like best.

#### A. survey

#### B. observational study

#### C. experiment

### 12.

A newspaper wants to know how satisfied their customers are. It randomly selects and asks them.

#### A. survey

#### B. observational study

#### C. experiment

Mrs. Williams wants to know if doing practice problems actually helps students do better on their unit exams.

### 13.

Describe how Mrs. Williams could carry out a survey to determine if practice helps. Explain the role of randomization in your design.

### 14.

Describe how Mrs. Williams could carry out an observational study to determine if practice problems help test scores.

### 15.

Describe how Mrs. Williams could carry out an experiment to determine if practice problems help test scores. Explain how you will use randomization in your design and how you will use a control.

### 16.

If Mrs.
Williams wants to determine if practice problems cause test scores to rise, which method would be best? Why? ## Go ### 17. The average resting heart rate of a young adult is approximately per minute with a standard deviation of per minute. Assuming resting heart rate follows a normal distribution, answer the following questions. 1. Draw and label the normal curve that describes this distribution. Be sure to label the mean, and the measurements , , and standard deviations out from the mean. 2. What percent of people have a heart rate between and per minute? Label these points on your normal curve above and shade in the area that represents the percent of people with heartbeats between and per minute. ### 18. If a resting heart rate above per minute is considered unhealthy, what percent of people have an unhealthy heart rate?
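The percent-between and percent-above calculations in questions 17 and 18 follow the normal CDF. Since the lesson's specific numbers were lost in extraction, the sketch below uses placeholder values (mean 70 bpm, standard deviation 10 bpm, threshold 80 bpm are my assumptions, not the lesson's):

```python
from math import erf, sqrt

def pct_above(x, mu, sigma):
    """Percent of a normal(mu, sigma) population with values above x."""
    z = (x - mu) / sigma
    return 100 * 0.5 * (1 - erf(z / sqrt(2)))

# Hypothetical numbers: one standard deviation above the mean
# leaves about 15.87% of the population, per the 68-95-99.7 rule.
print(round(pct_above(80, 70, 10), 2))
```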
https://www.mathworks.com/help/dsp/ref/dsp.histogram-system-object.html?s_tid=blogs_rc_6
# dsp.Histogram (To be removed) Histogram of input or sequence of inputs ## Description The `Histogram` object generates a histogram for an input or a sequence of inputs. ### Note The `dsp.Histogram` System object™ will be removed in a future release. Use the `histcounts` function instead. For more information, see Compatibility Considerations. To generate a histogram for an input or a sequence of inputs: 1. Create the `dsp.Histogram` object and set its properties. 2. Call the object with arguments, as if it were a function. ## Creation ### Syntax ``hist = dsp.Histogram`` ``hist = dsp.Histogram(min,max,numbins)`` ``hist = dsp.Histogram(Name,Value)`` ### Description ````hist = dsp.Histogram` returns a histogram object, `hist`, that computes the frequency distribution of the elements in each input matrix.``` ````hist = dsp.Histogram(min,max,numbins)` returns a histogram object, `hist`, with the `LowerLimit` property set to `min`, `UpperLimit` property set to `max`, and `NumBins` property set to `numbins`.``` ````hist = dsp.Histogram(Name,Value)` returns a histogram object, `hist`, with each specified property set to the specified value. Enclose each property name in single quotes. Unspecified properties have default values.``` ## Properties expand all Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the `release` function unlocks them. If a property is tunable, you can change its value at any time. Specify the lower boundary of the lowest-valued bin as a real-valued scalar. `NaN` and `Inf` are not valid values for this property. Tunable: Yes Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` Specify the upper boundary of the highest-valued bin as a real-valued scalar. `NaN` and `Inf` are not valid values for this property. 
Tunable: Yes Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` Specify the number of bins in the histogram. Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` Specify how the histogram calculation is performed over the data as `All` or `Column`. Specify whether the histogram object normalizes the output vector, v, so that `sum(v) = 1`. When you set this property to `true`, the output vector is normalized. When you set it to `false`, the object supports fixed-point operations and does not use this property for normalization. Set this property to `true` to enable running histogram calculations for the input elements over successive calls to the algorithm. Set this property to `false` to compute a histogram for the current input. Set this property to `true` to enable resetting the running histogram. When you set the property to `true`, specify a reset input to the object algorithm that resets the running histogram. When this property is `false`, the histogram object does not reset. #### Dependencies This property applies when you set the `RunningHistogram` property to `true`. Specify the event that resets the running histogram as `Rising edge`, `Falling edge`, `Either edge`, or `Non-zero`. #### Dependencies This property applies when you set the `ResetInputPort` property to `true`. ### Fixed-Point Properties Specify the rounding method. Specify the overflow action as `Wrap` or `Saturate`. Specify the product fixed-point data type as `Same as input` or `Custom`. Specify the product fixed-point type as a scaled `numerictype` object with a `Signedness` of `Auto`. #### Dependencies This property applies when you set the `ProductDataType` property to `Custom`. Specify the accumulator fixed-point data type as one of `Same as product`, `Same as input`, or `Custom`.
Specify the accumulator fixed-point type as a scaled `numerictype` object with a `Signedness` of `Auto`. #### Dependencies This property applies when you set the `AccumulatorDataType` property to `Custom`. ## Usage ### Syntax ``y = hist(x)`` ``y = hist(x,r)`` ### Description ````y = hist(x)` returns a histogram `y` for the input data `x` . When the `RunningHistogram` property is `true`, `y` corresponds to the histogram of the input elements over successive calls to the algorithm.``` ````y = hist(x,r)` resets the histogram state based on the reset signal, `r` and the object's `ResetCondition` property. You can reset the histogram state only when the `RunningHistogram` and the `ResetInputPort` properties are `true`.``` ### Input Arguments expand all Data input, specified as a vector, matrix, or N-D array. If `x` is a matrix, each column is treated as an independent channel. Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `fi` Complex Number Support: Yes Reset signal, specified as a scalar. The reset signal resets the histogram state based on the value of `r` and the object's `ResetCondition` property. #### Dependencies You can reset the histogram state only when the `RunningHistogram` and the `ResetInputPort` properties are `true`. Data Types: `single` | `double` | `int8` | `int16` | `int32` | `logical` ### Output Arguments expand all Histogram output of the input signal, returned as a scalar, vector, or matrix. The output depends on the setting of `Dimension`: • `'Column'` –– The object computes the histogram value of each input channel. If the input is a column vector, the output is a scalar. If the input is a multichannel signal, the output signal is 1-by-N vector, where N is the number of input channels. • `'All'` –– The object computes the histogram value over all input channels. 
Data Types: `single` | `double` | `uint32` ## Object Functions To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named `obj`, use this syntax: `release(obj)` expand all `step` Run System object algorithm `release` Release resources and allow changes to System object property values and input characteristics `reset` Reset internal states of System object ## Examples ### Compute Histogram of Sequence Note: If you are using R2016a or an earlier release, replace each call to the object with the equivalent `step` syntax. For example, `obj(x)` becomes `step(obj,x)`. Compute a histogram with four bins, for possible input values 1 through 4. ```hist = dsp.Histogram(1,4,4); y = hist([1 2 2 3 3 3 4 4 4 4]')``` ```y = 4×1 1 2 3 4 ``` ## Algorithms This object implements the algorithm, inputs, and outputs described on the Histogram block reference page. The object properties correspond to the block parameters, except: • The Reset port block parameter corresponds to both the `ResetCondition` and the `ResetInputPort` object properties. • The Find histogram over block parameter corresponds to the Dimension property of the object. ## Compatibility Considerations expand all Warns starting in R2019b
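For readers outside MATLAB, the four-bin example above has a close NumPy analogue (`numpy.histogram` is a rough counterpart, not a drop-in replacement for `dsp.Histogram`):

```python
import numpy as np

data = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]

# Four unit-width bins centred on the integer values 1..4, one way
# to mirror dsp.Histogram(1, 4, 4) from the example above.
counts, edges = np.histogram(data, bins=4, range=(0.5, 4.5))
print(counts)  # [1 2 3 4]
```

This reproduces the `y = [1; 2; 3; 4]` output shown in the MATLAB example.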
http://dict.cnki.net/h_3918102.html
全文文献 工具书 数字 学术定义 翻译助手 学术趋势 更多 状况 在 心理学 分类中 的翻译结果: 查询用时:0.123秒 历史查询 状况 situation Investigation and Perspective of Psychological Health Situation of Physical Education Institute Students 体育学院学生心理健康状况调查透视 短句来源 An Investigation and Analysis of the Psychological Health Situation of the Graduates of Teachers Colleges 高师应届毕业生心理健康状况的调查与分析 短句来源 An Analysis of the Developmental Situation of Preschool Children's Adaptive Behaviour 学龄前期儿童适应行为发展状况分析 短句来源 An Analysis of the Mental Situation of the Respondent in China Health Surveillance System 中国健康监测系统试点调查人群心理状况分析 短句来源 Investigation and Analysis of Mental Health Situation in 132 Probationer Nurses 132例中专实习护生心理健康状况的调查与分析 短句来源 更多 condition Results The total detectable rate of interpersonal relationship problems was(39.15%); the general analysis of mental health condition indicated that(8.47%) of the students had relatively serious psychological problems and(76.72%) of the students had mild psychological problems. 结果受人际关系困扰的学生总检出率为39.15%,心理健康状况的一般分析表明有8.47%的学生存在比较严重的心理问题,76.72%的学生有轻度的心理问题; 短句来源 Results The psychological health condition of impoverished students was worse than that of non-impoverished students; 结果贫困大学生的心理健康状况远低于非贫困大学生; 短句来源 The Analysis with the Condition of Psychological Health in the Jobless 下岗职工心理健康状况分析 短句来源 Study on Mental Health Condition of Girl College Students 对女大学生心理健康状况的研究 短句来源 Investigating Analyses of Psychological Healthy Condition for 81 Non-regular Labourers Working on a Public Project 对81例民工心理健康状况调查分析 短句来源 更多 conditions Study on the Conditions of Students' Mental Health 大中小学生心理健康状况研究 短句来源 Analysis of conditions of mental health and personal characteristic of middle school students in Hainan Province 海南省部分中学生心理健康状况及个性特征分析 短句来源 A Research on Mental Health Conditions of the Teachers in Rural Junior Schools in Zhangye 张掖地区农村初中教师心理健康状况的调查分析 短句来源 Investigation of the mental health conditions of students joined the examinations of admission to college 高考学生心理健康状况调查 短句来源 The Analysis on the Results of the 
Mental Health Conditions of Freshmen of Hunan University of science and engineering 湖南科技学院2004级新生心理健康状况调查结果与分析 短句来源 更多 state of Investigation of the State of psychical Health Among 508 Medical Stidenta 508名医学生心理健康状况调查 短句来源 Analysis of State of Mind of the Preschool Children Gymnasts 学龄前儿童体操运动员的心理状况分析 短句来源 The mental state of students in 2005 was higher than that of 2004.There was no significant difference between experiment class and common class students. 实验班和普通班学生心理健康状况的差别无统计学意义,2005届高二学生心理健康状况较2004届好,且有统计学意义。 短句来源 The Mental Health State of Junior Middle-School Students and Its Affective Factors 初中学生心理健康状况及其影响因素 短句来源 Mental health state of middle school students at mathematics class 上海中学数学班与非数学班学生心理卫生状况的比较 短句来源 更多 我想查看译文中含有:的双语例句 situation We also consider a similar situation for affine systems of the type φ(μx - λ), μ ∈ Γ, λ ∈ Λ. The following situation in using the method of least squares to solve problems often occurs. Simulation studies are presented which indicate that the asymptotic approximation to the finite-sample situation is good over a wide range of parameter configurations. In the other situation it uses the second-order Taylor's expansions approximation interpolation algorithm to obtain a constant feed speed so that the contour accuracy in the CNC system is guaranteed. To deal with this contradictory situation, insolvable zirconium tricarboxybutylphosphonate (Zr(PBTC)) powder was employed to make a composite with SPEEK polymer in an attempt to improve temperature tolerance of the membranes. 更多 condition We also prove the shifted cocycle condition for the twistors, thereby completing Fr?nsdal's findings. It is known [M4] that K?-orbits S and G?-orbits S' on a complex flag manifold are in one-to-one correspondence by the condition that S ∩ S' is nonempty and compact. 
We give a simple necessary and sufficient condition for a Schubert It is also shown that on the nilmanifold $\Gamma\backslash (H^3\times H^3)$ the balanced condition is not stable under small deformations. A necessary and sufficient geometric condition on the growth of the boundary of approximate tiles is reduced to a problem in Fourier analysis that is shown to have an elegant simple solution in dimension one. 更多 conditions As a consequence, the action is linearizable if certain topological conditions are satisfied. An algebraicG-varietyX is called "wonderful", if the following conditions are satisfied:X is (connected) smooth and complete;X containsr irreducible smoothG-invariant divisors having a non void transversal intersection;G has 2r orbits inX. Here we provide certain conditions (more general than those in [Ka1]) which guarantee preservation of the topology under a modification. We express the vanishing conditions satisfied by the correlation functions of Drinfeld currents of quantum affine algebras, imposed by the quantum Serre relations. We discuss the relation of these vanishing conditions with a shuffle algebra description of the algebra of Drinfeld currents. 更多 state of The spectra indicate that the energy transfer takes place from the triplet excited state of MLCT (metal-to-ligand charge transfer) state for Sr2CeO4 (sensitizer) to the rare earth ions (activator). The high-gain observer was used to estimate the state of the system. Based on the data collected from 31 plots and 93 soil samples, the state of health of the forest ecosystem is discussed and the appropriate FHA age has been determined. The propagation of drought-resistant transgenic poplars is one of the more effective ways to improve the ecological state of arid regions. In this paper, we present a survey on the state of the art knowledge on this topic, which is incomplete, and indicate some new trends for further research. 
更多 其他

Classification is an important way of people's thinking activity. This article shows that in the synthetical classification ability of word concepts in schoolchildren, factors of age characteristics are demonstrated. The classification ability is better than the ability to explain grounds for the classification. A few schoolchildren in grade 5 have a beginning ability of classification by "combination analysis". This study also shows that differences in classification material, cultural background and the condition of education impact the quality of the classification. 分类是人类思维活动的重要方法之一。本研究通过小学生对字词概念进行综合性分类,考察他们思维的概括能力发展的年龄特点,并探讨分类材料、文化背景、个体经验、教学状况等对分类水平的影响。 研究结果表明:小学生对字词概念的综合性分类能力有明显的年龄特点。他们说明分类根据的能力,远远落后于分类能力。小学五年级中有少数学生具有初步的组合分析分类能力。分类材料,文化背景,个体经验,教学状况的差异,均对分类水平产生一定的影响。

This experiment is aimed at exploring the developmental conditions, characteristics and regularities of Chinese middle school students' ideology and morality in the 1980's in terms of value systems and moral judgements and their interrelationship. The research results indicate that value systems of the middle school students are influenced mainly by social and personal psychological factors. The value systems across the four grades manifest some identities and differences. The moral judgement levels of the middle school students increase gradually with age. Grade 2 is the key period for the development of moral judgements. Subjects at different moral developmental stages attach importance to different values. Values are the basis, foundation and criterion of moral judgements. 本实验旨在从价值系统与道德判断及其相互关系上探讨八十年代我国中学生思想品德的发展状况,特点及其规律。研究结果表明,中学生的价值系统主要受社会因素和个体心理因素的影响。四个年级间的价值系统表现出一定的一致性和差异性。中学生的道德判断水平随年龄的增长逐步提高、初中二年级是道德判断发展的关键期。不同道德发展阶段的被试所重视的价值观不同,价值观是被试进行道德判断的基础、依据或标准。

Genetic studies of intelligence, personality, handwriting patterns and mental health status were carried out with the twin method. 38 pairs of twins (MZ 22 pairs, DZ 16 pairs) were involved in IQ studies and a part of these twins were involved in other studies. The results showed that certain traits of intelligence, personality, handwriting patterns and mental health level were influenced by genetic factors to a certain degree, but the total IQ and the types of symptoms manifested were not obviously influenced by heredity. 作者采用双生子法对38对双生子(其中MZ22,DZ16)的智商,以及其中部分双生子的个性、笔迹和心理健康状况等行为特征进行遗传学研究。结果发现,遗传因素对个体智力和个性的某些特征、笔迹及一般心理健康水平均有一定影响,但对总的智商和个体出现的症状类型影响不明显。
2019-10-23 05:33:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49842703342437744, "perplexity": 3772.699181113682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987829458.93/warc/CC-MAIN-20191023043257-20191023070757-00458.warc.gz"}
https://socratic.org/questions/three-consecutive-natural-numbers-have-a-sum-of-30-what-is-the-least-number-1
# Three consecutive natural numbers have a sum of 30. What is the least number? Nov 28, 2016 The least number is $9$. #### Explanation: Considering the numbers as $x$, $\left(x + 1\right)$, and $\left(x + 2\right)$, we can write an equation: $x + \left(x + 1\right) + \left(x + 2\right) = 30$ Open the brackets and simplify. $x + x + 1 + x + 2 = 30$ $3 x + 3 = 30$ Subtract $3$ from each side. $3 x = 27$ Divide both sides by $3$. $x = 9$
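The answer is easy to sanity-check with a brute-force search; the short Python snippet below (an illustration, not part of the original answer) finds the same triple:

```python
# Brute-force check: three consecutive natural numbers summing to 30
for x in range(1, 30):
    if x + (x + 1) + (x + 2) == 30:
        print(x, x + 1, x + 2)  # 9 10 11
        break
```

So the least number is indeed $9$.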
2021-11-29 08:59:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9126050472259521, "perplexity": 841.6903006037285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358702.43/warc/CC-MAIN-20211129074202-20211129104202-00296.warc.gz"}
http://mathoverflow.net/revisions/91839/list
# Commuting resolution of 1-dim sing and 0-dim sing in a non isolated sing of a surface

Let $X$ be a surface with a non-isolated singularity $C = Sing(X)$ such that the curve $C$ has singularities itself. We can resolve $Sing(X)$ by blowing up closed points and by normalizing. Indeed, we can first resolve the 1-dimensional singularities with only normalizations $\pi_1: X_1 \to X$. In the surface $X_1$ the preimage of $C$ has at most isolated singularities, which we can resolve by $\pi_2:X_2 \to X_1$.

Another approach is to resolve the 0-dimensional singularities by $\varphi_1:Y_1 \to X$, and then to finish with a one-dimensional singular locus on $Y_1$ that can be resolved by a normalization $\varphi_2:Y_2 \to Y_1$ such that $Y_2$ is smooth (maybe some ADE singularities, but it would not matter).

I am wondering if the second approach is always possible, and if it "commutes" with the first one in some way. I have this idea that normalization removes the 1-dimensional singularity $C$ without affecting the 0-dimensional ones, even if they are supported on $C$. Is this true? In that fantasy, we can "commute" the processes of resolving 0-dimensional singularities and resolving 1-dimensional singularities. I would appreciate any enlightening comments.
2013-05-26 06:50:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872985303401947, "perplexity": 290.8544318365266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706635944/warc/CC-MAIN-20130516121715-00079-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.gamedev.net/forums/topic/656947-where-to-calculate-tbn-matrix/
# Where to calculate TBN matrix?

## Recommended Posts

Hi,

In my normal mapping implementation I currently calculate the TBN matrix in my vertex shader and pass it to the pixel shader. There I take the normal map normal and, using the TBN matrix, transform it to world space for lighting calculations. All works fine.

Now I was wondering: in some implementations I've seen, people send the normal/binormal/tangent from the VS to the PS and build the TBN matrix in the pixel shader. Or, somewhere in between, do the world transformation of those 3 vectors in the VS, send them to the PS and normalize there + build the TBN matrix. I've tried out both approaches but with no visible difference.

Can someone tell me why it would be beneficial to create the TBN matrix in a pixel shader instead of the vertex shader? (The final normal is still for the pixel, because I transform the normal from the normal map in the pixel shader.)

Note: when I calculate the TBN matrix in the VS, I also normalize the end resulting normal (normalize(mul(normalMapNormal, TBN))).

##### Share on other sites

Now I was wondering, in some implementations I've seen, people send the normal/binormal/tangent from the VS to the PS and build the TBN matrix in the pixel shader.

What do you mean by "build the TBN matrix"? Like this?

```hlsl
float3x3 tbn = float3x3(input.Tangent, input.Binormal, input.Normal);
```

That's a no-op; it doesn't make a difference where you do it. It doesn't change anything or add any shader instructions - it's just syntactic sugar so you can do mul(normalMapNormal, tbn) instead of float3(dot(normalMapNormal, input.Tangent), dot(normalMapNormal, input.Binormal), dot(normalMapNormal, input.Normal)).

##### Share on other sites

Also, "binormal" is a wrongly used term; it is "bitangent".

##### Share on other sites

Thanks. That makes sense.
So basically it doesn't matter if I send 3 float3's from the VS to the PS or 1 float3x3. And how about multiplying the VS input normal/tangent/bitangent with the world matrix? I'd say I do that in the VS because then it doesn't need to be executed per pixel (and the result is the same, I suppose?). Same for normalizing the result of the normal/tangent/bitangent multiplied with the world matrix. Would you do those 2 actions in the VS?

##### Share on other sites

So basically it doesn't matter if I send 3 float3's from the VS to the PS or 1 float3x3.

Correct.

And how about multiplying the VS input normal/tangent/bitangent with the world matrix? I'd say I do that in the VS because then it doesn't need to be executed per pixel (and the result is the same I suppose?)

Yes, exactly. Do it in the VS for this reason.

Same for normalizing the result of normal/tangent/bitangent multiplied with the world matrix. Would you do those 2 actions in the VS?

Interpolated unit vectors don't retain their unit length (consider what would happen if they pointed 180 degrees from each other and were weighted 0.5 each). So you'll need to re-normalize in the pixel shader if you want correct results (you'll probably want to keep normalizing in the vertex shader too).

Edited by phil_t

##### Share on other sites

Thanks, I've got it in place.
Here it is (just the relevant code snippets):

```hlsl
struct VS_OUTPUT
{
    float4 Pos : POSITION0;
    float3 NormalWorld : TEXCOORD1;
    float2 TexCoord : TEXCOORD2;
    float3 wPos : TEXCOORD3;
    float3 ViewDir : TEXCOORD4;
    float3 BiNormalWorld : TEXCOORD5;
    float3 TangentWorld : TEXCOORD6;
};

VS_OUTPUT VS_function(VS_INPUT input)
{
    VS_OUTPUT Out = (VS_OUTPUT)0;

    float4 worldPosition = mul(input.Pos, World);
    Out.Pos = mul(worldPosition, ViewProj);
    Out.TexCoord = input.TexCoord;
    Out.wPos = worldPosition.xyz;
    Out.ViewDir = normalize(CameraPos - Out.wPos.xyz);

    // Worldspace to Tangent space for normalMapping
    Out.TangentWorld = normalize(mul(input.Tangent, (float3x3)World));
    Out.BiNormalWorld = normalize(mul(input.Binormal, (float3x3)World));
    Out.NormalWorld = normalize(mul(input.Normal, (float3x3)World));

    return Out;
}

// part of the pixel shader
float4 PS_function(VS_OUTPUT input) : COLOR0
{
    float4 textureColor = tex2D(textureSampler, input.TexCoord);

    float3x3 worldToTangent = float3x3(
        normalize(input.TangentWorld),
        normalize(input.BiNormalWorld),
        normalize(input.NormalWorld));

    float3 normalMap = normalize(2.0 * (tex2D(normalMapSampler, input.TexCoord).xyz) - 1.0);
    normalMap = normalize(mul(normalMap, worldToTangent));
```

##### Share on other sites

8 normalize calls, and each one involves a square root. In Frank D. Luna's implementation (which I use), there is only 1 normalize call in the pixel shader.

##### Share on other sites

Although it's not directly related to the TBN matrix, I want to point out that you can't compute the View vector in the vertex shader and interpolate that like you're doing. It doesn't interpolate linearly in world space. To get correct view vectors, you need to - unfortunately - send world space position in an interpolator, and then compute the view vector in the pixel shader.

##### Share on other sites

Thanks both. @Newtechnology: how would you do that with just one normalize in the PS, you skip normalizing the vectors when they come into the PS?
(normal, bitangent and tangent) @OsmanB: but do I need the view vector per pixel instead of per vertex?

##### Share on other sites

Thanks both. @Newtechnology: how would you do that with just one normalize in the PS, you skip normalizing the vectors when they come into the PS? (normal, bitangent and tangent)

Yeah, you 'construct' the TBN matrix using non-normalized N, B and T variables. Then when you use this matrix to transform your normal-map value into a normal, it won't be normalized either -- so you normalize this final value only. It's probably very slightly less accurate, but is much cheaper!
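The point above about interpolated unit vectors losing their length is easy to demonstrate outside a shader. Here is a small Python sketch (my illustration, not from the thread) that linearly interpolates two unit normals the way the rasterizer would, and shows the result is no longer unit length until it is re-normalized:

```python
import math

def lerp(a, b, t):
    # Component-wise linear interpolation, as the rasterizer does per pixel
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def length(v):
    return math.sqrt(sum(c * c for c in v))

# Two unit vectors 90 degrees apart
n0 = [1.0, 0.0, 0.0]
n1 = [0.0, 1.0, 0.0]

mid = lerp(n0, n1, 0.5)   # halfway between the two vertices
print(length(mid))        # ~0.707, no longer unit length

# Re-normalizing in the "pixel shader" restores unit length
renorm = [c / length(mid) for c in mid]
print(length(renorm))     # 1.0
```

With vectors 180 degrees apart the interpolated result would be the zero vector, which is why re-normalizing in the pixel shader matters.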
2018-09-24 07:23:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20524375140666962, "perplexity": 4304.3011561963385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160233.82/warc/CC-MAIN-20180924070508-20180924090908-00381.warc.gz"}
http://www.reference.com/browse/Contact+analysis+(cryptanalysis)
# Contact analysis (cryptanalysis)

In cryptanalysis, contact analysis is the study of the frequency with which certain symbols precede or follow other symbols. The method is used as an aid to breaking classical ciphers.

Contact analysis is based on the fact that, in any sample of any written language, certain symbols appear adjacent to other symbols with varying frequencies. Moreover, these frequencies are roughly the same for almost all samples of that language, even when the distribution of the symbols themselves differs significantly from normal. This is true regardless of whether the symbols being used are words or letters. In some ciphers, these properties of the natural language plaintext are preserved in the ciphertext, and have the potential to be exploited in a ciphertext-only attack.

Although in a sense contact analysis can be considered a type of frequency analysis, most discussions of frequency analysis concern themselves with the simple probabilities of the symbols in the text: $P\left(X_i=a\right)$ or $P\left(X_i=a \cap X_{i+1}=b\right)$. Contact analysis is based on the conditional probability that certain letters will precede or succeed other letters: $P\left(X_i=b \mid X_{i-1}=a\right)$, or $P\left(X_i=c \mid X_{i-2}=a \cap X_{i-1}=b\right)$, or even $P\left(X_i \in S \mid X_{i-1}\in T \cap X_{i+1} \in T\right)$, where $S$ and $T$ are subsets of the alphabet being used. Where frequency analysis is based on first-order statistics, contact analysis is based on second- or third-order statistics.
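As an illustration (not part of the original article), the successor probabilities $P\left(X_i=b \mid X_{i-1}=a\right)$ can be estimated from a text sample with a few lines of Python:

```python
from collections import Counter, defaultdict

def contact_table(text):
    """Estimate P(next == b | current == a) from adjacent letter pairs."""
    letters = [c for c in text.lower() if c.isalpha()]
    pair_counts = defaultdict(Counter)
    for a, b in zip(letters, letters[1:]):
        pair_counts[a][b] += 1
    return {
        a: {b: n / sum(counts.values()) for b, n in counts.items()}
        for a, counts in pair_counts.items()
    }

table = contact_table("the theory of the contact method")
# In this tiny sample, 'h' is followed by 'e' three times out of four
print(table["h"]["e"])  # 0.75
```

On a large English sample the same table would reveal the characteristic contacts (e.g. 'q' almost always followed by 'u') that contact analysis exploits.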
2014-07-29 02:03:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38243529200553894, "perplexity": 570.0467502914136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510264270.11/warc/CC-MAIN-20140728011744-00025-ip-10-146-231-18.ec2.internal.warc.gz"}
https://lilypond.miraheze.org/wiki/Number_Theory/Singularities
# Number Theory/Singularities

A mathematical singularity is a point (in an equation) at which the equation is not defined or its behaviour is either not understood or uncontrollable. This can happen for many reasons, for example divergences, failure to be differentiable, or a (definition/logic) gap. Since physical systems are described by mathematical equations, this has great implications, as unphysical results can occur: divergences, for example, cause a finite quantitative property of a system to tend to infinity, and a gap causes the coordinate system used to have a point from which a physical system cannot be described anymore. Physical singularities, such as ${\displaystyle \pm \infty}$, do not exist in the real world. Usually they are a sign that current knowledge is insufficient to describe what happens at extreme densities, temperatures, forces, ${\displaystyle \dots}$.

## Example

The best example for discussion is the Schwarzschild metric/solution, which appears to have singularities at ${\displaystyle r = 0}$ and ${\displaystyle r = \frac{2GM}{c^2} = r_s}$. The singularity at ${\displaystyle r = r_s}$ is an illusion; it is a coordinate singularity, which arises from a bad choice of coordinates or coordinate conditions. As soon as the coordinate system is changed (for example to Kruskal-Szekeres coordinates), the metric becomes regular at ${\displaystyle r = r_s}$ and can be extended to values of ${\displaystyle r < r_s}$. Thus, one can extend the exterior metric to the interior. This is a direct effect of the solution process: to get the Schwarzschild metric, the Einstein field equations have to be solved, but they are solved outside of the Schwarzschild radius and in the vacuum. Hence, for ${\displaystyle r = r_s}$ we have a bad choice of coordinates.
The case ${\displaystyle r = 0}$ seems different, since the curvature becomes infinite; thus, indicating the presence of a physical singularity. To see, that for ${\displaystyle r = 0}$ (and other cases) a singularity appears that might be a generic feature of the theory of general relativity, but which does not hold physically, we use the line element for the Schwarzschild metric that has the form: ${\displaystyle \begin{array}{llll} (ds)^2 & = & & c^2(d\tau)^2~+~\displaystyle\frac{2c^2}{\sqrt{1 - \displaystyle\frac{r_s}{R}}} d\tau dT~+~\displaystyle\frac{c^2}{1 - \displaystyle\frac{r_s}{R}}\bigg(1 - \displaystyle\frac{r_s}{r(\tau)}\bigg)(dT)^2~- & r(\tau)^2 ((d\theta)^2 + \sin^2(\theta)(d\phi)^2) \\ \end{array}}$ whereby the Schwarzschild radius is ${\displaystyle r_s = \frac{2GM}{c^2}}$ and the coordinates are given by: ${\displaystyle \left( \begin{matrix} \tau & T & \theta & \phi \end{matrix} \right)^T = \left( \begin{matrix} \tau\\ T\\ \theta\\ \phi \end{matrix} \right) }$ where ${\displaystyle \tau}$ is the proper time of a falling particle, ${\displaystyle T}$ is the proper time of an observer on the surface of the sphere, ${\displaystyle \theta}$ is the colatitude (angle from north) and ${\displaystyle \phi}$ is the longitude. For the orbit of a test particle we get: ${\displaystyle \begin{array}{llll} 0 \leq (ds)^2 & = & & c^2 \Bigg(d\tau + \displaystyle\frac{dT}{\sqrt{1 - \displaystyle\frac{r_s}{R}}} \Bigg)^2~-~\displaystyle\frac{c^2}{1 - \displaystyle\frac{r_s}{R}} \displaystyle\frac{r_s}{r(\tau)}(dT)^2~- & r(\tau)^2 ((d\theta)^2 + \sin^2(\theta)(d\phi)^2) \\ \end{array}}$ With ${\displaystyle \lim R \to 0}$ the sphere disappears (not only for an observer on the sphere but for all!) after ${\displaystyle U = 2\pi R}$, but after ${\displaystyle E = hf}$ ${\displaystyle \Rightarrow}$ ${\displaystyle \lambda = \frac{v_p}{f} > 0}$ the wavelength of the particle is greater than zero. Thus, the particle seems to fly out of space. 
This is of course not true, and it shows a broader problem: the singularity is not physical. The metric, though, holds for all ${\displaystyle r(\tau) > 0}$.
2021-05-10 19:26:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 26, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8820825815200806, "perplexity": 300.6107384225791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00425.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-3rd-edition/chapter-2-kinematics-in-one-dimension-exercises-and-problems-page-68/83
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (3rd Edition)

Published by Pearson

# Chapter 2 - Kinematics in One Dimension - Exercises and Problems: 83

#### Answer

The magnitude of the acceleration is $4.5~km/s^2$

#### Work Step by Step

To barely avoid a collision, the Enterprise needs to decelerate to a speed of 20 km/s while gaining 100 km on the other ship. The average speed during the deceleration period is 35 km/s, so the relative speed between the two ships is 15 km/s on average. We can find the time $t$ it takes to gain 100 km on the other ship: $t = \frac{100~km}{15~km/s} = 6.67~s$ We can find the rate of the deceleration: $a = \frac{v-v_0}{t} = \frac{20~km/s-50~km/s}{6.67~s}$ $a = -4.5~km/s^2$ The magnitude of the acceleration is $4.5~km/s^2$.
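The arithmetic above can be checked with a few lines of Python (a quick sanity check, not part of the textbook solution):

```python
v0 = 50.0    # initial speed, km/s
v = 20.0     # final speed, km/s
gap = 100.0  # distance to gain on the other ship, km

v_avg_rel = (v0 + v) / 2 - v   # average relative closing speed, km/s
t = gap / v_avg_rel            # time available for the deceleration, s
a = (v - v0) / t               # acceleration, km/s^2

print(t)       # ≈ 6.67 s
print(abs(a))  # ≈ 4.5 km/s^2
```

Note that the units come out in km/s², since all inputs are in km and km/s.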
2018-04-26 21:32:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8000583052635193, "perplexity": 758.5501976212561}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00322.warc.gz"}
https://akc.is/blog/2017-02-18-Inverse-species-and-sign-reversing-involutions.html
# Inverse species and sign-reversing involutions

Given a finite set $$U$$ with $$n\geq 1$$ elements, how would you prove that there are as many subsets of $$U$$ with even cardinality as there are with odd cardinality? One way is to use the binomial theorem: $0^n = (1-1)^n = \sum_{k=0}^n{n \choose k}(-1)^k = \sum_{k \text{ even}}{n \choose k} - \sum_{k \text{ odd}}{n \choose k}.$ This identity also holds when $$n=0$$, for $$0^0=1$$ and there is exactly one subset of the empty set, and its cardinality is even.

So that's a neat proof, but not very combinatorial. For a more combinatorial proof we can use a so-called sign-reversing involution. A function $$f:X\to X$$ is an involution if it is its own inverse, or, in symbols, $$f(f(x))=x$$. Assume that a function $$\epsilon:X\to\{-1,1\}$$ is given. We refer to $$\epsilon(x)$$ as the sign of $$x$$. Now, $$f$$ is a sign-reversing involution if for each non fixed point $$x$$ in $$X$$ the sign of $$x$$ is reversed by $$f$$; that is, $$\epsilon(f(x))=-\epsilon(x)$$. If $$f$$ is a sign-reversing involution of $$X$$, then $\newcommand{\Fix}{\mathrm{Fix}} \sum_{x\in X}\epsilon(x) = \!\!\sum_{x\in \Fix(f)}\!\!\!\epsilon(x),$ where $$\Fix(f)=\{x\in X: f(x)=x\}$$ denotes the set of fixed points of $$f$$. This is because, under $$f$$, each negative non fixed point is paired with a unique positive non fixed point, and vice versa. For counting purposes we usually also require that all fixed points of $$f$$ have the same sign, and then the right hand sum is $$\pm|\Fix(f)|.$$

When counting subsets $$S$$ of $$U$$ with respect to parity, a natural sign function is $$\epsilon(S) = (-1)^{|S|}$$. Assuming $$U$$ is equipped with a total order and that $$\hat 0$$ denotes the smallest element of $$U$$, a sign-reversing involution is given by $f(S) = \begin{cases} S \setminus \{\hat 0\} & \text{if } \hat 0\in S, \\ S \cup \{\hat 0\} & \text{if } \hat 0\notin S. \end{cases}$ For instance, if $$U=\{1,2\}$$ then $$f(\emptyset)=\{1\}$$, $$f(\{1\})=\emptyset$$, $$f(\{2\})=\{1,2\}$$, and $$f(\{1,2\})=\{2\}$$. Note that $$f$$ is fixed point free; thus $$\Fix(f)=\emptyset$$ and $\#\{S\subseteq U: |S| \text{ even}\} - \#\{S\subseteq U: |S| \text{ odd}\} = |\Fix(f)| = 0.$

Now, that's a simple and beautiful proof, but is there a more general and "natural" combinatorial proof? E.g. do we have to assume a total order on $$U$$ and do we have to mention special elements, such as $$\hat 0$$? Before proceeding to the next paragraph you might want to take a moment and try to come up with such a proof.

Let $$E$$ be the combinatorial species of sets, defined by $$E[U]=\{U\}$$. Its exponential generating function is $$E(x)=e^x$$, and its multiplicative inverse is the virtual species $$E^{-1}$$ such that $$E\cdot E^{-1}=1$$, where $$1[U]=\{U\}$$ if $$U=\emptyset$$ and $$1[U]=\emptyset$$ otherwise. Note that $E^{-1} = (1+E_+)^{-1} = \sum_{k\geq 0} (-1)^k(E_+)^k,$ where $$E_+$$ denotes the species of nonempty sets. Thus, $$E^{-1}$$ is the species of signed ballots (also called ordered set partitions), where the sign of a ballot $$B_1 B_2\dots B_k$$ is $$(-1)^k$$; that is, the parity of the number of blocks. For instance, $$\{d\}\{a,c,e\}\{b\}$$ is a ballot of $$U=\{a,b,c,d,e\}$$ and its sign is $$(-1)^3=-1$$.

The species of subsets $$\newcommand{\Pow}{\mathfrak{P}}\Pow$$ is defined by $$\Pow[U]=\{(S,U\setminus S): S\subseteq U\}$$.
Note that $(E \cdot E)[U] = \bigcup_{S\subseteq U} E[S] \times E[U\setminus S] = \bigcup_{S\subseteq U} \{S\} \times \{U\setminus S\} = \Pow[U].$ That is, $$\Pow=E^2$$ and its exponential generating function is $\Pow(x)=E(x)^2 = e^{2x}=\sum_{n\geq 0}2^n \frac{x^n}{n!}.$ Further, counting subsets with respect to the sign $$(-1)^{|S|}$$ we get $E(x)E(-x) = e^{x}e^{-x} = e^0 = 1.$ Thus, the species interpretation of there being as many subsets of even cardinality as of odd cardinality is $$E\cdot E^{-1}=1$$, where the $$1$$ on the right hand side stems from the case when the underlying set is empty. We will give a combinatorial proof of this species identity using a sign-reversing involution.

The objects of $$E\cdot E^{-1}$$ are pairs $$(S, \beta)$$ such that $$S\subseteq U$$ and $$\beta=B_1 B_2\dots B_k$$ is a signed ballot of $$U\setminus S$$. For example, $$(E\cdot E^{-1})[\{a,b,c\}]$$ consists of the pairs $\begin{array}{c|c} \text{positive pairs} & \text{negative pairs} \\ (\{a,b,c\}, \emptyset) & (\emptyset,\{a,b,c\}) \\ (\emptyset, \{a\}\{b,c\}) & (\{a\}, \{b,c\}) \\ (\emptyset, \{b\}\{a,c\}) & (\{b\}, \{a,c\}) \\ (\emptyset, \{c\}\{a,b\}) & (\{c\}, \{a,b\}) \\ (\emptyset, \{a,b\}\{c\}) & (\{a,b\}, \{c\}) \\ (\emptyset, \{a,c\}\{b\}) & (\{a,c\}, \{b\}) \\ (\emptyset, \{b,c\}\{a\}) & (\{b,c\}, \{a\}) \\ (\{a\}, \{b\}\{c\}) & (\emptyset, \{a\}\{b\}\{c\}) \\ (\{a\}, \{c\}\{b\}) & (\emptyset, \{a\}\{c\}\{b\}) \\ (\{b\}, \{a\}\{c\}) & (\emptyset, \{b\}\{a\}\{c\}) \\ (\{b\}, \{c\}\{a\}) & (\emptyset, \{b\}\{c\}\{a\}) \\ (\{c\}, \{a\}\{b\}) & (\emptyset, \{c\}\{a\}\{b\}) \\ (\{c\}, \{b\}\{a\}) & (\emptyset, \{c\}\{b\}\{a\}) \\ \end{array}$

This suggests the natural sign-reversing involution $f(S,B_1 B_2 \dots B_k) = \begin{cases} (B_1, B_2 B_3 \dots B_k) & \text{if } S=\emptyset, \\ (\emptyset, S B_1 B_2 \dots B_k) & \text{if } S\neq\emptyset. \end{cases}$ The table above is arranged so that pairs on the same row are images of each other under $$f$$.
More generally, if $$F$$ is a species such that $$|F[\emptyset]|=1$$ then the multiplicative inverse, $$F^{-1}$$, is the virtual species of lists $$\alpha_1\alpha_2\dots\alpha_k$$ of nonempty $$F$$-structures in which the sign is $$(-1)^k$$. A proof of $$F\cdot F^{-1}=1$$ is given by the sign-reversing involution $f(\alpha,\alpha_1 \alpha_2 \dots \alpha_k) = \begin{cases} (\alpha_1, \alpha_2 \alpha_3 \dots \alpha_k) &\text{if \alpha\in F[\emptyset],}\\ (\emptyset, \alpha \alpha_1 \alpha_2 \dots \alpha_k) &\text{if \alpha\notin F[\emptyset].} \end{cases}$ It has exactly one fixed point, namely $$(\emptyset, \emptyset)$$. For more examples of the use of sign-reversing involutions the reader might want to have a look at my paper with Stuart Hannah. Finally I’d like to thank my friend Bjarki for suggesting that I write this post. Addendum: Brent Yorgey has pointed out that, while it is true that $$E(-X)=E^{-1}(X)$$, the intuition that $$E(-X)$$ is the species of signed sets only holds in the presence of a linear order on the underlying set. Thus my claim that the sign-reversing involution $$f$$ above constitutes a “natural” proof of the fact that here are as many subsets of a given set $$S$$ with even cardinality as there are subsets of $$S$$ with odd cardinality is wrong. For more details see Brent’s excellent three-part series of posts: part 1, part 2, part 3.
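The subset involution defined earlier in the post is easy to check exhaustively. The Python sketch below (my illustration, not the author's) verifies, for a three-element ground set, that toggling the smallest element is an involution, reverses the parity sign of every subset, and has no fixed points, so the even and odd subsets are in bijection:

```python
from itertools import combinations

U = (1, 2, 3)
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(U, r)]

def f(S, zero=min(U)):
    # Toggle the smallest element of U in S
    return S - {zero} if zero in S else S | {zero}

assert all(f(f(S)) == S for S in subsets)                  # involution
assert all(len(f(S)) % 2 != len(S) % 2 for S in subsets)   # sign-reversing
assert not any(f(S) == S for S in subsets)                 # fixed point free

even = sum(1 for S in subsets if len(S) % 2 == 0)
odd = len(subsets) - even
print(even, odd)  # 4 4
```

Since the involution is fixed point free, the signed count of subsets is 0, matching the binomial-theorem computation at the top of the post.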
https://socratic.org/questions/help-on-gas-laws-combined-gas-law
# Help on Gas Laws (Combined Gas Law)?

## The volume of a gas-filled balloon is 30.0 L at 40 degrees Celsius and 153 kPa pressure. What volume will the balloon have at standard temperature and pressure (STP)?

Well, $\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}$... And so we solve for $V_2$... $V_2 = \frac{P_1 \times V_1 \times T_2}{P_2 \times T_1}$... and we fill in the blanks... $= \frac{153 \cdot kPa \times 30.0 \cdot L \times 273.15 \cdot K}{100 \cdot kPa \times 313.15 \cdot K} = 40.0 \cdot L$
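The arithmetic can be double-checked in a couple of lines (a quick sketch; standard pressure is taken as 100 kPa, matching the answer):

```python
# Combined gas law: P1*V1/T1 = P2*V2/T2, solved for V2.
P1, V1, T1 = 153.0, 30.0, 40.0 + 273.15   # kPa, L, K (40 °C)
P2, T2 = 100.0, 273.15                    # STP: 100 kPa, 0 °C

V2 = P1 * V1 * T2 / (P2 * T1)
print(round(V2, 1))  # → 40.0
```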
https://math.stackexchange.com/questions/1394416/is-this-set-of-event-axioms-complete
# Is this set of event axioms complete?

In the first chapter of the notes for the course "Discrete Stochastic Processes" presented by Prof. Robert Gallager, the "Axioms for events" are defined as follows:

1.2.1 Axioms for events [Chapter 1: Introduction and review of probability, page 6] Given a sample space $\Omega$, the class of subsets of $\Omega$ that constitute the set of events satisfies the following axioms:

1. $\Omega$ is an event.
2. For every sequence of events $A_1, A_2, \ldots$, the union $\bigcup_{n=1}^{\infty} A_n$ is an event.
3. For every event $A$, the complement $A^c$ is an event.

The notes also state that:

Note that the axioms do not say that all subsets of $\Omega$ are events. In fact, there are many rather silly ways to define classes of events that obey the axioms.

Based on the aforementioned axioms I cannot find any event except $\Omega$ and $\emptyset$. For example, suppose we have $\Omega = \{1, 2, 3\}$: is $A=\{1\}$ an event? If the answer is yes, based on which axioms?

• "the class of subsets of $\Omega$", so I would think that if $A$ is a subset of $\Omega$, then yes. – Anthony Aug 12 '15 at 13:21
• So essentially you say every subset of $\Omega$ is an event? – Taher Rahgooy Aug 12 '15 at 13:22
• Those conditions look consistent but not complete. I.e. they look like they are necessary for any characterisation of events but not sufficient to define all events. – Colm Bhandal Aug 12 '15 at 13:25
• As I understand it, $A$ needs to be in $\Omega$, and it also needs to be in the subset of $\Omega$ which represents the set of events. – Anthony Aug 12 '15 at 13:27
• You are correct in that only $\Omega$ and $\emptyset$ can be deduced as being events from these axioms. In fact, from only axioms $1$ and $3$. – Colm Bhandal Aug 12 '15 at 13:27

A collection of events satisfying axioms 1-3 is called a $\sigma$-algebra.
You, no doubt, encountered this in the context of a probability space $$(\Omega,\mathcal{A},P)$$, where the second member of the triple, $\mathcal{A}$, the collection of events on which the probability measure $P$ is defined, must be a $\sigma$-algebra. Both $\{\emptyset, \Omega\}$ and $2^{\Omega}$, the set of all subsets of $\Omega$, are examples of $\sigma$-algebras. But, between these two extremes, there are many more $\sigma$-algebras, and you have to choose one of them to be your set of events when you define a probability space. When $\Omega$ is finite, you will usually choose $2^{\Omega}$ for your set of events. But, when $\Omega=\mathbb{R}$, it turns out that there is, for example, no probability measure defined on the whole of $2^{\mathbb{R}}$ such that $P(\{x\})=0$ for every singleton $\{x\} \subset \mathbb{R}$, and if we are interested in such measures, we have to restrict our attention to some smaller $\sigma$-algebra of events that will make it possible to define such a probability measure. In the case of $\Omega=\mathbb{R}$ it will usually be the $\sigma$-algebra of Borel sets $\mathcal{B}(\mathbb{R})$. In short, you choose the $\sigma$-algebra of measurable events so that it is small enough to enable us to define a desired probability measure on it, and large enough to encompass all the sets we are interested in. In principle, it will vary from probability space to probability space.

• You can put a probability measure on $2^\Omega$ for uncountable $\Omega$. For instance take a countable subset, pick your favorite probability measure to put on the subset, and assign probability zero to any set disjoint from this subset. You can't put a translation-invariant measure on $2^{\mathbb{R}}$, but that's another story entirely. – Ian Aug 12 '15 at 14:25
• @Ian Yes, thank you. I corrected my answer. – Zoran Loncarevic Aug 12 '15 at 15:50

The axioms are right, and one cannot say that $A = \{1\}$ is an event.
It may be that your $\sigma$-algebra of events is simply $\{\emptyset, \Omega\}$. So to answer the exercise, one should say: "Based on the axioms one cannot say whether $A$ is an event".
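For a finite sample space the axioms can be checked mechanically (for finite $\Omega$, countable unions reduce to finite ones); a small sketch with my own helper names, confirming that $\{\emptyset,\Omega\}$ and $2^\Omega$ both satisfy the axioms while adding $\{1\}$ alone breaks closure under complement:

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_sigma_algebra(omega, events):
    """Check the event axioms on a finite sample space.

    For finite omega, closure under countable unions is equivalent to
    closure under pairwise union, which is what we check here.
    """
    omega = frozenset(omega)
    if omega not in events:               # axiom 1
        return False
    for a in events:
        if omega - a not in events:       # axiom 3 (complement)
            return False
        for b in events:
            if a | b not in events:       # axiom 2 (finite unions suffice)
                return False
    return True

omega = {1, 2, 3}
trivial = {frozenset(), frozenset(omega)}
full = set(powerset(omega))

print(is_sigma_algebra(omega, trivial))                     # True
print(is_sigma_algebra(omega, full))                        # True
print(is_sigma_algebra(omega, trivial | {frozenset({1})}))  # False: {2,3} is missing
```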
https://zbmath.org/?q=an:0844.20039
# zbMATH — the first resource for mathematics Finite semigroups and universal algebra. Rev. and transl. by the author. (English) Zbl 0844.20039 Series in Algebra 3. Singapore: World Scientific. (ISBN 981-02-1895-8). xvi, 511 p. (1994). [For a review of the Portuguese original (São Paulo, 1992) see Zbl 0757.08001.] Finite semigroups have been intensively investigated from the very beginning of semigroup theory. The modern trend in studying finite semigroups is based on what may be called the varietal approach in which one focuses on certain variety-like classes rather than individual semigroups. Such classes (called pseudovarieties or varieties of finite semigroups) were introduced by S. Eilenberg who has shown that the algebraic classification of formal languages inevitably leads to pseudovarieties [Automata, languages and machines, Vol. B (1976; Zbl 0359.94067)]. Semigroup pseudovarieties are also closely related with another important branch of theoretical computer science – the theory of finite automata. The book under review presents some important recent achievements in semigroup pseudovarieties. The key point of the author’s approach is to consequently use ideas and methods originated in universal algebra. The fact that there exist reasonable pseudovarietal analogues of the main tools of the theory of (Birkhoff) varieties such as identities and free algebras was first discovered by J. Reiterman [Algebra Univers. 14, 1-10 (1982; Zbl 0484.08007)] but it was the author who systematically explored these analogues and made them really work for finite semigroup theory. The book consists of 13 Chapters. Chapter 0 (Introduction) recalls the aforementioned motivation for the study of semigroup pseudovarieties. Chapters 1-4 form Part I of the book entitled “Finite Universal Algebra”. Chapters 1 and 2 introduce some basic concepts and results which are related to Birkhoff varieties, ordered sets and uniform spaces frequently used in studying pseudovarieties. 
Chapter 3 contains the fundamental results about pseudovarieties of arbitrary finite algebras. The crucial notions of a topological algebra of implicit operations and of a pseudoidentity appear here. Chapter 4 introduces certain algorithmic problems including the main problem of the theory of pseudovarieties, the membership problem, and discusses its relationship with the finite pseudoidentity basis property. Chapter 5-12 form Part II, “Finite Semigroups and Monoids”. Chapter 5 contains some necessary preliminaries about semigroups and graphs. Chapters 6 and 8 contain a detailed study of, respectively, permutative pseudovarieties and pseudovarieties of semigroups whose regular $$\mathcal D$$-classes are subsemigroups. Chapter 7 studies the interrelations between pseudovarieties of semigroups and monoids. Chapters 9, 10 and 11 are basically devoted to the membership problem for pseudovarieties arising as the result of taking the lattice join or the semidirect product of two given pseudovarieties or applying the power operator to a given pseudovariety. Chapter 12 deals with factorizations of implicit operations. The main text of the book concludes with a list of 60 research problems and with bibliographical notes for each chapter. Both the problems and the bibliographical notes are essentially updated for the English translation. The book under review constitutes an important contribution to the most active part of the present theory of finite semigroups. An overwhelming majority of the results included in it is very new and has been scattered over journals so far. The book does not cover all of the theory of semigroup pseudovarieties (in fact, no book could do this just because the field is now developing too fast) but it is extremely rich in material and ideas presented with skill and dedication. The book has already influenced the area essentially, and its influence will certainly grow. 
There are a few misprints and inaccurate formulations in the book, mostly harmless. I list here only those which might confuse a less experienced reader. The claim of Exercise 3.2.16 is wrong (in fact, there are uncountable chains and antichains even among pseudovarieties of abelian groups). A phrase on page 109 might be interpreted as saying that the word problem for one-relator monoid presentations is known to be decidable while it is still open. In the formulation of Theorem 6.1.20 the lattice $${\mathcal G}({\mathcal N}\text{il}\cap{\mathcal C}\text{om})$$ should stand instead of the lattice $${\mathcal G}({\mathcal N}\text{il})$$. In the proof of the Krohn-Rhodes theorem (pages 296-297) one should use the pseudovariety $${\mathbf M}{\mathbf K}_1$$ rather than its dual $${\mathbf M}{\mathbf D}_1$$. As mentioned, the theory of semigroup pseudovarieties has been growing rapidly, and therefore, it is not a surprise that some of the research problems proposed in the book are solved now. In fact, the announced solutions of Problems 8, 15, 16 and 17 have been already footnoted in the book. Meanwhile the solution of Problem 8 appeared in the reviewer’s paper [Int. J. Algebra Comput. 5, 127-135 (1995; Zbl 0834.20058)], and the author and P. Weil’s paper “Free profinite $$\mathcal R$$-trivial monoids” containing the solution of Problem 15 will soon appear in the same journal. G. Churchill has announced that Problem 10 has a negative solution. A. Azevedo and M. Zeitoun answered all three questions of Problem 24: the answers are ‘Yes’ to question b) and ‘No’ to questions a) and c). Problem 29 (that asks whether the semidirect product of two pseudovarieties generated by a finite semigroup is again generated by a finite semigroup) can be easily answered in the negative. Indeed, consider the semidirect square $${\mathbf A}{\mathbf b}_p^2$$ of the pseudovariety generated by the cyclic group of prime order $$p$$. 
Then $${\mathbf A}{\mathbf b}_p^2$$ contains nilpotent groups of any nilpotency class: for example, the group given by the presentation $$\langle x, y_1, \ldots, y_n \mid x^p = y_i^p = 1\;(1 \leq i \leq n)$$, $$y_i y_j = y_j y_i$$ $$(1 \leq i < j \leq n)$$, $$y_k^x = y_{k+1}$$ $$(1 \leq k < n)$$, $$y_n^x = y_n \rangle$$ is easily seen to belong to $${\mathbf A}{\mathbf b}_p^2$$ and to have the nilpotency class $$n$$. However, the nilpotency class of any group in a pseudovariety generated by a finite semigroup $$S$$ cannot exceed the maximum of the nilpotency classes of nilpotent subgroups of $$S$$. Therefore the pseudovariety $${\mathbf A}{\mathbf b}_p^2$$ cannot be generated by a finite semigroup. L. Teixeira has shown that the conjecture in Problem 31 is false; the problem itself still remains open. P. G. Trotter has announced an example answering Problem 41 in the negative. It is worth mentioning an interesting feature of the book under review – it was basically translated from the Portuguese by a computer program written in Prolog by M. Filgueiras, see his report [A successful case of Computer Aided Translation, Proc. 4th Conf. on Applied Natural Language Processing, M. Kauffman (ed.), 91-94 (1994)]. The program (running on a MIPS computer) converted the whole TEX-file of the Portuguese original into the English TEX-file (which then has been edited by hand) in a little over 2 minutes. I think the book is a must for researchers in the area but it is also very useful for all those who want to trace modern developments in the theory of semigroups. ##### MSC: 20M07 Varieties and pseudovarieties of semigroups 20-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to group theory 20-02 Research exposition (monographs, survey articles) pertaining to group theory 08B15 Lattices of varieties 20M05 Free semigroups, generators and relations, word problems 08-01 Introductory exposition (textbooks, tutorial papers, etc.) 
pertaining to general algebraic systems 08C15 Quasivarieties 20M35 Semigroups in automata theory, linguistics, etc.
https://scicomp.stackexchange.com/questions/36814/electrostatic-force-simulate-trajectory-of-test-particle-using-runge-kutta-f
# Electrostatic Force - Simulate Trajectory of Test Particle using Runge-Kutta - Force always Repels

In the center of a 2D plane a positive static charge Q is placed at position r_prime. This charge creates a static electrical field E. Now I want to place a test particle with charge q and position vector r in this static E-field and compute its trajectory using the 4th-order Runge-Kutta method. For the initial conditions

• Q = 1, r_prime=(0,0)
• q = -1, r = (9, 0), v = (0,0)

one would expect that the negatively charged test particle should move towards the positive charge in the center. Instead I get the following result for the time evolution of the test particle's x component: [9.0, 9.0, 8.999876557604697, 8.99839964155741, 8.992891977287334, 8.979313669171093, 8.95243555913327, 8.906134626441052, 8.83385018027209, 8.729257993736123, 8.587258805422984, 8.405449606446608, 8.186368339303788, 7.940995661159361, 7.694260386250479, 7.493501689700884, 7.420415546859942, 7.604287312065716, 8.226733652039988, 9.498656905483394, 11.60015461031076, 14.621662121713964, 18.56593806599109, .... The results of the first iteration steps show the correct behavior, but then the particle is strangely repelled to infinity. There must be a major flaw in my implementation of the Runge-Kutta method, but I checked it several times and I can't find any... Could someone please take a quick look over my implementation and see if they can find a bug.
```python
"""
Computes the trajectory of a test particle with charge q with position
vector r = R[:2] in an electrostatic field that is generated by charge Q
with fixed position r_prime
"""
import numpy as np
import matplotlib.pyplot as plt


def distance(r, r_prime, n):
    """Computes the euclidean distance to the power n between positions r and r_prime."""
    return np.linalg.norm(r - r_prime)**n


def f(R, r_prime, q, Q):
    """
    The equation of motion for the particle is:
        d^2/dt^2 r(t) = F = constants * q * Q * (r - r_prime)/||r - r_prime||^3
    To apply the Runge-Kutta method we transform the above (constants are set
    to 1) two-dimensional second-order ODE into four one-dimensional ODEs:
        d/dt r_x = v_x
        d/dt r_y = v_y
        d/dt v_x = q * Q * (r_x - r_prime_x)/||r - r_prime||^3
        d/dt v_y = q * Q * (r_y - r_prime_y)/||r - r_prime||^3
    """
    r_x, r_y = R[0], R[1]
    v_x, v_y = R[2], R[3]
    dist = 1 / distance(np.array([r_x, r_y]), r_prime, 3)
    # Right-hand side of the four 1D ODEs
    f1 = v_x
    f2 = v_y
    f3 = q * Q * dist * (r_x - r_prime[0])
    f4 = q * Q * dist * (r_y - r_prime[1])
    return np.array([f1, f2, f3, f4], float)


# Constants for the simulation
a = 0.0          # t_0
b = 10.0         # t_end
N = 100          # number of iterations
h = (b - a) / N  # time step
tpoints = np.arange(a, b + h, h)

# Create lists to store the computed values
xpoints, ypoints = [], []
vxpoints, vypoints = [], []

# Initial conditions
Q, r_prime = 1, np.array([0, 0], float)   # charge and position of particle that creates the static E-field
q, R = -1, np.array([9, 0, 0, 0], float)  # charge and its initial position + velocity r=R[:2], v=R[2:]

for dt in tpoints:
    xpoints.append(R[0])
    ypoints.append(R[1])
    vxpoints.append(R[2])
    vypoints.append(R[3])
    # Runge-Kutta 4th-order method
    k1 = dt * f(R, r_prime, q, Q)
    k2 = dt * f(R + 0.5 * k1, r_prime, q, Q)
    k3 = dt * f(R + 0.5 * k2, r_prime, q, Q)
    k4 = dt * f(R + k3, r_prime, q, Q)
    R += (k1 + 2*k2 + 2*k3 + k4)/6

plt.plot(tpoints, xpoints)  # should converge to 0
```

Edit 09.02.2021: The equation of motion for the test particle with
charge q and position vector r(t) is given by $$\frac{d^2}{dt^2} \mathbf{r}(t) = \frac{\mathbf{F}}{m} = \frac{q\mathbf{E}}{m} = \frac{k}{m} \frac{q\cdot Q}{|\mathbf{r} - \mathbf{r}'|^3} \cdot (\mathbf{r} - \mathbf{r}') \qquad (1)$$ Charge Q generates an electrostatic field E and has a constant position r'. For simplicity we set the constant factors k, m equal to one. Equation (1) is a two-dimensional second-order ODE. To apply Runge-Kutta we transform it into four one-dimensional ODEs: \begin{align} \frac{d}{dt}r_x &= v_x \qquad &(2)\\ \frac{d}{dt}r_y &= v_y \qquad &(3)\\ \frac{d}{dt}v_x &= \frac{q\cdot Q}{|\mathbf{r} - \mathbf{r}'|^3} \cdot (r_x - r'_x) \qquad &(4) \\ \frac{d}{dt}v_y &= \frac{q\cdot Q}{|\mathbf{r} - \mathbf{r}'|^3} \cdot (r_y - r'_y) \qquad &(5) \end{align} Except for the 1/6 factor I forgot in the last RK4 step (thanks for pointing it out, Lutz), everything looks correct from my point of view. I mean, the system behaves correctly at the beginning: the negative test charge is attracted by the positive one. But then the charge is repelled for reasons I don't understand.

• Can you write out the equations you are trying to solve? Having a concise mathematical statement is often a good first step towards finding the bug! Feb 8 at 17:05

## 1 Answer

The problem is, first, that the vector field has a singularity at the origin. Moving straight towards the singularity results in a catastrophe in finite time. Second, and related, the speed and all stiffness measures (higher derivatives) increase significantly the closer you get to the origin. So while one would expect a Kepler ellipse (after setting the initial velocity to some non-zero value in the $$y$$ direction), the distortion due to the truncation error produces something more like a swing-by maneuver with an extreme gain in velocity. Your step size is so large that you do not see this; it happens in the internal stages of the steps. To get better results, you would need more control over the speed.
One variant is to normalize the vector field that you feed to RK4 to something like $$\hat v=\frac{v}{1+\|v\|}$$. Here you would need to reconstruct the points on the time axis by integrating the scale factor. The other variant is to use a method with adaptive step size, such as all the usual library methods.

• I see. Thx a lot Lutz! I will try both approaches that u mentioned in your post and gonna give u a short update if it worked. Feb 11 at 8:05
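As a sanity check that the blow-up is not caused by the RK4 update itself, the classical step (weights $(k_1+2k_2+2k_3+k_4)/6$ with a fixed step size $h$) can be verified on a problem with a known solution; a minimal sketch, not the asker's code:

```python
import math

def rk4_step(f, t, y, h):
    # Classical 4th-order Runge-Kutta update
    k1 = h * f(t, y)
    k2 = h * f(t + 0.5 * h, y + 0.5 * k1)
    k3 = h * f(t + 0.5 * h, y + 0.5 * k2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, y: -y          # dy/dt = -y, exact solution y(t) = exp(-t)
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):         # integrate from t = 0 to t = 1
    y = rk4_step(f, t, y, h)
    t += h

err = abs(y - math.exp(-1.0))
print(err < 1e-8)  # True: global error is O(h^4)
```

On a smooth, non-singular problem the method is accurate; near the singularity at the origin the local error estimate breaks down, which is exactly the failure mode described above.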
http://mathhelpforum.com/advanced-applied-math/87937-work-energy-power-print.html
# Work, energy, power?

A body of mass $m$ slides down a plane inclined at an angle $\alpha$. The coefficient of friction is $\mu$. Find the rate at which kinetic plus gravitational potential energy is dissipated at any time $t$.

Motion exists along the surface of the plane only (call this direction horizontal). Resolve the weight force $B=mg$ into two components: the vertical and horizontal forces, $N=mg\cos(\alpha)$ and $F=mg\sin(\alpha)$ respectively. Along the horizontal direction, two forces are applied: $F$ and the friction $T=\mu m g \cos(\alpha)$. Call $v(t)$ the velocity at time $t$. By Newton's law, $F-T=mv'(t)$, and solve to get $v(t)=g(\sin(\alpha)-\mu \cos(\alpha))t$ (assume the body was resting at $t=0$). If $h(t)$ is the height dropped and $S(t)=\int_0^t v(s)\,ds$ the length traveled at time $t$, then a simple argument shows $h(t)=S(t)\sin(\alpha)$. Since the body descends, its potential energy decreases, so the total energy is $E(t)=-mgh(t)+\tfrac12 mv^2(t)=-mgS(t)\sin(\alpha)+\tfrac12 mv^2(t)$. Differentiate to get $E'(t)=-mgv(t)\sin(\alpha)+mv(t)v'(t)=-\mu m g\cos(\alpha)\,v(t)$, i.e. energy is dissipated at the rate $\mu m g\cos(\alpha)\,v(t)$, which is the friction force times the speed.
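Note that the potential-energy term must enter with a minus sign (the height drops as the body slides), which makes the dissipation rate come out as the friction force times the speed, $\mu m g\cos(\alpha)\,v(t)$. A quick numeric check with sample parameter values of my own choosing:

```python
import math

# Check dE/dt = -mu*m*g*cos(alpha)*v(t) numerically for sample parameters
m, g, mu, alpha = 2.0, 9.81, 0.2, math.radians(30)

acc = g * (math.sin(alpha) - mu * math.cos(alpha))  # v'(t), constant

def v(t):
    return acc * t

def E(t):
    S = 0.5 * acc * t**2                            # distance traveled
    return -m * g * S * math.sin(alpha) + 0.5 * m * v(t)**2

t, dt = 1.5, 1e-6
numeric = (E(t + dt) - E(t - dt)) / (2 * dt)        # central difference
analytic = -mu * m * g * math.cos(alpha) * v(t)
print(abs(numeric - analytic) < 1e-4)               # True
```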
https://www.studyadda.com/question-bank/critical-thinking_q11/1497/114211
# Mark the oxide which is amphoteric in character [MP PMT 2000]

A) $CO_2$ B) $SiO_2$ C) $SnO_2$ D) $CaO$

$SnO_2$ reacts with both a base and an acid, so it is amphoteric; the answer is C:

$SnO_2 + 2NaOH \to Na_2SnO_3 + H_2O$

$SnO_2 + 4HCl \to SnCl_4 + 2H_2O$
http://mathhelpforum.com/trigonometry/38943-trigonometric-identity-print.html
# trigonometric identity

• May 19th 2008, 08:05 PM singleton2787
trigonometric identity
Been 20 years since I've identified trig identities. We are stuck on a particular type and would appreciate some explanation. 1/sin(-x) * (1-cos^2x) that's cos(squared x) and 1+1/cos(-x) / -sinx-tanx

• May 19th 2008, 08:10 PM Mathstud28
Quote: Originally Posted by singleton2787 Been 20 years since I've identified trig identities. We are stuck on a particular type and would appreciate some explanation. 1/sin(-x) * (1-cos^2x) that's cos(squared x) and 1+1/cos(-x) / -sinx-tanx
Well, the right side is indiscernible, but $\frac{1}{\sin(-x)}=\frac{-1}{\sin(x)}=-\csc(x)$ and $1-\cos^2(x)=\sin^2(x)$ So $\sin^2(x)\cdot(-\csc(x))=-\sin(x)$ rewrite the other one in a more hospitable form

• May 19th 2008, 08:23 PM singleton2787
problem re-written
Problem 1 (simplify) 1/sin(-x) * (1-cos^2x)

• May 19th 2008, 08:28 PM Mathstud28
Quote: Originally Posted by singleton2787 Problem 1 (simplify) 1/sin(-x) * (1-cos^2x)
I am very sorry I was unclear...the above work was the solution to that problem...I meant the other one...once again sorry

• May 19th 2008, 08:35 PM singleton2787
I don't understand...
how does $1-\cos^2(x)=\sin^2(x)$?

• May 19th 2008, 08:40 PM Mathstud28
Quote: Originally Posted by singleton2787 how does $1-\cos^2(x)=\sin^2(x)$?
Because $\cos^2(x)+\sin^2(x)=1$ This can be derived multiple ways...the most intuitive being: on the unit circle, which has a radius of one, any point on the circle has coordinates $(\cos(x),\sin(x))$. So the distance from the origin to any point on the circle, which I stated earlier, is 1. So setting up the distance equation we have $\sqrt{(0-\cos(x))^2+(0-\sin(x))^2}=1$ Squaring both sides we get $\cos^2(x)+\sin^2(x)=1$ Usually in trig classes this identity is just taken to be true

• May 19th 2008, 08:53 PM singleton2787
second problem
thanks for your help can you explain this one?
$\frac{1+sec(-x)}{sin(-x)+tan(-x)}$ • May 19th 2008, 09:18 PM Reckoner Quote: Originally Posted by singleton2787 thanks for your help can you explain this one? $\frac{1+sec(-x)}{sin(-x)+tan(-x)}$ First, convert everything to sines and cosines using the identities $\sec x = \frac1{\cos x}$ $\tan x = \frac{\sin x}{\cos x}$ Then, combine the fractions in the numerator and denominator, reduce, factor and cancel, and you should be left with $-\csc x$ if you do it right. When working this one, it may be helpful to observe that, since sine is an odd function and cosine is even, $\sin\left(-x\right) = -\sin x$ $\cos\left(-x\right) = \cos x$ If you have difficulty with the simplification, let us know.
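The simplification can be spot-checked numerically at a few sample angles (a quick sketch, avoiding multiples of $\pi/2$ where sec and csc are undefined):

```python
import math

def lhs(x):
    # (1 + sec(-x)) / (sin(-x) + tan(-x))
    return (1 + 1 / math.cos(-x)) / (math.sin(-x) + math.tan(-x))

def rhs(x):
    return -1 / math.sin(x)   # -csc(x)

samples = [0.3, 0.7, 1.2, 2.0, -0.5]
print(all(math.isclose(lhs(x), rhs(x)) for x in samples))  # True
```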
https://www.gradesaver.com/textbooks/math/other-math/thinking-mathematically-6th-edition/chapter-2-set-theory-2-4-set-operations-and-venn-diagrams-with-three-sets-exercise-set-2-4-page-94/98
## Thinking Mathematically (6th Edition) $VIII$ 60 Minutes was not in the Top 6 in any of the three years. Thus, it must be placed in region $VIII$.
https://www.nature.com/articles/s41598-020-68797-3?error=cookies_not_supported&code=88ce54b2-cbf0-4735-a1e0-d6197994fded
## Introduction

Decrease in skeletal muscle quantity and quality, commonly termed sarcopenia, is known as a strong risk factor for adverse outcomes in several chronic and malignant diseases. Sarcopenia was shown to have a high socio-economic and personal burden and leads to impaired activity in daily life, decreased mobility, loss of independence and a higher mortality1,2,3. Initially, sarcopenia was considered to be an age-related phenomenon4. However, it is now increasingly realized that sarcopenia may also occur in younger patients, for example secondary to systemic diseases. Moreover, it was realized that sarcopenia may not be captured by conventional anthropometric measurements such as body mass index (BMI) or waist-to-hip-ratio, particularly in obese patients1,5. Amount and quality of skeletal muscles can be assessed by cross-sectional imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). Previous studies indicate that both modalities may provide imaging based quantitative biomarkers of sarcopenia, and that these biomarkers may reveal prognostic information in various severe diseases6,7,8,9,10. Thereby, cross sectional areas (CSA) of skeletal muscles at distinct anatomical landmarks were shown to provide accurate surrogates of total skeletal muscle amount and therefore may be used to identify patients with low muscle mass1,6,7. The most common approaches to estimate skeletal muscle amount are determination of circumferential skeletal muscle area or psoas muscle area, both typically obtained at lumbar vertebral levels1. However, these landmarks are frequently not captured in several imaging protocols, although sarcopenia is known to be a relevant factor in many chronic diseases. Therefore, alternative landmarks were proposed, such as the level of the superior mesenteric artery (SMA)6.
CT and MRI derived measurements of skeletal muscle fat infiltration (MFI) as indicators of muscle quality were shown to predict survival following transcatheter aortic valve replacement due to aortic stenosis or local ablative treatment of colorectal liver cancer11,12. According to international guidelines, functional tests such as grip strength measurements or the chair stand test may be used to identify patients with probable sarcopenia. The diagnosis is then confirmed by determination of low skeletal muscle quantity or quality, for which, as stated in these guidelines, both CT and MRI based measurements may be applied1,5. However, it is unclear how CT and MRI derived measurements of skeletal muscles are related to one another and whether the obtained measurements of skeletal muscle mass and quality are interchangeable between imaging modalities. Therefore, we aimed to (a) assess CSA and MFI measurements as surrogates of skeletal muscle mass and quality in CT and MRI in subjects who received both imaging modalities on the same day and (b) compare measurements intra-individually to assess agreement between modalities for determination of skeletal muscle mass and quality.

## Methods

### Study design and population

In this study, a subset of participants of a lung cancer screening program of the University Hospital of Bonn was prospectively enrolled. Consecutive participants who underwent screening between 05/2019 and 08/2019 were included. For each included subject, the individual CT and MRI scans of the chest were performed within the same day. Exclusion criteria were a self-reported or validated history of cancer, as well as any type of metallic implants which preclude MRI examination such as cardiac pacemakers, implantable defibrillators, neurostimulators, or cochlear implants. Furthermore, pregnant subjects as well as those with claustrophobia were excluded.

### Imaging protocol

For each study subject, CT and MRI examinations were performed on the same day.
Low-dose helical CT of the chest in supine positioning without administration of iodinated contrast was performed on a clinical CT-scanner (Brilliance iCT SP 128 CT, Philips Healthcare, Best, the Netherlands) with the following imaging parameters: tube current (exposure time product), 25 mAs; tube voltage, 120 kV; collimation, 16 × 0.63 mm; reconstructed slice thickness, 2 mm. As in previous studies, images with a slice thickness of 5 mm were secondarily reconstructed to perform muscle measurements8. MRI was performed on a clinical whole body 1.5 T scanner (Ingenia 1.5 T, Philips Healthcare, Best, the Netherlands) using a 32-channel torso coil with a digital interface for signal reception. The imaging protocol included an axial six-echo 3D spoiled gradient-echo Dixon sequence for chemical-shift encoded fat–water separation with T2* correction. The imaging parameters were as follows: repetition time/echo time (TE)/ΔTE, 7.8/1.08/1.1 ms; field of view, 350 × 302 × 150 mm3; acquired voxel size, 1.99 × 1.99 × 6.00 mm3; reconstructed voxel size, 0.99 × 0.99 × 3.00 mm3; flip angle, 5°. Data were acquired during one breathhold (scan duration: 14.9 s). Proton density fat fraction (PDFF) maps for determination of muscle fat content were reconstructed directly on the imager console after image acquisition using vendor specific software.

### Image analysis

For comparison of skeletal muscle measurements, single-slice images at the level of the superior mesenteric artery (SMA) were exported from CT and MRI image datasets for each subject to a separate workstation. The SMA level was chosen as the anatomical landmark for skeletal muscle measurements since it was previously demonstrated that single-slice measurements of paraspinal skeletal muscles at this level provide a highly accurate surrogate of skeletal muscle mass and may reveal prognostic information in patients with chronic and malignant diseases6,11,13.
For skeletal muscle analysis, a previously described in-house tool written in MATLAB (The Mathworks, Natick, MA) was adapted to the particular task of this study6,12,14. All measurements were performed by a board-certified radiologist experienced in abdominal diagnostic imaging and body composition analysis. Intrareader reproducibility was excellent for assessment of skeletal muscle area [ICC = 0.996; 95% confidence interval (CI), 0.979–0.999; P < 0.001] and skeletal muscle fat infiltration (ICC = 0.996; 95% CI, 0.978–0.999; P < 0.001). Patient information was blinded. Skeletal muscle area was defined as the bilateral CSA of the paraspinal skeletal muscle compartment, including the erector spinae muscles and the spinotransverse muscle group. Due to their low extent and inconsistent expression at the SMA level, the psoas major and quadratus lumborum muscles were excluded from the segmentation. The paraspinal skeletal muscle compartment was identified and its contours were manually traced, separating it from the erector spinae aponeurosis, interspinous ligaments, adjacent parts of the vertebral bodies as well as bordering anatomical structures of the thoracic wall15. To allow for highly accurate segmentation, images were displayed in standard display settings with the option to manually adjust contrast between skeletal muscles and adjacent tissues by the reader, where appropriate. For determination of MFI, the mean radiation attenuation in Hounsfield units (HU) and the tissue fat content expressed as MRIPDFF in percent (%) were calculated for the CSA in CT and MRI, respectively.

### Statistical analysis

Statistical analysis was conducted using commercially available statistical software (Prism version 8, GraphPad, La Jolla, CA). Continuous and categorical variables are given as means with standard deviation and total numbers with percentages, respectively. The Mann–Whitney U test was used for group comparison statistics. Continuous data were plotted as violin plots.
Intraclass correlation coefficient (ICC) was calculated to assess intrareader reproducibility. ICC estimates and their 95% confidence intervals (CI) were assessed for a single rater, based on assessment of absolute agreement using a two-way mixed-effects model. Pearson correlation coefficients (r) and Bland–Altman plots with mean absolute differences and 95% limits of agreement were calculated to assess interrelations between CT and MRI derived measures of skeletal muscle mass (CSA) and skeletal muscle quality (MFI). Thereby, measurements for assessment of MFI systematically differed between CT and MRI for methodological reasons. While in CT, mean skeletal muscle radiation attenuation is measured to determine fat content of skeletal muscles16, MRIPDFF allows for direct assessment of the tissue fat fraction. This disparity precluded direct comparison of established CT and MRI derived MFI measures with the Bland–Altman method. However, as detailed in the results section, CT and MRI derived MFI values were highly correlated with one another. This relation allowed for calculation of a linear regression model for direct estimation of skeletal muscle fat content from skeletal muscle radiation attenuation values based on the corresponding MRIPDFF values. This measure was termed CTFF, enabling direct comparison of CT and MRI derived measures of skeletal muscle fat content on a ratio scale. For all analyses, a P value < 0.05 was considered to indicate a statistically significant difference.

### Ethical approval and informed consent

The presented study was approved by the institutional review board of the University of Bonn and hence all methods were performed in compliance with the ethical standards set in the 1964 Declaration of Helsinki as well as its later amendments. Written informed consent was obtained prior to examination from all subjects.

## Results

### General characteristics

Fifty patients (19 female, 31 male) with a mean age of 61 ± 6 years were evaluated.
Baseline characteristics of the study population are detailed in Table 1. Mean cross-sectional area (CSA) of the paraspinal skeletal muscles was significantly higher in male patients compared to female patients in both CT (51.8 ± 10.5 cm2 vs. 38 ± 4.8 cm2, P < 0.001) and MRI (47.4 ± 11.7 cm2 vs. 35.8 ± 4.1 cm2, P < 0.001). For the entire population, mean CSA tended to be higher in CT compared to MRI (46.6 ± 11 vs. 43.0 ± 11.1 cm2, P = 0.050). No significant differences between area-based measurements obtained for the left and right paraspinal skeletal muscle compartments were observed for both male and female patients (Table 2, all P > 0.05). Mean radiation attenuation of the total CSA for the entire study population was 30 HU and ranged between -31 HU and 44 HU (Fig. 1). No significant differences with regard to mean radiodensity of total CSA between male and female patients were observed (Table 3, P = 0.828). In MRI, the mean proton density fat fraction (MRIPDFF) of the total CSA for the entire study population was 20% and ranged between 11 and 56%. No significant differences between male and female patients with regard to MRIPDFF were observed (P = 0.090).

### Interrelations between CT and MRI derived measurements of skeletal muscle area

CT and MRI derived measurements of skeletal muscle area were highly correlated (r = 0.93, P < 0.001, Fig. 2a). According to Bland–Altman analysis, a bias of 3.6 cm2 of CT over MRI derived measurements of skeletal muscle area was identified (Fig. 2b). Exemplary images of CSA in intraindividual, intermodal comparison of patients from the study population are illustrated in Fig. 3.

### Interrelations between CT and MRI derived measurements of muscle fat content

Measurements for determination of muscle fat infiltration (MFI) in CT and MRI were highly correlated (r = − 0.90, P < 0.001, Fig. 4a).
Based on this interrelation, a linear regression model was fit to calculate CT derived fat fraction (CTFF) from mean radiodensity of the CSA of the total paraspinal skeletal muscle compartment based on the corresponding MRIPDFF values. CTFF was defined by the following equation:

$$CT^{FF}\,(\%) = -0.562 \times \text{mean radiodensity (HU)} + 36.71$$

Mean CTFF was 21% and ranged between 13 and 53%. Bland–Altman analysis demonstrated a bias of -0.9% of CTFF over corresponding MRIPDFF values (Fig. 4b). Exemplary images of CT and MRI derived MFI measurements in an intraindividual comparison are illustrated in Fig. 5.

## Discussion

The purpose of this prospective study was to evaluate the interrelation between CT and MRI derived imaging biomarkers of sarcopenia. As the key finding, imaging biomarkers of both skeletal muscle mass and quality were highly correlated in an intra-individual, intermodal comparison. The provided results allowed for development of a linear regression model to directly estimate skeletal muscle fat content from CT based on the corresponding MRIPDFF values, which in the future may help to improve clarity of CT derived determination of skeletal muscle fat content as an indicator of skeletal muscle quality. Objective and reliable assessment of skeletal muscles is mandatory to make the diagnosis of sarcopenia1,10. Both CT and MRI are used for several indications in clinical routine. As demonstrated in previous studies, clinical CT and MRI may be used beyond their primary diagnostic purposes to opportunistically obtain measurements of body composition, including skeletal muscle mass and quality8,11,12,13,17. This approach allows additional, potentially relevant prognostic information to be obtained from routine diagnostic imaging, avoiding additional examinations, and may enhance the feasibility of sarcopenia assessment in clinical routine.
However, to warrant wide applicability of opportunistic body composition assessment from diagnostic imaging, the agreement between CT and MRI derived measurements needs to be determined. Two previous studies investigated the agreement between CT and MRI derived measurements of skeletal muscle area18,19. Although these studies indicated that estimates of skeletal muscle mass in CT and MRI were highly correlated, the large time span of up to 101 days between the respective examinations, which were used for intra-individual comparison in these studies, may be considered an important confounder. Factors such as aging, physical inactivity, or wasting due to acute and chronic diseases are known to induce decline of skeletal muscle mass and, as demonstrated very recently, substantial changes in skeletal muscle area may be observed within a few days of physical inactivity20,21. Therefore, by investigating the intermodal agreement of skeletal muscle area measurements in a same-day setting, our results may be considered to substantiate and further validate insights from these previous studies. In accordance with former reports, we observed that when compared directly, measured skeletal muscle area tends to be systematically larger in CT compared to MRI18,19. It is likely that at least to some extent this observation may be related to differences in patient positioning, which would be in agreement with another study investigating the interrelation of musculoskeletal measurements between CT and MRI22. While in MRI, patients were positioned with arms along the body, chest CT scans were performed with arms raised above the head in our study, as is common practice. Possibly, consecutive alterations in trunk positioning may have influenced area-based measurements of the paraspinal skeletal muscle compartment. Also, MRI examinations are typically performed in expiration, while CT investigations are mostly performed in inspiration.
It is unclear if other anatomical landmarks, particularly the lumbar vertebral levels, may be less affected by confounding factors such as body positioning or breathing. Future studies are warranted to eventually solve this issue. The recently updated European consensus guidelines on definition and diagnosis of sarcopenia stress the particular role of reduced muscle function as a key element of sarcopenia, as it was demonstrated to better predict adverse outcomes in various conditions compared to exclusive evaluation of muscle mass1,10,23,24. A central component of muscle function is strength, which again was shown to be strongly associated with MRIPDFF as a measure of skeletal muscle fat content and thereby indicator of muscle quality9. Skeletal muscle fat content refers to the composition as well as the micro- and macroscopic elements of tissue architecture—which can be assessed from both CT and MRI—and was demonstrated to reveal prognostic information in various diseases11,12,13,16,25. Measurements of skeletal muscle fat content as indicators of muscle quality are therefore considered promising new imaging biomarkers of sarcopenia, which in the future are expected to support treatment decision and response assessment1. Accordingly, a particular focus of our study was to evaluate the agreement between CT and MRI derived biomarkers of muscle quality, which to our knowledge has not been done so far. The high correlation between muscle radiodensity in CT and corresponding MRIPDFF allowed us to calculate a linear regression model to directly estimate the degree of myosteatosis from HU values. This approach provides CTFF as a CT-based measure of skeletal muscle fat content on a ratio scale, which may improve clarity and thereby may enhance not only applicability in clinical routine but also may be of particular interest for larger cohort studies. 
Several approaches with different anatomical landmarks for assessment of skeletal muscles from axial single-slice images have been proposed in literature. The most common are determination of the skeletal muscle index, defined as the entire skeletal muscle area at the level of the third lumbar vertebra normalized for body height as well as the psoas skeletal muscle area, assessed at the level of the third or fourth lumbar vertebra26. However, those landmarks are frequently not captured within several imaging protocols, for example those required for liver imaging, although body composition assessment may be particularly relevant in patients undergoing these examinations11. The concept of opportunistic imaging precludes extension of imaging window for sole assessment of body composition. Therefore, some previous studies investigated the interrelation of different vertebral levels with skeletal muscle mass and proposed alternative landmarks, such as the SMA level6,7,27. We decided to perform skeletal muscle measurements at the SMA level, since it is regularly captured within the most common imaging protocols of the abdomen and chest, is clearly defined and can be easily determined from the axial plane, and finally was suggested to reveal prognostic information in various chronic and malignant diseases11,13. We are aware of several limitations of our study. First, it would have been interesting to study the variability of skeletal muscle measurements among different MRI sequences. However, a primary goal of this study was to investigate the intermodal agreement of imaging based biomarkers of muscle quality as a surrogate of muscle strength, which in MRI can be reliably assessed using MRIPDFF9. Previous studies demonstrated a high agreement of different MRI sequences for determination of skeletal muscle mass19 and indicated that measurements of skeletal muscle quantity from PDFF are highly reliable and repeatable9,25. 
Therefore, in MRI we decided to perform muscle measurements from PDFF maps. Future studies could eventually determine its role compared to other MRI sequences. Secondly, in clinical routine CT scans are frequently performed after administration of intravenous contrast agents and are acquired in different scan phases, depending on the primary question of the examination. As in this study healthy participants of a cancer screening program were investigated, the study setting precluded administration of intravenous contrast agents. A previous study indicated that with respect to measurements of skeletal muscle fat content, distinct differences between phases of CT scans can be observed28. However, as the total magnitude of these differences appeared to be very small, the clinical impact of these findings is not clear and should be investigated in future studies. The question of whether muscle measurements may vary across different imaging platforms and scanning parameters in CT and MRI is unresolved to date. To account for this, in our study all CT and MRI scans were performed at the same respective unit with fixed scanning parameters. A recent report indicated a low level of bias across field strengths and imaging platforms for determination of fat content using MRIPDFF29. However, as this study used MRIPDFF to determine vertebral bone marrow fat content, it is not known if those results are directly transferable to myosteatosis measurements. To the best of our knowledge, the bias among vendors and scanners for determination of fat content from CT has not been studied so far. This issue should be clarified in future studies. To conclude, our results indicate a high agreement of imaging-derived biomarkers of muscle quality and quantity between CT and MRI in healthy subjects, suggesting that both modalities may be used interchangeably for skeletal muscle assessment.
Therefore, this study warrants larger and prospective studies to assess intermodal agreement, particularly in sarcopenia patients. Regarding CT-derived measurements of muscle quality, the provided results allowed for development of a linear regression to directly estimate skeletal muscle fat content from CT based on the corresponding MRIPDFF values, which in the future may enhance clarity of muscle quality measurements in CT and may also be particularly relevant for body composition analysis in larger cohort studies.
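As an illustration only (not part of the original study), the reported CTFF regression and the standard Bland–Altman bias and 95% limits-of-agreement calculation described in the Methods can be sketched in a few lines of Python; the function names and example values below are hypothetical.

```python
import numpy as np

# Illustrative sketch, not study code: applies the reported regression
#   CTFF (%) = -0.562 * mean radiodensity (HU) + 36.71
# and the standard Bland-Altman bias / 95% limits-of-agreement formula.

def ct_fat_fraction(mean_hu):
    """Estimate CT-derived fat fraction (%) from mean muscle radiodensity (HU)."""
    return -0.562 * mean_hu + 36.71

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) of agreement for paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Example: a muscle compartment with a mean attenuation of 30 HU
print(round(ct_fat_fraction(30.0), 2))  # approximately 19.85 (%)
```

Note that the 1.96 factor assumes approximately normally distributed differences, the usual convention for 95% limits of agreement.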
http://a2mstats.blogspot.com/2010_04_01_archive.html
# Accessible Website

Here are some examples of accessible websites: Sample of simple and consistent navigation

Takagi, H., Asakawa, C., Fukuda, K., & Maeda, J. (2004). Accessibility designer: Visualizing Usability for the Blind. Paper presented at the Proceedings of the 6th international ACM SIGACCESS conference on Computers and accessibility. Retrieved from http://doi.acm.org/10.1145/1028630.1028662

The purpose of this study is to develop software, called Accessibility Designer, for improving web usability and blind users' productivity. The authors describe factors that become problems beyond software development, namely accessibility checking, the requirement of guideline compliance, and syntactic checking of web pages. To overcome these problems, the authors developed new software with three fundamental features: background color patterns for presenting the reaching time to each part of a web page, color filling analysis for indicating the accessibility of a particular area of a web page, and visual layout analysis using text information generated by a standard screen reader. This new software is expected to improve the usability of a web page. In use, the authors found that the background-color-based analysis has the power to reveal several problems, such as the availability and appropriateness of the skip-links navigator, the availability and appropriateness of heading usage, and content order. The results of the analysis will contribute to the development of web designs that may decrease maintenance time and cost, determine web usability even at a high level of accessibility, and achieve truly accessible and usable websites by focusing on user experience and actual productivity. Even though this article is technically oriented and focuses on usability for users with visual impairments, it offers valuable knowledge for improving the accessibility and usability of information displayed on a web page.

Villegas, E., Sorribas, X., Pifarr, M., & Fonseca, D. (2009).
Improving the design of accessible web pages through a study of user experience in order to define requirements. Paper presented at the Proceedings of the 1st ACM SIGMM international workshop on Media studies and implementations that help improving access to disabled users. Retrieved from http://doi.acm.org/10.1145/1631097.1631099

The authors' purpose is to compare two study phases of web accessibility based on the Web Content Accessibility Guidelines version 2.0, in which level AA (double A) is the acceptance level under Spanish law. The first phase analyzes several Spanish AA-certified websites in order to collect patterns of user accessibility defined by the requirements of users (people with disabilities). Results from the first phase guide a group task-based test in the second phase to determine whether a web page creation achieves satisfactory accessibility. This study found that obtaining an accessible experience in the first step leads to the creation of a web page with real accessibility, which is not only compliant with the WCAG guidelines, but also "a requirement to provide the user with a satisfactory user experience and to enable the user to work autonomously" (p.5). Although this study involved only 12 students with disabilities, it is a good example of creating and analyzing web pages accessible to people with disabilities. Perhaps a world-class study involving various types of students with disabilities would be better suited to evaluating the latest Web Content Accessibility Guidelines, and the results might give web builders some recommendations for creating truly accessible websites.

Rose, D., Hall, T., & Murray, E. (2008). Accurate for All: Universal Design for Learning and the Assessment of Students with Learning Disabilities. Perspectives on Language and Literacy, 34(4), 23-28.
Retrieved from http://ezproxy.lib.monash.edu.au/login?url= http://proquest.umi.com/pqdweb?did=1639898021&Fmt=7&clientId=16397&RQT=309&VName=PQD

This article explores the implementation of the three principles of universal design for learning in the assessment of students with learning disabilities. The authors state that relevant constructs of the instrument and the full spectrum of students are the keys to making sure the assessment is accountable and accurate for all students. The first principle elucidated by the authors is that flexible formats and options for assessment can be generated by modern technology, and can even be determined on an individual basis for learning disabilities, so that the information in the assessment can be accessed by all students. The second is that an assessment should be able to be completed through any kind of response from various students with disabilities. Thus, other supports, such as assistive technology, are needed for preparing and organizing responses to complete the assessment. The last is that the validity of an assessment should be achieved through maximum engagement of the students. To make sure students with learning disabilities have a maximum level of engagement, they should be motivated to concentrate on the assessment and ignore other activities when the time has come. If the treatment of external motivating conditions is considered inadequate for an accountable assessment, then we have to be consistent with the implementation of the previous principle, which is providing options and alternatives for testing and engagement conditions. Although the article focuses on the assessment of students with learning disabilities in the face-to-face context, its assessment concepts are valuable for implementation in online learning environments for students with disabilities by employing various modalities of assessment.

Orkwis, R., & McLane, K. (1998). A Curriculum Every Student Can Use: Design Principles for Student Access.
ERIC/OSEP Topical Brief (055 Guides: Non-Classroom; 071 ERIC Publications): Special Education Programs (ED/OSERS), Washington, DC.; Office of Educational Research and Improvement (ED), Washington, DC.

Orkwis and McLane explore some issues in universal design for learning related to the accessibility of the general education curriculum for students with disabilities. The authors describe universal design for learning as generating learning materials and activities in a flexible curriculum that provides alternative options for students with various abilities and backgrounds. The authors also complement the report with two figures illustrating universal design for learning. The first figure illustrates the differences between universal design for a product or environment and universal design for learning. The second illustrates the three essential qualities of universal design for learning: representation, expression, and engagement. Even though it is quite an old monograph, I really agree that universal design in the notion of learning means "learning materials, instructions, and activities that make learning objectives achievable by individuals with wide differences in their abilities to see, hear, speak, move, read, write, understand English, attend, organize, engage, and remember" (p.10). This concept accommodates not only students with physical, sensory, and cognitive disabilities, but also various abilities, cultural and language backgrounds, and learning approaches.

Harris, C. R., Kaff, M. S., Anderson, M. J., & Knackendoffel, A. (2007). Designing Flexible INSTRUCTION. Principal Leadership, 7(9), 31-35. Retrieved from http://ezproxy.lib.monash.edu.au/login?url=http://proquest.umi.com/pqdweb?did=1277145451&Fmt=7&clientId=16397&RQT=309&VName=PQD8

This monograph describes an implementation of universal design for learning based on flexibly designed instruction.
Flexible pedagogy is used selectively to incorporate flexibility into instructional planning, accommodating the various learning needs of students in the classroom. The authors complement the concept of universal design for learning with several testimonies from students who either have or do not have special needs. The authors also point out a framework for instructional planning in which the principal should build a strong foundation based on the principles of universal design for learning from the beginning, rather than retrofitting instruction later to meet the needs of students with disabilities, and should manage collaboration between general and special education teachers to make sure the curriculum works for students with a wide range of abilities and disabilities. Although this article elucidates instructional design in the classroom setting, flexibly designed instruction should also be applied in online settings so that the online learning environment is rich in flexible modalities.

Hitchcock, C., & Stahl, S. (2003). Assistive Technology, Universal Design, Universal Design for Learning: Improved Learning Opportunities. Journal of Special Education Technology, 18(4), 45-52. Retrieved from http://ezproxy.lib.monash.edu.au/login?url=http://proquest.umi.com/pqdweb?did=569989481&Fmt=7&clientId=16397&RQT=309&VName=PQD

In this article, Hitchcock and Stahl point out how to develop and implement a universally designed curriculum by considering the goals of the learning material, the instructional method, and the learning assessment. The article also elucidates the barriers to access and learning, amplified with some analogies showing how the definition and principles of universal design can be applied, as well as the appropriate use of assistive technology in the educational learning environment.
In addition, the authors provide a table comparing the traditional and emerging approaches to universal design for learning principles, which exposes ten factors that influence the teaching and learning process for achieving learning goals in the respective learning environment. I strongly agree that the best practice of universal design for learning, which will support the achievement of all students, can be implemented by providing: (a) suitable learning objectives and activities, (b) flexible and supportive electronic learning materials and assistive technology for access and learning, (c) flexible and varied formats of challenges and supports, and (d) flexible and accessible assessment.

Eberle, J. H., & Childress, M. D. (2007). Universal Design for Culturally-Diverse Online Learning. In A. Edmundson (Ed.), Globalizing E-Learning Cultural Challenges (pp. 239-254). Retrieved from http://www.igi-global.com.ezproxy.lib.monash.edu.au/Gateway/ContentOwned/Chapter.aspx?TitleId=19304&AccessType=InfoSci

In this book chapter, Eberle and Childress use the principles of universal design for learning (UDL) to point out a framework for online learning design for culturally diverse populations and global learning. The authors suggest that, to design UDL-based instruction which accommodates various types of learning in a flexible and systematic way, the principles of universal design can be applied to the six steps of dynamic instructional design (DID). The authors also point out the learner characteristics and factors that should be considered for an online learning environment that accommodates various differences based on “clientele identification, abilities/disabilities, language, culture, gender, time barriers, and technology” (p. 246).
This reading is a good source for implementing universal design for learning in instructional design that is sensitive to cultural diversity in the global context, so that the online learning environment becomes more globally inclusive for heterogeneous populations, regardless of language, culture, abilities, gender, time barriers, and technology.

Sapp, W. (2009). Universal Design: Online Educational Media for Students with Disabilities. Journal of Visual Impairment & Blindness, 103(8), 495-500. Retrieved from http://ezproxy.lib.monash.edu.au/login?url=http://proquest.umi.com/pqdweb?did=1862973271&Fmt=7&clientId=16397&RQT=309&VName=PQD

Sapp discusses the implementation of universal design for learning using Universal eLearning, an integrated online learning module under development that compiles accessible technology, universal design for learning principles, and the best practices of online learning. The author also elucidates the Universal eLearner system, which has incorporated four new features: two-tiered video captioning for students with hearing impairments, two-tiered audio description for students with visual impairments, end-of-chapter summary information for supporting the comprehension of students with a range of learning and sensory disabilities, and description-embedded language for increasing the understandability of the content. Those new features can be set on or off by either students or teachers to make sure students not only can access the online course, but also can understand the contents. Thus the learning goals can be achieved by individuals with a range of disabilities. Even though the article discusses just several new features of a universally designed online learning system, it offers very valuable knowledge about how to increase the access and understanding of students with disabilities in the notion of flexible online learning.

Palloff, R. M., & Pratt, K. (2001). The Art of Online Teaching.
Lessons from the Cyberspace Classroom: The Realities of Online Teaching (pp. 20-36). San Francisco: Jossey-Bass. Retrieved from http://cemmx.educ.monash.edu.au/moodle/file.php/4/docs/Palloff_Pratt_2001.pdf

Palloff and Pratt propose several factors that should be considered when shifting face-to-face learning to online learning, such as determining who should teach online (influenced by the characteristics and willingness of the instructor), conducting training for trainers or instructors before asking them to design and deliver (electronic pedagogy), and instructor involvement, which is important for achieving a successful learning outcome, not just student engagement. Besides those factors, the authors elucidate the keys to success for online learning: determining technology access and familiarity, setting relatively loose and free-flowing rules generated from participants’ input, gaining maximum engagement of participants with the best effort, encouraging participants toward collaborative learning, and developing a reflection system for participants. At the end, the authors provide "Tips for a Successful Online Course" (p. 36). This is very good reading for understanding how to establish and manage an online learning environment, so that the online course is not merely face-to-face learning material converted to digital format, but is rich with sources, activities, and evaluation for achieving learning goals.

Collis, B. & Moonen, J. (2002). Flexible Learning in a Digital World. Open Learning: The Journal of Open and Distance Learning, 17(3), 217-230. DOI: 10.1080/0268051022000048228

Collis and Moonen elucidate flexible learning, which has four key components in the higher education context: technology, pedagogy, implementation strategies, and institutional framework.
The authors define flexible learning not merely as distance learning; the term is used in a broad way in which allowing the learner to choose different aspects of the learning experience is the key idea. In most of the explanation, the authors try to define real flexible learning in terms of the institutional framework. Thus flexible learning can be realized by implementing strategy in the institution, making pedagogical approaches become learning value, and employing technology to enhance flexibility. The authors also explain a lessons-learned series from previous cycles of educational change and technology potential. The authors state that a well-designed web-based system is appropriate technology for flexible learning if it has a wide range of possibilities and contribution-oriented learning. Indeed, the usability and accessibility of WWW-based systems are key points for flexible learning in the online learning environment. Even though the authors explain flexible learning in the notion of higher education, its principles can be implemented at any level or in any field of education, including special education needs.

# Open Caption Video

Sample of open caption video. The embedded subtitle cannot be interactively changed; it is fixed.

# Closed Captioning (CC) Video

The closed caption (CC) feature can be adjusted by clicking the CC icon, and there are settings:
1. Type of font
2. Background (shortcut b)
3. Size (shortcut + / -)

Moodle Closed Caption Video

By default, Moodle does not provide video captioning. An FLV player addition based on JW FLV Player 4.3 can be installed to provide closed video captions. Here are screen shots of closed captioning installed on a local server using USB WebServer 7.0 and Moodle 1.9.8+: one with the closed caption feature off, and one with it on.

Sources: Moodle. (2009). FLV Player.
Retrieved 5 April, 2010, from http://docs.moodle.org/en/FLV_Player

# Equation Editor in HTML Environment

Sample: $\sum_{i=1}^{n}{(X_i - \overline{X})^2}$

# Writing Equation in Blogger using LaTeX - Mathtex3.js

• Using a WYSIWYG Equation Editor and Mathtex3.js for the Beginner User

Creating a LaTeX renderer for Blogger

To insert an equation into a posting in a simple way, we need to make sure that LaTeX code can be rendered in the blog. Please follow the steps below:

- Copy (to the clipboard) the following script using block and ctrl-c: <script src="http://www.watchmath.com/cgi-bin/mathtex3.js" type="text/javascript"></script> <script type="text/javascript"> replaceMath( document.body );</script>
- Add an HTML/Script footer gadget by accessing your Dashboard and choosing Layout >> Page Elements, then click the hyperlink Add a Gadget at the bottom of the layout, followed by selecting the HTML/JavaScript gadget. In the popup window, paste the script copied before, so that it looks like the picture below. Click save without typing any title. Then save the Page Elements.

Creating a WYSIWYG Equation Editor

LaTeX code is quite complex to remember because it is like a programming script. Since we prefer to work in a GUI environment, we need a WYSIWYG (What You See Is What You Get) equation editor.
The steps below will show you how to make a Blogger gadget for the equation editor:

- Copy the following script: <script type="text/javascript" src="http://latex.codecogs.com/editor.js"></script> <p align="center" style="margin-top: 0; margin-bottom: 0"><font face="Arial"> Sample: <a href="javascript:OpenLatexEditor('testbox','latex','')"> <img src="http://www.codecogs.com/gif.latex?\sum_{i=1}^{n}{(X_i - \overline{X})^2}" align="middle" /></a></font></p> <p align="center" style="margin-top: 0; margin-bottom: 0"><b><font face="Arial"> <a href="javascript:OpenLatexEditor('testbox','latex','')"> Launch Editor</a></font></b></p> <p align="center" style="margin-top: 0; margin-bottom: 0"> <textarea id="testbox" rows="10" cols="20"></textarea></p> <p align="center" style="margin-top: 0; margin-bottom: 0"> <a href="http://www.codecogs.com" target="_blank"> <img src="http://www.codecogs.com/images/poweredbycc.gif" border="0" title="CodeCogs - An Open Source Scientific Library" alt="CodeCogs - An Open Source Scientific Library" /></a> <p style="margin-top: 0; margin-bottom: 0" align="center"><font size="1"><font face="Arial"> <a href="http://a2mstats.blogspot.com/">Aam Sudrajat</a></font> </font> </p>
- Paste the copied script in the content; you can give "Equation Editor" as the title.
- Save the gadget and the Page Elements.

Inserting LaTeX code in a Posting

Because the equation editor is not embedded in the posting editor, we should keep open the page containing it and open a new window/tab for creating/editing a posting. Then follow these steps:

- Open the equation editor by clicking the Launch Editor link.
- Create an equation by clicking the symbols or sample equations.
- Click Copy to Document; the window will change to the main page where the equation editor gadget is.
- Copy the code from the box by blocking and copying.
- Paste the code into the posting editor in either Compose or Edit HTML mode.
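The code copied from the editor is essentially an HTML image tag pointing at the CodeCogs renderer; a minimal hand-written example of the same pattern (the URL format is the one used elsewhere on this page; the alt attribute is my addition, not emitted by the editor, but it keeps the equation accessible to screen readers, in line with the accessibility principles discussed elsewhere on this blog):

```html
<!-- Renders the standardized-score formula as an inline image via CodeCogs -->
<img src="http://www.codecogs.com/gif.latex?\frac{x-\mu}{\sigma}"
     align="middle" alt="(x - mu) / sigma" />
```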
$x = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{a_3 + a_4}}}$

• Using Mathtex3.js for the Advanced User

Basically, all $L^{A}T_{E}X$ code can be used in a posting after adding an HTML/Script footer gadget containing Mathtex3.js. You may read the Short Math Guide for LaTeX for looking up the typesetting. Here are some example codes:

\frac{{\displaystyle\sum\nolimits_{n> 0} z^n}}{{\displaystyle\prod\nolimits_{1\leq k\leq n} (1-q^k)}} $\frac{{\displaystyle\sum\nolimits_{n> 0} z^n}}{{\displaystyle\prod\nolimits_{1\leq k\leq n} (1-q^k)}}$

V = \frac{k_2{[E]}{[S]}}{K_m + {[S]}} $V = \frac{k_2{[E]}{[S]}}{K_m + {[S]}}$

6CO_2 + 6H_2O \xrightarrow{Light Energy} C_6H_{12}O_6 + 6O_2 \ \Delta G^{\circ} = +2870kJ/mol $6CO_2 + 6H_2O \xrightarrow{Light Energy} C_6H_{12}O_6 + 6O_2 \ \Delta G^{\circ} = +2870kJ/mol$

Sources: CodeCogs (2009). LaTeX Equation Editor v2.96 Installation. Retrieved 15 April, 2010, from http://www.codecogs.com/latex/install.php
Downes, M. (2002). Short Math Guide for LaTeX. Retrieved 15 April, 2010, from ftp://ftp.ams.org/pub/tex/doc/amsmath/short-math-guide.pdf
WatchMath (2010). How To Install Latex On Blogger/Blogspot. Retrieved 15 April, 2010, from http://watchmath.com/vlog/?p=438

# LaTeX Parameter for Position Adjustment in Ning.com

Inserting LaTeX using <img src="http://www.codecogs.com/gif.latex?copy_paste_here"/> should be treated as an image because we use the HTML image tag (<img .../>).
There are several parameters that can be used for position adjustment:

| Parameter | Values | Example usage |
|-----------|--------|---------------|
| align | top, middle, bottom | align="middle" |
| border | 0, 1 | border="1" |
| width | % or pixels | width="10" or width="100%" |
| height | % or pixels | height="20" or height="90%" |
| hspace | % or pixels | hspace="5" or hspace="5%" |
| vspace | % or pixels | vspace="10" or vspace="10%" |

For example: <img src="http://www.codecogs.com/gif.latex?\frac{x-\mu }{\sigma }" align="middle" border="1"> or <img src="http://www.codecogs.com/gif.latex?\frac{x-\mu }{\sigma }" align="middle" height="40" width="70">, each rendering Sample $\frac{x-\mu}{\sigma}$ text.

Resource: W3C (n.d.). 13 Objects, Images, and Applets. Retrieved 16 April, 2010, from http://www.w3.org/TR/REC-html40

# Writing Equation using LaTeX code at Ning.com

Ning.com does not provide a $L^{A}T_{E}X$ renderer like Wikipedia or other CMSs for creating equations such as: $\frac{d}{dx}\sin x=\cos x$

We can use LaTeX source from another WYSIWYG LaTeX editor and copy-paste it into a blog or forum post in HTML mode. Because we use an external WYSIWYG equation editor that is not embedded in the posting editor, we should open the equation editor and the blog entry/posting editor in different windows/tabs. Then follow these steps:

1. Open the WYSIWYG equation editor in a new window/tab.
2. Create an equation by clicking the symbols or sample equations.
3. Click Copy to Document; the window will change to the page where the WYSIWYG equation editor is.
4. Copy the code from the box by blocking and copying.
5. Paste the code into the posting editor in either Compose or Edit HTML mode.
6. Then change to Rich Text mode again, and you will see: $x = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{a_3 + a_4}}}$
7.
If you come back to the HTML editor and want to change the equation, you will see that the script has been changed. It is recommended to repeat steps 1-6.

Basically, all $L^{A}T_{E}X$ code can be used in a posting. We can read the Short Math Guide for LaTeX for looking up the typesetting. Here are some example codes:

\frac{{\displaystyle\sum\nolimits_{n> 0} z^n}}{{\displaystyle\prod\nolimits_{1\leq k\leq n} (1-q^k)}} $\frac{{\displaystyle\sum\nolimits_{n> 0} z^n}}{{\displaystyle\prod\nolimits_{1\leq k\leq n} (1-q^k)}}$

V = \frac{k_2{[E]}{[S]}}{K_m + {[S]}} $V = \frac{k_2{[E]}{[S]}}{K_m + {[S]}}$

6CO_2 + 6H_2O \xrightarrow{Light Energy} C_6H_{12}O_6 + 6O_2 \ \Delta G^{\circ} = +2870kJ/mol $6CO_2 + 6H_2O \xrightarrow{Light Energy} C_6H_{12}O_6 + 6O_2 \ \Delta G^{\circ} = +2870kJ/mol$

Sources: CodeCogs (2010). LaTeX Equation Editor v2.96 Installation. Retrieved 15 April, 2010, from http://www.codecogs.com/latex/install.php
Downes, M. (2002). Short Math Guide for LaTeX. Retrieved 15 April, 2010, from ftp://ftp.ams.org/pub/tex/doc/amsmath/short-math-guide.pdf

# Temporary Summary

Having reached the word limit, I have to summarize the previous exploration. Universal design for online learning, especially when it involves students with disabilities, should be supported by accessible and usable course design that makes the learning and teaching process over the internet as flexible as possible. Accessible course design can be achieved by implementing web accessibility guidelines; adaptation of web usability principles is then needed to make sure the learning objectives are achievable by various students with disabilities. There are still many web accessibility and web usability aspects which have not been explored, such as tables, forms, and interactivity. They will continue to be explored, along with the universal design curriculum as the backbone of universal design for learning.

# 9.
Multimedia Accessibility

Multimedia content such as video, images, and sound gives flexibility in presenting learning material. To make sure multimedia content is accessible, it should be equipped with captions, audio description, or alternative text (W3C, 2008). Video captioning is creating a text version of the spoken words in a video. There are two types of video captioning: open captions, which are permanently embedded in the video, and closed captions, which can be set on/off by teachers or students. Audio description is information that allows students with visual impairments to hear the visual content (Web Accessibility in Mind, n.d.). Sometimes the video captions and audio descriptions are made in two tiers for different cognitive abilities, as in the eLearner system (Sapp, 2009). Accessible images can be made by adding alternative text (alt text) describing the content and function, rather than the image appearance (Web Accessibility in Mind, 2009).

Sapp, W. (2009). Universal Design: Online Educational Media for Students with Disabilities. Journal of Visual Impairment & Blindness, 103(8), 495-500. Retrieved from http://ezproxy.lib.monash.edu.au/login?url=http://proquest.umi.com/pqdweb?did=1862973271&Fmt=7&clientId=16397&RQT=309&VName=PQD
W3C. (2008). Web Content Accessibility Guidelines (WCAG) 2.0. Retrieved 16 March, 2010, from http://www.w3.org/TR/2008/REC-WCAG20-20081211/
Web Accessibility in Mind (n.d.). Web Captioning Overview. Retrieved 12 April, 2010, from http://www.webaim.org/techniques/captions/
Web Accessibility in Mind (2009). Quick Reference: Web Accessibility Principles. Retrieved 8 April, 2010, from http://www.webaim.org/resources/quickref/quickref.pdf

# 8. PDF Accessibility

According to Web Accessibility in Mind (2009), the HTML document format is the most accessible document format, but sometimes we need to add other document formats such as PDF and office documents. Thus we have to ensure that non-HTML documents are also accessible.
Since PDF is commonly used and secure, we often use this format for transferring information with the original formatting intact. Unfortunately, the standard PDF format is not recognized by the World Wide Web Consortium (Hudson, 2004). PDF files can be made accessible, especially to screen readers, by adding XML-like tags that give the content structure (Clark, 2005). Converting PDF files to be accessible is not an easy thing to do; it takes a long time, and the steps are complicated for the beginner (Adobe Systems Incorporated, 2004). That is enough about document formats. The next exploration will be multimedia accessibility.

Adobe Systems Incorporated. (2004). Advanced Techniques for Creating Accessible Adobe PDF Files: A Guide for Document Creators. Retrieved 9 April, 2010, from http://www.adobe.com/enterprise/accessibility/pdfs/acro6_pg_ue.pdf
Clark, J. (2005). Facts and Opinions about PDF Accessibility. Retrieved 9 April, 2010, from http://www.alistapart.com/articles/pdf_accessibility
Hudson, R. (2004). PDF and Accessibility. Retrieved 9 April, 2010, from http://www.usability.com.au/resources/pdf.cfm
Web Accessibility in Mind (2009). Quick Reference: Web Accessibility Principles. Retrieved 8 April, 2010, from http://www.webaim.org/resources/quickref/quickref.pdf

# 7. Font for Accessible Website

At first, I thought text formatting, especially the matter of fonts, would be quite simple to write about. Then, when I read an article about fonts in the notion of web accessibility by Web Accessibility in Mind (n.d.), I found that the matter of fonts is not as simple as choosing the size and the type of the fonts. It would take a long piece of writing to explore fonts in detail, so here are some important points:

• Real text is better than graphical text because it can be enlarged without pixelation and is compatible with screen readers.
• Basic, simple, easily readable font families commonly used by web developers are the good choice to make website content readable on any computer platform and browser.
• A limited number of fonts will make web content look tidy and easy to read.
• Contrast between the text and the background should be set sufficiently.
• Use standard font sizes for reading.
• Units for font size should be relative (% or ems) to make sure the text is resizable by browser controls. In some CMSs this is embedded in the paragraph style.
• Font variations such as bold, italics, and ALL CAPITAL LETTERS should be limited to make the content look tidy.
• Font appearance (color, shape, font variation, placement, etc.) should not be the only means of conveying meaning.
• Blinking or moving text is not accessible.

Before continuing to text formatting, I just remembered the PARC principles that Michael Henderson mentions in his paper titled Content Design for Online Learning (Henderson & Henderson, 2006). Indeed, heading usage is one of the methods of creating proximity and repetition in the content. Moreover, the combination of font settings, contrast adjustment, and appropriate alignment settings increases web content usability. The next exploration will not continue with text formatting, but with accessible document formats.

Henderson, M. & Henderson, L. (2006). Content Design for Online Learning. QUICK: Journal of the Queensland Society for Information Technology in Education, 99(Winter), 3-8. Retrieved from http://cemmx.educ.monash.edu.au/moodle/mod/resource/view.php?id=81
Web Accessibility in Mind (n.d.). Fonts. Retrieved 5 April, 2010, from http://www.webaim.org/techniques/fonts/

# 6. Web Content Structure

I will continue my exploration in web usability. According to Nielsen (1997), in his research on how people read websites, only 16% of people read a web page word by word and 79% just scan the page.
Thus website content should be well structured and clearly written, especially for sighted students with cognitive disabilities or with low literacy. Here are some principles that can be applied for structuring web content (National Center on Disabilities and Access to Education, 2007):

• Headings for organizing content, which make content easy to read and navigate
• Simple language and active voice, to make sure students with cognitive disabilities understand
• Avoidance of slang and jargon, so that students with cognitive disabilities are not confused
• Empty (white) space for improving readability
• Illustrations as a supplement to text
• Spelling and grammar checking

I think that is enough content structure exploration; I will continue with my text-formatting exploration next time.

National Center on Disabilities and Access to Education. (2007). NCDAE Tips and Tools: Principles of Accessible Design. Retrieved 5 April, 2010, from http://www.ncdae.org/tools/factsheets/principles.cfm
Nielsen, J. (1997). How Users Read on the Web. Retrieved 3 April, 2010, from http://www.useit.com/alertbox/9710a.html

# 5. Website Layout for Students with Visual Impairment

After reviewing web accessibility and web usability guidelines, I found there are many web-accessibility guidelines involving complicated programming for the shift from web accessibility to web usability. Indeed, if a website is universally usable, then, of course, it is universally accessible. In this short exploration I will focus on web layout, text formatting, document formats, and multimedia presentation. I will stop this exploration of website layout for students with visual impairment here, because it would take a long exploration to continue it. In the next exploration I will investigate web usability for sighted students with disabilities.

Mahmud, J., Borodin, Y., Das, D., & Ramakrishnan, I. V. (2007).
Combating information overload in non-visual web access using context. Paper presented at the Proceedings of the 12th International Conference on Intelligent User Interfaces. Retrieved from http://doi.acm.org/10.1145/1216295.1216362
Takagi, H., Asakawa, C., Fukuda, K., & Maeda, J. (2004). Accessibility designer: visualizing usability for the blind. Paper presented at the Proceedings of the 6th International ACM SIGACCESS Conference on Computers and Accessibility. Retrieved from http://doi.acm.org/10.1145/1028630.1028662
W3C. (2008). Web Content Accessibility Guidelines (WCAG) 2.0. Retrieved 16 March, 2010, from http://www.w3.org/TR/2008/REC-WCAG20-20081211

# 4. Web Accessibility and Universal Usability

After reviewing the last exploration, I just realized that universal design for learning creates flexible learning because its key idea is allowing learners to choose different aspects of their learning experiences (Collis & Moonen, 2002). Now it is time to answer the last question. When an online learning environment is chosen for conducting flexible learning and students with disabilities have the opportunity to be involved in it, there are several issues regarding the accessibility and usability of the course design. I believe that it will be a long answer, taking more than one exploration session.

The Web Accessibility Initiative (WAI) has been developing the Web Content Accessibility Guidelines (WCAG), of which the latest version is 2.0 (W3C, 2008), which is more usability oriented (Termens et al., 2009) than WCAG 1.0. The older version was revised because users with low vision and with reduced mobility were practically ignored (Ribera et al., 2009). In addition, Nielsen (2005) states that “accessibility is not enough” because accessibility focuses primarily on making sure all users, in all their diversity, can access the content and functionality of a website; it does not guarantee that they understand it. Thus Horton (2006, para.
2) states that universal usability is one step ahead, making “the content and functionality accessible and usable by all”. Maybe it is too long for one exploration entry; I will continue about website accessibility and usability in the next exploration.

Nielsen, J. (2005). Accessibility Is Not Enough. Retrieved 3 April, 2010, from http://www.useit.com/alertbox/accessibility.html
Ribera, M., Porras, M., Boldu, M., Termens, M., Sule, A., & Paris, P. (2009). Web Content Accessibility Guidelines 2.0: A Further Step towards Accessible Digital Information. Program: Electronic Library & Information Systems, 43(4), 392-406. Retrieved from http://dx.doi.org/10.1108/00330330910998048
Termens, M., Ribera, M., Porras, M., Boldu, M., Sule, A., & Paris, P. (2009). Web content accessibility guidelines: from 1.0 to 2.0. Paper presented at the Proceedings of the 18th International Conference on World Wide Web. Retrieved from http://doi.acm.org/10.1145/1526709.1526912
W3C. (1999). Web Content Accessibility Guidelines 1.0. Retrieved 16 March, 2010, from http://www.w3.org/TR/WCAG10/
W3C. (2008). Web Content Accessibility Guidelines (WCAG) 2.0. Retrieved 16 March, 2010, from http://www.w3.org/TR/2008/REC-WCAG20-20081211/

# 3. Universal Design for Learning (UDL)

Indeed, I found that researchers at CAST (Center for Applied Special Technology) saw the parallelism between their curriculum and the UD concept, and then defined the term universal design for learning (UDL) in 1990 (Rose, 2000). UDL has three principles (Rose, 2001; CAST, 2008):
1. flexible ways to present information
2. flexible ways for expression and action
3. flexible ways for student engagement

Those principles come with some guidelines so that a curriculum can be designed to support all individuals with equal opportunities to learn.
Thus universal design in the notion of learning means “learning materials, instructions, and activities that make learning objectives can be achievable by individuals with wide differences in their abilities to see, hear, speak, move, read, write, understand English, attend, organize, engage, and remember” (Orkwis & McLane, 1998, p. 10). Finally, I understand UDL and its principles, but one big question is waiting to be answered: how can it be implemented, especially in online learning? In the next exploration I should find the answer.

CAST. (2008). Universal Design for Learning Guidelines version 1.0. Retrieved 25 March, 2010, from http://www.udlcenter.org/sites/udlcenter.org/files/UDL_Guidelines_v2%200-Organizer_0.pdf
Rose, D. (2001). Universal design for learning. Journal of Special Education Technology, 16(2), 66. Retrieved from http://ezproxy.lib.monash.edu.au/login?url=http://proquest.umi.com/pqdweb?did=106940284&Fmt=7&clientId=16397&RQT=309&VName=PQD
Orkwis, R., & McLane, K. (1998). A Curriculum Every Student Can Use: Design Principles for Student Access. ERIC/OSEP Topical Brief (055 Guides: Non-Classroom; 071 ERIC Publications): Special Education Programs (ED/OSERS), Washington, DC.; Office of Educational Research and Improvement (ED), Washington, DC.

# 2. Universal Design

Continuing my last exploration of the principles of universal design, I found that there are seven principles of universal design: equitable use; flexibility in use; simple and intuitive use; perceptible information; tolerance for error; low physical effort; and size and space for approach and use (Story, Mueller & Mace, 1998). Each principle comes with several guidelines as the key elements that should be present in a design which adheres to it. I believe that those principles can be adapted to the education field, or learning. In the next exploration I will investigate universal design for learning, including its definition, principles, and implementation.
Story, M. F., Mueller, J. L., & Mace, R. L. (1998). The Universal Design File: Designing for People of All Ages and Abilities. Revised Edition (055 Guides: Non-Classroom; 141 Reports: Descriptive): National Inst. on Disability and Rehabilitation Research (ED/OSERS), Washington, DC.

# 1 - The Beginning of Exploration

In the cloud of information and communication technology in education, I am interested in special education needs. Thus my exploration in learning, instructional design, and technology will be in the shadow of this education field. Universal design principles have been adopted in the education field, in what is called universal design for learning (UDL). Talking about universal design, I wonder about its history, its principles, and how it can be implemented in the education field. Having researched universal design, I found that the term universal design (UD) was coined by Ronald L. Mace in 1987 (Moore, 2007). The concept of universal design is “usable by all people, to the greatest extent possible, without the need for adaptation or specialized design” (The Center for Universal Design, n.d., para. 1). In the next exploration I will continue to research the UD principles.

Moore, S. (2007). Very Brief History of Universal Design. Retrieved 20 March, 2010, from http://www.unco.edu/cetl/UDL/WhatIs/index.html

# Table of Universal Design Principles

1. Equitable Use: The design is useful and marketable to people with diverse abilities.
   - Provide the same means of use for all users: identical whenever possible; equivalent when not.
   - Avoid segregating or stigmatizing any users.
   - Provisions for privacy, security, and safety should be equally available to all users.
   - Make the design appealing to all users.
2. Flexibility in Use: The design accommodates a wide range of individual preferences and abilities.
   - Provide choice in methods of use.
   - Accommodate right- or left-handed access and use.
   - Facilitate the user's accuracy and precision.
   - Provide adaptability to the user's pace.
3. Simple and Intuitive Use: Use of the design is easy to understand, regardless of the user's experience, knowledge, language skills, or current concentration level.
   - Eliminate unnecessary complexity.
   - Be consistent with user expectations and intuition.
   - Accommodate a wide range of literacy and language skills.
   - Arrange information consistent with its importance.
   - Provide effective prompting and feedback during and after task completion.
4. Perceptible Information: The design communicates necessary information effectively to the user, regardless of ambient conditions or the user's sensory abilities.
   - Use different modes (pictorial, verbal, tactile) for redundant presentation of essential information.
   - Provide adequate contrast between essential information and its surroundings.
   - Maximize "legibility" of essential information.
   - Differentiate elements in ways that can be described (i.e., make it easy to give instructions or directions).
   - Provide compatibility with a variety of techniques or devices used by people with sensory limitations.
5. Tolerance for Error: The design minimizes hazards and the adverse consequences of accidental or unintended actions.
   - Arrange elements to minimize hazards and errors: most used elements, most accessible; hazardous elements eliminated, isolated, or shielded.
   - Provide warnings of hazards and errors.
   - Provide fail-safe features.
   - Discourage unconscious action in tasks that require vigilance.
6. Low Physical Effort: The design can be used efficiently and comfortably and with a minimum of fatigue.
   - Allow the user to maintain a neutral body position.
   - Use reasonable operating forces.
   - Minimize repetitive actions.
   - Minimize sustained physical effort.
7. Size and Space for Approach and Use: Appropriate size and space is provided for approach, reach, manipulation, and use regardless of the user's body size, posture, or mobility.
   - Provide a clear line of sight to important elements for any seated or standing user.
   - Make reach to all components comfortable for any seated or standing user.
   - Accommodate variations in hand and grip size.
   - Provide adequate space for the use of assistive devices or personal assistance.

Story, M. F., Mueller, J. L., & Mace, R. L. (1998). The Universal Design File: Designing for People of All Ages and Abilities. Revised Edition (055 Guides: Non-Classroom; 141 Reports: Descriptive): National Inst. on Disability and Rehabilitation Research (ED/OSERS), Washington, DC.

# TeX Testing in Blogger

a^2+b^2=c^2 $a^2+b^2=c^2$

$x = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{a_3 + a_4}}}$
https://physics.stackexchange.com/questions/387261/solution-to-geodesics-on-a-2-sphere
# Solution to Geodesics on a 2-sphere

I have been tasked with finding particle trajectories for a point mass travelling along the surface of the 2-sphere: $t=t(\tau)$, $\theta=\theta(\tau)$ and $\phi=\phi(\tau)$. My supervisor gave me the spacetime metric

$ds^2 = -dt^2 + R^2(d\theta^2 + \sin^2\theta\, d\phi^2)$

I am finding timelike geodesics, $1 = -g_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}$. Here is what I have so far: $t = E\tau$, $\dot{t} = E$, and $\sin^2\theta\,\frac{d\phi}{d\tau} = \frac{k}{R}$, where $\frac{k}{R}$ is some dimensionless constant. I substituted $\dot{\phi}$ and $\dot{t}$ back into the proper time gauge to get

$\dot{\theta} = \pm\frac{1}{R\sin\theta}\sqrt{E^2\sin^2\theta - k^2 - \sin^2\theta}$

I attempted using the substitution $u=\cos\theta$ to eliminate the $\sin\theta$ and hopefully get some expression that I could integrate to obtain some inverse trigonometric function; I know that the geodesics should describe great circles along the surface of the sphere. But I can't solve this final equation. Thank you.

• May I suggest you look at the $k=0$ solutions first? This should result in great circles through the poles... – mmeent Feb 19 '18 at 9:04
• Are you sure your differential equations are correct? What happens if you set $E=1$? – octonion May 11 '18 at 18:46

Your manifold is the product $(\Bbb R, -{\rm d}t^2)\times \Bbb S^2(R)$. By Proposition $38$ on page $208$ (with $f=1$) of O'Neill's Semi-Riemannian Geometry With Applications to Relativity, a curve $\gamma = (\alpha, \beta)$ there is a geodesic if and only if $\alpha$ and $\beta$ are geodesics in $\Bbb R$ and $\Bbb S^2(R)$, respectively. It is then clear that $\alpha$ must be $\alpha(t) = \pm t+a$ for some $a \in \Bbb R$, and $\beta$ must parametrize a great circle. As for checking that geodesics of the sphere are great circles, there are better ways to do it instead of solving those differential equations. For instance, one can argue that given a tangent vector $v$ at a point $p$, there is a unique maximal geodesic starting at $p$ with velocity $v$, compute directly that great circles are geodesics, and finally that given $v$ and $p$, there is a great circle passing through $p$ with direction $v$.
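As an aside (not part of the original question or answer), the "geodesics are great circles" claim is easy to check numerically. The sketch below integrates the geodesic equations of the unit 2-sphere (the spatial part of the metric above, with R = 1) using a hand-rolled RK4 stepper; the initial conditions and step size are arbitrary choices. It then verifies that the resulting curve never leaves a single plane through the origin, which is exactly the great-circle condition.

```python
import math

# Geodesic equations of the unit 2-sphere, from the spatial metric
# ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2:
#   theta'' =  sin(theta) cos(theta) * phi'^2
#   phi''   = -2 (cos(theta)/sin(theta)) * theta' * phi'
def rhs(y):
    th, phi, dth, dphi = y
    return (dth,
            dphi,
            math.sin(th) * math.cos(th) * dphi * dphi,
            -2.0 * (math.cos(th) / math.sin(th)) * dth * dphi)

def rk4(y, h, steps):
    """Classic fourth-order Runge-Kutta integration, returning the path."""
    path = []
    for _ in range(steps):
        path.append(y)
        k1 = rhs(y)
        k2 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
        k3 = rhs(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
        k4 = rhs(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6.0 * (b + 2 * c + 2 * d + e)
                  for a, b, c, d, e in zip(y, k1, k2, k3, k4))
    return path

def embed(th, phi):
    """Point on the unit sphere in Cartesian coordinates."""
    return (math.sin(th) * math.cos(phi),
            math.sin(th) * math.sin(phi),
            math.cos(th))

traj = rk4((math.pi / 3, 0.0, 0.2, 0.7), 0.01, 800)  # arbitrary start
p = [embed(th, phi) for th, phi, _, _ in traj]

# A great circle is the sphere's intersection with a plane through the
# origin; that plane is fixed by the origin and the first two points.
n = (p[0][1] * p[1][2] - p[0][2] * p[1][1],
     p[0][2] * p[1][0] - p[0][0] * p[1][2],
     p[0][0] * p[1][1] - p[0][1] * p[1][0])
norm = math.sqrt(sum(c * c for c in n))
n = tuple(c / norm for c in n)

# Maximum distance of any trajectory point from that plane: this stays
# near zero (up to integration error) for a great circle.
dev = max(abs(sum(a * b for a, b in zip(pt, n))) for pt in p)
print(dev)
```

With these initial conditions the conserved angular momentum keeps the curve well away from the poles, so the cot(theta) term never blows up.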
https://codegolf.stackexchange.com/questions/136564/square-a-number-my-way
# Square a Number my Way

People keep telling me that the square of a number is the number multiplied by itself. This is obviously false. The correct way to square a number is to make it into a square, by stacking it on top of itself a number of times equal to the number of digits it has, and then reading all the numbers from the resultant square, both horizontally (from left to right only) and vertically (from up to down only), and then adding them together.

So, for the number 123, you first create the square:

    123
    123
    123

Then you take all of the rows and columns from the square, and add them together:

    123+123+123+111+222+333

Which gives us a result of 1035.

For negative numbers, you stack normally (remember that you only count the number of digits, so the negative sign is not included in the length), and then read the horizontal numbers normally (with negative signs), and then ignore the negative signs for the vertical numbers. So, for the number -144 we get the square:

    -144
    -144
    -144

Which gives us -144-144-144+111+444+444, which equals 567.

For numbers with only one digit, the square is always equal to the number doubled (read once horizontally and once vertically). So 4 gives us

    4

Which gives us 4+4, which equals 8.

For numbers with decimal parts, stack normally (remember that only digits are counted in the number of times you stack the number, and therefore the decimal point is not counted), and ignore the decimal symbols when reading the vertical numbers. For example, the number 244.2 gives us

    244.2
    244.2
    244.2
    244.2

Which gives us 244.2+244.2+244.2+244.2+2222+4444+4444+2222, which equals 14308.8.

Fractional or complex numbers cannot be squared. I'm tired of squaring numbers my way by hand, so I've decided to automate the process. Write me a program or function that takes a float or string, whichever you prefer, as input and returns the result of squaring it my way.
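For reference, the rules above can be sketched as a plain, ungolfed function (one reading of the spec; the input is taken as a string so the decimal digits survive exactly):

```python
def square_my_way(s: str) -> float:
    """Sum the rows and columns of the digit square described above."""
    digits = [c for c in s if c.isdigit()]      # sign and decimal point don't count
    n = len(digits)                             # the square is n rows tall
    horizontal = float(s) * n                   # each of the n rows reads as the input
    vertical = sum(int(d * n) for d in digits)  # column i is digit i repeated n times
    return horizontal + vertical

print(square_my_way("123"))    # 1035.0
print(square_my_way("-144"))   # 567.0
print(square_my_way("244.2"))  # 14308.8 (up to float rounding)
```

Note that `float(s) * n` covers all the horizontal reads at once (the sign and decimal point survive there), while each vertical read is just one digit repeated n times.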
## Examples:

    123 -> 1035
    388 -> 3273
    9999 -> 79992
    0 -> 0
    8 -> 16
    -6 -> 0
    -25 -> 27
    -144 -> 567
    123.45 -> 167282.25
    244.2 -> 14308.8
    2 -> 4
    -0.45 -> 997.65
    0.45 -> 1000.35

## Scoring:

My hands are getting cramped from writing out all those squares, and my computer doesn't support copy/paste, so the entry with the least amount of code for me to type (measured in bytes for some reason?) wins!

• "123.45" and "244.2" aren't valid floats in and of themselves, because the computer stores numbers in binary. This isn't normally a problem until the problem relies on the decimal representation. Jul 29, 2017 at 13:22
• @LeakyNun, I don't really know what you mean by that. The problem isn't unsolvable (at least in Python), I'm pretty sure I could do it fairly easily, although in a large number of bytes. It would require some string manipulation, however. Jul 29, 2017 at 13:27
• @Gryphon So we must take input as a string? Jul 29, 2017 at 13:28
• @Gryphon This is where it fails. 244.2 is not a float number. It cannot be converted to the string "244.2". Jul 29, 2017 at 13:29
• @Gryphon But behaviours like this make it very inconvenient. Jul 29, 2017 at 13:38

# 05AB1E, 7 bytes

    þSDg×+O

Try it online!

Explanation:

    þSDg×+O   Implicit input
    þ         Keep digits
    S         Get chars
    D         Duplicate
    g         Length
    ×         Repeat string(s)
    O         Sum

• Ooo, explanation when you can please Jul 29, 2017 at 14:23
• Also I would note that the single leading zero is a requirement on the input for -1 < input < 1 (i.e. 0.45 and .45 are different inputs but the same number, only the former is acceptable) Jul 29, 2017 at 14:28
• @JonathanAllan The latter isn't handled anyways. Jul 29, 2017 at 14:30
• @JonathanAllan Done. Jul 29, 2017 at 14:32

# Jelly, 13 12 bytes

    fØDẋ€L$ŒV+VS

A monadic link accepting a list of characters (a well-formed decimal number, the single leading zero being a requirement for -1 < n < 1) and returning a number.

Try it online!

14 bytes to accept and return numbers (input limited to ±10^-5 by ŒṘ): ŒṘfØDẋ€L$ŒV+⁸S.
### How?

    fØDẋ€L$ŒV+VS - Link: list of characters      e.g. "-0.45"
    ØD           - yield digit characters        "0123456789"
    f            - filter keep                   "045"
    $            - last two links as a monad:
    L            -   length (number of digit characters)   3
    ẋ€           -   repeat list for €ach digit  ["000","444","555"]
    ŒV           - evaluate as Python code (vectorises)    [0,444,555]
    V            - evaluate (the input) as Jelly code      -0.45
    S            - sum                           997.65

• Umm, you can replace +€ with + in 15-byte version for -1. Jul 29, 2017 at 14:14
• Already did, thanks though! Jul 29, 2017 at 14:14
• Umm, not in the 15-byte version. EDIT: 3 seconds too early I suppose... Jul 29, 2017 at 14:14
• Yup, just noticed you said 15 byte version - thanks again! Jul 29, 2017 at 14:15

# Haskell

    f s|l<-filter(>'.')s=0.0+sum(read<$>(s<$l)++[c<$l|c<-l])

The input is taken as a string. Try it online!

How it works:

    l<-filter(>'.')s  -- let l be the string of all the numbers of the input string
    f s = 0.0 + sum   -- the result is the sum of (add 0.0 to fix the type to float)
    read<$>           -- turn every string of the following list into a number
    s<$l              -- length of l times the input string, followed by
    [c<$l|c<-l]       -- length of l times c, for each c in l

# Japt v2, 16 bytes

    o\d
    l
    ¬xpV +V*Ng

Test it online!

### Explanation

    o\d         First line: Set U to the result.
    o           Keep only the chars in the input that are
    \d          digits. (literally /\d/g)
    l           Second line: Set V to the result.
    l           U.length
    ¬xpV +V*Ng  Last line: implicitly output the result.
    ¬           Split U into chars.
    x           Sum after
    pV          repeating each V times.
    +V*Ng       Add V * first input (the sum of the horizontals) to the result.

# C# (.NET Core), 150 141 133 bytes

Saved 9 bytes thanks to @TheLethalCoder
Saved another 8 bytes thanks to @TheLethalCoder

    a=>{var c=(a+"").Replace(".","").Replace("-","");int i=0,l=c.Length;var r=a*l;for(;i<l;)r+=int.Parse(new string(c[i++],l));return r;}

Try it online!

Takes a string as an input and outputs the 'squared' number as a float. This code follows the following algorithm:

1. Create a new string from the input, but without the decimal points and symbols, so we can get our length and the numbers for the columns from there.
2. Calculate the input times the length of the string we created at point 1.
3. For each column in our 'square', create a new string with the column number and the row length and add it to our result.

Example: Input: -135.5

1. If we replace decimal points and symbols we get the string 1355, which has a length of 4.
2. The input times 4: -135.5 * 4 = -542.
3. Now we create new strings for each column, parse them and add them to our result: 1111, 3333, 5555, 5555. If we sum these numbers up we get 15012, which is exactly what our program will output.

• Welcome on the site, and nice first answer (the explanations in particular are appreciated)! Jul 31, 2017 at 8:19
• @Dada Thank you! Even though I am rather unpleased by the bytes I gained from stuff like string.Replace(), but I guess that's the only way it works! Jul 31, 2017 at 8:24
• Might be able to save some bytes by setting i and l to floats. Jul 31, 2017 at 13:53
• @TheLethalCoder Thought of that as well; sadly, indexing does not work with floats, and .Length cannot implicitly be converted to float. Jul 31, 2017 at 13:55
• a=>{var c=a.Replace(".","").Replace("-","");int i=0,l=c.Length;var r=float.Parse(a)*l;for(;i<l;)r+=int.Parse(new string(c[i++],l));return r;} 141 bytes. Might be able to save by taking input as a float and casting to a string with n+"" but I haven't checked. Jul 31, 2017 at 14:00

# Brachylog, 23 bytes

    {∋ịṫ}ᶠ⟨≡zl⟩j₎ᵐ;[?]zcịᵐ+

Try it online!

Brachylog doesn't go well with floats... Explanation:

    {∋ịṫ}ᶠ⟨≡zl⟩j₎ᵐ;[?]zcịᵐ+   Takes string (quoted) input, with '-' for the negative sign
    ᶠ         Return all outputs (digit filter)
    { }       Predicate (is digit?)
    ∋         An element of ?
    (input)
    ị         Convert to number (fails if '-' or '.')
    ṫ         Convert back to string (needed later on)
    ⟨ ⟩       Fork
    ≡         Identity
    l         Length
    with
    z         Zip
    ᵐ         Map
    ₎         Subscript (optional argument)
    j         Juxtapose (repeat) (this is where we need strings)
    ;         Pair with literal
    [ ]       List
    ?         ?
    z         Zip
    c         Concatenate (concatenate elements)
    ᵐ         Map
    ị         Convert to number

## Husk, 15 bytes

    §+ȯṁrfΛ±TṁrSR#±

Takes a string and returns a number. Try it online!

## Explanation

It's a bit annoying that the built-in parsing function r gives parse errors on invalid inputs instead of returning a default value, which means that I have to explicitly filter out the columns that consist of non-digits. If it returned 0 on malformed inputs, I could drop fΛ± and save 3 bytes.

    §+ȯṁrfΛ±TṁrSR#±   Implicit input, e.g. "-23"
    #±                Count of digits: 2
    SR                Repeat that many times: ["-23","-23"]
    ṁr                Read each row (parse as number) and take sum of results: -46
    ȯṁrfΛ±T           This part is also applied to the result of SR.
    T                 Transpose: ["--","22","33"]
    fΛ±               Keep the rows that contain only digits: ["22","33"]
    ṁr                Parse each row as number and take sum: 55
    §+                Add the two sums: 9

# Python 3, 95 94 87 85 84 bytes

    def f(i):l=[x for x in i if"/"<x];k=len(l);print(k*float(i)+sum(int(x*k)for x in l))

# Python 3, 78 bytes

    lambda x:sum(float(i*len(z))for z in[[i for i in str(x)if"/"<i]]for i in[x]+z)

Test Suite. The second approach is a port to Python 3 inspired by @officialaimm's solution.

# Python 2, 81 74 bytes

-7 bytes thanks to @Mr. Xcoder: '/'<i

• Takes in integer or float, returns float.

    lambda x:sum(float(i*len(z))for z in[[i for i in`x`if"/"<i]]for i in[x]+z)

Try it online!

## Explanation:

Say 123.45 is given as input. [i for i in`x`if"/"<i] gives a list of stringified integers ['1','2','3','4','5'] (which is also z). Now we iterate through [x]+z, i.e. [123.45,'1','2','3','4','5'], multiplying each element by len(z), here 5, and converting each to a float (so that strings also convert accordingly), yielding [617.25,11111.0,22222.0,33333.0,44444.0,55555.0].
Finally we calculate the sum(...) and obtain 167282.25. • 78 bytes. Replace i.isdigit() with "/"<i<":" Jul 29, 2017 at 18:34 • 74 bytes. You can replace i.isdigit() with "/"<i, in fact, because both . and - have lower ASCII codes than digits, adn / is in between them Jul 29, 2017 at 18:39 • You're welcome. I've ported it to Python 3 as an alternative to my answer Jul 29, 2017 at 18:46 # JavaScript, 75 62 bytes a=>(b=a.match(/\d/g)).map(b=>a+=+b.repeat(c),a*=c=b.length)&&a Try it online -2 bytes thanks to Arnauld -5 bytes thanks to Shaggy (I though the function must receive a number, but now I see that lot of other answers receive string too) # Perl 5, 37 33 + 1 (-p) = 38 34 bytes $_*=@n=/\d/g;for$\(@n){$_+=$\x@n} Try it online! Used some tricks from Dom's code to shave 4 bytes Explained: @n=/\d/g; # extract digits from input $_*=@n; # multiply input by number of digits for$\(@n){ # for each digit: $_+= # add to the input$\x@n} # this digit, repeated as many times as there were digits # taking advantage of Perl's ability to switch between strings # and numbers at any point • Came up with a very similar approach, but managed to get a couple of bytes off using $\ and exiting the loop: try it online! Jul 31, 2017 at 14:40 • Used some inspiration from you to shave mine down. What's the "}{" construct at the end of yours? I'm not familiar with that one. Aug 5, 2017 at 2:28 • It's one I learnt from this site, basically -n and -p literally wrap a while(){...} around the code so }{ breaks out of that. This unsets $_ but if you use $\ as your variable it'll still get printed since $\ is appended to every print. Means you can stores number or something in that and disregard $_. Not sure that was a great explanation, but check out the Tips for golfing g in Perl thread, I'm sure that'll explain it better! Glad to have helped your score though! Aug 5, 2017 at 7:43 # Jelly, 17 bytes ŒṘfØDẋ€L©$ŒV;ẋ®$S Try it online! # Pyth, 18 bytes s+RvQsM*RF_lB@jkUT Try it here. 
# Pyth, 21 20 bytes K@jkUTQ+smv*lKdK*lKv Test suite. Uses a completely different approach from @EriktheOutgolfer's answer, which helped me golf 1 byte in chat, from 22 to 21. # Explanation K@jkUTQ+s.ev*lKbK*lKv K@jkUTQ - Filters the digits and assigns them to a variable K. m - Map. Iterated through the digits with a variable d v - Evaluate (convert to float). *lKd - Multiplies each String digit by the length of K. s - Sum + - Sum *lKvQ - Multipies the number by the length of the digits String # Octave, 100 82 bytes Thanks a lot @TomCarpenter for teaching me that assignments have a return value and saving me 18 bytes! @(v)(n=nnz(s=strrep(num2str(abs(v)),'.','')-'0'))*v+sum(sum(s'*logspace(0,n-1,n))) Try it online! ### Ungolfed/Explanation function f=g(v) s=strrep(num2str(abs(v)),'.','')-'0'; % number to vector of digits (ignore . and -) n=nnz(s); % length of that vector f=n*v+sum(sum(s'*logspace(0,n-1,n))) % add the number n times and sum the columns of the square end The way this works is that we basically need to add the number itself n times and then add the sum of the columns. Summing s' * logspace(0,n-1,n) achieves the sum of columns, for example if v=-123.4 that matrix will be: [ 1 10 100 1000; 2 20 200 2000; 3 30 300 3000; 4 40 400 4000 ] So we just need to sum it up and we're done. • You can save 18 bytes by smushing it all into an anonymous function @(v)(n=nnz(s=strrep(num2str(abs(v)),'.','')-'0'))*v+sum(sum(s'*logspace(0,n-1,n))). Try it online! Jul 30, 2017 at 21:37 # Swift 4, 139 134 bytes func f(s:String){let k=s.filter{"/"<$0};print(Float(s)!*Float(k.count)+k.map{Float(String(repeating:$0,count:k.count))!}.reduce(0,+))} Test Suite. # Explanation • func f(s:String) - Defines a function f with an explicit String parameter s. • let k=s.filter{"/"<$0} - Filters the digits: I noticed that both - and . have smaller ASCII-values than all the digits, and / is between ., - and 0. 
Hence, I just checked if "/" is smaller than the current character, as I did in my Python answer. • print(...) - Prints the result. • Float(s)!*Float(k.count) - Converts both the String and the number of digits to Float and multiplies them (Swift does not allow Float and Int multiplication :()). So it adds the number x times, where x is the number of digits it contains. # Python 3, 68 707377 bytes lambda n:sum(float(n)+int(_*sum(x>"/"for x in n))for _ in n if"/"<_) Try it online! Loops over every digit character and repeats it by the number of digit characters overall, makes that into an integer, and adds that to n. This way n gets added d times, the horizontal part of the sum, along with the digit repetition, which is the vertical part. Originally used str.isdigit but >"/", thanks to others in this thread, saved a lot of bytes. Saves two bytes by taking n as a string, but the output is messier. lambda n:sum(n+int(_*sum(x>"/"for x in str(n)))for _ in str(n)if"/"<_) Try it online! # Japt v2.0a0 -x, 10 bytes Takes input as a string. f\d çU cUy f\d çU\ncUy :Implicit input of string U > "-0.45" f :Match (returns an array) \d : RegEx /\d/g > ["0","4","5"] çU :Fill with U > ["-0.45","-0.45","-0.45"] \n :Reassign to U c :Concatenate Uy : U transposed > ["-0.45","-0.45","-0.45","---","000","...","444","555"] :Implicit output of sum of resulting array > 997.65
https://fuelphp.com/forums/discussion/6344/install-on-windows7
FuelPHP Forums: install on windows7

• I'm trying to install FuelPHP on my computer with WAMP, and I can't get the command php oil running; when I run it, nothing happens. I've already changed my environment variables... I have PHP installed on one disk, and WAMP and Fuel on another disk... I don't know... I also tried to run it with Cygwin, but the sudo command doesn't work either... can someone help me? Thanks
• I'm a beginner and don't know some terms. What is git? What is git-bash? What do I need to have installed to get it started on Windows? Do you know of tutorials for Windows? Thanks, I apologize for my bad English.
• Have you tried Google? Search for 'git': the first result is the correct one. Search for 'git-bash': the first result is the correct one, the second one a good tutorial. You can also opt for an IDE, such as PHPStorm, which has built-in support for git, and direct access to GitHub repositories.
• If anyone is interested: installation of oil on Windows using Cygwin.
  1. Install Cygwin, and preferably git, ssh and curl at the same time.
  2. Append the 'cygwin/bin/' directory to the Windows environment variable PATH.
  3. Run cygwin/bin/bash.exe.
  4. Enter the following command in the console window: curl get.fuelphp.com/oil | sed 's/sudo/\$\@' | sh
  And you should be all set.
• On Windows, you will have to use /path/to/php.exe oil, from the directory oil is in. If you do that, does the help page show? If not, what does it do (i.e. what's your definition of "nothing")?
• All you can do is download the source, extract it to /c/wamp/www, and do the following:
  1. First install git.
  2. Open git-bash.
  3. Add an alias to php, something like this:
     $ alias php=/c/wamp/bin/php/php5.3.8/php.exe
     $ cd /c/wamp/www/fuel
  You can now use php like this: $ php oil refine migrate
  Please note that you cannot use these commands:
  1. curl get.fuelphp.com/oil | sh
  2. oil create blog
  because they install oil to the filesystem for Linux or OS X only. I hope this helps...
• Thanks for your answer. I went crazy with Windows, so I ended up doing my project on a Mac, where everything worked fine in almost a few minutes. Sorry for the waste of time.
• The link below explains how to run PHP from cmd.exe; you have to add the path for PHP to the environment variables in Windows. http://www.windows7hacker.com/index.php/2010/05/how-to-addedit-environment-variables-in-windows-7/ E.g. "C:\php\php.exe;" (add at the start of the Path variable, without the quotes :p). Once added:
  1. Open cmd.exe.
  2. Type "cd path\to\fuelphp" to navigate to the FuelPHP directory.
  3. Type php oil help.
  Hope this helps
• @coax, just sharing how I work on Windows... it may not be a standard environment, but I'm comfortable working with it... @arbme, I don't use something like that for a reason: I don't want to mess up Windows, hahaha; when Windows gets messed up, it's totally messy... I prefer to run some commands manually rather than editing system setups...
• @reith2004, haha, yeah, I hear you, Windows is prone to errors/failures when changing system settings, but it's a simple edit that just allows you to run PHP in cmd without the pain of having to type the full path for every command. But most prefer Git Bash. cmd.exe is your friend.
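Pulling the advice in this thread together, the whole cmd.exe setup reduces to three steps. This is only a sketch: the PHP and FuelPHP paths below are examples taken from the posts above, so substitute your own install locations.

```
:: cmd.exe session -- put php.exe's folder on the PATH (example path)
set PATH=C:\php;%PATH%

:: change to the FuelPHP project root (example path)
cd C:\wamp\www\fuel

:: run oil through the PHP interpreter directly
php oil help
```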
https://socratic.org/questions/how-do-you-convert-1-jsqrt3-to-polar-form
# How do you convert 1+jsqrt3 to polar form? Jul 23, 2016 $1 + j \sqrt{3} = 2 \left(\cos \left(\frac{\pi}{3}\right) + j \sin \left(\frac{\pi}{3}\right)\right)$ #### Explanation: To write the complex number, say $a + j b$, where ${j}^{2} = - 1$, in polar coordinates, we write it as $z = r \left(\cos \theta + j \sin \theta\right)$. Now as $r \cos \theta = a$ and $r \sin \theta = b$, $r = \sqrt{{a}^{2} + {b}^{2}}$, $\theta = {\tan}^{- 1} \left(\frac{b}{a}\right)$ Hence here for the complex number $1 + j \sqrt{3}$: $r = \sqrt{{1}^{2} + {\left(\sqrt{3}\right)}^{2}} = \sqrt{4} = 2$ and $\theta = {\tan}^{- 1} \left(\frac{\sqrt{3}}{1}\right) = {\tan}^{- 1} \sqrt{3} = \frac{\pi}{3}$ Hence $1 + j \sqrt{3} = 2 \left(\cos \left(\frac{\pi}{3}\right) + j \sin \left(\frac{\pi}{3}\right)\right)$
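The arithmetic above can be double-checked with Python's cmath module (a sanity check, not part of the original answer):

```python
import cmath
import math

# The worked example: 1 + j*sqrt(3) in polar form.
z = complex(1, math.sqrt(3))
r, theta = cmath.polar(z)   # modulus and argument

print(r)      # 2.0, up to floating point
print(theta)  # pi/3, about 1.0472, up to floating point
```

cmath.polar returns the pair (r, phi), matching the r and theta derived by hand.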
https://www.socratease.in/content/2/motion-straight-line-2/28/how-to-solve
Perfecto! The stone is going up, so $$s$$ is positive. $$a$$ is always negative, as the Earth is pulling the object down. Also, when the stone reaches its maximum height, the final velocity $$v = 0$$.
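As a quick check of these sign conventions, here is a small worked example with made-up numbers, using the kinematic relation v² = u² + 2as:

```python
# Illustrative numbers (not from the lesson): a stone thrown straight up
# with u = 14.7 m/s, under a = -9.8 m/s^2 (a is negative: Earth pulls down).
# At the maximum height the final velocity v = 0, so v^2 = u^2 + 2*a*s gives
#   s = -u**2 / (2*a)
u, a = 14.7, -9.8
s = -u**2 / (2 * a)
print(s)  # maximum height, about 11.025 m (positive, as expected)
```

Because a is negative, the minus signs cancel and s comes out positive, matching the convention that the upward displacement is positive.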
https://ipmsisrael.co.il/oi7omh1t/ff7-black-chocobo
First, feed your blue chocobo and your green chocobo 10 Sylkis Greens each (the more you can afford to feed them, the better it is, but 10 is the bare minimum). However, when you encounter an enemy in battle the first time you’ll immediately obtain a Prism for that enemy. Capturing the Chocochick means that you will need an open Chocochick Prismarium. Breeding a Black Chocobo You need a Blue and a Green Chocobo of opposite genders in order to breed a Black Chocobo. World of Final Fantasy features a huge list of monsters called Mirages that you can capture, and battle alongside in combat. Once captured, Chocobos can be kept in stables in the Chocobo Farm; the option to buy stables opens after the party has acquired the Highwind. While Final Fantasy VII Remake is a linear game for most of its runtime, it does eventually open up. They’ve had various abilities over the years, but the thing that always made them special was that they were the only kind of Chocobo with the ability to fly. Available for Windows, Linux + Mac Os. World of Final Fantasy: How to Get Sephiroth Champion Summon, World of Final Fantasy: How to Summon FF Characters in Battle, World of Final Fantasy: How to Rename Your Mirages, Microsoft Flight Simulator Tokyo Narita Airport Gets Impressive Screenshots from Drzewiecki Design, CyberConnect2 Reveals Why .hack//Link Was Never Released in The West, Azur Lane Finally Getting Pizza Hut Collaboration Skins in The West (Without the Pizza Hut), Microsoft Flight Simulator Louisville International Airport Released By FSDreamTeam, Valorant Celebrates The Coming of New Agent Yoru With Music Video by Japanese Rapper AK-69, World of Final Fantasy: How to Get the Black Chocochick Mirage. 
Crow Beret Vejovis wand - Woodworking 102+ 47 Rose Wand None 47 Rose Wand Beaucedine Glacier Behemoth's Dominion Western Altepa Desert Eastern Altepa Desert The Sanctuary of Zi'Tah Yhoator Jungle These enemies appears in the early locations, and when the player defeats a black chocobo, they'll burst into feathers. $34.99$ 34. World of Final Fantasy features a huge list of monsters called Mirages that you can capture, and battle alongside in combat. Make sure to check back with Twinfinite for more guides, tips, and tricks on World of Final Fantasy. $5.49 shipping. To wrap it up, Black Chocobo provides multiple features that might appeal to Final Fantasy fans. Black Chocobo. It just takes a bit of patience, really. FINAL FANTASY VII. Use to summon your personal black chocobo. Getting the Black Chocobo. However, I'm trying to breed a blue chocobo for close to two hours now. saves are found in c:\users\name\documents\squareenix\final fantasy vii\user_XXXXXX\in this video i show you how to use black chocobo on the rerelease of … Regular chocobos and River (Blue) Chocobos have a harder time, though I've achieved Rank S with both. You have to use a Zeio Nut to get the gold chocobo. Below is a list of all the sections to this guide. The first fishy thing I noticed was that the breed of a chocobo seems to be determined before I enter the chocobo ranch, I reset for a half hour then to … Download Black Chocobo! Home » Guides » World of Final Fantasy: How to Get the Black Chocochick Mirage. A small hand-carved whistle that emits a unique, high-pitched tone discernible only by a black chocobo trained from birth to recognize and respond to the sound. Now you can stack and use this rare variant of the Chocochick in battle. but in S class, he tends to be slower than his normal phase... i think i beat him in a yellow chocobo. Yeah, it's very possible. Filed under. An editing tool for FF7 enthusiasts. 
Wild Chocobos can be encountered in areas on the world map covered in chocobo tracks. Eventually, you'll get the option to use Imprism on the Black Chocochick, and if successful, you'll have it under your control.

Black Chocobo is a Free Software (GPLv3) FF7 save game editor written in Qt; it is a full-featured save editor containing all the features you would expect. Windows users should download the Windows portable build, Mac OS users the Mac OS version, most Linux users can use the AppImage, and Arch Linux users can install from the AUR; sample saves and a source package are also available. One forum user confirms that the .ff7 format, or any other save from this game, works, and that Retroarch .srm saves also work. To edit a PSP save: connect the PSP to the computer and open Black Chocobo, click File > Open and choose the extracted save on your PSP memory card (it should be in E:\seplugins\cwcheat\mc), edit it however you wish, then File > Save in Black Chocobo; unplug the PSP from the computer and run Final Fantasy 7 on it.

The Final Fantasy VII Remake Chocobo & Moogle Materia is a special item that you can discover in a hidden nook in Sector 4. In Legend of Mana, black chocobos make an appearance as enemies. There are some "Sephiroth as the main character" mods out there if you look.

Black chocobos are notoriously hard to obtain, and they're nearing extinction. You have to use a Zeio Nut to get the gold chocobo. Then, head off to the Gold Saucer, to the chocobo … If you want to win with any other type of Chocobo, you pretty much have to use the long course, and get as far ahead as you can since the Black Chocobo sprints towards the end.
Black Chocobo changelog notes: it will now use the native style (except on Mac OS); fixed signing for non-numeric user IDs (straight MD5); fixed "Menu Locked" and "Visible" changes being applied backwards; ff7tk adds an achievement editor and shows field items in … A Full Featured Save Editor, Black Chocobo contains all the features you would expect and more, and is available for Windows, Linux and Mac OS.

One player notes: "I've bred the Chocobos before on the PS1 and original PC version and I knew it worked then." Another, on hacking a character in: "You can hack him in, but I don't know if Black Chocobo is the way to go."

In Final Fantasy XIV, the Black Chocobo is the first flying mount you will get in Heavensward. Upon finishing the quest "Divine Intervention", you will get the Whistle to summon the Black Chocobo. The bird is slightly more muscular than you expected and exhibits the strange habit of squatting when left to itself. FFXV Black Chocobos are a special subspecies of everybody's favorite Final Fantasy bird. The Chocobo & Moogle Materia in FF VII is a pretty fun thing, since it allows you to summon a Moogle and a Chocobo … There are a number of ways to acquire cash in Final Fantasy VII, and selling a lot o… Just run into the monster to initiate battle.

Back at the Chocobo Farm, the stables cost 10,000 gil each; in order to use the facilities of the Chocobo Farm, the player should rent out six stables for sixty thousand gil, and must speak to the man inside the house to rent them. You'll need to acquire a new type of nut and capture a new grade of Chocobo, the Wonderful Chocobo, then feed and train your Wonderful and Black chocobos.
One user asks: "I tried raw and .ff7 but the Switch won't recognize it; in what format do I have to save after I edit it in Black Chocobo?"

Final Fantasy VII's (probably) most intricate side quest is the Chocobo breeding quest. Catch a Wonderful chocobo of the opposite sex to your Black one, and make them reach class S. Generally a Gold Chocobo can win even if you don't push any buttons during the race, and if you have a Gold or Black Chocobo you can win pretty easily, as long as you don't move at the slowest speed possible. Here you will find a complete Final Fantasy VII chocobo guide. You have to make sure you max out the Chocobos with the best greens (the W-Item duplication glitch helps with this). Once done, make a note of the Black Chocobo's gender and fly to the Icicle Inn area, where you'll find some Chocobo tracks just south of the town. If you get a Gold Chocobo (see the breeding guide here: http://www.gamefaqs.com/computer/doswin/file/130791/2384), you'll see what I mean. The Black Chocobo is a fine mount, but the mightiest of Materia still waits out of reach… what you need is a Chocobo that can cross oceans, and you have most of the tools you need! (FF7, not the Remake.)

The Black Chocochick's requirement for creating a prismunity and capturing it reads, "Restore the Mirage's HP to create a prismunity." Before heading into battle, you'll want to make sure you've stocked up on potions to use on the little guy.

Changelog: Black Chocobo will now select its own size, save/restore its geometry, and unused icons have been cleaned up.

In Final Fantasy XIV it is a purebred Ishgardian black chocobo, trained from birth by the House Fortemps knight Ser Haurchefant and presented to you in a gesture of true friendship.
You have to use a Carob Nut when breeding the blue and green chocobos to get a black chocobo. First off, the Black Chocochick can be found at the very end of the Watchplains, residing in a small village; capturing this creature first of course requires you to find it, and then fulfill certain conditions. A rare Mirage in the game is the Black Chocochick, a variant on the regular Chocochick that has a different color and a little variation in its stats. There's plenty of other Mirages to capture, so get out there and find as many as you can.

Once you have an A rank Blue Chocobo and an A rank Green Chocobo, mate them together, feed them a Carob Nut and, with a little luck, you'll end up with a Black Chocobo. Black Chocobos are a product of years of breeding. Do note that it is a side-quest, meaning you don't need to complete it at all in order to finish the game and/or to enjoy it. Wonderful chocobos should be able to beat that black chocobo too. Keep in mind, the Black Chocobo will always have higher stats than you, but that doesn't mean it is necessarily better.

Square Enix has been passing along promotions to unlock a Black Fat Chocobo mount in Final Fantasy 14; in the United States, you simply had to buy a qualifying item from Amazon. However, you won't be able to fly until you complete a couple of tasks.

The Black Chocobo save editor is free to use and is regularly updated.
Have one of your characters hit the Chocochick with basic attacks, but make sure they aren't strong enough to kill it in one shot. While one character is attacking, have the other use a potion on the chick every turn to restore its HP. You may have to try a few times to successfully capture it, and just be aware that the monster will be attacking you the entire time.

Black Chocobo is an FF7 save game editor written in Qt; it can open and write both PC and PSX save game formats as well as saves for most emulators.

Even if the yellow chocobos are tamable, the black chocob… As a matter of fact, it is considered one of the most complicated quests in the entire Final Fantasy series. In order to use the facilities of the Chocobo Farm, the player should rent out six stables for sixty thousand gil. Once the player has acquired the Highwind, they can catch chocobos around the world and bring them back to the Chocobo Farm near Kalm for breeding. Once you have a blue and a green chocobo of opposite sexes, you can try to get the black chocobo; in order to maximize the odds of breeding a Black you need to win approximately 9 races between your two Chocobos. If breeding keeps failing, you may be using the wrong nut.

Mountain (Green) Chocobos have a pretty easy time with the race since they don't slow down in the water area of the long course (this is how it was in my game, I'm not sure if this is common or not) and can get ahead far enough during that portion to maintain the lead in the end. In classes B and A (he does appear sometimes) the black chocobo rider is pretty hard to beat.

One reader asks: can you still reset a chocobo's gender in the latest remake for the PC?
https://aperiodical.com/tag/sierpinski-triangle/
# You're reading: Posts Tagged: Sierpinski triangle

### Numbers and number patterns in Pascal's triangle

This is the fourth in a series of guest posts by David Benjamin, exploring the secrets of Pascal's Triangle.

## Triangles and fractals

If we highlight the multiples of any of the Natural numbers $\geq 2$ in Pascal's triangle then they create a pattern of inverted triangles. The images above are evocative of the Sierpinski sieve (also known as the Sierpinski gasket or Sierpinski's triangle), a fractal described in 1915 by the Polish mathematician Wacław Sierpiński (1882-1969). Fractals are beautiful geometric shapes which are self-similar: regions of a fractal, even down to (theoretically) infinitesimal scales, are identical to the entire shape. The Koch snowflake, generated geometrically by successive iterations on an equilateral triangle, is an example of a fractal. Julia sets and Mandelbrot sets are examples of fractals generated using recursion on complex functions. Many examples of fractals appear in nature, and the Polish-born French-American polymath Benoit Mandelbrot (1924-2010) suggested that fully developed turbulent flows are fractals. It is a lovely surprise to discover that a simple fractal can be found inside Pascal's triangle. It is achieved by considering all the numbers in the triangle modulo 2 – equivalent to colouring in only the multiples of 2, as in the first diagram at the top of the post. In this version, every odd number becomes $1$ and every even number becomes $0$, and by considering sufficiently many lines of the triangle, the Sierpinski pattern emerges.
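As a quick illustration (my own Python sketch, not from the original post; `pascal_row` is just a helper name chosen here), the mod-2 rows can be generated directly, and reading each row as a binary number produces the decimal sequence explored in the next section:

```python
# Pascal's triangle mod 2: odd entries print as '#', even as ' ',
# so the Sierpinski pattern emerges row by row.
def pascal_row(n):
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

for n in range(16):
    print(''.join('#' if c % 2 else ' ' for c in pascal_row(n)))

# Reading each mod-2 row as a binary number gives the sequence
# 1, 3, 5, 15, 17, 51, 85, 255, ... discussed below, and every one
# of the first 32 terms divides the 32nd term, 2**32 - 1.
decimals = [int(''.join(str(c % 2) for c in pascal_row(n)), 2)
            for n in range(32)]
assert decimals[:8] == [1, 3, 5, 15, 17, 51, 85, 255]
assert decimals[31] == 2**32 - 1
assert all(decimals[31] % d == 0 for d in decimals)
```

Running more rows (32, 64, …) shows the same self-similar pattern repeating at larger scales.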
## Number patterns in the triangle

If we consider the first 32 rows of the mod 2 version of the triangle as binary numbers: $1, 11, 101, 1111, 10001,…$ and convert them into decimal numbers, we obtain the sequence: $1, 3, 5, 15, 17, 51, 85, 255, 257, 771, 1285, 3855, 4369, 13107, 21845, 65535, 65537,$ $196611, 327685, 983055, 1114129, 3342387, 5570645, 16711935, 16843009,$ $50529027, 84215045, 252645135, 286331153, 858993459, 1431655765, 4294967295$

Interestingly, all members of this sequence are factors of the final term, $4294967295 = 2^{32} - 1$. Since this is one less than a power of two, it's a Mersenne number. Why the first $31$ terms are all factors of the 32nd term is difficult to summarise here, but there is a thread on StackExchange discussing what happens to the pattern after the 32nd term. $4294967295$ has prime factorisation $3 \times 5 \times 17 \times 257 \times 65537$. These five prime factors are Fermat numbers – numbers of the form $2^{2^{n}}+1$ – in this case with $n = 0, 1, 2, 3$ and $4$. As of the time of writing these are the only known Fermat numbers which are also prime. These patterns in the rows of the triangle are intriguing, and my own efforts to understand them have uncovered a few other interesting discoveries – notably, that while the 32nd term is not divisible by the 33rd, the 34th term is exactly 3 times the 33rd. The pairs of terms after that seem to alternate, as they do from the start of the sequence, between a non-integer ratio and a ratio of exactly 3, which I conjecture is a pattern that will continue.

## Two welcome appearances

$e$ and $\pi$ are two of the most used transcendental numbers. The Swiss mathematician Leonhard Euler (1707-1783) connected them with the most beautiful equation, called Euler's identity: $e^{i\pi}+1=0$. There are many approximations connecting $e$, $\pi$ and other irrational numbers to be found here. In 2012 Harlan J.
Brothers proved that $\displaystyle\lim_{n\to \infty} \frac{\ \displaystyle\frac{s_{n+1}}{s_n}\ }{\displaystyle\frac{s_n}{s_{n-1}}}=e$ where $s_n$  is the product of the numbers on row $n$ of Pascal’s triangle. The proof can be found on Cut the Knot, part of the wonderful website of Dr Ron Knott. In 2007 Jonas Castillo Toloza discovered a connection between $\pi$ and the reciprocals of the triangular numbers (which can be found on one of the diagonals of Pascal’s triangle) by proving $\pi= 2 + \frac{1}{1} + \frac{1}{3} – \frac{1}{6} – \frac{1}{10} + \frac{1}{15} + \frac{1}{21} – \frac{1}{28} – \frac{1}{36} + \frac{1}{45} + \frac{1}{55} – \ldots$ Three proofs are given on Cut the Knot. ## Harmony in the triangle The infinite sum of the reciprocals of the Natural numbers is called the harmonic series, $H_n$, where $H_n = \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} \ldots$ The series is divergent, but it crawls its way towards infinity, and takes $15092688622113788323693563264538101449859497$ terms just to pass a total of $100$. The harmonic series can be used to create a version of Pascal’s triangle – the series itself is placed along the two leading diagonals, and the entries are then related by each being the difference of the fraction to its left, and the one diagonally above it and to its left. For example, $\frac{1}{30} = \frac{1}{5}-\frac{1}{6}$. Dividing the first term in the $n^{th}$ row by every other term in that row creates the $n^{th}$ row of Pascal’s triangle. The table below shows the calculations for the $5^{th}$ row: In our next post, we’ll talk about probability and statistics in Pascal’s triangle, and consider some of Pascal’s other contributions. 
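Both of these results are easy to check numerically. The sketch below (my own Python, with `pascal_row` and `brothers` as assumed helper names) evaluates the ratio-of-ratios of row products, and a partial sum of Toloza's series:

```python
from math import prod, e, pi

def pascal_row(n):
    # binomial coefficients C(n, 0) ... C(n, n)
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

def brothers(n):
    # (s_{n+1}/s_n) / (s_n/s_{n-1}) with s_m the product of row m,
    # computed as one big-integer ratio to avoid float overflow.
    s = [prod(pascal_row(m)) for m in (n - 1, n, n + 1)]
    return (s[0] * s[2]) / (s[1] * s[1])

print(brothers(100))       # 2.7048..., approaching e = 2.71828...

# Toloza's series: pi = 2 + 1/1 + 1/3 - 1/6 - 1/10 + 1/15 + 1/21 - ...
# (reciprocals of the triangular numbers T_k = k(k+1)/2, signs + + - -).
total = 2.0
signs = [1, 1, -1, -1]
for k in range(1, 100001):
    total += signs[(k - 1) % 4] * 2.0 / (k * (k + 1))
print(total)               # 3.14159..., approaching pi
```

The ratio-of-ratios turns out to equal $(1+1/n)^n$ exactly, which is why the convergence to $e$ is visible even at modest $n$.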
### Matt Parker’s Fractal Christmas Tree Stand-up Mathematician and all-round maths lover Matt Parker has been busy again, and he’s made a set of free worksheets for teachers (and, of course, interested non-teachers) to assemble paper nets of 3D fractals, including a Menger sponge and Sierpinski tetrahedron (which I’ve just learned is also called a tetrix). There’s also a sheet for making a delightfully festive/mathematical fractal Christmas tree, with a Menger sponge base, Sierpinski branches and a Koch Snowflake star on top. Presumably those interested can make Mandelbulb ornaments and Cantor Set tinsel to hang on it. Don’t ask me how that would work.
http://math.stackexchange.com/tags/approximation/hot
# Tag Info

## Hot answers tagged approximation

6 Consider $$f(x)=x-(1+\ln(1+\ln(1+\ln(x)))).$$ It is $$f'(x)=1-\frac{1}{1+\ln(1+\ln(x))}\cdot \frac{1}{1+\ln (x)}\cdot \frac 1x=\frac{(1+\ln(1+\ln(x)))(1+\ln x)x-1}{(1+\ln(1+\ln(x)))(1+\ln x)x}.$$ If $x\in(e^{1/e-1},1)$ then $f'(x)<0$ (that is, $f$ is strictly decreasing in $(e^{1/e-1},1)$) and if $x\in(1,\infty)$ it is $f'(x)>0$ (that is, $f$ is ...

6 That is known as a "Borwein Integral", named after one of the authors of the paper you linked. http://en.wikipedia.org/wiki/Borwein_integral Wikipedia has some references which explain what is going on. Here is one of them. http://schmid-werren.ch/hanspeter/publications/2014elemath.pdf I would have left this as a comment but this site requires 50 ...

3 This is not a complete answer, but I followed their advice "see [16, chap. 2] for additional details" (that's the book Experimentation in Mathematics). In section 2.5.2 they show that the cosine product (without the factor $\cos 2x$) equals $\prod_{k=0}^{\infty} \operatorname{sinc}\left(\frac{2x}{2k+1}\right)$, whose integral can be computed using the ...

3 The following should at least partly answer your question. Let $X_1,\ldots,X_k$ be random variables i.i.d. according to $B(n,\frac{1}{2})$. Let $$Y_k=\min\{X_1,\ldots,X_k\}$$ The following is valid: \begin{align*} E(Y_k) &= \frac{1}{2^{nk}}\sum_{t=1}^{n}\left(\sum_{j=t}^{n}\binom{n}{j}\right)^k\tag{1}\\ E(Y_k) &\sim \ldots \end{align*}

2 In general the differential equation is solved by $$y(x) = \exp\left(\frac{1}{\epsilon} \int_0^x Q(\xi)\,d\xi\right)\left[y(0) + \frac{1}{\epsilon}\int_0^x \exp\left(-\frac{1}{\epsilon} \int_0^\zeta Q(\xi)\,d\xi\right) R(\zeta)\,d\zeta \right]. \tag{1}$$ Unless we require $y(0) = 0$, we will always have a term of the form ...

2 I am not sure of the purpose of the exercise, as none of the three items approximates the sequence, but here goes. This is just a matter of plugging in values and taking differences, for each $n$, for the four quantities.
Following this, we can set up the table and arrive at: $$\left( \begin{array}{cccc} \text{n} & |x_n - r_n| & \ldots \end{array} \right)$$

2 These rather unexpected and large coefficients arise, in the calculation of the integral and of the successive minimization, only if we search for an expression for $a$ where the numerator has no fractional terms and no "collected" factors: in fact, in this case we necessarily have to perform some multiplications that lead to relatively large coefficients. ...

1 $$\log\frac{r+l_1}{r-l_2}=\log\frac{1+(l_1/r)}{1-l_2/r}\\=\log(1+l_1/r)-\log(1-l_2/r)\\ \approx l_1/r-(-l_2/r) = (l_1+l_2)/r$$

1 Hint: $$\frac{r+l_2}{r-l_1}=\frac{r-l_1+l_1+l_2}{r-l_1}=1+\frac{l_1+l_2}{r-l_1}\approx 1+\frac{l_1+l_2}{r}$$ Now use, for small $x$, $\log(1+x)\approx x$.

1 This answer is to address a question in the comments of the other answer on whether the exponentially diverging terms drop out in regions where $\operatorname{Re}\, Q(x) < 0$. We'll study the particular case of $Q(x) = -\sin x$ and $R(x) = 1$ over the interval $2\pi < x < 3\pi$. Note that we have $Q(x) < 0$ for all $x$ in question. The ...

1 With the PSLQ algorithm you can determine $a, b, c$ to an arbitrary tolerance. I implemented this in Python as follows:

```python
from sympy.mpmath import pslq

x = 0.5**.25

def f(a, b, c):
    return a*x**3 + b*x**2 + c*x - round(a*x**3 + b*x**2 + c*x)

def nextabc(a, b, c, i):
    return pslq([x**3, x**2, x, 1], maxcoeff=i+1, tol=i**-3, maxsteps=10000)

a = 1
b = 1
c = 1
pa = 0
pb = 0
pc ...
```

1 You can make an approximation by the normal distribution: $$1-P(X\leq 59)\approx 1-\Phi \left( \frac{59+0.5-50}{\sqrt{100\cdot 0.5 \cdot 0.5}} \right)$$ $\Phi(z)$ is the cumulative distribution function of the standard normal distribution. $0.5$ is the continuity correction factor. Formula for approximating the cdf of the standard normal distribution: ...
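The normal-approximation answer above is easy to cross-check against the exact binomial tail using only the standard library (a sketch of mine; `phi` is a helper defined here via the error function, not part of the quoted answer):

```python
from math import comb, erf, sqrt

# Exact upper tail P(X >= 60) for X ~ Binomial(100, 1/2)
exact = sum(comb(100, k) for k in range(60, 101)) / 2**100

def phi(z):
    # standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Normal approximation with continuity correction, as in the answer:
# 1 - Phi((59 + 0.5 - 50) / sqrt(100 * 0.5 * 0.5))
approx = 1.0 - phi((59 + 0.5 - 50) / sqrt(100 * 0.5 * 0.5))

print(exact, approx)   # both close to 0.028
```

The two values agree to about three decimal places, which is typical of the continuity-corrected approximation at this sample size.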
1 Do you want the following: $$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots \qquad \cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots$$

1 Near zero it is usual to use the following approximations: $$\sin x \approx x \qquad \cos x \approx 1-\frac{x^2}{2} \qquad \tan x \approx x$$ You can convince yourself, for example, using the relationships that @paul gives.

1 There is a general construction that can be used here. Consider any nonnegative measurable function $f$. Now define $$A_{n,m} = \left \{ x : \frac{m-1}{n} \leq f(x) \leq \frac{m}{n} \right \}, \quad m = 1,2,\dots,n^2, \qquad A_{n,n^2+1} = \{ x : f(x) \geq n \}.$$ Then the sequence of simple functions $$s_n(x) = \sum_{m=1}^{n^2} \frac{m-1}{n} 1_{A_{n,m}}(x) + \ldots$$

1 You can establish quite nice approximations using Padé expansions of the functions. For example $$\sin(x)\approx \frac{x-\frac{7 x^3}{60}}{1+\frac{x^2}{20}} \qquad \cos(x)\approx\frac{1-\frac{5 x^2}{12}}{1+\frac{x^2}{12}} \qquad \tan(x)\approx\frac{x-\frac{x^3}{15}}{1-\frac{2 x^2}{5}}$$ are quite good. For sure, if you increase the degrees of numerator and ...

1 You can mimic the way it is done in 2D, by randomly sampling points in the unit cube and calculating the fraction that lie in the unit sphere inside. The theoretical result is $\dfrac{\frac43\pi}{8} = \dfrac{\pi}{6}$. However, I don't see it being more efficient than the 2D version, because you need to make $1.5$ times more random samples, and the calculus ...

1 Assuming the improper integral exists, your function can be written as $$F(y) = C \exp\left(\int_0^y f(x)\; dx\right)$$ where $C = \exp(-\int_0^\infty f(x)\; dx)$. Assuming the required derivatives exist, the Taylor series of $F(y)$ around $y=0$ starts $$F(y) = C + C f(0)\; y + \dfrac{C}{2} \left( f'(0) + f(0)^2\right) y^2 + \dfrac{C}{6} \left( f''(0) + \ldots \right)$$

1 I would say either of those inferences would be extremely dicey due to selection biases. That is, the set of voters is unlikely to be particularly representative of the set of viewers.
My thinking is that you could argue that the set of voters is a representative sample of the set of viewers if the voters were randomly drawn from the set of viewers, but ...

1 Given $\alpha$ a positive integer and $0<\delta\le1$, set $$\epsilon=(1+\delta)^{1/\alpha}-1.$$ We claim that if $|z-\omega|<\epsilon$, then $|z^\alpha-\omega^\alpha| < \delta$; in particular, the real and imaginary parts of that latter difference are both less than $\delta$. To see this, consider the function $$f(z) = z^{\alpha-1} + \ldots$$
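The Monte Carlo estimate of $\pi/6$ described in one of the answers above can be sketched in a few lines of Python (the sample count and seed are arbitrary choices of mine, not from the answer):

```python
import random

# Sample points uniformly in the cube [-1, 1]^3 and count those inside
# the unit sphere; the expected fraction is (4/3)*pi / 8 = pi / 6.
random.seed(1)          # arbitrary seed, for repeatability
n = 200_000
inside = sum(
    1 for _ in range(n)
    if random.uniform(-1, 1) ** 2
     + random.uniform(-1, 1) ** 2
     + random.uniform(-1, 1) ** 2 <= 1.0
)
estimate = inside / n
print(estimate)         # close to pi/6 = 0.5235987...
```

The standard error of the estimate is about $\sqrt{p(1-p)/n} \approx 0.0011$ at this sample size, consistent with the answer's point that the 3D version needs more samples per point than the 2D one.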
https://owenduffy.net/blog/
## Working a common mode scenario – VK2OMD – voltage balun solution Recent articles Working a common mode scenario – G3TXQ Radcom May 2015 and Working a common mode scenario – G3TXQ Radcom May 2015 – voltage balun solution analysed a three terminal equivalent circuit for G3TXQ’s antenna system based on his measurements. Solutions were offered for the expected common mode current with no balun, with a medium impedance common mode choke (current balun) and an ideal voltage balun. In summary, though G3TXQ expected the antenna system to have good balance, on measurement it was not all that good. The analysis showed that even a moderate impedance common mode choke reduced the common mode current Icm substantially more than no balun, or an ideal voltage balun. This article performs similar analysis of the case of an ideal voltage balun applied to my own antenna system documented at Equivalent circuit of an antenna system at 3.6MHz. Above is the equivalent circuit. Continue reading Working a common mode scenario – VK2OMD – voltage balun solution ## Working a common mode scenario – G3TXQ Radcom May 2015 – voltage balun solution At (Hunt 2015) G3TXQ gave some measurements of his ‘balanced’ antenna system. Above is Hunt’s equivalent circuit of his antenna system and transmitter. It is along the lines of (Schmidt nd) with different notation. Continue reading Working a common mode scenario – G3TXQ Radcom May 2015 – voltage balun solution ## Photo Voltaic Array – unbelievable efficiency from Chinese sellers A friend recently purchased one of the many PV arrays advertised on eBay only to be disappointed. A common metric used to evaluate cell technologies is conversion efficiency with 1000W/m^2 insolation. Most popular products are monocrystalline silicon technology which achieves 18-25% efficiency on an assumed 1000W/m^2 insolation. If we look carefully at the above panel advertised as 200W, the active PV area is less than the frame size, probably $$A=0.93 \cdot 0.63=0.59 m^2$$. 
We can calculate efficiency $$\eta=\frac{p_{out}}{1000 A}=\frac{200}{1000 \cdot 0.59}=34\%$$, nearly double the expected efficiency for monocrystalline cells. Continue reading Photo Voltaic Array – unbelievable efficiency from Chinese sellers

## A dipole centre insulator from HDPE cutting board

I made a small dipole centre insulator from 10mm HDPE cutting board on the CNC router. HDPE is moderately UV resistant so should survive for some years outdoors. The insulator is 100mm across its widest points. No provision is made for supporting coax; it is for use with home made open wire line which will fall from the dipole leg ends. If you want to use it with coax or ribbon feedline, then incorporate a tab to secure those lines.

## Windows 10 – sound device Signal Enhancements

Above is a screenshot of a Microphone Properties window, and attention is drawn to the section highlighted in pink which may appear in some devices. The Signal Enhancements would appear to introduce certain non-linear behaviour. I preface this by saying the 'enhancements' are probably hardware dependent (ie the chipset used and driver capability) but may also involve the Windows core; this report applies to my specific configuration but hints at issues that may be systemic. That said, I performed a simple test switching an audio sine generator between two close frequencies and observed the level vs time in SpectrumLab. The lower part of the screen is with 'enhancement' ON; the only change in the upper part is with 'enhancement' OFF. Continue reading Windows 10 – sound device Signal Enhancements

## nanoVNA – experts on improvised fixtures

A newbie wanting to measure a CB (27MHz) antenna fitted with a UHF plug, when his nanoVNA has an SMA connector, sought the advice of the collected experts on groups.io. One expert advised that 100mm wire clip leads would work just fine.
Another expert expanded on that with: When lengths approach 1/20 of a wavelength in free space, you should consider and use more rigorous connections. At Antenna analyser – what if the device under test does not have a coax plug on it? I discussed using clip leads, and estimated for those shown that they behaved like a transmission line segment with Zo=200Ω and vf=0.8.

Continue reading nanoVNA – experts on improvised fixtures

## Sydney harbour is a beautiful place

One of the trips I am known to take is to Manly for lunch. Above is a pic taken whilst waiting for the train home at Circular Quay. On the right is the ferry Freshwater arriving from Manly. The Opera House is just visible on the right, north of the ‘toaster’ (one of the eyesores on the harbour). It was a sparkling day on the harbour (Port Jackson), which brought back memories of many happy days boating and sailing; it is a beautiful waterway. Manly is about 30min north east, 12km over the water, just on the north side of Sydney heads. It is challenging to get pics on the ferry as tourists push their phones in front of your face to take videos, 5 to 10 minutes at a time.

Above, the route is from home to Bowral station by car, diesel train (Endeavour railcar) to Central, electric train on the Sydney underground to Circular Quay, and ferry to Manly. The return journey was similar, but electric train from Circular Quay to Campbelltown then diesel train to Bowral. The round trip is just on 300km and nearly three hours for each direction of travel. An interactive zoomable map is available; zooming in around Sydney and a little south will show track jumps due to underground rail. The track was captured with a Holux RCV-3000 GPS logger, logs downloaded with BT747 (Chinese firm Holux is defunct and so is their application, which is now locked out of its maps provider).
## Leaflet / OpenStreetMap map rendering on devices with tiny pixels

I wrote an application that presents maps on a webpage using Leaflet and OpenStreetMap, and some readers commented that the text was hard to read on their devices. It turns out that this issue seems present on devices with a high resolution, small screen (ie high pixels/mm, or small pixel size). The reports raise the question of the compatibility of the device with the user’s Visual Acuity (VA). VA is often assessed on the familiar Snellen chart, which has characters on a 5×5 grid; normal vision is indicated by reading characters that subtend 5 minutes of arc (MOA), or 1MOA for each ‘pixel’ (px).

## An example phone screen calculation

My Huawei dub-lx2 has a screen height of 1520 px and 144mm, so the px size is 95µm. Keep in mind that the size of this pic may be much smaller on the phone than on your viewing device.

Continue reading Leaflet / OpenStreetMap map rendering on devices with tiny pixels
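The phone-screen arithmetic above extends naturally to the angle one pixel subtends. The sketch below uses the article's 1520 px / 144 mm figures; the 300 mm viewing distance is an assumed value, not from the article.

```python
import math

# Pixel size of the example phone (1520 px over 144 mm) and the angle one
# pixel subtends at an assumed 300 mm viewing distance, for comparison with
# the 1 MOA-per-"pixel" normal-vision criterion mentioned above.
px_size = 0.144 / 1520                            # ~95e-6 m per pixel
view_dist = 0.300                                 # assumed viewing distance (m)
moa = math.degrees(math.atan(px_size / view_dist)) * 60

print(f"pixel size: {px_size * 1e6:.1f} um")
print(f"one pixel subtends ~{moa:.2f} MOA at {view_dist * 1000:.0f} mm")
```

At that distance one device pixel is right at the ~1 MOA threshold, so glyphs drawn only a few pixels high sit below what normal acuity can resolve.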
https://socratic.org/questions/what-supports-the-column-of-mercury-inside-the-glass-tube-of-a-barometer
# What supports the column of mercury inside the glass tube of a barometer?

##### 1 Answer

Nov 13, 2016

Well, what else but the pressure of the atmosphere?

#### Explanation:

Atmospheric pressure will support a column of mercury $760$ $\text{mm}$ high. See this old answer for further details. Because pressure is measured as $\text{force per unit area}$, the mercury column will typically be designed to be thin, so as to minimize the volume of mercury. Even given the disadvantages and hazards of using mercury in the laboratory, many inorganic labs retain mercury columns for pressure measurement.
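The 760 mm figure ties to one standard atmosphere through the hydrostatic relation P = ρgh. A quick check, using standard handbook values for mercury density and gravity (these values are not stated in the answer above):

```python
# Hydrostatic pressure of a 760 mm mercury column: P = rho * g * h.
RHO_HG = 13_595.1   # density of mercury at 0 degC (kg/m^3)
G = 9.80665         # standard gravity (m/s^2)
H = 0.760           # column height (m)

pressure = RHO_HG * G * H
print(f"P = {pressure:.0f} Pa")   # ~101325 Pa, i.e. one standard atmosphere
```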
https://grindskills.com/mle-and-non-normality/
# MLE and non-normality

What is a non-trivial example of an identifiable model whose MLE is consistent, but the MLE’s asymptotic distribution is not normal? Parametric setting and IID sample would be desirable.

To develop StubbornAtom’s comment: if $$X_i$$ is i.i.d. uniformly distributed on $$[0,\theta]$$ and you have $$n$$ samples, then the maximum likelihood estimator of $$\theta$$ is $$\hat{\theta}_n=\max\limits_{1\le i \le n}X_i$$. $$\hat{\theta}_n$$ has a $$\mathrm{Beta}(n,1)$$ distribution scaled by $$\theta$$. As $$n$$ increases, $$n\left(\theta-\hat \theta_n\right)$$ converges in distribution to $$\mathrm{Exp}\left(\frac1\theta\right)$$, not a normal distribution. In a handwaving sense, for large $$n$$, the maximum likelihood estimator $$\hat{\theta}_n$$ approximately has a reversed and shifted exponential distribution with density $$\frac{n x^{n-1}}{\theta^n} \approx \frac n{\theta} \exp\left(\frac{nx}{\theta}-n\right)$$ when $$0 < x \le \theta$$.
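The claimed limit is easy to check by simulation (a sketch; the sample size, replication count, and seed are arbitrary choices, not from the answer). For $$\theta=1$$, $$n(\theta-\hat\theta_n)$$ should behave like an Exp(1) variate with mean 1, rather than centering on 0 as a Gaussian CLT would suggest.

```python
import random

# Simulate the scaled MLE error n*(theta - max(X_1..X_n)) for X_i ~ Uniform(0, theta)
# and check that its mean matches the Exp(1/theta) limit (mean = theta).
random.seed(1)
theta, n, reps = 1.0, 200, 5000

scaled_errors = [
    n * (theta - max(random.uniform(0.0, theta) for _ in range(n)))
    for _ in range(reps)
]
mean_err = sum(scaled_errors) / reps
print(f"mean of n*(theta - mle) over {reps} runs: {mean_err:.3f} (limit mean = {theta})")
```

Note the error is always one-sided (the MLE never exceeds $$\theta$$), another symptom of the non-normal limit.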
http://jamesjones.me/thomas-hepburn-hgnq/fringe-visibility-formula-648942
The fringe visibility is defined as V = (Imax − Imin)/(Imax + Imin); it is a measure of the contrast of the bright fringes with respect to the dark fringes. For a perfectly monochromatic source Imin = 0, so V = 1. If the visibility changes with the change of the path difference 2d cos θ, the source is not strictly monochromatic. The phrase “fringe visibility map” here refers to high-resolution images of uniform-thickness single crystal foils showing locally hemispheric deformation (i.e. bent into the shape of a watchglass), and to various mathematical analogs thereof. The fringe width is given by β = y(n+1) − y(n) = (n+1)λD/a − nλD/a = λD/a.

Strictly speaking, visibility/brightness is a perceptual construct, i.e. it is subjective to the eye’s perception of light. This chapter discusses the visibility of Young’s interference fringes. In linear optical interferometers (like the Mach–Zehnder interferometer, Michelson interferometer, and Sagnac interferometer), interference manifests itself as intensity oscillations over time or space, also called fringes. Derivations for basic coherence formulas are also provided, which usually result in simple Fourier transform relationships. In a first step we need to correct the interferogram for the turbulence-induced modulation of the fringe pattern; then we can derive an expression for the coherence factor that does not depend on turbulence.

However, if the source of light is not strictly monochromatic but contains two nearby wavelengths, the condition for maximum intensity for both wavelengths is satisfied only for particular values of the path difference 2d cos θ. In the case of the Michelson interferometer, the intensity is given as a function of this path difference. In difference holographic interferometry (Bányász, István), denoting the phase changes recorded with the master and test exposures, a formula for the intensity distribution of the difference interferogram can be obtained. If you add all of these benefits up ($3,000 + $3,000 + $2,560), you get a total fringe benefit value of $8,560 every year. We derive an exact formula for the fringe visibility for this system. In Visibility of Young’s Interference Fringes, as the aperture widens, the difference between the maximum and minimum values becomes less. Fringe-parameter measurement is central to all problems involving coherence: fringe visibility (m) is calculated as m = (Imax − Imin)/(Imax + Imin), where Imax and Imin are the maximum and minimum intensities within the fringe pattern.

The bright fringe for n = 0 is known as the central fringe, and higher-order fringes are situated symmetrically about the central fringe; the maxima satisfy d sin θ = nλ (n = 0, ±1, ±2, …) and the minima satisfy Δ = (n + 1/2)λ (n = ±1, ±2, ±3, …). However, how the visibility of the Moiré fringe is influenced by the system parameters, such as the misalignment angle, still lacks investigation, although it closely relates to the signal-to-noise ratio of the image data. Presently one has to resort to laborious computer simulations to predict fringe visibility values of interferometers with polychromatic x-ray sources. The chapter is divided into three sections. The distinguishability D measures how precise the which-way information is, and V is the visibility of the interference fringe; this relation has proven useful in various experimental situations [13–16]. WPD (wave-particle duality) is similar to the uncertainty principle in that two complementary quantities cannot be attained simultaneously. When the aperture of the source is infinitely narrow, the first term of the product reduces to one because the Fourier transform of δ(x) is 1; in theory, to have the most favorable contrast, the slit source should be narrowed as much as possible.

Fringe visibility for TM light is cos²(27°) = 79%. Under these circumstances, the interferometric visibility is also known as the “Michelson visibility” or the “fringe visibility”. Recently, the basic concept of quantum coherence (or superposition) has gained a lot of renewed attention, after Baumgratz et al. Phase closure is the sum of the three phases in a triplet. 2×1 VCSEL arrays have been characterised using far-field fringe visibility. For values of d other than the maximum-intensity positions for both wavelengths, the two fringe patterns will be complementary and the minimum visibility will be zero, provided the intensities for both wavelengths are equal. A reader’s question: in a Young’s double slit experiment, the slits are separated by 0.5mm and the screen is placed 150 cm away.

On measuring fringe visibility in practice: “To measure the fringe visibility I need to know the maximal and minimal intensities of the image, and my problem is that I cannot calculate the intensities; they are always Imax = 1 and Imin = 0.” Section 6.2.4.2, The Slit Source – Finite Width. As λ for violet is least, the fringe nearest to the central achromatic fringe will be violet. An interference fringe is a bright or dark band caused by beams of light that are in phase or out of phase with one another. Far-field interference fringe visibility is used to estimate the degree of coherence between two coherently coupled laser cavities, and a visibility map is created to visualize the dependence of laser array coherence on the injection current bias. Neutron Ramsey fringe visibility in a first-order gradient can be calculated according to formulae from C. Abel et al., Physical Review A 99, 042112 (2019).
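The double-slit numbers quoted in this section (slits 0.5 mm apart, screen 150 cm away, and elsewhere wavelengths of 650 nm and 520 nm) make a convenient worked example. The sketch below (not part of the original question's solution) computes the fringe widths and the least distance from the central maximum at which bright fringes of the two wavelengths coincide:

```python
from math import gcd

# Young's double slit: fringe width beta = lambda * D / a, and the first
# screen position where bright fringes of the two wavelengths coincide
# (smallest integers with n1 * l1 = n2 * l2).
a = 0.5e-3              # slit separation (m)
D = 1.5                 # slit-to-screen distance (m)
l1, l2 = 650e-9, 520e-9

beta1, beta2 = l1 * D / a, l2 * D / a
print(f"fringe widths: {beta1 * 1e3:.2f} mm (650 nm) and {beta2 * 1e3:.2f} mm (520 nm)")

# Reduce the wavelength ratio 650:520 to lowest terms to find the orders.
g = gcd(650, 520)
n1, n2 = 520 // g, 650 // g        # n1 * 650 = n2 * 520 (both equal 2600)
y = n1 * l1 * D / a
print(f"orders n1={n1} (650 nm) and n2={n2} (520 nm) coincide at y = {y * 1e3:.1f} mm")
```

With these numbers the 4th bright fringe of 650 nm overlaps the 5th of 520 nm, 7.8 mm from the central maximum.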
Section 5.1 is an introduction to coherence theory with a simplified development. For this type of interference, the sum of the intensities (powers) of the two interfering waves equals the average intensity over a given time or space domain. The beam angle follows from sin⁻¹(248/1070); the corresponding full angle between the interfering beams is about 27 degrees. A phase change occurs where the fringe visibility goes through a zero point (Figure 1). The intensity is zero when δ is an odd multiple of π. The fringe visibility of the single particle is equal to 0. For the case when I1 = I2 we have maximum visibility; the field correlation ⟨E1*E2⟩ between the points P1 and P2 has various descriptive labels: “mutual intensity”, “mutual coherence function”, “complex degree of coherence”, “correlation function”, and “fringe parameter”. This paper briefly describes the various techniques that have been used in operational interferometers, along with their advantages and disadvantages. Two fundamentally different methods are temporal and spatial encoding.

This can be expressed more formally by defining the visibility as V = (Imax − Imin)/(Imax + Imin); this value lies between 0 and 1. As seen in the above example, the evaluation leads us to conclude that when a phase mask is used with unpolarized illumination, the fringe visibility will be affected negatively. Some of the light sources suitable for the Michelson interferometer are a sodium flame or a mercury arc. Also called the Michelson fringe visibility, the fringe visibility is defined in terms of the observed intensity maxima and minima in an interference pattern by $$V_M \equiv \frac{I_{\rm max}-I_{\rm min}}{I_{\rm max}+I_{\rm min}}$$. The coherence length can be measured using a Michelson interferometer: it is the optical path length difference of a self-interfering laser beam that corresponds to a specified drop in fringe visibility. The visibility of fringes in the case of a Michelson interferometer is maximum when the irradiances of the two beams are equal, which is evident from the two-beam interference equation; if the intensities are not equal, the minimum intensity will not be zero. Hence the region where fringes are visible is very narrow and hard to find with non-monochromatic light. A visibility map as a function of injection currents I1 and I2 is constructed. In terms of the amplitude V(z), the intensity is written as I(z) := |V(z)|²; the intensity is a dimensionless quantity that is proportional to the flux ⟨Sz⟩ and thus proportional to the signal that is measured with optical detectors. Over the averaging interval, $$\int_{-T}^{T} v(z,t)\,v^*(z,t)\,dt = v_0^2$$. Using the inverse Fourier transform (9.2), (9.3), the resultant pattern on the screen follows from the superposition of the S1 and S2 fringe systems, with fringe width β = λD/a. The correlated magnitude is calculated as −2.5 log₁₀(visibility). We discuss the consequences of our result for tests of environmental decoherence and of collapse models. (10) The visibility, V, is defined as the contrast of the fringe pattern. As the ratio of the beam intensities deviates from unity, fringe visibility decreases until the fringes are no longer easily detected (V < 0.2).
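The fall-off with beam-intensity ratio described above follows from the standard two-beam result Imax = (√I1 + √I2)² and Imin = (√I1 − √I2)², giving V = 2√(I1·I2)/(I1 + I2). A sketch (the formula is the standard textbook result, not quoted in the source):

```python
from math import sqrt

# Two-beam fringe visibility from the beam intensities:
# Imax = (sqrt(I1)+sqrt(I2))^2, Imin = (sqrt(I1)-sqrt(I2))^2,
# hence V = (Imax - Imin)/(Imax + Imin) = 2*sqrt(I1*I2)/(I1 + I2).
def visibility(i1, i2):
    return 2.0 * sqrt(i1 * i2) / (i1 + i2)

for ratio in (1, 2, 10, 100):
    print(f"I1/I2 = {ratio:>3}: V = {visibility(ratio, 1.0):.3f}")
```

Equal beams give V = 1, while a 100:1 intensity ratio gives V ≈ 0.198, just below the "no longer easily detected" threshold of 0.2 quoted above.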
For the two-particle interference analysis: (1) for the completely entangled state, the two-particle fringe visibility decreases as time increases and decays to zero asymptotically; (2) if the particles’ spins are initially in the singlet state … The inter-distance of the two particles is absent from, and does not change, the two-particle fringe visibility. Here pure-wavelength light sent through a pair of vertical slits is diffracted into a pattern of numerous vertical lines spread out horizontally on the screen. The least distance from the common central maximum to the point where the bright fringes due to both the wavelengths coincide is found by calculating the corresponding path difference and substituting into the appropriate formula to get the location of the point on the screen. A beam of light consisting of two wavelengths, 650nm and 520 nm, is used to obtain fringes on the screen.

The monochromatic fringe patterns (10) have an excellent contrast, since the intensity oscillates between 0 and 1. Visibility is analogous to loudness in the case of sound. (i) Angular separation β_θ = β/D = λ/d; it is independent of D, therefore the angular separation remains unchanged if the screen is moved away from the slits. In the Michelson interferometer, d is the distance between mirror M1 and the image M2′, and for maximum intensity at P, Δz = nλ (n = 0, ±1, ±2, …). As the value of d is altered, the two wavelengths do coincide over a considerable range, and there the fringe visibility is maximum. See also: Interference Pattern, Michelson Interferometer. Shao M. (1994) Fringe Visibility and Phase Measurements. In: Robertson J.G., Tango W.J. (eds), Very High Angular Resolution Imaging. International Astronomical Union.

Topic outline: 1. Huygens’ Wave Theory; 2. Wave front; 3. Reflection and Refraction of Wavefront; 4. Superposition of Waves; 5. Resultant Amplitude and Intensity; 6. Interference of Light; 7. Young’s Double Slit Experiment (YDSE); 8. Condition for Observing Interference; 9. Shifting of Fringe Pattern in YDSE; 10. Fringe Visibility (V); 11. Missing Wavelength in Front of One Slit in YDSE; 12. Thin Films; 13. …

Fringe visibility is used for quantifying how good the interference pattern being formed is, and it is given by the relation V = (Imax − Imin)/(Imax + Imin); hence the source will be perfectly monochromatic if the visibility is maximum and constant for different values of 2d cos θ. The fringe visibility falls off as the P1-P2 separation falls outside of the coherence area. The condition for these maxima (bright fringes) is a path difference equal to a non-integral (half-odd) multiple of the wavelength: a sin θ = (n + 1/2)λ, so ay/D = (n + 1/2)λ and y_n = (n + 1/2)λD/a. Calculate the visibility of the interference fringes: V = (Pmax − Pmin)/(Pmax + Pmin), where Pmax is the maximum probability and Pmin is the minimum probability that a … The interference fringe visibility is a common figure of merit in designs of x-ray grating-based interferometers. We have previously shown that the magnitude of coherence is approximately equal to the fringe visibility if the near fields are approximately equal [12]. A calculation of the fringe visibility in a laser-illuminated two-beam interferometer is presented and discussed; the laser is assumed to be oscillating in several adjacent axial modes whose intensities are determined by the Doppler-broadened emission line. When a monochromatic source of light is used, the minimum intensity of the fringes is zero, and the intensity is maximum when δ is an integral multiple of 2π.

Part 2: Match each item in column A (a. fringe contrast, b. fringe visibility, c. Rayleigh criterion, d. angular width and beam spreading of a diffraction maximum) with the related formula from column B (for example the N-slit intensity I = I₀ (sin β / β)² (sin Nα / sin α)², or the single-slit Fraunhofer diffraction minimum).

On the payroll sense of “fringe”: Fringe Benefit Formula = Number of Employees Doing Prevailing Wage Work × Dollar Amount of the Fringe Benefit Portion of the Wage × Hours Worked Per Employee Per Year × Payroll Burden (%). Finally, you get a total of 16 paid days off, which is valued at $2,560; if you add this amount to your yearly salary, you can get an idea of your total compensation.

Fringe visibility in difference holographic interferometry: István Bányász, Zoltán Füzessy and Ferenc Gyimesi, Technical University, Budapest. The correct formula for fringe visibility is (a) V = (Imax − Imin)/(Imax + Imin). With the TM visibility of cos²(27°) = 79% and full contrast for TE light, the total fringe visibility = (100% + 79%) / 2 = 90%.
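The phase-mask figures quoted above (79% for TM, averaged with 100% for TE under unpolarized illumination) can be reproduced directly. A sketch using the 27° angle as given in the text:

```python
from math import cos, radians

# Phase-mask fringe visibility as quoted in the text: TM light has
# visibility cos^2(27 deg); unpolarized light averages the full-contrast
# TE case (100%) with the TM case.
v_tm = cos(radians(27.0)) ** 2      # ~0.79
v_total = (1.0 + v_tm) / 2          # ~0.90
print(f"V_TM = {v_tm:.0%}, unpolarized total = {v_total:.0%}")
```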
Deriving the fringe visibility from an interferogram: expression 37 (click here) is the basis from which the coherence factor between the two beams will be derived. The phase is either π or zero. We can write a small mathematical equation for it as shown below; here a1 and a2 are the amplitudes. Because detectors in the visible/IR are energy detectors, the fringe amplitude and phase must be encoded in some manner. The Moiré fringe method in X-ray grating interferometry is characterized by its advantage of obtaining multi-contrast data through single-frame imaging. Analytical treatment: a generalization of the Huygens-Fresnel principle to broadband and narrowband cases (review Goodman, Fourier Optics, Section 3.8).
Fringe visibility (also called the "Michelson visibility" or fringe contrast) is a measure of the contrast of the bright fringes against the dark fringes in an interference pattern, and it is a common figure of merit in the design of x-ray grating-based interferometers and other two-beam instruments. It is defined as the ratio of the AC to DC components of the fringe intensity,

V = (Imax − Imin) / (Imax + Imin),

where Imax and Imin are the maximum and minimum intensities of the pattern. Here, if a1 and a2 are the amplitudes of the two interfering beams, Imax = (a1 + a2)² and Imin = (a1 − a2)²; when the amplitudes are equal the visibility is 1, and when they are not equal the minimum intensity is not zero and the contrast is reduced.

In Young's double-slit experiment (YDSE), a bright or dark band is caused by beams of light that are in phase or out of phase with one another. The condition for the n-th bright fringe (maximum intensity at P) is d sin θ = nλ (n = 0, ±1, ±2, ±3, …), the n = 0 maximum being known as the central fringe; dark fringes occur where the path difference is a non-integral (half-integral) multiple of the wavelength, (n + 1/2)λ (n = ±1, ±2, ±3, …). The fringe width is β = y_{n+1} − y_n = (n+1)λD/d − nλD/d = λD/d, and the angular fringe width is θ = β/D = λ/d.

A source would be perfectly monochromatic if the visibility were maximum and constant; for a laser oscillating in several adjacent axial modes whose intensities are not equal, the minimum visibility will not be zero. In a Michelson interferometer the visibility changes with the optical path difference 2d cos θ: with non-monochromatic light (e.g. a sodium flame or a mercury arc, typical sources for the instrument) the range over which fringes are visible is very narrow and hard to find, and the visibility decreases as the time delay increases, decaying to zero asymptotically at a rate determined by the Doppler-broadened emission line of the source. Basic coherence formulas, which usually lead to simple Fourier transform relationships, describe this behaviour, and a generalization of the Huygens-Fresnel principle applies to both broadband and narrowband cases (see Goodman, Fourier Optics, section 3.8). In practice one often has to resort to laborious computer simulations to predict the fringe visibility values of interferometers with polychromatic x-ray sources.

In two-particle interferometry, the two-particle fringe visibility is maximal for the completely entangled state, in which case the single-particle visibility is equal to zero; such fringe-visibility measurements provide tests of environmental decoherence and of collapse models.
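To make the definition concrete, the visibility can be computed directly from the maximum and minimum intensities, or from the two beam amplitudes. This is a small illustrative sketch (the function names are mine, not from the text):

```python
def visibility(i_max, i_min):
    # V = (Imax - Imin) / (Imax + Imin)
    return (i_max - i_min) / (i_max + i_min)

def visibility_from_amplitudes(a1, a2):
    # For two beams of amplitudes a1 and a2:
    #   Imax = (a1 + a2)^2,  Imin = (a1 - a2)^2,
    # which simplifies algebraically to V = 2*a1*a2 / (a1^2 + a2^2).
    return visibility((a1 + a2) ** 2, (a1 - a2) ** 2)
```

With equal amplitudes the visibility is 1; e.g. amplitudes 3 and 1 give Imax = 16, Imin = 4, and hence V = 12/20 = 0.6.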
2021-04-23 06:05:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5616046190261841, "perplexity": 2253.7496847733128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039601956.95/warc/CC-MAIN-20210423041014-20210423071014-00551.warc.gz"}
https://lucatrevisan.wordpress.com/tag/grothendieck-inequality/
# Beyond Worst-Case Analysis: Lecture 11

Scribed by Neng Huang

In which we use the SDP relaxation of the infinity-to-one norm and the Grothendieck inequality to give an approximate reconstruction of the stochastic block model.

1. A Brief Review of the Model

First, let's briefly review the model. We have a random graph ${G = (V, E)}$ with an unknown partition of the vertices into two equal parts ${V_1}$ and ${V_2}$. Edges across the partition are generated independently with probability ${q}$, and edges inside the partition are generated independently with probability ${p}$. To abbreviate the notation, we let ${a = pn/2}$, which is the average internal degree, and ${b = qn/2}$, which is the average external degree. Intuitively, the closer ${a}$ and ${b}$ are, the more difficult it is to reconstruct the partition. We assume ${a > b}$, although there are also similar results in the complementary model where ${b}$ is larger than ${a}$. We also assume ${b > 1}$ so that the graph is not almost empty.

We will prove the following two results, the first of which will be proved using the Grothendieck inequality.

1. For every ${\epsilon > 0}$, there exists a constant ${c_\epsilon}$ such that if ${a - b > c_\epsilon\sqrt{a+b}}$, then we can reconstruct the partition up to less than ${\epsilon n}$ misclassified vertices.
2. There exists a constant ${c}$ such that if ${a-b \geq c \sqrt{\log n}\sqrt{a+b}}$, then we can reconstruct the partition exactly.

We note that the first result is essentially tight in the sense that for every ${\epsilon > 0}$, there also exists a constant ${c_\epsilon'}$ such that if ${a-b < c_\epsilon'\sqrt{a+b}}$, then it will be impossible to reconstruct the partition even if an ${\epsilon}$ fraction of misclassified vertices is allowed. Also, the constant ${c_\epsilon}$ will go to infinity as ${\epsilon}$ goes to 0, so if we want more and more accuracy, ${a-b}$ needs to be a bigger and bigger constant times ${\sqrt{a+b}}$.
When the constant becomes ${O(\sqrt{\log n})}$, we will get an exact reconstruction as stated in the second result.
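As an aside, the model is easy to sample from directly. The sketch below (not part of the lecture; names are mine) generates such a random graph with the stated edge probabilities p = 2a/n inside a part and q = 2b/n across parts:

```python
import random

def sbm(n, a, b, seed=0):
    """Sample a two-community stochastic block model graph.

    Vertices 0..n/2-1 form V1 and n/2..n-1 form V2.  Each edge inside a
    part appears independently with probability p = 2a/n, each edge
    across the parts with probability q = 2b/n.  Returns the partition
    labels and the edge list.
    """
    rng = random.Random(seed)
    p, q = 2 * a / n, 2 * b / n
    half = n // 2
    part = [0] * half + [1] * half
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            prob = p if part[u] == part[v] else q
            if rng.random() < prob:
                edges.append((u, v))
    return part, edges
```

For a > b the sampled graph has noticeably more internal than external edges, which is the signal the SDP relaxation exploits.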
2018-12-11 04:07:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9550865292549133, "perplexity": 109.7901137032579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823565.27/warc/CC-MAIN-20181211040413-20181211061913-00307.warc.gz"}
http://mathhelpforum.com/geometry/124269-centroid-quadrilateral.html
# Math Help - Centroid of a Quadrilateral?

1. ## Centroid of a Quadrilateral?

Hi, I am writing a program where I need to work out the centroid of a quadrilateral given the coordinates of its 4 corners. I know this can be done graphically quite easily but I was wondering if there is a formula I can use to code it?

Split the quadrilateral into two triangles with a diagonal; the centroid of each triangle is the average of its three vertices. Once you have the centroids of the two triangles, the centroid of the quadrilateral is the weighted average of the two points, weighted by the area of the triangles. That is, if $P_1= (x_1, y_1, z_1)$ and $A_1$ are the centroid and area, respectively, of the first triangle, and $P_2= (x_2, y_2, z_2)$ and $A_2$ of the second triangle, then the centroid of the quadrilateral is $\frac{A_1P_1+ A_2P_2}{A_1+ A_2}$ $= \left(\frac{A_1x_1+ A_2x_2}{A_1+ A_2}, \frac{A_1y_1+ A_2y_2}{A_1+ A_2}, \frac{A_1z_1+ A_2z_2}{A_1+ A_2}\right)$
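If it helps, here is a small sketch of that computation in code, written for 2D points (the function names are my own, and the quadrilateral is assumed to be given with its vertices in order):

```python
def triangle_centroid_area(p1, p2, p3):
    # Centroid is the average of the vertices; area from the cross product.
    cx = (p1[0] + p2[0] + p3[0]) / 3.0
    cy = (p1[1] + p2[1] + p3[1]) / 3.0
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    return (cx, cy), area

def quad_centroid(a, b, c, d):
    # Split along the diagonal a-c into triangles abc and acd,
    # then take the area-weighted average of the two centroids.
    (x1, y1), a1 = triangle_centroid_area(a, b, c)
    (x2, y2), a2 = triangle_centroid_area(a, c, d)
    return ((a1 * x1 + a2 * x2) / (a1 + a2),
            (a1 * y1 + a2 * y2) / (a1 + a2))
```

For the unit square this gives (0.5, 0.5), as expected.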
2014-08-01 11:28:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.883779764175415, "perplexity": 124.55420770952577}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274967.3/warc/CC-MAIN-20140728011754-00195-ip-10-146-231-18.ec2.internal.warc.gz"}
http://ocw.mit.edu/ans7870/18/18.013a/textbook/MathML/chapter07/section04.xhtml
## 7.4 Extrapolating the Sequence of Answers

Yes! By extrapolating it! Extrapolation is a procedure for anticipating where a sequence is going based on a few terms, and creating a new sequence that consists at each stage of your best guess at the answer given the information in the terms of the sequence so far.

A nifty trick for doing this, which can eliminate terms in a sequence that decline by a fixed factor from term to term, is as follows. Suppose, for example, we have a sequence and want to eliminate from it terms which decline by a factor of 4 from term to term in the sequence. Then if you take 4 times any particular term in the sequence and subtract the previous term, any contribution which goes down by a factor of 4 from term to term will cancel out between the two; of course you will get $4 - 1$ or $3$ times the right answer. Thus in a sequence $s_1, s_2, \dots$ in each of which there are error terms which decrease by a factor of 4, the new sequence whose $j$-th term is $\frac{4 s_j - s_{j-1}}{3}$ will kill off the error term that decreases by a factor of 4. (In the general case, in which the leading error term in $s_j$ declines by a factor of $k$, the analogous formula is $\frac{k s_j - s_{j-1}}{k - 1}$.)

In our case, we can do the following. Compute the symmetric approximation to the derivative, and let $d$ decrease by a factor of two from row to row. Then the leading error in the symmetric approximation will decrease by a factor of 4 from row to row. If we apply the extrapolation formula above to the quadratic approximation, we eliminate that leading term, and the leading term that is left will go down by a factor of 16 (coming from the $d^5$ term in $f(x + d)$). Is that the best we can do? No! We can use the $k = 16$ extrapolation formula to replace 16 here by 64, and then use the $k = 64$ extrapolation to do even better.
A nice feature of this is that every step, from forming the symmetric approximation to producing the extrapolations indicated, is very easy to do on a spreadsheet, and need only be done in one row and copied down into subsequent rows. Another nice feature is that if you set this up sensibly, you can change the argument at which you are differentiating by changing only one entry in the spreadsheet. You can change the function being differentiated by entering the new function exactly once and copying it appropriately. Everything else, including extrapolations, need be performed only once, and it will work for almost all standard functions.

Exercises:

7.1 Set up a spreadsheet differentiator as described in the discussion above, that uses the symmetric form of difference with two levels of extrapolation.

7.2 What value of $d$ do you need to get the computation as accurate as your machine will show in doing the derivative of $(\sin x)^2$ at $x = 2$?

7.3 Make a spreadsheet that keeps $d$ fixed (at say .001) and allows $x$ to vary. Use it to plot both $f$ and $f'$ vs $x$ for $f = \sin x$ in the range -3.5 to 3.5, using the $xy$ chart feature of the spreadsheet.

7.4 Can you find a function for which this method fails? What and where? Can you fix it?
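The same scheme is just as easy to try outside a spreadsheet. Here is a small sketch (my own naming, not from the text) of the symmetric approximation with two levels of extrapolation, halving $d$ from term to term:

```python
import math

def sym_diff(f, x, d):
    # Symmetric difference quotient; leading error term is O(d^2).
    return (f(x + d) - f(x - d)) / (2 * d)

def extrapolate(seq, k):
    # Kill the error term that shrinks by a factor of k from term to term:
    # new_j = (k*s_j - s_{j-1}) / (k - 1).
    return [(k * b - a) / (k - 1) for a, b in zip(seq, seq[1:])]

def derivative(f, x, d=0.1, levels=2):
    # Halve d from row to row, then extrapolate with k = 4, 16, 64, ...
    seq = [sym_diff(f, x, d / 2 ** i) for i in range(levels + 1)]
    k = 4
    for _ in range(levels):
        seq = extrapolate(seq, k)
        k *= 4
    return seq[-1]
```

Even with the coarse starting step $d = 0.1$, two levels of extrapolation already give the derivative of $(\sin x)^2$ at $x = 2$ to better than $10^{-7}$.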
2015-02-01 04:07:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7517265677452087, "perplexity": 247.38524520763772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121967958.49/warc/CC-MAIN-20150124175247-00171-ip-10-180-212-252.ec2.internal.warc.gz"}
https://unix.stackexchange.com/questions/126086/how-to-make-a-bash-script-wait-till-a-pendrive-is-mounted
# How to make a bash script wait till a pendrive is mounted?

I have a bash script which has a line

cd /run/media/Username/121C-E137/

This script is triggered as soon as the pen-drive is recognized by the system, but this line should be executed only after the mounting process is complete. As of now, what happens is that this line is executed before the pen-drive is mounted, so it returns an error that the directory is invalid.

• Depends on your system. If you use systemd, you can write a udev rule using SYSTEMD_WANTS... It's documented in man systemd.device. – jasonwryan Apr 23 '14 at 8:14
• The simplest solution would be to let your script do the mounting. What is causing the mounting now? – Gilles Apr 23 '14 at 21:22

PENDRIVE='/run/media/Username/121C-E137'
while [ ! -d "$PENDRIVE" ]; do
    sleep 10
done
cd "$PENDRIVE"
2019-10-16 04:36:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2497575283050537, "perplexity": 2217.911505628072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986664662.15/warc/CC-MAIN-20191016041344-20191016064844-00553.warc.gz"}
https://codereview.stackexchange.com/questions/273448/knuth-up-arrow-notation
# Knuth up-arrow notation

I have implemented Knuth up-arrow notation in Python:

from functools import lru_cache

@lru_cache
def kuan(a, b, arrows):
    if arrows == 1:
        return a ** b
    res = a
    for i in range(b):
        res = kuan(a, res, arrows - 1)
    return res

This can calculate kuan(3, 3, 1) pretty quickly. But it slows down for kuan(3, 3, 2). Any suggestions for improving the performance?

• You know you are trying to compute very, very large numbers, right? Python is not meant for that. Nor is any language, by increasing the number of arrows Jan 27 at 15:37
• I'm trying to make a simple implementation. I do not expect it to handle super large numbers like kuan(100, 100, 100). The problem is, it is very slow for (relatively) small inputs like kuan(3, 3, 2) Jan 27 at 15:39
• That's why I posted it here. Jan 27 at 15:43
• kuan(a,b,1) uses a completely different method than higher arrows, so saying you can calculate that pretty quickly doesn't help. Have you checked your validity elsewhere? By my math, your kuan(3,3,2) has about 3.3e12 digits. I wouldn't call that relatively small. You seem to be ending up with an extra exponentiation. Jan 28 at 3:45
• @Teepeemm changing from for i in range(b): to for _ in range(1, b): fixes the code. Feel free to leave this as an answer =) Jan 28 at 8:28

There is not much code to review here, but some titbits can be mentioned

### Style

• The modern way is to use cache over lru_cache unless you need a cache of a very specific size.
• For one-off imports I always include the package; however, I've seen different opinions on this, especially for standard libraries.
• Include typing hints.
• Include a docstring explaining what your code does.
• Include a if __name__ == "__main__" guard if you want to run more examples.
• Better name. It took me far too long to understand that kuan = Knuth's up arrow notation. You are not paid by the number of characters you write; feel free to be a bit verbose.
### Implementation

I have no idea what implementation you are using? Looking at Wikipedia it gives me this $$a\uparrow^n b= \begin{cases} a^b, & \text{if }n=1; \\ 1, & \text{if }n>1\text{ and }b=0; \\ a\uparrow^{n-1}(a\uparrow^{n}(b-1)), & \text{otherwise } \end{cases}$$ which, when implemented correctly, runs in milliseconds.

import functools

@functools.cache
def arrow(a: int, b: int, arrows: int) -> int:
    """Evaluates numbers using Knuth's up-arrow notation

    Source: http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation

    arrow(2, 3, 1) = 2 * 2 * 2 = 8
    arrow(2, 3, 2) = arrow(2, arrow(2, 2, 1), 1)
                   = arrow(2, 4, 1)
                   = 2 * 2 * 2 * 2 = 2 ^ 4 = 16
    arrow(2, 3, 3) = arrow(2, arrow(2, 2, 2), 2)
                   = arrow(2, arrow(2, arrow(2, 1, 1), 1), 2)
                   = arrow(2, arrow(2, 2, 1), 2)
                   = arrow(2, 2 * 2, 2)
                   = arrow(2, 4, 2)
                   = arrow(2, arrow(2, arrow(2, 2, 1), 1), 1)
                   = arrow(2, arrow(2, 4, 1), 1)
                   = arrow(2, 2 * 2 * 2 * 2, 1)
                   = arrow(2, 16, 1)
                   = 2 * ... * 2 (16 times) = 2 ^ 16 = 65536

    Example:
    >>> arrow(2, 3, 1)
    8
    >>> arrow(2, 3, 2)
    16
    >>> arrow(2, 3, 3)
    65536
    >>> arrow(3, 2, 3)
    7625597484987
    """
    if arrows == 1 or b == 0:
        return a ** b
    return arrow(
        a=a,
        b=arrow(a, b - 1, arrows),
        arrows=arrows - 1,
    )

if __name__ == "__main__":
    import doctest
    doctest.testmod()
    # print(arrow(3, 2, 3))

• One problem, cache is specific to 3.9+, and I'm using 3.8. Jan 27 at 19:51
• @BgilMidol Benchmark your code on logical inputs. I am actually very unsure if cache provides any tangible / measurable benefits in the number range we are able to test. E.g. numbers grow so quickly that for small inputs our cache might be slower. Again, feel free to test this Jan 27 at 20:16

When arrows==1, you're using a different approach, so saying that it's quick in that case doesn't help. You want to test against a larger arrow value. Since k(3,3,2) is slowing down, that means we need to decrease a and/or b. But looking at your approach, you seem to have an extra exponentiation.
We should have:

a ^n 1 == a ^(n-1) (a ^n 0) == a ^(n-1) 1 == ... == a ^ 1 == a
# and
a ^n b == a ^(n-1) (a ^n (b-1))
       ...
       == a ^(n-1) a ^(n-1) ... a ^(n-1) (a ^n 0)   # a occurs b+1 times
       == a ^(n-1) a ^(n-1) ... a ^(n-1) 1          # a occurs b times
       == a ^(n-1) a ^(n-1) ... a                   # a occurs b times

But you have

kuan(a,1,arrows) == kuan(a,a,arrows-1)   # can get big
# and
kuan(a,b,arrows) == kuan(a,kuan(a,kuan(a,...,kuan(a,a,arrows-1)...,arrows-1),arrows-1),arrows-1)
# kuan occurs b times, a occurs b+1 times

This is easily fixed by using range(b-1) (or range(1,b) as N3buchadnezzar suggested).

Also note that a common convention in Python is that ignorable variables such as i are often named _ instead. This helps the coder realize that it's not a mistake that i doesn't appear in the loop.
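Applying the fix suggested in the comments (loop b-1 times instead of b), the original kuan becomes a sketch like this, which agrees with the doctest values of the other answer:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def kuan(a, b, arrows):
    # Fixed version: the loop runs b-1 times, so a appears b times in
    # total (assumes b >= 1; the b == 0 case is not handled here).
    if arrows == 1:
        return a ** b
    res = a
    for _ in range(b - 1):
        res = kuan(a, res, arrows - 1)
    return res
```

With the off-by-one removed, kuan(2, 3, 2) is 16 rather than 65536, and kuan(3, 2, 3) = 3^27 = 7625597484987.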
2022-05-17 14:21:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.625898003578186, "perplexity": 4524.336657471478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517485.8/warc/CC-MAIN-20220517130706-20220517160706-00504.warc.gz"}
http://mathhelpforum.com/advanced-algebra/189241-prime-field-order.html
Math Help - prime field and order

1. prime field and order

Hi, I need some help with this question. Let $F=F_p$ be a field and $V=F_p^n$.

a. The number of bases of $V$ for $n=2$ is equal to the order of $GL_2 (F)$
b. Prove the order of $GL_2 (F)$ is $p(p+1)(p-1)^2$.
c. Prove the order of $SL_2 (F)$ is $p(p+1)(p-1)$.

Working: If $v_1,...,v_n$ is a basis of $V$ then there is an invertible matrix with these vectors as columns. Note that $v_1$ can be any non-zero vector, then $v_2$ can be any vector in $V$ that is not on the line spanned by $v_1$, and the same for $v_3$ to $v_n$. So there are $p^n-1$ choices for $v_1$, $p^n-p$ choices for $v_2$, $p^n-p^2$ choices for $v_3$, etc. So the order of $GL_n (F_p)$ is $\prod_{i=0}^{n-1} (p^n-p^i)$.

So this gives me the answer to part b, but how do I work out parts a and c?

2. Re: prime field and order

a) you need a bijection from the set of all bases for V to GL2(F). you already have a mapping (you found one in part (b)), so you just have to show that this mapping is a bijection.

c) suppose A is in GL2(F). how many possible values can det(A) have? can you partition GL2(F) into cosets using this information?

3. Re: prime field and order

a) define a map $\phi : GL_2(F_p) \to B$ where $B$ is the set of all bases of $V$. So we choose a basis of $V$, $e_1,...,e_n$, and then for $T \in GL_2(F_p)$ we define $\phi(T) = Te_1,...,Te_n$. And since T is non-singular, $\phi(T) \in B$. But for some stupid reason at the moment I am coming up blank on where to go from here to show that there is a bijection between $B$ and $GL_2(F_p)$.

c) So $SL_2(F_p)$ has $2\times 2$ matrices with determinant $1$.
So I can define a map $det: GL_2(F_p) \to F_p^{\times}$ which sends $A \mapsto det(A)$. Using $det(A \cdot B)=det(A) \cdot det(B)$ and $A= \begin{bmatrix}a & 0 \\0 & 1 \end{bmatrix} \mapsto det(A)=a$, this is a surjective group homomorphism with $ker(det)=SL_2(F_p)$.

Thus $[GL_2(F_p) : SL_2(F_p)]=|im(det)|=p-1$

Thus, $|SL_2(F_p)|=\frac{|GL_2(F_p)|}{p-1}=p(p+1)(p-1)$

Does this make sense? Is there a better way to do this part? Thanks for your response.

4. Re: prime field and order

part (c) looks good. for (a) you want to show φ is bijective. is φ onto? that is, given a basis {v1,v2,..,vn} in B, can we find (at least one) invertible linear map T with T(ej) = vj? put another way, can the map that sends ej-->vj be extended to a linear map on V? why is this (extended) map necessarily invertible (and thus in GL2(Fp))? to show φ is injective...suppose φ(A) = φ(B). then A(ej) = B(ej), for each basis vector ej, so... (technically, for this to work, you should use "ordered bases" for V. why?)

5. Re: prime field and order

Thanks, but for a I am still having trouble finding an invertible linear map T such that T(ej)=vj. And I don't get why 'ordered bases' is a must. Thanks

6. Re: prime field and order

look, we have a "chosen" basis {e1,e2,...,en} for V, and if {v1,v2,...vn} is any element of B, we can define a bijection from {e1, e2,...,en} to {v1,v2,...,vn}; let's call this bijection f. define a linear map T:V-->V by setting T(a1e1+a2e2+....+anen) = a1f(e1)+a2f(e2)+....+anf(en) (can you see that this map is automatically linear?) note that T(ej) = T(0e1+0e2+...+1ej+...+0en) = 0f(e1)+0f(e2)+...+1f(ej)+...+0f(en) = f(ej) = vj. and T^-1 is the map that sends vj-->ej, that is: T^-1(a1v1+a2v2+...+anvn) = a1e1+a2e2+...+anen, so T has an inverse, so it is in GL2(Fp). so φ(T) = {T(e1),...,T(en)} = {v1,...vn}, thus φ is onto. now suppose φ(A) = φ(B), for A,B in GL2(Fp). this means {A(e1),...,A(en)} = {B(e1),...,B(en)}.
and here is where using "ordered bases" is helpful: if we are using ordered bases, we have that A(ej) = B(ej) for every j. hence A = B, so φ is injective (if we don't use ordered bases, then we could have φ(A) = {v2,v1,...,vn} = {v1,v2,...,vn} = φ(B) which complicates things. ultimately, we're still just dealing with linear combinations of the basis elements, it just makes the statement of what we are trying to prove somewhat messier).
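As a quick sanity check on parts (b) and (c), one can count the matrices by brute force for small p (an illustrative sketch, not part of the proofs; names are mine):

```python
from itertools import product

def det2(m, p):
    # Determinant of a 2x2 matrix given as a flat tuple (a, b, c, d) over F_p.
    a, b, c, d = m
    return (a * d - b * c) % p

def count_groups(p):
    # Count invertible 2x2 matrices over F_p (GL_2), and those of
    # determinant 1 (SL_2), by enumerating all p^4 matrices.
    gl = sl = 0
    for m in product(range(p), repeat=4):
        d = det2(m, p)
        if d != 0:
            gl += 1
            if d == 1:
                sl += 1
    return gl, sl
```

For p = 3 this gives 48 invertible matrices and 24 of determinant 1, matching p(p+1)(p-1)^2 = 48 and p(p+1)(p-1) = 24.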
2014-07-26 07:25:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 45, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9552178978919983, "perplexity": 601.2080351956087}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997895170.20/warc/CC-MAIN-20140722025815-00111-ip-10-33-131-23.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/148385/correlation-coefficients-with-unequal-data-sets?answertab=votes
# Correlation Coefficients with Unequal Data Sets

I would like to determine the correlation coefficients of two time series of data with unequal numbers of entries, to determine how often they move together. I am performing the following computation but would like input on the effectiveness of my method and how it may be improved.

I seek to determine the correlation between two sets of data. For example:

dataset1                  dataset2
[ timestamp | value ]     [ timestamp | value ]
  8:00:00      10           8:00:00      3
  8:00:01      7            8:00:03      4
  8:00:02      2            8:00:04      12
  8:00:03      7            8:00:05      7
  8:00:04      10           8:00:07      9
  ...
  9:43:00      10           9:43:01      3

Then I perform a standard Pearson correlation coefficient:

CORREL(dataset1.values, dataset2.values)

which provides me with a correlation coefficient. Because of the nature of Pearson's correlation coefficient, I am not sure that this coefficient is providing an accurate representation of how each data point in the series moves together second-to-second. Any feedback on my methods is appreciated. Thank you!
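One common way to handle the unequal timestamps (a suggestion, not what the CORREL call above does) is to first align the two series on their shared timestamps and then correlate only the matched pairs. A minimal sketch, with each series represented as a dict mapping timestamp to value:

```python
from math import sqrt

def pearson(xs, ys):
    # Standard Pearson correlation of two equal-length value lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def aligned_correlation(series1, series2):
    # Keep only timestamps present in both series, then correlate the
    # matched values; unmatched observations are simply dropped.
    common = sorted(set(series1) & set(series2))
    xs = [series1[t] for t in common]
    ys = [series2[t] for t in common]
    return pearson(xs, ys)
```

Dropping unmatched observations is the simplest choice; interpolating or resampling both series onto a common grid before correlating is an alternative when gaps are large.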
2014-07-23 22:12:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1732485592365265, "perplexity": 1658.838147275702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883858.16/warc/CC-MAIN-20140722025803-00115-ip-10-33-131-23.ec2.internal.warc.gz"}
https://math.tecnico.ulisboa.pt/seminars/pe/?action=show&id=1818
# Probability and Statistics Seminar

### Robust estimators in Generalized Partially Linear Models

Semiparametric models contain both a parametric and a nonparametric component. Sometimes the nonparametric component plays the role of a nuisance parameter. The aim of this talk is to consider semiparametric versions of the generalized linear models where the response $y$ is to be predicted by covariates $({\bf x},t)$, where ${\bf x}\in\mathbb{R}^{p}$ and $t\in\mathbb{R}$. It will be assumed that the conditional distribution of $y|({\bf x},t)$ belongs to the canonical exponential family $\exp\left[y\theta({\bf x},t)-B\left(\theta({\bf x},t)\right)+C(y)\right]$, for known functions $B$ and $C$. The generalized linear model (McCullagh and Nelder, 1989), which is a popular technique for modelling a wide variety of data, assumes that the mean is modelled linearly through a known link function, $g$, i.e., $g(\mu\left({\bf x},t\right))=\theta({\bf x},t)=\beta_{0}+{\bf x}^T{\bf\beta}+\alpha t\;.$

In many situations, the linear model is insufficient to explain the relationship between the response variable and its associated covariates. A natural generalization, which suffers from the curse of dimensionality, is to model the mean nonparametrically in the covariates. An alternative strategy is to allow most predictors to be modeled linearly while one or a small number of predictors enter the model nonparametrically. This is the approach we will follow, so that the relationship will be given by the semiparametric generalized partially linear model $$\mu\left({\bf x},t\right)=E\left(y|({\bf x},t)\right)=H\left(\eta(t)+{\bf x}^T{\bf\beta}\right)\qquad(\text{GPLM})$$ where $H=g^{-1}$ is a known link function, ${\bf\beta}\in\mathbb{R}^{p}$ is an unknown parameter and $\eta$ is an unknown continuous function.

Severini and Wong (1992) introduced the concept of generalized profile likelihood, which was later applied to this model by Severini and Staniswalis (1994).
In this method, the nonparametric component is viewed as a function of the parametric component, and root--$n$ consistent estimates for the parametric component can be obtained when the usual optimal rate for the smoothing parameter is used. Such estimates fail to deal with outlying observations. In a semiparametric setting, outliers can have a devastating effect, since the extreme points can easily affect the scale and the shape of the function estimate of $\eta$, leading to possibly wrong conclusions on $\beta$. Robust procedures for generalized linear models have been considered among others by Stefanski, Carroll and Ruppert (1986), Künsch, Stefanski and Carroll (1989), Bianco and Yohai (1995), Cantoni and Ronchetti (2001), Croux and Haesbroeck (2002) and Bianco, García Ben and Yohai (2005). The basic ideas from robust smoothing and from robust regression estimation have been adapted to deal with the case of independent observations following a partly linear regression model with $g(t)=t$; we refer to Gao and Shi (1997), Bianco and Boente (2004), and He, Zhu and Fung (2002).

In this talk, we will first recall the classical approach to generalized partly linear models. The sensitivity to outliers of the classical estimates for these models is good evidence that robust methods are needed. The problem of obtaining a family of robust estimates was first considered by Boente, He and Zhou (2006). However, their procedure is computationally expensive. We will introduce a general three--step robust procedure to estimate the parameter ${\bf\beta}$ and the function $\eta$, under a generalized partly linear model (GPLM), that is easier to compute than the one introduced by Boente, He and Zhou (2006). It is shown that the estimates of ${\bf\beta}$ are root--$n$ consistent and asymptotically normal. Through a Monte Carlo study, we compare the performance of these estimators with that of the classical ones.
In addition, we study the sensitivity of the estimators through their empirical influence function. A robust procedure to choose the smoothing parameter is also discussed. We will briefly discuss the generalized partially linear single-index model, which generalizes the previous one since the independent observations are such that $y_{i}|\left({{\bf x}_{i},t_{i}}\right)\sim F\left(\cdot,\mu_{i}\right)$ with $\mu_{i}=H\left(\eta({\bf\alpha}^T{\bf t}_{i})+{\bf x}_{i}^T{\bf\beta}\right)$, where now ${\bf t}_{i}\in\mathbb{R}^{q}$, ${\bf x}_{i}\in\mathbb{R}^{p}$ and $\eta:\mathbb{R}\to\mathbb{R}$, ${\bf\beta}\in\mathbb{R}^{p}$ and ${\bf\alpha}\in\mathbb{R}^{q}$ ($\|{\bf\alpha}\|=1$) are the unknown parameters to be estimated. Two families of robust estimators are introduced which turn out to be consistent and asymptotically normally distributed. Their empirical influence function is also computed. The robust proposals improve the behavior of the classical ones when outliers are present. This is joint work with Daniela Rodriguez.
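The practical point of the talk, that classical estimates break down under contamination while M-type procedures resist it, is easy to demonstrate in the simplest special case. The sketch below is only illustrative and is not the three-step GPLM procedure of the talk: it fits a no-intercept linear model (the identity-link case with no nonparametric component) by ordinary least squares and by a Huber-type iteratively reweighted scheme; the tuning constant c = 1.345 and the MAD-based scale are assumed conventional choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + 0.1 * rng.normal(size=n)   # true slope beta = 2
outliers = np.argsort(x)[-20:]           # 10% gross outliers at high leverage
y[outliers] = -100.0

def ols_slope(x, y):
    """Classical least-squares slope (no intercept)."""
    return np.sum(x * y) / np.sum(x * x)

def huber_slope(x, y, c=1.345, iters=100):
    """Huber M-estimate of the slope via iteratively reweighted least squares."""
    b = np.median(y / x)                                  # robust starting value
    for _ in range(iters):
        r = y - b * x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12         # MAD scale estimate
        w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))  # Huber weights
        b = np.sum(w * x * y) / np.sum(w * x * x)
    return b

b_ols = ols_slope(x, y)    # dragged far from 2 by the contaminated points
b_rob = huber_slope(x, y)  # stays close to 2
```

The reweighting downweights the large residuals, so the 10% of contaminated points barely influence the robust fit, while the least-squares slope is pulled far from the true value.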
https://fulbright.uark.edu/departments/math/research/spring-lecture-series/spring-lecture-series-archives/38th-spring-lecture-series.php
# 38th Spring Lecture Series

"Extension and Interpolation." A series of five lectures by Charles Fefferman (Princeton University). April 4-6, 2013

### Participants

Gunnar Carlsson (Stanford University), Arie Israel (Courant Institute), Bo'az Klartag (Tel-Aviv University), Garving Luli (Yale University), Hariharan Narayanan (University of Washington), Pavel Shvartsman (Technion-Israel Institute of Technology), Amit Singer (Princeton University), Rachel Ward (University of Texas), Hau-tieng Wu (Stanford University), Nahum Zobin (College of William and Mary)

### 2013 Organizers

Phil Harrington (University of Arkansas), Loredana Lanzani (University of Arkansas)

The University of Arkansas Spring Lecture Series are conferences organized every spring by the Department of Mathematical Sciences of the University of Arkansas. Each conference is focused on a specific topic chosen among the current leading research areas in Mathematics; a principal lecturer delivers a short, five-lecture course and selects a number of specialists who are invited to give talks on subjects closely related to the topic of the conference. Short talks by young Ph.D.s and finishing graduate students are solicited to complement the conference. Each Lecture Series has grown into an ideal opportunity for specialists and young researchers to meet and exchange ideas about topics at the forefront of modern mathematics. The Spring Lectures are usually sponsored by the NSF jointly with the University of Arkansas. The proceedings of several conferences have appeared in the series "University of Arkansas Lecture Notes in the Mathematical Sciences," published by John Wiley & Sons.

Murat Akman (University of Kentucky) Title: Dimension of a certain measure in the plane Abstract: In this talk I will discuss the Hausdorff dimension of a measure related to a positive weak solution of a certain partial differential equation in a simply connected domain in the plane.
Our work generalizes work of Lewis and coauthors when the measure is $p$-harmonic and also, for $p=2$, the well known theorem of Makarov regarding the Hausdorff dimension of harmonic measure relative to a point in a simply connected domain. This is joint work with John L. Lewis. William O. Bray (Missouri State University) Title: Growth & Integrability of Fourier Transforms Abstract: In Euclidean space we discuss estimates of the following form: $$\left(\int_{\mathbb{R}^n}\min\bigl\{1,(t|\xi|)^{2q}\bigr\}\,\bigl|\hat{f}(\xi)\bigr|^{q}\,d\xi\right)^{1/q}\leq c_p\,\|M_t f-f\|_p;$$ here $1<p\leq 2$, $q$ is the conjugate Hölder index, $f\in L^p(\mathbb{R}^n)$, and $\hat{f}$ is its Fourier transform. As such the estimate provides small and large $|\xi|$ control over the Fourier transform in terms of a modulus of continuity defined via the spherical mean operator $M_t f$. Our focus will be on using such estimates to obtain integrability results for the Fourier transform in the vein of classical work of Bernstein (for Fourier series) and Titchmarsh (for the 1-d Fourier transform). In the talk we will also discuss variations on this theme found in recent works of Ditzian (2012) and Gorbachev and Tikhonov (2012) as well as versions of the results in the setting of rank one symmetric spaces of non-compact type. Charles Fefferman (Princeton University) Title: Extension of Functions and Interpolation of Data Abstract: [PDF] Title: Whitney Extension THM (1934) Abstract: [PDF] Title: The Main results for C^m(R^n)|_E (E Finite, Large) Abstract: [PDF] Title: C^m(R^n)|_E (E Finite) The Heart of the Matter! Abstract: [PDF] Title: Last Talk, Wrap Up Abstract: [PDF] Paul Gauthier (Université de Montréal) Title: Approximation: holomorphic, harmonic, subharmonic Abstract: Analytic extensions are unique and 'hence' usually do not exist. There are two kinds of approximation theory: approximating functions by nicer functions on the same domain and approximating functions by similar functions defined on larger sets. I consider the latter type of approximation as an analyst's substitute for extension.
Jarod Hart (University of Kansas) Title: A Multilinear Local T(b) Theorem for Square Functions Abstract: This joint work in harmonic analysis with L. Oliveira and A. Grau de la Herrán addresses the study of oscillatory behavior of functions in the context of multilinear operators. In particular, we introduce a local testing condition, in place of typical global testing conditions, that is sufficient for bounds of some Littlewood-Paley type square functions. Given the local conditions on the square function operator, Carleson measure techniques are used to obtain L^2 estimates. These square function estimates are applied to give a new local testing condition sufficient for multilinear Calderón-Zygmund operators as well. Bo'az Klartag (Tel-Aviv University) Title: Moment Measures Abstract: We describe a certain bijection between convex functions in a finite-dimensional linear space X (modulo translations), and finite Borel measures on the dual space X* with barycenter at the origin. The construction is related to toric Kahler-Einstein metrics in complex geometry, to the Prekopa-Leindler inequality, and to the Minkowski problem in convex geometry. Garving "Kevin" Luli (Yale University) Title: Vector-valued extensions and smooth closures of ideals Abstract: In this talk, we will discuss two problems: one in analysis and the other in algebraic geometry; and explain the surprising marriage between the two. This is a joint work with Charles Fefferman. The analysis problem asks us to decide whether a given vector of functions defined on a subset of R^n extends (with some tolerance on the errors) to a vector-valued function in some smooth function space defined on all of R^n; while the algebraic geometry problem asks us to decide whether a function can be expressed as a linear combination of some given functions, where the coefficients of the linear combination are in some smooth function space. 
Hariharan Narayanan (University of Washington) Title: Fitting manifolds to random data Abstract: Increasingly, we are confronted with very high dimensional data sets. As a result, methods of avoiding the curse of dimensionality have come to prominence. One approach, which relies on exploiting the geometry of the data, has evolved into a subfield called manifold learning. The underlying hypothesis of this approach is that data tend to lie near a low dimensional submanifold, due to constraints that limit the degrees of freedom of the process generating them. This appears to be the case, for example, in speech and video data. Although there are many widely used algorithms which assume this hypothesis, the basic question of when data lie near a manifold is poorly understood. We will describe joint work with Charles Fefferman and Sanjoy Mitter on developing a provably correct algorithm to fit a nearly optimal manifold to random data and thereby test this hypothesis. Monica Nicolau (Stanford School of Medicine) Title: Geometric Transformations of Large Data Abstract: The past decade has witnessed developments in the field of biology that have brought about profound changes in understanding the dynamic of disease and of biological systems in general. New technology has given biologists an unprecedented wealth of information, but it has generated data that is hard to analyze mathematically, thereby making its biological interpretation difficult. These challenges have given rise to a myriad of novel, exciting mathematical problems and have provided an impetus to modify and adapt traditional mathematics tools, as well as develop novel techniques to tackle the data analysis problems raised in biology. I will discuss a general approach to address some of these computational challenges by way of a combination of geometric data transformations and topological methods.
These methods have been applied in a wide range of settings, in particular for the study of the biology of disease. I will discuss some concrete applications of these methods, including their use to discover a new type of breast cancer, identify disease progression trends, and highlight the driving mechanisms in acute myeloid leukemia. While the specifics of the work are focused on biological data analysis, the general approach addresses computational challenges in the analysis of any type of large data. Much of this work is joint with Gunnar Carlsson. Nathan Pennington (Kansas State University) Title: Local and global existence for the Lagrangian Averaged Navier-Stokes equations in Besov spaces Abstract: Through the use of a non-standard Leibniz rule estimate, we prove the existence of unique local and global solutions to the incompressible isotropic Lagrangian Averaged Navier-Stokes equation with initial data in various categories of Besov spaces. Specifically, for $p > n$, we get local existence with initial data $u_0 \in B^r_{p,q}(\mathbb{R}^n)$ for $r > 0$. For $p = 2$, we get local existence with initial data $u_0 \in B^{n/2-1}_{2,q}(\mathbb{R}^n)$ and the local solution can be extended to a global solution for $n = 3, 4$. Pavel Shvartsman (Technion-Israel Institute of Technology, Haifa, Israel) Title: Extensions of BMO-functions and fixed points of contractive mappings in $L^2$ Abstract: Let $E$ be a closed subset of $\mathbb{R}^n$ of positive Lebesgue measure. We discuss a constructive algorithm which to every function $f$ defined on $E$ assigns its almost optimal extension to a function $F(f) \in BMO(\mathbb{R}^n)$. We obtain the extension $F(f)$ as a fixed point of a certain contractive mapping $T_f : L^2(\mathbb{R}^n) \to L^2(\mathbb{R}^n)$. The extension operator $f \mapsto F(f)$ is non-linear, and in general it is not known whether there exists a continuous linear extension operator $BMO(\mathbb{R}^n)|_E \to BMO(\mathbb{R}^n)$ for an arbitrary set $E$. In this talk we present a rather wide family of sets for which such extension operators exist.
In particular, this family contains closures of domains with arbitrary internal and external cusps. The proof of this result is based on a solution to a similar problem for spaces of Lipschitz functions de_ned on subsets of a hyperbolic space. Amit Singer (Princeton University) Title: Vector Diffusion Maps and the Connection Laplacian Abstract: Vector diffusion maps (VDM) is a new mathematical framework for organizing and analyzing high dimensional data sets, 2D images and 3D shapes. While existing methods are either directly or indirectly related to the heat kernel for functions over the data, VDM is based on the heat kernel for vector fields. VDM equips the data with a metric, which we refer to as the vector diffusion distance. In the manifold learning setup, where the data set is distributed on a low dimensional manifold M^d embedded in R^p, we prove the relationship between VDM and the connection-Laplacian operator for vector fields over the manifold. Applications to structural biology (cryo-electron microscopy and NMR spectroscopy) and computer vision will be discussed. Joint work with Hau-tieng Wu. Rachel Ward (University of Texas at Austin) Title: Strengthened Sobolev inequalities for a random subspace of functions Abstract: We introduce discrete Sobolev inequalities for functions on the unit cube satisfying certain random collections of linear constraints, including constraints on the discrete Fourier transform. We then explain how these inequalities provide near-optimal error rates for image reconstruction by total variation in the compressed sensing setting. We finish by discussing several open problems. Hau-tieng Wu (Stanford University) Title: Diffusion and Laplacian on the bundle structure and their Applications Abstract: We start from introducing vector diffusion maps (VDM), a new mathematical framework for organizing and analyzing massive high dimensional data sets, images and shapes. 
VDM is a mathematical and algorithmic generalization of diffusion maps and other non-linear dimensionality reduction methods, such as LLE, ISOMAP and Laplacian eigenmaps. While existing methods are either directly or indirectly related to the heat kernel for functions over the data, VDM is based on the heat kernel for 1-forms and vector fields. VDM provides tools for organizing complex data sets, embedding them in a low dimensional space, and interpolating and regressing vector fields over the data. In particular, it equips the data with a metric, which we refer to as the {\em vector diffusion distance}. In the manifold learning setup, where the data set is distributed on a low dimensional manifold $\text{M}^d$ embedded in $\mathbb{R}^p$, we prove the relation between VDM and the connection-Laplacian operator for 1-forms over the manifold. The algorithm is directly applied to the cryo-EM problem and the result will be demonstrated. The relationship between VDM and the frame bundle structure turns out to offer a new approach to understanding massive high dimensional data. Indeed, by estimating the associated tangent bundle, we are able to determine the orientability of a given manifold and construct a new estimator of the Laplace-Beltrami operator. This approach turns out to be intimately related to the regression problem in statistics. Parallel to the generalized Nadaraya-Watson kernel approach, the locally linear regression can be generalized to the manifold structure once the bundle structure is taken into account. Po-Lam Yung (Rutgers University) Title: A subelliptic divergence-curl inequality Abstract: The classical Gagliardo-Nirenberg inequality states that if $u$ is a smooth function with compact support on $\mathbb{R}^n$, then $$\|u\|_{L^{n/(n-1)}} \leq C\|\nabla u\|_{L^1}.$$ It was only very recently that Lanzani-Stein and Bourgain-Brezis extended this inequality from functions to differential forms of higher degrees.
In this talk, we will discuss an analogue of their results in several complex variables. We will see a Gagliardo-Nirenberg inequality for $(0,q)$-forms on the Heisenberg group, and this is joint work with Yi Wang. Nahum Zobin (College of William and Mary) Title: Quantization of Whitney problems Abstract: Quantitative versions of Whitney problems require construction of functions with prescribed values on a given finite subset $E \subset \mathbb{R}^n$, which minimize a preferred functional norm. After solving this minimization problem we want to compute some natural functionals of the minimizer, e.g., its values at other points. Quantization is an art of replacing a minimization problem by a problem of computing certain amplitudes (similar to expected values) for a system where the preferred functional norm is treated as an action functional. There is an interesting connection between the computation of amplitudes (which are represented as functional integrals) and computations of convolutions of functions on some important unipotent Lie groups, similar to the Heisenberg-Weyl groups.
SLS 2013 Video Archive Playlist

- Charles Fefferman (Princeton University), "Extension of Functions and Interpolation of Data (Lecture 1)" (SLS 2013 - 01)
- Bo'az Klartag (Tel-Aviv University), "Moment Measures" (SLS 2013 - 02)
- Po-Lam Yung (Rutgers University), "A subelliptic divergence-curl inequality" (SLS 2013 - 03)
- Charles Fefferman (Princeton University), "Whitney Extension THM (1934) (Lecture 2)" (SLS 2013 - 04)
- Arie Israel (Courant Institute), "Interpolation of Data in Sobolev Spaces" (SLS 2013 - 05)
- Pavel Shvartsman (Technion-Israel Institute of Technology, Haifa, Israel), "Extensions of BMO-functions and fixed points of contractive mappings in L2" (SLS 2013 - 06)
- Arlie Petters (Duke University), "TBA" (SLS 2013 - 07)
- Charles Fefferman (Princeton University), "The Main results for C^m(R^n)|_E (E Finite, Large) (Lecture 3)" (SLS 2013 - 08)
- Kevin Luli (University of California), "TBA" (SLS 2013 - 09)
- Jarod Hart (University of Kansas), "A Multilinear Local T(b) Theorem for Square Functions" (SLS 2013 - 10)
- Charles Fefferman (Princeton University), "C^m(R^n)|_E (E Finite) The Heart of the Matter!" (SLS 2013 - 11)
- Nahum Zobin (College of William and Mary), "Quantization of Whitney problems" (SLS 2013 - 12)
- Charles Fefferman (Princeton University), "Last talk, wrap up (Lecture 5)" (SLS 2013 - 13)
- Monica Nicolau (Stanford School of Medicine), "Geometric Transformations of Large Data" (SLS 2013 - 14)
- Murat Akman (University of Kentucky), "Dimension of a certain measure in the plane" (SLS 2013 - 15)
- Paul Gauthier (Université de Montréal), "Approximation: holomorphic, harmonic, subharmonic" (SLS 2013 - 16)
- Rachel Ward (University of Texas at Austin), "Strengthened Sobolev inequalities for a random subspace of functions" (SLS 2013 - 17)
- Hau-tieng Wu (Stanford University), "Diffusion and Laplacian on the bundle structure and their applications" (SLS 2013 - 18)
- Hariharan Narayanan (University of Washington), "Fitting manifolds to random data" (SLS 2013 - 19)
http://math.stackexchange.com/questions/86307/about-density-of-some-piecewise-linear-function-in-some-subspace-of-ca-b
# About density of some piecewise linear function in some subspace of $C[a,b]$

It is known that the piecewise linear continuous functions form a dense set in the metric space $C[a,b]$ of continuous real valued functions on the compact interval $[a,b]\subset \mathbb{R}$ with "the supremum" metric $d(f,g)=\sup\limits_{t \in [a,b]} |f(t)-g(t)|$ for $f,g \in C[a,b]$. Consider the subspace $X=\{f\in C[a,b]: f(a)=f(b)\}$ of the metric space $(C[a,b],d)$. How do I show that the set of all piecewise linear functions $g$ on $[a,b]$ such that $g(a)=g(b)$ forms a dense subset of $X$?

- How do you show that the set of piecewise linear continuous functions is dense in $C[a,b]$? (I ask because your method might be easily modifiable to solve this problem, or it might not be.) I would use uniform continuity. – Jonas Meyer Nov 28 '11 at 9:00
- I saw a proof of the classical Weierstrass approximation theorem which is based on the density of piecewise linear functions, but I remember neither the author nor the title. – Richard Nov 28 '11 at 9:09
- Thanks. I was wondering if you had seen a more direct approach. – Jonas Meyer Nov 28 '11 at 9:19

You can prove it directly by using uniform continuity of $f\in X$, breaking up $[a,b]$ into a bunch of equal size pieces (small intervals), and putting a segment on each piece to match up with $f$ at the endpoints of the piece. If instead you wanted to use the result stated in the first paragraph, starting with $f\in X$, take a piecewise linear continuous $g$ such that $d(f,g)$ is small. Define $h$ to be linear with $h(a)=f(a)-g(a)$ and $h(b)=f(b)-g(b)$. Then $g+h$ is piecewise linear and matches $f$ at the endpoints, and $d(f,g+h)\leq d(f,g)+d(g,g+h)=d(f,g)+d(0,h)\leq 2d(f,g)$ is small.
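The direct construction (interpolate $f$ at equally spaced nodes) is easy to check numerically. The sketch below uses $f(t)=\sin t$ on $[a,b]=[0,2\pi]$ as an assumed example with $f(a)=f(b)$: the piecewise linear interpolant on $k$ equal subintervals agrees with $f$ at the endpoints by construction, and the sup-distance shrinks as $k$ grows.

```python
import numpy as np

a, b = 0.0, 2 * np.pi
f = np.sin                                   # f(a) = f(b) = 0, so f lies in X

k = 100                                      # number of equal subintervals
nodes = np.linspace(a, b, k + 1)
fine = np.linspace(a, b, 20001)              # dense grid to estimate the sup norm
g = np.interp(fine, nodes, f(nodes))         # piecewise linear, matches f at nodes

sup_err = np.max(np.abs(f(fine) - g))        # approximate d(f, g)
# g agrees with f at both endpoints, so g(a) = g(b): g stays in the subspace X.
```

For a twice-differentiable $f$ the sup error of linear interpolation is at most $h^2 \max|f''|/8$ with $h=(b-a)/k$, which here is about $5\times 10^{-4}$.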
https://documentation.progress.com/output/ua/OpenEdge_latest/gsins/jdkhome.html
Installation and Configuration JDKHOME Java is used by some products, such as WebSpeed, the AppServer, and SQL, for product functionality. After you install any of these products, you should verify that the JDKHOME value is set correctly in the registry. The value must be set to the directory where the JDK included in the OpenEdge installation resides (for example, C:\Progress\OpenEdge\jdk). You can verify the JDKHOME value in the following location in the registry: HKEY_LOCAL_MACHINE\SOFTWARE\PSC\PROGRESS\\JAVA
https://blender.stackexchange.com/questions/73842/can-blender-save-rendered-image-periodically?noredirect=1
# Can Blender save Rendered image periodically [duplicate]

I usually render still images with very high samples (1000-5000) in the Cycles render engine. I leave this rendering on my desktop PC overnight. I was wondering if there is a feature/addon that can save the rendered image every 100 samples or every 30 mins or something of the sort, so that when I wake up in the morning and there was a power outage during the middle of the render, I still have a render close to the quality I wanted rather than none.

• I believe the "save buffers" setting in Render settings > Performance does something like this, however I haven't tested it. – gandalf3 Feb 17 '17 at 7:52
• Maybe this answer is useful for you as well. – binweg Feb 17 '17 at 7:56
• @gandalf3 I have searched all over the internet for details about that feature; nothing comes up. Could you explain more on it? – Bob Kimani Feb 17 '17 at 8:49
• @binweg unfortunately it doesn't – Bob Kimani Feb 17 '17 at 8:49
• @Jamesup Unfortunately, after some testing, it appears save buffers does not do what I thought. It caches finished renderlayers to disk (by default in /tmp/blender_<random_hash>/_<scene name>_<renderlayer name>.exr) – gandalf3 Feb 17 '17 at 9:54

There is a trick: you can recombine images in post editing with Blender. Render the still image as an animation (using an animated seed setting) at a lower sample rate, then combine the ten or so resulting frames. It takes a bit of manual work to recombine them, and render speed is slightly lower (BVH rebuilds between frames), but for the rest it is doable. In the beginning some people hoped this would reduce noise. In theory one could also use this method to incrementally improve a real animation render if you store the images in different folders (by some scripting). But you might also check the latest Blender builds and the denoiser; unless you do a lot of diamond/glass reflections, the sample settings can then be decreased a lot.
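The recombination step the answer describes is just a per-pixel average, which cuts uncorrelated render noise by roughly $1/\sqrt{N}$ for $N$ frames. A minimal sketch of that averaging with synthetic data (NumPy arrays standing in for frames loaded from disk; in a real workflow you would load the images Blender saved):

```python
import numpy as np

rng = np.random.default_rng(42)
clean = rng.uniform(size=(64, 64, 3))        # stand-in for the noise-free image

# Ten "renders" of the same frame, each with independent per-pixel noise
# (mimicking a low-sample render with an animated seed).
renders = [clean + 0.1 * rng.normal(size=clean.shape) for _ in range(10)]

combined = np.mean(renders, axis=0)          # per-pixel average of all frames

err_single = np.std(renders[0] - clean)      # noise level of one render
err_combined = np.std(combined - clean)      # roughly err_single / sqrt(10)
```

Averaging ten frames reduces the noise standard deviation by about a factor of three, which is the same trade-off as rendering one frame with ten times the samples.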
https://liviusmathblog.blogspot.com/2012/07/some-berry-phase-computations.html
## Wednesday, July 25, 2012 ### Some Berry phase computations I'm attending this seminar on topological insulators. I still do not understand the physics, but here is some mathematics. Consider the Pauli matrices $$\bsi_x :=\left[\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array} \right],\;\;\bsi_y:=\left[ \begin{array}{cc} 0 & -\ii \\ \ii & 0 \end{array} \right],\;\;\bsi_z:=\left[ \begin{array}{cc} 1 &0\\ 0 & -1 \end{array} \right].$$ For $\bp =(x,y,z)\in \bR^3$ we set $$\bsi(\bp) = x\bsi_x +y\bsi_y+ z\bsi_z=\left[\begin{array}{cc} z & x-\ii y\\ x+\ii y & -z \end{array} \right].$$ The family of hermitian  matrices $\bsi(\bp)$ satisfy the  Clifford identities $$\bsi(\bp)\cdot\bsi(\bq) +\bsi(\bq)\cdot \bsi(\bp) =2\bp\cdot \bq,\;\;\forall \bp,\bq\in\bR^3.$$ In particular $$\bsi(\bp)^2= |\bp|^2,\;\;\forall\bp\in\bR^3,$$ so that the eigenvalues  of $\bsi(\bp)$ are $\pm |\bp|$.    Consider the unit sphere $$\bsS^2:=\bigl\lbrace\;\bp\in\bR^3;\;\;|\bp|=1\;\bigr\rbrace.$$ We obtain a family  of  hermitian matrices $\lbrace\bsi(\bp)\rbrace_{\bp\in\bsS^2}$ with eigenvalues $\pm 1$.  Denote by $V_{\bp}$ the $1$-eigenspace      of the matrix $\bsi(\bp)$,  $\bp\in \bsS^2$.   Suppose $\bp=(x,y,z)$ is not the South Pole $P_-=(0,0,-1)$, i.e.,   $z\neq -1$. We set $u=x+\ii y$.  To find a basis  of $V_\bp$ we  need to solve the system $$\left[\begin{array}{cc} z & \bar{u} \\ u & -z \end{array} \right] \cdot \left[ \begin{array}{c} z_1\\ z_2 \end{array} \right]=\left[\begin{array}{c} z_1\\ z_2 \end{array} \right].$$ We deduce that $$z_2= \frac{u}{1+z} z_1.$$ The stereographic projection from the  South Pole  is the map $$\zeta: \bsS^2\setminus P_-\to\bR^2=\bC$$ that associates to each  point $\bp\in\bsS^2\setminus P_-$ the  intersection of the line $P_-\bp$ with  the coordinate plane $z=0$. 
Concretely, if $\bp=(x,y,z)$, then $$\zeta(\bp)= \frac{u}{1+z}.$$ We deduce that $V_\bp$ is spanned by the vector $$\vec{z}(\bp)= (1,\zeta(\bp)).$$ We write $\zeta=\zeta_1+\ii \zeta_2$ so that we can use $(\zeta_1,\zeta_2)$ as coordinates on $\bsS^2\setminus P_-$. Consider the normalized vector $$|\Psi_\bp\ran :=\frac{1}{|\vec{z}(\bp)|} \vec{z}(\bp)= \frac{1}{\sqrt{1+|\zeta|^2}}(1,\zeta).$$ Set $$G(\zeta):= \sqrt{1+|\zeta|^2}.$$ Note that $$d\left(\frac{1}{G}\right) =-\frac{dG}{G^2}.$$ The Berry connection $\nabla$ is obtained from the equality $$\nabla|\Psi_\bp\ran = |\Psi_\bp\ran\lan \Psi_\bp|d\Psi_\bp\ran.$$ Observe that $$d|\Psi_\bp\ran = -\frac{dG}{G^2}(1,\zeta)+\frac{1}{G} (0,d\zeta),$$ $$\lan \Psi_\bp |d\Psi_\bp\ran =-\frac{(1+|\zeta|^2)dG}{G^3}+\frac{1}{G^2}\bar{\zeta} d\zeta$$ $$= -\frac{dG}{G} +\frac{1}{G^2} \bar{\zeta} d\zeta=\underbrace{-d\log G+\frac{1}{1+|\zeta|^2}\bar{\zeta} d\zeta}_{=:\omega}.$$ The $1$-form associated to the Berry connection is the above $\omega$. The curvature of the Berry connection is $$\Omega= d\omega = -\frac{d|\zeta|^2\wedge\bar{\zeta}\, d\zeta}{(1+|\zeta|^2)^2} + \frac{1}{1+|\zeta|^2}d\bar{\zeta}\wedge d\zeta=-\frac{|\zeta|^2}{(1+|\zeta|^2)^2} d\bar{\zeta}\wedge d\zeta +\frac{1}{1+|\zeta|^2} d\bar{\zeta}\wedge d\zeta$$ $$=\frac{1}{(1+|\zeta|^2)^2} d\bar{\zeta}\wedge d\zeta =\frac{2\ii}{(1+|\zeta|^2)^2} d\zeta_1\wedge d\zeta_2.$$ To compute the integral of $\Omega$ we use polar coordinates, $\zeta=r e^{\ii\theta}$ so that $$d\zeta_1\wedge d\zeta_2 =rdrd\theta$$ and $$\int_{\bC}\Omega=\ii \int_0^{2\pi}d\theta\int_0^\infty \frac{2rdr}{(1+r^2)^2}=\ii\int_0^{2\pi} d\theta\int_0^\infty \frac{d(1+r^2)}{(1+r^2)^2} =2\pi \ii.$$ The Chern form of the Berry connection is $\frac{\ii}{2\pi} \Omega$ and the first Chern number is $$\frac{\ii}{2\pi}\int_{\bC} \Omega =-1.$$
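The final answer can be double-checked numerically. The sketch below (plain NumPy; the cutoff radius $R$ and the step $dr$ are arbitrary numerical choices) integrates the curvature $\Omega = \frac{2\ii}{(1+r^2)^2}\,r\,dr\,d\theta$ over the plane by the midpoint rule and recovers $\int_{\bC}\Omega = 2\pi\ii$, hence the first Chern number $-1$:

```python
import numpy as np

# Midpoint-rule integration of Omega = 2i/(1+r^2)^2 * r dr dtheta over the plane,
# truncated at radius R; the discarded tail contributes about 2*pi*i/(1+R^2).
R, dr = 1000.0, 0.01
r = np.arange(dr / 2, R, dr)                        # radial midpoints
radial = np.sum(2.0 * r / (1.0 + r**2) ** 2) * dr   # -> 1 - 1/(1+R^2), i.e. ~1
omega_integral = 1j * 2.0 * np.pi * radial          # angular integral gives 2*pi
chern = (1j / (2.0 * np.pi)) * omega_integral       # first Chern number
```

The two factors of $\ii$ combine to a real number, so `chern` comes out as $-1$ up to the truncation and discretization error.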
http://mathhelpforum.com/statistics/144821-binomial-distribution.html
# Math Help - Binomial distribution.

1. ## [SOLVED] Binomial distribution.

Solve this problem: Six calculators are bought. The probability of a calculator breaking down within two years is 0.25. Calculate the probability that within two years:

a) Exactly two of the six calculators break
b) Three at most break
c) At least 3 break
d) Either two or three break

2. Hi Kane535, this is a binomial situation in which a calculator breaks down within 2 years or doesn't.

$(p+q)^6=\binom{6}{0}p^6q^0+\binom{6}{1}p^5q^1+\binom{6}{2}p^4q^2+\binom{6}{3}p^3q^3+\binom{6}{4}p^2q^4+\binom{6}{5}pq^5+\binom{6}{6}p^0q^6$

where p = probability of a breakdown in 2 years, which is 0.25, and q = probability the calculator is still functional after 2 years, which is 1-0.25=0.75.

$\binom{6}{n}p^{6-n}q^n$ is the probability of having exactly n functional calculators after 2 years.

(a) Exactly two break means exactly four are functional: $\binom{6}{4}p^2q^4=\binom{6}{2}p^2q^4$

(b) At most 3 is 3 or less, which is 0, 1, 2 or 3 breakdowns: $\binom{6}{3}p^3q^3+\binom{6}{4}p^2q^4+\binom{6}{5}pq^5+\binom{6}{6}q^6$

(c) At least 3 is 3, 4, 5 or 6 breakdowns: $\binom{6}{0}p^6+\binom{6}{1}p^5q+\binom{6}{2}p^4q^2+\binom{6}{3}p^3q^3$

(d) Either 2 or 3: $\binom{6}{3}p^3q^3+\binom{6}{4}p^2q^4$
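The four answers can be evaluated directly; a sketch in Python using only the standard library:

```python
from math import comb

p, q, n = 0.25, 0.75, 6

def pmf(k):
    # P(exactly k of the 6 calculators break down within two years)
    return comb(n, k) * p**k * q**(n - k)

a = pmf(2)                            # exactly two break
b = sum(pmf(k) for k in range(0, 4))  # at most three break
c = sum(pmf(k) for k in range(3, 7))  # at least three break
d = pmf(2) + pmf(3)                   # either two or three break
```

Since (b) and (c) overlap only in the "exactly 3" case, b + c - pmf(3) should equal 1, which is a quick consistency check on both.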
2014-07-11 02:47:37
https://physics.paperswithcode.com/paper/what-can-we-know-about-hypernuclei-via
## What can we know about hypernuclei via analysis of bremsstrahlung photons?

29 Oct 2018 · Liu Xin, Maydanyuk Sergei P., Zhang Peng-Ming, Liu Ling

We investigate, for the first time, the possibility of emission of bremsstrahlung photons in nuclear reactions with hypernuclei. A new model of the bremsstrahlung emission that accompanies interactions between $\alpha$ particles and hypernuclei is constructed, in which a new formalism for the magnetic moments of the nucleons and the hyperon inside the hypernucleus is added... For the first calculations, we choose the $\alpha$ decay of the normal nucleus $^{210}{\rm Po}$ and the hypernucleus $^{211}_{\Lambda}{\rm Po}$. We find that (1) emission for the hypernucleus $^{211}_{\Lambda}{\rm Po}$ is larger than for the normal nucleus $^{210}{\rm Po}$, and (2) the difference between these spectra is small. We propose a way to find hypernuclei for which the role of the hyperon is most essential in the emission of bremsstrahlung photons during $\alpha$ decay. As a demonstration of this property, we show that the spectra for the hypernuclei $^{107}_{\Lambda}{\rm Te}$ and $^{109}_{\Lambda}{\rm Te}$ are essentially larger than the spectra for the normal nuclei $^{106}{\rm Te}$ and $^{108}{\rm Te}$. This difference is explained by an additional contribution to the full bremsstrahlung, formed by the magnetic moment of the hyperon inside the hypernucleus. The bremsstrahlung emission formed by this mechanism is of the magnetic type. A new formula for fast estimates of bremsstrahlung spectra for even-even hypernuclei is proposed, in which the role of the magnetic moment of the hyperon in the formation of the bremsstrahlung emission appears explicitly. This analysis opens the possibility of new experimental studies of the properties of hypernuclei via bremsstrahlung.

# Categories

Nuclear Theory · High Energy Physics - Phenomenology · Nuclear Experiment
2021-09-25 05:52:22
https://www.lessonplanet.com/teachers/powers-of-ten
Powers of Ten

Students make paper ten strips using graph paper. They add them together to depict the powers of ten.
2020-11-25 07:29:14
https://workplace.stackexchange.com/questions/72427/appropriate-values-to-group-user-information-by-gender
# Appropriate values to group user information by gender?

We have an application that stores data regarding a user's gender. The end user does not see this data, only back-end developers. Possible values for this grouping are as follows:

Female
Male
Other

I was recently showing my friend the project and she pointed out that other people who see this data might deem this sexist, as it doesn't include many identifiable gender types. She also brought up the following concerns, from a data point of view, that someone reviewing the back-end of this application might notice:

• People might think that the numeric values (0 for female, 1 for male) used to store this data in the database are referencing genitalia.
• In binary 0 stands for off and 1 stands for on, meaning there is a possibility that female colleagues and/or programmers might deem this (even more) sexist.
• People might deem referring to them as 'Other' as outright rude and/or offensive.

I don't intend to offend anyone, regardless of their race, religion, gender or sexuality, and I've realised that this could potentially offend those who see the way that we're grouping this data by gender, as it may not be inclusive of the gender type they identify with.

In summary, the question I'd like answered is the following: Is it appropriate to collect data regarding one's gender using only a select few gender types (Male and Female), and group the rest into an 'Other' category?

Please note that this was not designed intentionally to offend anyone; it was something we had not put that much thought into while we were designing the database.

• Edited to bring in-line with the scope of TheWorkplace. Inserted a direct question that asks not for advice on what to do, but whether it was appropriate to group data using only the two types, grouping the rest into an 'other' category (and therefore 'excluding' them as options), which appears to be the OP's main concern.
– schizoid04 Mar 2 '17 at 16:18

As a woman who works on the technical side of development, I really don't care what the underlying numeric primary key value is. I wouldn't have even thought about the connotations that you've raised if you hadn't done so (of course, now I can't unsee it). I really, truly think that someone's overthinking this. Put down however many enumerated values as you desire for gender. If you want to be truly supportive of diverse gender identities, then three is nowhere near enough :)

However, an even better idea is to ask yourselves, "Do we REALLY need to know the gender of this person? Why? What do we plan to do with it?" If it has absolutely no bearing on how the record is utilised in the system, then it's not really worth collecting.

But to answer your question, it doesn't matter what you use for your database numeric keys. Having genders of just male and female is not sufficient, so if you really want to keep it to a minimal set, you could use "Male, Female, Undisclosed". That way you're not referring to gender diverse people as "Other", and you give everyone the option to choose if they wish to disclose their gender.

• Despite what certain movements may have you believe, gender is the single most important differentiator/driver for behaviour. Nothing else even comes close. Of course it's worth collecting. Let's not start restricting system architecture out of a misplaced sense of propriety. I understand where you're coming from, but the fact that you even bring up dropping a field so indisputably critical to a database of personal information is a telling sign that this equality nonsense has gone too far. Acknowledging that gender exists doesn't mean that we can't treat people of different genders equally. – Lilienthal Aug 1 '16 at 8:16
• There are plenty of databases that store gender that don't need it. Does it really matter if a meeting planner is female or male?
However, for certain applications, such as medical applications, it could be critical. It's up to the business to determine if gender is really needed. I am a woman and a strong feminist, but I would find that interpretation of the numbers totally ridiculous and off base. – HLGEM Aug 1 '16 at 14:24
• I totally agree with this, although it's one case where, if someone was really making a fuss, I'd just swap the database to use M/F/O rather than 0/1/2. I generally dislike storing strings or characters where an enumeration would do, in case the definition changes, but we're unlikely to change the words "Male" and "Female" anytime soon! – Jon Story Aug 5 '16 at 15:27
• No matter how you encode gender information, it must be represented in the database as some combination of zeros and ones. – Patricia Shanahan Aug 5 '16 at 16:01
• @xDaizu I have no idea if they assign people by genitalia, as I haven't seen the ones of the other guy in the room, nor did I particularly want to. I'm quite certain, though, that anyone in the company claiming to be of a less-common gender would get a single room if desired. They certainly would not stick a guy with a girl just because both had a penis. – Erik Mar 3 '17 at 13:06

These numbers are identifiers in a database. Database record identifiers are, by definition of their purpose, completely meaningless data. Anyone who tries to claim they mean something is almost certainly either not a software engineer or a bad software engineer.

• To add to that: These numbers aren't even shown to end users. This is literally just an implementation detail of a backend data system. Nobody but the admins will ever see this. For all intents and purposes, it could have been "234239482034023480" for male and "35204238423048203" for female. There is no difference, and assuming that the numbers are sexist because of the order someone chose is ridiculous and absurd.
– Magisch Aug 5 '16 at 13:13
• Are you really suggesting that the number for a woman should be higher, and possibly seen as superior, to that of a man? – Steve Ives Aug 5 '16 at 13:58
• I agree with this answer - the question's context is simply absurd. There is and should be no thought on this subject about gender discrimination/bias; it is simply back-end data and should be kept as such. – Sh4d0wsPlyr Aug 5 '16 at 15:23

What you should do is nothing. What you're dealing with is someone who is deliberately trying to be offended. I've dealt with this before over a different issue, where someone actually filed a union action over the colors we used in a spreadsheet. We had to dig in to stop that nonsense, and so do you. If you change anything, then you are tacitly admitting to doing wrong. Once you do that, life at work will become very difficult for you, because there can always be more reasons invented. Just as an example, I can make every last number offensive for one reason or another.

• 0 is round and could be offensive to people who are fat.
• 1 could be offensive because someone decides it's a phallic symbol.
• See "South Park" for someone being offended by the number 2.
• 3, 6, and 9 could all be offensive to Christians (if they so choose) because of the number of the beast.
• 4 because it resembles a knife.
• 7 because it resembles a gun.
• 8 because its hour-glass shape promotes a certain body shape.
• ...which leaves five, and if I thought about it long enough I could make that offensive as well.

It may cause you some short-term discomfort, but if you give in to one silly demand, there will be no end of it for you. Again, for emphasis: if you change what you have, you are admitting to wrongdoing, whether you realize it or not. Once you've got that label slapped on you, the next complaint will have more force because you have a "history of bigoted behavior" or some other nonsense.
Stand your ground and dismiss your coworker's concerns as the trouble-making disruption they are.

• Problem solved! Use 5 for Female, 55 for Male and 555 for Other. – Ouroboros Aug 5 '16 at 14:23
• @Ouroboros some are more equal than others. – user37746 Sep 15 '16 at 11:27
• +2 for your whole answer, loved it; but -1 for "do nothing". If any number can be offensive and it doesn't matter, you might as well go standard. :D Seriously, any excuse is good for that... – xDaizu Mar 2 '17 at 16:28
• @xDaizu Great article. Would you mind if I included that in my answer? I'll give you the credit. – Richard U Mar 2 '17 at 21:21
• @RichardU Sure, go ahead ^^ – xDaizu Mar 2 '17 at 22:39

As is usually the case in IT, just go standard: use ISO/IEC 5218. The four codes specified in ISO/IEC 5218 are: 0 = not known, 1 = male, 2 = female, 9 = not applicable. The standard specifies that its use may be referred to by the designator "SEX". Fun note: even though the ISO explicitly says that no significance is to be placed on the encoding of male as 1 and female as 2, since we are not using the problematic 0, males can say that they are number one while females can say they are twice as good, so that should make everyone happy... right? :)

It's an enumeration of values to allow it to be stored in the database. No developer is going to notice or care that female is represented by 0 and male by 1, and this fact should never be exposed to the end user. If possible, it might be better to store it in a char(1)... that way you can query it using something like WHERE gender = 'F' rather than WHERE gender = 0. It sorts out your friend's issue with it and also provides greater ease of use for future development.

• Ah but this would still be sexist, because F still has a lower ASCII value (70) than M (77).
It might be safer to use M/W (Man/Woman) because W has a value of (87) :) :) – Jon Story Aug 5 '16 at 15:30
• Using a char(1) could have unforeseen knock-on effects with regard to collation and character sets, whereas ints will not. – Moo Aug 5 '16 at 16:05
• @Moo Provided the app is designed to handle a char(1) instead of an int, I don't see why it would be an issue. What sort of problem did you have in mind? – Maybe_Factor Aug 8 '16 at 2:01
• @JonStory An M has two peaks, while a W has just one (and, depending on the font, it's a lower peak). Subtle way you have to remind everyone that there are more men in "top positions", huh? :) :) – xDaizu Mar 2 '17 at 16:30
• Looking at the W, I see three peaks... – gnasher729 Aug 1 '18 at 23:27

As 2 stands for other (or not sure), people might deem this (somewhat) transphobic or outright rude and/or offensive. While the other two points look like someone actively trying to find fault for the sake of finding fault, there's no reason to limit yourself to a tristate with this as the 3rd option. If you want to be more inclusive, look at the expansive lists of options sites like Facebook provide beyond male/female and offer the same. Facebook apparently is up to 71 options. I'd provide a direct link, except the only 3rd-party sites I can find listing them all have highly negative reactions to the idea.

• I would add to this that letting users define their own gender and pronouns would be very inclusive. Having lots of defaults is great, but whatever boxes you make, someone won't feel right in them. – Andrew Piliser Aug 5 '16 at 18:04

In summary, the question I'd like answered is the following: Is it appropriate to collect data regarding one's gender using only a select few gender types (Male and Female), and group the rest into an 'Other' category?

That depends on why you are collecting that data in the first place. Data models aren't supposed to fulfill political correctness needs, they are supposed to fulfill business needs.
So what exactly is the business need for the gender field?

Do you want to be able to use the correct honorifics and pronouns when communicating with users? Then save the pronouns and honorifics.

Do you need it for some marketing analysis? Then you might indeed have a business case for using a more complicated representation of sex/gender identity, because people with non-standard identities are demographics with non-standard consumer behavior.

Are you building a dating app? Most dating services have a clear binary distinction between how the user self-identifies and what partners they are looking for. Dating for people with non-binary gender identity and/or preference is a rather specialized market segment. Ask your management if catering to this segment is part of their business plan. If it is, there are two solutions: either invent a super-complex system to match people who might be interested in dating each other, or just drop the gender information altogether and let people decide based on profiles alone.

Is it for reporting to some 3rd party? Then report in the format that 3rd party wants.

• +1 "Data models aren't supposed to fulfill political correctness needs, they are supposed to fulfill business needs." – A. I. Breveleri Mar 3 '17 at 23:10

Personally, I think your co-worker is overthinking it, as others have already said. Now, more in line with your classification of genders, I have seen some systems use the following five: Male, Female, Other Specific, Not Known, Not Specified. This number is not a set rule by any means; in fact, for simplicity, I would go with Not Specified for the third and final option.
This assumes your business requirements are not able to be changed with respect to whether and how gender must be tracked.

1. Use strings instead of ordinal values. Of course, if you use this field as a key, or frequently in joins, your queries might be a few milliseconds slower, which can add up.
2. Change the ordinal values of your enumeration to be large numbers. The actual value probably doesn't matter. It takes no more CPU effort to compare a large number than it does to compare a small one. If you're using a 32-bit integer, you have 4 billion numbers to pick from. Of course, you should take care that your numbers do not differ by only a 1 or a 0, because we just cannot un-see this post!

I think you are WAY overthinking this. Whatever you put into the database for the primary key is going to have to have a unique value. Numbers, characters. The computer is a calculator. It deals with numbers. Accordingly, having unique numeric values implies that each value will have different properties. Some will be less than others. Some will be greater. Some will be primes. Some will not. Some will have one digit, others will have two. We can go on with this ad infinitum, and if we take the OP's approach then it sorta says it's sane for us to mince how the qualities (as described) of these values somehow "ding" the worth of the genders they're associated with in the data. It's like the old saying of how some people say "tomaTOE" vs "toMAHto". Remarkable, brow-raising?? Slightly. High in value, in the long term? Hmm, probably not. Epic, long-term, cataclysmic effects with the globe spinning off its axis?? SMH.

I was recently showing my friend the project and she pointed out that other people who see this data might deem this sexist, as it doesn't include many identifiable gender types.

That's because your friend is talking about the product, not your technical problem of implementing the product.
Your friend is, evidently, of the opinion that your app, website, service, or whatever should be representing gender differently. Your friend probably does not mean to offend your specific job of implementing the 3 options in a database in some way that reflects the decision made by product. If you have sway in the product, you might consider exercising it. Many businesses have greatly benefited from gender inclusiveness in their product and marketing (source: just watching subway ads reach out to trans people). If you have 3 gender options you need to implement, then you don't have any social theory problem, just a product you need to implement in the backend, and you can apply your usual engineering knowledge for naming things with relatively little political implication.

You don't change your application (which may have significant cost for development and testing) unless someone complains who is actually affected, gives a reason that actually justifies the cost, and your boss tells you to make the change. The somewhat justified complaint that someone who isn't male or female would have to choose a category "other" that makes them somehow stand out can be solved: change "other" to "unknown, undisclosed, or other". And then you can volunteer to not disclose your gender so nobody needs to be the only person in that category. And change the question from "which are you" to "which describes you best", so someone who sees themselves 60/40 between two genders can pick the 60% case without having to make a compromise.

First, establish she wasn't joking. Then, roll your eyes and make it Other = 0, Male = 1, Female = 2. And stop showing her your code!

You are writing a piece of software, and your first and foremost priority is making sure it works and is easy for your peers to work with. When it comes to data analysis, data isn't inherently bigoted. It's only bigoted when you ascribe further meaning than what is present.
For example, if your numbers show that there are more men than women who work at your company, there's nothing wrong with that. If you take that and say that women are therefore less qualified than men to work in your field, that's problematic. If your friend, or some hypothetical future person, sees that a '0' is being used for women and a '1' is for men, and decides that that number is the number of penises that employee has, that is not reflective of your attitude or your codebase's attitude. Your code is just text; it can't have an opinion.

The question you should ask is: can you rewrite your code in such a way that a programmer not familiar with it can still work effectively? Or will your proposed change trade off clarity/convenience against perceived gender neutrality? If it's the former, then go ahead and make the change if it makes you more comfortable. An easy workaround is maybe to just replace every '0' or '1' using #define and explicitly name each group, so no one will assume any further meaning behind it.

Why waste storage space by having an Integer variable for this purpose? Use a boolean instead - IsMale: true or false. Null value if neither male nor female. Problem solved.

• Some people are neither male nor female - transgender/intersex for example. Or perhaps they do not wish the company concerned to know their gender. – Ed Heal Aug 5 '16 at 12:24
• Androgynous people exist, and there are arguably a lot more than 2 genders. On a technical note, almost all implementations of SQL relational databases use padding, so an integer number is likely to be only minimally less space-consuming than a boolean (unless you have a LOT of booleans in that table). And considering it's a person database, it's likely not large enough for spacing to matter. You could probably make every field in that table a char(500) and it wouldn't matter.
– Magisch Aug 5 '16 at 13:05
• @AndrewPiliser all of those can be compressed into 'other' unless you feel OP should update their tables every time their tumblr page updates as well... – easymoden00b Aug 5 '16 at 18:13
• @easymoden00b I'd recommend letting people write in whatever they want, with defaults for common choices. Also, way to bring up Tumblr. Are you going to call me an SJW next? – Andrew Piliser Aug 5 '16 at 18:33
• And of course if you use a boolean, why isn't it IsFemale? Or better yet? IsOther. – HLGEM Mar 2 '17 at 19:01
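The ISO/IEC 5218 coding recommended in one of the answers is easy to pin down in code; a minimal Python sketch (the enum and field names are our own illustrative choices, not part of the standard):

```python
from enum import IntEnum

class Sex(IntEnum):
    # Integer codes taken from ISO/IEC 5218.
    NOT_KNOWN = 0
    MALE = 1
    FEMALE = 2
    NOT_APPLICABLE = 9

# The database keeps storing a standard integer, while application
# code reads and writes symbolic names instead of bare 0/1/2/9.
record = {"name": "A. User", "sex": int(Sex.NOT_APPLICABLE)}
restored = Sex(record["sex"])
```

Round-tripping through the integer keeps the stored value standard while the code that touches it stays readable, which sidesteps most of the "what does the number mean" debate above.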
2019-10-16 00:14:24
https://www.quantopian.com/posts/how-to-get-timestamps-for-history-in-customfactor
How to get TimeStamps for history in CustomFactor

The historical data inputs in a CustomFactor are of type np.ndarray, which makes it difficult to combine them with other timeseries data that was calculated outside the pipeline (essentially because of alignment). Is there any way I can get timestamps for the historical data in the pipeline, e.g. via some clever calendar calculations using the today input as a starting point? I'd like to create a pd.Series or a pd.DataFrame indexed by the timestamp. An example use case is e.g. to combine futures data with equity data.

3 responses

Hi Ivory Ant, have you solved the problem? I have just started using Quantopian :) Thanks~~

No, sorry; if I remember correctly I abandoned the whole thing because of other problems with mixing futures and stocks. Maybe something with the pipeline, can't really remember.

Thanks Ivory, I am using the following approach: from zipline.utils.calendars import get_calendar
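The calendar idea can be sketched as follows. This is not an official Quantopian API: `label_window` is a hypothetical helper, and business days stand in here for real exchange sessions (a faithful version would take the last `window_length` sessions of `get_calendar("NYSE")` up to `today` instead):

```python
import numpy as np
import pandas as pd

def label_window(values, today, window_length):
    # Rebuild the lookback window's dates, ending at `today`.
    # pd.bdate_range gives Mon-Fri business days, an approximation
    # of trading sessions (no exchange holidays).
    idx = pd.bdate_range(end=today, periods=window_length)
    return pd.Series(np.asarray(values), index=idx)

# Label a 5-day window of values ending on Friday 2018-10-19.
s = label_window(np.arange(5.0), pd.Timestamp("2018-10-19"), 5)
```

The resulting Series is indexed by timestamp and can be aligned against other timeseries the usual pandas way.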
2018-10-20 10:26:51
http://mathoverflow.net/revisions/69295/list
Maybe I overlooked it, but I didn't see, in the previous answers, anything about a really geometric view of vectors. When introducing vector spaces, I like to use 2-dimensional vectors (arrows drawn on the blackboard, with the understanding that only length and direction matter, not the location on the board), with geometric definitions of addition and scalar multiplication. It is, of course, easy to explain that these geometric vectors are "really the same" as 2-component algebraic vectors (i.e., elements of $\mathbb R^2$), and also that the sameness depends on the choice of a coordinate system. This approach provides me with a lot of analogies for more complicated things that come up later in the course.
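The point that "the sameness depends on the choice of a coordinate system" can be illustrated numerically; a small NumPy sketch in which rotating the axes changes an arrow's components but not the arrow itself (its length is unchanged):

```python
import numpy as np

v = np.array([3.0, 4.0])   # components of an arrow in the standard basis
phi = np.pi / 6            # rotate the coordinate axes by 30 degrees

# Components of the SAME arrow expressed in the rotated basis.
R = np.array([[ np.cos(phi), np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])
v_rot = R @ v

# The components change, but the geometric length does not.
length_before = np.linalg.norm(v)
length_after = np.linalg.norm(v_rot)
```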
2013-06-20 03:59:56
http://mathhelpforum.com/advanced-statistics/187147-probability-measure.html
probability measure 1. probability measure Let P be a probability measure and R a random variable on a certain space. P(.) only takes on the values 0 and 1 for all events; I have to show that there is a number a such that the event R=a has probability one. My idea was that P(.) can not be zero for all values, because the total probability must be 1. How do I work this out rigorously? 2. Re: probability measure Let $f:x\mapsto P(R\leq x)$. Let $S=\left\{x\in\mathbb R: f(x)=1\right\}$. By the hypothesis, the set $S$ is non-empty. What about $\inf S$?
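A sketch of how the hint can be finished (added here; not from the original thread):

```latex
Since $f(x)=P(R\le x)$ is non-decreasing, right-continuous, takes only the
values $0$ and $1$ by hypothesis, and satisfies $f(x)\to 0$ as $x\to-\infty$
and $f(x)\to 1$ as $x\to+\infty$, the set $S$ is non-empty and bounded below.
Put $a=\inf S$. For every $x>a$ there is $s\in S$ with $a\le s<x$, so
$f(x)\ge f(s)=1$; right-continuity then gives $f(a)=1$. For $x<a$ we have
$x\notin S$, hence $f(x)=0$. Therefore
\[
P(R=a)=f(a)-\lim_{x\uparrow a}f(x)=1-0=1 .
\]
```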
2015-04-28 06:23:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.847639799118042, "perplexity": 395.7829274279254}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246660724.78/warc/CC-MAIN-20150417045740-00050-ip-10-235-10-82.ec2.internal.warc.gz"}
http://wpressutexas.net/oldcoursewiki/index.php?title=/Segment18&diff=next&oldid=1945&printable=yes
# /Segment18 #### To Calculate 1. Random points i are chosen uniformly on a circle of radius 1, and their $(x_i,y_i)$ coordinates in the plane are recorded. What is the 2x2 covariance matrix of the random variables $X$ and $Y$? (Hint: Transform probabilities from $\theta$ to $x$. Second hint: Is there a symmetry argument that some components must be zero, or must be equal?) $\theta \sim \mathrm{Unif}(0, 2\pi)$, so the pdf of $\theta$ is $p(\theta) = \frac{1}{2\pi}$. With $x=\cos\theta,\ y=\sin\theta$: $E(x) = \frac{1}{2\pi}\int_0^{2\pi} \cos\theta \, d\theta = 0$, $E(y) = \frac{1}{2\pi}\int_0^{2\pi} \sin\theta \, d\theta = 0$, $E(x^2) = \frac{1}{2\pi}\int_0^{2\pi} \cos^2\theta \, d\theta = \frac{1}{2}$, $E(y^2) = \frac{1}{2\pi}\int_0^{2\pi} \sin^2\theta \, d\theta = \frac{1}{2}$, so $Var(x)=E(x^2)-E(x)^2= \frac{1}{2}$, $Var(y)=E(y^2)-E(y)^2= \frac{1}{2}$, and $Cov(x,y)= E[(x-\mu_x)(y-\mu_y)] = E(xy) = \frac{1}{2\pi}\int_0^{2\pi} \sin\theta \cos\theta \, d\theta = 0$, so $\Sigma= \begin{bmatrix} Var(x) & Cov(x,y)\\[0.4em] Cov(y,x) & Var(y) \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & 0\\[0.4em] 0 & \frac{1}{2} \end{bmatrix}$ 2. Points are generated in 3 dimensions by this prescription: Choose $\lambda$ uniformly random in $(0,1)$. Then a point's $(x,y,z)$ coordinates are $(\alpha\lambda,\beta\lambda,\gamma\lambda)$. What is the covariance matrix of the random variables $(X,Y,Z)$ in terms of $\alpha,\beta,\text{ and }\gamma$? What is the linear correlation matrix of the same random variables?
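The hand computation in problem 1 can be spot-checked numerically; this Monte Carlo sketch (mine, not part of the wiki page) samples uniform points on the unit circle and estimates the covariance matrix, which should come out close to $\mathrm{diag}(1/2,1/2)$:

```python
import math
import random

random.seed(0)
N = 200_000
# Uniform angle theta ~ Unif(0, 2*pi), point (cos theta, sin theta).
pts = [(math.cos(t), math.sin(t))
       for t in (random.uniform(0, 2 * math.pi) for _ in range(N))]

mx = sum(x for x, _ in pts) / N
my = sum(y for _, y in pts) / N
var_x = sum((x - mx) ** 2 for x, _ in pts) / N
var_y = sum((y - my) ** 2 for _, y in pts) / N
cov_xy = sum((x - mx) * (y - my) for x, y in pts) / N

# Estimates should be close to Var = 1/2, 1/2 and Cov = 0.
print(var_x, var_y, cov_xy)
```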
2021-01-15 18:28:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.944013774394989, "perplexity": 230.24852605467592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495936.3/warc/CC-MAIN-20210115164417-20210115194417-00257.warc.gz"}
https://www.physicsforums.com/threads/express-the-following-integral-in-terms-of-the-gamma-function.395170/
# Homework Help: Express the following integral in terms of the gamma function 1. Apr 14, 2010 ### Ben1220 1. The problem statement, all variables and given/known data This is actually part of a probability problem I'm thinking about. I'm trying to find the nth moment of a certain random variable in terms of the gamma function, which is basically equivalent to solving the following integral or expressing it in terms of the gamma function. Here is the integral: 2. Relevant equations Mathematica: Integrate[(a/b^a) x^(n + a - 1) Exp[-(x/b)^a], {x, 0, Infinity}] Plain text: integral_0^infinity (a x^(n+a-1) e^(-(x/b)^a))/b^a dx a, b, and n are constants. wolfram alpha: http://www.wolframalpha.com/input/?i=integrate+%28%28a%2Fb^a%29*x^%28n%2Ba-1%29*e^%28-1%28x%2Fb%29^a%29%2Cx%2C0%2Cinf%29 3. The attempt at a solution I tried integrating by parts, taking u = -x^n and dv/dx = ... everything else in the expression (since that can be integrated nicely using a "derivative is present" substitution); I was left with -1 + an even nastier integral, so I'm not convinced this is the right method... 2. Apr 14, 2010 ### vela Staff Emeritus Don't actually try to integrate it. Use substitutions that turn the integral into the form of the gamma function.
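Not from the thread: carrying out the substitution vela suggests, $t=(x/b)^a$ (so $x=bt^{1/a}$ and $dx=\frac{b}{a}t^{1/a-1}\,dt$), collapses the integral to the gamma function in one step:

```latex
\int_0^\infty \frac{a}{b^a}\,x^{n+a-1}e^{-(x/b)^a}\,dx
  \;=\; b^n \int_0^\infty t^{n/a}\,e^{-t}\,dt
  \;=\; b^n\,\Gamma\!\Bigl(\frac{n}{a}+1\Bigr),
```

which is the standard expression for the $n$th moment of a Weibull random variable with shape $a$ and scale $b$.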
2018-06-18 00:42:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8009024858474731, "perplexity": 528.0994059616754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859904.56/warc/CC-MAIN-20180617232711-20180618012711-00612.warc.gz"}
https://discusstest.codechef.com/t/dufresne-editorial/22381
# DUFRESNE - Editorial Author: Ankush Khanna ### DIFFICULTY: Easy-Medium ### PROBLEM: Given N distinct points on a plane, out of which M points lie on a single straight line (M \leq N). There is only one such line on the plane which can contain more than 2 points. Count the number of distinct straight lines formed after connecting each point with all the other points. Print it modulo 10^9 + 7. ### QUICK EXPLANATION: Combinatorics can help us solve this problem. We can prove that we will have (\binom{N}{2} - \binom{M}{2} + 1) unique lines on this plane after connecting each point with every other point. ### EXPLANATION: Let us first consider all combinations of pairings of the points on this plane; mathematics always has the answer. First of all, there are \binom{N}{2} ways of choosing two points (for connection) out of the N points on this plane without repetition. Now it is also given that there are M points on a single straight line; therefore, we don't have to join those points among each other explicitly, because in that case we would be counting more than we need, and we would also count non-unique straight lines. Therefore, we should count the ways of connecting two points among those M points and subtract those combinations from our result for all N points. This way we eliminate repetition of lines (we need to print the minimum possible number of lines, without repetition). Also, if we are counting unique lines, then there is only one way of counting and only one result, so we always find the minimum number of lines drawn on this 2-D plane (after all connections are made). The number of ways of choosing two points (for connection) out of the M points is \binom{M}{2}. Now, we subtract it from our result for N points and we get \binom{N}{2} - \binom{M}{2}. But, hey, we are forgetting something really important here: we subtracted the result for M points and we just lost all connectivity among those M points.
Therefore, we need to add 1 to this result, because there is exactly one way to connect all the M points together (a single straight line). \therefore we have proved that our answer is going to be (\binom{N}{2} - \binom{M}{2} + 1), using combinatorics and some geometry about straight lines. Now comes the implementation part. For the constraints of sub-task 1, we can use a brute force approach by actually counting the ways one by one for each of the points (iterating over all other points). Time complexity will be O(N^2). This way we get 20 points. Alternatively, we can build a factorial prefix using modular arithmetic, and find the factorial inverse under a prime modulus (let K = 10^9 + 7) using Fermat's Little Theorem. Therefore time and space complexities would be O(N + \log_2 K) and O(N) respectively (O(N) time for building the prefix, and O(\log_2 K) time for finding the modular inverse using Fermat's Little Theorem). This way, we are actually computing the binomial coefficients modulo K, and we will get 40 points (sub-task 1 and sub-task 2). By just applying simple mathematics, we can easily figure out that \binom{N}{2} = \frac{N \times (N - 1)}{2}. Therefore we brought linear time down to constant time. \therefore the time complexity now is O(1), fair enough to pass the tests. Note: Apply modular arithmetic very carefully, because here we have a subtraction involved. So, take the modulo only once, like this: $(\frac{N \times (N - 1)}{2} - \frac{M \times (M - 1)}{2} + 1) \bmod (10^9 + 7)$ ### COMPLEXITY: O(1) time per test case, with O(1) auxiliary space used. ### SOLUTION: Feel free to share your approach. If you have any queries, they are always welcome.
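The closed form is a one-liner to implement; a minimal sketch in Python (function name mine), applying the modulus only once at the end as the editorial advises:

```python
MOD = 10**9 + 7

def count_lines(n: int, m: int) -> int:
    """Distinct lines through n points, m of which are collinear (m <= n)."""
    # C(n,2) - C(m,2) + 1; n*(n-1) and m*(m-1) are always even, so the
    # integer division by 2 is exact, and the modulus is taken just once.
    return (n * (n - 1) // 2 - m * (m - 1) // 2 + 1) % MOD

# 4 points with 3 collinear: the shared line, plus 3 lines to the 4th point.
print(count_lines(4, 3))  # 4
```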
2021-07-27 22:22:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7233754992485046, "perplexity": 518.9478562127745}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153491.18/warc/CC-MAIN-20210727202227-20210727232227-00225.warc.gz"}
http://lbay.fondazionemercantini.it/divs-overlapping.html
# Divs Overlapping Css overlapping divs. Because AP Divs can easily overlap each other, selecting this check box overrides that behavior by forcing boxes next to each other and preventing the creation of new boxes on top of each other. And then when it's doing span B, it's moving as far to the right as it can, and span C as well. The higher the index, the closer the div will appear to the front of the page. A demonstration page has been prepared consisting of four layers. Usually HTML pages are considered two-dimensional, because text, images and other elements are arranged on the page without overlapping. I think the problem is that you are setting heights and widths on your divs and then relatively positioning them. Take a look at Divi column code and you will see a lot of 'clearfix' classes added, which work as a fix for this common issue with floating elements. Meaning, the image should be "placed" on two different backgrounds. We have seen the first two dimensions in previous lessons. Stop divs from overlapping on resize. The goal is to have the first circle against the left edge and the right circle against the right edge. I kind of forgot to mention that part: I don't want them to be hidden, I want each div to automatically allow the "previous" one to finish its content without overlapping (tried with overflow: auto, but it gave scrollbars to all the forms in the whole site). Flexible grids and layouts. With my screen resolution at 1152x864, if I change the size of my resolution or browser window, my divs overlap in the middle of the screen.
Fixed positioning is similar to absolute positioning, with the exception that the element's containing block is the initial containing block established by the viewport, unless any ancestor has a transform, perspective, or filter property set to something other than none (see the CSS Transforms spec), which then causes that ancestor to take the place of the element's containing block. jQuery UI is a curated set of user interface interactions, effects, widgets, and themes built on top of the jQuery JavaScript Library. Be careful when you size your divs. This technique does not require any extra elements to create the dimmed background overlay. An HTML tag that is widely used to break apart a Web page into elements, each with its own layout attributes. The left will contain an image and the right will contain two additional divs that contain some text and a timestamp. Hi, I am trying to make these three divs overlap; two of them are images and the other one is my content box. Thus, the images may be positioned so they are stacked. (OK, it's actually a little more complicated than that, but as long as you're not using negative margins to overlap inline elements, you probably won't encounter the edge cases.) I think you should move the two sidebar divs earlier in the code, somewhere below the header or at the start of the main div. With the above code, img2 will float above img1. Footer overlapping other divs.
The easiest and most reliable way to center content for IE6 and below is to apply text-align: center to the parent element and then apply text-align: left to the element to be centered, to make sure the text within it is aligned properly. Floated div elements will not overlap. Any help is greatly appreciated. Problem with divs, form elements overlapping (HTML / CSS Forums on Bytes). Overlap DIVs - Macromedia Dynamic HTML. The reason I have Wrapper and Maincontent is because I have two separate backgrounds going on. Divs should only be used to build structure and as placeholders for design elements when no other block-level elements can describe the content. Centre Multiple Divs in IE: it would be nice if we could centre multiple block elements, and there is a way. Or, perhaps, your logo must be in the exact same place on every web page. I have tried different solutions to correct the overlap, but the same result occurs every time. Equidistant Objects. I've been using display: table-cell and display: table in CSS for a couple of different projects recently and have been impressed by the results. Related threads: Overlapping DIVs: one isn't blocking the other; overlapping divs; mousemove for overlapping divs; anchor id problem with fixed and scrolling divs; Overlapping DIVs killing links; overlapping divs in IE; Overlapping divs in Firefox until refresh; problem with floats and overlapping divs.
Below are the component parts for our custom infinite loop cross-fade script (HTML, CSS, and Javascript) plus links to a working demo and source files. I need a border between the two divs that is the same height as the tallest div. I can accomplish this action using CSS and Javascript by various methods of "hiding" the divs containing the expanded text, but I am. How to prevent divs from overlapping? Let's say your window is 1000px; then the container will be 700px. My column divs are overlapping on Bootstrap. They are both purpose-built, for two different (yet perhaps partially overlapping) purposes. An element can have a positive or negative stack order: Absolute, Relative, Fixed Positioning: How Do They Differ? There are at least four ways of doing this. Creating an overlay effect simply means putting two divs together at the same place, with both divs appearing only when needed. I'm making the game with jQuery and Javascript. This is a web app, so the actual persons are generated by user selection from a database and the numbers may vary. See your HTML elements as stacks. I already tried changing the @viewport, body, max-width and min-width, but none worked. If all Flexbox brought us was sane vertical centering, I'd be more than overjoyed with it. Overlapping divs on absolute positioning. jquery-overlap. 2 elements in the root means 2 stacks. Overlapping Content.
I tried z-indexing it to no avail. To make the design of your website responsive you can try the next 3 tips: 1. Once the window width gets below a certain size, those elements have no choice but to overlap. The border-spacing property is used to set the spaces between cells of a table, and the border-collapse property is used to specify whether the borders of a table collapse or not. The menu you see on the right on this page is simply a UL list. Grids can be used to lay out major page areas or small user interface elements. But before I do this I want to validate that none of these divs overlap each other. Would like to avoid a loop within a loop if possible. Hi, I'm making a game where a box moves and collides with a draggable div. Define background colors via Design Options; add a Single Image element to the second row.
Gabriel Kanes: Hi, I have a concept for a site where several divs would show intro paragraphs, and when a user clicks a "more" button the div expands in size and shows more verbose content. You'll probably have to experiment a little, but it shouldn't be too hard to do what you want. First up, it seems like a kludge to get around a badly designed page or layout. How are you positioning these divs? Just a note with floated divs: you might need. Please can anyone help with this? The images in question are based on the code below (works OK in Chrome and FF): /* underscore l refers to landscape orientation */. Using the "left" and "top" attributes, we can absolutely position each sub-menu within its parent menu item. How to get DIVs to not overlap when resizing? I want to stop this overlapping. I tried it in both Chrome and Safari. This is what my page-frontpage. In this case we have five divs and each div is a block element. Set a min-width on the containing element. Unless the browser is told otherwise. But what if you want the parent (div1) to have a border? You'll see that it will drop the height because the child elements are floating. All of your divs use absolute positioning, but that may have been something you didn't intend. Here is the site in question:. I want to detect collisions between the two divs.
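The "validate that none of these divs overlap" and "detect collisions between the two divs" questions both reduce to an axis-aligned rectangle intersection test. Here is a language-agnostic sketch in Python (names mine): two boxes overlap unless one lies entirely to the left of, right of, above, or below the other.

```python
def rects_overlap(a, b):
    """a, b are (left, top, width, height) boxes, as a getBoundingClientRect()
    style API would report them (y grows downward)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    # No overlap iff one box is entirely beyond an edge of the other.
    return not (ax + aw <= bx or bx + bw <= ax or
                ay + ah <= by or by + bh <= ay)

def any_overlap(rects):
    # O(n^2) pairwise check -- the "loop within a loop" is hard to avoid
    # without a sweep-line or a spatial index, but it is fine for a few divs.
    return any(rects_overlap(rects[i], rects[j])
               for i in range(len(rects)) for j in range(i + 1, len(rects)))

print(rects_overlap((0, 0, 100, 50), (90, 10, 100, 50)))  # True
print(rects_overlap((0, 0, 100, 50), (100, 0, 50, 50)))   # False (edges only touch)
```

The same comparison translates directly to Javascript over `getBoundingClientRect()` values.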
Divs, a very common and highly familiar word in the web development field. I have one "content" area at. So far nothing I have tried has gotten it back to the bottom of the page. You will notice I have set the "left" property to 149px (1px less than the width of the menu items), which allows the sub-menus to overlap the main menu and not produce a double border. Web browsers render different elements in different ways. This should sit vertically centred between the header and the footer (which is glued to the bottom of the browser window). In this guide, we'll go over two separate CSS […]. The sortable widget uses the jQuery UI CSS framework to style its look and feel. Unlike an HTML table, div layers can be overlapped with each other. I know this is my ignorance of something. Using DIVs as containers; creating an ID style; ID vs. While fiddling around with the CSS3 box-shadow property, I stumbled across a method to put a double border on a single element. Fortunately, this is easily fixed. Floating divs overlapping. The following code disables all CSS page-break instructions on all DIVs of the second page and limits the page to a 500 pixel maximum. Multiple overlapping. Hi, I have the following HTML. IE 11 flexbox with flex-wrap: wrap doesn't seem to calculate widths correctly (box-sizing ignored?).
1) The images/buttons on the left and in the main content do not appear in IE6, and I have tried various changes to the z-index, including on their parent divs. How do I overcome this problem? Hello all, I've got a slight issue with two divs overlapping in mobile view on this site, which is in its initial stages (Welcome to Merlin's Cafe) - everything reduces down fine apart from the picture and form divs: when the layout shrinks down to fit a mobile screen, the form goes over the top of the cafe image rather than sitting underneath it as it should. Centred Page: horizontal and vertical centering has always been awkward with CSS, as vertical-align only. IE11 divs overlapping. I thought the easiest way was to have overlapping borders, so two borders would appear as one, but can't get it to. Images overlapping DIVs - please help. It's called the clearfix hack. I'm finding it impossible to vertically align the main area of my homepage, which is a javascript slideshow. There are quite some topics out there about this, but none of them solved my issue. In some cases, you want an image to overlap/overlay two rows within a Visual Composer layout. Here is the css code for the divs (my original code was improved by Chris Happy, from "why are my nav div and middle div overlapping?" on Stack Overflow): div#Container { position: relative; }
CSS Tutorial For Beginners: How to place two divs side by side in CSS and HTML. I am wanting to generate a separate div for each product. All the project related content on the page is in a "project" div, and within that there are divs "project info", "project images" and "project nav". This, I believe, is the heart of the question. You can of course overlap the labels by placing them into different divs and applying left: styles to them. The first 6 of the 9 CSS rules described here have fixed-size background images. You'd probably want the logo to be styled as such: div#logo { position: absolute; left: 100px; /* or whatever */ } Note: absolute position has its eccentricities. Elements with position: absolute have no margins, but they do have borders and padding. In this post, I'll show you some of the mistakes and poor markup practices I often see and explain how to avoid them. See W3C's specs for paragraphs for more information on this topic. I am making an online store with every category using a specific page template; that's all done, but now I want to edit the layout of that page template. Try out the example code below: #container { width: 70%; min-width: 1000px; margin: auto; } #left { float: left; }. Yet it brings many other capabilities to the …. The features shown in this overview will then be explained in greater detail in the rest of this guide. This forces uniform scaling for both the x and y, aligning the midpoint of the SVG object with the midpoint of the container element.
Sublime Text 3 (ST3) is the latest version of one of the most commonly used plain text editors by web developers, coders, and programmers. CSS3 features are making their way into the various browsers, and while many are holding off on implementing them, there are those who are venturing ahead and likely running into a world of interesting quirks across the various platforms. In this lesson, we will learn how to let different elements become layers. The task: four divs, each in one corner of the screen (position: fixed); when resizing the browser window the divs may not overlap each other; pure HTML & CSS. Using CSS "display: table-cell" for columns. During which time I was trying to investigate some of the new controls available in ASP. I have several divs that seem to overlap. Answer: use the CSS z-index property. I practiced creating div tags in the lessons provided by my professor and read over the CSS tutorials for a 2nd time. Based on examples in Backdraft, I created an HTML object within an inline layout.
There are many, many reasons not to use frames, but the best reason (IMO) is to control the user experience. Every HTML tagset has a box. I have several divs that seem to overlap as per the fiddle, but want the homemidcontent div to be below the homebanner div? Please help. You may need to also set the height of those divs to a fixed height. If sortable-specific styling is needed, the following CSS class names can be used for overrides or as keys for the classes option. Hello all, I'm having some problems with floating 2 divs within another div. I wonder how to separate them from overlapping? Thank you!
With min-widths set, the divs inside a 1000px container may already fill it, so they overflow rather than shrink. One site-building method uses "overlays": divs that are styled and filled to create the site content. For overlapping rectangles, CSS handles the positioning and Javascript defines the clicking behaviour; wiring an onclick to every rectangle by ID is not elegant, so attach one handler and dispatch on the event target instead. In a 12-column grid, three equal divs are each given a width of 4 columns, since 4+4+4 adds up to 12. To control which overlapping element wins, assign each a z-index number. A fluid design accommodates narrower window sizes.
CSS3 box-shadow properties allow single or multiple, inner or outer drop-shadows. A very common mobile-view bug: when a layout shrinks to fit a phone screen, a form div ends up over the top of an image div rather than sitting underneath it as it should. Conversely, an element that "stays put even when the page scrolls" is usually position: fixed when absolute was intended.
A typical structure question: a parent div inside a list element containing two child divs, one for the left and one for the right. Instead of keeping the divs on the same line with inline-block, you can float the left and right children. With dynamic content there is no telling what the height of the divs should be, so prefer min-height and let each grow, rather than fixed heights that let later content collide.
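A sketch of that structure, with the floats cleared so the list item wraps both children (class names, widths, and the min-height value are illustrative):

```html
<li class="row">
  <!-- floated instead of inline-block, as described above -->
  <div style="float: left;  width: 48%; min-height: 30px;">left child</div>
  <div style="float: right; width: 48%; min-height: 30px;">right child</div>
  <!-- clearing element: stops the parent collapsing around the floats -->
  <div style="clear: both;"></div>
</li>
```

The final clearing div is what keeps the following list items from sliding up and overlapping the floated pair.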
To detect which divs the cursor is over, store the location of each div in page coordinates (top, right, bottom, left) and have an onmousemove handler check the cursor against each rectangle on every move. For left and right divs that overlap when the window is resized, either drop the 100vh/100vw sizing or set the height to auto with a media query at small widths. Floated divs also overlap following content until the float is cleared, which is what the clearfix hack is for.
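The onmousemove approach above boils down to an axis-aligned rectangle intersection test. A minimal sketch in plain Javascript (the function name is mine; in a browser the rectangles would come from element.getBoundingClientRect()):

```javascript
// True when two axis-aligned rectangles intersect.
// Each rect is {left, top, right, bottom} in page coordinates.
function rectsOverlap(a, b) {
  return a.left < b.right && b.left < a.right &&
         a.top  < b.bottom && b.top  < a.bottom;
}

// Example: the first pair overlaps, the second pair only shares an edge.
const hit  = rectsOverlap({left: 0, top: 0, right: 10, bottom: 10},
                          {left: 5, top: 5, right: 15, bottom: 15}); // true
const miss = rectsOverlap({left: 0, top: 0, right: 10, bottom: 10},
                          {left: 10, top: 0, right: 20, bottom: 10}); // false
```

Using strict inequalities means rectangles that merely touch edges are not counted as overlapping, which is usually what you want for collision checks.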
Hardly any web page is built without at least one div. CSS effectively operates in three dimensions: height, width and depth. Think of HTML elements as stacks: two elements at the root means two stacks, and where positioned elements overlap you control the layering with the z-index property. A minimal clearfix is simply overflow: auto on the container. Drop-down content can be revealed either onclick of the anchor link or onmouseover instead.
A stray gap between inline-block divs can be eliminated by deleting the whitespace between the closing tag of the first div and the opening tag of the second. Divs that overlap after a float are overlapping because the float was never cleared. Stacking order is then a matter of z-index: overlapping elements with a larger z-index cover those with a smaller one, so giving one element a greater z-index than another places it in front (closer to you) wherever the two overlap.
One layout uses a div to the left of the main body holding a no-repeat background image. To put four divs side by side responsively, give each a percentage width and float: left; the widths then adapt automatically. When a packageBox div and a footerBox div overlap, check for fixed heights and absolute positioning first; there is no reason a stack of five divs should each be set to 100vh and 100vw. Using the left and top properties, each sub-menu can be absolutely positioned within its parent menu item.
Floating divs overlap because a div is floated but the float is never cleared. The z-index property specifies the stack order of an element: which element is placed in front of, or behind, the others. To react to page scrolling, the Javascript setInterval function can be used to measure the scroll position periodically.
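The two usual forms of the clearing fix, sketched (the selector names are illustrative):

```css
/* Modern clearfix: the container generates a cleared
   pseudo-element, so it encloses its floated children. */
.clearfix::after {
  content: "";
  display: block;
  clear: both;
}

/* Simpler variant mentioned above: creating a new block
   formatting context also contains the floats. */
.group { overflow: auto; }
```

The overflow: auto variant is shorter but can produce scrollbars when content protrudes, which is why the pseudo-element form is usually preferred.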
Unlike HTML table cells, div layers can be made to overlap, one on top of the other, via the CSS z-index property. To draw a circle, give an element equal width and height and a border-radius of at least 50% of that size. If floated divs disappear behind the footer, the container has collapsed around its floats and needs clearing.
If all Flexbox brought us was sane vertical centering, that alone would be worth it. Beyond that, deliberate overlap is easy to construct: a wider box can overlap a neighbouring square by being given a negative left offset (say left: -30px), though this leaves a matching 30px gap on the other side unless the box's width is increased by the same amount.
In LaTeX, consecutive use of the pgfplots axis environment inside one tikzpicture creates overlapping plots, which can be exploited deliberately for multi-axis figures with different abscissae and ordinates. On the CSS side, a centred content block should sit vertically between the header and a footer glued to the bottom of the browser window.
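A minimal sketch of that deliberate overlap: two axis environments in one tikzpicture producing a dual-y-axis plot (the functions, limits, and compat level are illustrative assumptions):

```latex
\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.17}
\begin{document}
\begin{tikzpicture}
  % First axis: left y axis and the shared x axis.
  \begin{axis}[
      axis y line*=left,
      xmin=0, xmax=2, ymin=0, ymax=2,
      xlabel={$x$}, ylabel={$f(x)=x$},
    ]
    \addplot[domain=0:2] {x};
  \end{axis}
  % Second axis drawn on top: right y axis, no x axis of its own.
  \begin{axis}[
      axis y line*=right,
      axis x line=none,
      xmin=0, xmax=2, ymin=0, ymax=4,
      ylabel={$g(x)=x^2$},
    ]
    \addplot[dashed, domain=0:2] {x^2};
  \end{axis}
\end{tikzpicture}
\end{document}
```

Both axes share the same x range, so the two plots line up horizontally while each keeps its own ordinate scale.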
A footer can be made to stick to the bottom by setting the content div's height to 100vh minus the footer height. The basic overlay pattern uses two divs: one that covers the entire screen (the layer) and one holding the content shown over that layer. A question translated from French asks: my CSS divs overlap, how can I force one on top of the other? The answer is again positioning plus z-index. To tune how much two styled divs overlap, increase the offset (20px instead of 10px, say) and switch the divs between the two classes until the overlap looks right. Divs that expand to reveal more verbose content when a "more" button is clicked work the same way, by hiding and showing the expanded block.
There are two clean ways to overlap elements deliberately: the position property and CSS Grid. For a classic two-column page, float div A as a sidebar and div B as content, both to the left, so they align horizontally. If the parent of floated children carries a border you will see its height collapse, because the floats are taken out of flow; clear them to restore it. Also note that vertical-align is designed to align inline elements, and a div is a block element, so the property does not directly apply to it.
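The Grid route can be sketched by placing every child into the same single cell (class names are illustrative):

```css
/* Deliberate overlap with CSS Grid: all children share cell 1/1. */
.stack {
  display: grid;
}
.stack > * {
  grid-area: 1 / 1;   /* row 1, column 1 for every child */
}
.stack > .caption {
  align-self: end;    /* sits over the bottom edge of the sibling */
  z-index: 1;         /* drawn above it */
}
```

Markup is just a wrapper div of class stack containing, say, an image followed by a caption div; no absolute positioning is needed.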
In a three-column layout, text links in the right column can overlap the content that appears below them on the page; the usual cause is an absolutely positioned or un-cleared column. On its own, float: left only tells a box to appear on the left side. To make one image overlap another, say a fading wood image over the original, absolutely position the second image inside a relatively positioned wrapper.
Centering a block-level element with no fixed width is easy: use margin: auto. Two notes translated from Spanish: this stacking is controlled with z-index, and elements with position: absolute have no margins but do keep borders and padding. A perfect circle is an element with equal width and height and a border-radius of half that size. Finally, divs made draggable and resizable will happily overlap one another unless the drag logic constrains their rectangles.
The content to reveal or overlap is simply kept inside an arbitrary div on the page, for easy customising. When sections overlap on a live mobile site, the cause is almost always fixed heights or viewport-unit sizing; remove the 100vh/100vw rules or reset heights to auto in a mobile media query. CSS Grid Layout, for its part, introduces a two-dimensional grid system to CSS, suitable for laying out major page areas as well as small interface elements.
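That media-query fix can be sketched as follows (the 480px breakpoint and the class name are illustrative):

```css
/* Viewport-unit sizing that causes overlap on small screens... */
.section {
  height: 100vh;
  width: 100vw;
}

/* ...reset below a mobile breakpoint so content sets the height. */
@media (max-width: 480px) {
  .section {
    height: auto;
    width: 100%;   /* 100vw can cause horizontal overflow */
  }
}
```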
Technique #4 is applying a large outline to a modal, so the outline itself dims everything behind it. On the Javascript side, the tab "show" event fires on tab show, but before the new tab pane has actually been shown.
https://stacks.math.columbia.edu/tag/04F1
Lemma 37.6.2. If $f : X \to S$ is a formally unramified morphism, then given any solid commutative diagram $\xymatrix{ X \ar[d]_ f & T \ar[d]^ i \ar[l] \\ S & T' \ar[l] \ar@{-->}[lu] }$ where $T \subset T'$ is a first order thickening of schemes over $S$ there exists at most one dotted arrow making the diagram commute. In other words, in Definition 37.6.1 the condition that $T$ be affine may be dropped. Proof. This is true because a morphism is determined by its restrictions to affine opens. $\square$
https://tex.stackexchange.com/questions/327169/feynman-diagramm-4-point-vertex-into-3-point-vertex
# Feynman Diagram: 4-point vertex into 3-point vertex

I asked a question before about how to make a 4-point vertex involving a blob. I got an answer in the form of

\documentclass{article}
\usepackage{tikz-feynman,contour}
\begin{document}
\begin{tikzpicture}
\begin{feynman}
\vertex[blob,label={above:$P$}] (m) at ( 0, 0) {\contour{white}{$\leftarrow$}};
\vertex (a) at (-2,-1) {$K' + P \\ \uparrow$};
\vertex (b) at ( 2,-1) {$\uparrow \\ K + P$};
\vertex (c) at (-2, 1) {$-K' \\ \downarrow$};
\vertex (d) at ( 2, 1) {$\downarrow \\ -K$};
\diagram* {
  (d) -- [fermion] (m) -- [fermion] (c),
  (b) -- [fermion] (m) -- [fermion] (a),
};
\end{feynman}
\end{tikzpicture}
\end{document}

Now I want to make one that looks like this: How do I have to modify my initial code to get this one?

//Edit: Arrows pointing to the left in the bosonic propagators (curly lines) on the right side would be nice, too.

//Edit2: This is what I have:

\documentclass{article}
\usepackage{tikz-feynman,contour}
\begin{document}
\begin{tikzpicture}
\begin{feynman}
\vertex[blob,label={above:$P$}] (m) at ( 0, 0) {\contour{white}{$\leftarrow$}};
\vertex (a) at (-2,-1) {$K' + P \\ \uparrow$};
\vertex (b) at ( 2,-1) {$\uparrow \\ K + P$};
\vertex (c) at (-2, 1) {$-K' \\ \downarrow$};
\vertex (d) at ( 2, 1) {$\downarrow \\ -K$};
\diagram* {
  (d) -- [fermion] (m) -- [fermion] (c),
  (b) -- [fermion] (m) -- [fermion] (a),
};
\end{feynman}
\begin{feynman}
\vertex (m) at ( 0, 0);
\vertex (a) at (-2,-1) {$K + P \\ \uparrow$};
\vertex (c) at (-2, 1) {$-K \\ \downarrow$};
\vertex (n) at ( 2, 0) {$P$};
\diagram* {
  (m) -- [fermion] (a),
  (m) -- [fermion] (c),
  (n) -- [boson] (m),
};
\end{feynman}
\begin{feynman}
\vertex (m) at ( 0, 0);
\vertex (a) at (2,-1) {$K + P \\ \uparrow$};
\vertex (c) at (2, 1) {$-K \\ \downarrow$};
\vertex (n) at ( -2, 0) {$P$};
\diagram* {
  (m) -- [fermion] (a),
  (m) -- [fermion] (c),
  (n) -- [boson] (m),
};
\end{feynman}
\end{tikzpicture}
\end{document}

I got all 3 diagrams that I want.
My problem is that they are all beneath each other, but I need them next to each other (with an arrow and a plus sign). Any suggestions? • Can you show us what you have tried so far? Also, can I recommend that you read the documentation for TikZ-Feynman? You can find it on CTAN and also on the project's website, and there are examples of the diagrams you're looking for. Lastly, you should make sure to accept the previous answer; it's a nice way to say "thank you" and people in the future will be more likely to help out :) – JP-Ellis Aug 30 '16 at 13:44 • @JP-Ellis I did that before, but my "reputation" was not enough, so it could not be recorded. – Zyrax Aug 30 '16 at 13:48 • To accept an answer, you have to click on the tick (✓) next to it, which doesn't require much/any reputation (and that's distinct from an upvote, which you can also do, but I think requires more reputation). Let me know if you have any trouble adapting an example from the documentation :) – JP-Ellis Aug 30 '16 at 14:00 • @JP-Ellis I edited my post. I got the 3 diagrams, but they are not next to each other. Any suggestions on that? – Zyrax Aug 30 '16 at 14:26 Good to see you had a go at your own question! I'll help you with the last little bit :) To get them side-by-side, you can do one of two things: 1. Translate all the TikZ coordinates of the other diagrams, so that instead of using (0,0) for the origin, use (5,5) and have things relative to that. 2. Separate the TikZ environments so that LaTeX then places each graphic on its own. Of the two methods, I prefer the second because it also allows you to use the diagrams in equations. The only problem is that LaTeX will want to line up the bottom of the diagram with the rest of the line, meaning that the equals and plus signs won't be centred. This can be fixed by using TikZ's baseline key, which informs LaTeX of how high the diagram ought to be (in much the same way that the letter 'g' dips below the line).
Finally, the argument to baseline that I'm giving is a little trick I learnt from another question which ensures that everything lines up with the + and = signs. Here's the code:

\RequirePackage{luatex85}
\documentclass{standalone}
\usepackage{amsmath}
\usepackage[compat=1.1.0]{tikz-feynman}
\usepackage{contour}
\begin{document}
\begin{equation*}
\begin{tikzpicture}[baseline=-\the\dimexpr\fontdimen22\textfont2\relax]
\begin{feynman}
\vertex[blob,label={above:$P$}] (m) at (0, 0) {\contour{white}{$\leftarrow$}};
\vertex (a) at (-2,-1) {$K' + P \\ \uparrow$};
\vertex (b) at ( 2,-1) {$\uparrow \\ K + P$};
\vertex (c) at (-2, 1) {$-K' \\ \downarrow$};
\vertex (d) at ( 2, 1) {$\downarrow \\ -K$};
\diagram* {
  (d) -- [fermion] (m) -- [fermion] (c),
  (b) -- [fermion] (m) -- [fermion] (a),
};
\end{feynman}
\end{tikzpicture}
=
\begin{tikzpicture}[baseline=-\the\dimexpr\fontdimen22\textfont2\relax]
\begin{feynman}
\vertex (m) at ( 0, 0);
\vertex (a) at (-2,-1) {$K + P \\ \uparrow$};
\vertex (c) at (-2, 1) {$-K \\ \downarrow$};
\vertex (n) at ( 2, 0) {$P$};
\diagram* {
  (m) -- [fermion] (a),
  (m) -- [fermion] (c),
  (n) -- [charged boson] (m),
};
\end{feynman}
\end{tikzpicture}
+
\begin{tikzpicture}[baseline=-\the\dimexpr\fontdimen22\textfont2\relax]
\begin{feynman}
\vertex (m) at (0, 0);
\vertex (a) at (2,-1) {$K + P \\ \uparrow$};
\vertex (c) at (2, 1) {$-K \\ \downarrow$};
\vertex (n) at ( -2, 0) {$P$};
\diagram* {
  (m) -- [fermion] (a),
  (m) -- [fermion] (c),
  (n) -- [anti charged boson] (m),
};
\end{feynman}
\end{tikzpicture}
\end{equation*}
\end{document}

• Thanks, that helps. You have a small distance to the right and left of the +. How can I get such a distance? – Zyrax Aug 30 '16 at 15:09 • This is the usual spacing that comes within maths environments. In your own version, make sure you have the {equation} environment too (or whichever maths environment you prefer). – JP-Ellis Aug 30 '16 at 15:16
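The baseline trick in the answer can be isolated into a minimal example (the rectangle stands in for any diagram); \fontdimen22 of \textfont2 is the math-axis height of the current font, so giving TikZ's baseline its negative centres the picture on the axis of the + and = signs:

```latex
\documentclass{article}
\usepackage{tikz}
\begin{document}
% The picture's baseline is shifted to the math axis, so the
% box is vertically centred on the + and = signs around it.
$A = \begin{tikzpicture}[baseline=-\the\dimexpr\fontdimen22\textfont2\relax]
  \draw (0,-0.5) rectangle (1,0.5);
\end{tikzpicture} + B$
\end{document}
```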
http://www.jmis.org/archive/view_article?pid=jmis-7-2-189
Section D

# Basic Physiological Research on the Wing Flapping of the Sweet Potato Hawkmoth Using Multimedia

Isao Nakajima1,3,*, Yukako Yagi2

1Nakajima labo., Seisa University, Yokohama, Japan. E-mail: i_nakajima@seisa.ac.jp
2Memorial Sloan Kettering Cancer Center, New York, NY, USA. E-mail: yagiy@mskcc.org
3Nakajima labo., Dept. of EMS, Tokai University School of Medicine, Isehara, Japan. E-mail: ja9eco@gmail.com
*Corresponding Author: Isao Nakajima, Seisa University, Satsukigaoka 8-80, Aobaku, Yokohama, Japan, +81-90-8850-8380, jh1rnz@aol.com.

© Copyright 2020 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Jun 20, 2020; Revised: Jun 23, 2020; Accepted: Jun 25, 2020. Published Online: Jun 30, 2020

## Abstract

We have developed a device for recording biological data by inserting three electrodes and a needle carrying an angular velocity sensor into the moth, for the purpose of measuring the electromyogram of wing flapping and the corresponding lift force. With this measurement, it is possible to evaluate the physiological function of moths, so that the dose of pesticides to which insects are exposed (currently regulated by LD50-based standards), especially the amount of chronic low-concentration exposure, can be reduced. We measured and recorded 2-channel electromyography (EMG) and the angular velocity corresponding to pitch angle (pitch-like angle) associated with wing flapping for 100 sweet potato hawkmoths (50 females and 50 males), with the animals suspended and constrained in air. Overall, the angular velocity and the amplitude of the EMG signals demonstrated high correlation, with a correlation coefficient of R = 0.792.
In contrast, the results of analysis performed on the peak-to-peak (PP) EMG intervals, which correspond to the RR intervals of ECG signals, indicated a correlation between ΔF fluctuation and angular velocity of R = 0.379. Thus, the accuracy of the regression curve was relatively poor. Using a DC amplification circuit without capacitive coupling as the EMG amplification circuit, we confirmed that the baseline changes at the gear-change point of wing flapping. The following formula gives the lift provided by the wing: angular velocity × thoracic weight − air resistance − (eddy resistance due to turbulence). In future studies, we plan to attach a micro radio transmitter to the moths to gather data on potential energy, kinetic energy, and displacement during free flight for analysis. Such physiological functional evaluations of moths may alleviate damage to insect health due to repeated exposure to multiple agrochemicals and may lead to significant changes in the toxicity standards, which are currently based on LD50 values.

Keywords: Chronic low dose exposure; Physiological functional evaluation; Electromyography; LD50-based standards

## I. BACKGROUND

Insects, particularly species of large moths, are in danger of disappearing from the natural world. A key factor underlying this peril may be the significant impact of repeated exposure to multiple chemicals. Our concern (an assumption at this point) is that the current practice of judging acute toxicity, as represented by the use of LD50 values, is disproportionately damaging the biodiversity of large insect species. We launched this basic research with the goal of screening for subtle physiological anomalies [1], [2], [3], [4]. It is conceivable that the repeated use of multiple agrochemicals will eventually cause irreparable damage to mankind.
1.1. Repeated exposure to multiple agrochemicals

Large moths, which travel great distances in the natural environment, are highly likely to be repeatedly exposed to multiple types of agrochemicals. While LD50 values are standard indicators of acute exposure, chronic exposure is assumed to be 1/100 of the LD50 value. Simple arithmetic indicates that if an individual organism is exposed to 20 different types of agrochemicals, theoretical exposure to toxins polluting the environment is 1/100 × 20 = 1/5 of the LD50 dose. Our team has cast doubts on such a simplified method of calculating chronic toxicity levels based on LD50. Our study seeks to obtain physiological data of moth flight to screen basically healthy individual specimens for those exhibiting anomalies in physiological function at low-level exposures, with the goal of identifying an indicator that may replace 1/100 of the LD50 value.

1.2. Analysis of flapping motion

Many reports of studies of flapping wings have focused on the theme "why insects can fly." In recent years, military research has investigated insect flight mechanisms as potential propulsion mechanisms for micro air vehicles (MAVs). A mainstream trend in the fluid dynamics approach to the study of flight mechanisms applies numerical simulation techniques to analyze the lift provided by wings. These studies typically assume steady-state wing theory, while taking into account additional effects such as separation vortex motion [4]-[9].

Table 1. Repeated exposure to multiple agrochemicals.
1. Insecticides: chemicals used to control insect pests detrimental to crops
2. Fungicides: chemicals used to prevent diseases detrimental to crops
3. Insecticide-fungicide mixtures: chemicals used to control insect pests and diseases simultaneously
4. Herbicides: chemicals used to control weeds
5. Rodenticides: chemicals used to control rodent pests, such as field rats, that are detrimental to crops
6. Plant growth regulators: chemicals that promote or inhibit the growth of crops
7. Attractants: chemicals that lure and trap mainly insect pests with smell, etc.
8. Spreading agents: chemicals mixed with other agrochemicals to enhance adhesion to plant surfaces
9. Microbial pesticides: chemicals used to control insect pests and diseases detrimental to crops using microbes

The following equation gives the lift L produced by the translational motion of wings required to achieve airspeed U:

$L = \frac{1}{2} \rho U^2 S C, \quad (1)$

where C is the coefficient of lift, ρ is air density, and S is the surface area of the wing. This equation describes only a relationship that exists at a momentary snapshot of wing motion meeting a specific criterion (the steady-state wing condition). Accounts based on pioneering studies and experiments suggest that for continuous wing flapping, the lift created by the second and third flaps differs from that of the first flap. For this study, we decided to attach a monoaxial (mainly pitch-axis) angular velocity sensor to a moth, albeit under restrictive conditions wherein the moth was suspended and constrained by a string. We focused on angular velocity because, owing to its correlation with lift, we expected to be able to observe muscular contraction by measuring it. The angular velocity measurement technique had already been established in our earlier work measuring the force exerted in the flight of the whooper swan and the heart of the chicken (proportional to blood pressure) [10]-[12].

## II. METHODS

2.1. Target insect of experiment

We gathered and analyzed physiological data of wing flapping for 100 hawkmoth (Agrius convolvuli) specimens suspended in air using string (in a constrained state).
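Eq. (1) can be illustrated numerically with a minimal sketch; the parameter values below are illustrative assumptions, not measurements from this study:

```python
def steady_state_lift(rho, U, S, C):
    """Steady-state wing lift, Eq. (1): L = 1/2 * rho * U^2 * S * C."""
    return 0.5 * rho * U ** 2 * S * C

# Illustrative (assumed) values:
rho = 1.2    # air density, kg/m^3
U = 5.0      # airspeed, m/s
S = 50e-4    # wing surface area, m^2 (50 cm^2)
C = 1.0      # lift coefficient, dimensionless

L = steady_state_lift(rho, U, S, C)
print(f"L = {L:.3f} N")  # 0.5 * 1.2 * 25 * 0.005 = 0.075 N
```

Because lift scales with the square of airspeed, the instantaneous steady-state value changes rapidly over a stroke, which is one reason this relation is treated as valid only at a momentary snapshot of wing motion.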
Number of moth specimens: 100 specimens of Agrius convolvuli (50 males and 50 females)
Mean body weight: 1.1 g
Mean dry weight: 0.28 g (approximately 75% of the body weight is water)

Agrius convolvuli is the largest hawkmoth found in the Japanese islands. Its wingspan may reach 105 mm, often resulting in its being mistaken for a bird. Larvae feed on the leaves of plants belonging to the Convolvulaceae family, such as sweet potatoes, making them despised pests among farmers. They are bred for research purposes at the University of Tokyo and other institutions [13]-[15]. We selected the hawkmoth for our study due to its maximum payload, which is regarded as being around 0.3 g. This allows these insects to carry a micro radio transmitter while flying freely outdoors in experiments to be carried out after obtaining the appropriate radio transmission license from the Ministry of Internal Affairs and Communications.

2.2. Anesthesia

As with vertebrates, surgical procedures on moths require anesthesia. Having discovered early on that nerve-block agents such as xylocaine, acting directly on the nerves, cause total loss of the moth's ability to fly, we were compelled to examine other reversible procedures. Reducing temperatures to 8–10 °C is known to slow moth metabolism, inducing a state of suspended animation. We placed a moth in a Tupperware® container with 10 small holes for ventilation and stored the container in a commercial refrigerator. The moth became completely immobile when left in the refrigerator for approximately 60 minutes, giving us time to perform a 5-minute surgical procedure. After the procedure, the moth was gradually returned to room temperature and was able to fly freely in about an hour.

2.3. Surgical procedures

Moths are covered by generous quantities of lamellar and piriform scales. Some procedures involve preparing moths by removing the scales using air sprays to expose the exoskeleton.
Based on our extensive experience with surgical techniques, we chose to secure the field of view under a surgical microscope by wetting the surgical field with a diluted antiseptic agent in preparation for the procedure. The surgical procedure involves embedding multiple electrodes into the body of the moth. For each electrode, a small hole is formed in the dorsal surface of the hard, shell-like exoskeleton using a pin fixed to a celluloid plate. The hole measures approximately 5 mm deep and 0.2 mm in diameter. The procedure does not involve bleeding. After the hole is formed, a 0.18-mm enameled wire prepared beforehand is bent at a 90-degree angle at the tip to allow electrode insertion into the hole to a depth of approximately 4 mm. Solder is applied beforehand to the 2-mm section at the tip of the enameled wire. The enameled wire must be fixed to the hole to prevent detachment during wing flapping. We applied small amounts of Aron Alpha® instant adhesive (Toagosei Co., Ltd.) for this purpose. (The team led by Prof. Ando used beeswax.) The instant adhesive has the advantage of solidifying together with the piriform scales surrounding the hole, which serve as auxiliary structures for fixation (Figure 1).

Fig. 1. Inserting and fixing electrodes for EMG.

2.4. Basic anatomy

The muscles involved in wing flapping are the dorsal longitudinal muscle (DLM) for the downstroke and the dorsal ventral muscle (DVM) for the upstroke. These two are called power muscles; they are indirect muscles, because they deform the curved plate on the thoracic back, moving the wings based on the lever principle. The 3rd axillary muscle is responsible for wing retraction, and the subtrochanteric muscle of the wing for abduction and downstroke; these two are called steering muscles, direct muscles that link to the wings. In this study, we intended to measure the two opposing muscles associated with the highest potentials (DLM and DVM), shown in Figure 2.

Fig. 2.
Anatomical conceptual diagram of muscles involved in wing flapping.

2.5. Hardware

We recorded EMG and the angular velocity (force) of wing flapping together with high-speed video of the moth suspended in air and constrained by string. Figure 3 presents a conceptual diagram of the experiment. The angular velocity sensor is grounded via the pin, which also serves as the fourth electrode (neutral point) of the EMG amplification circuit. The wiring to the angular velocity sensor consists of three 0.18-mm enameled wires, connected perpendicular to the pitch angle so as not to interfere with pitch angle measurements.

Fig. 3. Conceptual diagram of the experiment.

The moth is suspended in air and constrained with an enameled wire. However, it can still flap its wings, and the counteraction of the flapping motion manifests as vertical vibrations of the moth's abdomen to retain balance. This movement can be measured on the thorax and recorded as pitch angle using the angular velocity sensor.

2.6. Circuit diagram

Since the neutral point is grounded, the zero potential of the operational amplifier is adjusted manually after inserting the electrode. Once this is adjusted, the output should remain within the range of −5 to +5 when the amplification factor (gain) is 134 and fall within the range of the A/D conversion element. Based on the frequency characteristics provided in the specifications for the AD623, when the GB product is 800 kHz and the gain is 134, the upper limit of the band frequency will be 70 kHz. Since 70 kHz is far higher than the sampling rate of 20 kHz of the A/D converter, using the AD623 should pose no issues for this amplification circuit with respect to circuit design.

Fig. 4. Circuit diagram of EMG with two instrumentation operational amplifiers (OP AMP AD623) [16].

An angular velocity sensor, which relies on the principle that a vibrating body with angular velocity will exhibit the Coriolis force, was used to measure pitch angle.
This compact piezoelectric vibrating gyro sensor features a simple cap-base structure measuring approximately 0.1 cc and contains a ceramic bimorph vibrator. The gyro sensor was welded perpendicularly to the pin fixed to the moth. The pin doubled as the neutral point for the EMG amplification circuit. The LF411 operational amplifier was used with an amplification factor of 13, with the zero bias controlled using a variable resistor (Figure 5).

Fig. 5. Circuit diagram for angular velocity measurement using the Murata ENC03R [17].

In Figure 5, the pin inserted into the dorsal thorax is welded to the enameled wire and grounded; it also serves as the fourth electrode of the EMG amplification circuit.

2.7. Analog-to-digital conversion

We recorded data to a PC using a 3-channel analog-to-digital converter with a USB connection, at a sampling rate of 20,000/sec and 16-bit resolution for potential variation (y-axis). We used Microsoft Excel to analyze the linear data, using programs for calculating EMG amplitude and PP intervals created with VBA.

2.8. Results of experiment

The data files for wing flapping for the 100 moths consisted of recordings lasting 49 seconds at a sampling rate of 20 kHz and 24 seconds at a sampling rate of 40 kHz. Since we processed the data in Excel, recording duration was restricted by the maximum number of rows permitted in an Excel sheet. The data obtained are explained below. Figure 6 shows a characteristic 1-second section taken from 3-channel data recorded for 49 seconds.

Fig. 6. Digital conversion performed at a sampling rate of 20,000/sec with 16-bit potential variation (y-axis) for 1 second.

The data presented in Figure 6 correspond to a 1-second section illustrating the transition from a phase in which the moth slowly begins to flap its wings to the active flapping phase. As the amplitude of the pitch angle increases, a transition from a double-peaked waveform to a single-peaked waveform occurs.
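The peak-to-peak (PP) intervals analyzed in this section are simply the spacings between successive EMG peaks, by analogy with ECG RR intervals. A minimal sketch of extracting them from a uniformly sampled trace; the threshold and the synthetic 25 Hz test signal are illustrative assumptions, not the study's data:

```python
import math

def pp_intervals(samples, fs, threshold):
    """Peak-to-peak intervals (seconds) between local maxima above threshold."""
    peaks = [
        i for i in range(1, len(samples) - 1)
        if samples[i] > threshold
        and samples[i] >= samples[i - 1]
        and samples[i] > samples[i + 1]
    ]
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

# Illustrative check: a clean 25 Hz "wingbeat" sampled at 20 kHz
# should yield PP intervals of 1/25 s = 40 ms.
fs = 20_000
sig = [math.sin(2 * math.pi * 25 * n / fs) for n in range(fs)]  # 1 s of data
intervals = pp_intervals(sig, fs, threshold=0.5)
print(round(sum(intervals) / len(intervals), 4))  # mean PP interval in seconds
```

On real EMG, the threshold would have to sit above noise but below the smallest genuine peak; the study's VBA programs presumably handled this per recording.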
The baselines (DC components) for both EMG1 (the dorsal longitudinal muscle) and EMG2 (the dorso-ventral muscle) switch to a decreasing trend at the transition. We believe this mode change in the DC component will be useful in assessing the relationship between the movement of the muscle as a whole and the depolarization at the inserted electrodes. The DC component cannot be observed with an AC amplification circuit due to capacitive coupling. Figure 7 represents an enlarged 100-msec section of the plot above. We confirmed two peaks in the downstroke muscles, attributed to the difference in timing of the wingbeats of the left and right wings. The potential variation is most likely generated by the dorsal longitudinal muscle.

Fig. 7. Signals associated with DLM and DVM and PP intervals of EMG.

Between these peaks, there is a signal for the upstroke, which we believe is associated with the dorso-ventral muscle. Figure 8 shows the data for 100 EMG amplitudes and the corresponding amplitudes of angular velocity. The correlation coefficient of the regression curve is R = 0.792, indicating generally high correlation (outliers were not intentionally excluded). This demonstrates that greater muscular strength correlates with greater angular velocity, suggesting that continuous EMG measurements can be used for physiological functional evaluations of moth flight.

Fig. 8. Correlation between EMG and angular velocity.

Next, we analyzed frequency fluctuations. Figure 9 plots the data for the 100 PP intervals and the amplitude of angular velocity for the corresponding intervals. The PP interval here is similar to the RR interval in ECG and represents the correlation between angular velocity and the interval between wingbeats.

Fig. 9. 100 EMG amplitudes and corresponding angular velocity amplitudes.
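The correlation values reported in this section (R = 0.792 for EMG amplitude vs. angular velocity, R = 0.379 for PP-interval fluctuation vs. angular velocity) are Pearson coefficients over 100 paired measurements. A minimal sketch of the computation; the short data lists are illustrative, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired amplitudes (EMG, angular velocity):
emg = [0.8, 1.1, 1.5, 2.0, 2.4, 2.9]
gyro = [10.0, 13.0, 15.5, 21.5, 23.0, 31.0]

print(f"R = {pearson_r(emg, gyro):.3f}")
```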
The correlation coefficient between the fluctuation of the 100 PP intervals and the amplitude of angular velocity is R = 0.379, as shown in Figure 10. The accuracy of the regression curve is relatively poor (outliers were not intentionally excluded).

Fig. 10. Correlation between 100 PP intervals and angular velocity amplitudes.

Although angular velocity is a factor that contributes to the fluctuation of the lift provided by wing flapping, our investigation showed a poor correlation between the PP interval fluctuations of flapping motion and the fluctuation of lift, suggesting that the former does not directly contribute to lift. This experiment failed to identify the physiological mechanism responsible for the PP interval fluctuations of flapping motion.

## III. DISCUSSION

3.1. Physiological functional evaluations of insects

The experiment proved a correlation between the angular velocity of insects in the pitch direction and EMG signals. Previous studies have analyzed insect flight using cameras and wind tunnels, or EMG itself. However, these studies were unable to establish a correlation between pitch angle and motor physiological data (corresponding to the EMG signal amplitude in this study). Our experiment corroborates this correlation and demonstrates the potential usefulness of EMG in functional evaluations of moths. Needless to say, physics supports the existence of a linear correlation between pitch angle and the lift provided by wings. For agrochemicals, the exposure limit for chronic low-level exposure has been defined as 1/100 of the LD50 value, the dose at which half of the exposed population dies in the case of acute exposure. No current standard for insects is based on physiological functional evaluations. Even at doses far below 1/100 of the LD50 value, a decrease observed in EMG potential may indicate an onset of functional disorder.
We believe this points to the potential for novel screening methods capable of evaluating the effects of repeated exposure to multiple agrochemicals. The application of such a method would not be limited to agrochemicals. It could also be used to assess exposure dose in radiation exposure incidents, such as the Fukushima-Daiichi nuclear power plant accident, based on functional evaluations of insects [18]-[24]. After the Fukushima-Daiichi nuclear power plant accident, professional journals reported numerous observations of anomalous white spots (depigmentation) on the wings of butterflies of the Lycaenidae family. Government-specified exposure dose limits are calculated from the estimated total lifetime dose over 70 years and do not take into account functional evaluations of insects. As with LD50 values, calculations of these dose limits are not based on physiological evidence.

3.2. Δf fluctuations

We confirmed a correlation between EMG amplitude and the amplitude of angular velocity. What factor accounts for the fluctuations observed in the PP intervals, a frequency component? We believe the answer will prove crucial in producing a parameter for screening functional disorders caused by chronic low-level exposure. In human ECGs, the RR interval reflects the status of autonomic nervous system regulation. Similarly, we believe that fluctuations in the PP interval of the moth's wing flapping may reflect a state of regulation. We believe chronic low-level exposure to agrochemicals may manifest as anomalies in Δf fluctuations. In the present experiment, we constrained the moths to keep them from ascending or flying forward, a factor that may have inhibited their free flapping motions. Additionally, our focus was on the two large muscles (DLM and DVM). We may have failed to properly assess the movement of the two other, relatively minor muscles, which are known to be associated with hovering and rotation in flight.
Thus, the cause of Δf fluctuations for these muscles may only be identified through monitoring of EMG during free flight. In the next study, we expect to investigate the fluctuations of the two steering muscles (the 3rd axillary muscle and the subtrochanteric muscle), which are directly linked to the wings, and the fluctuations of the PP intervals. For subsequent experiments, we devised a transmitter using microchips and wrist-watch batteries (1.5 V) and carried out free-flight experiments after obtaining the appropriate licenses for radio transmission from the Ministry of Internal Affairs and Communications. The experiment carried out by Prof. Ando's team, which preceded ours, achieved 2-channel transmission using an 80 MHz band FM radio transmitter [13]. We plan to pursue research on Δf fluctuations to analyze the role of the remaining two muscles as part of efforts to establish a screening method for chronic low-level exposure.

## IV. CONCLUSIONS

Based on this study, we reached two conclusions (1 and 2 below). We also present two themes for future study (3 and 4).

1. We observed a correlation coefficient of R = 0.792 between the lift provided by wing flapping (estimated from angular velocity) and EMG signal amplitude.

2. The correlation coefficient between PP intervals and the lift provided by wing flapping (estimated from angular velocity) was R = 0.379. Thus, based on the results of the present processing method, we conclude that no correlation between the two can be observed.

3. The flapping mechanism appears to incorporate a dynamic gear change between different modes of muscle employment. This suggests we should consider nonlinear analytical techniques, such as the analysis of whip motion using the Lagrangian equation of motion.

4. If the Δf fluctuation can be measured in addition to EMG amplitude, it should be possible to evaluate the status of moth flight using the two parameters of potential and frequency axes.
Physiological evaluations based on amplitude and frequency fluctuations may lead to an indicator that potentially replaces the LD50 value used to assess agrochemical toxicity. ## Acknowledgement We wish to thank Professor Noriyasu Ando of the Maebashi Institute of Technology, who provided the moths used in the present study. We also wish to thank the timely help given by Prof. Kiyoshi Kurokawa, the National Graduate Institute for Policy Studies, Professor Yoshiya Muraki of Seisa University for offering his guidance with our electronic circuits. Dr. Hiroshi Juzoji at EFL Inc. created a main portion of the analytical software. ## REFERENCES [1]. R. Isenring, “Pesticides and the loss of biodiversity,” Pesticide Action Network Europe, March 2010. https://www.pan-europe.info/old/Resources/Briefings/Pesticides_and_the_loss_of_biodiversity.pdf. [2]. Beyond pesticides, “Impacts of Pesticides on Wildlife,” May 2020, https://www.beyondpesticides.org/programs/wildlife. [3]. M. DiBartolomeis, S. Kegley, P. Mineau, R. Radford, K. Klein, “An assessment of acute insecticide toxicity loading (AITL) of chemical pesticides used on agricultural land in the United States,” PLOS-ONE, August 2019. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0220029. [4]. J. Lundgren, S. Fausti, “Trading biodiversity for pest problems,” Science Advances, Vol. 1, No. 6, e1500558. DOI: , 2015. [5]. K. Suzuki, T. Inamuro, “An improved lattice kinetic scheme for incompressible viscous fluid flows,” Int. J. Mod. Phys. C 25. 1340017, 2014. [6]. M. Shindo, T. Fujikawa, K. Kikuchi, “Analysis of Roll Rotation Mechanism of the Butterfly for Development of a Small Flapping Robot,” in Proceeding of the 3rd International Conference on Design Engineering and Science, pp. 90-96, 2014. [7]. F. Lehmann, S. Pick. “The aerodynamic benefit of wing-wing interaction depends on stroke trajectory in flapping insect wings,” Journal of Experimental Biology, vol. 210, pp. 1362-1377, 2007. [8]. S. 
Hassler, “Winged Victory: Fly-Size Wing Flapper Lifts Off,” IEEE Spectrum, https://spectrum.ieee.org/aerospace/aviation/winged-victory-flysize-wing-flapper-lifts-off, 2008. [9]. T. Deora, N. Gundiah, S. Sane, “Mechanics of the thorax in flies,” Journal of Experimental Biology, vol. 220, pp. 1382-1395, 2017. [10]. K. Nakada, J. Hata, “Development and physiological assessments of multimedia avian esophageal catheter system,” Journal of Multimedia Information System, vol. 5, no. 2, pp. 121-130, 2018. [11]. I. Nakajima, H. Juzoji, K. Ozaki, N. Nakamura, “Communications Protocol Used in the Wireless Token Rings for Bird-to-Bird,” Journal of Multimedia Information System, vol. 5, no. 3, pp. 163-170, 2018. [12]. K. Nakada, I. Nakajima, J. Hata, M. Ta, “Study on Vibration Energy Harvesting with Small Coil for Embedded Avian Multimedia Application,” Journal of Multimedia and Information System, vol. 5, no. 1, pp. 47-52, 2018. [13]. N. Ando, I. Shimoyama, R. Kanzaki, “A dual-channel FM transmitter for acquisition of flight muscle activities from the freely flying hawkmoth, Agrius convolvuli,” Journal of Neuroscience Methods, vol. 115, pp. 181-187, 2002. [14]. M. Shimoda, M. Kiuchi, “Oviposition behavior of the sweet potato hornworm, Agrius convolvuli (Lepidoptera: Sphingidae), as analysed using an artificial leaf,” Appl. Entomol. Zool., vol. 33, no. 4, pp. 525-534, 1998. [15]. A. Zagorinskii, O. Gorbunov, A. Sidorov, “An Experience of Rearing Some Hawk Moths (Lepidoptera, Sphingidae) on Artificial Diets,” Entomological Review, vol. 93, no. 9, pp. 1107-1115, 2013. [18]. Agrius convolvuli, Wikipedia: https://en.wikipedia.org/wiki/Agrius_convolvuli. May 2020. [19]. K. Sakauchi, W. Taira, A. Hiyama, T. Imanaka, J. M. Otaki, “The pale grass blue butterfly in ex-evacuation zones 5.5 years after the Fukushima nuclear accident: Contributions of initial high-dose exposure to transgenerational effects,” Journal of Asia-Pacific Entomology, vol. 23, pp. 242-253, 2020. [20]. J. 
Otaki, “Fukushima’s lessons from the blue butterfly: a risk assessment of the human living environment in the post-Fukushima era,” Integr. Environ. Assess. Manag., vol. 12, pp. 667–672, 2016. [21]. J. Otaki, W. Taira, “Current status of the blue butterfly in Fukushima research,” Journal of Heredity, vol. 109, pp. 178–187, 2018. [22]. J. Otaki, A. Hiyama, M. Iwata, T. Kudo, “Phenotypic plasticity in the range-margin population of the lycaenid butterfly Zizeeria maha,” BMC Evol. Biol., vol. 10, p. 252, 2018. [23]. J. Otaki, Understanding low-dose exposure and field effects to resolve the field-laboratory paradox: multifaceted biological effects from the Fukushima nuclear accident, in Awwad, N.S., AlFaify, S.A. (Eds.), New Trends in Nuclear Science, IntechOpen, London, pp. 49–71. ISBN 978-1-78984-656-0. doi: , 2018. [24]. J. Otaki, W. Taira, “Current Status of the Blue Butterfly in Fukushima Research,” Journal of Heredity, vol. 109, no. 2, pp. 178–187, 2018. ## Authors Isao Nakajima is a specially appointed professor at Seisa University and a visiting professor at Nakajima Lab., Dept. of Emergency Medicine and Critical Care, Tokai University School of Medicine. He received the Doctor of Applied Informatics (Ph.D.) from the Graduate School of Applied Informatics, University of Hyogo, in 2009; the Doctor of Medicine (Ph.D.) from the Post Graduate School of Medical Science, Tokai University, in 1988; and the M.D. from Tokai University School of Medicine in 1980. His research aims to transmit large volumes of multimedia data from a moving ambulance via communications satellite to support the care of patients in critical condition. He is a board member of the Pacific Science Congress and a Rapporteur for eHealth of ITU-D SG2. 
Yukako Yagi works at the Digital Pathology Laboratory of the Josie Robertson Surgical Center, which serves as an incubator to explore, evaluate, and develop new technology to advance digital pathology in a clinical setting and actively engages vendors to help improve the technology and develop clinical applicability. Collaborations with clinical departments (e.g., Surgery), Radiology, Medical Physics, and Informatics groups enhance these assessments and create opportunities for multidisciplinary applications. She completed her Doctorate in Medical Science at Tokyo Medical University in Japan. She has a broad interest in various aspects of medical science, including the development and validation of technologies in digital imaging, such as color and image-quality calibration, evaluation and optimization, digital staining, 3D imaging, and decision-support systems for pathology diagnosis, research, and education. Since joining MSK, she has led pioneering work using MicroCT, Whole Slide Imaging (WSI), and confocal imaging to connect multi-dimensional and multi-modality images (e.g., single-cell to whole-body analysis). She participated in creating image viewers for several imaging modalities and established new imaging data formats. Her team has established the technology to streamline colorization within Pathology 3D, i.e., H&E, immunofluorescent, immunohistochemistry, and fluorescence in situ hybridization imaging. Once the colorization is mapped, it can be correlated with another modality, such as radiology, for a holistic analysis that will improve patient care and outcomes. The team also created 3D histology images of a single organ using thousands of slides. Her work further enriches our knowledge of disease by integrating computational pathology data with other specimen-related data (genomics, proteomics, radiographic imaging, etc.). 
This brings an unprecedented breadth and depth of information to each individual case and yields a comprehensive, multidimensional analysis that would otherwise be impossible.
https://socratic.org/questions/how-do-you-use-the-second-derivative-test-to-find-the-local-extrema-for-f-x-e-x-
# How do you use the second derivative test to find the local extrema for f(x)=e^(x^2)? Apr 4, 2018 The function $f \left(x\right) = {e}^{{x}^{2}}$ has a local minimum at $x = 0$ and no local maximum. #### Explanation: Evaluate the first and second derivatives of the function: $f ' \left(x\right) = 2 x {e}^{{x}^{2}}$ $f ' ' \left(x\right) = 2 {e}^{{x}^{2}} + 4 {x}^{2} {e}^{{x}^{2}} = 2 \left(1 + 2 {x}^{2}\right) {e}^{{x}^{2}}$ Solving the equation $f ' \left(x\right) = 0$, that is $2 x {e}^{{x}^{2}} = 0$: since the exponential is never zero, the only critical point of the function is $x = 0$. Then we see that $f ' ' \left(0\right) > 0$, so the critical point is a local minimum. graph{e^(x^2) [-2, 2, -1, 10]}
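For reference, the general criterion the answer applies is the second derivative test: for a twice-differentiable $f$ with a critical point $c$,

```latex
f'(c) = 0 \quad\text{and}\quad
\begin{cases}
f''(c) > 0 & \Rightarrow \text{$f$ has a local minimum at $c$,}\\
f''(c) < 0 & \Rightarrow \text{$f$ has a local maximum at $c$,}\\
f''(c) = 0 & \Rightarrow \text{the test is inconclusive.}
\end{cases}
```

Here $c = 0$ and $f''(0) = 2(1 + 0)e^{0} = 2 > 0$, which is the first case, so $x = 0$ is a local minimum.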
http://web.emn.fr/x-info/sdemasse/gccat/KBoolean_channel.html
### 3.7.33. Boolean channel A constraint that allows for making the link between a set of 0-1 variables ${B}_{1},{B}_{2},...,{B}_{n}$ and a domain variable $V$. It enforces a condition of the form $V=i⇔{B}_{i}=1$.
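As an illustration (our own sketch, not code from the catalog), the channeling condition can be checked directly: for every index i, B_i must be 1 exactly when V equals i.

```cpp
#include <cassert>
#include <vector>

// Sketch: check that a 0-1 vector B and a domain variable V satisfy the
// channeling condition V = i  <=>  B_i = 1. Indices are 1-based to match
// the catalog's B_1..B_n notation.
bool boolean_channel_holds(const std::vector<int>& B, int V) {
    for (int i = 1; i <= static_cast<int>(B.size()); ++i) {
        bool set = (B[i - 1] == 1);
        if (set != (V == i)) return false;  // the biconditional must hold for every i
    }
    return true;
}
```

A consequence of the condition is that exactly one of the B_i is set (assuming V ranges over 1..n), which is why this constraint is often used to channel between a one-hot encoding and an integer domain variable.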
https://openstax.org/books/college-physics-ap-courses/pages/18-section-summary
College Physics for AP® Courses # Section Summary ### 18.1 Static Electricity and Charge: Conservation of Charge • There are only two types of charge, which we call positive and negative. • Like charges repel, unlike charges attract, and the force between charges decreases with the square of the distance. • The vast majority of positive charge in nature is carried by protons, while the vast majority of negative charge is carried by electrons. • The electric charge of one electron is equal in magnitude and opposite in sign to the charge of one proton. • An ion is an atom or molecule that has nonzero total charge due to having unequal numbers of electrons and protons. • The SI unit for charge is the coulomb (C), with protons and electrons having charges of opposite sign but equal magnitude; the magnitude of this basic charge is $|q_e| = 1.60 \times 10^{-19}\,\mathrm{C}$. • Whenever charge is created or destroyed, equal amounts of positive and negative are involved. • Most often, existing charges are separated from neutral objects to obtain some net charge. • Both positive and negative charges exist in neutral objects and can be separated by rubbing one object with another. For macroscopic objects, negatively charged means an excess of electrons and positively charged means a depletion of electrons. • The law of conservation of charge ensures that whenever a charge is created, an equal charge of the opposite sign is created at the same time. ### 18.2 Conductors and Insulators • Polarization is the separation of positive and negative charges in a neutral object. • A conductor is a substance that allows charge to flow freely through its atomic structure. • An insulator holds charge within its atomic structure. 
• Objects with like charges repel each other, while those with unlike charges attract each other. • A conducting object is said to be grounded if it is connected to the Earth through a conductor. Grounding allows transfer of charge to and from the earth's large reservoir. • Objects can be charged by contact with another charged object and obtain the same sign charge. • If an object is temporarily grounded, it can be charged by induction, and obtains the opposite sign charge. • Polarized objects have their positive and negative charges concentrated in different areas, giving them a non-symmetrical charge. • Polar molecules have an inherent separation of charge. ### 18.3 Conductors and Electric Fields in Static Equilibrium • A conductor allows free charges to move about within it. • The electrical forces around a conductor will cause free charges to move around inside the conductor until static equilibrium is reached. • Any excess charge will collect along the surface of a conductor. • Conductors with sharp corners or points will collect more charge at those points. • A lightning rod is a conductor with sharply pointed ends that collect excess charge on the building caused by an electrical storm and allow it to dissipate back into the air. • Electrical storms result when the electrical field of Earth's surface in certain locations becomes more strongly charged, due to changes in the insulating effect of the air. • A Faraday cage acts like a shield around an object, preventing electric charge from penetrating inside. ### 18.4 Coulomb’s Law • Frenchman Charles Coulomb was the first to publish the mathematical equation that describes the electrostatic force between two objects. • Coulomb's law gives the magnitude of the force between point charges. 
It is $F = k\frac{|q_1 q_2|}{r^2}$, where $q_1$ and $q_2$ are two point charges separated by a distance $r$, and $k \approx 8.99 \times 10^9\ \mathrm{N\cdot m^2/C^2}$. • This Coulomb force is extremely basic, since most charges are due to point-like particles. It is responsible for all electrostatic effects and underlies most macroscopic forces. • The Coulomb force is extraordinarily strong compared with the gravitational force, another basic force—but unlike gravitational force it can cancel, since it can be either attractive or repulsive. • The electrostatic force between two subatomic particles is far greater than the gravitational force between the same two particles. ### 18.5 Electric Field: Concept of a Field Revisited • The electrostatic force field surrounding a charged object extends out into space in all directions. • The electrostatic force exerted by a point charge on a test charge at a distance $r$ depends on the charge of both charges, as well as the distance between the two. • The electric field $E$ is defined to be $E = \frac{F}{q}$, where $F$ is the Coulomb or electrostatic force exerted on a small positive test charge $q$. $E$ has units of N/C. • The magnitude of the electric field $E$ created by a point charge $Q$ is $E = k\frac{|Q|}{r^2}$, where $r$ is the distance from $Q$. The electric field $E$ is a vector and fields due to multiple charges add like vectors. ### 18.6 Electric Field Lines: Multiple Charges • Drawings of electric field lines are useful visual tools. 
The properties of electric field lines for any charge distribution are that: • Field lines must begin on positive charges and terminate on negative charges, or at infinity in the hypothetical case of isolated charges. • The number of field lines leaving a positive charge or entering a negative charge is proportional to the magnitude of the charge. • The strength of the field is proportional to the closeness of the field lines—more precisely, it is proportional to the number of lines per unit area perpendicular to the lines. • The direction of the electric field is tangent to the field line at any point in space. • Field lines can never cross. ### 18.7 Electric Forces in Biology • Many molecules in living organisms, such as DNA, carry a charge. • An uneven distribution of the positive and negative charges within a polar molecule produces a dipole. • The effect of a Coulomb field generated by a charged object may be reduced or blocked by other nearby charged objects. • Biological systems contain water, and because water molecules are polar, they have a strong effect on other molecules in living systems. ### 18.8 Applications of Electrostatics • Electrostatics is the study of electric fields in static equilibrium. • In addition to research using equipment such as a Van de Graaff generator, many practical applications of electrostatics exist, including photocopiers, laser printers, ink-jet printers and electrostatic air filters.
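As a numeric illustration (our own sketch; the constants are the values given in sections 18.1 and 18.4, the example charges and distances are ours), the Coulomb-force and point-charge-field formulas can be coded directly:

```cpp
#include <cassert>
#include <cmath>

// F = k|q1 q2|/r^2 and E = k|Q|/r^2, using the constants from the summary.
const double k = 8.99e9;      // Coulomb constant, N*m^2/C^2
const double q_e = 1.60e-19;  // magnitude of the basic charge, C

// Magnitude of the electrostatic force between two point charges (newtons).
double coulomb_force(double q1, double q2, double r) {
    return k * std::fabs(q1 * q2) / (r * r);
}

// Magnitude of the electric field of a point charge Q at distance r (N/C).
double point_charge_field(double Q, double r) {
    return k * std::fabs(Q) / (r * r);
}
```

For example, two electrons 1 nm apart repel with about 2.3 × 10⁻¹⁰ N; and since E = F/q, the field of a point charge equals the force it would exert on a 1 C test charge at the same distance.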
https://stacks.math.columbia.edu/tag/0681
Lemma 37.58.4. A composition of pseudo-coherent morphisms of schemes is pseudo-coherent. Proof. This translates into the following algebra result: If $A \to B \to C$ are composable pseudo-coherent ring maps then $A \to C$ is pseudo-coherent. This follows from either More on Algebra, Lemma 15.81.13 or More on Algebra, Lemma 15.81.15. $\square$
https://www.gamedev.net/forums/topic/344050-cint-to-char/
# [C++]Int to Char ## Recommended Posts I have an integer value, but I want it to be shown as a char. I don't mean ASCII values, I mean that if the int is equal to say, 128, I want the values 128 shown as chars. String class is applicable here too, probably would be easier for me? Thanks. ##### Share on other sites Look into the itoa() function. It is in the stdlib.h section. Much luck! ##### Share on other sites with c++ do this

```cpp
#include <sstream>
#include <string>
using namespace std;

string to_string( int val )
{
    stringstream stream;
    stream << val;
    return stream.str();
}

// or more generally
template< class Type >
string to_string( Type val )
{
    stringstream stream;
    stream << val;
    return stream.str();
}
```

i learned this yesterday, so i may be *very* wrong ##### Share on other sites rip-off is correct. Use stringstreams. In my opinion, they are the easiest way to do this. They're a great thing to have in your toolbox. ##### Share on other sites Wasn't there a thread recently in which someone explained why not to use stringstreams? (bad performance, memory leaks etc.) Anyway, I've seen a benchmark somewhere (not sure where) in which itoa() was about 6 times as fast as std::stringstream. ##### Share on other sites Quote: Original post by Kalasjniekof: Wasn't there a thread recently in which someone explained why not to use stringstreams? (bad performance, memory leaks etc.) Anyway, I've seen a benchmark somewhere (not sure where) in which itoa() was about 6 times as fast as std::stringstream. There's something seriously, seriously wrong with your program if converting from a string to an int is a bottleneck. Speed is irrelevant. Memory leaks would be surprising, unless you forget to delete the characters you pass to it. ##### Share on other sites Sorry for such a dumb question, I completely forgot about all the conversion functions, getting back into the C++ groove. ;) Thanks guys ##### Share on other sites How about the old trusty sprintf? 
It's what I use most of the time. ##### Share on other sites Quote: Original post by parklife: How about the old trusty sprintf? It's what I use most of the time. you misspelled snprintf there ##### Share on other sites Quote: Original post by rip-off Quote: Original post by parklife: How about the old trusty sprintf? It's what I use most of the time. you misspelled snprintf there There's pretty much no reason to use snprintf in this situation. ##### Share on other sites Quote: Original post by rip-off: you misspelled snprintf there So sprintf is a no-no? ##### Share on other sites i read that sprintf and so on are major causes of bugs, so better to be safe than sorry. snprintf is safer in my opinion, with the stringstream being safer again. what if you get on fine with sprintf and continue to use it in other situations until there is one that can cause a bug. i don't like to encourage its use, esp in the beginners forum ##### Share on other sites i suggest the stringstream method or boost::lexical_cast. boost::lexical_cast<std::string>(myInteger) ##### Share on other sites Quote: Original post by rip-off: i read that sprintf and so on are major causes of bugs, so better to be safe than sorry. snprintf is safer in my opinion, with the stringstream being safer again. what if you get on fine with sprintf and continue to use it in other situations until there is one that can cause a bug. i don't like to encourage its use, esp in the beginners forum snprintf is not necessarily safer than sprintf. It just produces different kinds of security holes. ##### Share on other sites If you don't need to *store* that text data (the chars '1', '2', '8' in sequence) but just output them, then - just output them. Streams are already templated to handle things properly according to the type of what you output. (In some circumstances you can get 'other' behaviour via explicit casts, because templates make use of the static, exact type of parameters.) 
```cpp
int a = 65;
cout << "The ascii value for '" << static_cast<char>(a) << "' is " << a << endl;
```

If you want to store the value somewhere, use a std::stringstream. It's a kind of stream, so you work with it just like the console (which is why you can convert things "magically" - it's using the same code that the console streams do), except its "source/destination buffer" is a string in memory. Since it's not explicitly an input or output stream, you can do both; so the stringstream technique works by putting the value into the buffer as an int (doing the conversion), then pulling it back out as a std::string (doing another conversion back that way). The boost::lexical_cast basically is a wrapper for the stringstream method, by the way: it does a bit of extra work to make things safer, more or less. ##### Share on other sites most of the times i use itoa... and sometimes i use:

```cpp
int num = 9;
CString cs;
cs.Format("%d", num);
```

I never know what kind of strings are the best to work with... CString? std::string? or plain char*? thanks ##### Share on other sites Quote: Original post by fuchiefck: I never know what kind of strings are the best to work with... CString? std::string? or plain char*? If you are doing MFC programming, CString is the best suited. Other than that you have to include MFC header files to get access to it. For general programming, std::string will often suffice and for the most part, using the std::string will make your life easier. Highly recommended for beginners who are not yet accustomed to pointers. For programmers that are not beginners, char* or std::string are used on an as-needed basis. It all just depends on the situation and the programmer. There was a big thread on this in the For Beginner's forum, so if you want more information on this topic, you can check the archives. I don't have it bookmarked, but maybe someone else does. ##### Share on other sites [edit: OT... I concur with Gumpy MacDrunken on the original point.] 
I recently switched to using std::string primarily, after a long stubborn grudge with char *'s. Using std::string isn't much easier to use, but the main advantage is simply less code to write since the std::string will be automatically cleaned up on function exit or class deletion. Less code to write means faster development, and less chance for human mistakes. ##### Share on other sites Quote: Original post by chbrules: I have an integer value, but I want it to be shown as a char. I don't mean ASCII values, I mean that if the int is equal to say, 128, I want the values 128 shown as chars. String class is applicable here too, probably would be easier for me? Thanks. I figure you got multiple answers by now, but just for study, why not just use a union?

```cpp
// Shares the same memory location.
union u_type
{
    int i;
    char ch;
};

int main()
{
    u_type cnvt;
    cnvt.i = 65;
    cout << cnvt.ch << endl; // outputs 'A'
    return 0;
}
```
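Summing up the thread, the two portable suggestions can be sketched side by side (our own sketch; std::to_string is C++11 and postdates this discussion, while the union trick reads an inactive union member, which is formally undefined behaviour in C++):

```cpp
#include <cassert>
#include <cstdio>
#include <sstream>
#include <string>

// int -> string via a stringstream, as suggested in the thread.
std::string via_stream(int val) {
    std::stringstream stream;
    stream << val;           // the stream performs the int -> text conversion
    return stream.str();
}

// int -> string via snprintf: bounded, unlike sprintf.
std::string via_snprintf(int val) {
    char buf[16];                               // large enough for any 32-bit int
    std::snprintf(buf, sizeof buf, "%d", val);
    return std::string(buf);
}
```

Usage: `via_stream(128)` and `via_snprintf(128)` both yield `"128"`, as does the modern one-liner `std::to_string(128)`.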