http://www.maa.org/programs/faculty-and-departments/course-communities/matrix-calculator-1
# Matrix Calculator

A tool for finding properties of a single matrix, such as scalar multiples, powers, the inverse, the determinant, the characteristic polynomial, eigenvalues, and eigenvectors. A reasonable calculator for properties of a matrix. It is a little hard to navigate. It does all the computations for the student, so they don't have to learn how to do them.
https://en.wikipedia.org/wiki/Plutonium
# Plutonium

Plutonium, 94Pu
Pronunciation: /pluːˈtoʊniəm/
Appearance: silvery white, tarnishing to dark gray in air
Mass number: [244]
Position in the periodic table: atomic number 94, period 7, f-block (group n/a); between neptunium and americium, below samarium
Electron configuration: [Rn] 5f6 7s2 (2, 8, 18, 32, 24, 8, 2 electrons per shell)

Physical properties
Phase at STP: solid
Melting point: 912.5 K (639.4 °C, 1182.9 °F)
Boiling point: 3505 K (3228 °C, 5842 °F)
Density (near r.t.): 19.85 g/cm3 (239Pu)[1]; 16.63 g/cm3 when liquid (at m.p.)
Heat of fusion: 2.82 kJ/mol
Heat of vaporization: 333.5 kJ/mol
Molar heat capacity: 35.5 J/(mol·K)

Vapor pressure

| P (Pa)   | 1    | 10   | 100  | 1 k  | 10 k | 100 k |
|----------|------|------|------|------|------|-------|
| at T (K) | 1756 | 1953 | 2198 | 2511 | 2926 | 3499  |

Atomic properties
Oxidation states: +2, +3, +4, +5, +6, +7, +8 (an amphoteric oxide)
Electronegativity: 1.28 (Pauling scale)
Ionization energy (1st): 584.7 kJ/mol

Other properties
Natural occurrence: from decay
Crystal structure: monoclinic
Speed of sound: 2260 m/s
Thermal expansion: 46.7 µm/(m·K) (at 25 °C)
Thermal conductivity: 6.74 W/(m·K)
Electrical resistivity: 1.460 µΩ·m (at 0 °C)
Magnetic ordering: paramagnetic
Young's modulus: 96 GPa; shear modulus: 43 GPa; Poisson ratio: 0.21
CAS Number: 7440-07-5

History
Naming: after the dwarf planet Pluto, itself named after Pluto, classical god of the underworld
Discovery: Glenn T. Seaborg, Arthur Wahl, Joseph W. Kennedy, Edwin McMillan (1940–1941)

Main isotopes of plutonium

| Isotope | Abundance | Half-life (t1/2) | Decay mode | Product |
|---------|-----------|------------------|------------|---------|
| 238Pu   | trace     | 87.74 y          | SF; α      | 234U    |
| 239Pu   | trace     | 2.41×10^4 y      | SF; α      | 235U    |
| 240Pu   | trace     | 6500 y           | SF; α      | 236U    |
| 241Pu   | syn       | 14 y             | β−; SF     | 241Am   |
| 242Pu   | syn       | 3.73×10^5 y      | SF; α      | 238U    |
| 244Pu   | trace     | 8.08×10^7 y      | α; SF      | 240U    |

Plutonium is a radioactive chemical element with the symbol Pu and atomic number 94. It is an actinide metal of silvery-gray appearance that tarnishes when exposed to air, and forms a dull coating when oxidized. The element normally exhibits six allotropes and four oxidation states. It reacts with carbon, halogens, nitrogen, silicon, and hydrogen. When exposed to moist air, it forms oxides and hydrides that can expand the sample up to 70% in volume, which in turn flake off as a powder that is pyrophoric. It is radioactive and can accumulate in bones, which makes the handling of plutonium dangerous.

Plutonium was first synthetically produced and isolated in late 1940 and early 1941, by deuteron bombardment of uranium-238 in the 1.5-metre (60 in) cyclotron at the University of California, Berkeley. First, neptunium-238 (half-life 2.1 days) was synthesized, which subsequently beta-decayed to form the new element with atomic number 94 and atomic weight 238 (half-life 87.7 years). Since uranium had been named after the planet Uranus and neptunium after the planet Neptune, element 94 was named after Pluto, which at the time was considered to be a planet as well.
Wartime secrecy prevented the University of California team from publishing its discovery until 1948. Plutonium is the element with the highest atomic number to occur in nature. Trace quantities arise in natural uranium-238 deposits when uranium-238 captures neutrons emitted by the decay of other uranium-238 atoms. Plutonium has been much more common on Earth since 1945, as a product of neutron capture and beta decay, in which some of the neutrons released by the fission process convert uranium-238 nuclei into plutonium-239. The quantities of isotopes in the decay chains at a given time are calculated with the Bateman equation.

Both plutonium-239 and plutonium-241 are fissile, meaning that they can sustain a nuclear chain reaction, leading to applications in nuclear weapons and nuclear reactors. Plutonium-240 exhibits a high rate of spontaneous fission, raising the neutron flux of any sample containing it. The presence of plutonium-240 limits a sample's usability for weapons or its quality as reactor fuel, and the percentage of plutonium-240 determines its grade (weapons-grade, fuel-grade, or reactor-grade). Plutonium-238 has a half-life of 87.7 years and emits alpha particles. It is a heat source in radioisotope thermoelectric generators, which are used to power some spacecraft. Plutonium isotopes are expensive and inconvenient to separate, so particular isotopes are usually manufactured in specialized reactors.

Producing plutonium in useful quantities for the first time was a major part of the Manhattan Project during World War II, which developed the first atomic bombs. The Fat Man bombs used in the Trinity nuclear test in July 1945, and in the bombing of Nagasaki in August 1945, had plutonium cores. Human radiation experiments studying plutonium were conducted without informed consent, and several criticality accidents, some lethal, occurred after the war.
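The Bateman equation mentioned above gives the abundance of each member of a linear decay chain over time. The following is a minimal numerical sketch, not code from any standard library: the function name is our own, and the half-lives are the ones quoted later in this article for the 239U → 239Np → 239Pu chain.

```python
import math

def bateman(n1_0, lambdas, t):
    """Atoms of the LAST nuclide in a linear decay chain at time t,
    starting from n1_0 atoms of the first member, per the Bateman solution.
    `lambdas` lists the decay constants from first to last member.
    Assumes all decay constants are distinct (the generic case)."""
    total = 0.0
    for i, lam_i in enumerate(lambdas):
        denom = 1.0
        for j, lam_j in enumerate(lambdas):
            if j != i:
                denom *= (lam_j - lam_i)
        total += math.exp(-lam_i * t) / denom
    for lam in lambdas[:-1]:   # product of all but the last decay constant
        total *= lam
    return n1_0 * total

# Half-lives quoted in the article: 239U 23.5 min, 239Np 2.3565 d, 239Pu 24,110 y.
LN2 = math.log(2)
lam_u239 = LN2 / (23.5 * 60)                  # s^-1
lam_np239 = LN2 / (2.3565 * 86400)            # s^-1
lam_pu239 = LN2 / (24110 * 365.25 * 86400)    # s^-1, effectively stable on this timescale

# After 30 days, essentially all of an initial 1e20 atoms of 239U have
# passed through 239Np into 239Pu.
n_pu = bateman(1e20, [lam_u239, lam_np239, lam_pu239], t=30 * 86400)
print(f"239Pu atoms after 30 d: {n_pu:.4e}")
```

With a 2.36-day intermediate, 30 days is about 13 half-lives of 239Np, so the 239Pu yield comes out within a fraction of a percent of the initial 239U inventory.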
Disposal of plutonium waste from nuclear power plants and dismantled nuclear weapons built during the Cold War is a nuclear-proliferation and environmental concern. Other sources of plutonium in the environment are fallout from the numerous above-ground nuclear tests, now banned.

## Characteristics

### Physical properties

Plutonium, like most metals, has a bright silvery appearance at first, much like nickel, but it oxidizes very quickly to a dull gray, although yellow and olive green are also reported.[2][3] At room temperature plutonium is in its α (alpha) form. This, the most common structural form of the element (allotrope), is about as hard and brittle as gray cast iron unless it is alloyed with other metals to make it soft and ductile. Unlike most metals, it is not a good conductor of heat or electricity. It has a low melting point (640 °C) and an unusually high boiling point (3,228 °C).[2]

Alpha decay, the release of a high-energy helium nucleus, is the most common form of radioactive decay for plutonium.[4] A 5 kg mass of 239Pu contains about 12.5×10^24 atoms. With a half-life of 24,100 years, about 11.5×10^12 of its atoms decay each second by emitting a 5.157 MeV alpha particle. This amounts to 9.68 watts of power. Heat produced by the deceleration of these alpha particles makes it warm to the touch.[5][6]

Resistivity is a measure of how strongly a material opposes the flow of electric current.
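The decay-heat arithmetic above can be checked with a short script. This is a sketch: the physical constants below are standard values, not taken from the article, and the variable names are ours.

```python
import math

# Quoted figures for 5 kg of 239Pu: ~12.5e24 atoms, ~11.5e12 decays/s,
# 5.157 MeV per alpha particle. Constants are standard reference values.
AVOGADRO = 6.02214076e23        # atoms per mole
MOLAR_MASS_PU239 = 239.052      # g/mol
MEV_TO_J = 1.602176634e-13      # joules per MeV
YEAR_S = 3.15576e7              # seconds per Julian year

mass_g = 5000.0
half_life_s = 24100 * YEAR_S
alpha_energy_mev = 5.157

atoms = mass_g / MOLAR_MASS_PU239 * AVOGADRO           # ~1.26e25 atoms
activity = atoms * math.log(2) / half_life_s           # decays per second
power_w = activity * alpha_energy_mev * MEV_TO_J       # thermal watts

print(f"{atoms:.3e} atoms, {activity:.3e} Bq, {power_w:.2f} W")
```

This reproduces the quoted atom count and decay rate; the power lands near 9.5 W, slightly below the quoted 9.68 W, a gap within the rounding of the quoted inputs.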
The resistivity of plutonium at room temperature is very high for a metal, and it gets even higher at lower temperatures, which is unusual for metals.[7] This trend continues down to 100 K, below which resistivity rapidly decreases for fresh samples.[7] Resistivity then begins to increase with time at around 20 K due to radiation damage, with the rate dictated by the isotopic composition of the sample.[7]

Because of self-irradiation, a sample of plutonium fatigues throughout its crystal structure, meaning the ordered arrangement of its atoms becomes disrupted by radiation with time.[8] Self-irradiation can also lead to annealing, which counteracts some of the fatigue effects as temperature increases above 100 K.[9]

Unlike most materials, plutonium increases in density when it melts, by 2.5%, but the liquid metal exhibits a linear decrease in density with temperature.[7] Near the melting point, liquid plutonium has very high viscosity and surface tension compared to other metals.[8]

### Allotropes

Plutonium normally has six allotropes at ambient pressure: alpha (α), beta (β), gamma (γ), delta (δ), delta prime (δ′), and epsilon (ε), and forms a seventh (zeta, ζ) at high temperature within a limited pressure range.[10] These allotropes, which are different structural modifications or forms of an element, have very similar internal energies but significantly varying densities and crystal structures. This makes plutonium very sensitive to changes in temperature, pressure, or chemistry, and allows for dramatic volume changes following phase transitions from one allotropic form to another.[8] The densities of the different allotropes vary from 16.00 g/cm3 to 19.86 g/cm3.[11]

The presence of these many allotropes makes machining plutonium very difficult, as it changes state very readily. For example, the α form exists at room temperature in unalloyed plutonium.
It has machining characteristics similar to cast iron but changes to the plastic and malleable β (beta) form at slightly higher temperatures.[12] The reasons for the complicated phase diagram are not entirely understood. The α form has a low-symmetry monoclinic structure, hence its brittleness, strength, compressibility, and poor thermal conductivity.[10]

Plutonium in the δ (delta) form normally exists in the 310 °C to 452 °C range but is stable at room temperature when alloyed with a small percentage of gallium, aluminium, or cerium, enhancing workability and allowing it to be welded.[12] The δ form has more typical metallic character, and is roughly as strong and malleable as aluminium.[10] In fission weapons, the explosive shock waves used to compress a plutonium core also cause a transition from the usual δ phase plutonium to the denser α form, significantly helping to achieve supercriticality.[13] The ε phase, the highest-temperature solid allotrope, exhibits anomalously high atomic self-diffusion compared to other elements.[8]

### Nuclear fission

[Image: A ring of weapons-grade, 99.96% pure electrorefined plutonium, enough for one bomb core. The ring weighs 5.3 kg, is about 11 cm in diameter, and its shape helps with criticality safety.]

Plutonium is a radioactive actinide metal whose isotope plutonium-239 is one of the three primary fissile isotopes (uranium-233 and uranium-235 are the other two); plutonium-241 is also highly fissile.
To be considered fissile, an isotope's atomic nucleus must be able to break apart, or fission, when struck by a slow-moving neutron, and to release enough additional neutrons to sustain the nuclear chain reaction by splitting further nuclei.[14] Pure plutonium-239 may have a multiplication factor (keff) larger than one, which means that if the metal is present in sufficient quantity and with an appropriate geometry (e.g., a sphere of sufficient size), it can form a critical mass.[15]

During fission, a fraction of the nuclear binding energy, which holds a nucleus together, is released as a large amount of electromagnetic and kinetic energy (much of the latter being quickly converted to thermal energy). Fission of a kilogram of plutonium-239 can produce an explosion equivalent to 21,000 tons of TNT (88,000 GJ). It is this energy that makes plutonium-239 useful in nuclear weapons and reactors.[5]

The presence of the isotope plutonium-240 in a sample limits its nuclear bomb potential, as plutonium-240 has a relatively high spontaneous fission rate (~440 fissions per second per gram; over 1,000 neutrons per second per gram),[16] raising the background neutron levels and thus increasing the risk of predetonation.[17] Plutonium is identified as weapons-grade, fuel-grade, or reactor-grade based on the percentage of plutonium-240 that it contains. Weapons-grade plutonium contains less than 7% plutonium-240. Fuel-grade plutonium contains from 7% to less than 19%, and power reactor-grade contains 19% or more plutonium-240. Supergrade plutonium, with less than 4% plutonium-240, is used in U.S. Navy weapons stored in proximity to ship and submarine crews, due to its lower radioactivity.[18] The isotope plutonium-238 is not fissile but can undergo nuclear fission easily with fast neutrons, as well as alpha decay.[5]

### Isotopes and nucleosynthesis

[Image: Uranium–plutonium and thorium–uranium decay chains]

Twenty radioactive isotopes of plutonium have been characterized.
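Before moving on to the isotopes, the grade thresholds and energy figures quoted above can be collected in a short sketch. The function and constant names are ours, not any standard classification API; the numbers are those given in the text.

```python
def plutonium_grade(pct_pu240: float) -> str:
    """Classify plutonium by its 240Pu percentage, per the thresholds above."""
    if pct_pu240 < 4:
        return "supergrade"
    if pct_pu240 < 7:
        return "weapons-grade"
    if pct_pu240 < 19:
        return "fuel-grade"
    return "reactor-grade"

# Energy figures from the text: fissioning 1 kg of 239Pu releases about
# 88,000 GJ, quoted as equivalent to ~21,000 tons of TNT.
FISSION_GJ_PER_KG = 88_000
TNT_GJ_PER_TON = FISSION_GJ_PER_KG / 21_000   # implied TNT equivalent, GJ per ton

print(plutonium_grade(5.5))                    # weapons-grade
print(f"{TNT_GJ_PER_TON:.2f} GJ per ton of TNT")
```

As a consistency check, the implied TNT equivalent works out to about 4.19 GJ per ton, matching the conventional value of 4.184 GJ/t.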
The longest-lived are plutonium-244, with a half-life of 80.8 million years, plutonium-242, with a half-life of 373,300 years, and plutonium-239, with a half-life of 24,110 years. All of the remaining radioactive isotopes have half-lives that are less than 7,000 years. This element also has eight metastable states, though all have half-lives less than one second.[4]

The known isotopes of plutonium range in mass number from 228 to 247. The primary decay modes of isotopes with mass numbers lower than the most stable isotope, plutonium-244, are spontaneous fission and alpha emission, mostly forming uranium (92 protons) and neptunium (93 protons) isotopes as decay products (neglecting the wide range of daughter nuclei created by fission processes). The primary decay mode for isotopes with mass numbers higher than plutonium-244 is beta emission, mostly forming americium (95 protons) isotopes as decay products. Plutonium-241 is the parent isotope of the neptunium decay series, decaying to americium-241 via beta emission.[4][19]

Plutonium-238 and 239 are the most widely synthesized isotopes.[5] Plutonium-239 is synthesized via the following reaction using uranium (U) and neutrons (n) via beta decay (β−) with neptunium (Np) as an intermediate:[20]

$$ {}^{238}_{92}\mathrm{U} + {}^{1}_{0}\mathrm{n} \longrightarrow {}^{239}_{92}\mathrm{U} \xrightarrow[23.5\ \text{min}]{\beta^-} {}^{239}_{93}\mathrm{Np} \xrightarrow[2.3565\ \text{d}]{\beta^-} {}^{239}_{94}\mathrm{Pu} $$

Neutrons from the fission of uranium-235 are captured by uranium-238 nuclei to form uranium-239; a beta decay converts a neutron into a proton to form neptunium-239 (half-life 2.36 days), and another beta decay forms plutonium-239.[21] Egon Bretscher, working on the British Tube Alloys project, predicted this reaction theoretically in 1940.[22]

Plutonium-238 is synthesized by bombarding uranium-238 with deuterons (D, the nuclei of heavy hydrogen) in the following reaction:[23]

$$ {}^{238}_{92}\mathrm{U} + {}^{2}_{1}\mathrm{D} \longrightarrow {}^{238}_{93}\mathrm{Np} + 2\,{}^{1}_{0}\mathrm{n}, \qquad {}^{238}_{93}\mathrm{Np} \xrightarrow[2.117\ \text{d}]{\beta^-} {}^{238}_{94}\mathrm{Pu} $$

In this process, a deuteron hitting uranium-238 produces two neutrons and neptunium-238, which spontaneously decays by emitting negative beta particles to form plutonium-238.[24]

### Decay heat and fission properties

Plutonium isotopes undergo radioactive decay, which produces decay heat. Different isotopes produce different amounts of heat per unit mass. Decay heat is usually listed as watts per kilogram or milliwatts per gram. In larger pieces of plutonium (e.g. a weapon pit) with inadequate heat removal, the resulting self-heating may be significant.

Decay heat of plutonium isotopes[25]

| Isotope | Decay mode | Half-life (years) | Decay heat (W/kg) | Spontaneous fission neutrons (1/(g·s)) | Comment |
|---------|------------|-------------------|-------------------|----------------------------------------|---------|
| 238Pu | alpha to 234U | 87.74 | 560 | 2600 | Very high decay heat. Even small amounts can cause significant self-heating. Used on its own in radioisotope thermoelectric generators. |
| 239Pu | alpha to 235U | 24,100 | 1.9 | 0.022 | The principal fissile isotope in use. |
| 240Pu | alpha to 236U; spontaneous fission | 6560 | 6.8 | 910 | The principal impurity in samples of the 239Pu isotope. The plutonium grade is usually listed as the percentage of 240Pu. High spontaneous fission hinders use in nuclear weapons. |
| 241Pu | beta-minus to 241Am | 14.4 | 4.2 | 0.049 | Decays to americium-241; its buildup presents a radiation hazard in older samples. |
| 242Pu | alpha to 238U | 376,000 | 0.1 | 1700 | Decays to 238U through alpha decay; will also decay by spontaneous fission. |

### Compounds and chemistry

[Image: Various oxidation states of plutonium in solution]

At room temperature, pure plutonium is silvery in color but gains a tarnish when oxidized.[26] The element displays four common ionic oxidation states in aqueous solution and one rare one:[11]

• Pu(III), as Pu^3+ (blue lavender)
• Pu(IV), as Pu^4+ (yellow brown)
• Pu(V), as PuO2^+ (light pink)[note 1]
• Pu(VI), as PuO2^2+ (pink orange)
• Pu(VII), as PuO5^3− (green); the heptavalent ion is rare.
The color shown by plutonium solutions depends on both the oxidation state and the nature of the acid anion.[28] It is the acid anion that influences the degree of complexing (how atoms connect to a central atom) of the plutonium species. Additionally, the formal +2 oxidation state of plutonium is known in the complex [K(2.2.2-cryptand)][PuIICp″3], Cp″ = C5H3(SiMe3)2.[29] A +8 oxidation state is possible as well in the volatile tetroxide PuO4.[30] Though it readily decomposes via a reduction mechanism similar to FeO4, PuO4 can be stabilized in alkaline solutions and chloroform.[31][30]

Metallic plutonium is produced by reacting plutonium tetrafluoride with barium, calcium or lithium at 1200 °C.[32] It is attacked by acids, oxygen, and steam but not by alkalis, and dissolves easily in concentrated hydrochloric, hydroiodic and perchloric acids.[33] Molten metal must be kept in a vacuum or an inert atmosphere to avoid reaction with air.[12] At 135 °C the metal will ignite in air and will explode if placed in carbon tetrachloride.[34]

[Image: Plutonium pyrophoricity can cause it to look like a glowing ember under certain conditions.]
[Image: Twenty micrograms of pure plutonium hydroxide]

Plutonium is a reactive metal. In moist air or moist argon, the metal oxidizes rapidly, producing a mixture of oxides and hydrides.[2] If the metal is exposed long enough to a limited amount of water vapor, a powdery surface coating of PuO2 is formed.[2] Plutonium hydride is also formed, but an excess of water vapor forms only PuO2.[33] Plutonium shows enormous, reversible reaction rates with pure hydrogen, forming plutonium hydride.[8] It also reacts readily with oxygen, forming PuO and PuO2 as well as intermediate oxides; plutonium oxide fills 40% more volume than plutonium metal. The metal reacts with the halogens, giving rise to compounds with the general formula PuX3, where X can be F, Cl, Br or I; PuF4 is also seen. The following oxyhalides are observed: PuOCl, PuOBr and PuOI.
It will react with carbon to form PuC, nitrogen to form PuN and silicon to form PuSi2.[11][34] Powders of plutonium, its hydrides and certain oxides like Pu2O3 are pyrophoric, meaning they can ignite spontaneously at ambient temperature, and are therefore handled in an inert, dry atmosphere of nitrogen or argon. Bulk plutonium ignites only when heated above 400 °C. Pu2O3 spontaneously heats up and transforms into PuO2, which is stable in dry air but reacts with water vapor when heated.[35]

Crucibles used to contain plutonium need to be able to withstand its strongly reducing properties. Refractory metals such as tantalum and tungsten, along with the more stable oxides, borides, carbides, nitrides and silicides, can tolerate this. Melting in an electric arc furnace can be used to produce small ingots of the metal without the need for a crucible.[12] Cerium is used as a chemical simulant of plutonium for development of containment, extraction, and other technologies.[36]

### Electronic structure

Plutonium is an element in which the 5f electrons are on the transition border between delocalized and localized behavior; it is therefore considered one of the most complex elements.[37] The anomalous behavior of plutonium is caused by its electronic structure. The energy difference between the 6d and 5f subshells is very low. The size of the 5f shell is just enough to allow the electrons to form bonds within the lattice, on the very boundary between localized and bonding behavior. The proximity of energy levels leads to multiple low-energy electron configurations with nearly equal energies. This leads to competing 5f^n 7s^2 and 5f^(n−1) 6d^1 7s^2 configurations, which causes the complexity of its chemical behavior. The highly directional nature of 5f orbitals is responsible for directional covalent bonds in molecules and complexes of plutonium.[8]

### Alloys

Plutonium can form alloys and intermediate compounds with most other metals.
Exceptions include lithium, sodium, potassium, rubidium and caesium of the alkali metals; magnesium, calcium, strontium, and barium of the alkaline earth metals; and europium and ytterbium of the rare earth metals.[33] Partial exceptions include the refractory metals chromium, molybdenum, niobium, tantalum, and tungsten, which are soluble in liquid plutonium but insoluble, or only slightly soluble, in solid plutonium.[33]

Gallium, aluminium, americium, scandium and cerium can stabilize the δ phase of plutonium at room temperature. Silicon, indium, zinc and zirconium allow formation of a metastable δ state when rapidly cooled. High amounts of hafnium, holmium and thallium also allow some retention of the δ phase at room temperature. Neptunium is the only element that can stabilize the α phase at higher temperatures.[8]

Plutonium alloys can be produced by adding a metal to molten plutonium. If the alloying metal is sufficiently reductive, plutonium can be added in the form of oxides or halides. The δ phase plutonium–gallium and plutonium–aluminium alloys are produced by adding plutonium(III) fluoride to molten gallium or aluminium, which has the advantage of avoiding dealing directly with the highly reactive plutonium metal.[38]

• Plutonium–gallium is used for stabilizing the δ phase of plutonium, avoiding the α-phase and α–δ related issues. Its main use is in pits of implosion nuclear weapons.[39]
• Plutonium–aluminium is an alternative to the Pu–Ga alloy. Aluminium was the original element considered for δ phase stabilization, but its tendency to react with alpha particles and release neutrons reduces its usability for nuclear weapon pits. Plutonium–aluminium alloy can also be used as a component of nuclear fuel.[40]
• Plutonium–gallium–cobalt alloy (PuCoGa5) is an unconventional superconductor, showing superconductivity below 18.5 K, an order of magnitude higher than the highest among heavy-fermion systems, and has a large critical current.[37][41]
• Plutonium–zirconium alloy can be used as nuclear fuel.[42]
• Plutonium–cerium and plutonium–cerium–cobalt alloys are used as nuclear fuels.[43]
• Plutonium–uranium, with about 15–30 mol% plutonium, can be used as a nuclear fuel for fast breeder reactors. Its pyrophoric nature and high susceptibility to corrosion, to the point of self-igniting or disintegrating after exposure to air, require alloying with other components. Addition of aluminium, carbon or copper does not improve disintegration rates markedly; zirconium and iron alloys have better corrosion resistance, but they too disintegrate in several months in air. Addition of titanium and/or zirconium significantly increases the melting point of the alloy.[44]
• Plutonium–uranium–titanium and plutonium–uranium–zirconium were investigated for use as nuclear fuels. The addition of the third element increases corrosion resistance, reduces flammability, and improves ductility, fabricability, strength, and thermal expansion. Plutonium–uranium–molybdenum has the best corrosion resistance, forming a protective film of oxides, but titanium and zirconium are preferred for physics reasons.[44]
• Thorium–uranium–plutonium was investigated as a nuclear fuel for fast breeder reactors.[44]

## Occurrence

[Image: Sample of plutonium metal displayed at the Questacon museum]

Trace amounts of plutonium-238, plutonium-239, plutonium-240, and plutonium-244 can be found in nature.
Small traces of plutonium-239, a few parts per trillion, and its decay products are naturally found in some concentrated ores of uranium,[45] such as the natural nuclear fission reactor in Oklo, Gabon.[46] The ratio of plutonium-239 to uranium at the Cigar Lake Mine uranium deposit ranges from 2.4×10−12 to 44×10−12.[47] These trace amounts of 239Pu originate in the following fashion: on rare occasions, 238U undergoes spontaneous fission, and in the process, the nucleus emits one or two free neutrons with some kinetic energy. When one of these neutrons strikes the nucleus of another 238U atom, it is absorbed by the atom, which becomes 239U. With a relatively short half-life, 239U decays to 239Np, which decays into 239Pu.[48][49] Finally, exceedingly small amounts of plutonium-238, attributed to the extremely rare double beta decay of uranium-238, have been found in natural uranium samples.[50] Due to its relatively long half-life of about 80 million years, it was suggested that plutonium-244 occurs naturally as a primordial nuclide, but early reports of its detection could not be confirmed.[51] However, its long half-life ensured its circulation across the solar system before its extinction,[52] and indeed, evidence of the spontaneous fission of extinct 244Pu has been found in meteorites.[53] The former presence of 244Pu in the early Solar System has been confirmed, since it manifests itself today as an excess of its daughters, either 232Th (from the alpha decay pathway) or xenon isotopes (from its spontaneous fission). 
The latter are generally more useful, because the chemistries of thorium and plutonium are rather similar (both are predominantly tetravalent) and hence an excess of thorium would not be strong evidence that some of it was formed as a plutonium daughter.[54] 244Pu has the longest half-life of all transuranic nuclides and is produced only in the r-process in supernovae and colliding neutron stars; when nuclei are ejected from these events at high speed to reach Earth, 244Pu alone among transuranic nuclides has a long enough half-life to survive the journey, and hence tiny traces of live interstellar 244Pu have been found in the deep sea floor. Because 240Pu also occurs in the decay chain of 244Pu, it must thus also be present in secular equilibrium, albeit in even tinier quantities.[55] Minute traces of plutonium are usually found in the human body due to the 550 atmospheric and underwater nuclear tests that have been carried out, and to a small number of major nuclear accidents. Most atmospheric and underwater nuclear testing was stopped by the Limited Test Ban Treaty in 1963, which was signed and ratified by the United States, the United Kingdom, the Soviet Union, and other nations. Continued atmospheric nuclear weapons testing since 1963 by non-treaty nations included those by China (atomic bomb test above the Gobi Desert in 1964, hydrogen bomb test in 1967, and follow-on tests), and France (tests as recently as the 1990s). 
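The secular-equilibrium claim above can be made quantitative: when a short-lived daughter is replenished only by a long-lived parent, their activities equalize, so the daughter's atom count scales with the ratio of half-lives. Using the half-lives quoted in this article, a back-of-envelope estimate is:

```latex
\lambda_{240}N_{240} = \lambda_{244}N_{244}
\quad\Longrightarrow\quad
\frac{N_{240}}{N_{244}}
= \frac{\lambda_{244}}{\lambda_{240}}
= \frac{t_{1/2}\!\left({}^{240}\mathrm{Pu}\right)}{t_{1/2}\!\left({}^{244}\mathrm{Pu}\right)}
\approx \frac{6{,}560\ \text{y}}{8.08\times 10^{7}\ \text{y}}
\approx 8\times 10^{-5}
```

So for every atom of interstellar 244Pu there are only some tens of millionths of an atom of its 240Pu daughter, consistent with the "even tinier quantities" noted above.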
Because it is deliberately manufactured for nuclear weapons and nuclear reactors, plutonium-239 is by far the most abundant isotope of plutonium.[34]

## History

### Discovery

Enrico Fermi and a team of scientists at the University of Rome reported that they had discovered element 94 in 1934.[56] Fermi called the element hesperium and mentioned it in his Nobel Lecture in 1938.[57] The sample was actually a mixture of barium, krypton, and other elements, but this was not known at the time.[58] Nuclear fission was discovered in Germany in 1938 by Otto Hahn and Fritz Strassmann. The mechanism of fission was then theoretically explained by Lise Meitner and Otto Frisch.[59]

[Image: Glenn T. Seaborg and his team at Berkeley were the first to produce plutonium.]

Plutonium (specifically, plutonium-238) was first produced, isolated and then chemically identified between December 1940 and February 1941 by Glenn T. Seaborg, Edwin McMillan, Emilio Segrè, Joseph W. Kennedy, and Arthur Wahl by deuteron bombardment of uranium in the 60-inch (150 cm) cyclotron at the Berkeley Radiation Laboratory at the University of California, Berkeley.[60][61][62] Neptunium-238 was created directly by the bombardment but decayed by beta emission with a half-life of a little over two days, which indicated the formation of element 94.[34] The first bombardment took place on December 14, 1940, and the new element was first identified through oxidation on the night of February 23–24, 1941.[61]

A paper documenting the discovery was prepared by the team and sent to the journal Physical Review in March 1941,[34] but publication was delayed until a year after the end of World War II due to security concerns.[63] At the Cavendish Laboratory in Cambridge, Egon Bretscher and Norman Feather realized that a slow neutron reactor fuelled with uranium would theoretically produce substantial amounts of plutonium-239 as a by-product.
They calculated that element 94 would be fissile, and that it had the added advantage of being chemically different from uranium, so it could easily be separated from it.[22]

McMillan had recently named the first transuranic element neptunium after the planet Neptune, and suggested that element 94, being the next element in the series, be named for what was then considered the next planet, Pluto.[5][note 2] Nicholas Kemmer of the Cambridge team independently proposed the same name, based on the same reasoning as the Berkeley team.[22] Seaborg originally considered the name "plutium", but later thought that it did not sound as good as "plutonium".[65] He chose the letters "Pu" as a joke, in reference to the interjection "P U" used to indicate an especially disgusting smell, and the symbol passed without notice into the periodic table.[note 3] Alternative names considered by Seaborg and others were "ultimium" and "extremium", because of the erroneous belief that they had found the last possible element on the periodic table.[67]

### Early research

[Image: The dwarf planet Pluto, after which plutonium is named]

The chemistry of plutonium was found to resemble that of uranium after a few months of initial study.[34] Early research was continued at the secret Metallurgical Laboratory of the University of Chicago. On August 20, 1942, a trace quantity of this element was isolated and measured for the first time. About 50 micrograms of plutonium-239 combined with uranium and fission products was produced, and only about 1 microgram was isolated.[45][68] This procedure enabled chemists to determine the new element's atomic weight.[69][note 4]

On December 2, 1942, on a racket court under the west grandstand at the University of Chicago's Stagg Field, researchers headed by Enrico Fermi achieved the first self-sustaining chain reaction in a graphite and uranium pile known as CP-1.
Using theoretical information garnered from the operation of CP-1, DuPont constructed an air-cooled experimental production reactor, known as X-10, and a pilot chemical separation facility at Oak Ridge. The separation facility, using methods developed by Glenn T. Seaborg and a team of researchers at the Met Lab, removed plutonium from uranium irradiated in the X-10 reactor. Information from CP-1 was also useful to Met Lab scientists designing the water-cooled plutonium production reactors for Hanford. Construction at the site began in mid-1943.[70] In November 1943 some plutonium trifluoride was reduced to create the first sample of plutonium metal: a few micrograms of metallic beads.[45] Enough plutonium was produced to make it the first synthetically made element to be visible with the unaided eye.[71] The nuclear properties of plutonium-239 were also studied; researchers found that when it is hit by a neutron it breaks apart (fissions) by releasing more neutrons and energy. These neutrons can hit other atoms of plutonium-239 and so on in an exponentially fast chain reaction. This can result in an explosion large enough to destroy a city if enough of the isotope is concentrated to form a critical mass.[34] During the early stages of research, animals were used to study the effects of radioactive substances on health. These studies began in 1944 at the University of California at Berkeley's Radiation Laboratory and were conducted by Joseph G. Hamilton. Hamilton was looking to answer questions about how plutonium would vary in the body depending on exposure mode (oral ingestion, inhalation, absorption through skin), retention rates, and how plutonium would be fixed in tissues and distributed among the various organs. Hamilton started administering soluble microgram portions of plutonium-239 compounds to rats using different valence states and different methods of introducing the plutonium (oral, intravenous, etc.). 
Eventually, the lab at Chicago also conducted its own plutonium injection experiments using different animals such as mice, rabbits, fish, and even dogs. The results of the studies at Berkeley and Chicago showed that plutonium's physiological behavior differed significantly from that of radium. The most alarming result was that there was significant deposition of plutonium in the liver and in the "actively metabolizing" portion of bone. Furthermore, the rate of plutonium elimination in the excreta differed between species of animals by as much as a factor of five. Such variation made it extremely difficult to estimate what the rate would be for human beings.[72]

Production during the Manhattan Project

During World War II the U.S. government established the Manhattan Project, which was tasked with developing an atomic bomb. The three primary research and production sites of the project were the plutonium production facility at what is now the Hanford Site, the uranium enrichment facilities at Oak Ridge, Tennessee, and the weapons research and design laboratory, now known as Los Alamos National Laboratory.[73]

The Hanford B Reactor face under construction—the first plutonium-production reactor
The Hanford site represents two-thirds of the nation's high-level radioactive waste by volume. Nuclear reactors line the riverbank at the Hanford Site along the Columbia River in January 1960.

The first production reactor that made plutonium-239 was the X-10 Graphite Reactor. It went online in 1943 and was built at a facility in Oak Ridge that later became the Oak Ridge National Laboratory.[34][note 5] In January 1944, workers laid the foundations for the first chemical separation building, T Plant located in 200-West. Both the T Plant and its sister facility in 200-West, the U Plant, were completed by October. (U Plant was used only for training during the Manhattan Project.) The separation building in 200-East, B Plant, was completed in February 1945.
The second facility planned for 200-East was canceled. Nicknamed Queen Marys by the workers who built them, the separation buildings were awesome canyon-like structures 800 feet long, 65 feet wide, and 80 feet high containing forty process pools. The interior had an eerie quality as operators behind seven feet of concrete shielding manipulated remote control equipment by looking through television monitors and periscopes from an upper gallery. Even with massive concrete lids on the process pools, precautions against radiation exposure were necessary and influenced all aspects of plant design.[70] On April 5, 1944, Emilio Segrè at Los Alamos received the first sample of reactor-produced plutonium from Oak Ridge.[75] Within ten days, he discovered that reactor-bred plutonium had a higher concentration of the isotope plutonium-240 than cyclotron-produced plutonium. Plutonium-240 has a high spontaneous fission rate, raising the overall background neutron level of the plutonium sample.[76] The original gun-type plutonium weapon, code-named "Thin Man", had to be abandoned as a result—the increased number of spontaneous neutrons meant that nuclear pre-detonation (fizzle) was likely.[77] The entire plutonium weapon design effort at Los Alamos was soon changed to the more complicated implosion device, code-named "Fat Man". With an implosion weapon, plutonium is compressed to a high density with explosive lenses—a technically more daunting task than the simple gun-type design, but necessary to use plutonium for weapons purposes. Enriched uranium, by contrast, can be used with either method.[77] Construction of the Hanford B Reactor, the first industrial-sized nuclear reactor for the purposes of material production, was completed in March 1945. 
B Reactor produced the fissile material for the plutonium weapons used during World War II.[note 6] B, D and F were the initial reactors built at Hanford, and six additional plutonium-producing reactors were built later at the site.[80] By the end of January 1945, the highly purified plutonium underwent further concentration in the completed chemical isolation building, where remaining impurities were removed successfully. Los Alamos received its first plutonium from Hanford on February 2. While it was still by no means clear that enough plutonium could be produced for use in bombs by the war's end, Hanford was by early 1945 in operation. Only two years had passed since Col. Franklin Matthias first set up his temporary headquarters on the banks of the Columbia River.[70] According to Kate Brown, the plutonium production plants at Hanford and Mayak in Russia, over a period of four decades, "both released more than 200 million curies of radioactive isotopes into the surrounding environment — twice the amount expelled in the Chernobyl disaster in each instance".[81] Most of this radioactive contamination over the years was part of normal operations, but unforeseen accidents did occur, and plant management kept them secret as the pollution continued unabated.[81] In 2004, a safe was discovered during excavations of a burial trench at the Hanford nuclear site. Inside the safe were various items, including a large glass bottle containing a whitish slurry which was subsequently identified as the oldest sample of weapons-grade plutonium known to exist.
Isotope analysis by Pacific Northwest National Laboratory indicated that the plutonium in the bottle was manufactured in the X-10 Graphite Reactor at Oak Ridge during 1944.[82][83][84]

Trinity and Fat Man atomic bombs

Because of the presence of plutonium-240 in reactor-bred plutonium, the implosion design was developed for the "Fat Man" and "Trinity" weapons

The first atomic bomb test, codenamed "Trinity" and detonated on July 16, 1945, near Alamogordo, New Mexico, used plutonium as its fissile material.[45] The implosion design of "the gadget", as the Trinity device was code-named, used conventional explosive lenses to compress a sphere of plutonium into a supercritical mass, which was simultaneously showered with neutrons from the "Urchin", an initiator made of polonium and beryllium (neutron source: (α, n) reaction).[34] Together, these ensured a runaway chain reaction and explosion. The overall weapon weighed over 4 tonnes, although it used just 6.2 kg of plutonium in its core.[85] About 20% of the plutonium used in the Trinity weapon underwent fission, resulting in an explosion with an energy equivalent to approximately 20,000 tons of TNT.[86][note 7] An identical design was used in the "Fat Man" atomic bomb dropped on Nagasaki, Japan, on August 9, 1945, killing 35,000–40,000 people and destroying 68%–80% of war production at Nagasaki.[88] Only after the announcement of the first atomic bombs was the existence and name of plutonium made known to the public by the Manhattan Project's Smyth Report.[89]

Cold War use and waste

Large stockpiles of weapons-grade plutonium were built up by both the Soviet Union and the United States during the Cold War. The U.S.
reactors at Hanford and the Savannah River Site in South Carolina produced 103 tonnes,[90] and an estimated 170 tonnes of military-grade plutonium was produced in the USSR.[91][note 8] Each year about 20 tonnes of the element is still produced as a by-product of the nuclear power industry.[11] As much as 1000 tonnes of plutonium may be in storage with more than 200 tonnes of that either inside or extracted from nuclear weapons.[34] SIPRI estimated the world plutonium stockpile in 2007 as about 500 tonnes, divided equally between weapon and civilian stocks.[93] Radioactive contamination at the Rocky Flats Plant primarily resulted from two major plutonium fires in 1957 and 1969. Much lower concentrations of radioactive isotopes were released throughout the operational life of the plant from 1952 to 1992. Prevailing winds from the plant carried airborne contamination south and east, into populated areas northwest of Denver. The contamination of the Denver area by plutonium from the fires and other sources was not publicly reported until the 1970s. According to a 1972 study coauthored by Edward Martell, "In the more densely populated areas of Denver, the Pu contamination level in surface soils is several times fallout", and the plutonium contamination "just east of the Rocky Flats plant ranges up to hundreds of times that from nuclear tests".[94] As noted by Carl Johnson in Ambio, "Exposures of a large population in the Denver area to plutonium and other radionuclides in the exhaust plumes from the plant date back to 1953."[95] Weapons production at the Rocky Flats plant was halted after a combined FBI and EPA raid in 1989 and years of protests. 
The plant has since been shut down, with its buildings demolished and completely removed from the site.[96] In the U.S., some plutonium extracted from dismantled nuclear weapons is melted to form glass logs of plutonium oxide that weigh two tonnes.[34] The glass is made of borosilicates mixed with cadmium and gadolinium.[note 9] These logs are planned to be encased in stainless steel and stored as much as 4 km (2 mi) underground in bore holes that will be back-filled with concrete.[34] The U.S. planned to store plutonium in this way at the Yucca Mountain nuclear waste repository, which is about 100 miles (160 km) north-east of Las Vegas, Nevada.[97] On March 5, 2009, Energy Secretary Steven Chu told a Senate hearing "the Yucca Mountain site no longer was viewed as an option for storing reactor waste".[98] Since 1999, military-generated nuclear waste has been entombed at the Waste Isolation Pilot Plant in New Mexico. In a Presidential Memorandum dated January 29, 2010, President Obama established the Blue Ribbon Commission on America's Nuclear Future.[99] In their final report the Commission put forth recommendations for developing a comprehensive strategy to pursue, including:[100] "Recommendation #1: The United States should undertake an integrated nuclear waste management program that leads to the timely development of one or more permanent deep geological facilities for the safe disposal of spent fuel and high-level nuclear waste".[100]

Medical experimentation

During and after the end of World War II, scientists working on the Manhattan Project and other nuclear weapons research projects conducted studies of the effects of plutonium on laboratory animals and human subjects.[101] Animal studies found that a few milligrams of plutonium per kilogram of tissue is a lethal dose.[102] In the case of human subjects, this involved injecting solutions containing (typically) five micrograms of plutonium into hospital patients thought to be either terminally ill, or
to have a life expectancy of less than ten years due either to age or to chronic disease.[101] This was reduced to one microgram in July 1945 after animal studies found that the way plutonium distributed itself in bones was more dangerous than radium.[102] Most of the subjects, Eileen Welsome says, were poor, powerless, and sick.[103] From 1945 to 1947, eighteen human test subjects were injected with plutonium without informed consent. The tests were used to create diagnostic tools to determine the uptake of plutonium in the body in order to develop safety standards for working with plutonium.[101] Ebb Cade was an unwilling participant in medical experiments that involved injection of 4.7 micrograms of plutonium on 10 April 1945 at Oak Ridge, Tennessee.[104][105] This experiment was under the supervision of Harold Hodge.[106] Other experiments directed by the United States Atomic Energy Commission and the Manhattan Project continued into the 1970s. The Plutonium Files chronicles the lives of the subjects of the secret program by naming each person involved and discussing the ethical and medical research conducted in secret by the scientists and doctors. The episode is now considered to be a serious breach of medical ethics and of the Hippocratic Oath.[107] The government covered up most of these radiation mishaps until 1993, when President Bill Clinton ordered a change of policy and federal agencies then made available relevant records. The resulting investigation was undertaken by the president's Advisory Committee on Human Radiation Experiments, and it uncovered much of the material about plutonium research on humans. The committee issued a controversial 1995 report which said that "wrongs were committed" but it did not condemn those who perpetrated them.[103]

Applications

Explosives

The atomic bomb dropped on Nagasaki, Japan in 1945 had a plutonium core.
The isotope plutonium-239 is a key fissile component in nuclear weapons, due to its ease of fission and availability. Encasing the bomb's plutonium pit in a tamper (an optional layer of dense material) decreases the amount of plutonium needed to reach critical mass by reflecting escaping neutrons back into the plutonium core. This reduces the amount of plutonium needed to reach criticality from 16 kg to 10 kg, which is a sphere with a diameter of about 10 centimeters (4 in).[108] This critical mass is about a third of that for uranium-235.[5] The Fat Man plutonium bombs used explosive compression of plutonium to obtain significantly higher densities than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6.2 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT.[86][109] Hypothetically, as little as 4 kg of plutonium—and maybe even less—could be used to make a single atomic bomb using very sophisticated assembly designs.[109]

Mixed oxide fuel

Spent nuclear fuel from normal light water reactors contains plutonium, but it is a mixture of plutonium-242, 240, 239 and 238. The mixture is not sufficiently enriched for efficient nuclear weapons, but can be used once as MOX fuel.[110] Accidental neutron capture causes the amount of plutonium-242 and 240 to grow each time the plutonium is irradiated in a reactor with low-speed "thermal" neutrons, so that after the second cycle, the plutonium can only be consumed by fast neutron reactors. If fast neutron reactors are not available (the normal case), excess plutonium is usually discarded, and forms one of the longest-lived components of nuclear waste.
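As an aside, the sphere size quoted in the Explosives section above for the 10 kg tamped critical mass can be cross-checked against plutonium's density. This is a minimal illustrative sketch, not part of the original article, using the 19.85 g/cm³ alpha-phase density of plutonium-239 from the infobox:

```python
import math

# Sketch: check that 10 kg of alpha-phase plutonium (19.85 g/cm^3,
# the infobox density for Pu-239) forms a sphere about 10 cm across.
mass_g = 10_000.0      # 10 kg tamped critical mass
density_g_cm3 = 19.85  # alpha-phase Pu-239

volume_cm3 = mass_g / density_g_cm3
radius_cm = (3.0 * volume_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
diameter_cm = 2.0 * radius_cm
print(f"{diameter_cm:.1f} cm")  # ~9.9 cm, i.e. about 10 centimeters (4 in)
```

The result, just under 9.9 cm, agrees with the article's figure of about 10 centimeters.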
The desire to consume this plutonium and other transuranic fuels and reduce the radiotoxicity of the waste is the usual reason nuclear engineers give for building fast neutron reactors.[111] The most common chemical process, PUREX (Plutonium–URanium EXtraction) reprocesses spent nuclear fuel to extract plutonium and uranium which can be used to form a mixed oxide (MOX) fuel for reuse in nuclear reactors. Weapons-grade plutonium can be added to the fuel mix. MOX fuel is used in light water reactors and consists of 60 kg of plutonium per tonne of fuel; after four years, three-quarters of the plutonium is burned (turned into other elements).[34] Breeder reactors are specifically designed to create more fissionable material than they consume.[112] MOX fuel has been in use since the 1980s, and is widely used in Europe.[110] In September 2000, the United States and the Russian Federation signed a Plutonium Management and Disposition Agreement by which each agreed to dispose of 34 tonnes of weapons-grade plutonium.[113] The U.S. Department of Energy plans to dispose of 34 tonnes of weapons-grade plutonium in the United States before the end of 2019 by converting the plutonium to a MOX fuel to be used in commercial nuclear power reactors.[113] MOX fuel improves total burnup. A fuel rod is reprocessed after three years of use to remove waste products, which by then account for 3% of the total weight of the rods.[34] Any uranium or plutonium isotopes produced during those three years are left and the rod goes back into production.[note 10] The presence of up to 1% gallium per mass in weapons-grade plutonium alloy has the potential to interfere with long-term operation of a light water reactor.[114] Plutonium recovered from spent reactor fuel poses little proliferation hazard, because of excessive contamination with non-fissile plutonium-240 and plutonium-242. Separation of the isotopes is not feasible.
A dedicated reactor operating on very low burnup (hence minimal exposure of newly formed plutonium-239 to additional neutrons which causes it to be transformed to heavier isotopes of plutonium) is generally required to produce material suitable for use in efficient nuclear weapons. While "weapons-grade" plutonium is defined to contain at least 92% plutonium-239 (of the total plutonium), the United States has managed to detonate a device of under 20 kilotons using plutonium believed to contain only about 85% plutonium-239, so-called "fuel-grade" plutonium.[115] The "reactor-grade" plutonium produced by a regular LWR burnup cycle typically contains less than 60% Pu-239, with up to 30% parasitic Pu-240/Pu-242, and 10–15% fissile Pu-241.[115] It is unknown whether a device using plutonium obtained from reprocessed civil nuclear waste can be detonated; however, such a device could hypothetically fizzle and spread radioactive material over a large urban area. The IAEA conservatively classifies plutonium of all isotopic vectors as "direct-use" material, that is, "nuclear material that can be used for the manufacture of nuclear explosives components without transmutation or further enrichment".[115]

Power and heat source

A glowing cylinder of 238PuO2
The 238PuO2 radioisotope thermoelectric generator of the Curiosity rover

The isotope plutonium-238 has a half-life of 87.74 years.[116] It emits a large amount of thermal energy with low levels of both gamma rays/photons and spontaneous neutron rays/particles.[117] Being an alpha emitter, it combines high-energy radiation with low penetration and thereby requires minimal shielding. A sheet of paper can be used to shield against the alpha particles emitted by plutonium-238. One kilogram of the isotope can generate about 570 watts of heat.[5][117] These characteristics make it well-suited for electrical power generation for devices that must function without direct maintenance for timescales approximating a human lifetime.
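Both the roughly 570 W/kg heat output and this longevity follow from the half-life and decay energy. The following is a back-of-the-envelope sketch, not part of the article; it assumes a Pu-238 decay energy of about 5.593 MeV deposited locally per disintegration, plus standard physical constants:

```python
import math

# Sketch: verify Pu-238's ~570 W/kg thermal output from its half-life
# (87.74 years, as quoted in this section) and assumed decay energy,
# then apply pure radioactive decay to a 500 W source over 30 years.
HALF_LIFE_S = 87.74 * 3.1557e7      # 87.74 years in seconds
N_A = 6.02214e23                    # Avogadro's number
MOLAR_MASS = 238.05                 # g/mol for Pu-238
Q_DECAY_J = 5.593e6 * 1.60218e-19   # ~5.593 MeV per decay, in joules (assumed)

# Specific activity (decays per second per gram), then thermal power per kg.
activity_per_g = (math.log(2) / HALF_LIFE_S) * (N_A / MOLAR_MASS)
watts_per_kg = activity_per_g * Q_DECAY_J * 1000.0
print(f"heat output: {watts_per_kg:.0f} W/kg")   # ~567 W/kg, i.e. about 570

# Power remaining from a 500 W source after 30 years of decay alone.
p30 = 500.0 * 2.0 ** (-30.0 / HALF_LIFE_S * 3.1557e7 / 1.0 * (HALF_LIFE_S / (87.74 * 3.1557e7)))
p30 = 500.0 * 2.0 ** (-30.0 / 87.74)             # in years, directly
print(f"after 30 years: {p30:.0f} W")            # ~394 W from decay alone
```

Pure decay of a 500 W source over 30 years leaves roughly 394 W; the lower figure of about 300 W reported for the Voyager sources in the next paragraph also reflects gradual degradation of their thermoelectric converters, not decay alone.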
It is therefore used in radioisotope thermoelectric generators and radioisotope heater units such as those in the Cassini,[118] Voyager, Galileo and New Horizons[119] space probes, and the Curiosity[120] and Perseverance (Mars 2020) Mars rovers. The twin Voyager spacecraft were launched in 1977, each containing a 500 watt plutonium power source. Over 30 years later, each source is still producing about 300 watts, which allows limited operation of each spacecraft.[121] An earlier version of the same technology powered five Apollo Lunar Surface Experiment Packages, starting with Apollo 12 in 1969.[34] Plutonium-238 has also been used successfully to power artificial heart pacemakers, to reduce the risk of repeated surgery.[122][123] It has been largely replaced by lithium-based primary cells, but as of 2003 there were somewhere between 50 and 100 plutonium-powered pacemakers still implanted and functioning in living patients in the United States.[124] By the end of 2007, the number of plutonium-powered pacemakers was reported to be down to just nine.[125] Plutonium-238 was studied as a way to provide supplemental heat for scuba divers.[126] Plutonium-238 mixed with beryllium is used to generate neutrons for research purposes.[34]

Precautions

Toxicity

There are two aspects to the harmful effects of plutonium: the radioactivity and the heavy metal poison effects. Isotopes and compounds of plutonium are radioactive and accumulate in bone marrow.
Contamination by plutonium oxide has resulted from nuclear disasters and radioactive incidents, including military nuclear accidents where nuclear weapons have burned.[127] Studies of the effects of these smaller releases, as well as of the widespread radiation poisoning sickness and death following the atomic bombings of Hiroshima and Nagasaki, have provided considerable information regarding the dangers, symptoms and prognosis of radiation poisoning, which in the case of the Japanese survivors was largely unrelated to direct plutonium exposure.[128] During the decay of plutonium, three types of radiation are released—alpha, beta, and gamma. Alpha, beta, and gamma radiation are all forms of ionizing radiation. Either acute or longer-term exposure carries a danger of serious health outcomes including radiation sickness, genetic damage, cancer, and death. The danger increases with the amount of exposure.[34] Alpha radiation can travel only a short distance and cannot travel through the outer, dead layer of human skin. Beta radiation can penetrate human skin, but cannot go all the way through the body. 
Gamma radiation can go all the way through the body.[129] Even though alpha radiation cannot penetrate the skin, ingested or inhaled plutonium does irradiate internal organs.[34] Alpha particles generated by inhaled plutonium have been found to cause lung cancer in a cohort of European nuclear workers.[130] The skeleton, where plutonium accumulates, and the liver, where it collects and becomes concentrated, are at risk.[33] Plutonium is not absorbed into the body efficiently when ingested; only 0.04% of plutonium oxide is absorbed after ingestion.[34] Plutonium absorbed by the body is excreted very slowly, with a biological half-life of 200 years.[131] Plutonium passes only slowly through cell membranes and intestinal boundaries, so absorption by ingestion and incorporation into bone structure proceeds very slowly.[132][133] Plutonium is more dangerous when inhaled than when ingested. The risk of lung cancer increases once the total radiation dose equivalent of inhaled plutonium exceeds 400 mSv.[134] The U.S. Department of Energy estimates that the lifetime cancer risk from inhaling 5,000 plutonium particles, each about 3 µm wide, is 1% over the background U.S. average.[135] Ingestion or inhalation of large amounts may cause acute radiation poisoning and possibly death. However, no human being is known to have died because of inhaling or ingesting plutonium, and many people have measurable amounts of plutonium in their bodies.[115] The "hot particle" theory in which a particle of plutonium dust irradiates a localized spot of lung tissue is not supported by mainstream research—such particles are more mobile than originally thought and toxicity is not measurably increased due to particulate form.[132] When inhaled, plutonium can pass into the bloodstream. Once in the bloodstream, plutonium moves throughout the body and into the bones, liver, or other body organs. 
Plutonium that reaches body organs generally stays in the body for decades and continues to expose the surrounding tissue to radiation and thus may cause cancer.[136] A commonly cited quote by Ralph Nader states that a pound of plutonium dust spread into the atmosphere would be enough to kill 8 billion people.[137] This was disputed by Bernard Cohen, an opponent of the generally accepted linear no-threshold model of radiation toxicity. Cohen estimated that one pound of plutonium could kill no more than 2 million people by inhalation, so that the toxicity of plutonium would be roughly equivalent to that of nerve gas.[138] Several populations of people who have been exposed to plutonium dust (e.g. people living downwind of Nevada test sites, Nagasaki survivors, nuclear facility workers, and "terminally ill" patients injected with Pu in 1945–46 to study Pu metabolism) have been carefully followed and analyzed. Cohen found these studies inconsistent with high estimates of plutonium toxicity, citing cases such as Albert Stevens who survived into old age after being injected with plutonium.[132] "There were about 25 workers from Los Alamos National Laboratory who inhaled a considerable amount of plutonium dust during 1940s; according to the hot-particle theory, each of them has a 99.5% chance of being dead from lung cancer by now, but there has not been a single lung cancer among them."[138][139]

Marine toxicity

The toxicity of plutonium is as important a question for marine fauna as it is for humans. Plutonium is known to enter the marine environment by dumping of waste or accidental leakage from nuclear plants. Although the highest concentrations of plutonium in marine environments are found in the sediments, the complex biogeochemical cycle of plutonium means that it is also found in all other compartments.[140] For example, various zooplankton species that aid in the nutrient cycle will consume the element on a daily basis.
The complete excretion of ingested plutonium by zooplankton makes their defecation an extremely important mechanism in the scavenging of plutonium from surface waters.[141] However, those zooplankton that succumb to predation by larger organisms may become a transmission vehicle of plutonium to fish. In addition to consumption, fish can also be exposed to plutonium by their geographical distribution around the globe. One study investigated the effects of transuranium elements (plutonium-238, plutonium-239, plutonium-240) on various fish living in the Chernobyl Exclusion Zone (CEZ). Results showed that a proportion of female perch in the CEZ displayed either a failure or delay in maturation of the gonads.[142] Similar studies found large accumulations of plutonium in the respiratory and digestive organs of cod, flounder and herring.[140] Plutonium is also toxic to fish larvae in nuclear waste areas: undeveloped eggs are at higher risk than developed adult fish exposed to the element in these areas. The Oak Ridge National Laboratory showed that carp and minnow embryos raised in solutions containing plutonium isotopes did not hatch; eggs that hatched displayed significant abnormalities when compared to control embryos.[143] These studies revealed that higher concentrations of plutonium cause problems in marine fauna exposed to the element.
Criticality potential

A sphere of simulated plutonium surrounded by neutron-reflecting tungsten carbide blocks in a re-enactment of Harry Daghlian's 1945 experiment

Care must be taken to avoid the accumulation of amounts of plutonium which approach critical mass, particularly because plutonium's critical mass is only a third of that of uranium-235.[5] A critical mass of plutonium emits lethal amounts of neutrons and gamma rays.[144] Plutonium in solution is more likely to form a critical mass than the solid form due to moderation by the hydrogen in water.[11] Criticality accidents have occurred in the past, some of them with lethal consequences. Careless handling of tungsten carbide bricks around a 6.2 kg plutonium sphere resulted in a fatal dose of radiation at Los Alamos on August 21, 1945, when scientist Harry Daghlian received a dose estimated to be 5.1 sievert (510 rems) and died 25 days later.[145][146] Nine months later, another Los Alamos scientist, Louis Slotin, died from a similar accident involving a beryllium reflector and the same plutonium core (the so-called "demon core") that had previously claimed the life of Daghlian.[147] In December 1958, during a process of purifying plutonium at Los Alamos, a critical mass was formed in a mixing vessel, which resulted in the death of a chemical operator named Cecil Kelley. Other nuclear accidents have occurred in the Soviet Union, Japan, the United States, and many other countries.[148]

Flammability

Metallic plutonium is a fire hazard, especially if the material is finely divided. In a moist environment, plutonium forms hydrides on its surface, which are pyrophoric and may ignite in air at room temperature. Plutonium expands up to 70% in volume as it oxidizes and thus may break its container.[35] The radioactivity of the burning material is an additional hazard. Magnesium oxide sand is probably the most effective material for extinguishing a plutonium fire.
It cools the burning material, acting as a heat sink, and also blocks off oxygen. Special precautions are necessary to store or handle plutonium in any form; generally a dry inert gas atmosphere is required.[35][note 11]

Transportation

Land and sea

Plutonium is usually transported as the more stable plutonium oxide in a sealed package. A typical transport consists of one truck carrying one protected shipping container, holding a number of packages with a total weight varying from 80 to 200 kg of plutonium oxide. A sea shipment may consist of several containers, each of them holding a sealed package.[150] The United States Nuclear Regulatory Commission dictates that it must be solid instead of powder if the contents surpass 0.74 TBq (20 curies) of radioactive activity.[151] In 2016, the ships Pacific Egret[152] and Pacific Heron of Pacific Nuclear Transport Ltd. transported 331 kg (730 lbs) of plutonium to a United States government facility in Savannah River, South Carolina.[153][154]

Air

The U.S. Government air transport regulations permit the transport of plutonium by air, subject to restrictions on other dangerous materials carried on the same flight, packaging requirements, and stowage in the rearmost part of the aircraft.[155] In 2012 media revealed that plutonium has been flown out of Norway on commercial passenger airlines—around every other year—including one time in 2011.[156] Regulations permit an airplane to transport 15 grams of fissionable material.[156] According to a senior advisor (seniorrådgiver) at Statens strålevern, such plutonium transport poses no problems.[156]

Notes

Footnotes

1. ^ The PuO₂⁺ ion is unstable in solution and will disproportionate into Pu⁴⁺ and PuO₂²⁺; the Pu⁴⁺ will then oxidize the remaining PuO₂⁺ to PuO₂²⁺, being reduced in turn to Pu³⁺. Thus, aqueous solutions of PuO₂⁺ tend over time towards a mixture of Pu³⁺ and PuO₂²⁺. UO₂⁺ is unstable for the same reason.[27]
2.
^ This was not the first time somebody suggested that an element be named "plutonium". A decade after barium was discovered, a Cambridge University professor suggested it be renamed "plutonium" because the element was not heavy, as the Greek root barys, from which it was named, suggested. He reasoned that, since it was produced by the relatively new technique of electrolysis, its name should refer to fire. Thus he suggested it be named for the Roman god of the underworld, Pluto.[64]
3. ^ As one article puts it, referring to information Seaborg gave in a talk: "The obvious choice for the symbol would have been Pl, but facetiously, Seaborg suggested Pu, like the words a child would exclaim, 'Pee-yoo!' when smelling something bad. Seaborg thought that he would receive a great deal of flak over that suggestion, but the naming committee accepted the symbol without a word."[66]
4. ^ Room 405 of the George Herbert Jones Laboratory, where the first isolation of plutonium took place, was named a National Historic Landmark in May 1967.
5. ^ During the Manhattan Project, plutonium was also often referred to as simply "49": the number 4 was for the last digit in 94 (atomic number of plutonium), and 9 was for the last digit in plutonium-239, the weapons-grade fissile isotope used in nuclear bombs.[74]
6. ^ The American Society of Mechanical Engineers (ASME) established B Reactor as a National Historic Mechanical Engineering Landmark in September 1976.[78] In August 2008, B Reactor was designated a U.S. National Historic Landmark.[79]
7. ^ The efficiency calculation is based on the fact that 1 kg of plutonium-239 (or uranium-235) fissioning results in an energy release of approximately 17 kt, leading to a rounded estimate of 1.2 kg plutonium actually fissioned to produce the 20 kt yield.[87]
8. ^ Much of this plutonium was used to make the fissionable cores of a type of thermonuclear weapon employing the Teller–Ulam design.
These so-called 'hydrogen bombs' are a variety of nuclear weapon that use a fission bomb to trigger the nuclear fusion of heavy hydrogen isotopes. Their destructive yield is commonly in the millions of tons of TNT equivalent, compared with the thousands of tons of TNT equivalent of fission-only devices.[92]

9. ^ Gadolinium zirconium oxide (Gd₂Zr₂O₇) has been studied because it could hold plutonium for up to 30 million years.[92]

10. ^ Breakdown of plutonium in a spent nuclear fuel rod: plutonium-239 (~58%), 240 (24%), 241 (11%), 242 (5%), and 238 (2%).[92]

11. ^ There was a major plutonium-initiated fire at the Rocky Flats Plant near Boulder, Colorado in 1969.[149]

Citations

1. ^ Calculated from the atomic weight and the atomic volume. The unit cell, containing 16 atoms, has a volume of 319.96 cubic Å, according to Siegfried S. Hecker (2000). "Plutonium and its alloys: from atoms to microstructure" (PDF). Los Alamos Science. 26: 331. This gives a density for 239Pu of (1.66053906660×10⁻²⁴ g/dalton × 239.0521634 daltons/atom × 16 atoms/unit cell)/(319.96 Å³/unit cell × 10⁻²⁴ cc/Å³), or 19.85 g/cc.

2. ^ a b c d "Plutonium, Radioactive". Wireless Information System for Emergency Responders (WISER). Bethesda (MD): U.S. National Library of Medicine, National Institutes of Health. Archived from the original on August 22, 2011. Retrieved November 23, 2008. (public domain text)

3. ^ "Nitric acid processing". Actinide Research Quarterly. Los Alamos (NM): Los Alamos National Laboratory (3rd quarter). 2008. Retrieved February 9, 2010. While plutonium dioxide is normally olive green, samples can be various colors. It is generally believed that the color is a function of chemical purity, stoichiometry, particle size, and method of preparation, although the color resulting from a given preparation method is not always reproducible.

4. ^ a b c Sonzogni, Alejandro A. (2008). "Chart of Nuclides". Upton: National Nuclear Data Center, Brookhaven National Laboratory.
Retrieved September 13, 2008. 5. Heiserman 1992, p. 338 6. ^ Rhodes 1986, pp. 659–660 Leona Marshall: "When you hold a lump of it in your hand, it feels warm, like a live rabbit" 7. ^ a b c d Miner 1968, p. 544 8. Hecker, Siegfried S. (2000). "Plutonium and its alloys: from atoms to microstructure" (PDF). Los Alamos Science. 26: 290–335. Retrieved February 15, 2009. 9. ^ Hecker, Siegfried S.; Martz, Joseph C. (2000). "Aging of Plutonium and Its Alloys" (PDF). Los Alamos Science. Los Alamos, New Mexico: Los Alamos National Laboratory (26): 242. Retrieved February 15, 2009. 10. ^ a b c d Baker, Richard D.; Hecker, Siegfried S.; Harbur, Delbert R. (1983). "Plutonium: A Wartime Nightmare but a Metallurgist's Dream" (PDF). Los Alamos Science. Los Alamos National Laboratory: 148, 150–151. Retrieved February 15, 2009. 11. Lide 2006, pp. 4–27 12. ^ a b c d Miner 1968, p. 542 13. ^ "Plutonium Crystal Phase Transitions". GlobalSecurity.org. 14. ^ "Glossary – Fissile material". United States Nuclear Regulatory Commission. November 20, 2014. Retrieved February 5, 2015. 15. ^ Asimov 1988, p. 905 16. ^ Glasstone, Samuel; Redman, Leslie M. (June 1972). "An Introduction to Nuclear Weapons" (PDF). Atomic Energy Commission Division of Military Applications. p. 12. WASH-1038. Archived from the original (PDF) on August 27, 2009. 17. ^ Gosling 1999, p. 40 18. ^ "Plutonium: The First 50 Years" (PDF). U.S. Department of Energy. 1996. DOE/DP-1037. Archived from the original (PDF) on February 18, 2013. 19. ^ Heiserman 1992, p. 340 20. ^ Kennedy, J. W.; Seaborg, G. T.; Segrè, E.; Wahl, A. C. (1946). "Properties of Element 94". Physical Review. 70 (7–8): 555–556. Bibcode:1946PhRv...70..555K. doi:10.1103/PhysRev.70.555. 21. ^ Greenwood 1997, p. 1259 22. ^ a b c Clark 1961, pp. 124–125. 23. ^ Seaborg, Glenn T.; McMillan, E.; Kennedy, J. W.; Wahl, A. C. (1946). "Radioactive Element 94 from Deuterons on Uranium". Physical Review. 69 (7–8): 366. Bibcode:1946PhRv...69..366S. 
doi:10.1103/PhysRev.69.366. 24. ^ Bernstein 2007, pp. 76–77. 25. ^ "Can Reactor Grade Plutonium Produce Nuclear Fission Weapons?". Council for Nuclear Fuel Cycle Institute for Energy Economics, Japan. May 2001. 26. ^ Heiserman 1992, p. 339 27. ^ Crooks, William J. (2002). "Nuclear Criticality Safety Engineering Training Module 10 – Criticality Safety in Material Processing Operations, Part 1" (PDF). Archived from the original (PDF) on March 20, 2006. Retrieved February 15, 2006. 28. ^ Matlack, George (2002). A Plutonium Primer: An Introduction to Plutonium Chemistry and its Radioactivity. Los Alamos National Laboratory. LA-UR-02-6594. 29. ^ Windorff, Cory J.; Chen, Guo P; Cross, Justin N; Evans, William J.; Furche, Filipp; Gaunt, Andrew J.; Janicke, Michael T.; Kozimor, Stosh A.; Scott, Brian L. (2017). "Identification of the Formal +2 Oxidation State of Plutonium: Synthesis and Characterization of {PuII[C5H3(SiMe3)2]3}". J. Am. Chem. Soc. 139 (11): 3970–3973. doi:10.1021/jacs.7b00706. PMID 28235179. 30. ^ a b Zaitsevskii, Andréi; Mosyagin, Nikolai S.; Titov, Anatoly V.; Kiselev, Yuri M. (July 21, 2013). "Relativistic density functional theory modeling of plutonium and americium higher oxide molecules". The Journal of Chemical Physics. 139 (3): 034307. doi:10.1063/1.4813284. 31. ^ Kiselev, Yu. M.; Nikonov, M. V.; Dolzhenko, V. D.; Ermilov, A. Yu.; Tananaev, I. G.; Myasoedov, B. F. (January 17, 2014). "On existence and properties of plutonium(VIII) derivatives". Radiochimica Acta. 102 (3). doi:10.1515/ract-2014-2146. 32. ^ Eagleson 1994, p. 840 33. Miner 1968, p. 545 34. Emsley 2001, pp. 324–329 35. ^ a b c "Primer on Spontaneous Heating and Pyrophoricity – Pyrophoric Metals – Plutonium". Washington (DC): U.S. Department of Energy, Office of Nuclear Safety, Quality Assurance and Environment. 1994. Archived from the original on April 28, 2007. 36. ^ Crooks, W. J.; et al. (2002). "Low Temperature Reaction of ReillexTM HPQ and Nitric Acid". 
Solvent Extraction and Ion Exchange. 20 (4–5): 543–559. doi:10.1081/SEI-120014371. 37. ^ a b Dumé, Belle (November 20, 2002). "Plutonium is also a superconductor". PhysicsWeb.org. 38. ^ Moody, Hutcheon & Grant 2005, p. 169 39. ^ Kolman, D. G. & Colletti, L. P. (2009). "The aqueous corrosion behavior of plutonium metal and plutonium–gallium alloys exposed to aqueous nitrate and chloride solutions". ECS Transactions. Electrochemical Society. 16 (52): 71. Bibcode:2009ECSTr..16Z..71K. doi:10.1149/1.3229956. ISBN 978-1-56677-751-3. 40. ^ Hurst & Ward 1956 41. ^ Curro, N. J. (Spring 2006). "Unconventional superconductivity in PuCoGa5" (PDF). Los Alamos National Laboratory. Archived from the original (PDF) on July 22, 2011. Retrieved January 24, 2010. 42. ^ McCuaig, Franklin D. "Pu–Zr alloy for high-temperature foil-type fuel" U.S. Patent 4,059,439, Issued on November 22, 1977 43. ^ Jha 2004, p. 73 44. ^ a b c Kay 1965, p. 456 45. ^ a b c d Miner 1968, p. 541 46. ^ "Oklo: Natural Nuclear Reactors". U.S. Department of Energy, Office of Civilian Radioactive Waste Management. 2004. Archived from the original on October 20, 2008. Retrieved November 16, 2008. 47. ^ Curtis, David; Fabryka-Martin, June; Paul, Dixon; Cramer, Jan (1999). "Nature's uncommon elements: plutonium and technetium". Geochimica et Cosmochimica Acta. 63 (2): 275–285. Bibcode:1999GeCoA..63..275C. doi:10.1016/S0016-7037(98)00282-8. 48. ^ Bernstein 2007, pp. 75–77. 49. ^ Hoffman, D. C.; Lawrence, F. O.; Mewherter, J. L.; Rourke, F. M. (1971). "Detection of Plutonium-244 in Nature". Nature. 234 (5325): 132–134. Bibcode:1971Natur.234..132H. doi:10.1038/234132a0. 50. ^ Peterson, Ivars (December 7, 1991). "Uranium displays rare type of radioactivity". Science News. Wiley-Blackwell. 140 (23): 373. doi:10.2307/3976137. JSTOR 3976137. 51. ^ Hoffman, D. C.; Lawrence, F. O.; Mewherter, J. L.; Rourke, F. M. (1971). "Detection of Plutonium-244 in Nature". Nature. 234 (5325): 132–134. Bibcode:1971Natur.234..132H. 
doi:10.1038/234132a0. Nr. 34. 52. ^ Turner, Grenville; Harrison, T. Mark; Holland, Greg; Mojzsis, Stephen J.; Gilmour, Jamie (January 1, 2004). "Extinct 244Pu in Ancient Zircons" (PDF). Science. 306 (5693): 89–91. Bibcode:2004Sci...306...89T. doi:10.1126/science.1101014. JSTOR 3839259. PMID 15459384. 53. ^ Hutcheon, I. D.; Price, P. B. (January 1, 1972). "Plutonium-244 Fission Tracks: Evidence in a Lunar Rock 3.95 Billion Years Old". Science. 176 (4037): 909–911. Bibcode:1972Sci...176..909H. doi:10.1126/science.176.4037.909. JSTOR 1733798. PMID 17829301. 54. ^ Kunz, Joachim; Staudacher, Thomas; Allègre, Claude J. (January 1, 1998). "Plutonium-Fission Xenon Found in Earth's Mantle". Science. 280 (5365): 877–880. Bibcode:1998Sci...280..877K. doi:10.1126/science.280.5365.877. JSTOR 2896480. 55. ^ Wallner, A.; Faestermann, T.; Feige, J.; Feldstein, C.; Knie, K.; Korschinek, G.; Kutschera, W.; Ofan, A.; Paul, M.; Quinto, F.; Rugel, G.; Steiner, P. (March 30, 2014). "Abundance of live 244Pu in deep-sea reservoirs on Earth points to rarity of actinide nucleosynthesis". Nature Communications. 6: 5956. arXiv:1509.08054. Bibcode:2015NatCo...6E5956W. doi:10.1038/ncomms6956. 56. ^ Holden, Norman E. (2001). "A Short History of Nuclear Data and Its Evaluation". 51st Meeting of the USDOE Cross Section Evaluation Working Group. Upton (NY): National Nuclear Data Center, Brookhaven National Laboratory. Retrieved January 3, 2009. 57. ^ Fermi, Enrico (December 12, 1938). "Artificial radioactivity produced by neutron bombardment: Nobel Lecture" (PDF). Royal Swedish Academy of Sciences. 58. ^ Darden, Lindley (1998). "The Nature of Scientific Inquiry". College Park: Department of Philosophy, University of Maryland. Retrieved January 3, 2008. 59. ^ Bernstein 2007, pp. 44–52. 60. ^ Seaborg, Glenn T. "An Early History of LBNL: Elements 93 and 94". Advanced Computing for Science Department, Lawrence Berkeley National Laboratory. Retrieved September 17, 2008. 61. ^ a b Glenn T. Seaborg. 
"The plutonium story". Lawrence Berkeley Laboratory, University of California. LBL-13492, DE82 004551. 62. ^ E. Segrè, A Mind Always in Motion, University of California Press, 1993, pp 162-169 63. ^ Seaborg & Seaborg 2001, pp. 71–72. 64. ^ Heiserman 1992, p. 338. 65. ^ Clark, David L.; Hobart, David E. (2000). "Reflections on the Legacy of a Legend: Glenn T. Seaborg, 1912–1999" (PDF). Los Alamos Science. 26: 56–61, on 57. Retrieved February 15, 2009. 66. ^ Clark, David L.; Hobart, David E. (2000). "Reflections on the Legacy of a Legend: Glenn T. Seaborg, 1912–1999" (PDF). Los Alamos Science. 26: 56–61, on 57. Retrieved February 15, 2009. 67. ^ "Frontline interview with Seaborg". Frontline. Public Broadcasting Service. 1997. Retrieved December 7, 2008. 68. ^ Glenn T. Seaborg (1977). "History of MET Lab Section C-I, April 1942 – April 1943". Office of Scientific & Technical Information Technical Reports. California Univ., Berkeley (USA). Lawrence Berkeley Lab. doi:10.2172/7110621. 69. ^ "Room 405, George Herbert Jones Laboratory". National Park Service. Archived from the original on February 8, 2008. Retrieved December 14, 2008. 70. ^ a b c "Periodic Table of Elements". Los Alamos National Laboratory. Retrieved September 15, 2015. 71. ^ Miner 1968, p. 540 72. ^ "Plutonium". Atomic Heritage Foundation. Retrieved September 15, 2015. 73. ^ "Site Selection". LANL History. Los Alamos, New Mexico: Los Alamos National Laboratory. Retrieved December 23, 2008. 74. ^ Hammel, E. F. (2000). "The taming of "49"  – Big Science in little time. Recollections of Edward F. Hammel, In: Cooper N.G. Ed. Challenges in Plutonium Science" (PDF). Los Alamos Science. 26 (1): 2–9. Retrieved February 15, 2009. Hecker, S. S. (2000). "Plutonium: an historical overview. In: Challenges in Plutonium Science". Los Alamos Science. 26 (1): 1–2. Retrieved February 15, 2009. 75. ^ Sublette, Carey. "Atomic History Timeline 1942–1944". Washington (DC): Atomic Heritage Foundation. 
Retrieved December 22, 2008. 76. ^ Hoddeson et al. 1993, pp. 235–239. 77. ^ a b Hoddeson et al. 1993, pp. 240–242. 78. ^ Wahlen 1989, p. 1. 79. ^ "Weekly List Actions". National Park Service. August 29, 2008. Retrieved August 30, 2008. 80. ^ Wahlen 1989, p. iv, 1 81. ^ a b Lindley, Robert (2013). "Kate Brown: Nuclear "Plutopias" the Largest Welfare Program in American History". History News Network. 82. ^ Rincon, Paul (March 2, 2009). "BBC NEWS – Science & Environment – US nuclear relic found in bottle". BBC News. Retrieved March 2, 2009. 83. ^ Gebel, Erika (2009). "Old plutonium, new tricks". Analytical Chemistry. 81 (5): 1724. doi:10.1021/ac900093b. 84. ^ Schwantes, Jon M.; Matthew Douglas; Steven E. Bonde; James D. Briggs; et al. (2009). "Nuclear archeology in a bottle: Evidence of pre-Trinity U.S. weapons activities from a waste burial site". Analytical Chemistry. 81 (4): 1297–1306. doi:10.1021/ac802286a. PMID 19152306. 85. ^ Sublette, Carey (July 3, 2007). "8.1.1 The Design of Gadget, Fat Man, and "Joe 1" (RDS-1)". Nuclear Weapons Frequently Asked Questions, edition 2.18. The Nuclear Weapon Archive. Retrieved January 4, 2008. 86. ^ a b Malik, John (September 1985). "The Yields of the Hiroshima and Nagasaki Explosions" (PDF). Los Alamos. p. Table VI. LA-8819. Retrieved February 15, 2009. 87. ^ On the figure of 1 kg = 17 kt, see Garwin, Richard (October 4, 2002). "Proliferation of Nuclear Weapons and Materials to State and Non-State Actors: What It Means for the Future of Nuclear Power" (PDF). University of Michigan Symposium. Federation of American Scientists. Retrieved January 4, 2009. 88. ^ Sklar 1984, pp. 22–29. 89. ^ Bernstein 2007, p. 70. 90. ^ "Historic American Engineering Record: B Reactor (105-B Building)". Richland: U.S. Department of Energy. 2001. p. 110. DOE/RL-2001-16. Retrieved December 24, 2008. 91. ^ Cochran, Thomas B. (1997). Safeguarding nuclear weapons-usable materials in Russia (PDF). International Forum on Illegal Nuclear Traffic. 
Washington (DC): Natural Resources Defense Council, Inc. Archived from the original (PDF) on July 5, 2013. Retrieved December 21, 2008. 92. ^ a b c 93. ^ 94. ^ Poet, S. E.; Martell, EA (October 1972). "Plutonium-239 and americium-241 contamination in the Denver area". Health Physics. 23 (4): 537–48. doi:10.1097/00004032-197210000-00012. PMID 4634934. 95. ^ Johnson, C. J. (October 1981). "Cancer Incidence in an area contaminated with radionuclides near a nuclear installation". Ambio. 10 (4): 176–182. JSTOR 4312671. PMID 7348208. Reprinted in Johnson, C. J (October 1981). "Cancer Incidence in an area contaminated with radionuclides near a nuclear installation". Colo Med. 78 (10): 385–92. PMID 7348208. 96. ^ "Rocky Flats National Wildlife Refuge". U.S. Fish & Wildlife Service. Retrieved July 2, 2013. 97. ^ Press Secretary (July 23, 2002). "President Signs Yucca Mountain Bill". Washington (DC): Office of the Press Secretary, White House. Archived from the original on March 6, 2008. Retrieved February 9, 2015. 98. ^ Hebert, H. Josef (March 6, 2009). "Nuclear waste won't be going to Nevada's Yucca Mountain, Obama official says". Chicago Tribune. p. 4. Archived from the original on March 24, 2011. Retrieved March 17, 2011. 99. ^ "About the Commission". Archived from the original on June 21, 2011. 100. ^ a b Blue Ribbon Commission on America’s Nuclear Future. "Disposal Subcommittee Report to the Full Commission" (PDF). Archived from the original (PDF) on January 25, 2017. Retrieved February 26, 2017. 101. ^ a b c Moss, William; Eckhardt, Roger (1995). "The Human Plutonium Injection Experiments" (PDF). Los Alamos Science. Los Alamos National Laboratory. 23: 188, 205, 208, 214. Retrieved June 6, 2006. 102. ^ a b Voelz, George L. (2000). "Plutonium and Health: How great is the risk?". Los Alamos Science. Los Alamos (NM): Los Alamos National Laboratory (26): 78–79. 103. ^ a b Longworth, R. C. (November–December 1999). "Injected! 
Book review: The Plutonium Files: America's Secret Medical Experiments in the Cold War". The Bulletin of the Atomic Scientists. 55 (6): 58–61. doi:10.2968/055006016. 104. ^ Moss, William, and Roger Eckhardt. (1995). "The human plutonium injection experiments." Los Alamos Science. 23: 177–233. 105. ^ Openness, DOE. (June 1998). Human Radiation Experiments: ACHRE Report. Chapter 5: The Manhattan district Experiments; the first injection. Washington, DC. Superintendent of Documents US Government Printing Office. 106. ^ AEC no. UR-38, 1948 Quarterly Technical Report 107. ^ Yesley, Michael S. (1995). "'Ethical Harm' and the Plutonium Injection Experiments" (PDF). Los Alamos Science. 23: 280–283. Retrieved February 15, 2009. 108. ^ Martin 2000, p. 532. 109. ^ a b "Nuclear Weapon Design". Federation of American Scientists. 1998. Archived from the original on December 26, 2008. Retrieved December 7, 2008. 110. ^ a b "Mixed Oxide (MOX) Fuel". London (UK): World Nuclear Association. 2006. Retrieved December 14, 2008. 111. ^ Till & Chang 2011, pp. 254–256. 112. ^ Till & Chang 2011, p. 15. 113. ^ a b "Plutonium Storage at the Department of Energy's Savannah River Site: First Annual Report to Congress" (PDF). Defense Nuclear Facilities Safety Board. 2004. pp. A–1. Retrieved February 15, 2009. 114. ^ Besmann, Theodore M. (2005). "Thermochemical Behavior of Gallium in Weapons-Material-Derived Mixed-Oxide Light Water Reactor (LWR) Fuel". Journal of the American Ceramic Society. 81 (12): 3071–3076. doi:10.1111/j.1151-2916.1998.tb02740.x. 115. ^ a b c d "Plutonium". World Nuclear Association. March 2009. Retrieved February 28, 2010. 116. ^ "Science for the Critical Masses: How Plutonium Changes with Time". Institute for Energy and Environmental Research. 117. ^ a b "From heat sources to heart sources: Los Alamos made material for plutonium-powered pumper". Actinide Research Quarterly. Los Alamos: Los Alamos National Laboratory (1). 2005. 
Archived from the original on February 16, 2013. Retrieved February 15, 2009. 118. ^ "Why the Cassini Mission Cannot Use Solar Arrays" (PDF). NASA/JPL. December 6, 1996. Archived from the original (PDF) on February 26, 2015. Retrieved March 21, 2014. 119. ^ St. Fleur, Nicholas, "The Radioactive Heart of the New Horizons Spacecraft to Pluto", New York Times, August 7, 2015. The "craft's 125-pound generator [is] called the General Purpose Heat Source-Radioisotope Thermoelectric Generator. [It] was stocked with 24 pounds of plutonium that produced about 240 watts of electricity when it left Earth in 2006, according to Ryan Bechtel, an engineer from the Department of Energy who works on space nuclear power. During the Pluto flyby the battery produced 202 watts, Mr. Bechtel said. The power will continue to decrease as the metal decays, but there is enough of it to command the probe for another 20 years, according to Curt Niebur, a NASA program scientist on the New Horizons mission." Retrieved 2015-08-10. 120. ^ Mosher, Dave (September 19, 2013). "NASA's Plutonium Problem Could End Deep-Space Exploration". Wired. Retrieved February 5, 2015. 121. ^ "Voyager-Spacecraft Lifetime". Jet Propulsion Laboratory. June 11, 2014. Retrieved February 5, 2015. 122. ^ Venkateswara Sarma Mallela; V. Ilankumaran & N.Srinivasa Rao (2004). "Trends in Cardiac Pacemaker Batteries". Indian Pacing Electrophysiol. 4 (4): 201–212. PMC 1502062. PMID 16943934. 123. ^ "Plutonium Powered Pacemaker (1974)". Oak Ridge Associated Universities. Retrieved February 6, 2015. 124. ^ "Plutonium Powered Pacemaker (1974)". Oak Ridge: Orau.org. 2011. Retrieved February 1, 2015. 125. ^ "Nuclear pacemaker still energized after 34 years". December 19, 2007. Retrieved March 14, 2019. 126. ^ Bayles, John J.; Taylor, Douglas (1970). SEALAB III – Diver's Isotopic Swimsuit-Heater System (Report). Port Hueneme: Naval Civil Engineering Lab. AD0708680. 127. ^ "Toxicological Profile for Plutonium" (PDF). U.S. 
Department of Health and Human Services, Agency for Toxic Substances and Disease Registry (ATSDR). November 2010. Retrieved February 9, 2015. 128. ^ Little, M. P. (June 2009). "Cancer and non-cancer effects in Japanese atomic bomb survivors". J Radiol Prot. 29 (2A): A43–59. Bibcode:2009JRP....29...43L. doi:10.1088/0952-4746/29/2A/S04. PMID 19454804. 129. ^ "Plutonium, CAS ID #: 7440-07-5". Centers for Disease Control and Prevention (CDC) Agency for Toxic Substances and Disease Registry. Retrieved February 5, 2015. 130. ^ Grellier, James; Atkinson, Will; Bérard, Philippe; Bingham, Derek; Birchall, Alan; Blanchardon, Eric; Bull, Richard; Guseva Canu, Irina; Challeton-de Vathaire, Cécile; Cockerill, Rupert; Do, Minh T; Engels, Hilde; Figuerola, Jordi; Foster, Adrian; Holmstock, Luc; Hurtgen, Christian; Laurier, Dominique; Puncher, Matthew; Riddell, Tony; Samson, Eric; Thierry-Chef, Isabelle; Tirmarche, Margot; Vrijheid, Martine; Cardis, Elisabeth (2017). "Risk of lung cancer mortality in nuclear workers from internal exposure to alpha particle-emitting radionuclides". Epidemiology. 28 (5): 675–684. doi:10.1097/EDE.0000000000000684. PMC 5540354. PMID 28520643. 131. ^ "Radiological control technical training" (PDF). U.S. Department of Energy. Archived from the original (PDF) on June 30, 2007. Retrieved December 14, 2008. 132. ^ a b c Cohen, Bernard L. "The Myth of Plutonium Toxicity". Archived from the original on August 26, 2011. 133. ^ Cohen, Bernard L. (May 1977). "Hazards from Plutonium Toxicity". The Radiation Safety Journal: Health Physics. 32 (5): 359–379. doi:10.1097/00004032-197705000-00003. PMID 881333. S2CID 46325265. 134. ^ Brown, Shannon C.; Margaret F. Schonbeck; David McClure; et al. (July 2004). "Lung cancer and internal lung doses among plutonium workers at the Rocky Flats Plant: a case-control study". American Journal of Epidemiology. Oxford Journals. 160 (2): 163–172. doi:10.1093/aje/kwh192. PMID 15234938. Retrieved February 15, 2009. 135. 
^ "ANL human health fact sheet—plutonium" (PDF). Argonne National Laboratory. 2001. Archived from the original (PDF) on February 16, 2013. Retrieved June 16, 2007. 136. ^ "Radiation Protection, Plutonium: What does plutonium do once it gets into the body?". U.S. Environmental Protection Agency. Retrieved March 15, 2011. 137. ^ "Did Ralph Nader say that a pound of plutonium could cause 8 billion cancers?". Retrieved January 3, 2013. 138. ^ a b Bernard L. Cohen. "The Nuclear Energy Option, Chapter 13, Plutonium and Bombs". Retrieved March 28, 2011. (Online version of Cohen's book The Nuclear Energy Option (Plenum Press, 1990) ISBN 0-306-43567-5). 139. ^ Voelz, G. L. (1975). "What We Have Learned About Plutonium from Human Data". The Radiation Safety Journal Health Physics: 29. 140. ^ a b Skwarzec, B; Struminska, D; Borylo, A (2001). "Bioaccumulation and distribution of plutonium in fish from Gdansk Bay". Journal of Environmental Radioactivity. 55 (2): 167–178. doi:10.1016/s0265-931x(00)00190-9. PMID 11398376. 141. ^ Baxter, M; Fowler, S; Povined, P (1995). "Observations on plutonium in the oceans". Applied Radiation and Isotopes. 46 (11): 1213–1223. doi:10.1016/0969-8043(95)00163-8. 142. ^ Lerebours, A; Gudkov, D; Nagorskaya, L; Kaglyan, A; Rizewski, V; Leshchenko, A (2018). "Impact of Environmental Radiation on the Health and Reproductive Status of Fish from Chernobyl". Environmental Science & Technology. 52 (16): 9442–9450. Bibcode:2018EnST...52.9442L. doi:10.1021/acs.est.8b02378. PMID 30028950. 143. ^ Till, John E.; Kaye, S. V.; Trabalka, J. R. (1976). "The Toxicity of Uranium and Plutonium to the Developing Embryos of Fish". Oak Ridge National Laboratory: 187. 144. ^ Miner 1968, p. 546 145. ^ Roark, Kevin N. (2000). "Criticality accidents report issued". Los Alamos (NM): Los Alamos National Laboratory. Archived from the original on October 8, 2008. Retrieved November 16, 2008. 146. ^ Hunner 2004, p. 85. 147. ^ "Raemer Schreiber". Staff Biographies. 
Los Alamos: Los Alamos National Laboratory. Archived from the original on January 3, 2013. Retrieved November 16, 2008. 148. ^ 149. ^ Albright, David; O'Neill, Kevin (1999). "The Lessons of Nuclear Secrecy at Rocky Flats". ISIS Issue Brief. Institute for Science and International Security (ISIS). Archived from the original on July 8, 2008. Retrieved December 7, 2008. 150. ^ "Transport of Radioactive Materials". World Nuclear Association. Retrieved February 6, 2015. 151. ^ "§ 71.63 Special requirement for plutonium shipments". United States Nuclear Regulatory Commission. Retrieved February 6, 2015. 152. ^ "Pacific Egret". Retrieved March 22, 2016. 153. ^ Yamaguchi, Mari. "Two British ships arrive in Japan to carry plutonium to US". Retrieved March 22, 2016. 154. ^ "Two British ships arrive in Japan to transport plutonium for storage in U.S." Retrieved March 22, 2016. 155. ^ "Part 175.704 Plutonium shipments". Code of Federal Regulations 49 — Transportation. Retrieved August 1, 2012. 156. ^ a b c Av Ida Søraunet Wangberg og Anne Kari Hinna. "Klassekampen : Flyr plutonium med rutefly". Klassekampen.no. Retrieved August 13, 2012.
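The disproportionation described in footnote 1 can be summarized by a balanced net reaction. This is standard actinide redox stoichiometry consistent with the footnote's end state (a mixture of Pu³⁺ and PuO₂²⁺), not a formula quoted from the cited source [27]:

```latex
% Net disproportionation of Pu(V) in acidic aqueous solution:
% three Pu(V) yield one Pu(III) and two Pu(VI).
% Charge balance: left 3(+1) + 4(+1) = +7; right +3 + 2(+2) = +7.
\[
  3\,\mathrm{PuO_2^{+}} + 4\,\mathrm{H^{+}} \longrightarrow
  \mathrm{Pu^{3+}} + 2\,\mathrm{PuO_2^{2+}} + 2\,\mathrm{H_2O}
\]
```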
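Citation 1's density arithmetic and footnote 7's yield estimate are both one-line computations. A short script (illustrative only; the constants are copied from the text above and are not part of the cited sources) reproduces the quoted figures:

```python
# Reproduce two back-of-envelope figures quoted in the notes and citations.

G_PER_DALTON = 1.66053906660e-24   # grams per dalton (atomic mass constant)
MASS_PU239_DA = 239.0521634        # atomic mass of Pu-239 in daltons
ATOMS_PER_CELL = 16                # atoms in the Pu unit cell (citation 1)
CELL_VOLUME_A3 = 319.96            # unit-cell volume in cubic angstroms
CC_PER_A3 = 1e-24                  # cubic centimeters per cubic angstrom

# Citation 1: density of Pu-239 from the unit cell, expected ~19.85 g/cc.
density = (G_PER_DALTON * MASS_PU239_DA * ATOMS_PER_CELL) / (
    CELL_VOLUME_A3 * CC_PER_A3
)
print(f"Pu-239 density: {density:.2f} g/cc")  # -> 19.85

# Footnote 7: mass of plutonium fissioned for a 20 kt yield, using the
# rule of thumb that fissioning 1 kg releases about 17 kt.
fissioned_kg = 20.0 / 17.0
print(f"Plutonium fissioned for 20 kt yield: {fissioned_kg:.1f} kg")  # -> 1.2
```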
## Question

A first-order linear equation of the form y′ + p(x)y = f(x) can be solved by finding an integrating factor μ(x) = exp(∫ p(x) dx).

(1) Given the equation 3y′ + 12y = 6, find p(x) and the integrating factor μ(x). Then find an explicit general solution with arbitrary constant C. Then solve the initial value problem with y(0) = 1.
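A sketch of the integrating-factor solution, assuming the garbled equation reads 3y′ + 12y = 6 (the operator between the two terms was lost in extraction):

```latex
% Assuming 3y' + 12y = 6, i.e. y' + 4y = 2, so p(x) = 4
% and the integrating factor is \mu(x) = e^{\int 4\,dx} = e^{4x}.
\begin{align*}
  \left(e^{4x}y\right)' &= 2e^{4x} \\
  e^{4x}y &= \tfrac{1}{2}e^{4x} + C \\
  y &= \tfrac{1}{2} + Ce^{-4x}
\end{align*}
% With y(0) = 1: 1 = 1/2 + C, so C = 1/2 and y = 1/2 + (1/2)e^{-4x}.
```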
##### (c) Fina the lndedloi Doint?(Ema#vzlue}(larger #evalue)Find 2h2 interyekthe goeoh (oiccve upincid (znterDaslt uslna interyal notetloc )Flnd thz Interyzl #here t2 prohconcae roknird (enter Kov TAthO usIng Intervl rotztlon:} (c) Fina the lndedloi Doint? (Ema #vzlue} (larger #evalue) Find 2h2 interyek the goeoh (oiccve upincid (znter Daslt uslna interyal notetloc ) Flnd thz Interyzl #here t2 proh concae roknird (enter Kov TAthO usIng Intervl rotztlon:}... ##### Glorious Gadgets is a retailer of astronomy equipment_ They purchase equipment from a supplier and then sell it to customers in their store. The function C(z) 4.51 12375. 8250 models their total inventory costs (in dollars) as a function of z the lot size for each of their orders from the supplier . The inventory costs include such things a5 purchasing, processing, eshipping,enascoring one equipperer What lot size should Glorious Gadgets order to minimize their total inventory costs? (NOTE: you Glorious Gadgets is a retailer of astronomy equipment_ They purchase equipment from a supplier and then sell it to customers in their store. The function C(z) 4.51 12375. 8250 models their total inventory costs (in dollars) as a function of z the lot size for each of their orders from the supplier .... ##### QUESTION 6 (INT) The following generalized integral, with seguente integrale b-6 generalizzato, and €-7 converges con b-6ec7 if and only ifa > converge se € solo se a > dx(round to not less than three decimals) (approssimare almeno alla terza cifra decimale) QUESTION 6 (INT) The following generalized integral, with seguente integrale b-6 generalizzato, and €-7 converges con b-6ec7 if and only ifa > converge se € solo se a > dx (round to not less than three decimals) (approssimare almeno alla terza cifra decimale)... ##### Points) Evaluate the indefinite integral. 5x f6 dx+C points) Evaluate the indefinite integral. 5x f6 dx +C... ##### Explore the limit graphically. Confirm algebraically. 
$$\lim _{x \rightarrow 1} \frac{x-1}{x^{2}-1}$$ Explore the limit graphically. Confirm algebraically. $$\lim _{x \rightarrow 1} \frac{x-1}{x^{2}-1}$$... ##### LaM =3,M =38.Write Matlab program calculating(Inwvf(x)[+Ni evz((cor ("]Ic- Write Matlab program_to find_quadratic_and theparabola that bcst epproximate thc data points MOQ 4AZ N 1o2242) Cedl Sin^2) Si40 422 N2 4001 NHN IaN2 LeglOa Ce.^2A42 NuinhaId- Write" Matlab code to fit an exponential mode] and Power model t0 the given data set P.S: For 1b,c, qucstions, please use plot, subplot commcnts:2 Briefly summary this paper: LSQR_Sparsc Lincar Equations and Least squares Problems (Paige an laM =3,M =38. Write Matlab program calculating (Inwv f(x) [+Ni evz((cor ("] Ic- Write Matlab program_to find_quadratic_and theparabola that bcst epproximate thc data points MOQ 4AZ N 1o2242) Cedl Sin^2) Si40 422 N2 4001 NHN IaN2 LeglOa Ce.^2A42 Nuinha Id- Write" Matlab code to fit an expon... ##### Question Calculate [H;0+] Calculate 14 [H;0+] of 29 [H;0+] [Hz0+] 7.128 89 L for for 0IX 8 40 X 8 10 10-} MKOH M HBr solution. solution 1 {J Give Upl0IA0 1 1 prvcy Polcy 3 3 11OH i|9 U uncomect a| valuc First Question Calculate [H;0+] Calculate 14 [H;0+] of 29 [H;0+] [Hz0+] 7.128 89 L for for 0IX 8 40 X 8 10 10-} MKOH M HBr solution. solution 1 {J Give Upl 0 IA 0 1 1 prvcy Polcy 3 3 1 1 OH i|9 U uncomect a| valuc First... ##### Find a power series representation for the function f(x) =x/(4x+1) & its radius of convergence Find a power series representation for the function f(x) = x/(4x+1) & its radius of convergence... ##### Find the number of bricks needed to fully cover the givenarea:Brick Dimentions:3.5in W X 7in L Find the number of bricks needed to fully cover the given area: Brick Dimentions: 3.5in W X 7in L... ##### 3. (a) Solve the given differential equationIzp' [email protected] Wah Mis)siiz 3. (a) Solve the given differential equation Izp' [email protected] Wah Mis)siiz... 0zpoihts pArneal ANSWERS LAACALCETT Vallet... 
##### A company produces steel rods. The lengths of the steel rods are normally distributed with a mean of 223 cm and a standard deviation of 2.5 cm. For shipment, 16 steel rods are bundled together. Round all answers to four decimal places if necessary. a. What is the distribution of X? b. What is the distribution of x̄? c. For a single randomly selected steel rod, find the probability that the length is between 223.2 cm and 223.5 cm. d. For a bundle of 16 rods, find the probability that the average length is between 223.2 cm and 223.5 cm....

##### Determine if the graph is planar....
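Parts (c) and (d) of the steel-rod problem above reduce to normal CDF differences (for the bundle mean, the standard deviation shrinks to 2.5/√16 = 0.625). A sketch using Python's standard-library `statistics.NormalDist`:

```python
from statistics import NormalDist

# Single rod: X ~ N(mu=223, sigma=2.5)
X = NormalDist(mu=223, sigma=2.5)
p_single = X.cdf(223.5) - X.cdf(223.2)

# Mean of a bundle of n=16 rods: Xbar ~ N(223, 2.5/sqrt(16)) = N(223, 0.625)
Xbar = NormalDist(mu=223, sigma=2.5 / 16 ** 0.5)
p_bundle = Xbar.cdf(223.5) - Xbar.cdf(223.2)

print(round(p_single, 4))  # 0.0474
print(round(p_bundle, 4))  # 0.1626
```

The averaging step is why (d) comes out larger: the sampling distribution of the mean is tighter around 223, so more of its mass sits in the narrow interval just above the mean.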
2022-10-07 02:57:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6722238063812256, "perplexity": 14974.766097449468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337906.7/warc/CC-MAIN-20221007014029-20221007044029-00055.warc.gz"}
http://mathhelpforum.com/latex-help/173825-not-equivalent-sign-print.html
# Not equivalent sign • March 8th 2011, 02:36 AM hmmmm Not equivalent sign How do I get a not equivalent sign e.g. $\equiv$ with a / through it. Thanks for any help. • March 8th 2011, 02:39 AM Plato Quote: Originally Posted by hmmmm How do I get a not equivalent sign e.g. $\equiv$ with a / through it. $$\not \equiv$$ gives $\not \equiv$
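As a general pattern, `\not` can be prefixed to most relation symbols to negate them (standard LaTeX; a small illustrative sketch):

```latex
x \not\equiv y \pmod{n}  % not congruent
A \not\subseteq B        % not a subset (amssymb's \nsubseteq also works)
a \not\in S              % LaTeX also provides the dedicated \notin
```

For some relations a dedicated negated symbol exists (`\ne`, `\notin`, `\nsubseteq`) and usually has slightly better spacing than the `\not` overlay.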
2014-09-19 16:49:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.794321596622467, "perplexity": 5481.090245642113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131545.81/warc/CC-MAIN-20140914011211-00300-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://math.stackexchange.com/questions/1086187/number-of-n-permutations-for-which-tauk-id
# Number of $n$-permutations for which ${\tau}^k = id$ I am curious about the formula (any closed form) for the number of $n$-permutations $\tau$ such that ${\tau}^{n-1} = id$. How about for the case ${\tau}^n = id$? • If $n$ is prime, there should be $(n-1)!+1$ of the form $\tau^n=\text{id}$, is that correct? Dec 31, 2014 at 0:06 • The OEIS entry A074759 is relevant. It was found with the exponential generating function $$n! [z^n]\exp\left(\sum_{d|n} \frac{z^d}{d}\right).$$ Dec 31, 2014 at 0:21 • You are right. I mean, a general formula. I know the first few numbers from A008307 (OEIS), where the diagonal elements are those. – hkju Dec 31, 2014 at 0:22 • Differentiate the generating function to obtain a recurrence relation. Dec 31, 2014 at 0:24 • What is the generating function for that? – hkju Dec 31, 2014 at 0:40
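Following the comment's suggestion: differentiating the EGF $\exp\left(\sum_{d|k} z^d/d\right)$ gives the recurrence $a(m) = \sum_{d \mid k,\, d \le m} (m-1)(m-2)\cdots(m-d+1)\, a(m-d)$ with $a(0)=1$. A sketch in Python (function names are mine), cross-checked by brute force for small $n$:

```python
from itertools import permutations
from math import perm

def count_kth_roots_of_id(n, k):
    """Number of permutations tau in S_n with tau**k == identity.

    Recurrence from differentiating the EGF exp(sum_{d|k} z^d/d):
        a(m) = sum over d | k, d <= m of (m-1)(m-2)...(m-d+1) * a(m-d).
    math.perm(m-1, d-1) is exactly the falling factorial (m-1)...(m-d+1).
    """
    divisors = [d for d in range(1, k + 1) if k % d == 0]
    a = [1] * (n + 1)
    for m in range(1, n + 1):
        a[m] = sum(perm(m - 1, d - 1) * a[m - d] for d in divisors if d <= m)
    return a[n]

def brute_force(n, k):
    """Direct count over S_n, for validating the recurrence on small n."""
    count = 0
    for p in permutations(range(n)):
        q = list(range(n))
        for _ in range(k):          # compose p with itself k times
            q = [p[i] for i in q]
        count += q == list(range(n))
    return count
```

For prime $k = n$ this reproduces the first comment's $(n-1)!+1$ (identity plus the $(n-1)!$ $n$-cycles), e.g. `count_kth_roots_of_id(5, 5)` returns 25.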
2022-06-30 06:57:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7870951890945435, "perplexity": 332.17585230501555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103669266.42/warc/CC-MAIN-20220630062154-20220630092154-00039.warc.gz"}
http://www.reddit.com/r/cheatatmathhomework/comments/11fopw/real_analysis_prove_the_sequence_is_convergent_or/
Could I just use the nth term test and show that it approaches infinity and hence it is divergent?

n² > n² − 1 = (n−1)(n+1), so n²/(n+1) > n−1, which goes to infinity as n goes to infinity.

In general, the limit of p(x)/q(x) for polynomials p, q as x tends to infinity is given by one of three expressions: positive or negative infinity when deg p > deg q; the ratio of the leading coefficients when deg p = deg q; and 0 when deg p < deg q.

Is it a sequence or a series? All of the comments are discussing series, but from what you've written the question seems to be about the sequence { n²/(n+1) } for n = 0 to infinity.

A direct proof is much easier: Take any real number M. You want to find an N such that n > N implies n²/(n+1) > M. This latter inequality is equivalent to n² − Mn − M > 0; this is a quadratic inequality that can be solved for n.
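The quadratic in the last comment can be solved explicitly: n² − Mn − M > 0 whenever n exceeds the larger root (M + √(M² + 4M))/2, so the ceiling of that root works as N. A quick numerical check of this choice (helper name is mine):

```python
from math import ceil, sqrt

def witness_N(M):
    """An integer N such that n > N implies n**2/(n+1) > M.

    From n**2 - M*n - M > 0: take the larger root of the quadratic
    and round up, so every n > N lies strictly past the root.
    """
    return ceil((M + sqrt(M * M + 4 * M)) / 2)

# Spot-check: past the bound, n^2/(n+1) really does exceed M.
for M in [1, 10, 100, 10**6]:
    N = witness_N(M)
    assert all(n * n / (n + 1) > M for n in range(N + 1, N + 50))
```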
2015-05-03 14:29:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8765951991081238, "perplexity": 869.8369780332486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430448950892.36/warc/CC-MAIN-20150501025550-00040-ip-10-235-10-82.ec2.internal.warc.gz"}
https://tw.answers.yahoo.com/question/index?qid=20080802000016KK10139
# what's lognormal distribution? what is lognormal distribution? what's the difference between lognormal and normal distribution? i will really appreciate if anyone can answer it. thx ### 1 Answer • Sis Lv 5, 10 years ago. Best Answer: In probability and statistics, the log-normal distribution is the single-tailed probability distribution of any random variable whose logarithm is normally distributed. If X is a random variable with a normal distribution, then Y = exp(X) has a log-normal distribution; likewise, if Y is log-normally distributed, then log(Y) is normally distributed. (The base of the logarithmic function does not matter: if loga(Y) is normally distributed, then so is logb(Y), for any two positive numbers a, b ≠ 1.) Log-normal is also written log normal or lognormal. A variable might be modeled as log-normal if it can be thought of as the multiplicative product of many small independent factors. For example the long-term return rate on a stock investment can be considered to be the product of the daily return rates. In wireless communication, the attenuation caused by shadowing or slow fading from random objects is often assumed to be log-normally distributed. The normal distribution, also called the Gaussian distribution, is an important family of continuous probability distributions, applicable in many fields. Each member of the family may be defined by two parameters, location and scale: the mean ("average", μ) and variance (standard deviation squared) σ2, respectively. The standard normal distribution is the normal distribution with a mean of zero and a variance of one (the red curves in the plots to the right). Carl Friedrich Gauss became associated with this set of distributions when he analyzed astronomical data using them,[1] and defined the equation of its probability density function. It is often called the bell curve because the graph of its probability density resembles a bell.
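The defining relationship in the answer, Y is log-normal exactly when log(Y) is normal, is easy to check by simulation with the Python standard library (a sketch; the parameter values and sample size are arbitrary choices of mine):

```python
import math
import random
from statistics import fmean, stdev

random.seed(0)

MU, SIGMA = 1.0, 0.5      # parameters of the underlying normal
n = 200_000

# Build log-normal samples as exp of normal samples ...
y = [math.exp(random.gauss(MU, SIGMA)) for _ in range(n)]

# ... then log(Y) should recover a normal sample with the same mu and sigma.
logs = [math.log(v) for v in y]
print(round(fmean(logs), 2), round(stdev(logs), 2))  # ≈ 1.0 0.5
```

Note the asymmetry this creates: the median of Y is e^μ while the mean is the larger e^(μ+σ²/2), which is the right skew that distinguishes the log-normal from the symmetric bell curve.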
2020-01-19 02:48:16
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8792243599891663, "perplexity": 456.5290805018534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594101.10/warc/CC-MAIN-20200119010920-20200119034920-00428.warc.gz"}
http://mathoverflow.net/feeds/question/119470
# Naturality of the transfer in group cohomology (MathOverflow)

**Question** (Mark Grant, 2013-01-21):

Let $G$ be a (discrete) group and $H\le G$ a subgroup of finite index. Then there is a transfer map $$tr\colon\thinspace H^\ast(H;M)\to H^\ast(G;M)$$ in group cohomology, where $M$ is any $G$-module (see Brown's "Cohomology of groups", Chapter III).

I think this construction should be natural, in the following sense. Let $f\colon\thinspace G'\to G$ be a homomorphism such that $H':=f^{-1}(H)$ is of finite index in $G'$, and let $M$ be a $G$-module. Then the following diagram commutes, where the horizontal maps are transfers: $$\begin{array}{ccc} H^\ast(H;M) & \to & H^\ast(G;M) \\ \downarrow f^\ast & & \downarrow f^\ast \\ H^\ast(H';f^\ast M) & \to & H^\ast(G';f^\ast M) \end{array}$$ Note that I do not want to assume that $(G':H')=(G:H)$ (however, I am willing to assume that $H\le G$ and $H'\le G'$ are normal, if necessary).

> Does anyone know of a reference for this naturality?

**Answer** (Oscar Randal-Williams, 2013-01-21):

I don't believe this is true. Let $(G, H) = (\Sigma_3, C_3)$ and $f : C_3 \to \Sigma_3$. Then your square says that $$H^1(C_3;\mathbb{Z}/3) = \mathbb{Z}/3 \longrightarrow H^1(\Sigma_3;\mathbb{Z}/3) = 0 \longrightarrow H^1(C_3;\mathbb{Z}/3) = \mathbb{Z}/3$$ is the identity, which is false.
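A hedged gloss (mine, not part of the thread) on why the square cannot commute in general: for $H \trianglelefteq G$ of finite index, restriction after transfer is the sum over the quotient group,

```latex
\mathrm{res}^{G}_{H} \circ tr \;=\; \sum_{gH \,\in\, G/H} (c_{g})^{\ast},
```

and for $(G,H)=(\Sigma_3,C_3)$ the quotient acts on $H^1(C_3;\mathbb{Z}/3)\cong\mathbb{Z}/3$ by inversion, so this composite is $1+(-1)=0$. Naturality with $G'=H'=C_3$, where the bottom transfer and the left vertical map are identities, would force the same composite to be the identity, which is exactly the contradiction in the answer.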
2013-05-23 06:15:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429585933685303, "perplexity": 323.2149739403044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702900179/warc/CC-MAIN-20130516111500-00046-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.beatthegmat.com/a-technician-makes-a-round-trip-to-and-from-a-certain-service-center-by-the-same-route-if-the-technician-completes-the-t327095.html?sid=adcf8e98c5a66b9eb16f4d3a01402613
## A technician makes a round-trip to and from a certain service center by the same route. If the technician completes the

##### This topic has expert replies

Legendary Member VJesus12 » Thu Oct 07, 2021 1:53 am

A technician makes a round-trip to and from a certain service center by the same route. If the technician completes the drive to the center and then completes $$10$$ percent of the drive from the center, what percent of the round-trip has the technician completed?

A) $$5\%$$
B) $$10\%$$
C) $$25\%$$
D) $$40\%$$
E) $$55\%$$

Source: Official Guide

### GMAT/MBA Expert

Re: A technician makes a round-trip to and from a certain service center by the same route. If the technician completes — by [email protected], GMAT Instructor » Thu Oct 07, 2021 5:45 am, quoting VJesus12's question of Thu Oct 07, 2021 1:53 am.
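For reference, a one-line worked solution (mine, not quoted from the thread), with one-way distance $d$:

```latex
\text{distance covered} = d + 0.1d = 1.1d,
\qquad \text{round trip} = 2d,
\qquad \frac{1.1d}{2d} = 0.55 = 55\%,
```

so the answer is E.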
2021-12-06 02:52:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6385149955749512, "perplexity": 4377.913081032308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363229.84/warc/CC-MAIN-20211206012231-20211206042231-00194.warc.gz"}
https://www.nag.com/numeric/nl/nagdoc_latest/clhtml/f16/f16eac.html
# NAG CL Interface f16eac (ddot)

## 1 Purpose

f16eac updates a scalar by a scaled dot product of two real vectors, by performing $r \leftarrow \beta r + \alpha x^{T} y .$

## 2 Specification

#include <nag.h>

void f16eac (Nag_ConjType conj, Integer n, double alpha, const double x[], Integer incx, double beta, const double y[], Integer incy, double *r, NagError *fail)

The function may be called by the names: f16eac, nag_blast_ddot or nag_ddot.

## 3 Description

f16eac performs the operation $r \leftarrow \beta r + \alpha x^{T} y$ where $x$ and $y$ are $n$-element real vectors, and $r$, $\alpha$ and $\beta$ are real scalars.

If $n$ is less than zero, or, if $\beta$ is equal to one and either $\alpha$ or $n$ is equal to zero, this function returns immediately.

## 4 References

Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001) Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard University of Tennessee, Knoxville, Tennessee https://www.netlib.org/blas/blast-forum/blas-report.pdf

## 5 Arguments

1: $\mathbf{conj}$ Nag_ConjType Input

On entry: conj is not used. The presence of this argument in the BLAST standard is for consistency with the interface of the complex variant of this function.

Constraint: ${\mathbf{conj}}=\mathrm{Nag_NoConj}$ or $\mathrm{Nag_Conj}$.

2: $\mathbf{n}$ Integer Input

On entry: $n$, the number of elements in $x$ and $y$.

3: $\mathbf{alpha}$ double Input

On entry: the scalar $\alpha$.

4: $\mathbf{x}\left[1+\left({\mathbf{n}}-1\right)×|{\mathbf{incx}}|\right]$ const double Input

On entry: the $n$-element vector $x$.

If ${\mathbf{incx}}>0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left[\left(\mathit{i}-1\right)×{\mathbf{incx}}\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.

If ${\mathbf{incx}}<0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left[\left({\mathbf{n}}-\mathit{i}\right)×|{\mathbf{incx}}|\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.

Intermediate elements of x are not referenced.
If $\alpha =0.0$ or ${\mathbf{n}}=0$, x is not referenced and may be NULL.

5: $\mathbf{incx}$ Integer Input

On entry: the increment in the subscripts of x between successive elements of $x$.

Constraint: ${\mathbf{incx}}\ne 0$.

6: $\mathbf{beta}$ double Input

On entry: the scalar $\beta$.

7: $\mathbf{y}\left[1+\left({\mathbf{n}}-1\right)×|{\mathbf{incy}}|\right]$ const double Input

On entry: the $n$-element vector $y$.

If ${\mathbf{incy}}>0$, ${y}_{\mathit{i}}$ must be stored in ${\mathbf{y}}\left[\left(\mathit{i}-1\right)×{\mathbf{incy}}\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.

If ${\mathbf{incy}}<0$, ${y}_{\mathit{i}}$ must be stored in ${\mathbf{y}}\left[\left({\mathbf{n}}-\mathit{i}\right)×|{\mathbf{incy}}|\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.

Intermediate elements of y are not referenced.

If $\alpha =0.0$ or ${\mathbf{n}}=0$, y is not referenced and may be NULL.

8: $\mathbf{incy}$ Integer Input

On entry: the increment in the subscripts of y between successive elements of $y$.

Constraint: ${\mathbf{incy}}\ne 0$.

9: $\mathbf{r}$ double * Input/Output

On entry: the initial value, $r$, to be updated. If $\beta =0.0$, r need not be set on entry.

On exit: the value $r$, scaled by $\beta$ and updated by the scaled dot product of $x$ and $y$.

10: $\mathbf{fail}$ NagError * Input/Output

The NAG error argument (see Section 7 in the Introduction to the NAG Library CL Interface).

## 6 Error Indicators and Warnings

NE_ALLOC_FAIL

Dynamic memory allocation failed. See Section 3.1.2 in the Introduction to the NAG Library CL Interface for further information.

NE_BAD_PARAM

On entry, argument $⟨\mathit{\text{value}}⟩$ had an illegal value.

NE_INT

On entry, ${\mathbf{incx}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{incx}}\ne 0$.

On entry, ${\mathbf{incy}}=⟨\mathit{\text{value}}⟩$. Constraint: ${\mathbf{incy}}\ne 0$.

NE_NO_LICENCE

Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library CL Interface for further information.

## 7 Accuracy

The dot product ${x}^{\mathrm{T}}y$ is computed using the BLAS routine DDOT. The BLAS standard requires accurate implementations which avoid unnecessary over/underflow (see Section 2.7 of Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001)).

## 8 Parallelism and Performance

f16eac makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.

Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information.

## 9 Further Comments

None.

## 10 Example

This example computes the scaled sum of two dot products, $r={\alpha }_{1}{x}^{\mathrm{T}}y+{\alpha }_{2}{u}^{\mathrm{T}}v$, where

$\alpha_1 = 0.3$, $x = (1,2,3,4,5)$, $y = (-5,-4,3,2,1)$, $\alpha_2 = -7.0$, $u = v = (0.4,0.3,0.2,0.1)$.

$y$ and $v$ are stored in reverse order, and $u$ is stored in reverse order in every other element of a real array.

### 10.1 Program Text

Program Text (f16eace.c)

### 10.2 Program Data

Program Data (f16eace.d)

### 10.3 Program Results

Program Results (f16eace.r)
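The updated-dot-product operation is simple enough to sketch in pure Python, including the negative-increment storage convention from Section 5 (this is an illustrative model of the operation, not the NAG implementation). It reproduces the Section 10 example, $r = 0.3\,x^{T}y - 7.0\,u^{T}v = 0.3(9) - 7.0(0.3) = 0.6$:

```python
def ddot_update(n, alpha, x, incx, beta, y, incy, r):
    """Return beta*r + alpha * x^T y for strided vectors.

    Element x_i lives at x[(i-1)*incx] when incx > 0 and at
    x[(n-i)*|incx|] when incx < 0 (same convention for y),
    mirroring the BLAST storage rules quoted in Section 5.
    """
    if n < 0 or (beta == 1.0 and (alpha == 0.0 or n == 0)):
        return r                      # quick-return cases from Section 3
    def elem(v, inc, i):              # i is 1-based, as in the document
        return v[(i - 1) * inc] if inc > 0 else v[(n - i) * -inc]
    dot = sum(elem(x, incx, i) * elem(y, incy, i) for i in range(1, n + 1))
    return beta * r + alpha * dot

# Section 10 example: r = 0.3 * x^T y + (-7.0) * u^T v
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.0, 2.0, 3.0, -4.0, -5.0]           # y = (-5,-4,3,2,1) in reverse order
r = ddot_update(5, 0.3, x, 1, 0.0, y, -1, 0.0)   # r <- 0.3 * x^T y
u = [0.1, 0.0, 0.2, 0.0, 0.3, 0.0, 0.4]   # reverse order, every other slot
v = [0.1, 0.2, 0.3, 0.4]                  # reverse order
r = ddot_update(4, -7.0, u, -2, 1.0, v, -1, r)   # r <- r - 7.0 * u^T v
print(round(r, 10))  # 0.6
```

Two calls are chained through `beta`: the first uses beta = 0.0 to ignore the incoming r, the second uses beta = 1.0 to accumulate onto it, which is the standard way to build up sums of scaled dot products with this interface.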
2021-06-15 13:30:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 74, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662688374519348, "perplexity": 2775.4326349108846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621273.31/warc/CC-MAIN-20210615114909-20210615144909-00459.warc.gz"}
https://www.fin.gc.ca/budget06/fp/fpa2-eng.asp
# Archived - Annex 2: Recent Evolution of Fiscal Balance in Canada

Archived information is provided for reference, research or recordkeeping purposes. It is not subject to the Government of Canada Web Standards and has not been altered or updated since it was archived.

This annex summarizes key developments in the evolution of fiscal balance in Canada over the last few decades, including:

• The return to surpluses by both federal and provincial-territorial governments after years of large deficits.
• The significant decline of debt ratios of both orders of government.
• The marked decline in both federal and provincial program spending ratios since the mid-1990s.
• The broad access of both federal and provincial governments to all major tax fields.
• The significant tax reductions made in recent years, especially in the case of federal taxes.
• The significant reinvestment in federal cash transfers to provinces and territories in recent years, after significant cuts in the mid-1990s.
• The significant narrowing of interprovincial economic and fiscal disparities.

### Budgetary Balances and Debt Burdens

The past quarter-century has witnessed dramatic changes to federal and provincial-territorial budgetary balances. The 1980s and early 1990s were characterized by large, chronic federal deficits, which peaked at more than 8 per cent of GDP in 1984–85. Over this same period, provincial deficits were also significant but did not reach the same levels as those recorded by the federal government. After some improvement in the late 1980s, the 1990–91 recession resulted in a deterioration of the fiscal situation for provinces and territories and a further setback for federal efforts to reduce its deficit. For both orders of government, spending control as well as the post-recession return to economic growth led to a significant turnaround from large deficits to surpluses.
The federal government recorded its first surplus in 1997–98 and provinces achieved a combined positive budgetary balance in 1999–2000 after decades of deficits. With the projected surplus in 2005–06, provincial-territorial governments will have recorded a combined positive budgetary balance in five of the past seven years. Eight provinces are forecasting balanced budgets or surpluses in 2005–06 and 2006–07, with the remainder also seeing a considerable improvement in their fiscal situation over the last few years.

Table A2.1 Substantial Improvement in Budgetary Balances in Recent Years (millions of dollars)

| | 2001–02 | 2002–03 | 2003–04 | 2004–05 | 2005–06 | 2006–07 |
|---|---|---|---|---|---|---|
| Federal | 6,742 | 7,073 | 8,891 | 1,456 | 8,000 | 3,600 |
| N.L. | -468 | -644 | -914 | -489 | 77 | 6 |
| P.E.I. | -17 | -55 | -125 | -34 | -18 | -12 |
| N.S. | 113 | 28 | 38 | 165 | 78 | 93 |
| N.B. | 79 | 1 | -173 | 242 | 117 | 22 |
| Que. | 22 | -728 | -358 | -664 | 0 | 0 |
| Ont. | 375 | 117 | -5,483 | -1,555 | -1,369 | -2,350 |
| Man. | 63 | 4 | 13 | 405 | 3 | 3 |
| Sask. | 1 | 1 | 1 | 383 | 298 | 102 |
| Alta. | 1,081 | 2,133 | 4,136 | 5,175 | 7,375 | 4,096 |
| B.C. | -1,184 | -2,737 | -1,275 | 2,575 | 1,475 | 600 |
| Y.T. | -21 | -5 | 12 | 5 | 38 | 9 |
| N.W.T. | 120 | -34 | -65 | -17 | 18 | 31 |
| Nun. | -47 | 12 | 7 | -8 | 6 | -8 |
| Total provincial-territorial | 117 | -1,907 | -4,187 | 6,184 | 8,098 | 2,592 |

Sources: Federal and provincial-territorial Public Accounts and budgets.

Reflecting improvements in budgetary balances, both federal and provincial-territorial debts have declined as a share of gross domestic product (GDP), with the federal debt ratio falling more dramatically. However, federal debt as a share of GDP still exceeds that of most provinces and remains significantly higher than the provincial average. Lower debt-to-GDP ratios, combined with lower interest rates and an improved credit rating, have enabled both orders of government to allocate a smaller portion of revenues to debt interest payments and a greater portion to program expenditures, tax reductions and debt repayment. Both orders of government have also benefited from increased revenues generated by sustained economic growth.
Overall, Canadians witnessed an impressive fiscal recovery for both orders of government in the past decade. However, such a recovery was made possible only by making difficult choices in the mid-1990s to reduce spending. ### Program Spending Program spending expressed as a share of GDP provides a good measure of spending trends and of the size of governments relative to the economy. It is important to examine the data over a long period of time in order to distinguish between cyclical and structural trends. Federal program spending as a share of GDP has declined significantly since 1983–84. As a result of spending restraint and strong economic growth, the ratio of federal program spending to GDP declined steadily throughout the latter half of the 1980s. The 1990–91 recession triggered increases in certain federal expenditures like employment insurance, which contributed to a rise in the program spending-to-GDP ratio. The federal government underwent a program of expenditure review and restraint between 1993–94 and 1996–97, which was an essential factor in its fiscal recovery. Since 1983–84, provincial-territorial program spending has also declined as a share of GDP, but to a lesser extent. Provinces and territories were also hit hard by the recession, which caused a significant increase in spending on social assistance and social services in the early 1990s. Starting in the mid-1990s, most provinces undertook major restructuring to reduce or stabilize their program expenditures. The sustained job creation over the past decade has also significantly reduced provincial spending pressures in the areas of social assistance and social services. The tighter spending control exercised by both orders of government during the 1990s reflected the need to rebalance government finances after a long period of large and unsustainable deficits. 
Since 2000–01, program spending as a percentage of GDP has begun a modest rebound as both federal and provincial-territorial governments addressed significant spending pressures. Health care has proven to be the number one priority in terms of new investments: • From 2000–01 to 2005–06, provincial-territorial health care spending increased by an average of 7 per cent annually, compared to an average growth of 4.3 per cent for provincial-territorial revenues. • The federal government increased its transfers to provinces and territories, in large part to support them in their efforts to address health care needs. Transfers increased by an average of 8.7 per cent annually between 2000–01 and 2005–06, compared to average growth in federal revenues of only 2.6 per cent. Both orders of government have access to all major sources of tax revenue: personal and corporate income taxes, sales taxes and payroll taxes. Provinces have access to resource revenues, gaming and liquor profits, and property taxes while the federal government has exclusive access to customs import duties as well as taxes on non-residents. Moreover, among major industrialized federations, only in Canada and the United States do sub-national governments have full control over their tax bases and tax rates—though in some cases, provinces in Canada have chosen to enter into harmonization agreements (notably in the area of income and sales taxes) in order to reduce compliance costs on Canadians and administrative costs. 
| Revenue source | Federal | Provincial |
|---|---|---|
| Common revenue sources | | |
| Personal income taxes | √ | √ |
| Corporate income taxes | √ | √ |
| Sales taxes | √ | √ |
| Payroll taxes | √ | √ |
| Total in 2005 (billions of dollars) | 192.0 | 130.0 |
| Unique provincial revenue sources | | |
| Resource royalties within provincial jurisdiction | | √ |
| Gaming, liquor profits | | √ |
| Property taxes | | √ |
| Total in 2005 (billions of dollars) | | 30.3 |
| Unique federal revenue sources | | |
| Customs import duties | √ | |
| Taxes on non-residents | √ | |
| Total in 2005 (billions of dollars) | 8.4 | |

Source: National Economic and Financial Accounts

In Canada, sub-national governments raise the largest share of total government revenues among industrialized federal countries, which is a reflection of the high degree of decentralization of the Canadian federation. Given their broad access to major tax fields and the control they exercise over their tax bases and rates, provinces and territories are more fiscally autonomous than their counterparts in other federal countries. In particular, they rely more on own-source revenues—and less on federal transfers—to fund their programs and policies. Provincial revenues (including federal transfers) have generally exceeded federal revenues for more than 25 years, with the gap increasing in recent years. While governments have the legal authority to increase their revenues as required, concerns about competitiveness and the overall tax burden of Canadians limit the extent to which they can and should raise additional revenues in practice. Starting in the late 1990s, most governments in Canada implemented tax reductions, primarily targeting personal and corporate income tax reductions. As a result, revenue-to-GDP ratios declined for both orders of government. For provinces and territories, the rebound in the revenue ratio in recent years is in large part attributable to increases in federal transfers and resource revenues. Federal tax cuts exceeded provincial tax cuts in dollar terms.
Table A2.2: The Sharper Decline in Federal Revenues Is Due in Part to Larger Federal Tax Reductions Since the 1996 Federal Budget

| (billions of dollars) | Federal | Provincial |
| --- | --- | --- |
| Personal income taxes | -31.5 | -21.3 |
| Corporate income and capital taxes | -5.3 | -4.6 |
| Employment insurance premiums/payroll taxes | -7.2 | -0.4 |
| Other revenue measures¹ | 0.8 | 5.9 |
| Total | -43.2 | -22.4 |

¹ Includes sales taxes, property taxes, health premiums, tobacco taxes, gasoline taxes and various measures to fight tax evasion.

Sources: Department of Finance Canada estimates; provincial governments.

### Federal Cash Transfers

While Canadian provinces are less dependent on federal cash transfers than their counterparts in other federal countries, transfers still represent a significant source of revenue for the provinces and territories. As part of its deficit reduction efforts, the federal government cut cash transfers to provinces and territories for health care and other social programs by 30 per cent between 1994–95 and 1997–98, from $18.7 billion to $12.5 billion. Equalization was not cut but a ceiling did temporarily constrain entitlements in 2000–01. Territorial Formula Financing (TFF) was also subject to a ceiling that limited the growth of grants from 1990–91 to 1993–94, and each territory's Gross Expenditure Base was cut by 5 per cent in the 1995 budget. Since the federal government balanced its budget in 1997–98, federal cash transfers for health and social programs have rebounded substantially:

• By 2002–03, the level of cash transfers was restored to 1994–95 levels.
• In 2006–07, cash transfers will reach $29.8 billion, an increase of about $17 billion since 1997–98.
• The Canada Health Transfer (CHT) has also been put on a long-term growth track. The 10-Year Plan to Strengthen Health Care signed in September 2004 (and legislated through to 2013–14) provides for annual increases of 6 per cent in the CHT cash transfers over the life of the agreement.
• Cash payments under the Canada Social Transfer will grow by nearly 3 per cent annually, on average, over the legislated period until 2007–08. Equalization and TFF transfers were also put on a legislated 10-year growth track through to 2013–14 under a new framework announced in October 2004. However, the new framework constituted a departure from past practice, under which both the level and allocation of these transfers were determined by a formula. As a result, concerns have been raised about the ability of both programs to meet their objectives over the longer term. In particular, there is a broad consensus that the programs need to be returned to a formula-based approach for determining both the level and allocation of entitlements under these two transfers. ### Regional Disparities In all countries, there are differences in economic performance across regions. Given the diverse nature of Canada, substantial economic disparities exist. The coefficient of variation, illustrated in the charts below, measures the magnitude of the disparities across provinces in each year, thus making it a useful indicator to track trends in disparities over time. Even though economic disparities between provinces are still substantial, they have nevertheless declined significantly over the past 25 years. ### Fiscal Equalization In federal countries—and especially in fiscally decentralized countries such as Canada—these economic disparities translate into fiscal disparities (i.e. differences in the ability to raise revenues) among sub-national governments. The pattern of fiscal disparities in Canada has largely mirrored the pattern of economic disparities. 
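The coefficient of variation mentioned above is simply the standard deviation divided by the mean, which makes it scale-free and therefore comparable across years even as average incomes grow. A minimal sketch with made-up provincial figures (not the actual data behind the charts):

```python
def coefficient_of_variation(values):
    """Standard deviation divided by the mean: a scale-free measure of
    dispersion, comparable across years even as average incomes grow."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return (variance ** 0.5) / mean

# Hypothetical per capita GDP figures (thousands of dollars) for five
# provinces in two years -- NOT the actual data behind the charts.
gdp_early = [20.0, 25.0, 30.0, 45.0, 28.0]
gdp_late = [30.0, 33.0, 36.0, 44.0, 35.0]

# A falling coefficient of variation means disparities have narrowed.
converged = coefficient_of_variation(gdp_late) < coefficient_of_variation(gdp_early)
```

A declining series of this indicator over 25 years is what the text describes as a significant reduction in inter-provincial disparities.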
While fiscal disparities (like economic disparities) have generally declined over the past 25 years, the recent rise in natural resource prices that began in 2000 has generated stronger economic and revenue growth in provinces with significant natural resources (notably Alberta, Saskatchewan, British Columbia and Newfoundland and Labrador). As a result, economic and fiscal disparities have widened somewhat, though they remain significantly smaller than in the early 1980s. Most federal countries, including Canada, have fiscal equalization programs to help reduce fiscal disparities. The principle of equalization is enshrined in subsection 36(2) of Canada’s Constitution Act, 1982. Canada’s Equalization program significantly reduces fiscal disparities among the provinces (see Annex 3). ### Federal Revenue-Expenditure Balances Across Provinces In all federal countries, economic disparities and the implicit inter-regional redistribution that results from the operation of federal tax and expenditure policies result in different "balances" between federal revenues and expenditures in different regions. Generally speaking, the residents of more prosperous regions, taken as a whole, receive less federal spending and make larger contributions to federal revenues. The opposite is true in less prosperous regions. Canada is no different in this regard. Because of their relatively higher incomes, citizens and businesses residing in more prosperous provinces, such as Alberta and Ontario, contribute relatively more to federal revenues than they receive from federal programs. In Canada, as in other federal countries, the "gaps" between federal revenues and expenditures in different provinces reflect the structure of the tax and transfer systems of the federation, including the progressivity of federal taxes, the targeting of support to individual Canadians or families in need, and the commitment to the reduction of provincial-territorial fiscal disparities. 
In particular, a number of factors determine the measured balance of individual regions at any given point in time:

• Budgetary position of the federal government: Measuring balances at a given point in time effectively ignores the impacts of deficits and surpluses on future tax and benefit levels. As a result, when federal deficits are large (as in the early 1990s in Canada), federal fiscal balances are distorted in all provinces: redistribution toward less prosperous jurisdictions is exaggerated and redistribution from more prosperous jurisdictions appears smaller. The opposite is true when the federal government runs surpluses (as has been the case in Canada in recent years).
• The degree of revenue and expenditure decentralization: The larger the federal share of revenues and expenditures, the greater the degree of redistribution among regions even with uniform federal taxation and expenditure policies. As a result, redistribution resulting from federal policies tends to be smaller in fiscally decentralized federations (such as Canada) than in more centralized federations (such as the U.S.).
• The degree to which federal policies are designed to be redistributive: For example, inter-regional redistribution increases with the progressivity of the federal tax system, the degree to which federal programs are targeted to low-income individuals or regions or other needs, and the extent of the national commitment to the reduction of fiscal disparities among provinces and territories.

Changes in the federal budgetary balance are particularly important in explaining recent changes in regional balances:

• In 1993, when the federal government recorded a $38.5-billion deficit, there were negative fiscal balances in all provinces but Alberta: that is, provincial residents were receiving more in federal services and transfers than they paid in federal taxes. The average balance was minus 4.6 per cent of GDP. Even in Ontario there was a negative balance totalling $1.4 billion.
• This situation was clearly unsustainable since the federal government was borrowing heavily to finance its activities. To address its budgetary deficits, the federal government raised its revenues and reduced its spending. Residents of all provinces contributed to this process of fiscal restraint.
• By 2003, federal fiscal consolidation resulted in the average balance rising by more than 5 per cent of GDP relative to 1993, to an average balance of plus 0.6 per cent of GDP. In Ontario, the fiscal balance increased by 4.5 per cent of GDP during this period.

This generalized trend towards improved provincial balances explains the growth in what some observers have referred to as the Ontario "gap," which increased in aggregate from about $2 billion (in 1995) to $18 billion in 2003. In effect, the resulting $18-billion Ontario "gap" is a reflection of the province's greater prosperity relative to most other provinces. The "gap" can be decomposed based on the extent to which federal revenues and expenditures in Ontario deviate from the national average:

• About 42 per cent of the "gap" in 2003 is accounted for by above-average revenues collected in Ontario, reflecting above-average incomes and business activity in the province.
• About 14 per cent is accounted for by Ontario's per capita share of federal debt reduction in 2003.
• About 23 per cent is accounted for by below-average transfers to the province, notably because it did not qualify for Equalization due to its above-average fiscal capacity.
• About 18 per cent was accounted for by below-average payments to Ontario residents for income-tested transfers to persons, such as the Canada Child Tax Benefit, elderly benefits and employment insurance. These reflect the province's above-average personal incomes and below-average unemployment rate.
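The decomposition above is simple arithmetic on the quoted shares; applying them to the $18-billion total gives approximate dollar amounts (the shares are as quoted in the text, the dollar figures are derived and rounded):

```python
gap_total = 18.0  # billions of dollars, 2003 Ontario "gap"

# Shares of the gap quoted in the text
shares = {
    "above-average federal revenues": 0.42,
    "per capita share of federal debt reduction": 0.14,
    "below-average transfers to the province": 0.23,
    "below-average income-tested transfers to persons": 0.18,
}

# The four factors together explain 97 per cent of the gap.
explained = sum(shares.values())

# Approximate dollar amount of each component, in billions.
amounts = {k: round(v * gap_total, 1) for k, v in shares.items()}
```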
These four areas accounted for over 97 per cent of the "gap," with the remaining 3 per cent reflecting other smaller expenditures, some of which are more heavily weighted towards Ontario (such as federal spending on goods and services) and others that are not. Federal fiscal balances in Canadian provinces are similar in size to those that would be observed across the United States if the federal government in both countries ran balanced budgets, even though the United States does not have an equalization program: • As noted above, provincial and state "gaps" not only depend on the level of transfers from the federal government to provincial or state governments, but also on the relative size of the federal government, the progressivity of the federal tax system and the extent to which federal expenditures are income- or needs-targeted. • The chart below therefore reflects the fact that the U.S. federal government’s revenues and expenditures are larger, as a percentage of GDP, than those of the Canadian federal government, as well as the different basis for allocating federal expenditures in the U.S. (e.g. the greater proportion of defence spending in the U.S. and its concentration in particular states).
https://www.x-mol.com/paper/1216929385984839680
Discrete Mathematics (IF 0.770) Pub Date: 2020-01-13, DOI: 10.1016/j.disc.2019.111797 Huajing Lu; Xuding Zhu This paper proves that if $G$ is a planar graph without 4-cycles and $l$-cycles for some $l \in \{5,6,7\}$, then there exists a matching $M$ such that $AT(G-M) \le 3$. This implies that every planar graph without 4-cycles and $l$-cycles for some $l \in \{5,6,7\}$ is 1-defective 3-paintable.
https://economics.stackexchange.com/questions/42701/how-much-revenue-could-an-100-land-value-tax-be-able-to-generate-for-the-indian/47356
# How much revenue could a 100% land value tax generate for the Indian government?

So as a Georgist (follower of Henry George), I believe most forms of taxation are immoral and economically inefficient except the land value tax (not to be confused with property taxes) and other Pigovian taxes. However, I'm also a pragmatic person, and therefore it may not be possible to fund an entire government through these taxes alone, as most Georgists believe. That is why I want to ask how much revenue a 100% land value tax would be able to generate for the Indian government, assuming no changes in the current levels of government. I do realise such a tax would be difficult to implement in practice due to poor land records in India, but nevertheless I'm interested to know if it is feasible in theory.

• $0, because all land becomes instantly worthless? Feb 23 '21 at 9:15
• @user253751 The value of a site remains the same at 100% LVT. The price someone would pay to own a site would be $0 at 100% LVT, but the rent someone would pay to use the site remains the same whether rent is paid to a private owner or to the government in the form of a 100% LVT. Feb 23 '21 at 22:20
• @sba222 then who determines the value? Usually the value is the sale price. Feb 24 '21 at 8:25

Technically, even if it's a stylized fact that has been subject to a lot of criticism (and rightly so, in my view), the Laffer curve indicates that a 100% tax rate on land would generate zero revenue, since there would be no incentive for the owner to use it productively. In that case I think that Laffer's intuition of zero revenue at a 100% tax rate is quite correct...

In Australia, Prosper compiled a survey of all economic rents and came to the conclusion that the Australian government could effectively be funded with rents alone. This is without taking into account that abolishing taxes will further increase the value of economic rents, i.e.
in the absence of taxation, firms and households will have more net income to pay for economic rents (land, carbon, licences, fees), which further increases government revenues. Mason Gaffney (2009) argues that all taxes come out of rents (ATCOR). According to this, a tax reduction would increase rents by at least the same amount. This would imply that a government funded with taxes could just as well be funded with rents. See my question about ATCOR here.

• The point of 100% LVT is to tax all rents from land. Feb 23 '21 at 22:02
• What is your definition of land value if not the PV of land rents? Feb 23 '21 at 22:22
• Gaffney writes: "If future rent is to be heavily taxed, there will be less current value and less appreciation. One might think that increments would thus be destroyed, but economic value does not disappear without a trace. It is conserved, like matter and energy. The value is rather transferred to the public. The right to levy future taxes has a present value, too." Feb 24 '21 at 6:52
• What else other than present or future rents does land value include in your view? Expectations about future rents may be subjective, but they remain expectations about rents. In fact, taxing current rents avoids subjective land valuation being carried out by the government. Feb 24 '21 at 7:11
• @user253751 The rent someone would be willing to pay for using a site is independent of whether LVT is 0% or 100%. It's just a matter of who receives the rent. However, the price someone would be willing to pay for owning the site's future land rents will be $0 at 100% LVT, since all future land rents will be taxed. Feb 24 '21 at 8:55
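The disagreement in these comments can be made precise with a present-value sketch. Assuming a simple perpetuity valuation (purchase price equals the after-tax rent divided by the discount rate; the rent and rate figures are made up for illustration):

```python
def land_price(annual_rent, lvt_rate, discount_rate):
    """Price a buyer would pay to own a site: the present value of the
    rents the owner gets to keep after land value tax, modelled as a
    perpetuity.  The rent a USER pays is unaffected by the tax rate."""
    after_tax_rent = annual_rent * (1.0 - lvt_rate)
    return after_tax_rent / discount_rate

rent = 10_000.0  # hypothetical annual site rent

# With no LVT the owner capitalizes the full rent stream...
price_no_tax = land_price(rent, 0.0, 0.05)
# ...while at 100% LVT the purchase price falls to zero, even though
# the rent collected (now by the government) is unchanged.
price_full_tax = land_price(rent, 1.0, 0.05)
```

This is the distinction being argued above: at a 100% LVT the sale price of land goes to zero, but the rental value, and hence the revenue base, does not.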
https://www.physicsforums.com/threads/temperature-dependence-of-lennard-jones-potential.712937/
# Temperature dependence of Lennard-Jones potential

My question is not so much about the Lennard-Jones potential, although I mentioned it in the title, but about "force field" thinking in general. A lot of people are (were) interested in the phase transition temperature of the Ising model. How realistic is the model in the sense that it assumes an interaction energy J, independent of temperature, between neighboring sites? If instead of magnets we think of solutions, the very same Ising model is called the regular solution model (in the Bragg-Williams mean field approximation). Now, is it true that, say, water and oil mix the way described by the model (i.e. with no explicit temperature dependence)? These are of course simple toy models, but real scientists employ Lennard-Jones potentials in their molecular dynamics codes, yet never seem to give much thought, or at least discussion in the publications, to this matter. The van der Waals forces (which the attractive part of the Lennard-Jones potential represents) for neutral particles are due to London dispersion forces, which, if I recall correctly, are supposed to vary as 1/T. This is to my knowledge never accounted for in actual molecular dynamics force fields. Why? When can I be sure that the interaction energies between two (or more) particles do not explicitly depend on temperature? When does this approximation break down? Naturally, I'd also be quite interested in how other state variables, such as pressure, might affect the microscopic interactions, and when I can ignore their effects.
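For reference, the potential under discussion is V(r) = 4ε[(σ/r)^12 − (σ/r)^6]. A minimal sketch in reduced units (the parameter values are illustrative, not from the thread) makes the point explicit: ε and σ are fixed constants, so the standard form carries no temperature dependence at all.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential.  Note that epsilon and sigma
    are fixed constants: the standard form has no explicit temperature
    dependence, which is exactly what the question is probing."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential crosses zero at r = sigma and has its minimum of
# depth -epsilon at r = 2**(1/6) * sigma.
r_min = 2.0 ** (1.0 / 6.0)
```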
https://cbsemathssolutions.in/lcm-full-form-explained/
# LCM full form explained

Learn what LCM stands for in mathematics. This concept is generally introduced in classes 5 and 6, and it is an important one for students, since it builds a strong foundation for learning and understanding math. We should start by learning the full form of LCM: knowing how to find the LCM is of little use if you do not know what the letters stand for. After that, we should learn the applications of LCM, that is, how and when it is used, and the different ways in which it can be found.

## LCM full form as follows:

The full form of LCM is given letter by letter. In LCM:

• L – Least/Lowest
• C – Common
• M – Multiple

So the full form of LCM is Least Common Multiple or Lowest Common Multiple. Both expansions are correct.

### What is the use of learning LCM:

In mathematics, knowing the LCM helps in solving certain problems. For example, we make use of the LCM when working with fractions. If the given fractions are like fractions, there is no difficulty; but if they are unlike fractions, we must take the LCM of the denominators in order to add or compare them. For this you should know what LCM means and the different methods for finding it.

#### The methods used for finding LCM:

To find the LCM of numbers we have the following methods:

1. Writing multiples of the given numbers
2. L-division method
3. Prime factorisation method

Note: The LCM can be found not only for numbers but also for algebraic expressions.
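A computational method can be added to the list above: the LCM of two numbers is usually computed from the GCD via the identity LCM(a, b) = (a × b) / GCD(a, b). A small Python sketch, including the unlike-fractions use described above:

```python
from math import gcd

def lcm(a, b):
    """Least Common Multiple via the GCD identity."""
    return a * b // gcd(a, b)

# Adding the unlike fractions 1/4 + 1/6: the common denominator
# is lcm(4, 6) = 12, so the sum is 3/12 + 2/12 = 5/12.
common = lcm(4, 6)
total = (common // 4) + (common // 6)  # numerator of the sum, over `common`
```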
https://exportersaustralia.com.au/schwab-vs-hupapc/floyd-warshall-algorithm-brilliant-c8c593
### floyd warshall algorithm brilliant

The Floyd-Warshall algorithm finds shortest paths in a weighted graph. It is used to compute the shortest distances between every pair of vertices, with running time O(n^3) and running space O(n^2). In the all-pairs shortest path problem, we need to find the shortest paths from each vertex to all other vertices in the graph: find the length of the shortest weighted path in G between every pair of vertices in V. The easiest approach, traversing every possible path between every pair of vertices, is far too expensive; Floyd-Warshall instead compares all possible paths between each pair of vertices with only O(V^3) comparisons, by improving on an estimate of the shortest path until the estimate is optimal. (Single-source algorithms, by contrast, only compute the shortest path from one starting vertex. A sparse graph is one that does not have many edges connecting its vertices, and a dense graph has many edges.)

The main idea is to update the solution matrix of shortest-path estimates by iterating over the intermediate vertices. For each pair there are two possible answers for this function: either a given vertex is on the shortest path or it is not. The algorithm can also be modified to get information about the shortest paths themselves, for example in a three-dimensional array u or in a predecessor matrix. The recursive formula for this predecessor matrix is as follows: if $i = j$ or $\text{weight}(i, j) = \infty$, then $P^{0}_{ij} = 0$. (In a recursive-SQL variant of shortest paths, the first part of the CTE queries the @start point; the recursive part constructs the paths to each node.)
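The predecessor bookkeeping mentioned above can be implemented by keeping a next-hop table alongside the distances, so that actual paths, not just their lengths, can be read back. This is a minimal sketch under that idea, not the article's own code, and the example graph is made up:

```python
INF = float("inf")

def floyd_warshall_with_paths(w):
    """w: adjacency matrix with INF for missing edges and 0 on the diagonal.
    Returns (dist, nxt) where nxt[i][j] is the next hop on a shortest
    i -> j path, or None if no path exists."""
    n = len(w)
    dist = [row[:] for row in w]
    nxt = [[j if w[i][j] < INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]   # route through k instead
    return dist, nxt

def reconstruct(nxt, i, j):
    """Read the shortest i -> j path out of the next-hop table."""
    if nxt[i][j] is None:
        return []
    path = [i]
    while i != j:
        i = nxt[i][j]
        path.append(i)
    return path

# Made-up 3-vertex example: 0 -> 1 (3), 1 -> 2 (2), 2 -> 0 (7)
w = [[0, 3, INF], [INF, 0, 2], [7, INF, 0]]
dist, nxt = floyd_warshall_with_paths(w)
# shortest 0 -> 2 path is 0 -> 1 -> 2 with length 5
```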
The recursive case takes advantage of the dynamic programming nature of this problem. Dynamic programming is an algorithm design technique that is sometimes applicable when we want to solve a recurrence relation and the recursion involves overlapping instances. The idea is this: either the quickest path from A to C is the quickest path found so far from A to C, or it is the quickest path from A to B plus the quickest path from B to C. The base case is shortestPath(i, j, 0) = graph(i, j): with no intermediate vertices allowed, the shortest path is just the connecting edge. The array u, shown below, is first assigned 0 for all i, j; the code inside the three nested for loops then performs the update.

Follow the steps below to find the shortest path between all the pairs of vertices; let the given graph be as pictured (some edge weights are shown, and others are not). Floyd-Warshall is extremely useful in networking, similar to solutions to the shortest path problem. However, Bellman-Ford and Dijkstra are both single-source shortest-path algorithms, whereas the Floyd Warshall Algorithm is used to find the shortest distances between every pair of vertices in a given weighted edge graph; it compares all possible paths between each pair of vertices in the graph. (In the SQL version, the procedure, named dbo.usp_FindShortestGraphPath, gets the two nodes as input parameters.)
This is illustrated in the image below. The Floyd-Warshall Algorithm is an efficient algorithm to find all-pairs shortest paths on a graph; Stephen Warshall and Robert Floyd independently discovered it in 1962. It is a popular algorithm for finding the shortest path for each vertex pair in a weighted directed graph, and it is also useful in computing matrix inversions. It breaks the problem down into smaller subproblems, then combines the answers to those subproblems to solve the big, initial problem. We then update the solution matrix by considering all vertices as intermediate vertices. The algorithm returns a matrix of values $M$, where each cell $M_{i,j}$ is the distance of the shortest path from vertex $i$ to vertex $j$; the row and the column are indexed as i and j respectively. A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices.

In this approach, we are going to use the property that every part of an optimal path is itself optimal: shortestPath(i, j, k) = min(shortestPath(i, j, k-1), shortestPath(i, k, k-1) + shortestPath(k, j, k-1)). Basically, what this function setup is asking is this: "Is the vertex $k$ an intermediate of our shortest path (any vertex in the path besides the first or the last)?" If it is, the path goes from i to k and then from k to j. As a practical example, Floyd-Warshall will tell the optimal distance between each pair of friends. In the Python implementation, the Edge class is a simple object that holds information about the edge, such as endpoints and weight, and the Graph class uses a dictionary to represent the graph. (Is the Floyd-Warshall algorithm better for sparse graphs or dense graphs?)
At the heart of Floyd-Warshall is this function: $\text{ShortestPath}(i, j, k)$. This function returns the shortest path from $A$ to $C$ using the vertices from 1 to $k$ in the graph. Floyd-Warshall's Algorithm is a different approach to solving the all-pairs shortest paths problem; it is an example of dynamic programming, published independently by Robert Floyd and Stephen Warshall in 1962. Like the Bellman-Ford algorithm and Dijkstra's algorithm, it computes shortest weighted paths in a graph; however, unlike those single-source algorithms, Floyd-Warshall finds the shortest path from every vertex in the graph. This algorithm can still fail if there are negative cycles.

We initialize the solution matrix to be the same as the input graph matrix as a first step; then, using matrix A0, we create a matrix A1, and so on. If there is no path from the ith vertex to the jth vertex, the cell is left as infinity. In this implementation, infinity is represented by a really large integer, and a vertex is just a simple integer. The elements in the first column and the first ro… It will also tell you, for example, that the quickest way to get from Billy's house to Jenna's house is to first go through Cassie's, then Alyssa's, then Harry's house before ending at Jenna's. (This article is based in part or in whole on a translation of the English Wikipedia article "Floyd–Warshall algorithm.")
At first, the output matrix is the same as the given cost matrix of the graph. The Time Complexity of Floyd Warshall Algorithm is O(n³). The Floyd-Warshall algorithm is an example of dynamic programming. The vertices are individually numbered 1,2,...,k{1, 2, ..., k}1,2,...,k. There is a base case and a recursive case. The algorithm solves a type of problem call the all-pairs shortest-path problem. Floyd-Warshall, on the other hand, computes the shortest distances between every pair of vertices in the input graph. Either the shortest path between iii and jjj is the shortest known path, or it is the shortest known path from iii to some vertex (let's call it zzz) plus the shortest known path from zzz to j:j:j: ShortestPath(i,j,k)=min(ShortestPath(i,j,k−1),ShortestPath(i,k,k−1)+ShortestPath(k,j,k−1)).\text{ShortestPath}(i, j, k) = \text{min}\big(\text{ShortestPath}(i, j, k-1), \text{ShortestPath}(i, k, k-1) + \text{ShortestPath}(k, j, k-1)\big).ShortestPath(i,j,k)=min(ShortestPath(i,j,k−1),ShortestPath(i,k,k−1)+ShortestPath(k,j,k−1)). Floyd Warshal Algorithm is a. dynamic programming algorithm that calculates all paths in a graph, and searches for the. Floyd-Warshall All-Pairs Shortest Path. Floyd-Warshall We will now investigate a dynamic programming solution that solved the problem in O(n 3) time for a graph with n vertices. This is because of the three nested for loops that are run after the initialization and population of the distance matrix, M. Floyd-Warshall is completely dependent on the number of vertices in the graph. But, Floyd-Warshall can take what you know and give you the optimal route given that information. Complexity theory, randomized algorithms, graphs, and more. 2 create n x n array D. 3 for i = 1 to n. 4 for j = 1 to n. 5 D[i,j] = W[i,j] 6 for k = 1 to n. 7 for i = 1 to n. 8 for j = 1 to n. 
In matrix form, the algorithm starts from the weighted adjacency matrix $W$ (with $W[i,j] = \infty$ when there is no edge from $i$ to $j$, and $W[i,i] = 0$) and repeatedly relaxes every pair of vertices through each intermediate vertex:

```
Floyd-Warshall(W)
1  n = W.rows
2  create n x n array D
3  for i = 1 to n
4      for j = 1 to n
5          D[i,j] = W[i,j]
6  for k = 1 to n
7      for i = 1 to n
8          for j = 1 to n
9              D[i,j] = min(D[i,j], D[i,k] + D[k,j])
10 return D
```

The three nested loops over the vertices give a running time of $O(|V|^3)$; the cost depends only on the number of vertices, not the number of edges. In a concrete implementation, infinity is typically represented by a very large integer or a floating-point infinity, and when the algorithm finishes, $D[i,j]$ holds the shortest distance from vertex $i$ to vertex $j$.
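This pseudocode translates almost line-for-line into Python. The following is a minimal sketch, assuming a list-of-lists weight matrix with `float('inf')` for absent edges and zeros on the diagonal:

```python
INF = float('inf')

def floyd_warshall(w):
    """All-pairs shortest distances for an n x n weight matrix.

    w[i][j] is the weight of edge i -> j (INF if absent), w[i][i] == 0.
    Assumes no negative cycles. Returns a new distance matrix.
    """
    n = len(w)
    d = [row[:] for row in w]          # D starts as a copy of W
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Example: 0 -> 1 costs 3, 1 -> 2 costs 1, 0 -> 2 costs 7 directly.
w = [[0, 3, 7],
     [INF, 0, 1],
     [INF, INF, 0]]
dist = floyd_warshall(w)
print(dist[0][2])   # 4: the route through vertex 1 beats the direct edge
```

Copying the input matrix first keeps the caller's data intact; the `if` form of the relaxation avoids rewriting entries that do not improve.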
The correctness argument mirrors the recursion: at step $k$, the algorithm checks whether vertex $k$ is or is not on the shortest path between vertices $i$ and $j$. If $k$ is not an intermediate vertex, the shortest path using the vertices $\{1, \ldots, k\}$ is the same as the one using $\{1, \ldots, k-1\}$; if it is, the path breaks down into two subpaths through the middle point $k$, each of which has already been computed. In computer science the algorithm is also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm; a version of it was apparently described earlier by Roy. Although it requires that the graph contain no negative cycles, it can be used to detect them: normally the distance from a vertex to itself is $0$, but if a negative cycle exists, retraversing it only reduces the path length, so at least one diagonal entry of the final distance matrix becomes negative. Related approaches are worth contrasting: Dijkstra's algorithm finds shortest paths from a single source in a nonnegative-weighted graph, Bellman-Ford handles a single source with negative weights, and Johnson's algorithm solves the all-pairs problem by reweighting the graph and running Dijkstra from each vertex. Beware that if negative cycles do exist and go undetected, Floyd-Warshall will silently produce the wrong answer.
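The negative-cycle test is easy to express in code. A sketch, using the `min` form of the relaxation and checking the diagonal afterward:

```python
INF = float('inf')

def has_negative_cycle(w):
    """Run Floyd-Warshall and report whether any diagonal entry went negative."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    # Without negative cycles, d[v][v] stays 0 for every vertex v.
    return any(d[v][v] < 0 for v in range(n))

# 0 -> 1 costs 1 and 1 -> 0 costs -3: each trip around the loop gains -2.
print(has_negative_cycle([[0, 1], [-3, 0]]))   # True
print(has_negative_cycle([[0, 1], [3, 0]]))    # False
```

When the check returns `True`, the distances themselves are not meaningful, since they could be reduced without bound by circling the cycle.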
As described so far, the algorithm returns only distances, not the paths themselves, but the paths can be reconstructed with a simple modification. The most common way is to compute a sequence of predecessor matrices: $P^{(k)}_{ij}$ is defined as the predecessor of vertex $j$ on a shortest path from vertex $i$ with all intermediate vertices in the set $\{1, 2, \ldots, k\}$. Initially, $P^{0}_{ij} = i$ whenever $i \neq j$ and $\text{weight}(i, j) < \infty$; each iteration of the main loop then creates the next predecessor matrix alongside the distance updates. Speed is not a concern here: any time spent reconstructing a path pales in comparison to the $O(|V|^3)$ main loop. For example, with edges $1 \to 2$ of cost 2 and $2 \to 3$ of cost 1, the reconstructed shortest path from vertex 1 to vertex 3 is $1 \to 2 \to 3$ with total cost 3. The algorithm can be applied directly to directed graphs; an undirected edge is simply treated as two directed edges.
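A sketch of path reconstruction follows. Note one deliberate change: this variant stores the *next* vertex on each shortest path rather than the predecessor — the bookkeeping is equivalent, and the stored pointers are slightly simpler to walk:

```python
INF = float('inf')

def floyd_warshall_paths(w):
    """Return (dist, nxt): nxt[i][j] is the vertex after i on a shortest i -> j path."""
    n = len(w)
    d = [row[:] for row in w]
    nxt = [[j if d[i][j] < INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    nxt[i][j] = nxt[i][k]    # first step now heads toward k
    return d, nxt

def path(nxt, u, v):
    """Walk the nxt pointers from u to v; empty list if v is unreachable."""
    if nxt[u][v] is None:
        return []
    p = [u]
    while u != v:
        u = nxt[u][v]
        p.append(u)
    return p

# Vertices 1, 2, 3 (index 0 unused): edge 1 -> 2 costs 2, edge 2 -> 3 costs 1.
w = [[0, INF, INF, INF],
     [INF, 0, 2, INF],
     [INF, INF, 0, 1],
     [INF, INF, INF, 0]]
d, nxt = floyd_warshall_paths(w)
print(path(nxt, 1, 3), d[1][3])   # [1, 2, 3] 3
```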
Floyd's algorithm is appropriate for finding shortest paths in dense graphs, or in graphs with negative weights where Dijkstra's algorithm fails. It is also effective at managing multiple stops on a route, because it computes the shortest paths between all relevant nodes at once. The same loop structure adapts to other problems: the original Warshall version ignores the weights and finds the transitive closure of a graph — which pairs of vertices are connected by any path at all — and, in connection with the Schulze voting system, the scheme computes widest paths between all pairs of vertices in a weighted graph. It is worth contrasting this family with generic graph search: run a search from one vertex with a FIFO queue and you get BFS, with a stack you get DFS, and with a priority queue you get Dijkstra — all single-source methods, whereas Floyd-Warshall answers every pair at once.
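The Warshall (boolean) variant, which computes the transitive closure without weights, can be sketched with the same triple loop, replacing `min`/`+` with `or`/`and`:

```python
def transitive_closure(adj):
    """adj: n x n boolean adjacency matrix (adj[i][i] True).

    Returns reach, where reach[i][j] is True iff some path leads from i to j.
    """
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Chain 0 -> 1 -> 2: vertex 2 is reachable from 0 even with no direct edge.
adj = [[True, True, False],
       [False, True, True],
       [False, False, True]]
print(transitive_closure(adj)[0][2])   # True
```

Structurally this is the same dynamic program over intermediate vertices; only the semiring changes.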
To summarize: Floyd-Warshall is guaranteed to find the shortest path between every pair of vertices in a weighted directed graph with positive and negative edge weights, as long as there are no negative cycles. A typical implementation stores the graph as a dictionary whose keys are vertex numbers and whose values are lists of edges (each edge recording its endpoints and weight), fills an $n \times n$ distance matrix from those edges, and then runs the triple loop; if paths are needed, the predecessor matrices $P^{(0)}, P^{(1)}, \ldots, P^{(n)}$ can be computed alongside the distances.
https://socratic.org/questions/how-do-you-perform-the-operation-and-write-the-result-in-standard-form-given-sqr
# How do you perform the operation and write the result in standard form given sqrt(-6)*sqrt(-2)?

Oct 4, 2016

$\sqrt{-6} \cdot \sqrt{-2} = -2\sqrt{3}$

#### Explanation:

Be a little careful!

$\sqrt{-6} \cdot \sqrt{-2} = \left(\sqrt{6}\, i\right) \cdot \left(\sqrt{2}\, i\right)$

$\textcolor{white}{\sqrt{-6} \cdot \sqrt{-2}} = \sqrt{6}\sqrt{2} \cdot i^{2}$

$\textcolor{white}{\sqrt{-6} \cdot \sqrt{-2}} = -\sqrt{6 \cdot 2}$

$\textcolor{white}{\sqrt{-6} \cdot \sqrt{-2}} = -\sqrt{2^{2} \cdot 3}$

$\textcolor{white}{\sqrt{-6} \cdot \sqrt{-2}} = -2\sqrt{3}$

Note that if $a < 0$ and $b < 0$ then:

$\sqrt{a}\sqrt{b} = -\sqrt{ab} \ne \sqrt{ab}$
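For a quick numeric cross-check, Python's `cmath` module also takes principal square roots ($\sqrt{-6} = \sqrt{6}\,i$), so it reproduces the same value:

```python
import cmath
import math

# Principal square roots: sqrt(-6) = sqrt(6)*i, sqrt(-2) = sqrt(2)*i.
z = cmath.sqrt(-6) * cmath.sqrt(-2)

# The product is purely real and approximately -3.4641, i.e. -2*sqrt(3).
print(z)
assert abs(z.imag) < 1e-12
assert abs(z.real - (-2 * math.sqrt(3))) < 1e-12
```

This illustrates the closing note: multiplying the square roots of two negative numbers gives $-\sqrt{ab}$, not $\sqrt{ab}$.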
http://openstudy.com/updates/5076ae4ae4b0ed1dac501dd2
## julie001: 9x-(7x+1/2)=3x+(1/2-X)

1. vipul92: x=0
2. gaara438125: do you know how to do the distributive property?
3. julie001: Yea bt im nt sure hw to strt
4. lharrell97: 9x-(7x+1/2)=3x+(1/2-X); 9x-7x=2x; 2x+1/2=3x+1/2-x; 2x+1/2=2x+1/2; 2x=2x; x=x
5. lharrell97: the answer is "all real numbers"
6. gaara438125: $9x - 1(7x+\frac{1}{2}) = 3x + 1(\frac{1}{2}-x)$ — does that look nicer? :}
7. gaara438125: the answer is x=x not just all real numbers
8. julie001: Thanks can u help with thid
9. julie001: -4/7x=-12+4
10. gaara438125: well to get rid of the fraction multiply both sides by 7 to get what
11. gaara438125: 7x sorry
12. gaara438125: or divide each side by four leaving the left side negative seven
13. julie001: 7 cancels out?
14. gaara438125: $\frac{-4}{7x} = -12 + 4$
15. lharrell97: 9x-(7x+1/2)=3x+(1/2-X); lets say x=34: 306-238+1/2=102+1/2-34, so 68+1/2=68+1/2; lets say that x=113: 1017-(791+1/2)=339+1/2-113, so 226+1/2=226+1/2. i can do this all day, because X= ALL REAL NUMBERS
16. gaara438125: on the left do you want to get rid of the seven or the four
17. julie001: 7
18. gaara438125: i got x = 14
19. julie001: Hw/
20. gaara438125: multiply both sides by 7x to get $-4 = -84x + 28x$, $-4 = -56x$, $0 = \frac{-56x}{-4}$, 0 = 14x, 0 = x
21. gaara438125: @lharrell97 if x=x then it can be anything because you don't know the value of x so it could be an imaginary number or it could be elephant...not JUST all real numbers
22. julie001: Thanks bt i hve more: x+5 over 3 = x-3 over 4
23. lharrell97: because you dont know the value of X, it could be any possible number. x could equal 3 billion or it could equal 2. the answer therefore is X= all real numbers... although i can see where you're coming from haha
24. julie001: Hw would i work it out?
25. gaara438125: i think it's probable that X=elephant probably...yeah probably lol
26. gaara438125: look like that? $\frac{x + 5}{3} = \frac{x - 3}{4}$
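The first equation in the thread can be checked mechanically with exact fractions. Distributing the minus sign gives $9x-(7x+\tfrac12) = 2x-\tfrac12$ on the left and $2x+\tfrac12$ on the right, so the two sides always differ by $-1$ and the equation in fact has no solution:

```python
from fractions import Fraction

half = Fraction(1, 2)

def lhs(x):
    return 9 * x - (7 * x + half)   # distributes to 2x - 1/2

def rhs(x):
    return 3 * x + (half - x)       # simplifies to 2x + 1/2

# For every x, lhs(x) - rhs(x) == -1, so lhs(x) == rhs(x) never holds.
for x in [Fraction(0), Fraction(34), Fraction(113), Fraction(-7, 3)]:
    assert lhs(x) - rhs(x) == -1
```

Trying the thread's sample value $x=34$ shows where the sign was dropped: $9(34)-(7\cdot34+\tfrac12) = 67\tfrac12$, not $68\tfrac12$.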
http://mathhelpforum.com/calculus/15399-rates-change-lns-sin-cos.html
Math Help - Rates-Change with lns and sin/cos

1. Rates-Change with lns and sin/cos

I can't get the right answer for these two, mainly because they include trigonometry and lns.

2. Originally Posted by SportfreundeKeaneKent
I can't get the right answer for these two, mainly because they include trigonometry and lns.

For the first:

(a) we want $S(12)$

$S(12) = 72 - 15 \ln (12 + 1) = 72 - 15 \ln (13) \approx 33.53$

(b) we want $S'(4)$

Now $S'(t) = - \frac {15}{t + 1}$

$\Rightarrow S'(4) = - \frac {15}{5} = -3$

Do you know how to find the derivative of ln?

(c) I assume the original score is to be taken when $t = 0$. This means the original score is 72. We want to know when the average score becomes less than $\frac {3}{4} \cdot 72 = 54$, so we must solve $S(t) < 54$. We want $t$ such that:

$S(t) = 72 - 15 \ln(t + 1) < 54$

$\Rightarrow -15 \ln(t + 1) < -18$

$\Rightarrow \ln(t + 1) > \frac {6}{5}$

$\Rightarrow t + 1 > e^{ \frac {6}{5}}$

$\Rightarrow t > e^{ \frac {6}{5}} - 1$

3. Originally Posted by SportfreundeKeaneKent
I can't get the right answer for these two, mainly because they include trigonometry and lns.

The second:

The rate of change of voltage is $v'(t)$

Now, $v'(t) = -2 \sin(t) - 2 \sin(2t)$ ........By the Chain rule

we want $t$ such that $v'(t) = 0$. That is, we want to solve

$-2 \sin(t) - 2 \sin(2t) = 0$

$\Rightarrow \sin(t) + \sin(2t) = 0$

$\Rightarrow \sin(t) + 2 \sin(t) \cos(t) = 0$

$\Rightarrow \sin(t) (1 + 2 \cos(t)) = 0$

$\Rightarrow \sin(t) = 0 \mbox { or } 1 + 2 \cos(t) = 0$

$\Rightarrow t = 0 \mbox { or } \cos(t) = - \frac {1}{2}$

$\Rightarrow t = 0 \mbox { , } \pi \mbox { , } 2 \pi \mbox { or } t = \frac {2 \pi}{3} \mbox { , } \frac {4 \pi}{3}$

EDIT 1: added $\pi$ and $2 \pi$ to the solutions

EDIT 2: Thanks for looking out behemoth100...would you believe that I intentionally left the answer incomplete to see if SportfreundeKeaneKent would catch it?

4.
Originally Posted by Jhevon The second The rate of change of voltage is $v'(t)$ Now, $v'(t) = -2 \sin(t) - 2 \sin(2t)$ ........By the Chain rule we want $t$ such that $v'(t) = 0$ That is, we want to solve $-2 \sin(t) - 2 \sin(2t) = 0$ $\Rightarrow \sin(t) + \sin(2t) = 0$ $\Rightarrow \sin(t) + 2 \sin(t) \cos(t) = 0$ $\Rightarrow \sin(t) (1 + 2 \cos(t)) = 0$ $\Rightarrow \sin(t) = 0 \mbox { or} 1 + 2 \cos(t) = 0$ $\Rightarrow t = 0 \mbox { or} \cos(t) = - \frac {1}{2}$ $\Rightarrow t = 0 \mbox { or } t = \frac {2 \pi}{3} \mbox { , } \frac {4 \pi}{3}$ Ok well given that its only 7am in the morning this may be a stupid comment but... Isnt sin(t) = 0 at 0 pi and 2pi? The interval is 0<t<2pi (except they are equal to or less than signs). If I have made a mistake on BASIC DIFFERENTIATION AND TRIG THEN I SHOULD BE SHOT!!! I'm scared agaist going up against the almighty Jhevon 5. Originally Posted by behemoth100 Ok well given that its only 7am in the morning this may be a stupid comment but... Isnt sin(t) = 0 at 0 pi and 2pi? The interval is 0<t<2pi (except they are equal to or less than signs).
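The five critical points from the second problem can be verified numerically. This assumes the voltage function in the original exercise is $v(t) = 2\cos t + \cos 2t$, which is consistent with the derivative shown in the thread:

```python
import math

def v_prime(t):
    # Derivative of v(t) = 2*cos(t) + cos(2t), by the chain rule.
    return -2 * math.sin(t) - 2 * math.sin(2 * t)

# All five solutions on [0, 2*pi]: t = 0, 2pi/3, pi, 4pi/3, 2pi.
critical_points = [0, 2 * math.pi / 3, math.pi, 4 * math.pi / 3, 2 * math.pi]
for t in critical_points:
    assert abs(v_prime(t)) < 1e-9
```

The check confirms behemoth100's point: $\sin t = 0$ contributes $t = 0$, $\pi$, and $2\pi$ on the closed interval, not $t = 0$ alone.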
https://base12innovations.wordpress.com/2013/01/31/problem-of-the-day-13113/
# Problem of the Day: 1/31/13

In the diagram below, the shortest paths from A to B along the gridlines are 6 units long. How many of these paths are there?

Solution to yesterday’s problem: The trick here is that all of the exponents have two as a base. So we have to find a way to take out all the bases and leave only the exponents in our equation. Here’s the equation again:

$2^{2n+3}=\frac{2^{n-2}}{2^{2n-2}}$

Recall that when you divide two exponents with the same base, the result is the base taken to the difference of the two exponents. For example, $\frac{x^5}{x^2}=x^{5-2}=x^3$. Here, we can subtract the two exponents as well:

$\frac{2^{n-2}}{2^{2n-2}}=2^{(n-2)-(2n-2)}=2^{-n}$

So our new equation is $2^{2n+3}=2^{-n}$. In order for the two sides of this equation to be equal, both exponents have to be equal. So here’s where we take out the bases:

$2n+3=-n$

$3n=-3$

$n=-1$

So n = -1.
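As a quick sanity check (a short snippet, not part of the original post), substituting n = -1 back into the original equation shows both sides come out to 2:

```python
# Verify the solution n = -1 in 2^(2n+3) = 2^(n-2) / 2^(2n-2)
n = -1
lhs = 2 ** (2 * n + 3)                 # 2^1
rhs = 2 ** (n - 2) / 2 ** (2 * n - 2)  # 2^-3 / 2^-4 = 2^1
```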
http://www.voice-of-telefonsex.com/qh3kmsye/851a69-mgf-of-gamma-distribution-proof
# mgf of gamma distribution proof

Posted by: | Posted on: November 27, 2020

In this section, we introduce two families of continuous probability distributions that are commonly used. A random variable $X$ has a gamma distribution with parameters $\alpha, \lambda > 0$, written $X\sim\text{gamma}(\alpha, \lambda)$, if $X$ has pdf given by

$$f(x) = \left\{\begin{array}{ll} \frac{\lambda^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1} e^{-\lambda x}, & \text{for}\ x\geq 0, \\ 0, & \text{otherwise.} \end{array}\right.$$

Gamma distributions are always defined on the interval $[0,\infty)$. The parameter $\lambda$ is referred to as the rate parameter; it represents how quickly events occur. A typical application of gamma distributions is to model the time it takes for a given number of events to occur, for example:

- $X=$ lifetime of 5 radioactive particles
- $X=$ how long you have to wait for 3 accidents to occur at a given intersection

In these examples, the parameter $\lambda$ represents the rate at which the event occurs, and the parameter $\alpha$ is the number of events desired. For integer values of $x$, we have $\Gamma(x) = (x - 1)!$.

Show: $\int^{\infty}_0 \frac{\lambda^\alpha}{\Gamma(\alpha)}x^{\alpha-1}e^{-\lambda x}\, dx = 1$. In the integral, we can make the substitution $u = \lambda x \rightarrow du = \lambda\, dx$; after the substitution, dividing both sides by $\Gamma(\alpha)$ reduces the integral to the defining integral of the gamma function, so the pdf integrates to 1.

The moment generating function $M(t)$ can be found by evaluating $E(e^{tX})$. Writing $k$ for the shape parameter,

$$M(t) = E(e^{tX}) = \int_0^\infty e^{tx} \frac{\lambda^k}{\Gamma(k)} x^{k-1} e^{-\lambda x} \,\mathrm{d}x.$$

By making the substitution $y = (\lambda - t)x$, we can transform this integral into a gamma-function integral, which gives, for $t < \lambda$,

$$M(t) = \left( \frac{\lambda}{\lambda - t} \right)^{k}.$$

We differentiate with respect to $t$: $M'(t) = k \lambda^k (\lambda-t)^{-k-1}$. Evaluating this at $t=0$ gives $E(X) = k/\lambda$, and we find the value $E(X^2)$ from the second derivative of the moment generating function. The variance of $X$ is $\text{Var}(X) = \frac{\alpha}{\lambda^2}$, and therefore the standard deviation of a gamma distribution is $\sigma_X = \frac{\sqrt{k}}{\lambda}$.

A closed form does not exist for the cdf of a gamma distribution in general; computer software must be used to calculate gamma probabilities. When $\alpha$ is a positive integer, however, the CDF can be found through integration by parts; for instance, with $\alpha = 3$ and $\lambda = 16$ one obtains $F_X (x) = 1 - e^{-16x}(128x^2 + 16x + 1)$. In one such worked example, there is a 15.06% probability that the third accident will occur in the first month. For the related exponential case, for any $0 < p < 1$, the $(100p)^{\text{th}}$ percentile is $\pi_p = \frac{-\ln(1-p)}{\lambda}$.

As it turns out, the chi-square distribution is just a special case of the gamma distribution: a chi-square distribution is a gamma distribution with $\lambda = \frac12$. Let $X$ follow a gamma distribution with $\theta=2$ and $\alpha=\frac{r}{2}$, where $r$ is a positive integer. We say that $X$ follows a chi-square distribution with $r$ degrees of freedom, denoted $\chi^2(r)$ and read "chi-square-r." Let $X$ be a chi-square random variable with $r$ degrees of freedom; then the variance of $X$ is twice the number of degrees of freedom. The proof of the statement follows immediately from the moment generating functions, since moment generating functions are unique. If you go on to take the sequel course, Stat 415, you will encounter the chi-square distributions quite regularly and see their many applications.
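As a numerical sketch (not part of the course notes; the parameter values are arbitrary), we can check the closed form $M(t) = (\lambda/(\lambda-t))^k$ by Monte Carlo, using the fact that for integer shape $k$ a gamma$(k, \lambda)$ variate is a sum of $k$ independent exponential$(\lambda)$ variates:

```python
import math
import random

random.seed(0)
lam, k, t = 2.0, 3, 0.5   # rate λ, integer shape k, and a point t < λ

def gamma_sample(k, lam):
    # Sum of k independent exponential(λ) variates via inverse transform
    return sum(-math.log(1.0 - random.random()) / lam for _ in range(k))

n = 200_000
samples = [gamma_sample(k, lam) for _ in range(n)]

mgf_mc = sum(math.exp(t * x) for x in samples) / n
mgf_closed = (lam / (lam - t)) ** k   # M(t) = (λ/(λ-t))^k ≈ 2.37 here

mean_mc = sum(samples) / n            # should approach E(X) = k/λ = 1.5
```

The Monte Carlo estimates of both $M(t)$ and $E(X)$ agree with the closed forms to a couple of decimal places at this sample size.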
https://zbmath.org/?q=an:0788.42013
## Two-scale difference equations. II: Local regularity, infinite products of matrices, and fractals. (English) Zbl 0788.42013

The authors represent the solution to the functional equation $$f(x)= \sum^N_{n=0} c_n f(kx-n),$$ where $$k\geq 2$$ is an integer and $$\sum^N_{n=0} c_n = k$$, in the time domain, in terms of infinite products of matrices that vary with $$x$$. They give sufficient conditions on $$\{c_n\}$$ for a continuous $$L^1$$-solution to exist, and additional sufficient conditions to have $$f\in C^r$$. This representation is used to bound from below the degree of regularity of such an $$L^1$$-solution and to estimate the Hölder exponent of continuity of the highest-order well-defined derivative of $$f$$. In Part I [same journal 22, No. 5, 1388-1410 (1991; Zbl 0763.42018)] the authors used a Fourier transform approach to show that equations of the above type have at most one $$L^1$$-solution, up to a multiplicative constant, which necessarily has compact support in $$[0,N/(k-1)]$$.

### MSC:

42C15 General harmonic expansions, frames
28A80 Fractals
39A10 Additive difference equations

### Citations:

Zbl 0763.42018
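As a concrete illustration (my own sketch, not from the reviewed paper), the hat function on $[0, 2]$ solves the two-scale equation with $k=2$, $N=2$ and mask $c = (1/2,\, 1,\, 1/2)$, which satisfies $\sum c_n = 2 = k$ and has support $[0, N/(k-1)] = [0,2]$. The cascade/subdivision iteration recovers its values at dyadic points:

```python
# Cascade (subdivision) scheme for f(x) = (1/2) f(2x) + f(2x-1) + (1/2) f(2x-2)
c = [0.5, 1.0, 0.5]

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def cascade(mask, iters):
    """Start from a delta sequence; upsample by 2 and convolve with the mask.
    For this mask, entry i equals the hat-function value at (i + 1) / 2**iters."""
    v = [1.0]
    for _ in range(iters):
        up = [0.0] * (2 * len(v) - 1)
        up[::2] = v
        v = convolve(up, mask)
    return v

v = cascade(c, 6)   # dyadic samples of the hat function on (0, 2)
```

The iteration peaks at the dyadic point $x = 1$ with value 1 and is symmetric about it, as the hat function requires.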
http://mathhelpforum.com/algebra/199548-distributive-law-print.html
# Distributive Law

• June 1st 2012, 10:08 AM
CSMajor
Distributive Law
Ok, so I'm teaching myself Algebra over the summer so that's one less math class I'll have to worry about, and I have a question about 2 algebra problems. I'm trying to find out how exactly the author of the book I'm using got the answers to these 2 questions, and it would help a lot if you could explain why the answers are the answers. I believe they're monomials and polynomials using the distributive law.

The first: 2x^2y(x^2-xy+3y), and the answer in the book was 2x^4y - 2x^3y^2 + 6x^2y^2
The second: a/2 (2a-4+3b), and the answer was a^2 - 2a + 3ab/2
• June 1st 2012, 10:18 AM
Plato
Re: Distributive Law
Quote:

Originally Posted by CSMajor
The first: 2x^2y(x^2-xy+3y), and the answer in the book was 2x^4y - 2x^3y^2 + 6x^2y^2

Do you understand how $(2x^2y)(x^2)=2x^4y$ works?
Do you understand how $(2x^2y)(-xy)=-2x^3y^2$ works?
Do you understand how $(2x^2y)(3y)=6x^2y^2$ works?
• June 1st 2012, 10:46 AM
CSMajor
Re: Distributive Law
Somewhat, but not really, to be honest.
• June 1st 2012, 10:56 AM
Plato
Re: Distributive Law
Quote:

Originally Posted by CSMajor
Somewhat, but not really, to be honest.

Try to explain what you don't understand about each of those three.
• June 1st 2012, 11:24 AM
CSMajor
Re: Distributive Law
Well, for the first and second products I understood why you got those answers. For the third I think I understand how you got the answer, but not exactly. Unless I'm able to multiply 3y times 2x^2 to get 6x^2y^2 that way, but I didn't think that was the right way.
• June 1st 2012, 11:48 AM
Plato
Re: Distributive Law
Quote:

Originally Posted by CSMajor
I'm able to multiply 3y times 2x^2 to get 6x^2y^2 that way, but I didn't think that was the right way.

$(2x^2y)(3y)=2\cdot 3\cdot x^2\cdot y\cdot y=6x^2y^2$
• June 1st 2012, 11:58 AM
CSMajor
Re: Distributive Law
Yeahhh... you made it look so much easier than I made it seem written out.
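A quick numeric spot-check (a Python sketch, not from the thread) confirms both book answers by comparing each product against its expansion at random points:

```python
import random

random.seed(1)
err1 = err2 = 0.0
for _ in range(100):
    x, y, a, b = (random.uniform(-3, 3) for _ in range(4))
    # First problem: 2x^2y(x^2 - xy + 3y) = 2x^4y - 2x^3y^2 + 6x^2y^2
    err1 = max(err1, abs(2 * x**2 * y * (x**2 - x * y + 3 * y)
                         - (2 * x**4 * y - 2 * x**3 * y**2 + 6 * x**2 * y**2)))
    # Second problem: (a/2)(2a - 4 + 3b) = a^2 - 2a + 3ab/2
    err2 = max(err2, abs((a / 2) * (2 * a - 4 + 3 * b)
                         - (a**2 - 2 * a + 3 * a * b / 2)))
```

Both maximum discrepancies stay at floating-point noise level, so the expansions match the originals identically.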
https://www.gradesaver.com/textbooks/science/chemistry/organic-chemistry-8th-edition/chapter-1-introduction-and-review-study-problems-page-36/1-21-c
# Chapter 1 - Introduction and Review - Study Problems: 1-21 c

Phosphorus

#### Work Step by Step

Add the superscripts. The name of the element can be identified from the number of protons. The sum of the superscripts corresponds to the number of electrons, and since in a neutral atom the number of electrons and protons is the same, that tells us the name of the element. In this electronic configuration ($1s^22s^22p^63s^23p^3$), the sum of the superscripts is 15, and the element with 15 protons is phosphorus.
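The superscript arithmetic can be mechanized; here is a toy snippet (the regex approach and one-entry lookup table are my own illustration, not from the textbook):

```python
import re

config = "1s2 2s2 2p6 3s2 3p3"
# Capture only the electron counts that follow a subshell letter (s, p, d, f)
electrons = sum(int(m) for m in re.findall(r"[spdf](\d+)", config))
# 2 + 2 + 6 + 2 + 3 = 15 electrons; a neutral atom then has 15 protons
element = {15: "Phosphorus"}[electrons]
```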
https://zbmath.org/?q=an:1213.37093
# zbMATH — the first resource for mathematics

Integrability of two dimensional quasi-homogeneous polynomial differential systems.
(English) Zbl 1213.37093

The authors deal with polynomial differential systems $$(\dot{x},\dot{y})^{T}=F_{r}=(P,Q)^{T}, \tag{1}$$ where $F_{r}$ is a quasi-homogeneous polynomial vector field of degree $r\in \mathbb{N} \cup \{0\}$ with respect to the type $t=(t_{1},t_{2})\in \mathbb{N}^{2}$, i.e., for any positive real $\varepsilon$, $P(\varepsilon^{t_{1}}x,\varepsilon^{t_{2}}y)= \varepsilon^{r+t_{1}}P(x,y)$ and $Q(\varepsilon^{t_{1}}x,\varepsilon^{t_{2}}y)= \varepsilon^{r+t_{2}}Q(x,y)$. They are interested in analyzing when system (1) is analytically integrable.

##### MSC:

37K05 Hamiltonian structures, symmetries, variational principles, conservation laws

##### Keywords:

polynomial differential systems; Hamiltonian system

##### References:

[1] A. Algaba, E. Freire, E. Gamero and C. García, The integrability problem for a class of planar systems, Nonlinearity 22 (2009), 395-420. · Zbl 1165.34023 · doi:10.1088/0951-7715/22/2/009
[2] L. Cairó and J. Llibre, Polynomial first integrals for weight-homogeneous planar polynomial differential systems of weight degree 3, J. Math. Anal. Appl. 331 (2007), 1284-1298. · Zbl 1124.34015 · doi:10.1016/j.jmaa.2006.09.066
[3] J. Chavarriga, H. Giacomini, J. Giné and J. Llibre, Local analytic integrability for nilpotent centers, Ergod. Theor. Dynam. Sys. 23 (2003), 417-428. · Zbl 1037.34025 · doi:10.1017/S014338570200127X
[4] H. Giacomini, J. Llibre and M. Viano, On the nonexistence, existence and uniqueness of limit cycles, Nonlinearity 9 (1996), 501-516. · Zbl 0886.58087 · doi:10.1088/0951-7715/9/2/013
[5] J. Llibre and X. Zhang, Polynomial first integrals for quasi-homogeneous polynomial differential systems, Nonlinearity 15 (2002), 1269-1280. · Zbl 1024.34001 · doi:10.1088/0951-7715/15/4/313
[6] V.V. Nemytskii and V.V. Stepanov, Qualitative theory of differential equations, Princeton University Press, Princeton, 1960. · Zbl 0089.29502
[7] P.J. Olver, Applications of Lie groups to differential equations, Springer-Verlag, New York, 1986.
· Zbl 0588.22001
[8] A. Tsygvintsev, On the existence of polynomial first integrals of quadratic homogeneous systems of ordinary differential equations, J. Phys. A: Math. Gen. 34 (2001), 2185-2193. · Zbl 0984.34026 · doi:10.1088/0305-4470/34/11/311
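To make the definition concrete, here is a small numerical check. The example system $\dot{x} = xy$, $\dot{y} = y^2 + x^4$ is my own, chosen to be quasi-homogeneous of degree $r = 2$ with respect to the type $t = (1, 2)$; it is not from the paper:

```python
def P(x, y): return x * y           # P(ε^t1 x, ε^t2 y) = ε^(r+t1) P(x, y) = ε^3 P
def Q(x, y): return y**2 + x**4     # Q(ε^t1 x, ε^t2 y) = ε^(r+t2) Q(x, y) = ε^4 Q

t1, t2, r = 1, 2, 2
eps, x0, y0 = 1.7, 0.9, -1.3        # arbitrary test values

err_P = abs(P(eps**t1 * x0, eps**t2 * y0) - eps**(r + t1) * P(x0, y0))
err_Q = abs(Q(eps**t1 * x0, eps**t2 * y0) - eps**(r + t2) * Q(x0, y0))
```

Both residuals vanish up to rounding, confirming the two scaling identities in the definition.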
https://www.physicsforums.com/threads/stress-energy-tensor.348009/
# Stress-energy tensor

1. Oct 22, 2009

### Azrael84

Hi,

How would one go about arguing that the stress-energy tensor is actually a tensor, based on how it must be linear in both its arguments? I'm thinking it requires one 1-form to select the component of 4-momentum (e.g. $$\vec{E}=\langle \tilde{dt}, \vec{P}\rangle$$) and also one 1-form to define the surface (e.g. $$\tilde{dt}$$ defining surfaces of constant t, so giving us densities etc.). I know that $$T^{\alpha \beta}=T(\tilde{dx^{\alpha}}, \tilde{dx^{\beta}})$$. Not sure how one would argue that it therefore must be linear in these arguments?

2. Oct 22, 2009

### pervect

Staff Emeritus

The intuitive approach I take is that the stress-energy tensor just represents the amount of energy and momentum per unit volume. If you double the volume, you double the amount of energy and momentum contained (assuming a small volume and that the distribution is smooth when the volume is small enough), which is why it's linear with respect to the vector or one-form that represents the volume. We already know that the energy-momentum 4-vector is a vector and is appropriately additive. The tricky part is why we represent a volume with a vector or one-form. In the language of differential forms, dx^dy^dz, where ^ is the "wedge product", represents a volume element - but this three-form has a dual, which is a vector (or one-form). You can think of it as representing a volume element by a vector (or one-form, but I think of it as a vector) that points in the time direction perpendicular to the volume, and whose length represents the size of the volume.

3. Oct 22, 2009

### Azrael84

That's an interesting way of looking at it, pervect. I see it quite differently (again from the Schutz book mainly), seeing one-forms as defining constant surfaces, e.g. dx (twiddle) defines surfaces of constant x (basically the same idea as in vector calculus, whereby the gradient vector defines surfaces of constant phi, say).
With this notion you can then also think of another one-form selecting which component of the 4-momentum you want to consider via the relation $$\vec{E}=\langle \tilde{dt}, \vec{P}\rangle$$: the one-form dt selects the energy component. So feeding both one-forms into T, say dt and dx to get the $$T^{tx}$$ component, tells us we want to look at energy flux through constant-x surfaces. What I don't understand is how linearity is implied by these physical considerations, since what does feeding T, say, 2dt mean? Does that really mean twice the volume?
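On the linearity question, here is a toy numerical illustration (a dust stress-energy in flat spacetime; entirely my own sketch, not from Schutz or the thread): viewing T as a bilinear map on pairs of one-forms, feeding it 2 dt simply doubles the output, which is the multilinearity being asked about:

```python
rho = 2.0
u = [1.0, 0.0, 0.0, 0.0]                        # dust at rest, c = 1
T = [[rho * ua * ub for ub in u] for ua in u]   # T^{ab} = rho u^a u^b

def T_map(a, b):
    """Feed two one-forms (component lists) into T: a_i T^{ij} b_j."""
    return sum(a[i] * T[i][j] * b[j] for i in range(4) for j in range(4))

dt = [1.0, 0.0, 0.0, 0.0]   # components of the one-form dt
dx = [0.0, 1.0, 0.0, 0.0]

energy_density = T_map(dt, dt)            # T^{tt} = rho
doubled = T_map([2 * c for c in dt], dt)  # linearity: exactly twice as much
```

Here doubling the one-form doubles the contraction, matching pervect's "twice the volume, twice the energy" picture.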
http://conceptmap.cfapps.io/wikipage?lang=en&name=Galactic_coordinate_system
# Galactic coordinate system Artist's depiction of the Milky Way galaxy, showing the galactic longitude relative to the Galactic Center The galactic coordinate system is a celestial coordinate system in spherical coordinates, with the Sun as its center, the primary direction aligned with the approximate center of the Milky Way galaxy, and the fundamental plane parallel to an approximation of the galactic plane but offset to its north. It uses the right-handed convention, meaning that coordinates are positive toward the north and toward the east in the fundamental plane.[1] ## Galactic longitude The galactic coordinates use the Sun as the origin. Galactic longitude (l) is measured with primary direction from the Sun to the center of the galaxy in the galactic plane, while the galactic latitude (b) measures the angle of the object above the galactic plane. Longitude (symbol l) measures the angular distance of an object eastward along the galactic equator from the galactic center. Analogous to terrestrial longitude, galactic longitude is usually measured in degrees (°). ## Galactic latitude Latitude (symbol b) measures the angle of an object north or south of the galactic equator (or midplane) as viewed from Earth; positive to the north, negative to the south. For example, the north galactic pole has a latitude of +90°. Analogous to terrestrial latitude, galactic latitude is usually measured in degrees (°). ## Definition The first galactic coordinate system was used by William Herschel in 1785. 
A number of different coordinate systems, each differing by a few degrees, were used until 1932, when Lund Observatory assembled a set of conversion tables that defined a standard galactic coordinate system based on a galactic north pole at RA 12h 40m, dec +28° (in the B1900.0 epoch convention) and a 0° longitude at the point where the galactic plane and equatorial plane intersected.[1]

Equatorial coordinates J2000.0 of galactic reference points[1]

| | Right ascension | Declination | Constellation |
| --- | --- | --- | --- |
| North Pole, +90° latitude | 12h 51.4m | +27.13° | Coma Berenices (near 31 Com) |
| South Pole, −90° latitude | 0h 51.4m | −27.13° | Sculptor (near NGC 288) |
| Center, 0° longitude | 17h 45.6m | −28.94° | Sagittarius (in Sagittarius A) |
| Anticenter, 180° longitude | 5h 45.6m | +28.94° | Auriga (near HIP 27088) |

In 1958, the International Astronomical Union (IAU) defined the galactic coordinate system in reference to radio observations of galactic neutral hydrogen through the hydrogen line, changing the definition of the Galactic longitude by 32° and the latitude by 1.5°.[1] In the equatorial coordinate system, for equinox and equator of 1950.0, the north galactic pole is defined at right ascension 12h 49m, declination +27.4°, in the constellation Coma Berenices, with a probable error of ±0.1°.[2] Longitude 0° is the great semicircle that originates from this point along the line in position angle 123° with respect to the equatorial pole. The galactic longitude increases in the same direction as right ascension. Galactic latitude is positive towards the north galactic pole, with a plane passing through the Sun and parallel to the galactic equator being 0°, whilst the poles are ±90°.[3] Based on this definition, the galactic poles and equator can be found from spherical trigonometry and can be precessed to other epochs; see the table.
The IAU recommended that during the transition period from the old, pre-1958 system to the new, the old longitude and latitude should be designated l^I and b^I while the new should be designated l^II and b^II.[3] This convention is occasionally seen.[4]

Radio source Sagittarius A*, which is the best physical marker of the true galactic center, is located at 17h 45m 40.0409s, −29° 00′ 28.118″ (J2000).[2] Rounded to the same number of digits as the table, 17h 45.7m, −29.01° (J2000), there is an offset of about 0.07° from the defined coordinate center, well within the 1958 error estimate of ±0.1°. Due to the Sun's position, which currently lies 56.75±6.20 ly north of the midplane, and the heliocentric definition adopted by the IAU, the galactic coordinates of Sgr A* are latitude 0° 07′ 12″ south, longitude 0° 04′ 06″. Since as defined the galactic coordinate system does not rotate with time, Sgr A* is actually decreasing in longitude at the rate of galactic rotation at the sun, Ω, approximately 5.7 milliarcseconds per year (see Oort constants).

## Conversion between Equatorial and Galactic Coordinates

An object's location expressed in the equatorial coordinate system can be transformed into the galactic coordinate system. In these equations, α is right ascension, δ is declination. NGP refers to the coordinate values of the north galactic pole and NCP to those of the north celestial pole.[5]

$$\begin{aligned}\sin(b)&=\sin(\delta_{\text{NGP}})\sin(\delta)+\cos(\delta_{\text{NGP}})\cos(\delta)\cos(\alpha-\alpha_{\text{NGP}})\\\cos(b)\sin(l_{\text{NCP}}-l)&=\cos(\delta)\sin(\alpha-\alpha_{\text{NGP}})\\\cos(b)\cos(l_{\text{NCP}}-l)&=\cos(\delta_{\text{NGP}})\sin(\delta)-\sin(\delta_{\text{NGP}})\cos(\delta)\cos(\alpha-\alpha_{\text{NGP}})\end{aligned}$$

The reverse (galactic to equatorial) can also be accomplished with the following conversion formulas.
{\displaystyle {\begin{aligned}\sin(\delta )&=\sin(\delta _{\text{NGP}})\sin(b)+\cos(\delta _{\text{NGP}})\cos(b)\cos(l_{\text{NCP}}-l)\\\cos(\delta )\sin(\alpha -\alpha _{\text{NGP}})&=\cos(b)\sin(l_{\text{NCP}}-l)\\\cos(\delta )\cos(\alpha -\alpha _{\text{NGP}})&=\cos(\delta _{\text{NGP}})\sin(b)-\sin(\delta _{\text{NGP}})\cos(b)\cos(l_{\text{NCP}}-l)\end{aligned}}}

## Rectangular coordinates

Some applications use rectangular coordinates based on galactic longitude, latitude, and distance. In some work regarding the distant past or future, the galactic coordinate system is taken as rotating so that the x-axis always points to the centre of the galaxy.[6]

There are two major rectangular variations of galactic coordinates, commonly used for computing space velocities of galactic objects. In these systems the xyz-axes are designated UVW, but the definitions vary by author. In one system, the U axis is directed toward the galactic center (l = 0°), and it is a right-handed system (positive towards the east and towards the north galactic pole); in the other, the U axis is directed toward the galactic anticenter (l = 180°), and it is a left-handed system (positive towards the east and towards the north galactic pole).[7]

## In the constellations

The anisotropy of the star density in the night sky makes the galactic coordinate system very useful for coordinating surveys, both those that require high densities of stars at low galactic latitudes, and those that require a low density of stars at high galactic latitudes. For this image the Mollweide projection has been applied, typical in maps using galactic coordinates.

The galactic equator runs through the following constellations:[8]
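As a sketch of how the forward (equatorial to galactic) transformation might be implemented, the following Python snippet applies the equations above using the standard J2000 pole values (α_NGP ≈ 192.85948°, δ_NGP ≈ 27.12825°, l_NCP ≈ 122.93192°). The constants and the function name are illustrative, not taken from the text.

```python
import math

# J2000 values of the north galactic pole and the galactic longitude of the
# north celestial pole (illustrative constants, in degrees).
ALPHA_NGP = 192.85948
DELTA_NGP = 27.12825
L_NCP = 122.93192

def equatorial_to_galactic(alpha_deg, delta_deg):
    """Convert J2000 right ascension/declination (degrees) to galactic (l, b) in degrees."""
    a = math.radians(alpha_deg - ALPHA_NGP)
    d = math.radians(delta_deg)
    dngp = math.radians(DELTA_NGP)
    # sin(b) = sin(dNGP) sin(d) + cos(dNGP) cos(d) cos(alpha - alphaNGP)
    sin_b = math.sin(dngp) * math.sin(d) + math.cos(dngp) * math.cos(d) * math.cos(a)
    sin_b = max(-1.0, min(1.0, sin_b))  # guard against rounding just outside [-1, 1]
    b = math.degrees(math.asin(sin_b))
    # The two remaining equations give lNCP - l via atan2.
    y = math.cos(d) * math.sin(a)
    x = math.cos(dngp) * math.sin(d) - math.sin(dngp) * math.cos(d) * math.cos(a)
    l = (L_NCP - math.degrees(math.atan2(y, x))) % 360.0
    return l, b
```

Feeding in the pole's own coordinates returns b = 90° (longitude is undefined there), and the J2000 position of Sgr A* lands within about 0.1° of (l, b) = (0°, 0°), matching the offset discussed above.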
http://tex.stackexchange.com/questions/3540/two-sets-of-margins-for-a-single-page
# Two sets of margins for a single page

Does anyone know how to change the margins of a document midpage? I.e., a header has 0.5 inch margins but the text itself has 1 inch margins?

- Can you give us an example? Is this formally a page header, or just the first block of text in the page body? – Brent.Longborough Sep 27 '10 at 20:24
- Does anyone know how to change the margin of the first page, different from the main body? I use \documentclass{book} – user15527 Jun 12 '12 at 4:58
- @ren Welcome to TeX.sx! If you have a similar question which is not answered here, please post it as a fresh one using the "Ask Question" link above. Follow-up questions like this are more than welcome! Please also include a link to this question to provide the background. – Werner Jun 12 '12 at 5:04

You can change the margins for small pieces of text (not spreading over more than a page, usually) using the changepage package and its adjustwidth environment. E.g.,

\documentclass{article}
\usepackage{changepage,lipsum}
\begin{document}
\lipsum[1]
\begin{adjustwidth}{2cm}{2cm}
\lipsum[2]
\end{adjustwidth}
\lipsum[3]
\end{document}

The changepage package has some more features for adjusting more of the textblock, but if it's the entire page that you want to change, the latest version of the geometry package provides this somewhat more conveniently. Here's an example:

\documentclass{article}
\usepackage{geometry,lipsum}
\geometry{margin=4cm}
\begin{document}
\lipsum
\clearpage
\newgeometry{margin=1cm}
\lipsum
\clearpage
\restoregeometry
\lipsum
\end{document}
https://groups.oist.jp/representations/event/oist-representation-theory-seminar-22
# OIST Representation Theory Seminar

### Date

Wednesday, April 20, 2022 - 16:30 to 17:30, on Zoom

### Title: Descent Algebra of Type A

#### Abstract:

For a finite Coxeter group W, L. Solomon defined a certain subalgebra of the group algebra kW which is now commonly known as Solomon's descent algebra. As usual, the type A and B cases are of special interest to both algebraists and combinatorialists. In this talk, I will focus on the type A and modular case. It is closely related to the representation theory of the symmetric group and the (higher) Lie representations.
http://nrich.maths.org/public/leg.php?code=-99&cl=3&cldcmpid=5448
# Search by Topic

#### Resources tagged with Working systematically, similar to Factors and Multiples Puzzle

### There are 128 results

### American Billions

##### Stage: 3 Challenge Level:

Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...

### Cuboids

##### Stage: 3 Challenge Level:

Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?

### Triangles to Tetrahedra

##### Stage: 3 Challenge Level:

Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all.

### Product Sudoku

##### Stage: 3, 4 and 5 Challenge Level:

The clues for this Sudoku are the product of the numbers in adjacent squares.

### A First Product Sudoku

##### Stage: 3 Challenge Level:

Given the products of adjacent cells, can you complete this Sudoku?

### Sociable Cards

##### Stage: 3 Challenge Level:

Move your counters through this snake of cards and see how far you can go. Are you surprised by where you end up?

##### Stage: 3 Challenge Level:

A mathematician goes into a supermarket and buys four items. Using a calculator she multiplies the cost instead of adding them. How can her answer be the same as the total at the till?

### How Old Are the Children?

##### Stage: 3 Challenge Level:

A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?"

### Ones Only

##### Stage: 3 Challenge Level:

Find the smallest whole number which, when multiplied by 7, gives a product consisting entirely of ones.
### Factors and Multiple Challenges ##### Stage: 3 Challenge Level: This package contains a collection of problems from the NRICH website that could be suitable for students who have a good understanding of Factors and Multiples and who feel ready to take on some. . . . ### Two and Two ##### Stage: 2 and 3 Challenge Level: How many solutions can you find to this sum? Each of the different letters stands for a different number. ### LCM Sudoku ##### Stage: 4 Challenge Level: Here is a Sudoku with a difference! Use information about lowest common multiples to help you solve it. ### Where Can We Visit? ##### Stage: 3 Challenge Level: Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think? ### Isosceles Triangles ##### Stage: 3 Challenge Level: Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw? ### Multiplication Equation Sudoku ##### Stage: 4 and 5 Challenge Level: The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid. ### Magic Potting Sheds ##### Stage: 3 Challenge Level: Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it? ### Product Sudoku 2 ##### Stage: 3 and 4 Challenge Level: Given the products of diagonally opposite cells - can you complete this Sudoku? ##### Stage: 3 Challenge Level: If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products. Why? 
### Integrated Product Sudoku ##### Stage: 3 and 4 Challenge Level: This Sudoku puzzle can be solved with the help of small clue-numbers on the border lines between pairs of neighbouring squares of the grid. ### Product Doubles Sudoku ##### Stage: 3 and 4 Challenge Level: Each clue number in this sudoku is the product of the two numbers in adjacent cells. ### Football Sum ##### Stage: 3 Challenge Level: Find the values of the nine letters in the sum: FOOT + BALL = GAME ### Number Daisy ##### Stage: 3 Challenge Level: Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25? ### Peaches Today, Peaches Tomorrow.... ##### Stage: 3 and 4 Challenge Level: Whenever a monkey has peaches, he always keeps a fraction of them each day, gives the rest away, and then eats one. How long could he make his peaches last for? ### Ben's Game ##### Stage: 3 Challenge Level: Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters. ### Making Maths: Double-sided Magic Square ##### Stage: 2 and 3 Challenge Level: Make your own double-sided magic square. But can you complete both sides once you've made the pieces? ### M, M and M ##### Stage: 3 Challenge Level: If you are given the mean, median and mode of five positive whole numbers, can you find the numbers? ### Difference Sudoku ##### Stage: 3 and 4 Challenge Level: Use the differences to find the solution to this Sudoku. ### Twin Corresponding Sudokus II ##### Stage: 3 and 4 Challenge Level: Two sudokus in one. Challenge yourself to make the necessary connections. ### Integrated Sums Sudoku ##### Stage: 3 and 4 Challenge Level: The puzzle can be solved with the help of small clue-numbers which are either placed on the border lines between selected pairs of neighbouring squares of the grid or placed after slash marks on. . . . 
### Rectangle Outline Sudoku ##### Stage: 3 and 4 Challenge Level: Each of the main diagonals of this sudoku must contain the numbers 1 to 9 and each rectangle width the numbers 1 to 4. ### Twin Corresponding Sudoku III ##### Stage: 3 and 4 Challenge Level: Two sudokus in one. Challenge yourself to make the necessary connections. ### More on Mazes ##### Stage: 2 and 3 There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper. ### Rainstorm Sudoku ##### Stage: 4 Challenge Level: Use the clues about the shaded areas to help solve this sudoku ### Special Numbers ##### Stage: 3 Challenge Level: My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be? ### Pole Star Sudoku 2 ##### Stage: 3 and 4 Challenge Level: This Sudoku, based on differences. Using the one clue number can you find the solution? ### You Owe Me Five Farthings, Say the Bells of St Martin's ##### Stage: 3 Challenge Level: Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring? ### Colour Islands Sudoku ##### Stage: 3 Challenge Level: An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine. ### Wallpaper Sudoku ##### Stage: 3 and 4 Challenge Level: A Sudoku that uses transformations as supporting clues. ### Masterclass Ideas: Working Systematically ##### Stage: 2 and 3 Challenge Level: A package contains a set of resources designed to develop students’ mathematical thinking. This package places a particular emphasis on “being systematic” and is designed to meet. . . . 
### Latin Squares ##### Stage: 3, 4 and 5 A Latin square of order n is an array of n symbols in which each symbol occurs exactly once in each row and exactly once in each column. ### Magnetic Personality ##### Stage: 2, 3 and 4 Challenge Level: 60 pieces and a challenge. What can you make and how many of the pieces can you use creating skeleton polyhedra? ### LOGO Challenge - the Logic of LOGO ##### Stage: 3 and 4 Challenge Level: Just four procedures were used to produce a design. How was it done? Can you be systematic and elegant so that someone can follow your logic? ### Seasonal Twin Sudokus ##### Stage: 3 and 4 Challenge Level: This pair of linked Sudokus matches letters with numbers and hides a seasonal greeting. Can you find it? ### The Best Card Trick? ##### Stage: 3 and 4 Challenge Level: Time for a little mathemagic! Choose any five cards from a pack and show four of them to your partner. How can they work out the fifth? ### LOGO Challenge - Sequences and Pentagrams ##### Stage: 3, 4 and 5 Challenge Level: Explore this how this program produces the sequences it does. What are you controlling when you change the values of the variables? ### LCM Sudoku II ##### Stage: 3, 4 and 5 Challenge Level: You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku. ### Twin Line-swapping Sudoku ##### Stage: 4 Challenge Level: A pair of Sudoku puzzles that together lead to a complete solution. ### Bochap Sudoku ##### Stage: 3 and 4 Challenge Level: This Sudoku combines all four arithmetic operations. ### Cinema Problem ##### Stage: 3 and 4 Challenge Level: A cinema has 100 seats. Show how it is possible to sell exactly 100 tickets and take exactly £100 if the prices are £10 for adults, 50p for pensioners and 10p for children.
https://www.quantumstudy.com/physics/magnetic-effect-of-current-11/
# Force between two parallel current-carrying wires

To calculate the force between two infinite parallel current-carrying wires separated by a distance r, take an arbitrary point P on the second wire. The magnetic field at this point due to the other wire is

$\displaystyle \vec{B} = \frac{\mu_0}{4\pi}\frac{2I_1}{r} (-\hat{k})$

The force on an elementary length of the second wire is

$\displaystyle \vec{dF} = (I_2 \, dl\,\hat{j}) \times \frac{\mu_0}{4\pi}\frac{2I_1}{r} (-\hat{k})$

$\displaystyle \frac{\vec{dF}}{dl} = \frac{\mu_0}{4\pi}\frac{2 I_1 I_2}{r} (-\hat{i})$

We note that wires carrying current in the same direction attract each other (verify using Fleming's left-hand rule).
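The magnitude of the result above, F/l = μ₀I₁I₂/(2πr), is easy to check numerically. The sketch below (the function name is mine) reproduces the classical benchmark of 2×10⁻⁷ N per metre for two wires 1 m apart, each carrying 1 A.

```python
import math

# Conventional value of the vacuum permeability (SI units).
MU_0 = 4e-7 * math.pi  # T*m/A

def force_per_unit_length(i1, i2, r):
    """Force per metre between long parallel wires carrying i1, i2 at separation r (SI units)."""
    return MU_0 * i1 * i2 / (2 * math.pi * r)

print(force_per_unit_length(1.0, 1.0, 1.0))  # approximately 2e-07 N/m
```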
http://zeta.albion.edu/~dreimann/Fall2020/courses/math-cs-299-399/essay.php
Math/CS 299/399 Colloquium in Mathematics & Computer Science I & II

Fall 2020

# Introductory Essay

## Video Overview

I've created a Video Overview. For some reason, the audio drops out and returns late in the video.

## Goals

The goals of this assignment are for you to reflect on your previous mathematical experiences. It also serves as an exercise in learning LaTeX (299) or recalling how to use LaTeX (399). Use a simple article style for your LaTeX document. Contact me if you have questions on the use of LaTeX.

## Assignment

#### 299 and 399: Stylized name (5 points)

Express your name creatively using the special and mathematical symbols available to you in LaTeX. For example, here is one way to express my name, which is created in LaTeX using the following code:

$\mathcal{D}\alpha\vec{\rm v}$\i $\partial$ $\Lambda$.~$\mathbb{R}e^i$\raisebox{-.3ex}{\footnotesize\Scorpio}\textcircled{a}n$^2$

You can simplify some things in LaTeX by using macros. Here is an example that creates a macro called \myName that contains my name in special symbols and then uses it in a centered line.

\newcommand{\myName}{%
$\mathcal{D}\alpha\vec{\rm v}$\i $\partial$ $\Lambda$.~$\mathbb{R}e^i$\raisebox{-.3ex}{\footnotesize\Scorpio}\textcircled{a}n$^2$}
\begin{center}
\myName
\end{center}

Have fun and be creative!

#### Math/CS 299: Memorable mathematics or computer science experience (10 points)

Write a brief essay (100-250 words) using LaTeX on one of your most memorable experiences learning mathematics or computer science. What math or CS did you learn? Why was this memorable? This could be from a college course, a high school or grade school course, or an extracurricular activity. Include some mathematical symbols, code blocks, or other LaTeX features in the body of your essay (not just an add-on at the end).

#### Math/CS 399: Favorite Theorem or Algorithm (10 points)

Restate a favorite mathematical theorem or computational algorithm and include a proof.
Explain why you chose this theorem or algorithm.

If you are writing about a favorite theorem, the following components are required for this part of the assignment:

1. Restate a favorite mathematical theorem. Clearly define any variables.
2. Give a proof.
3. Use LaTeX's theorem environment as follows:

\begin{theorem}[NameOfTheorem]
State theorem here.
\end{theorem}
\begin{proof}
Give proof here.
\end{proof}

4. Write a paragraph on why this theorem interests you.
5. See Overleaf's Theorems and proofs page for additional information on the theorem environment.

If you are writing about a favorite computational algorithm, the following components are required for this part of the assignment:

1. Restate the computational algorithm. Clearly define any variables. This could be expressed using pseudocode or a specific programming language.
2. Give a brief description of why it works.
3. It is OK to copy from a book as long as you cite the source using BibTeX. Use one of LaTeX's environments. See Overleaf's Algorithms page for additional information on options for typesetting algorithms.
4. Write a paragraph on why this algorithm interests you.

Kevin Knudson and Evelyn Lamb have a podcast series, My Favorite Theorem, where they interview contemporary mathematicians about their favorite theorems. They have completed roughly 50 interviews so far; all are available as audio and have been transcribed. At least two of these episodes feature speakers who have given colloquium talks at Albion!

## Deliverables

Email your instructor a pdf of your paper that contains your stylized name followed by the other component. Use 1 inch margins. You can do this by placing the following code in the preamble (above \begin{document}).

\usepackage[letterpaper, portrait, margin=1in]{geometry}
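A minimal compilable skeleton for the theorem option might look like the following. The amsthm package supplies the proof environment, \newtheorem sets up theorem, and the sample statement is just a placeholder; adapt it to your own favorite theorem.

```latex
\documentclass{article}
\usepackage{amsthm}
\usepackage[letterpaper, portrait, margin=1in]{geometry}
\newtheorem{theorem}{Theorem}

\begin{document}

\begin{theorem}[Euclid]
There are infinitely many prime numbers.
\end{theorem}

\begin{proof}
Suppose $p_1, \dots, p_n$ were all the primes. Then $N = p_1 p_2 \cdots p_n + 1$
is divisible by none of them, yet $N > 1$ has some prime factor, a contradiction.
\end{proof}

\end{document}
```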
https://shawnlyu.com/algorithms/intro-to-graph-algorithms-bfs-dfs/
# Intro to Graph Algorithms – BFS & DFS

Graphs are a pervasive data structure in computer science, and algorithms for working with them are fundamental to the field.

Cormen, Thomas H., et al. Introduction to Algorithms. MIT Press, 2009.

Given a graph defined as G=(V,E), a set of vertices and edges, we are interested in how to represent it and how to search it systematically, visiting the vertices by following the edges. This blog will briefly introduce two ways of representing a graph, and then dive into two graph search algorithms: Breadth-First-Search (BFS) and Depth-First-Search (DFS).

## How to Represent a Graph

The two most common ways of representing a graph are:

- an adjacency matrix
- an adjacency list

To construct an adjacency matrix, we initialize a 2D array of size |V|×|V|, and set matrix[i][j]=True if the vertices i and j are connected. With this matrix, we can find whether two nodes are connected efficiently. However, it can be wasteful of space when the graph is sparse, i.e., |E| is far smaller than |V|^2, so most of the values in the matrix are False.

Another way is to construct an array of lists, where each list contains the vertices connected to that node. Though it might take longer to determine whether two nodes are connected, it saves a lot of space, especially for large sparse graphs.

## How to Search a Graph

### Breadth-First-Search (BFS)

BFS is a graph searching algorithm such that, quoting Introduction to Algorithms, 'Given a graph G=(V,E) and a distinguished source vertex s, breadth-first search systematically explores the edges of G to discover every vertex that is reachable from s'. To put it another way, BFS visits every node at distance k from the source before moving on to the nodes at distance k+1. Consequently, BFS can also generate the shortest paths (in number of edges) from the source vertex to every reachable vertex.
Pseudocode for BFS is as follows:

from collections import deque

queue = deque([root])
visited = {root}
while queue:
    node = queue.popleft()
    # some operations on node happen here
    # and then:
    for nei in neighbour(node):
        if nei not in visited:  # mark on discovery so cycles don't loop forever
            visited.add(nei)
            queue.append(nei)

Time complexity: O(|V|+|E|). Visiting the nodes costs O(|V|) in total, and scanning the adjacency lists costs O(|E|).

### Depth-First-Search (DFS)

Let's take a look at the definition in Introduction to Algorithms: 'In depth-first search, edges are explored out of the most recently discovered vertex v that still has unexplored edges leaving it. When all of v's edges have been explored, the search "backtracks" to explore edges leaving the vertex from which v was discovered.' As you can imagine, instead of exploring level by level as in BFS, it goes as deep as possible before it reaches a dead end and returns to the previous level.

def helper(node, visited=None):
    if node is None:
        return
    if visited is None:
        visited = set()
    visited.add(node)
    # some operations on node happen here
    # and then:
    for nei in neighbour(node):
        if nei not in visited:
            helper(nei, visited)

helper(root)

Time complexity: O(|V|+|E|)

## Practices

Leetcode problems are available here:
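To make the two sketches concrete, here is a small self-contained example on an adjacency-list graph (the graph data is made up for illustration): BFS computes edge-count distances from a source, and DFS records its preorder visit sequence.

```python
from collections import deque

# A small undirected graph as an adjacency list (illustrative data).
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def bfs_distances(graph, source):
    """Shortest distances (in edges) from source via breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nei in graph[node]:
            if nei not in dist:  # first discovery gives the shortest distance
                dist[nei] = dist[node] + 1
                queue.append(nei)
    return dist

def dfs_order(graph, source, visited=None, order=None):
    """Preorder visit sequence of a recursive depth-first search."""
    if visited is None:
        visited, order = set(), []
    visited.add(source)
    order.append(source)
    for nei in graph[source]:
        if nei not in visited:
            dfs_order(graph, nei, visited, order)
    return order

print(bfs_distances(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
print(dfs_order(graph, 0))      # [0, 1, 3, 2, 4]
```

Note how BFS reaches node 4 at distance 3 (level by level), while DFS dives 0 → 1 → 3 → 2 before backtracking to pick up 4.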
https://www.esaral.com/q/express-each-of-the-following-angles-in-radians-40183/
Express each of the following angles in radians

Question: Express each of the following angles in radians: $-22^{\circ} 30^{\prime}$

Solution:

Formula: angle in radians = angle in degrees $\times \frac{\pi}{180}$

To convert minutes to degrees: degrees $=\frac{\text{minutes}}{60}$

Therefore, the total angle in degrees $=-\left(22+\frac{30}{60}\right)=-22.5$

Therefore, the angle in radians $=-22.5 \times \frac{\pi}{180}=-\frac{\pi}{8}$
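A quick numerical check of the conversion, using Python's built-in degree-to-radian helper:

```python
import math

angle_deg = -(22 + 30 / 60)          # -22 degrees 30 minutes as decimal degrees
angle_rad = math.radians(angle_deg)  # same as angle_deg * pi / 180

print(angle_deg)  # -22.5
print(angle_rad)  # matches -pi/8 up to floating-point rounding
```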
http://math.stackexchange.com/tags/probability-theory/hot
# Tag Info

**4** In general, the inequality
$$\mathbb{E}\|X-Y\|^2 \leq \mathbb{E}\|X-Z\|^2 + \mathbb{E}\|Z-Y\|^2$$
does not hold. Simply consider arbitrary $X$, $Z := \frac{1}{2} X$ and $Y = \frac{1}{4}X$. Then
$$\mathbb{E}\|X-Y\|^2 = \frac{9}{16} \mathbb{E}\|X\|^2$$
whereas
$$\mathbb{E}\|X-Z\|^2 + \mathbb{E}\|Z-Y\|^2 = \left( \frac{1}{4} + \frac{1}{16} \right) ...$$

**3** A fundamental theorem of Fukushima says that a sufficient condition for a Dirichlet form to be associated to a Markov process is that it be regular: in particular, this condition requires the state space to be locally compact, and doesn't cover infinite-dimensional state spaces. However, Albeverio, Ma and Röckner have given a necessary and sufficient ...

**3** Since the Markov chain is irreducible, it is possible to get from state $i$ to state $j$, so $p_{ij}^{(k)} > 0$ for some $k$. Then $p_{ij}^{(n+k)} \ge p_{ii}^{(n)} p_{ij}^{(k)}$, so
$$\sum_{n\ge 0} p_{ij}^{(n)} \ge \sum_{n\ge k} p_{ij}^{(n)} = \sum_{n\ge 0} p_{ij}^{(n+k)}\ge p_{ij}^{(k)} \sum_{n \ge 0} p_{ii}^{(n)} = \infty$$

**3** You only need that $X$ is $\mathcal{F}_2$-measurable, which gives:
$$E[XY|\mathcal{F}_2]=X E[Y|\mathcal{F}_2].$$
Now, using $\mathcal{F}_1\subset\mathcal{F}_2$ and iterated conditioning for the first equality below, we have
$$E[XY|\mathcal{F}_1]=E[E[XY|\mathcal{F}_2]|\mathcal{F}_1]=E[XE[Y|\mathcal{F}_2]|\mathcal{F}_1].$$

**3** CLT does not apply to the present setting because (the simplest version of) CLT assumes that $(X_n)$ is i.i.d. while here the distribution of $X_n$ depends on $n$. With Borel–Cantelli I obtained:
$$\sum_n\Pr(X_n\neq0)<\infty\Longrightarrow\Pr(\limsup\limits_n\{X_n\neq0\})=0$$
What does it mean now? The set $\limsup\limits_n\{X_n=0\}$ has probability ...
3 First part: For every $n$, the simple Markov property at time $nK$ yields $$P_j(T_i\gt(n+1)K\mid T_i\gt nK,X_{nK}=k)=u(k),\qquad u(k)=P_k(T_i\gt K),$$ and $u(\,\cdot\,)\leqslant1-\varepsilon$ uniformly by hypothesis, hence $$P_j(T_i\gt(n+1)K,X_{nK}=k)\leqslant(1-\varepsilon)P_j(T_i\gt nK,X_{nK}=k).$$ Summing these over $k$ yields ...

2 $S_N$ is a function of the random variable $N$ (as well as of the $X_i$), but $E[S_N]$ is not; it is a real number. $E[S_N\mid N]$ is a function of the random variable $N$ but not of the $X_i$. This random variable $E[S_N\mid N]$ takes on the value $nE[X]$ whenever $N$ has value $n$. The right side of your (1) is an application of the law of the ...

2 $${n\choose n-j-1}F(z_{k-1})^{n-j-1}{j+1\choose j}(F(z_k)-F(z_{k-1}))^j$$ Deciphering the formula: The first binomial factor is the number of ways of choosing $n-j-1$ elements from the whole sample (the ones that will be below $z_{k-1}$). The first power of $F$ is the probability that these elements are indeed all below $z_{k-1}$. The second binomial ...

2 To show that $E[X|\mathcal{G}]$ equals some $Y$, you need to check: (1) $Y$ is $\mathcal{G}$-measurable; (2) $\int_A Y(\omega)\,dP(\omega)=\int_A X(\omega)\,dP(\omega)$ for all $A\in\mathcal{G}$. In this case, the claim is that $Y=X$ works. (1) holds by assumption and (2) holds trivially.

2 In general, it is not true even for non-negative sequences of random variables that almost sure convergence to $0$ is equivalent to convergence of the expectation to $0$ (and this example shows it, for an appropriate choice of $a$ and $b$). For the convergence in $\mathbb L^r$, follow the approach you suggested. For the almost sure convergence, you ...

2 In general $$\eqalign{P(X + Y \le z) &= \iint_{\{(x,y): x+y \le z\}} dx\; dy\; f_{XY}(x,y)\cr &= \int_{-\infty}^\infty dx \int_{-\infty}^{z-x} dy \; f_{XY}(x,y)}$$ If $X$ and $Y$ are independent, $f_{XY}(x,y) = f_X(x) f_Y(y)$ so this becomes $$\int_{-\infty}^\infty dx \int_{-\infty}^{z-x} dy\;f_X(x) f_Y(y) = \int_{-\infty}^\infty dx\; ...$$
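The convolution formula in the last answer can be sanity-checked numerically. The sketch below is my illustration, not from the thread: for independent $X, Y \sim \mathrm{Uniform}(0,1)$ the double integral collapses to $P(X+Y\le z)=z^2/2$ on $0\le z\le 1$, which a Monte-Carlo estimate should reproduce.

```python
import random

def sum_cdf_exact(z):
    """P(X + Y <= z) for independent X, Y ~ Uniform(0,1), 0 <= z <= 1."""
    return z * z / 2.0  # the double integral reduces to a triangle area

def sum_cdf_mc(z, trials=200_000, seed=1):
    """Monte-Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(rng.random() + rng.random() <= z for _ in range(trials))
    return hits / trials

for z in (0.3, 0.5, 0.9):
    assert abs(sum_cdf_mc(z) - sum_cdf_exact(z)) < 0.01
```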
2 Recall that if $X \sim \mathrm{Binomial}(n,p)$, then $$\Pr[X = x] = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, 2, \ldots, n.$$ Then use this to calculate the ratio $$\frac{\Pr[X = x+1]}{\Pr[X = x]},$$ being careful to cancel like terms.

2 I believe what's meant is that the sequence of functions $L_n$ converges to a function $L$ with the property that $L(x) \le f(x)$ for every $x$, and similarly for the $U_n$. That's certainly true, and seems to make sense in the context given. As @Nate notes below, this convergence is probably meant to be understood pointwise.

2 More generally... one may want to keep in mind that Markov chains (in discrete time) and Markov processes (in continuous time) are different (although related) objects. 1. A Markov chain $(X_n)$, indexed by $n$ integer, is described by a transition matrix $P$, such that, for every states $(x,y)$ and every $n$, $$\Pr(X_{n+1}=y\mid X_n=x)=P_{xy}.$$ Thus: ...

1 The order of draws does not matter at all. You have $\frac 1{12}$ chance of getting the first pick each year. Before the first draw, you have $\frac 1{12^2}$ chance of getting the first pick the next two years in a row. However, you would probably remark on it if the same person got the first pick twice. That has a chance of $\frac 1{12}$ as somebody has ...

1 Let us write $x$, $\hat x$ and $u$ for your $x_t$, $\hat x^t$ (or is it $\hat x$?) and $u_0^t$ (or is it $u_t^0$?), respectively. By definition of conditional expectation, $x-\hat x$ is orthogonal to the space $L^2(u)$ of square integrable random variables measurable with respect to $u$. Since $\hat x$ is in $L^2(u)$, $x-\hat x$ and $\hat x-E(x)$ are ...

1 Unless otherwise specified, statements about convergence, inequalities, etc., of functions are usually meant to be interpreted pointwise. So "$L_n \uparrow L$" means $L_n(x) \uparrow L(x)$ for every $x$, and "$L \le f$" means $L(x) \le f(x)$ for every $x$.

1 In general, I would try finding the CDF.
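The ratio trick in the first answer above can be checked exactly in a few lines (my illustration, not from the thread): after cancelling like terms, $\Pr[X=x+1]/\Pr[X=x]$ reduces to $\frac{n-x}{x+1}\cdot\frac{p}{1-p}$.

```python
from math import comb

def pmf(n, p, x):
    """Binomial(n, p) pmf."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def ratio(n, p, x):
    """Pr[X = x+1] / Pr[X = x] after cancelling like terms."""
    return (n - x) / (x + 1) * p / (1 - p)

n, p = 10, 0.3
for x in range(n):
    assert abs(pmf(n, p, x + 1) / pmf(n, p, x) - ratio(n, p, x)) < 1e-9
```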
$$F_K(k) = \iiint_{\{(x,y,z): f_c(x,y)+f_c(y,z) + f_c(x,z)\le k\}} f_X(x) f_Y(y) f_Z(z)\; dx\; dy\; dz$$

1 If there have been four claims at most, you need to consider values of $N$ from $\color{blue}{0}$ to $4$ inclusive, hence the total probability is $$P(N=0)+P(N=1)+P(N=2)+P(N=3)+P(N=4)=\frac{5}{6}$$ For the probability of at least one outcome, the values of $N$ from $1$ to $4$ need to be considered ...

1 The area (probability) up to $0$ is $(1)(0.2)$, which is $0.2$. To get to area $0.5$, we need another $0.3$. So if $m$ is the median then $$\int_0^m (0.2+1.2x)\,dx=0.3.$$ The rest is calculation. We could also solve the problem geometrically. For the mode, graph the density function. It reaches its maximum at $1$.

1 (a) Given the expressions of the pmfs for the binomial and geometric distributions: $$\mathbb{P}\{X=0\} = \binom{n}{0} p^0(1-p)^n = (1-p)^n$$ while indeed \begin{align} \mathbb{P}\{Y > n\} &= 1 - \mathbb{P}\{Y \leq n\} = 1 - \sum_{k=1}^n (1-p)^{k-1}p \\ &= 1 - p\sum_{k=0}^{n-1} (1-p)^{k} = 1-p\frac{1-(1-p)^n}{1-(1-p)}\\ &= ...

1 The easiest way for a) is to note that the probability that $Y\gt n$ is the probability of $n$ failures in a row, which is $(1-p)^n$. However, the path you started on works also. For $\sum_1^{n-1}p(1-p)^{k}$ is the sum of a finite geometric series with first term $p$ and common ratio $1-p$. By the usual formula, this sum is $$p\frac{1-(1-p)^n}{1-(1-p)},$$ ...

1 Let $X=G\left(\dfrac 1 {1-\mathrm e^{-E}}\right)$. If $G$ is the inverse of $\dfrac1 {1-F}$, then the identity $X=\dfrac 1 {1-F\left(\dfrac 1 {1-\mathrm e^{-E}}\right)}$ written in the post is incorrect. Actually, $X=G\left(\dfrac 1 {1-\mathrm e^{-E}}\right)$ translates as $\dfrac1 {1-F(X)}=\dfrac 1 {1-\mathrm e^{-E}}$ hence $F(X)=\mathrm e^{-E}$ and, for ...

1 For your first hesitation: we can take $f$ to have values in $[-1,1]$ because we are already assuming $f$ is bounded. Thus, if $M = \sup_x |f(x)|$, $f/M$ does take values in $[-1,1]$.
The constant $M$ does no harm because $E[f(X)/M] = \frac{1}{M}E[f(X)]$, so the general case of values in $[-M,M]$ follows immediately from the $[-1,1]$ case. For the question ...

1 $$\{X\leq 1\}=\{X<\frac{1}{2}\}\cup\{\frac{1}{2}\leq X\leq 1\}$$ and these sets are disjoint, so that $$P\{X\leq 1\}=P\{X<\frac{1}{2}\}+P\{\frac{1}{2}\leq X\leq 1\}$$ or equivalently $$P\{\frac{1}{2}\leq X\leq 1\}=P\{X\leq 1\}-P\{X<\frac{1}{2}\}$$

1 You should understand the difference between a probability density function and a cumulative distribution function. The cumulative distribution function, which in your case is $F(x)$, always gives the value for $P(X \leq x)$. So $F(1)$ would give you $P(X\leq1)$ and $F(\frac{1}{2})$ would give you $P(X\leq\frac{1}{2})$. In order to find $P(\frac{1}{2} < ...

1 \begin{align}\require{cancel} \Pr[X=x]&={n\choose {x}}p^{x}(1-p)^{n-x}\\ \Pr[X=x+1]&={n\choose {x+1}}p^{x+1}(1-p)^{n-(x+1)}\\ &={n\choose x+1}\left(\dfrac p{1-p}\right) p^x(1-p)^{n-x}\\ &={n\choose x+1}\left(\dfrac p{1-p}\right)\dfrac{\Pr[X=x]}{n\choose x}\\ &=\dfrac {n\choose x+1}{n\choose x}\left(\dfrac p{1-p}\right)\Pr[X=x]\\ ...

1 First, you wrote that if $X_n$ is not constant a.s., then $\forall c\in\mathbb{R},\epsilon>0;\ P(|X_n-c| > \epsilon) > 0$. This is not true. Change the $\forall$ to an $\exists$ and it will be true. Here's one approach: $\implies)$ So $X_n$ is not constant a.s. That means there exist $c\in\mathbb{R}$ and $\epsilon>0$ such that ...

1 Hint: It is not generally true that $\sum P(A_{nm})=+\infty$. But it must be true for one of the $m$ following sequences: \begin{equation*} (B^{i}_{n})_{n\in\mathbb{N}}=(A_{nm+i})_{n\in\mathbb{N}}, \quad i=0,1,\ldots,m-1 \end{equation*}

1 Your function is the probability density function of a random variable distributed uniformly on the interval $[1,2]$. As well as having an integral across the real numbers of $1$, a probability density function also needs to be non-negative. Your function has this property.
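The geometric-series identity used in the answers to (a) above, $P\{Y>n\}=(1-p)^n$, can be verified by brute-force summation (my illustration, not from the thread):

```python
def tail_by_sum(p, n):
    """P{Y > n} computed as 1 - sum_{k=1}^{n} (1-p)^(k-1) p."""
    return 1.0 - sum((1 - p) ** (k - 1) * p for k in range(1, n + 1))

p = 0.2
for n in range(15):
    # The finite series collapses to (1-p)^n: n failures in a row.
    assert abs(tail_by_sum(p, n) - (1 - p) ** n) < 1e-12
```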
2014-09-01 19:26:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9942597150802612, "perplexity": 359.7674391064468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535919886.18/warc/CC-MAIN-20140909051457-00267-ip-10-180-136-8.ec2.internal.warc.gz"}
https://scholarship.rice.edu/handle/1911/96008?show=full
Advisor: Halas, Naomi J. Author: Hogan, Nathaniel J. Date: May 2017. Citation: Hogan, Nathaniel J. "Light Transport in Nanomaterial Systems." (2017) Diss., Rice University. https://hdl.handle.net/1911/96008

What happens as light traverses a medium composed of both traditional materials and many ($10^5-10^{12}$ $cm^{-3}$) nanoparticles? These types of systems are present in many active areas of research in the nanotechnology sphere. Examples include nanoparticles in aqueous and non-aqueous solvents during chemical synthesis or for solar energy harvesting applications; nanoparticles embedded in homogeneous and non-homogeneous solids for photocatalysis; nanoparticles in biological tissue for medical applications; and more. Because nanoparticles composed of a certain material can have optical properties very different from those of the bulk material, these types of systems also display unique optical properties. In this thesis I outline an approach to solving light transport in nanomaterial systems based on the Monte-Carlo method. This method is shown to be optimal for nanomaterial systems where the extinction coefficient is composed of relatively equal contributions of scattering and absorption. Furthermore, I show that this computational tool can be utilized to solve problems in a wide variety of fields. In plasmonic photocatalysis, where mixtures of nanoparticles are driven resonantly to efficiently catalyze chemical reactions, this method elucidates the photothermal contribution. Experimental results combined with calculations suggest that the photocatalysis of a novel antenna-reactor complex composed of an Al core and a Cu$_2$O shell is primarily from hot-electron injection.
Calculations involving taking optical images of objects through mixtures of nanoparticles explain the phenomenon that absorptive particles can enhance the quality and resolution of images taken through a scattering medium. Previous reports on this effect were limited in their explanation. We show that the reduced scattering coefficient is not sufficient to explain the phenomenon; rather, all of the optical parameters must be known independently. The addition of absorptive particles increases image quality by selectively removing photons with the longest path length through the system. These photons are the most likely to cause image distortion, having undergone multiple scattering events and lost the original information of the image. Simulations of light transport through highly concentrated solutions of nanoshells (1$\times$10$^9$-1$\times$10$^{11}$ NP/ml) show a localization and efficiency of absorbed light that explains previous results obtained in light-triggered release of DNA from nanoparticle surfaces. The strong temperature gradients obtained from these calculations help clarify previous results, which showed DNA release below the dehybridization temperature with CW laser irradiation. Further studies motivated by these calculations elucidate two regimes in light-triggered release with NIR radiation. CW radiation causes dehybridization of DNA due to melting, whereas ultrafast radiation causes Au-S bond breakage. Although previous studies have shown Au-S bond breakage for 400 nm ultrafast irradiation, this work is the first to explicitly show this mechanism for 800 nm radiation. Light transport calculations coupled to thermodynamic calculations show a clear damage threshold of the nanoshells below which DNA release is optimal. This method of solving light transport for small nanomaterial systems is flexible, relatively easy to implement, and remarkably efficient with even modest computational resources.
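As a concrete illustration of the Monte-Carlo approach to light transport described above — this is a deliberately minimal 1-D sketch of mine, not the dissertation's code — photons take exponentially distributed free paths set by the extinction coefficient $\mu_t=\mu_a+\mu_s$, are absorbed at each interaction with probability $\mu_a/\mu_t$, and otherwise re-scatter isotropically; the fraction crossing a slab is tallied.

```python
import random
from math import exp, log

def transmit_fraction(mu_a, mu_s, thickness, n_photons=20000, seed=0):
    """Fraction of photons crossing a 1-D slab with absorption and scattering."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s  # extinction = absorption + scattering
    transmitted = 0
    for _ in range(n_photons):
        x, direction = 0.0, 1.0
        while True:
            step = -log(1.0 - rng.random()) / mu_t  # free path ~ Exp(mu_t)
            x += direction * step
            if x >= thickness:        # escaped through the far face
                transmitted += 1
                break
            if x <= 0.0:              # back-scattered out of the slab
                break
            if rng.random() < mu_a / mu_t:
                break                 # this interaction is an absorption
            direction = rng.choice((-1.0, 1.0))  # isotropic rescattering in 1-D
    return transmitted / n_photons

# With no scattering this reduces to Beer-Lambert: T = exp(-mu_a * L).
assert abs(transmit_fraction(1.0, 0.0, 1.0) - exp(-1.0)) < 0.02
```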
Format: application/pdf. Language: English. Keywords: light transport; plasmonics; nanotechnology. Title: Light Transport in Nanomaterial Systems. Type: Thesis (Text). Subjects: Applied Physics; Natural Sciences. Institution: Rice University. Degree: Doctor of Philosophy (Doctoral), Applied Physics/Physics.
2021-12-07 19:41:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3234720528125763, "perplexity": 3404.6718563718073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00057.warc.gz"}
https://www.intel.com/content/www/us/en/docs/programmable/683230/18-1/tips-for-creating-a-tcl-script-to-monitor.html
ID 683230 Date 11/12/2018 Public

## 3.4.3.5. Tips for Creating a .tcl Script to Monitor Critical Paths Across Compiles

Many designs have the same critical paths show up after each compile. In other designs, critical paths bounce around between different hierarchies, changing with each compile. This behavior happens in high-speed designs where many register-to-register paths have very little slack. Different placements can then result in timing failures in the marginal paths.

1. In the project directory, create a script named TQ_critical_paths.tcl.
2. After compilation, review the critical paths and then write a generic report_timing command to capture those paths. For example, if several paths fail in a low-level hierarchy, add a command such as:
   report_timing -setup -npaths 50 -detail path_only \
   -to "main_system: main_system_inst|app_cpu:cpu|*" \
   -panel_name "Critical Paths||s: * -> app_cpu"
3. If there is a specific path, such as a bit of a state-machine going to other *count_sync* registers, you can add a command similar to:
   report_timing -setup -npaths 50 -detail path_only \
   -from "main_system: main_system_inst|egress_count_sm:egress_inst|update" \
   -to "*count_sync*" -panel_name "Critical Paths||s: egress_sm|update -> count_sync"
4. Execute this script in the Timing Analyzer after every compilation, and add new report_timing commands as new critical paths appear. This helps you monitor paths that consistently fail and paths that are only marginal, so you can prioritize effectively.
2023-02-08 04:20:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5943536758422852, "perplexity": 12634.416099907854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500671.13/warc/CC-MAIN-20230208024856-20230208054856-00241.warc.gz"}
http://techtagg.com/standard-error/calculating-the-standard-deviation.html
Calculating The Standard Deviation

Using the formula SEM = So × √(1 − r), where So is the observed standard deviation and r is the reliability, the result is the standard error of measurement (SEM). Items that do not correlate with other items can usually be improved. The SEM is an estimate of how much error there is in a test. A good measurement scale should be both reliable and valid. First you should have the ICC (intra-class correlation) and the SD (standard deviation). He can be about 99% (or ±3 SEMs) certain that his true score falls between 19 and 31. Accuracy is also impacted by the quality of testing conditions and the energy and motivation that students bring to a test. Educators should consider the magnitude of SEMs for students across the achievement distribution to ensure that the information they are using to make educational decisions is highly accurate for all students. An Asian history test consisting of a series of questions about Asian history would have high face validity. The difference between the observed score and the true score is called the error score.

Theoretically, the true score is the mean that would be approached as the number of trials increases indefinitely. In general, the correlation of a test with another measure will be lower than the test's reliability. And to do this, the assessment must measure all kids with similar precision, whether they are on, above, or below grade level. Between ±2 SEM the true score would be found 96% of the time. In the first row there is a low standard deviation (SDo) and good reliability (.79). First, the middle number tells us that a RIT score of 188 is the best estimate of this student's current achievement level. If you subtract the r from 1.00, you would have the amount of inconsistency. For the sake of simplicity, we are assuming there is no partial knowledge of any of the answers and that for a given question a student either knows the answer or guesses. While calculating the standard error of measurement, should we use the lower and upper bounds or continue using the reliability estimate?

About the author: Nate Jensen is a Research Scientist at NWEA, where he specializes in the use of student testing data for accountability purposes. Measurement of some characteristics such as height and weight is relatively straightforward. In the diagram at the right the test would have a reliability of .88. Of course, some constructs may overlap, so the establishment of convergent and divergent validity can be complex. The three most common types of validity are face validity, empirical validity, and construct validity. BMC Medical Education 2010, 10:40: although it might seem to barely address your question at first sight, it has some additional material showing how to compute the SEM (here with Cronbach's $\alpha$). Student B has an observed score of 109. The true score is hypothetical and could only be estimated by having the person take the test multiple times and taking an average of the scores, i.e., out of 100 times ... Or, if the student took the test 100 times, 64 times the true score would fall between ±1 SEM. Finally, if a test is being used to select students for college admission or employees for jobs, the higher the reliability of the test, the stronger will be the relationship to ... Power is covered in detail here. Sixty-eight percent of the time the true score would be between plus one SEM and minus one SEM. Finally, assume the test is scored such that a student receives one point for a correct answer and loses a point for an incorrect answer.

To take an example, suppose one wished to establish the construct validity of a new test of spatial ability. The standard deviation of a person's test scores would indicate how much the test scores vary from the true score. Obviously, adding poor items would not increase the reliability as expected and might even decrease the reliability. Items that are either too easy, so that almost everyone gets them correct, or too difficult, so that almost no one gets them correct, are not good items: they provide very ... Prerequisites (Lane): values of Pearson's correlation, variance sum law, measures of variability. Objectives: define reliability; describe reliability in terms of true scores and error; compute reliability from the true score and error. Predictive validity (sometimes called empirical validity) refers to a test's ability to predict the relevant behavior. This gives an estimate of the amount of error in the test from statistics that are readily available from any test. If you could add all of the error scores and divide by the number of students, you would have the average amount of error in the test. For example, if a test has a reliability of 0.81, then it could correlate as high as 0.90 with another measure. As the r gets smaller, the SEM gets larger. ... that the test is measuring what is intended, and that you would get approximately the same score if you took a different version. (Most standardized tests have high reliability coefficients, between 0.9 and ....) That is, does the test "on its face" appear to measure what it is supposed to be measuring?
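The SEM arithmetic scattered through the page above can be collected into a short sketch. The numbers here (observed SD of 10, reliability 0.96, observed score 25) are illustrative values I chose so that the ±3 SEM band reproduces the 19-to-31 interval quoted earlier; they are not data from the article.

```python
from math import sqrt

def sem(observed_sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return observed_sd * sqrt(1.0 - reliability)

def true_score_band(observed, observed_sd, reliability, n_sems):
    """Observed score +/- n_sems standard errors of measurement."""
    half = n_sems * sem(observed_sd, reliability)
    return observed - half, observed + half

s = sem(10.0, 0.96)                          # SEM = 10 * sqrt(0.04) = 2.0
lo, hi = true_score_band(25, 10.0, 0.96, 3)  # ~99% band: (19.0, 31.0)
```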
2018-01-20 06:45:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44345518946647644, "perplexity": 1575.4156706892636}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889473.61/warc/CC-MAIN-20180120063253-20180120083253-00691.warc.gz"}
https://study.com/academy/answer/a-block-of-mass-400-gm-is-hung-vertically-with-the-help-of-a-massless-spring-initially-block-is-in-equilibrium-position-spring-constant-of-the-spring-is-k-200-n-m-natural-length-of-the-spring-is-40-cm-now-the-block-is-stretched-to-some-distanc.html
# A block of mass 400 g is hung vertically with the help of a massless spring. Initially the block...

## Question:

A block of mass {eq}400 \ g {/eq} is hung vertically with the help of a massless spring. Initially the block is in its equilibrium position. The spring constant of the spring is {eq}k = 200 \ N/m {/eq}. The natural length of the spring is {eq}40 \ cm {/eq}. Now the block is stretched to some distance such that the total length of the spring becomes {eq}45 \ cm {/eq} and then released. Determine the amplitude of the simple harmonic motion of the block.

## Spring:

A spring is a mechanical device used to store energy. When we apply a load to a spring, the spring deforms and stores energy; releasing that energy, the spring returns to its natural length and can do work.

Given data

• The mass of the block is: {eq}m = 400\;{\rm{g}} = 0.4\;{\rm{kg}} {/eq}
• The spring constant is: {eq}k = 200\;{{\rm{N}} {\left/ {\vphantom {{\rm{N}} {\rm{m}}}} \right. } {\rm{m}}} {/eq}
• The initial spring length is: {eq}{x_1} = 40\;{\rm{cm}} = 0.4\;{\rm{m}} {/eq}
• The final spring length is: {eq}{x_2} = 45\;{\rm{cm}} = 0.45\;{\rm{m}} {/eq}

The total stretch in the spring is as follows:

{eq}\begin{align*} x &= {x_2} - {x_1}\\ x &= 0.45\;{\rm{m}} - 0.4\;{\rm{m}}\\ x &= 0.05\;{\rm{m}} \end{align*} {/eq}

The natural frequency of the spring-mass system is as follows:

{eq}\omega = \sqrt {\dfrac{k}{m}} {/eq}

Substitute all the values in the above equation.

{eq}\begin{align*} \omega &= \sqrt {\dfrac{{200\;{{\rm{N}} {\left/ {\vphantom {{\rm{N}} {\rm{m}}}} \right. } {\rm{m}}}}}{{0.4\;{\rm{kg}}}}} \\ \omega &= 22.36\;{{{\rm{rad}}} {\left/ {\vphantom {{{\rm{rad}}} {\rm{s}}}} \right. } {\rm{s}}} \end{align*} {/eq}

The velocity of the block, from the energy stored in the spring, is as follows:

{eq}\dfrac{1}{2}m{v^2} = \dfrac{1}{2}k{x^2} {/eq}

Substitute all the values in the above equation.
{eq}\begin{align*} \dfrac{1}{2} \times 0.4\;{\rm{kg}} \times {v^2} &= \dfrac{1}{2} \times 200\;{{\rm{N}} {\left/ {\vphantom {{\rm{N}} {\rm{m}}}} \right. } {\rm{m}}} \times {\left( {0.05\;{\rm{m}}} \right)^2}\\ v &= 1.118\;{{\rm{m}} {\left/ {\vphantom {{\rm{m}} {\rm{s}}}} \right. } {\rm{s}}} \end{align*} {/eq}

The amplitude of the simple harmonic motion satisfies:

{eq}{v^2} = {\omega ^2}\left( {{A^2} - {x^2}} \right) {/eq}

Substitute all the values in the above equation.

{eq}\begin{align*} {\left( {1.118\;{{\rm{m}} {\left/ {\vphantom {{\rm{m}} {\rm{s}}}} \right. } {\rm{s}}}} \right)^2} &= {\left( {22.36\;{{{\rm{rad}}} {\left/ {\vphantom {{{\rm{rad}}} {\rm{s}}}} \right. } {\rm{s}}}} \right)^2}\left[ {{A^2} - {{\left( {0.05\;{\rm{m}}} \right)}^2}} \right]\\ A &= 0.0707\;{\rm{m}}\\ A &= 7.07\;{\rm{cm}} \end{align*} {/eq}

Thus, the amplitude of the simple harmonic motion of the block is {eq}7.07\;{\rm{cm}} {/eq}.
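The arithmetic above can be replayed step by step in a short script. Note that it mirrors the posted solution's own steps — the 5 cm stretch is measured from the 40 cm natural length, the speed comes from equating spring energy to kinetic energy, and the amplitude from $v^2=\omega^2(A^2-x^2)$ — rather than re-deriving the physics independently.

```python
from math import sqrt

m, k = 0.4, 200.0            # block mass (kg), spring constant (N/m)
x = 0.45 - 0.40              # stretch used by the solution (m)

omega = sqrt(k / m)                 # natural frequency, ~22.36 rad/s
v = omega * x                       # from (1/2) m v^2 = (1/2) k x^2, ~1.118 m/s
A = sqrt(v**2 / omega**2 + x**2)    # from v^2 = omega^2 (A^2 - x^2)

assert abs(A - 0.0707) < 1e-3       # amplitude ~7.07 cm, matching the answer
```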
2021-05-06 03:50:03
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999643564224243, "perplexity": 6787.646702182133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988725.79/warc/CC-MAIN-20210506023918-20210506053918-00256.warc.gz"}
https://tex.stackexchange.com/questions/336858/cmidrules-trim-option
# \cmidrule's trim option

The \cmidrule command in the booktabs package has a trim option to specify how much to trim from the left or right. However, in the following example,

\cmidrule(l{2pt}r{2pt}){1-2}

the line appears shifted, not shortened. I may be confused about this trim option, but I expected 2 points to be removed (trimmed) from the left and the right. The next line,

\cmidrule(l{2pt}r{2pt}){3-3}

is even more confusing: it shows the line shifted to the left. How should the trim option to \cmidrule be interpreted?

\documentclass[12pt]{article}
\usepackage{booktabs}
\begin{document}
\begin{tabular}{@{}llr@{}}
\toprule
\multicolumn{2}{c}{Item} &\multicolumn{1}{c}{Price/lb} \\
\cmidrule(r){1-2}\cmidrule(l){3-3}
a & b & c \\
\cmidrule(l{2pt}r{2pt}){1-2}\cmidrule(l{2pt}r{2pt}){3-3}
\morecmidrules
\cmidrule(l{2pt}r{2pt}){2-3}
Food& Category & \multicolumn{1}{c}{\$}\\
\midrule
Apples & Fruit & 1.50 \\
Oranges & Fruit & 2.00 \\
Beef & Meat & 4.50 \\
\specialrule{.5pt}{3pt}{3pt}
x & y & z \\
\bottomrule
\end{tabular}
\end{document}

• I don't know whether my answer answers your question. I think you're confused by the look of the \cmidrules since you're not using a \tabcolsep on the outer edges of the first/last column. That is, you're using @{} to suppress it. – Werner Oct 31 '16 at 16:44

Your interpretation of the trim option is correct. What seems confusing here is that you removed the column space on the outer edges of your tabular. When considering the tabular version without the end \tabcolsep removed,

\begin{tabular}{ l l r }

the adjustments for \cmidrule using the left and right trim options seem more in line with what one would expect. The default trim, if not specified explicitly, is \cmidrulekern, which is set to .5em. This equates to 5pt under the 10pt (default) document class option, 5.475pt under 11pt and 5.87494pt under 12pt. Perhaps, instead of specifying trims in absolute values, use a font-related width like em.
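Building on that suggestion, a trim specified in em stays proportional to the font size under any class option. This is my illustration, not from the post; the 0.5em simply reproduces booktabs' default \cmidrulekern.

```latex
% Font-relative trims: 0.5em equals the default \cmidrulekern,
% so these rules keep the same inset under 10pt, 11pt or 12pt.
\cmidrule(l{0.5em}r{0.5em}){1-2}\cmidrule(l{0.5em}r{0.5em}){3-3}
```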
Without any parameters, the default value for the trim seems to be larger than 2pt, which causes the confusion. When I changed the code to remove all trimming, as follows, I got the expected result:

```latex
\cmidrule(l{0pt}r{0pt}){1-2}\cmidrule(l{0pt}r{0pt}){3-3}
```

• Yes, the default trim value is \cmidrulekern which is set to .5em. You can just use \cmidrule{1-3} instead of specifying a 0pt trim on all sides. – Werner Oct 31 '16 at 17:16

• If this is "the answer to your question", I don't understand how it relates to your question. – Werner Oct 31 '16 at 18:01
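To see the trim behaviour in isolation, a minimal comparison of the default trim against an explicit font-relative one can help (the table content below is made up for illustration; only booktabs is assumed):

```latex
% The first \cmidrule uses the default trim on each trimmed side
% (\cmidrulekern = .5em), the second an explicit, font-relative .25em.
\documentclass[12pt]{article}
\usepackage{booktabs}
\begin{document}
\begin{tabular}{lll}
\toprule
a & b & c \\
\cmidrule(lr){1-2}               % default trim: .5em per side
x & y & z \\
\cmidrule(l{.25em}r{.25em}){1-2} % explicit font-relative trim
p & q & r \\
\bottomrule
\end{tabular}
\end{document}
```

Because the trim is specified in em, the rule shortens proportionally when the font size option changes.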
2019-07-21 19:07:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7872762084007263, "perplexity": 1706.4438826714754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527196.68/warc/CC-MAIN-20190721185027-20190721211027-00325.warc.gz"}
https://www.thejournal.club/c/paper/186015/
In this paper, we address the issue of automatic tracking area (TA) planning in fifth generation (5G) ultra-dense networks (UDNs). By invoking handover (HO) attempts and measurement reports (MRs) statistics of a 4G live network, we first introduce a new kernel function mapping HO attempts, MRs and inter-site distances (ISDs) into the so-called similarity weight. The corresponding matrix is then fed to a self-tuning spectral clustering (STSC) algorithm to automatically define the number and borders of the TAs. After evaluating its performance in terms of the $Q$-metric as well as the silhouette score for various kernel parameters, we show that the clustering scheme yields a significant reduction of tracking area updates and average paging requests per TA, thereby optimizing network resources.
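The abstract does not spell out the kernel, so the mapping below is purely a hypothetical placeholder (the function name, weights and `sigma` are invented); it only illustrates how HO attempts, MRs and ISD could be folded into a symmetric similarity matrix that a spectral-clustering step would then consume:

```python
import math

def similarity(ho, mr, isd, sigma=5.0, w_ho=0.5, w_mr=0.5):
    """Hypothetical kernel: a Gaussian decay in inter-site distance (km),
    scaled by a normalized mix of handover attempts and measurement
    reports (both assumed pre-normalized to [0, 1])."""
    traffic = w_ho * ho + w_mr * mr
    return traffic * math.exp(-(isd ** 2) / (2 * sigma ** 2))

# Symmetric similarity matrix for 3 sites (made-up numbers); such a
# matrix is what a self-tuning spectral clustering routine would take.
pairs = {(0, 1): (0.9, 0.8, 1.0), (0, 2): (0.2, 0.1, 8.0), (1, 2): (0.3, 0.2, 6.0)}
n = 3
S = [[0.0] * n for _ in range(n)]
for (i, j), (ho, mr, isd) in pairs.items():
    S[i][j] = S[j][i] = similarity(ho, mr, isd)
```

Nearby, high-traffic site pairs end up with a large weight, distant low-traffic pairs with a small one, which is the property the clustering relies on.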
2021-05-12 15:05:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27097025513648987, "perplexity": 1782.555182252661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990929.24/warc/CC-MAIN-20210512131604-20210512161604-00099.warc.gz"}
https://ec.gateoverflow.in/703/gate-ece-2015-set-2-question-31
In the circuit shown, the Norton equivalent resistance $(\text{in}\: \Omega)$ across terminals $a-b$ is _______.
2021-12-02 00:18:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5424519777297974, "perplexity": 6568.531118355095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00062.warc.gz"}
https://pure.mpg.de/pubman/faces/ViewItemFullPage.jsp?itemId=item_2191938
# Item

The Missing Link: Bayesian Detection and Measurement of Intermediate-Mass Black-Hole Binaries

Graff, P. B., Buonanno, A., & Sathyaprakash, B. S. (2015). The Missing Link: Bayesian Detection and Measurement of Intermediate-Mass Black-Hole Binaries. Physical Review D, 92(2): 022002. doi:10.1103/PhysRevD.92.022002.

### Basic

Genre: Journal Article

### Files

1504.04766.pdf (Preprint), 4MB — Visibility: Public, MIME-Type: application/pdf
PhysRevD.92_022002.pdf (Any fulltext), 5MB — Visibility: Public, MIME-Type: application/pdf

### Creators

Graff, Philip B., Author
Buonanno, Alessandra, Author (Astrophysical and Cosmological Relativity, AEI-Golm, MPI for Gravitational Physics, Max Planck Society, ou_1933290)
Sathyaprakash, B. S., Author

### Content

Free keywords: General Relativity and Quantum Cosmology, gr-qc

Abstract: We perform Bayesian analysis of gravitational-wave signals from non-spinning, intermediate-mass black-hole binaries (IMBHBs) with observed total mass, $M_{\mathrm{obs}}$, from $50\mathrm{M}_{\odot}$ to $500\mathrm{M}_{\odot}$ and mass ratio $1\mbox{--}4$ using advanced LIGO and Virgo detectors. We employ inspiral-merger-ringdown waveform models based on the effective-one-body formalism and include subleading modes of radiation beyond the leading $(2,2)$ mode. The presence of subleading modes increases signal power for inclined binaries and allows for improved accuracy and precision in measurements of the masses as well as breaking of extrinsic parameter degeneracies. For low total masses, $M_{\mathrm{obs}} \lesssim 50 \mathrm{M}_{\odot}$, the observed chirp mass $\mathcal{M}_{\rm obs} = M_{\mathrm{obs}}\,\eta^{3/5}$ ($\eta$ being the symmetric mass ratio) is better measured.
In contrast, as increasing power comes from merger and ringdown, we find that the total mass $M_{\mathrm{obs}}$ has better relative precision than $\mathcal{M}_{\rm obs}$. Indeed, at high $M_{\mathrm{obs}}$ ($\geq 300 \mathrm{M}_{\odot}$), the signal resembles a burst and the measurement thus extracts the dominant frequency of the signal that depends on $M_{\mathrm{obs}}$. Depending on the binary's inclination, at signal-to-noise ratio (SNR) of $12$, uncertainties in $M_{\mathrm{obs}}$ can be as large as $\sim 20 \mbox{--}25\%$ while uncertainties in $\mathcal{M}_{\rm obs}$ are $\sim 50 \mbox{--}60\%$ in binaries with unequal masses (those numbers become $\sim 17\%$ versus $\sim 22\%$ in more symmetric binaries). Although large, those uncertainties will establish the existence of IMBHs. Our results show that gravitational-wave observations can offer a unique tool to observe and understand the formation, evolution and demographics of IMBHs, which are difficult to observe in the electromagnetic window. (abridged)

### Details

Dates: 2015-04-18 / 2015-08-31 / 2015
Publication Status: Published in print
Pages: 17 pages, 9 figures, 2 tables; updated to reflect published version
Identifiers: arXiv: 1504.04766; DOI: 10.1103/PhysRevD.92.022002; URI: http://arxiv.org/abs/1504.04766

### Source 1

Title: Physical Review D (Phys. Rev. D.)
Source Genre: Journal
Publ. Info: Lancaster, Pa.: American Physical Society
Volume / Issue: 92 (2)
Sequence Number: 022002
Identifier: ISSN: 0556-2821
CoNE: https://pure.mpg.de/cone/journals/resource/111088197762258
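The abstract's relation between chirp mass, total mass and symmetric mass ratio, $\mathcal{M} = M\,\eta^{3/5}$, is easy to sanity-check numerically (the component masses below are arbitrary illustration values in solar masses):

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = M * eta**(3/5), with M = m1 + m2 the total mass
    and eta = m1*m2 / (m1 + m2)**2 the symmetric mass ratio."""
    M = m1 + m2
    eta = m1 * m2 / M ** 2
    return M * eta ** 0.6

# Equal masses give the maximal eta = 0.25, so M_c = M * 0.25**0.6
print(chirp_mass(10.0, 10.0))  # ≈ 8.706
```

For a fixed total mass, making the binary less symmetric lowers $\eta$ and hence the chirp mass, which is why $\eta$ enters the abstract's comparison of measurement precision.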
2020-02-20 19:15:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7230264544487, "perplexity": 7742.656925232173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145260.40/warc/CC-MAIN-20200220162309-20200220192309-00071.warc.gz"}
http://physics.stackexchange.com/tags/hydrogen/new
# Tag Info

For a spherically symmetric state, $\langle z\rangle=\int_0^\infty |R(r)|^2\, r^3\,dr\left[\int_0^{2\pi}d\phi\int_0^\pi \sin \theta \cos \theta \,d\theta\right]=0$, as the $\theta$ integration is zero. But the symmetry argument is clearer if the integration is written in Cartesian coordinates. In that case $$\langle z\rangle=\int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty z\, |\psi|^2\, dx\, dy\, dz$$ As you ...
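The claim that the $\theta$ integration vanishes can be checked numerically: $\int_0^\pi \sin\theta\cos\theta\,d\theta = \left[\tfrac{1}{2}\sin^2\theta\right]_0^\pi = 0$. A quick composite-trapezoid check (illustrative only, stdlib-only):

```python
import math

def trapezoid(f, a, b, n=100_000):
    """Composite trapezoidal rule for f on [a, b] with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Angular factor of <z>: integral of sin(theta)*cos(theta) over [0, pi].
angular = trapezoid(lambda t: math.sin(t) * math.cos(t), 0.0, math.pi)
print(angular)  # ~ 0 (up to floating-point error)
```

The contributions from $[0, \pi/2]$ and $[\pi/2, \pi]$ cancel pairwise, which is the same symmetry the Cartesian form makes obvious.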
2016-02-14 21:37:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984836220741272, "perplexity": 644.2698748012176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454702032759.79/warc/CC-MAIN-20160205195352-00333-ip-10-236-182-209.ec2.internal.warc.gz"}
https://techutils.in/blog/tag/p-value/
## #StackBounty: #modeling #p-value #aic Does automatic model selection via AIC bias the p-values of the selected model? ### Bounty: 50

Let's say I run a procedure where I fit every possible model given some set of covariates and I select the model with the minimum AIC. I know that if my selection criterion were based on minimizing p-values, the p-values of the selected model would be misleading. But what if my selection criterion was AIC alone? To what extent would this bias the p-values? I had assumed the effect on p-values would be negligible, but came across this paper, which proves the following:

P values are intimately linked to confidence intervals and to differences in Akaike's information criterion (ΔAIC), two metrics that have been advocated as replacements for the P value.

If this is true, does it imply that p-values are misleading after automatic selection based on AIC? To what extent will they be biased, and what determines this? Get this bounty!!!

## #StackBounty: #distributions #p-value #inference #data-transformation #controlling-for-a-variable Generate null distribution from pvalues ### Bounty: 50

I have a set of experiments on which I apply Fisher's exact method to statistically infer changes in cellular populations. Some of the data are dummy experiments that model our control experiments, which describe the null model. However, due to some experimental variation, most of the control experiments reject the null hypothesis at $p_{\text{val}} < 0.05$. Some of the null hypotheses of the actual experimental conditions are also rejected at $p_{\text{val}} < 0.05$. However, these p-values are orders of magnitude lower than those of my control conditions. This indicates a stronger effect of these experimental conditions. However, I am not aware of a proper method to quantify these changes and statistically infer them.
An example of what the data looks like:

```
ID      Pval           Condition
B0_W1   2.890032e-16   CTRL
B0_W10  7.969311e-38   CTRL
B0_W11  8.078795e-25   CTRL
B0_W12  2.430554e-80   TEST1
B0_W2   3.149525e-30   TEST2
B1_W1   3.767914e-287  TEST3
B1_W10  3.489684e-56   TEST4
B1_W10  3.489684e-56   TEST5
```

1. Select the ctrl conditions and let $X = -\ln(p_{\text{val}})$, which makes the transformed data follow an exponential distribution.
2. Use MLE to find the $\lambda$ parameter of the exponential distribution. This will be my null distribution.
3. Apply the same transformation to the rest of the $p_{\text{val}}$ that correspond to the test conditions.
4. Use the cdf of the null distribution to get the new "adjusted p-values". This essentially gives a new $\alpha$ threshold for the original p-values and transforms the results accordingly using the null distribution's cdf.

Are these steps correct? Is using MLE to find the rate correct, or does it violate some of the assumptions needed to achieve my end goal? Any other approaches I could try? Get this bounty!!!

## #StackBounty: #confidence-interval #p-value #model #model-comparison p value for difference in model outcomes ### Bounty: 50

I've run two different linear mixed effects models on the same data and got two different estimates for the gradient of the longitudinal variable, e.g. model 1 has estimate 30 with standard error 5; model 2 has estimate 40 with standard error 4. I'm interested in calculating a p value for the probability that the models are different, from the estimate and standard error. How do I do this? I'm aware that checking for overlap in the 95% confidence intervals is a bad idea, and that overlapping 83% CIs are a better test, but I would like to be able to quantify this with a p value. Get this bounty!!!

## Context

This is somewhat similar to this question, but I do not think it is an exact duplicate.
When you look for instructions on how to perform a bootstrap hypothesis test, it is usually stated that it is fine to use the empirical distribution for confidence intervals, but that you need to correctly bootstrap from the distribution under the null hypothesis to get a p-value. As an example, see the accepted answer to this question. A general search on the internet mostly seems to turn up similar answers. The reason for not using a p-value based on the empirical distribution is that most of the time we do not have translation invariance.

## Example

Let me give a short example. We have a coin and we want to do a one-sided test to see if the frequency of heads is larger than 0.5. We perform $n = 20$ trials and get $k = 14$ heads. The true p-value for this test would be $p = 0.058$. On the other hand, if we bootstrap our 14 out of 20 heads, we effectively sample from the binomial distribution with $n = 20$ and $p = \frac{14}{20} = 0.7$. Shifting this distribution by subtracting 0.2, we will get a barely significant result when testing our observed value of 0.7 against the obtained empirical distribution. In this case the discrepancy is very small, but it gets larger when the success rate we test against gets close to 1.

## Question

Now let me come to the real point of my question: the very same defect also holds for confidence intervals. In fact, if a confidence interval has the stated confidence level $\alpha$, then the confidence interval not containing the parameter under the null hypothesis is equivalent to rejecting the null hypothesis at a significance level of $1-\alpha$. Why is it that the confidence intervals based upon the empirical distribution are widely accepted and the p-value not? Is there a deeper reason, or are people just not as conservative with confidence intervals? In this answer Peter Dalgaard gives an answer that seems to agree with my argument. He says it is "at least not (much) worse than the calculation of CI."
Where is the (much) coming from? It implies that generating p-values that way is slightly worse, but does not elaborate on the point.

## Final thoughts

Also, in An Introduction to the Bootstrap, Efron and Tibshirani dedicate a lot of space to confidence intervals but not to p-values unless they are generated under a proper null hypothesis distribution, with the exception of one throwaway line about the general equivalence of confidence intervals and p-values in the chapter about permutation testing. Let us also come back to the first question I linked. I agree with the answer by Michael Chernick, but again he also argues that both confidence intervals and p-values based on the empirical bootstrap distribution are equally unreliable in some scenarios. It does not explain why you find many people telling you that the intervals are ok, but the p-values are not. Get this bounty!!!

## #StackBounty: #hypothesis-testing #statistical-significance #anova #p-value Determine which performance intervention is best? ### Bounty: 100

Suppose I have numerical data describing the total process time for a given software simulation. This data is broken up into 5 groups (Base, AD1, AD2, AD3, AD4), each detailing a different performance intervention, with approximately the same number of observations per group. My goal is to determine whether the performance interventions result in significantly different alive times than the base case and to determine which intervention is "best" — "best" being defined as the least amount of process time. To clarify, my data is comprised of all the "regression-tests" in our code framework. So at this point I am looking at a high level at what the interventions do to overall process time, but eventually I will create sub-categories within each intervention to determine inter-group effects on process time.
My data has some extreme outliers, as can be seen from this graphic:

My hypothesis is as follows:

$$H_{0}: \mu_{\text{base}} = \mu_{\text{AD1}} = \mu_{\text{AD2}} = \mu_{\text{AD3}} = \mu_{\text{AD4}}$$
$$H_{A}: \text{Not all means equal}$$

I am unsure what my hypothesis would be in determining the best "metric". I am also unsure if using the mean is appropriate in this circumstance given the outliers in my data. My idea is to use some form of ANOVA or Kruskal–Wallis test and then maybe a Tukey test to determine which one is best? I am open to Bayesian or Frequentist approaches to this. I might be overthinking this as well. Get this bounty!!!

## #StackBounty: #machine-learning #statistical-significance #t-test #p-value How to decide if means of two sets are statistically signifi… ### Bounty: 50

I have a data set consisting of some number of pairs of real numbers. For example:

```
(1.2, 3.4), (3.2, 2.7), ..., (4.2, 1.0)
```

or

```
(x1, y1), (x2, y2), ..., (xn, yn)
```

I want to know if the second variable depends on the first one (it is known in advance that if there is a dependency, it is very weak, so it is hard to detect). I split the data set into two parts using the first number (Xs). Then I use the mean of Ys for the first and the second sub-sets as "predictions". I find the split for which the squared deviation between the predictions and the real values of Ys is minimal. Basically I do what is done by decision trees. Now I want to know if the found split and the corresponding difference between the two means is significant. I could use some standard test to check if the means of two sets are statistically significantly different, but I think it would be incorrect because we chose the split that maximises this difference. What would be the way to account for that? Get this bounty!!!
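One standard way to account for the split having been chosen to maximize the difference (a sketch of one option, not necessarily the best one) is a permutation test that re-runs the whole split search on shuffled responses, so the selection step is part of the null distribution:

```python
import random

def best_split_gap(xs, ys):
    """Search every threshold on x and return the largest absolute
    difference between the mean of y on the two sides of the split."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best = 0.0
    for cut in range(1, len(xs)):
        left = [ys[order[i]] for i in range(cut)]
        right = [ys[order[i]] for i in range(cut, len(xs))]
        gap = abs(sum(left) / len(left) - sum(right) / len(right))
        best = max(best, gap)
    return best

def permutation_pvalue(xs, ys, n_perm=500, seed=0):
    """p-value for the observed best-split gap; the split search is
    repeated inside every permutation, accounting for the selection."""
    rng = random.Random(seed)
    observed = best_split_gap(xs, ys)
    hits = sum(
        best_split_gap(xs, [ys[k] for k in rng.sample(range(len(ys)), len(ys))])
        >= observed
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0

# Made-up independent data: the p-value should typically not be small.
rng = random.Random(1)
xs = [rng.random() for _ in range(40)]
ys = [rng.random() for _ in range(40)]
p = permutation_pvalue(xs, ys)
```

A naive two-sample t-test on the chosen split would ignore that the split was optimized; the permutation scheme bakes that optimization into the reference distribution.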
## #StackBounty: #distributions #p-value #goodness-of-fit #kolmogorov-smirnov Goodness-of-fit test on arbitrary parametric distributions w… ### Bounty: 100

There have been many questions regarding this topic already addressed on CV. However, I was still unsure if this question was addressed directly.

1. Is it possible, for any arbitrary parametric distribution, to properly calculate the p-value for a Kolmogorov-Smirnov test where the parameters of the null distribution are estimated from the data?
2. Or does the choice of parametric distribution determine if this can be achieved?
3. What about the Anderson-Darling, Cramer-von Mises tests?
4. What is the general procedure for estimating the correct p-values?

My general understanding of the procedure would be the following. Assume we have data $X$ and a parametric distribution $F(x;\theta)$. Then I would:

• Estimate parameters $\hat{\theta}_{0}$ for $F(x;\theta)$.
• Calculate Kolmogorov-Smirnov, Anderson-Darling, Cramer-von Mises test statistics: KS$_{0}$, AD$_{0}$ and CVM$_{0}$.
• For $i=1,2,\ldots,n$:
  1. Simulate data $y$ from $F(\,\cdot\,;\hat{\theta}_{0})$
  2. Estimate $\hat{\theta}_{i}$ for $F(y;\theta)$
  3. Calculate KS$_{i}$, AD$_{i}$ and CVM$_{i}$ statistics for $F(y;\hat{\theta}_{i})$
• Calculate $p$-values as the proportion of these statistics that are more extreme than KS$_{0}$, AD$_{0}$ and CVM$_{0}$, respectively.

Is this correct? Get this bounty!!!

## #StackBounty: #p-value #intuition #application #communication #climate Evidence for man-made global warming hits 'gold standard'…

### How should we interpret the $5\sigma$ threshold in this research on climate change?

This message in a Reuters article from 25 February is currently all over the news:

They said confidence that human activities were raising the heat at the Earth's surface had reached a "five-sigma" level, a statistical gauge meaning there is only a one-in-a-million chance that the signal would appear if there was no warming.

I believe that this refers to the article "Celebrating the anniversary of three key events in climate change science", which contains a plot, shown schematically below (it is a sketch because I could not find an open source image for the original; similar free images are found here).
I believe that this refers to this article “Celebrating the anniversary of three key events in climate change science” which contains a plot, which is shown schematically below (It is a sketch because I could not find an open source image for an original, similar free images are found here). Another article from the same research group, which seems to be a more original source, is here (but it uses a 1% significance instead of $$5sigma$$). The plot presents measurements from three different research groups: 1 Remote Sensing Systems, 2 the Center for Satellite Applications and Research, and the 3 University of Alabama at Huntsville. The plot displays three rising curves of signal to noise ratio as a function of trend length. So somehow scientists have measured an anthropogenic signal of global warming (or climate change?) at a $$5sigma$$ level, which is apparently some scientific standard of evidence. For me such graph, which has a high level of abstraction, raises many questions$$^{dagger}$$, and in general I wonder about the question ‘How did they do this?’. How do we explain this experiment into simple words (but not so abstract) and also explain the meaning of the $$5sigma$$ level? I ask this question here because I do not want a discussion about climate. Instead I want answers regarding the statistical content and especially to clarify the meaning of such a statement that is using/claiming $$5 sigma$$. $$^dagger$$:What is the null hypothesis? How did they set up the experiment to get a anthropogenic signal? What is the effect size of the signal? Is it just a small signal and we only measure this now because the noise is decreasing, or is the signal increasing? What kind of assumptions are made to create the statistical model by which they determine the crossing of a 5 sigma threshold (independence, random effects, etc…)? 
Why are the three curves for the different research groups different, do they have different noise or do they have different signals, and in the case of the latter, what does that mean regarding the interpretation of probability and external validity? Get this bounty!!!
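For reference, the $5\sigma$ level quoted in this post corresponds to a one-sided standard-normal tail probability of roughly $2.9\times 10^{-7}$ — about one in 3.5 million, the same order of magnitude as the article's "one-in-a-million" phrasing:

```python
from statistics import NormalDist

# One-sided tail probability beyond 5 standard deviations of a
# standard normal: P(Z > 5) = 1 - Phi(5).
p_5sigma = 1.0 - NormalDist().cdf(5.0)
print(p_5sigma)  # ≈ 2.87e-07
```

This conversion is purely about the normal distribution; whether the modeling assumptions behind the climate signal justify a normal reference is exactly what the question asks.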
2021-01-22 10:14:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 36, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5897179245948792, "perplexity": 855.5257337990548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529179.46/warc/CC-MAIN-20210122082356-20210122112356-00758.warc.gz"}
https://gallery49.com/lotus-pedunculatus-njtzk/adding-and-subtracting-radicals-worksheet-edfa0e
adding and subtracting radicals worksheet – activepatience.com #281944 Algebra 1 Worksheets | Radical Expressions Worksheets #281945 Adding and Subtracting Radical Expressions #281946 Example 1: Adding and Subtracting Square-Root Expressions To add and , one adds the numbers on the outside only to get . The Rules for Adding and Subtracting Radicals. Adding and Subtracting 2, 3, or 4 Digit Problems Worksheets. (a) We do not have like radicals, but we can simplify . Displaying top 8 worksheets found for - Adding And Subtracting Radicals. 9th Grade Adding And Subtracting Radicals - Displaying top 8 worksheets found for this concept. Now what happens if we have unlike radicals? Sometimes simplification isn't as apparent. Simplifying radicals is fun with this chain. Students will cut the problems apart along the lines and work the labeled cards. They incorporate both like and unlike radicands. Title: Level: Rows: Columns: Show Answers: Font: Font Size: Radical Expressions. You can combine like radicals by adding or subtracting the numbers multiplied by the radical and keeping the radical the same. In order to receive the next Question Set of problems, you must come up and have the set checked by your teacher. Below you can download some free math worksheets and practice. Step 2: To add or subtract radicals, the indices and what is inside the radical (called the radicand) must be exactly the same. Adding And Subtracting Radicals Worksheet via abc.colordsgn.co. This involves adding or subtracting only the coefficients; the radical part remains the same. These Exponents and Radicals Worksheets are perfect for teachers, homeschoolers, moms, dads, and children looking for some practice in Exponents and Radicals problems.
But is can get difficult sometimes unless you know what you’re doing and how to make it easier. ��) For the subtraction problems you may select some regrouping, no regrouping, all regrouping, or subtraction across zero. Simplify radicals. Combine like radicals. $$3\sqrt{2} + 2\sqrt{2} = (3 + 2)\sqrt{2} = 5\sqrt{2}$$. Grab these worksheets to help you ease into writing radicals in its simplest form. "practise papers"+ks2+free, college level printable worksheets, simplifying square roots exponents radicals worksheet, How to simplify expressions using distributive properties 5th grade. Putting that back into the problem above yields: $$-2\sqrt{2} + 3\sqrt{2} = -1\sqrt{2} = \sqrt{2}$$, $$3\sqrt{7} - 2\sqrt{28} + 4\sqrt{7}$$ (start by ensuring all radicals are simplified), $$3\sqrt{7} - 2\sqrt{4\cdot 7} + 4\sqrt{7}$$, $$3\sqrt{7} - 2\cdot 2\sqrt{7} + 4\sqrt{7}$$. Some of the worksheets for this concept are Adding and subtracting radical expressions date period, Adding subtracting multiplying radicals, Adding and subtracting radicals, Radicals, Addingsubtracting rational expressions, Addingsubtracting radical expressions, Radical workshop … $$\sqrt{18}$$ can be simplified (as seen in an earlier lesson): $$\sqrt{9\cdot 2} = \sqrt{9}\cdot \sqrt{2} = 3\sqrt{2}$$. 4√5 + 3√5 2. Adding and Subtracting Radical Expressions - Puzzle by Tiffany Shastal #331607. ? Well, nothing………sort of. Examples of like radicals are: $$(\sqrt{2}, 5\sqrt{2}, -4\sqrt{2})$$ or $$( \sqrt[3]{15}, 2\sqrt[3]{15}, -9\sqrt[3]{15})$$. 2. q��Z���s�&�u�:M�����w��9�#�.�R���< �%����K�O��w��Ȑ��]F3������Nt��(�7 To add or subtract radicals the must be like radicals. Like radicals have the same root and radicand. 52 0 obj <>stream M worksheet by kuta software llc kuta software infinite algebra 2 name adding subtracting multiplying radicals date period simplify. 
Some of the worksheets for this concept are Adding and subtracting radical expressions date period, Adding subtracting and multiplying radical expressions, Adding and subtracting radicals, Radicals, Adding subtracting multiplying radicals, Adding subtracting … Adding and Subtracting Radicals – Practice Problems Move your mouse over the "Answer" to reveal the answer or click on the "Complete Solution" link to reveal all of the steps required for adding and subtracting radicals. Otherwise, we just have to keep them unchanged. Break down the given radicals and simplify each term. Some of the worksheets displayed are Adding and subtracting radical expressions date period, Adding and subtracting radical expressions 1, Adding subtracting multiplying radicals, Adding and subtracting radicals, Intermediate algebra skill adding and subtracting radical, Adding and subtracting radicals of index … (a) (b) (c) 817 17 217 (8 1 2)17 917 715 215 (7 2)15 515 5 3 (5 3)12 8 Adding and Subtracting Radicals Simplify each expression. ����h��t��OZ(un��~}K�ڢK劼�3�UkU��ڶ�ݧG)@ޥ���.�T8���+���a�u��ܱ���:�\d����Ev��]e�棖��Y�Z7�{,�{�}����تXt1�B��9��%�)��~T�K���X Sq����OqN':}�z�~�-�a��&-��[9H�5�+W�k�U8|����ZQ���1@ �re�y�n��Irxӻ�. Utilizing the 22 Adding and Subtracting Rational Numbers Worksheet 2 in the worksheet template will also allow your youngster to build up their vocabulary. _J�0r?ѫgs�ؠ�:��R�tfm�18,��R�J��]T��z)��w�]�g��Vt>�\ƥ��E�5����|��R�� �F��5Z�� �!����Jv{�:��D��%J��|�2�3E�=�SU9,:K�] �2//��j(2��B�ܵU͒T9ĩ�ƅ����bS(���~��³U!Bhr��U���$�ч$ge��>K������͑5�笠�p!��G�i.��ۭٗ+�!�p�s��:yk�k��5h���ӡ�|�/l.~ h|H���29��ܪ�^�l�bk5���w�biU�)����.qj�j !���ⷸc}�����8���cϽ��xR��+��G��q:ReᲿ���AN ׳2N�� �]P�8;�E��.����q��We A�R�����M8ٯ��l2�ͪ��=������� �{['/u��c;���w��&�ۚ�?i Therefore, in every simplifying radical problem, check to see if the given radical itself, can be simplified. Adding And Subtracting Square Roots - Displaying top 8 worksheets found for this concept.. 
Radicals that are "like radicals" can be added or subtracted by adding or subtracting the coefficients. Think here that you have 2 square roots of 25 and 5 square roots of 9. 22 0 obj <> endobj Adding Subtracting Multiplying Radicals Worksheets: Adding Subtracting Multiplying Radicals Worksheets Generator. Remember that 18 14 2 212 18 312 18. Displaying top 8 worksheets found for - Notes Adding And Subtracting Radicals. Break down the given radicals and simplify each term. Really clear math lessons (pre-algebra, algebra, precalculus), cool math games, online graphing calculators, geometry art, fractals, polyhedra, parents and teachers areas too. endstream endobj startxref Add or subtract the like radicals by adding or subtracting their coefficients. Combining like radicals is similar to combining like terms. Adding and Subtracting Radical Expressions Date_____ Period____ Simplify. This mixed problems worksheet may be configured for adding and subtracting 2, 3, and 4 digit problems in a vertical format. Adding and Subtracting Radicals Worksheets. Now,$$\sqrt{25} = 5$$ and $$\sqrt{9} = 3$$ so, $$2\sqrt{25} + 5\sqrt{9} = 2\cdot 5 + 5\cdot 3 = 10 + 15 = 25$$. -3√75 - √27. Simplify. Factorize the radicands and express the radicals in the simplest form. For the purpose of this explanation I will put the understood 1 in front of the first term giving me: $$-8\sqrt{5} + 5\sqrt{5}$$ (like radical terms). 1) ... k D NAnlClV VrZi3g MhAtmsP LrVe4s Pe 4rqv ewdu.L S PMwaqd 9eQ Pw Qiptzh 6 XIln 9fsi BnTist Ke2 QAol EgMeLbnrpa 6 i1L. Adding and subtracting can be a very easy process especially if you think back to when you first learned to add: “If you have 2 apples and I give you 3 more, how many apples do you have?”. Step 2. Simplify each radical completely before combining like terms. The same rule applies for adding two radicals! Shore up your practice and add and subtract radical expressions with confidence, using this bunch of printable worksheets. Examples: 1. 
Add and subtract terms that contain like radicals just as you do like terms. Rule #1 - When adding or subtracting two radicals, you must simplify the radicands first. $$2\sqrt{3} + 2\sqrt{7}$$ and that’s the answer! It cannot be simplified any further. And if things get confusing, or if you just want to verify that you are combining them correctly, you can always use what you know … Answers to Adding and Subtracting Radicals of Index 2: With Variable Factors 1) −6 6 x 2) − 3ab 3) 5wz 4) − 2np 5) 4 5x 6) −4 6y 7) −2 6m 8) −12 3k Worksheet by Kuta Software LLC Algebra 1 Adding and Subtracting Radicals Name_____ ID: 1 Date_____ Period____ ©G s2I0z16o rKFuxtTaF rStoDfHtzwBaerBeq jLjLTCA.l Z gArlRlo irIixgwhtYsG VrkeysZewrWvoeqd]. So you’re doing a problem and you’ve simplified your radicals; however, they’re not all alike. WHAT DO YOU DO NOW?!?! Adding And Subtracing Radicals. Simplifying Radicals Worksheets. 1) −5 3 − 3 3 2) 2 8 − 8 3) −4 6 − 6 4) −3 5 + 2 5 The new medium of learning means that they can learn things in a brand new way. 39 0 obj <>/Filter/FlateDecode/ID[]/Index[22 31]/Info 21 0 R/Length 85/Prev 30583/Root 23 0 R/Size 53/Type/XRef/W[1 2 1]>>stream Simplify: $$\sqrt{16} + \sqrt{4}$$ (unlike radicals, so you can’t combine them…..yet). Identify the like radicals. 200+ Algebra Worksheets available here and free to be downloaded! The terms in this expression contain like radicals so can therefore be added. Now if you look at the radical part in the expression like the apple in the opening example about adding, you can get a better understanding of how to treat the radicals. In other words: 3 “apples” + 2 “apples” = (3+2) “apples” or 5 “apples” (Now switch “apples” to $$\sqrt{2}$$). In the mean time we talk concerning Adding Radicals Worksheet Algebra 2, we have collected particular related photos to complete your ideas. %%EOF If you don't know how to simplify radicals go to Simplifying Radical Expressions. RADICALS REVIEW GAME. 
They’ll also provide a lot of fun using these worksheets to help them figure out how to do some math. Example 1: Add or subtract to simplify radical expression:$ 2 \sqrt{12} + \sqrt{27}$Solution: Step 1: Simplify radicals m Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 2 Name_____ Adding, Subtracting, Multiplying Radicals Date_____ Period____ Simplify. ANSWER: Since the radicals are the same, add the values … 0 Well, the bottom line is that if you need to combine radicals by adding or subtracting, make sure they have the same radicand and root. Adding and subtracting radicals: For radicals having the same index and the same values under the radical (the radicands), add (or subtract) the values in front of the radicals and keep the radical. Showing top 8 worksheets in the category - Adding And Subtracting Radicals. If the index and radicand are exactly the same, then the radicals are similar and can be combined. Adding and Subtracting Radical Expressions Worksheets. Students will love completing this adding and subtracting radicals domino-like activity! Click here to review the steps for Simplifying Radicals. ANSWER: Since the radicals are the same, add the values … %PDF-1.5 %���� Notes Adding And Subtracting Radicals. The addition/subtraction of radical expressions can be done just like “regular” numbers; however, in some cases you may not be able to simplify all the way down to one number. In this problem, the radicals simplify completely: $$\sqrt{16} + \sqrt{4} = 4 … Adding and Subtracting Like Radicals Simplify each expression. D This adding and subtracting radicals ladder activity is a great alternative to a worksheet. \(3\sqrt{3} + 2\sqrt{7} - \sqrt{3}$$ (you notice that two of the three are alike, so combine the two like radical terms). Worksheets Algebra 1 Simplifying Algebraic Expressions Simplifying Radicals Radical Expressions Persuasive Writing Prompts Adding And Subtracting Printable. 
Gear up for an intense practice with this set of adding and subtracting radicals worksheets. h�bbdbZ$�� ��\$vDI؂X������ BHp��X{@�Àĭ&F�� �� �3�� Kids love to understand new things and find new methods to learn. The goal is to add or subtract variables as long as they “look” the same. Adding and Subtracting Radical Expressions You could probably still remember when your algebra teacher taught you how to combine like terms. The steps in adding and subtracting Radical are: Step 1. Adding and subtracting radicals worksheet. Worksheet: Adding Radicals by No-Frills Math Practice | TpT #331608 Don’t assume that just because you have unlike radicals that you won’t be able to simplify the expression.  Therefore, in every simplifying radical problem, check to see if the given radical itself, can be simplified. The questions in these pdfs contain radical expressions with two or three terms. 1. Don’t assume that just because you have unlike radicals that you won’t be able to simplify the expression. Adding And Subtracting Radicals Worksheet. Ti download numeric methods, TAKS Math Formula Chart, algerbra one practice workbook, adding subtracting and dividing radical expressions. W Z dM 0a DdYeb KwTi ytChs PILn1f9i Nnci Tt 3eu cA KlKgJe rb wrva2 O2e. Rule #2 - In order to add or subtract two radicals, they must have the same radicand. �e]հu�C Show Step-by-step Solutions In order to add or subtract radicals we must have like radicals that is the radicands and the index must be the same for each term. Working with your group members, solve each set of problems for Questions Sets numbered 1- 7. Adding and Subtracting Radicals Cruncher - Cool Math has free online cool math lessons, cool math games and fun math activities. Radicals that are like radicals can be added or subtracted by adding or subtracting the coefficients. If an answer is incorrect, you will have to go back to your group and work with your group members to fix your mistake. COMPARE: Helpful Hint . 
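The Step 1 / Step 2 procedure described above (simplify each radical, then combine like radicands) can be sketched in Python. This is my own illustration, not code from any of the listed worksheets; square-root terms are represented as hypothetical (coefficient, radicand) pairs.

```python
import math
from collections import defaultdict

def simplify_radical(coeff, radicand):
    # Step 1: pull out the largest perfect-square factor,
    # e.g. sqrt(18) = sqrt(9*2) = 3*sqrt(2).
    k = math.isqrt(radicand)
    while k > 1:
        if radicand % (k * k) == 0:
            return coeff * k, radicand // (k * k)
        k -= 1
    return coeff, radicand

def combine(terms):
    # Step 2: add the coefficients of like radicals (same radicand);
    # unlike radicals are kept unchanged.
    totals = defaultdict(int)
    for coeff, radicand in terms:
        c, r = simplify_radical(coeff, radicand)
        totals[r] += c
    return sorted((c, r) for r, c in totals.items() if c != 0)

# 3*sqrt(7) - 2*sqrt(28) + 4*sqrt(7) simplifies to 3*sqrt(7),
# because 2*sqrt(28) = 4*sqrt(7)
print(combine([(3, 7), (-2, 28), (4, 7)]))   # [(3, 7)]
```

Note that `combine` only merges terms whose simplified radicands match, mirroring Rule #2: unlike radicals survive as separate entries.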
Simplify. This free worksheet contains 10 assignments, each with 24 questions, with answers. Example of one question: . Other topics covered: Completing the square by finding the constant; Solving equations by completing the square; Solving equations with The Quadratic Formula. Downloads: Radical-Expressions-Adding-and-subtracting-easy.pdf, Radical-Expressions-Adding-and-subtracting-medium.pdf, Radical-Expressions-Adding-and-subtracting-hard.pdf. Copyright © 2008-2020 math-worksheet.org All Rights Reserved.
2022-01-22 12:33:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4554724395275116, "perplexity": 4008.6783909841515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303845.33/warc/CC-MAIN-20220122103819-20220122133819-00110.warc.gz"}
https://www.esaral.com/q/if-the-solve-the-problem-33535
# Solve the problem

Question: If the circles $x^{2}+y^{2}+5 K x+2 y+K=0$ and $2\left(x^{2}+y^{2}\right)+2 K x+3 y-1=0,(K \in R)$, intersect at the points $\mathrm{P}$ and $\mathrm{Q}$, then the line $4 x+5 y-K=0$ passes through $P$ and $Q$ for :

1. exactly two values of K
2. exactly one value of K
3. no value of K
4. infinitely many values of K

Correct Option: 3

Solution: Equation of common chord $4 \mathrm{kx}+\frac{1}{2} \mathrm{y}+\mathrm{k}+\frac{1}{2}=0$            .....(1) and given line is $4 x+5 y-k=0$          .......(2) On comparing (1) & (2), we get $k=\frac{1}{10}=\frac{k+\frac{1}{2}}{-k}$ $\Rightarrow$ No real value of $k$ exists.
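To double-check the conclusion (my own sketch, not part of the original solution), the proportionality chain can be split into two conditions and solved with exact rational arithmetic; they force different values of $K$:

```python
from fractions import Fraction as F

# Common chord of the two circles (subtract the equations after scaling
# the second circle by 1/2):  4K*x + (1/2)*y + (K + 1/2) = 0.
# For this to be the line 4x + 5y - K = 0, coefficients must be
# proportional:  4K/4 = (1/2)/5 = (K + 1/2)/(-K).

# From the x- and y-coefficients:  K = (1/2)/5
k_xy = F(1, 2) / 5

# From the y-coefficient and constant term:  (1/10)*(-K) = K + 1/2,
# i.e.  K*(1 + 1/10) = -1/2,  so  K = (-1/2)/(11/10)
k_yc = F(-1, 2) / F(11, 10)

# The two conditions are incompatible, so no real K exists.
print(k_xy, k_yc)
```

Since `k_xy` is 1/10 and `k_yc` is -5/11, no single value of $K$ satisfies both ratios, confirming option 3.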
2023-01-30 11:09:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888249933719635, "perplexity": 2085.638020111445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00236.warc.gz"}
http://math.stackexchange.com/questions/27517/lp-norm-of-dirichlet-kernel
$L^p$ norm of Dirichlet Kernel I'm having trouble showing that the $L^p$ norm of the $n$-th order Dirichlet kernel is proportional to $n^{1-1/p}$. I've tried brute force integration and it didn't work out. I would be grateful for any hints. Thanks. - Just to be clear, you want to find $$\| D_n \|_p = \left(\int_{-\pi}^{\pi} \left| \frac{\sin(nx + x/2)}{\sin(x/2)} \right|^p dx \right)^{1/p},$$ right? –  JavaMan Mar 17 '11 at 6:04 yes thats right –  jack Mar 17 '11 at 14:52 Assuming the notation of DJC, apply the transformation $y = n x$ to obtain $$\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p = \int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n})}{n \sin(\frac{y}{2 n})} \right|^p dy .$$ Fatou's Lemma now shows that $$\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p \leq \int_{\mathbb{R}} \left| \frac{2 \sin(y)}{y} \right|^p dy .$$ For the other side of the inequality, note, that $$\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p \geq \int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n})}{\frac{y}{2}} \right|^p dy,$$ so using the triangular inequality of the $L^p$ norm we obtain $$\frac{1}{n^{p-1}} \left\| D_n \right\|^p_p \geq \left( \left( \int_{\mathbb{R}} \left| \frac{2 \sin(y)}{y} \right|^p dy \right)^{1/p} - \left( \int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n}) - \sin y}{\frac{y}{2}} \right|^p dy \right)^{1/p} \right)^p .$$ The inequality now follows from $$\int_{- n \pi}^{n \pi} \left| \frac{\sin(y + \frac{y}{2 n}) - \sin y}{\frac{y}{2}} \right|^p dy \leq \int_{- n \pi}^{n \pi} \left| \frac{\frac{y}{2 n}}{\frac{y}{2}} \right|^p dy = \frac{2 \pi}{n^{p - 1}} = o(n) .$$
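As a numerical sanity check (mine, not from the thread): for $p = 2$, Parseval applied to $D_n(x) = \sum_{k=-n}^{n} e^{ikx}$ gives $\| D_n \|_2 = \sqrt{2\pi(2n+1)}$, consistent with the claimed $n^{1-1/p}$ rate, and a crude midpoint-rule quadrature agrees:

```python
import math

def dirichlet_lp(n, p, samples=100_000):
    # Midpoint-rule estimate of ||D_n||_p on [-pi, pi], where
    # D_n(x) = sin((n + 1/2) x) / sin(x / 2).
    h = 2 * math.pi / samples
    total = 0.0
    for i in range(samples):
        x = -math.pi + (i + 0.5) * h   # midpoints, so x is never 0
        total += abs(math.sin((n + 0.5) * x) / math.sin(x / 2)) ** p
    return (total * h) ** (1 / p)

# For p = 2, Parseval gives the exact value sqrt(2*pi*(2n + 1)),
# i.e. growth of order n^(1 - 1/2).
for n in (4, 8, 16):
    est = dirichlet_lp(n, 2)
    exact = math.sqrt(2 * math.pi * (2 * n + 1))
    print(n, round(est, 3), round(exact, 3))
```

For other values of $p$ the same routine can be compared against $c_p\, n^{1-1/p}$ with $c_p^p = \int_{\mathbb{R}} |2\sin(y)/y|^p\, dy$, matching the limit derived in the answer.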
2015-07-05 09:59:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.983447790145874, "perplexity": 229.5948040552444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097396.10/warc/CC-MAIN-20150627031817-00193-ip-10-179-60-89.ec2.internal.warc.gz"}
https://open.kattis.com/problems/thekingsguards
# The King's Guards

In a certain kingdom, the king wants to protect his citizens by deploying guards. He has recruited a number of guards, and has outfitted them with heavy armor for protection from bandits, foreign knights, and other ne'er-do-wells. His guards are tough, but unfortunately they aren't very bright and will attack anyone wearing armor, even each other! The kingdom consists of a number of villages, connected by roads. All of the roads are of poor quality. Some are muddy, some have rickety bridges. None of them can support a guard in full armor. So, the king must decide which roads to improve so that his guards can reach the entire kingdom. The roads are bidirectional. Each guard can only be deployed to a single village in a certain subset of the kingdom's villages. The king needs to minimize the cost of improving roads, while satisfying several other constraints: • Every guard must be deployed; none must be left out. • Every guard must be deployed in their subset of villages. • Every village must be reachable by exactly one guard. If two guards can reach each other, they'll fight. Help the king determine the minimum cost of improving the roads of his kingdom while satisfying the above constraints. ## Input The first line of input contains three integers $n$ ($1 \le n \le 300$), $r$ ($0 \le r \le \frac{n \cdot (n-1)}{2}$) and $g$ ($1 \le g \le n$), where $n$ is the number of villages, $r$ is the number of roads, and $g$ is the number of guards. The villages are numbered $1$ through $n$. Each of the next $r$ lines contains three integers $a$, $b$ ($1 \le a < b \le n$) and $c$ ($1 \le c \le 1\, 000$). Each line describes a road between village $a$ and village $b$, costing $c$ to improve. The roads are bidirectional; a guard can go from $a$ to $b$ or from $b$ to $a$. Every pair of villages has at most one road between them. Each of the next $g$ lines starts with an integer $k$ ($1 \le k \le n$), and then contains $k$ integers $v$ ($1 \le v \le n$).
Each line describes the villages comprising the subset where one particular guard may be placed. The subsets may overlap.

## Output

Output a single integer, which is the minimum cost the king must pay to improve enough roads so that every village is reachable by exactly one guard, and every guard is deployed. Output $-1$ if it isn't possible to deploy the guards in a way that satisfies all of the constraints.

Sample Input 1:
5 6 2
1 2 1
1 3 4
2 4 2
2 5 5
3 4 7
4 5 3
2 1 2
2 2 4

Sample Output 1:
8

CPU Time limit: 1 second. Memory limit: 2048 MB.
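The input format above is plain whitespace-separated integers, so a minimal Python reader might look like the sketch below. This is my own illustration of the parsing step only; it does not solve the optimization itself (which amounts to a minimum-cost spanning forest with exactly one deployed guard per component).

```python
def parse_kingdom(text):
    # Parse the format described above: first line "n r g"; then r road
    # lines "a b c"; then g guard lines "k v1 ... vk".
    tokens = iter(text.split())
    n, r, g = (int(next(tokens)) for _ in range(3))
    roads = [(int(next(tokens)), int(next(tokens)), int(next(tokens)))
             for _ in range(r)]
    guards = []
    for _ in range(g):
        k = int(next(tokens))
        guards.append([int(next(tokens)) for _ in range(k)])
    return n, roads, guards

# Sample Input 1 from the statement; the expected answer for it is 8.
sample = """5 6 2
1 2 1
1 3 4
2 4 2
2 5 5
3 4 7
4 5 3
2 1 2
2 2 4"""
print(parse_kingdom(sample))
```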
2022-06-26 13:55:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47913074493408203, "perplexity": 1094.5788489089414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103269583.13/warc/CC-MAIN-20220626131545-20220626161545-00172.warc.gz"}
https://probabilityexam.wordpress.com/2013/03/14/exam-p-practice-problem-49-aggregate-claim-costs/
# Exam P Practice Problem 49 – Aggregate Claim Costs

Problem 49-A

The aggregate claim amount (in millions) in a year for a block of fire insurance policies is modeled by a random variable $Y=e^X$ where $X$ has a normal distribution with mean 4 and variance 2. What is the expected annual aggregate claim amount?

(A) 403.43
(B) 244.69
(C) 148.41
(D) 90.02
(E) 54.60

Problem 49-B

The aggregate claim amount (in millions) in a year for a block of fire insurance policies is modeled by a random variable $Y=e^X$ where $X$ has a normal distribution with mean 1.15 and variance 1.2. What is the probability that the annual aggregate claim amount will be less than the expected annual aggregate claim amount?

(A) 0.5000
(B) 0.6915
(C) 0.7088
(D) 0.8749
(E) 0.9599

© 2013
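As a reminder of the standard lognormal facts both parts rely on (not part of the original post): if $X$ has a normal distribution with mean $\mu$ and variance $\sigma^2$, then $E[e^X]=e^{\mu+\sigma^2/2}$, and $P(e^X < E[e^X]) = P(X < \mu + \sigma^2/2) = \Phi(\sigma/2)$. A quick numerical check of these formulas in Python (standard library only; function names are mine):

```python
import math
from statistics import NormalDist

def lognormal_mean(mu, var):
    """E[e^X] for X ~ Normal(mu, var): exp(mu + var/2)."""
    return math.exp(mu + var / 2)

def prob_below_mean(mu, var):
    """P(e^X < E[e^X]) = P(X < mu + var/2) = Phi(sqrt(var)/2)."""
    return NormalDist(mu, math.sqrt(var)).cdf(mu + var / 2)

print(round(lognormal_mean(4, 2), 2))   # e^(4 + 2/2) = e^5
print(round(prob_below_mean(1.15, 1.2), 4))
```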
http://www.duzhongxiang.com/2016/09/18/Linux-Start/
# Linux Start

This article mainly talks about the booting process of Linux 0.11. It focuses on the initialization work done in Real Mode and some preparatory work for Protected Mode. In this part, all the code is written in assembly language.

#### BIOS

When we turn on the computer, the CPU is in Real Mode, so the addressing range is 1 MB, and there is nothing useful in RAM yet, so this first step is done by the hardware. The memory from 640 KB to 1 MB is mapped to ROM, and the BIOS lives in that ROM. The CS register is set to 0xFFFF and IP to 0x0000. At this point CS:IP resolves to physical address 0xFFFF0 (segment * 16 + offset), which is the entry point of the BIOS. The BIOS creates its IVT (Interrupt Vector Table) and ISRs (Interrupt Service Routines), reads information about the computer, and stores it at certain addresses in memory. Finally, the BIOS loads the first 512 bytes of the disk to 0x7C00 and starts executing from that address.

#### Bootsect

Bootsect is the 512-byte boot sector the BIOS just read from the disk. Bootsect copies itself from 0x7C00 to 0x90000 and continues executing from the new address. It sets SS (the Stack Segment register) to 0x9000 and SP (the Stack Pointer) to 0xFF00. Setting SS and SP to these values reserves enough space for stack operations (push and pop); the interrupt calls made later will use this stack.

Then Bootsect reads the Setup program into memory at 0x90200 (0x200 is exactly 512 bytes). To read Setup, Bootsect uses the IVT and ISRs created by the BIOS: interrupt "int 0x13" reads Setup from the disk (4 sectors, 2 KB). The other parts of the system are loaded to 0x10000 in the same way (240 sectors, 120 KB). Since loading the system takes some time, "Loading system ..." is printed on the screen using interrupt 0x10. Finally, Bootsect stores the root device number at 0x901FC, so the root file system can be located later.
#### Setup

Setup first uses interrupts to get information about the computer and stores it at 0x90000~0x901FD; this overwrites part of Bootsect. Then Setup disables all interrupts by clearing the IF bit of the EFLAGS register, because the next steps will overwrite the memory holding the IVT and ISRs created by the BIOS; if an interrupt occurred during that time, unexpected errors could happen. Setup then moves the system (the 240 sectors loaded by Bootsect) from 0x10000 to 0x00000, which overwrites the BIOS's IVT and ISRs.

Because the next step switches to Protected Mode, a new IDT and GDT must be prepared first. Here GDTR is set to 0x90200 and the descriptor table is written at that address. In this GDT, item zero is empty, the first item is the kernel code segment descriptor, and the second item is the kernel data segment descriptor. The base address of both the kernel code segment and the kernel data segment is 0x00000000. IDTR is loaded with 0x00000000 as well (no ISRs have been added yet, and since the system is still in Real Mode this does not touch the system code at 0x00000).

Setup then opens the A20 line, which lets the CPU do 32-bit addressing. Before opening A20, only pins A0~A19 of the CPU can be used for addressing: when an address exceeds 0xFFFFF, it wraps around to 0x00000. After opening A20, pins A20~A31 can also be used. This alone does not mean we are in Protected Mode yet.

Next, Setup reprograms the interrupt controller (the 8259A). This does not change any data in memory; it just does some initialization work for the 8259A.

Finally, setting the PE bit (bit 0) of the CR0 register to 1 switches the system into Protected Mode. Since the system now uses the new addressing method, the instruction below selects the first item of the GDT with offset 0. In other words, execution continues from address 0x00000000, because the base address of the first item in the GDT is 0x00000000.

jmpi 0, 8

Remember that Setup has already moved the system's code to this address.
#### Head

Just comparing the code in head.s and setup.s, we can see that the code style has changed a lot, because they are executed in different modes. The first job of Head is to reset the registers used in Real Mode, setting DS, ES, FS and GS to 0x10, which makes them point to the second item of the GDT (the kernel data segment descriptor). Since SS and SP cannot work as before in Protected Mode, SS is also set to 0x10, and ESP becomes the new stack pointer, pointing to the end of user_stack (an array defined in sched.c that lives in the kernel's data segment).

Then Head resets the IDT by loading IDTR with the address of idt (a structure array defined in head.h, also in the kernel's data segment) and sets every ISR in the IDT to ignore_int, the handler for unknown interrupts. It then resets the GDT by loading GDTR with the address of gdt (a structure array defined in head.h, also in the kernel's data segment), because the memory holding the old GDT will be reused as a buffer. Compared with the old GDT, the content of the new one is unchanged, except that the segment limit becomes 16 MB.

Finally, Head builds the paging system: it creates a page directory at the beginning of physical memory and 4 page tables right after the directory. The first 4 items in the directory point to the 4 tables, and the 4 tables manage the first 16 MB of memory (4096 * 4 KB). CR3 is set to 0x00000000 (the address of the page directory), and the PG bit of CR0 is set to 1, which enables paging. All code from now on goes through the paging system. Because for the kernel the linear address equals the physical address, we can still find code that directly reads 0x901FC to get the root device number.
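How a 32-bit linear address travels through this two-level structure can be sketched as follows (illustration only, not kernel code): the top 10 bits select a directory entry, the next 10 bits select a page-table entry, and the low 12 bits are the offset within a 4 KB page.

```python
def split_linear(addr):
    """Split a 32-bit linear address for x86 two-level paging."""
    return {
        "dir": addr >> 22,              # page directory index (10 bits)
        "table": (addr >> 12) & 0x3FF,  # page table index (10 bits)
        "offset": addr & 0xFFF,         # offset inside the 4 KB page (12 bits)
    }

# 0x901FC (the root device number) lands in directory entry 0,
# page table entry 0x90 -- inside the identity-mapped first 16 MB.
print(split_linear(0x901FC))
```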
http://www.physicsforums.com/showthread.php?t=590123
## Figuring out compounding interest

I talk to people a lot about the power of investing their money. I've always relied on Excel to figure these things out, though, and I'm getting sick of it. I figured there must be a simpler way to do it with math than making gigantic lists that detail every month and year a person invests money.

So, let's say I have $10,000 and expect an 8% yearly return on it. I figured out a formula that gives me the correct answer quickly:

10,000 * 1.08^n

Or say it was for 20 years:

10,000 * 1.08^20

This is great. But it doesn't do a whole lot on its own, because people generally contribute money regularly to their investments. Which gets me to my question...

I wanted to keep it simple. Let's say a person has $100. They invest it and can expect to earn 8% every year. Additionally, they add another $100 every year. The answer I got in Excel was $4,044.63 after 18 years. After countless months beating my head against a wall and talking to my cat, I came up with this:

100(1+.08)^18 + 100[((1+.08)^18 - 1)/.08]

However, that equals $4,144.63. And to be honest, I don't remember how the heck I came up with that crazy-looking equation. :( It is giving me the wrong answer, by $100! I must be doing something right. lol

Can anyone help me simplify and understand this? Thanks!

Your formula is correct. The difference is that you are assuming the person invests $100 plus an additional $100 on day one, whereas the formula Excel uses starts the yearly $100 investments at the end of the first year.

I don't understand. Is there just a regular formula with x's and y's and all those happy letters that does this? You know, where I can just plug the numbers in. The formula above, I forgot how I came up with it. The answer isn't as important to me as understanding it. Not that I don't want an answer, I do. But I need to understand it. Understanding it is paramount to me.
I hope that by learning the why, I can figure out equations on my own more easily in the future.

If the yearly investment and the interest rate are fixed, you can use a power (geometric) series to solve this. Let a = 1.08. You want to calculate the sum

a^18 + a^17 + ... + a^1

Multiply by a:

a^19 + a^18 + ... + a^2

Subtract the original sum:

Code:
  a^19 + a^18 + ... + a^2
-        a^18 + ... + a^2 + a^1
--------------------------------
  a^19               - a^1

So the result is (a - 1)(a^18 + a^17 + ... + a^1) = (a^19 - a^1). To get the original sum, divide by (a - 1):

(a^18 + a^17 + ... + a^1) = (a^19 - a^1)/(a - 1)

For your case you have 100 * (1.08^19 - 1.08) / (1.08 - 1) ~= 4044.6263239.

Although this is nice for doing algebra, it's probably better to use a spreadsheet to handle variations in monthly deposits, changes in interest rates, and interest that is compounded monthly (or daily) instead of yearly.
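The same closed form is easy to check numerically; a small sketch (the function name is mine):

```python
def future_value(payment, rate, years):
    """Value after `years` years when `payment` is invested at the start of
    each year and compounded yearly: payment * (a^years + ... + a^1),
    with a = 1 + rate, using the geometric-series closed form."""
    a = 1 + rate
    return payment * (a ** (years + 1) - a) / (a - 1)

print(round(future_value(100, 0.08, 18), 2))  # matches the Excel figure: 4044.63
```

Note that adding one extra $100 payment at the very end, which earns no interest, reproduces the original poster's $4,144.63.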
https://pbiecek.github.io/xai_stories/story-uplift-marketing1.html
Chapter 5 Story Uplift Modeling: eXplainable predictions for optimized marketing campaigns

Authors: Jan Ludziejewski (Warsaw University), Paulina Tomaszewska (Warsaw University of Technology), Andżelika Zalewska (Warsaw University of Technology)

Mentors: Łukasz Frydrych (McKinsey), Łukasz Pająk (McKinsey)

Key points:

• Thanks to their linear (difference) definition, uplift models open up a wide range of possibilities for using XAI.
• SHAP values can be used to explain the model at both the local and global level; they can be aggregated to estimate variable importance and dependence plots.
• For the analysed dataset, a marketing campaign is most effective when sent two months after the last purchase.
• XAI analysis can help in creating personalized marketing campaigns.

5.1 Introduction

Running a business is a challenge. It involves making a lot of decisions to maximize profits and cut down costs, and finding the tradeoff is not a straightforward task. This is where Machine Learning and uplift models can help to optimize marketing costs. It is widely believed that it is a good idea to send a marketing offer to all of a company's customers. From one point of view, we may think the probability that a customer will buy our product becomes higher; in fact, that is not always the case (the matter will be described in detail later). On the other hand, running a large-scale campaign is costly. Therefore, it is important for decision-making to consider the Return on Investment (ROI). Is it true that by sending a marketing offer we only increase the chance that the customer buys our product, and therefore increase our profit?

The issue has already been investigated (Verbeke and Bravo 2017), and it was pointed out that the customers of any company can be divided into 4 groups (Figure 5.1). The matrix (Figure 5.1) is based on the customer's decision to buy a product depending on whether or not they were addressed by a marketing campaign.
The action used to trigger a particular behaviour in customers is called a treatment. Among the 4 groups we distinguish:

• 'persuadables': customers who would not buy the product without being exposed to the marketing campaign
• 'sure things': customers who are going to buy the product regardless of whether they receive the treatment
• 'lost causes': customers who are NOT going to buy the product regardless of whether they receive the treatment
• 'sleeping dogs': customers who would buy the product without being exposed to the marketing campaign, but who resign if they receive a marketing offer

It can be observed that in the case of 'lost causes' and 'sure things', sending a marketing offer makes no impact, so it doesn't make sense to spend money on targeting these customers. As a company, however, we should pay more attention to the 'persuadables' and 'sleeping dogs' groups. In the case of the first group, bearing the costs of the marketing campaign will bring benefits. In the case of the latter, we not only spend money on targeting them but also discourage them from buying the product, so as a company we lose twice.

The case of 'sleeping dogs' may seem unrealistic, so here is an example. Imagine a customer who subscribed to our paid newsletter and forgot that he pays a fixed fee each month. He would continue paying unless the company sends him a discount offer. At that moment, the customer realizes that he doesn't need the product and unsubscribes. By understanding the structure of its customers, a company can target its offers more effectively.
5.1.1 Approaches towards uplift modeling

In (Akshay Kumar 2018) it was pointed out that the problem of deciding whether it is profitable to send an offer to a particular customer can be tackled from two different perspectives:

• predictive response modeling (a common classification task where the model assigns a probability to each of the classes)
• uplift modeling (where the 'incremental' probability of purchase is modeled)

The latter is tailored to this particular task and is more challenging. Uplift modeling is a technique that estimates the gain in purchase probability obtained by sending the customer the marketing materials. The field is relatively new. The two most common approaches are (Lee 2018):

• Two Model

In this method two classifiers are built. One is trained on observations that received the treatment (call it model_T1), and the other is trained on observations that didn't (model_T0). The uplift for a particular observation is then calculated as follows. If the observation received the treatment, it is fed to model_T1, which predicts the probability that the customer will buy the product. Next, we ask what would have happened without the treatment: the treatment indicator in the observation's features is set to zero, and this modified record is fed to model_T0, which predicts the purchase probability in that scenario. The uplift is the difference between the outputs of model_T1 and model_T0; the higher the difference, the more profitable it is to address the marketing campaign to that customer. Uplift is computed analogically for the people who didn't receive the treatment.

• One Model

This approach is conceptually similar to the Two Model method, with the difference that only one classifier is built instead of two. Every observation is fed to the model, which generates a prediction.
Then the indicator in the treatment column is flipped, and this modified vector is fed to the same model, which again outputs the probability that the customer buys the product. The uplift is the difference between the two predicted probabilities. As uplift modeling is an emerging field, there isn't clear evidence about which method is better.

5.2 Dataset

There is a scarcity of well-documented datasets dedicated to uplift modeling. For this reason, (Rzepakowski and Jaroszewicz 2012) proposed artificial modifications of available datasets in order to introduce treatment information. As the purpose of this story is to investigate XAI techniques in the domain of uplift modeling, we decided to use a real-life dataset: Kevin Hillstrom's dataset from the E-Mail Analytics And Data Mining Challenge (Hillstrom 2008). The dataset consists of 64000 records describing customers who last purchased within 12 months. As a treatment, an e-mail campaign was sent:

• 1/3 of customers were randomly chosen to receive an e-mail campaign featuring Men's merchandise
• 1/3 were randomly chosen to receive an e-mail campaign featuring Women's merchandise
• 1/3 were randomly chosen to not receive any e-mail campaign (the 'control group')

The following actions were recorded as the expected behavior:

• a visit to the company's website within 2 weeks after the marketing campaign was sent
• a purchase from the website within 2 weeks after the marketing campaign was sent

In the original challenge, the task was to determine whether the men's or women's e-mail campaign was successful. In order to simplify the problem, we reformulated the task: we focus on answering the question of whether any e-mail campaign persuaded customers to buy a product.
The features describing customers in the dataset are specified in Figure 5.3.

The dataset also contains information about customer activity in the two weeks following delivery of the e-mail campaign (these can be interpreted as labels):

• Visit: 1/0 indicator, 1 = customer visited the website in the following two weeks
• Conversion: 1/0 indicator, 1 = customer purchased merchandise in the following two weeks
• Spent: actual dollars spent in the following two weeks

5.2.1 Exploratory Data Analysis

First, we decided to investigate the variables that have more than 3 unique values. At the same time, these variables (recency and history) intuitively seem to be the most important for predicting whether someone will buy a product or not. The history variable has a heavy-tailed distribution, so it might be reasonable to apply a Box-Cox transformation; however, we decided to keep the variable without any preprocessing for easier interpretation.

In the case of the mens, womens and newbie variables, the proportion of 0's to 1's is almost equal. There are far fewer records of people living in the countryside than in urban or suburban areas. Most of the company's customers buy via phone or web; it is rare that someone uses the multichannel option. Most of the customers in the dataset received the treatment in the form of a marketing e-mail.

5.2.2 Feature engineering

The dataset is largely imbalanced: there are only about 15% positive cases in column Visit and 9% in column Conversion. In this situation, we decided to use column Visit as the target for the classifier. As the number of columns is small, we used one-hot encoding for the categorical variables instead of target encoding.

5.3 Model exploration and metrics

There are not many packages dedicated to uplift modeling in Python. We investigated two: pylift (Yi and Frost 2018b) and pyuplift (Kuchumov 2018). The latter enables usage of 4 types of models, one of which is the Two Model approach.
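To make the Two Model idea concrete, here is a minimal toy sketch (the frequency-based "models" and the data layout are invented for illustration; this is not the pyuplift implementation):

```python
# Two Model sketch: two purchase-rate "models" (here just class frequencies
# per feature value) fitted separately on treated and control observations.
def fit_rate_model(rows):
    """'Train' a toy model: purchase rate per feature value."""
    counts, hits = {}, {}
    for x, y in rows:
        counts[x] = counts.get(x, 0) + 1
        hits[x] = hits.get(x, 0) + y
    return {x: hits[x] / counts[x] for x in counts}

def uplift(model_t1, model_t0, x):
    """Uplift = P(buy | treated) - P(buy | not treated)."""
    return model_t1.get(x, 0.0) - model_t0.get(x, 0.0)

# (feature, bought) pairs; a single categorical feature for simplicity
treated = [("recent", 1), ("recent", 1), ("recent", 0), ("old", 0)]
control = [("recent", 1), ("recent", 0), ("recent", 0), ("old", 0)]

m1 = fit_rate_model(treated)
m0 = fit_rate_model(control)
print(uplift(m1, m0, "recent"))  # 2/3 - 1/3
```

In a real pipeline the two toy rate tables would be replaced by two fitted classifiers, but the subtraction step is the same.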
The pylift package provides the TransformedOutcome class that generates predictions. However, the model itself is not well described and uses XGBRegressor underneath, which is not intuitive. Fortunately, the package also offers the class UpliftEval, which allows visualization of uplift metrics. In the end, we decided to create our own classifier (following the One Model approach) and use the UpliftEval class from the pylift package for metric evaluation.

In our project, we used an XGBoost classifier. In order to optimize its parameters, we applied the area under the cumulative gain chart (described below) as the score function. In Figure 5.6, we show the cumulative gain chart for the train and test sets.

The Qini curve is not recommended for performance evaluation of uplift models, as it is vulnerable to overfitting to the treatment label. Therefore, the cumulative gain chart is used; it is the least biased estimate of the uplift. In the pylift package, it is implemented based on the formula:

$$Cumulative\ gain(\phi) = \left(\frac {n_{t,1}(\phi)}{n_{t,1}}-\frac {n_{c,1}(\phi)}{n_{c,1}}\right)\left(\frac{n_t(\phi)+n_c(\phi)}{N_t+N_c}\right)$$

where

$$n_{t,1}(\phi)$$ is the number of observations in the treatment group at cutoff level $$\phi$$ with label 1,
$$n_{c,1}(\phi)$$ is the number of observations in the control group at cutoff level $$\phi$$ with label 1,
$$n_{t,1}$$ is the total number of observations in the treatment group with label 1 (analogically $$n_{c,1}$$),
$$N_t$$ is the total number of observations in the treatment group (analogically $$N_c$$).

The theoretical plot is created according to the following scheme. First, the customers are sorted in descending order of predicted uplift. Then some fraction of the data is taken for the analysis (e.g. the 10% of people with the highest score); this cutoff is represented as $$\phi$$ in the formula. Next, the uplift gain is computed for that subset. At the beginning of the curve the gain is largest, as it refers to the 'persuadables' group.
Later, the curve stabilizes as it depicts the 'lost causes' and 'sure things' groups. At the end, the curve decreases, since the 'sleeping dogs' have negative uplift.

5.3.1 Model

It can be seen that our model is better than random choice but much worse than the practical/theoretical maximum possible. It is also worse than the case without 'sleeping dogs'. Compared to uplift modeling in other domains, e.g. medical applications, treatment in marketing generally has a smaller impact on the individual, so the dataset itself is noisier and cumulative gains are smaller. It is also worth noting that, due to the small number of features, there are multiple cases in the dataset where two observations with the same features have different answers. This kind of noise also tremendously impacts the score and requires caution when training models. Furthermore, since uplift itself is an interaction, our model has to take interactions into consideration.

Considering the previous observations, the model was found using a so-called local search procedure: we choose some meta-parameters of the model and, iteratively for every meta-parameter, approximate the derivative by sampling the local neighborhood of the current value and follow the ascending gradient. Local search stops naturally when no parameter changed in the previous iteration, meaning we hit a local optimum. To be clear, if a meta-parameter is discrete, approximating the local neighborhood simply means checking nearby values. As the score function, we chose cross-validation on cumulative gains. This kind of procedure should find highly robust models. Indeed, it is worth noticing that our model didn't experience any overfitting, as its quality on the train and test sets is similar. The resulting major parameters were: a maximum depth of 5, a high learning rate of 0.7 and only 12 estimators.
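The cumulative gain at a single cutoff $$\phi$$ can be sketched directly from the formula given earlier (illustrative code, not the pylift implementation; it assumes both groups contain at least one positive label):

```python
def cumulative_gain(scores, treated, labels, phi):
    """Cumulative gain at cutoff `phi` (fraction of customers ranked by score).

    scores  - predicted uplift per customer
    treated - 1 if the customer received the campaign, else 0
    labels  - 1 if the customer responded (e.g. visited), else 0
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    top = order[: max(1, int(phi * len(order)))]

    n_t1_phi = sum(labels[i] for i in top if treated[i])
    n_c1_phi = sum(labels[i] for i in top if not treated[i])
    n_t_phi = sum(1 for i in top if treated[i])
    n_c_phi = len(top) - n_t_phi

    n_t1 = sum(l for t, l in zip(treated, labels) if t)
    n_c1 = sum(l for t, l in zip(treated, labels) if not t)

    return (n_t1_phi / n_t1 - n_c1_phi / n_c1) * (n_t_phi + n_c_phi) / len(scores)

# Tiny example: 4 customers ranked by predicted uplift, cutoff at the top half
print(cumulative_gain([0.9, 0.8, 0.1, 0.0], [1, 0, 1, 0], [1, 0, 0, 1], 0.5))
```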
5.3.2 Comparing metrics

We tried to employ the same local search procedure (as described in the Model section) using accuracy as the score function. However, it failed to converge to any decent quality, because this metric is much less informative in the case of a largely imbalanced dataset. Since only a small number of customers actually made a purchase, it's hard to correctly predict a positive case using a non-overfitted model. Therefore, within the local search with the accuracy function, the starting local neighborhood was always flat. This might be because there is more noise in the dataset than positive cases. Fortunately, the only important factor from our perspective is the probability of purchase, since uplift is the increase in purchase probability after treatment, and it directly translates into money gained. To visualize this in a straightforward manner, we present a table comparing our current robust XGBoost model with an overfitted one (deep trees, 100 estimators).

TABLE 5.1: Metrics comparison

Model                Train accuracy   Valid accuracy   Train cumulative gain   Valid cumulative gain
Overfitted XGBoost   0.8755           0.8473           0.7190                  0.0204
Robust XGBoost       0.8532           0.8532           0.0398                  0.0425

As we can see (Table 5.1), for the overfitted model the cumulative gain drops by 97%, while the overfit gap in accuracy scores is only around 2%.

5.4 Explanations

The cumulative gain chart (Figure 5.6) shows that the proposed model brings additional value, as its performance is always above a random system. Here comes the question of whether the model is reliable: does it make its decisions based on features that are important from an expert's perspective? Such a judgment can be made using XAI tools. We decided to investigate model interpretability from both the instance-level and dataset-level perspective.

5.4.1 Instance-level

In order to explain the model output for a particular customer, we employed SHAP (SHapley Additive exPlanations) values (Lundberg and Lee 2017).
Before we move to the investigation of SHAP values, let's get to know the customers that got the highest and the lowest uplift predictions. In this section, we will analyse the reliability of the predictions for these particular instances. Table 5.2 shows all the information provided to the system about the two customers.

TABLE 5.2: Customer with the highest and the lowest uplift - features

Column name            Customer with biggest uplift   Customer with lowest uplift
recency                2.0                            5.0
history                228.93                         243.95
mens                   1.0                            0.0
womens                 1.0                            1.0
zip_code_Surburban     1.0                            0.0
zip_code_Rural         0.0                            1.0
zip_code_Urban         0.0                            0.0
newbie                 0.0                            1.0
channel_Phone          0.0                            1.0
channel_Web            1.0                            0.0
channel_Multichannel   0.0                            0.0
segment                0.0                            1.0

As can be seen (Table 5.2), the customers spent almost the same amount of money on our company's products during the last 12 months. The person with the highest uplift last shopped 2 months ago, whereas the person with the lowest uplift did so 5 months ago. Some people in the dataset made their last purchase as long as 12 months ago, so the person with the lowest uplift is not an edge case in that sense. Apart from many other differences between the two customers, the key one is that the person with the highest uplift received the treatment, whereas the second customer didn't.

Below we present SHAP values for the customers described in Table 5.2. The values were computed directly on the uplift model (Figure 5.7). It can be seen that in both cases a big contribution to the final result comes from the customer's purchase history (about 235 USD) and the fact that the customer bought products from the women's collection. What is interesting is that the customers have almost the same values of these two attributes, but opposite signs of their contributions (SHAP values).
5.4.2 Difference approach

We can benefit from the additive feature attribution property of SHAP values to model the uplift:

$$uplift = P(purchase\ |\ T=1) - P(purchase\ |\ T=0)$$

$$SHAP(uplift) = SHAP(P(purchase\ |\ T=1)) - SHAP(P(purchase\ |\ T=0))$$

This property gives us a great opportunity to evaluate these two vectors of SHAP values independently. For example, if we use any tree-based model, we can make use of the tree-based kernel for SHAP value estimation (faster and better convergent) instead of modeling the uplift directly as a black-box model. Table 5.3 compares the SHAP values obtained using the two methods for the customer with the lowest uplift.

TABLE 5.3: SHAP values obtained using two methods

Column name            Uplift approach   Diff approach
recency                -0.00593          -0.00535
history                -0.27282          -0.27270
mens                   -0.00961          -0.00928
womens                 -0.05628          -0.05692
zip_code_Surburban     0.00076           -0.00055
zip_code_Rural         -0.03313          -0.03257
zip_code_Urban         -0.00148          -0.00179
newbie                 0.00365           0.00351
channel_Phone          0.00024           -0.00036
channel_Web            0.00188           0.00200
channel_Multichannel   -0.00067          0.00029
segment                0.00010           0.00043

The experimental results showed that these two ways of calculating SHAP values provide similar estimates, up to numerical error. A few features have a small positive or negative value depending on the method. This is caused by the fact that, for estimating SHAP values directly on the uplift model, the KernelExplainer was used. The source of randomness is that we took a subset of records instead of the whole dataset, as recommended in the documentation due to the algorithm's complexity; KernelExplainer is also by nature less precise. Nevertheless, we showed that in our example the two methods lead to similar values. The possibility of analysing uplift models through the additivity of SHAP values gives room for another valuable inspection.
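The additivity argument can be illustrated with a tiny numeric sketch (the SHAP values below are made up for one hypothetical customer; standard library only):

```python
# Hypothetical SHAP decompositions of P(purchase) for one customer,
# once with the treatment flag set to 1 and once with it set to 0.
base_t1, shap_t1 = 0.15, [0.02, -0.10, 0.05]
base_t0, shap_t0 = 0.14, [0.01, -0.04, 0.03]

# Additivity: prediction = base value + sum of SHAP values, so the
# per-feature attribution of the uplift is just the elementwise difference.
shap_uplift = [a - b for a, b in zip(shap_t1, shap_t0)]
pred_t1 = base_t1 + sum(shap_t1)
pred_t0 = base_t0 + sum(shap_t0)
uplift = pred_t1 - pred_t0

# The difference-of-SHAP decomposition reproduces the uplift exactly
assert abs((base_t1 - base_t0) + sum(shap_uplift) - uplift) < 1e-12
```

This is why the per-feature values in the "Diff approach" column can be computed from two fast tree-explainer runs rather than one slow kernel-explainer run on the uplift black box.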
Below we present how SHAP values differ depending on whether the customer was or wasn't addressed by the treatment. On the x axis there are SHAP values in the case T=0 and on the y axis in the case T=1. Each chart shows the SHAP values of one variable. If the SHAP values were the same irrespective of the presence or absence of treatment, they would lie on the identity line. Moreover, colour is used as a third dimension, indicating the group that a particular customer belongs to. We decided to merge two groups ('sure things' and 'lost causes') as they both have uplift almost equal to zero, so we now distinguish 3 groups: 'sleeping dogs', 'persuadables' and 'sure things and lost causes'. The division was based on the predicted uplift: 'sleeping dogs' have a considerable negative uplift, 'sure things and lost causes' have uplift in $$[-0.01,0.01]$$ and 'persuadables' have uplift greater than 0.01. The group 'sure things and lost causes' should have exactly zero uplift, but due to numerical issues we set $$\epsilon$$ equal to 0.01. As almost all customers were categorized as 'persuadables', we show only 1000 records from this group on the plot to maintain chart readability. It can be seen that 'persuadables' lie slightly above and below the identity line. In Figure 5.9 the three customer groups are clearly separated; it would be interesting to check whether clustering methods would give a similar result. We also investigated the binary variables. Most of them looked similar to Figure 5.9, with one exception - the variable womens. The customer groups in Figure 5.10 overlap, yet they constitute very homogeneous groups. Note: in the case of our model there is no need to apply LIME, as its main advantage - sparsity - is not important when there are only a few variables.

5.4.3 Dataset- (subset-) level

To compute Variable Importance, the Permutation Feature Importance method is used most of the time.
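The three-way split described above can be written down as a small helper (a sketch; the group names and the $\epsilon = 0.01$ threshold come from the text, the function name is ours):

```python
def assign_group(uplift, eps=0.01):
    """Segment a customer by predicted uplift. 'Sure things' and 'lost causes'
    are merged, because both have uplift approximately equal to zero."""
    if uplift > eps:
        return "persuadables"
    if uplift < -eps:
        return "sleeping dogs"
    return "sure things and lost causes"
```

For example, `assign_group(0.05)` yields `"persuadables"`, while any prediction inside `[-0.01, 0.01]` falls into the merged near-zero group.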
Unfortunately, it is impossible to use this approach directly in our case because of the previously mentioned lack of full information: we don't know whether the client would purchase the product after treatment, or whether he would have bought it without treatment as well. Having at our disposal only historical data (not an oracle), we have only one of these two pieces of information. However, we can make use of the previously computed SHAP values of the uplift to calculate the same kind of permutation feature importance as an average of local SHAP importances (which are defined in a permutational way themselves, but computed more cleverly (Lundberg and Lee 2017)). We decided to evaluate feature importance not on the well-known dataset level but from the subset-level perspective, where the subsets are the 3 customer groups: 'sleeping dogs', 'sure things and lost causes' and 'persuadables'. Below we present the Variable Importance plots. The correlations between the SHAP values of a particular variable and the variable itself are highlighted in colour: red means a positive correlation, whereas blue means a negative correlation.

Conclusions:

- Regardless of the customer group, history and womens are always among the three most important features.
- For observations with considerable negative uplift ('sleeping dogs'), both history and womens have a negative correlation with their SHAP values. In the case of 'sure things and lost causes', womens has a positive correlation whereas history has a negative one. The same variables among 'persuadables' (considerable positive uplift) have a positive correlation with their SHAP values. The correlation changes gradually with the uplift value.
- Interestingly, regarding zip code, only the information whether someone is from a rural area is important. Note that this category of dwelling place was the least popular among customers.
- Information about the purchase channel has relatively small predictive power in general.
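The subset-level importance described above reduces to averaging absolute SHAP values within each customer group. A minimal sketch (the function name and toy numbers are ours):

```python
def group_importance(shap_rows, groups):
    """Per-group global importance: mean |SHAP| of each feature,
    the SHAP analogue of permutation feature importance used in the text.

    shap_rows: one SHAP vector per customer; groups: one group label per customer."""
    out = {}
    for g in set(groups):
        rows = [r for r, lab in zip(shap_rows, groups) if lab == g]
        n_feat = len(rows[0])
        out[g] = [sum(abs(r[j]) for r in rows) / len(rows) for j in range(n_feat)]
    return out

# Toy SHAP matrix for two features and three customers
imp = group_importance(
    [[0.2, -0.1], [0.4, 0.1], [-0.3, 0.0]],
    ["persuadables", "persuadables", "sleeping dogs"],
)
```

The sign information shown in colour on the plots would come separately, e.g. from the correlation between each feature column and its SHAP column.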
5.4.3.1 Dependence plots

Another tool to investigate the model is the dependence plot. There are two options: the most common are the Partial Dependence Plot (PDP) and Accumulated Local Effects (ALE), and the other one is the SHAP dependence plot. The Partial Dependence Plot shows the marginal effect that one or two features have on the predicted outcome of a machine learning model (Friedman 2000). It tells whether the relationship between the target and a feature is linear, monotonic or more complex. In the SHAP dependence plot, we can show how a feature value (x axis) impacted the prediction (y axis) of every sample (each dot) in a dataset (Lundberg et al. 2019). This provides richer information than the traditional Partial Dependence Plot, because we get at least two additional pieces of information: the density and the variance of the observations. The Partial Dependence Plot reflects the expected output of the model if only one feature value is changed and the rest stay the same. In contrast, the SHAP value for a feature represents how much that feature impacted the prediction for a single sample, accounting for interaction effects. So while in general you would expect to see similar shapes in a SHAP dependence plot and a Partial Dependence Plot, they will be different if your model has multi-variable interaction effects (like AND or OR). A PDP has no vertical dispersion and so gives no indication of how much interaction effects are driving the model's predictions (Lundberg 2018). We generated the Partial Dependence Plot for all features, and the SHAP dependence plot based on 1000 observations only for the history and recency features, due to the long processing time. For our model, the SHAP dependence plot reflects the shape of the Partial Dependence Plot. The contribution of history to the final uplift prediction differs among people with the same value of history. It can be seen that there is a considerable peak on the chart at a history value of about 230 USD.
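The PDP recipe in the paragraph above ("change only one feature value and average the predictions") is short enough to sketch directly; this is a generic illustration, not the chapter's plotting code:

```python
def partial_dependence(model, X, feature, grid):
    """Partial dependence of `model` on one feature (Friedman 2000):
    for each grid value, force that feature to the value in every row
    of X and average the resulting predictions."""
    curve = []
    for v in grid:
        preds = [model([v if j == feature else row[j] for j in range(len(row))])
                 for row in X]
        curve.append(sum(preds) / len(preds))
    return curve

# Toy model and data: the PDP of feature 0 recovers its 2*x slope
pdp = partial_dependence(lambda x: 2 * x[0] + x[1], [[0, 1], [0, 3]], 0, [0, 1, 2])
```

A SHAP dependence plot would instead scatter each row's SHAP value for the feature against the feature value, so the vertical spread around this averaged curve reveals interaction effects.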
However, people who spent such an amount of money have various SHAP values - some positive, some negative. This observation does not contradict the PDP, since for the PDP we compute the average. Note that on the SHAP dependence plot we displayed a sample of size 1000. Because recency can take only one of 12 values, only 12 'clusters' can be seen on the SHAP dependence plot. The dispersion within the 'clusters' shows how much the observations in our dataset differ. It is surprising that the disproportion between the results shown in Figure 5.16 is so significant. In the PDP of the mens feature, the lines are almost flat, meaning that the uplift prediction stays the same regardless of whether someone bought a product from the men's collection. According to Figure 5.17, if a person is a newbie, it is harder to encourage him/her to buy a product through a marketing campaign. It can be seen that the PDPs of zip_code_Surburban and zip_code_Urban look very similar. They both have a decreasing trend, whereas the PDP of zip_code_Rural has an increasing trend. In this case, the ALE and PDP curves cross; as they aren't parallel, there is a slight interaction in the model. The biggest gain in terms of uplift can be seen when the person uses the Web channel. The PDP of channel_Phone is flat.

5.5 Conclusions

Since the Partial Dependence Plots are generally parallel to the Accumulated Local Effects, we can safely assume that our model does not have (major) interactions. However, this does not mean that we could use some classifier without interactions, because here we model the uplift directly, which is the difference between predictions and is an interaction itself.

5.5.1 Sweet-spot

The most important observation here should be that while at first glance we can only manipulate the treatment variable, the dependence plots also give us the opportunity to choose the best time to contact the customer.
Intuitively, the recency curve should be concave, with a 'sweet-spot' between the time when the customer 'just went out of the shop' and when he 'forgot about it'. The curve in Figure 5.15 is indeed concave, but only for recency values between 1 and 4; for larger recency a sinusoidal noise is observed. These fluctuations can be interpreted as slight overfitting. The key message is that the sweet-spot appears to be two months after the last purchase.

5.5.2 Bigger influence

Based on Figure 5.16, a bigger influence can be made on customers who bought women's products than on those who bought products from the men's collection. Initially we removed from the dataset the information about the treatment type (women's/men's e-mail), but based on the Variable Importance analysis (Figures 5.11, 5.12, 5.13) we can reason about the type of e-mail that maximizes the uplift for a particular person. Considering how important the womens variable is, we propose the following policy: when someone buys from both the men's and the women's collections, we should send e-mails dedicated to women.

5.5.3 Influence from other factors

Other PDPs can be used to better understand customer persuadability. Since the only variable considerably reducing the estimated uplift is newbie, we can safely conclude that marketing campaigns have a better impact on regular customers, which is quite an intuitive conclusion. Analysing other factors, the one-hot encoded area of living (zip_code_Rural, zip_code_Surburban, zip_code_Urban) does not have an influence bigger than the statistical error, except perhaps zip_code_Rural: customers living in the countryside are more likely to be persuaded. Surprisingly, it is the only factor that may have some interactions. Regarding the purchase channel, it is best to target customers who bought only via the web in past months, which may be connected to the fact that our treatment is conducted via e-mail.
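The policy from Section 5.5.2 can be written as a small decision rule. Only the "bought from both collections" branch is stated in the text; the remaining branches and the function name are our assumptions for illustration:

```python
def choose_email(bought_mens, bought_womens):
    """Treatment-type policy sketch based on the Variable Importance analysis."""
    if bought_mens and bought_womens:
        return "womens_email"   # the rule proposed in the text
    if bought_womens:
        return "womens_email"   # assumption: match the collection the customer buys from
    if bought_mens:
        return "mens_email"     # assumption: match the collection the customer buys from
    return "womens_email"       # assumption: arbitrary default when no history is known
```

With more treatment variants, such a hand-written rule would naturally be replaced by the per-treatment uplift comparison sketched in Section 5.7.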
We suspect that in some cases the following situation can happen: someone buys via phone because he doesn't use the internet often. In such cases e-mail campaigns will not be effective.

5.6 Summary

Using XAI for uplift modeling helps to understand its complex models better. The analysis goes beyond just assessing whether the model is reliable.

5.6.1 Individual perspective

In the case of our task, the individual perspective doesn't seem to be vital. The situation in which a customer writes an e-mail to the company asking why he didn't receive an e-mail with the marketing campaign is highly unlikely. Even if he does, he wouldn't change a feature like his dwelling area only in order to get the e-mail. The things the customer can change rather easily are the values of the recency or history variables.

5.6.2 Data scientist's perspective

From the data scientist's perspective, the most important thing to check is whether the model is overfitted. A tool that can help in verifying model sensitivity is the Partial Dependence Plot. In the case of our model it can be seen that the model is slightly overfitted, as there is a peak on the PDP of history.

5.6.3 Executive perspective

XAI techniques can help executives better understand the behavior of the company's customers without paying for extra surveys to investigate their attitude towards the company. Key findings:

• The campaign e-mail should be sent two months after the last purchase in order to be more effective.
• The most important variables in the model seem to be reasonable, e.g. history and recency (Figures 5.11, 5.12, 5.13). The only surprising thing is the high importance of the zip_code_Rural feature.
• When someone buys from both the men's and the women's collections, we should send e-mails dedicated to women.
• With a bigger number of treatment variants, it would be possible to create a personalized marketing campaign.

A vital part of our work was adjusting XAI techniques to the particularities of uplift modeling.
We found out that, thanks to their additivity, SHAP values are well suited to uplift modeling - we showed two methods of using them. We identified limitations of the well-known Permutation Feature Importance in terms of explaining uplift models, caused by the fact that, unlike in other supervised models, here we do not have exact labels. Therefore, we used the generalization of SHAP values that converges to Permutation Feature Importance. We also analysed the SHAP dependence plots as an alternative to the PDP, and we carried out the analysis for the three groups of customers based on the corresponding uplift.

5.7 Future works

During initial feature engineering, we simplified our problem by merging the women's treatment and the men's treatment into one. By analysing the PDPs, we were able to propose a policy for choosing the optimal treatment type. However, it is not the only possible approach. We could go beyond standard uplift modeling and directly model uplift with 2 possible treatments, i.e. create a purchase prediction and then check whether sending the women's or the men's e-mail is more profitable, resulting in the following equation:

$$uplift=\max\big(P(purchase\ |\ w\_T=1) - P(purchase\ |\ w\_T=0,m\_T=0),\; P(purchase\ |\ m\_T=1) - P(purchase\ |\ w\_T=0,m\_T=0)\big)$$

where:

• w_T is the treatment dedicated to women
• m_T is the treatment dedicated to men

However, this leaves us with several open questions, e.g.: can we still implicitly calculate SHAP values using the previously presented efficient technique (based on additivity)? Surely the max function breaks the additivity of the uplift function, but maybe it is possible using some other method?

References

Kumar, Akshay, and Rishabh Kumar. 2018. “Uplift Modeling: Predicting Incremental Gains.” http://cs229.stanford.edu/proj2018/report/296.pdf.

Friedman, Jerome. 2000. “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics 29 (November). https://doi.org/10.1214/aos/1013203451.

Hillstrom, Kevin. 2008.
“The Minethatdata E-Mail Analytics and Data Mining Challenge Dataset.” https://blog.minethatdata.com/2008/03/minethatdata-e-mail-analytics-and-data.html.

Kuchumov, Artem. 2018. “Pyuplift Package - Documentation.” https://pyuplift.readthedocs.io/en/latest/index.html.

Lee, Josh Xin Jie. 2018. “Simple Machine Learning Techniques to Improve Your Marketing Strategy: Demystifying Uplift Models.” https://medium.com/datadriveninvestor/simple-machine-learning-techniques-to-improve-your-marketing-strategy-demystifying-uplift-models-dc4fb3f927a2.

Lundberg, Scott. 2018. “Interpretable Machine Learning with Xgboost.” https://towardsdatascience.com/interpretable-machine-learning-with-xgboost-9ec80d148d27.

Lundberg, Scott, and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In.

Lundberg, Scott M., Gabriel G. Erion, Hugh Chen, Alex DeGrave, Jordan M. Prutkin, Bala Nair, Ronit Katz, Jonathan Himmelfarb, Nisha Bansal, and Su-In Lee. 2019. “Explainable AI for Trees: From Local Explanations to Global Understanding.” CoRR abs/1905.04610. http://arxiv.org/abs/1905.04610.

Rzepakowski, Piotr, and Szymon Jaroszewicz. 2012. “Decision Trees for Uplift Modeling with Single and Multiple Treatments.” Knowledge and Information Systems - KAIS 32 (August). https://doi.org/10.1007/s10115-011-0434-0.

Verbeke, W., and C. Bravo. 2017. Profit Driven Business Analytics: A Practitioner’s Guide to Transforming Big Data into Added Value. Wiley and SAS Business Series. Wiley. https://books.google.pl/books?id=NCA3DwAAQBAJ.

Yi, Robert, and Will Frost. 2018a. “Pylift: A Fast Python Package for Uplift Modeling.” https://tech.wayfair.com/data-science/2018/10/pylift-a-fast-python-package-for-uplift-modeling/.

Yi, Robert, and Will Frost. 2018b. “Pylift Package - Documentation.” https://pylift.readthedocs.io/en/latest/.
http://mathoverflow.net/questions/12442/unusual-ray-tracing/12470
# Unusual ray tracing

Background

Ray tracing is very common in computational geometry, and the problem is then to find the point of intersection between the equation of a line and the equation of a plane in 3D. The parametric form of the line is given by $\mathbf{p}_\mathrm{line}=\mathbf{p}_\mathrm{a} + \xi (\mathbf{p}_\mathrm{b}-\mathbf{p}_\mathrm{a})$ and the plane can be defined by $\mathbf{p}_\mathrm{plane} \cdot \mathbf{n}+\mathrm{d}=0$, where $\mathbf{p}_\mathrm{plane}$ is a point on the plane and $\mathbf{n}$ is the normal vector to the plane. Combining these two equations $(\mathbf{p}_\mathrm{line}=\mathbf{p}_\mathrm{plane})$ gives a convenient expression for the desired point from $\xi=\frac{-\mathrm{d}-\mathbf{p}_\mathrm{a} \cdot \mathbf{n}}{(\mathbf{p}_\mathrm{b}-\mathbf{p}_\mathrm{a}) \cdot \mathbf{n}}$.

Question

I now consider the problem of finding the intersection(s) between an ellipse and a plane in 3D. Is there an effective way to perform this without an iterative scheme?

- Your question is probably a borderline one for this forum. And it's also unclear. Do you want light rays to travel on ellipses, is that what you mean? – Ryan Budney Jan 20 '10 at 19:24
- Yes. The details depend on how you describe the ellipse, but for most sane choices this reduces to the solution of a quadratic equation. This is mostly not MO-material, really. The faq lists at mathoverflow.net/faq#whatnot several other places where you will surely get help. – Mariano Suárez-Alvarez Jan 20 '10 at 20:11
- I have rewritten the question for clarity. – Daniel Jan 20 '10 at 20:13
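The $\xi$ formula from the Background can be implemented directly; this is a minimal illustrative Python sketch (not from the original post), guarding against a line parallel to the plane:

```python
def ray_plane_intersection(p_a, p_b, n, d):
    """Point where the line p_a + xi*(p_b - p_a) meets the plane p.n + d = 0.
    Returns None when the line direction is (numerically) parallel to the plane."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    denom = dot([b - a for a, b in zip(p_a, p_b)], n)   # (p_b - p_a) . n
    if abs(denom) < 1e-12:
        return None                                     # parallel: no unique intersection
    xi = (-d - dot(p_a, n)) / denom                     # the xi formula above
    return [a + xi * (b - a) for a, b in zip(p_a, p_b)]
```

For instance, the segment from (0, 0, 1) to (0, 0, -1) meets the plane z = 0 (n = (0, 0, 1), d = 0) at the origin.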
http://ncatlab.org/nlab/show/G2
# Contents

## Idea

The Lie group $G_2$ is one (or rather: three) of the exceptional Lie groups. One way to characterize it is as the automorphism group of the octonions. Another way to characterize it is as the subgroup of the general linear group $GL(7)$ of those elements that preserve the canonical differential 3-form $\langle -,(-)\times (-) \rangle$ on the Cartesian space $\mathbb{R}^7$. As such, the group $G_2$ is a higher analog of the symplectic group (which is the group that preserves a canonical 2-form on any $\mathbb{R}^{2n}$), obtained by passing from symplectic geometry to 2-plectic geometry.

## Definition

Definition. On the Cartesian space $\mathbb{R}^7$ consider the associative 3-form, the constant differential 3-form $\omega \in \Omega^3(\mathbb{R}^7)$ given on tangent vectors $u,v,w \in \mathbb{R}^7$ by

$\omega(u,v,w) \coloneqq \langle u , v \times w\rangle \,,$

where

• $\langle -,-\rangle$ is the canonical bilinear form
• $(-)\times(-)$ is the cross product of vectors.

Then the group $G_2 \hookrightarrow GL(7)$ is the subgroup of the general linear group acting on $\mathbb{R}^7$ which preserves the canonical orientation and preserves this 3-form $\omega$. Equivalently, it is the subgroup preserving the orientation and the Hodge dual differential 4-form $\star \omega$. See for instance the introduction of (Joyce).

## Properties

### General

The inclusion $G_2 \hookrightarrow GL(7)$ of def. 1 factors through the special orthogonal group:

$G_2 \hookrightarrow SO(7) \hookrightarrow GL(7) \,.$

### Relation to higher prequantum geometry

The 3-form $\omega$ from def. 1 we may regard as equipping $\mathbb{R}^7$ with 2-plectic structure. From this point of view, $G_2$ is the linear subgroup of the 2-plectomorphism group, hence (up to the translations) the image of the Heisenberg group of $(\mathbb{R}^7, \omega)$ in the symplectomorphism group. Or, dually, we may regard the 4-form $\star \omega$ of def. 1 as being a 3-plectic structure and $G_2$ correspondingly as the linear part of the 3-plectomorphism group of $\mathbb{R}^7$.

Related: G2, F4, E6, E7, E8, E9, E10, E11, $\cdots$

## References

### General

Surveys are in:

• Spiro Karigiannis, What is… a $G_2$-manifold (pdf)
• Wikipedia, G2

The definitions are reviewed for instance in:

• Dominic Joyce, Compact Riemannian 7-manifolds with holonomy $G_2$, Journal of Differential Geometry vol 43, no 2 (pdf)
• The octonions and $G_2$ (pdf)
• John Baez, $G_2$ (web)

Discussion in terms of the Heisenberg group in 2-plectic geometry is in

Cohomological properties are discussed in:

• Younggi Choi, Homology of the gauge group of exceptional Lie group $G_2$, J. Korean Math. Soc. 45 (2008), No. 3, pp. 699–709

### Applications in physics

Discussion of Yang-Mills theory with $G_2$ as gauge group is in:

• Ernst-Michael Ilgenfritz, Axel Maas, Topological aspects of G2 Yang-Mills theory (arXiv:1210.5963)

Revised on January 14, 2015 21:21:40 by Urs Schreiber
http://xxx.unizar.es/list/nlin/new
# Nonlinear Sciences

## New submissions

[ total of 18 entries: 1-18 ]
[ showing up to 2000 entries per page: fewer | more ]

### New submissions for Tue, 20 Mar 18

[1] Title: On discretization of the Euler top
Authors: A.V. Tsiganov
Comments: 12 pages, 2 figures, LaTeX with AMS fonts
Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Mathematical Physics (math-ph); Dynamical Systems (math.DS)

Application of the intersection theory to the construction of n-point finite-difference equations associated with classical integrable systems is discussed. As an example, we present a few new discretizations of the motion of the Euler top sharing the integrals of motion with the continuous-time system and the Poisson bracket up to an integer scaling factor.

[2] Title: On weak universality of three-dimensional Larger than Life cellular automaton
Subjects: Cellular Automata and Lattice Gases (nlin.CG)

Larger than Life cellular automaton (LtL) is a class of cellular automata and is a generalization of the Game of Life obtained by extending its neighborhood radius. We have studied the three-dimensional extension of LtL. In this paper, we show that a radius-4 three-dimensional LtL rule is a candidate for a weakly universal one.

[3] Title: Linear Instability of the Peregrine Breather: Numerical and Analytical Investigations
Comments: 12 pages, 4 figures
Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Mathematical Physics (math-ph)

We study the linear stability of the Peregrine breather both numerically and with analytical arguments based on its derivation as the singular limit of a single-mode spatially periodic breather as the spatial period becomes infinite. By constructing solutions of the linearization of the nonlinear Schr\"odinger equation in terms of quadratic products of components of the eigenfunctions of the Zakharov-Shabat system, we show that the Peregrine breather is linearly unstable.
A numerical study employing a highly accurate Chebychev pseudo-spectral integrator confirms exponential growth of random initial perturbations of the Peregrine breather.

[4] Title: Generalized Hermite polynomials and monodromy-free Schrodinger operators
Comments: 14 pages, 4 figures
Subjects: Exactly Solvable and Integrable Systems (nlin.SI)

We consider a class of monodromy-free Schrödinger operators with rational potentials constituted by generalized Hermite polynomials. These polynomials, defined as Wronskians of classic Hermite polynomials, appear in a number of mathematical physics problems as well as in the theory of random matrices and 1D SUSY quantum mechanics. Being quadratic at infinity, those potentials demonstrate localized oscillatory behavior near the origin. We derive an explicit condition of non-singularity of the corresponding potentials and estimate a localization range with respect to the indices of the polynomials and the distribution of their zeros in the complex plane. It turns out that 1D SUSY quantum non-singular potentials come as a dressing of the harmonic oscillator by polynomial Heisenberg algebra ladder operators. To this end, all generalized Hermite polynomials are produced by an appropriate periodic closure of this algebra, which leads to rational solutions of the Painleve IV equation. We discuss the structure of the discrete spectrum of the Schrödinger operators and its link to the monodromy-free condition.

[5] Title: Definition and Identification of Information Storage and Processing Capabilities as Possible Markers for Turing-universality in Cellular Automata
Authors: Yanbo Zhang
Comments: 16 pages, 12 figures. This paper is accepted by Complex Systems and it will be published soon (vol 27:1)
Subjects: Cellular Automata and Lattice Gases (nlin.CG)

To identify potential universal cellular automata, a method is developed to measure the information processing capacity of elementary cellular automata.
We consider two features of cellular automata: the ability to store information, and the ability to process information. We define local collections of cells as particles of cellular automata and consider the information contained by particles. Using this method, information channels and the channels' intersections can be shown. By observing these two features, potential universal cellular automata are classified into a certain class, and all elementary cellular automata can be classified into four groups, which correspond to S. Wolfram's four classes: 1) homogeneous; 2) regular; 3) chaotic and 4) complex. This result shows that using the abilities to store and process information to characterize complex systems is effective and succinct, and it is found that these abilities are capable of quantifying the complexity of systems.

[6] Title: On the Keldysh Problem of Flutter Suppression
Subjects: Chaotic Dynamics (nlin.CD); Dynamical Systems (math.DS)

This work is devoted to the Keldysh model of flutter suppression and rigorous approaches to its analysis. To solve the stabilization problem in the Keldysh model we use an analog of the direct Lyapunov method for differential inclusions. The results obtained here are compared with the results of Keldysh obtained by the method of harmonic balance (describing function method), which is an approximate method for analyzing the existence of periodic solutions. The limitations of the use of the describing function method for the study of systems with dry friction and a stationary segment are demonstrated.

[7] Title: Phenomenology of coupled non linear oscillators
Journal-ref: Chaos 28, 023110 (2018)
Subjects: Chaotic Dynamics (nlin.CD)

A recently introduced model of coupled non linear oscillators in a ring is revisited in terms of its information processing capabilities. The use of Lempel-Ziv based entropic measures allows one to study thoroughly the complex patterns appearing in the system for different values of the control parameters.
Such behaviors, resembling cellular automata, have been characterized both spatially and temporally. Information distance is used to study the stability of the system to perturbations in the initial conditions and in the control parameters. The latter is not an issue in cellular automata theory, where the rules form a numerable set, contrary to the continuous nature of the parameter space in the system studied in this contribution. The variation in the density of the digits as a function of time is also studied. Local transitions in the control parameter space are also discussed.

[8] Title: Capturing photoelectron motion with guiding fictitious particles
Comments: Physical Review Letters, American Physical Society, In press
Subjects: Chaotic Dynamics (nlin.CD); Atomic Physics (physics.atom-ph)

Photoelectron momentum distributions (PMDs) from atoms and molecules undergo qualitative changes as laser parameters are varied. We present a model to interpret the shape of the PMDs. The electron's motion is guided by a fictitious particle in our model, clearly characterizing two distinct dynamical behaviors: direct ionization and rescattering. As the laser ellipticity is varied, our model reproduces the bifurcation in the PMDs seen in experiments.

### Cross-lists for Tue, 20 Mar 18

[9]  arXiv:1803.06372 (cross-list from math.DS) [pdf, other]
Title: Stochastic basins of attraction and generalized committor functions
Subjects: Dynamical Systems (math.DS); Chaotic Dynamics (nlin.CD)

We generalize the concept of the basin of attraction of a stable state in order to facilitate the analysis of dynamical systems with noise and to assess stability properties of metastable states and long transients. To this end we examine the notions of mean sojourn times and absorption probabilities for Markov chains and study their convergence to the basin of attraction in the limiting cases.
Since any dynamical system described by a transfer operator on a compact domain can be approximated by a Markov chain, our approach is applicable to a large variety of problems.

[10]  arXiv:1803.06491 (cross-list from math-ph) [pdf, ps, other]
Title: Solutions of the $U_q(\widehat{\mathfrak{sl}}_N)$ reflection equations
Subjects: Mathematical Physics (math-ph); High Energy Physics - Theory (hep-th); Exactly Solvable and Integrable Systems (nlin.SI)

We find the complete set of invertible solutions of the untwisted and twisted reflection equations for the Bazhanov-Jimbo R-matrix of type ${\mathrm A}^{(1)}_{N-1}$. We also show that all invertible solutions can be obtained by an appropriate affinization procedure from solutions of the constant untwisted and twisted reflection equations.

[11]  arXiv:1803.06527 (cross-list from gr-qc) [pdf, other]
Title: Presence of horizon makes particle motion chaotic
Comments: 5 pages + supplementary material, 6 figures
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Chaotic Dynamics (nlin.CD)

We analyze the motion of a massless particle very near to the event horizon. It reveals that the radial motion has an exponentially growing nature, which is the signature of the presence of chaos in the particle motion. This is confirmed by investigating the Poincaré section of the trajectories with the introduction of a harmonic trap to confine the particle's motion. Two situations are investigated: (a) the black hole is {\it any} static, spherically symmetric metric and, (b) the spacetime represents a stationary, axisymmetric black hole (e.g., the Kerr metric). In both cases, the largest Lyapunov exponent has an upper bound which is the surface gravity of the horizon. We find that the inclusion of rotation in the spacetime introduces more chaotic fluctuations in the system. The possible implications are finally discussed.
[12]  arXiv:1803.06774 (cross-list from math-ph) [pdf, ps, other]
Title: Toda type equations over multi-dimensional lattices
Subjects: Mathematical Physics (math-ph); Commutative Algebra (math.AC); Exactly Solvable and Integrable Systems (nlin.SI)

We introduce a class of recursions defined over the $d$-dimensional integer lattice. The discrete equations we study are interpreted as higher dimensional extensions of the discrete Toda lattice equation. We shall prove that the equations satisfy the coprimeness property, which is one of the integrability detectors analogous to the singularity confinement test. While the degree of their iterates grows exponentially, they exhibit a pseudo-integrable nature in terms of the coprimeness property. We also prove that the equations can be expressed as mutations of a seed in the sense of the Laurent phenomenon algebra.

[13]  arXiv:1803.06857 (cross-list from cond-mat.stat-mech) [pdf, other]
Title: Anomalous heat equation in a system connected to thermal reservoirs
Comments: Main text: 5 pages. Supplementary: 9 pages. 5 figures
Subjects: Statistical Mechanics (cond-mat.stat-mech); Mathematical Physics (math-ph); Chaotic Dynamics (nlin.CD)

We study anomalous transport in a one-dimensional system with two conserved quantities in the presence of thermal baths. In this system we derive exact expressions for the temperature profile and the two-point correlations in the steady state as well as in the non-stationary state, where the latter describes the relaxation to the steady state. In contrast to the Fourier heat equation in the diffusive case, here we show that the evolution of the temperature profile is governed by a non-local anomalous heat equation. We provide numerical verifications of our results.

### Replacements for Tue, 20 Mar 18

[14]  arXiv:1506.00563 (replaced) [pdf, other]
Title: Rational degeneration of M-curves, totally positive Grassmannians and KP2-solitons
Comments: 49 pages, 10 figures.
Minor revisions
Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Mathematical Physics (math-ph)

[15]  arXiv:1612.09282 (replaced) [pdf, other]
Title: Interface networks in models of competing alliances
Comments: 7 pages, 8 figures
Subjects: Pattern Formation and Solitons (nlin.PS); Physics and Society (physics.soc-ph)

[16]  arXiv:1701.04903 (replaced) [pdf, other]
Title: The Landau-Lifshitz equation, the NLS, and the magnetic rogue wave as a by-product of two colliding regular "positons"
Comments: 25 pages, 9 figures. Added Section 7 ("7. One last remark: But what of generalization?.."), corrected a number of typos, added 2 more references
Subjects: Exactly Solvable and Integrable Systems (nlin.SI); Materials Science (cond-mat.mtrl-sci)

[17]  arXiv:1704.05276 (replaced) [pdf, other]
Title: Best reply structure and equilibrium convergence in generic games
Comments: Main paper + Supplemental Information
Subjects: Physics and Society (physics.soc-ph); Adaptation and Self-Organizing Systems (nlin.AO); Economics (q-fin.EC)

[18]  arXiv:1708.08568 (replaced) [pdf, other]
Title: How directional mobility affects biodiversity in rock-paper-scissors models
Comments: 6 pages, 10 figures
Subjects: Populations and Evolution (q-bio.PE); Adaptation and Self-Organizing Systems (nlin.AO); Biological Physics (physics.bio-ph)
https://onemathematicalcat.org/Math/Geometry_obj/two_col_pf_tri_cong.htm
# Practice with Two-Column Proofs

In this section, you will practice with two-column proofs involving the Pythagorean Theorem, triangle congruence theorems, and other tools. A couple of lengthy proofs are explored. You can print worksheets for these proofs, and practice supplying reasons for each step yourself.

The first proof in this lesson involves an ‘averaging’ method called the geometric mean, which has important applications in (for example) biology and finance. There are different types of ‘averaging’ methods in mathematics. Any type of ‘average’ is a way to replace a collection of numbers with a single number which, in some way, represents the collection. Different ‘averaging’ methods are appropriate in different situations.

## The Geometric Mean

The first ‘average’ one usually encounters is the arithmetic mean: to find the arithmetic mean of $\,N\,$ numbers, add them up and then divide by $\,N\,.$ The geometric mean, on the other hand, involves multiplying and taking roots:

DEFINITION geometric mean of two positive numbers

Let $\,a\,$ and $\,b\,$ be positive numbers. The geometric mean of $\,a\,$ and $\,b\,$ is the positive number $\,\sqrt{ab}\,.$

Thus, to find the geometric mean of two positive numbers:

- multiply the numbers together
- take the square root of the result

## Comments on the geometric mean

The definition extends to more than two numbers:

To find the geometric mean of three positive numbers: multiply them together; take the cube root of the result.
To find the geometric mean of four positive numbers: multiply them together; take the fourth root of the result.
To find the geometric mean of $\,n\,$ positive numbers: multiply them together; take the $\,n^{\text{th}}\,$ root of the result.

Take a rectangle with sides $\,a\,$ and $\,b\,,$ and re-shape it into a square without changing the area.
Then, the side length of the square is the geometric mean of $\,a\,$ and $\,b\,$:

$$\begin{gather} \text{area of rectangle is } ab\cr\cr \text{area of square is } (\sqrt{ab})(\sqrt{ab}) = ab \end{gather}$$

There are equivalent ways to view the geometric mean. Here is the formulation that naturally appears in our first proof:

Equivalent Characterization of the geometric mean

The geometric mean of $\,a\,$ and $\,b\,$ is the positive solution $\,g\,$ of the equation $\displaystyle\,\frac{a}{g} = \frac{g}{b}\,.$ The number $\,g\,$ is also referred to as the geometric mean between $\,a\,$ and $\,b\,.$

Note:  Cross-multiplying $\,\frac{a}{g} = \frac{g}{b}\,$ gives $\,g^2 = ab\,,$ from which $\,g = \pm\sqrt{ab}\,.$ So, indeed, the positive solution is the geometric mean of $\,a\,$ and $\,b\,.$

The geometric mean tends to ‘dampen’ the effect of very large numbers. For example, the arithmetic mean of $\,10\,$ and $\,100{,}000\,$ is $\,\frac{10 + 100{,}000}{2} = 50{,}005\,.$ However, the geometric mean of $\,10\,$ and $\,100{,}000\,$ is only $\,\sqrt{(10)(100{,}000)} = 1000\,.$ This ‘dampening’ property makes the geometric mean particularly useful when working with collections of numbers that have greatly varying sizes. The reason this ‘dampening’ occurs is apparent from the next characterization; it will only be accessible to you if you've already studied logarithms and properties of logarithms.
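The ‘dampening’ comparison is easy to reproduce numerically. Here is a short Python sketch (an illustration added for this writeup, not part of the original lesson):

```python
from math import prod, sqrt

def arithmetic_mean(numbers):
    """Add the numbers and divide by how many there are."""
    return sum(numbers) / len(numbers)

def geometric_mean(numbers):
    """Multiply the numbers and take the n-th root of the product."""
    return prod(numbers) ** (1 / len(numbers))

# The lesson's example: 10 and 100,000
print(arithmetic_mean([10, 100_000]))   # 50005.0
print(geometric_mean([10, 100_000]))    # about 1000: the huge value is 'dampened'

# The two-number case agrees with the sqrt(ab) definition:
print(sqrt(10 * 100_000))               # 1000.0
```

Note that `math.prod` requires Python 3.8 or newer.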
Equivalent Characterization of the geometric mean

The geometric mean of positive numbers $\,a\,$ and $\,b\,$ can be computed as follows:

- Average the base-ten logarithms of the numbers:

$$y = \frac{\log_{10} a + \log_{10}b}2 = \frac 12\log_{10}(ab) = \log_{10} (ab)^{1/2} = \log_{10}\sqrt{ab}$$

- Then, the geometric mean is obtained by ‘undoing’ this logarithmic average:

$$\text{geometric mean} = 10^y = 10^{\log_{10}\sqrt{ab}} = \sqrt{ab}$$

## Proof #1: Right Triangles and the Geometric Mean

The geometric mean makes an interesting appearance in right triangles. If you drop an altitude from the right-angle vertex to the hypotenuse, the hypotenuse is thus split into two segments. The length of the altitude is the geometric mean between these two segments! And, as the next proof shows, the converse is also true: if an altitude of a triangle is the geometric mean between the two segments of the side it hits, then the triangle must be a right triangle.

Sometimes, substitutions are a bit difficult to follow. Coloring is included in the beginning of the next proof to help you see what is being substituted for what. A worksheet follows the proof; you can practice supplying reasons for each step.

GIVEN:
$\overline{CB}\perp \overline{AD}$  (That is, $\,\overline{CB}\,$ is an altitude of the triangle.)
$\displaystyle\frac{AB}{BC} =\frac{BC}{BD}$  (That is, $\,BC\,$ is the geometric mean between $\,AB\,$ and $\,BD\,.$)

PROVE:  $\angle ACD\,$ is a right angle

STRATEGY:  Show that $\,(AC)^2 + (CD)^2 = (AB+BD)^2$

PROOF:

|     | STATEMENTS | REASONS |
|-----|------------|---------|
| 1.  | $\overline{CB} \perp \overline{AD}$ | given |
| 2.  | $\Delta ABC\,$ is a right triangle with hypotenuse $\,\overline{AC}$ | definitions: right triangle, hypotenuse |
| 3.  | $\color{red}{(AB)^2 + (BC)^2 = (AC)^2}$ | Pythagorean Theorem |
| 4.  | $\Delta DBC\,$ is a right triangle with hypotenuse $\,\overline{CD}$ | definitions: right triangle, hypotenuse |
| 5.  | $\color{blue}{(BD)^2+ (BC)^2 = (CD)^2}$ | Pythagorean Theorem |
| 6.  | $\color{red}{(AC)^2} + \color{blue}{(CD)^2} = \color{red}{(AB)^2 + (BC)^2} + \color{blue}{(BD)^2 + (BC)^2}$ | substitution (steps 3 and 5) |
| 7.  | $(AC)^2+(CD)^2 =(AB)^2+2(BC)^2+(BD)^2$ | combine like terms |
| 8.  | $\displaystyle\frac{AB}{BC} =\frac{BC}{BD}$ | given |
| 9.  | $(AB)(BD)= (BC)^2$ | multiply both sides by $\,(BC)(BD)\,$ (i.e., cross-multiply) |
| 10. | $2(AB)(BD)=2(BC)^2$ | multiply both sides by $\,2$ |
| 11. | $(AC)^2+(CD)^2 =(AB)^2+2(AB)(BD)+(BD)^2$ | substitution (steps 7 and 10) |
| 12. | $(AB+BD)^2 =(AB)^2+2(AB)(BD)+(BD)^2$ | FOIL |
| 13. | $(AC)^2+(CD)^2 =(AB+BD)^2$ | substitution (steps 11 and 12) |
| 14. | $AB+BD=AD$ | $\,B\,$ is between $\,A\,$ and $\,D$ |
| 15. | $(AC)^2+(CD)^2=(AD)^2$ | substitution (steps 13 and 14) |
| 16. | $\Delta ACD\,$ is a right triangle with hypotenuse $\,\overline{AD}\,$; so, $\,\angle ACD\,$ is a right angle | converse of the Pythagorean Theorem (step 15) |

You can print a proof with a blank REASONS column, for practice.

## Proof #2

Here is a second proof, which gives you practice with some different tools. Again, you can create a worksheet with a blank ‘Reasons’ column, so you can supply the reasons yourself.

GIVEN:
$AB=AC$
$D\,$ is the midpoint of $\,\overline{AB}$
$E\,$ is the midpoint of $\,\overline{AC}$

PROVE:  $DF=EF$

STRATEGY:  Show that $\,\Delta BDF\cong \Delta CEF$

PROOF:

|     | STATEMENTS | REASONS |
|-----|------------|---------|
| 1.  | $AB=AC\,$; $\,D\,$ is the midpoint of $\,\overline{AB}\,$; $\,E\,$ is the midpoint of $\,\overline{AC}$ | given |
| 2.  | $m\angle ACB= m\angle ABC$ | angles opposite equal sides have equal measures |
| 3.  | $BD=\frac{1}{2}AB\,$ and $\,CE=\frac{1}{2}AC$ | definition of midpoint |
| 4.  | $BD=CE$ | substitution (steps 1 and 3) |
| 5.  | $\overline{BC}\cong \overline{CB}$ | reflexive property (congruence is an equivalence relation on the set of geometric figures) |
| 6.  | $\Delta DBC\cong \Delta ECB$ | SAS (steps 2, 4, 5) |
| 7.  | $m\angle BCD =m\angle CBE$ | CPCTC |
| 8.  | $BF=CF$ | sides opposite angles of equal measure are equal |
| 9.  | $m\angle ABE + m\angle CBE = m\angle ABC\,$ and $\,m\angle ACD + m\angle BCD = m\angle ACB$ | angle addition |
| 10. | $m\angle ABE + m\angle BCD = m\angle ACD + m\angle BCD$ | substitution (steps 2, 7, and 9) |
| 11. | $m\angle ABE = m\angle ACD$ | addition property of equality (subtract $\,m\angle BCD\,$ from both sides) |
| 12. | $\angle DFB\cong \angle EFC$ | vertical angles are congruent |
| 13. | $\Delta BDF\cong \Delta CEF$ | ASA (steps 8, 11, 12) |
| 14. | $DF=EF\,$ (or, $\,\overline{DF} \cong \overline{EF}$) | CPCTC |

You can print a proof with a blank REASONS column, for practice.

In the exercises, you will also practice doing shorter two-column proofs; you will need to supply both the statements and reasons.

In your proofs, feel free to use statements like  $\,\overline{AB}\cong \overline{CD}\,$  and  $\,AB=CD\,$  totally interchangeably, since two segments are congruent if and only if they have the same length. However, be careful to use the verb ‘$\,\cong \,$’ to compare geometric figures, and the verb ‘$\,=\,$’ to compare numbers. Similarly, feel free to use statements like  $\,\angle ABC\cong \angle DEF\,$  and  $\,m\angle ABC = m\angle DEF\,$  totally interchangeably, since two angles are congruent if and only if they have the same measure.
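The relationship at the heart of Proof #1 can also be sanity-checked with a quick numeric computation. In this hypothetical Python check (added for this writeup), we pick the two segments of the hypotenuse, set the altitude to their geometric mean, and confirm that the resulting triangle satisfies the Pythagorean relation:

```python
from math import sqrt, isclose

# Choose the two segments of the hypotenuse (any positive values work).
AB, BD = 4.0, 9.0

# Altitude = geometric mean of the two segments.
BC = sqrt(AB * BD)                     # 6.0 for this choice

# Legs of the big triangle, from the Pythagorean Theorem on the small triangles.
AC = sqrt(AB**2 + BC**2)
CD = sqrt(BD**2 + BC**2)

# Converse of the Pythagorean Theorem: the big triangle is right-angled.
AD = AB + BD
print(isclose(AC**2 + CD**2, AD**2))   # True
```

Changing `AB` and `BD` to any other positive values leaves the final check `True`, which is exactly what the proof guarantees.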
https://www.shaalaa.com/question-bank-solutions/the-equation-hyperbola-whose-foci-are-6-4-4-4-eccentricity-2-hyperbola-standard-equation-hyperbola_58360
# The Equation of the Hyperbola Whose Foci Are (6, 4) and (−4, 4) and Eccentricity 2, is - Mathematics

MCQ

The equation of the hyperbola whose foci are (6, 4) and (−4, 4) and eccentricity 2, is

#### Options

- $\frac{(x - 1 )^2}{25/4} - \frac{(y - 4 )^2}{75/4} = 1$
- $\frac{(x + 1 )^2}{25/4} - \frac{(y + 4 )^2}{75/4} = 1$
- $\frac{(x - 1 )^2}{75/4} - \frac{(y - 4 )^2}{25/4} = 1$
- none of these

#### Solution

$\frac{(x - 1 )^2}{25/4} - \frac{(y - 4 )^2}{75/4} = 1$

The centre of the hyperbola is the midpoint of the line segment joining the two foci. So, the coordinates of the centre are $\left( \frac{6 - 4}{2}, \frac{4 + 4}{2} \right), \text{ i.e. } \left( 1, 4 \right).$

Let $2a$ and $2b$ be the lengths of the transverse and the conjugate axes, respectively, and let $e$ be the eccentricity. Then the equation of the hyperbola is

$\frac{\left( x - 1 \right)^2}{a^2} - \frac{\left( y - 4 \right)^2}{b^2} = 1$

Now, the distance between the two foci is $2ae$:

$2ae = \sqrt{\left( 6 + 4 \right)^2 + \left( 4 - 4 \right)^2}$

$\Rightarrow 2ae = 10$

$\Rightarrow ae = 5$

$\Rightarrow a = \frac{5}{2} \quad \left( \text{since } e = 2 \right)$

$\text{Also}, \ b^2 = \left( ae \right)^2 - a^2$

$\Rightarrow b^2 = 25 - \frac{25}{4}$

$\Rightarrow b^2 = \frac{75}{4}$

The equation of the hyperbola is therefore

$\frac{\left( x - 1 \right)^2}{25/4} - \frac{\left( y - 4 \right)^2}{75/4} = 1$

#### APPEARS IN

RD Sharma Class 11 Mathematics Textbook
Chapter 27 Hyperbola
Q 14 | Page 19
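As a quick sanity check on the algebra, the following Python snippet (an illustration, not part of the textbook solution) recovers the foci and the eccentricity from the derived equation:

```python
from math import sqrt

# From the derived equation: centre (1, 4), a^2 = 25/4, b^2 = 75/4
cx, cy = 1.0, 4.0
a2, b2 = 25 / 4, 75 / 4

# For a hyperbola, c^2 = a^2 + b^2 and e = c / a
c = sqrt(a2 + b2)                      # 5.0
e = c / sqrt(a2)                       # 2.0

foci = [(cx + c, cy), (cx - c, cy)]
print(foci)                            # [(6.0, 4.0), (-4.0, 4.0)]
print(e)                               # 2.0
```

Both foci and the eccentricity match the data given in the question, confirming the first option.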
https://mathspace.co/textbooks/syllabuses/Syllabus-452/topics/Topic-8353/subtopics/Subtopic-109845/?activeTab=theory
# Comparisons using models II

Lesson

We know that fractions show a number of equal-size pieces (denominator) of a whole. For example, eighths are showing a number where eight equal-size pieces will make a whole. The number of parts selected (numerator) shows the value of the fraction.

## Comparing the size of fractions

We can compare the size of fractions by using shapes to look at how large each fraction is. Watch this video to see how.

Remember! When the fractions have the same size pieces (denominator), we can compare their size simply by looking at how many pieces are in the fraction (numerator).

#### Example

##### Question 1: Which fraction is smaller?

A. $\frac{3}{6}$
B. $\frac{4}{6}$

Both fractions are made of sixths, so the fraction with fewer pieces, $\frac{3}{6}$, is smaller.

## Complements to one whole

Fractions are showing a number of parts (numerator) and the size of the parts (denominator). The denominator shows how many parts make one whole. If we want to know how many more parts are needed to make one whole, we count how many pieces it takes to get from the number we have up to the number that makes one whole. Watch this video to see how.

#### Example

##### Question 2: If I have 1 eighth, how many more eighths do I need to make a whole?

Eight eighths make one whole, so I need $7$ more eighths.

Remember! When comparing fractions, if the denominator is the same, then we compare the numerator. The denominator tells us how many parts make up one whole.
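Both ideas in this lesson can be illustrated with Python's built-in `fractions` module (a sketch added for this writeup, not part of the original lesson):

```python
from fractions import Fraction

# Comparing fractions with the same size pieces: compare the numerators.
print(Fraction(3, 6) < Fraction(4, 6))   # True: 3 sixths is smaller than 4 sixths

# Complement to one whole: how many more eighths make 1?
have = Fraction(1, 8)
needed = 1 - have
print(needed)                            # 7/8, i.e. 7 more eighths
```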
https://forum.allaboutcircuits.com/threads/glcd-graphing-advice-wanted.157089/
#### jpanhalt

Joined Jan 18, 2008 8,314

LCD = 132x64
LCD Controller = ST7567
Interface = SPI
MCU = PIC 16F enhanced, Fosc = 32 MHz, SPI = Fosc/4

The SPI interface does not allow reading DDRAM of the display. I am trying to graph Temp (Y-axis) v. Time (abscissa). Drawing the screen is relatively easy, and plotting points on the Y-axis is not a problem, as the bytes (columns) can simply be offset to avoid messing up that axis and tick marks. The tick marks are at 5-pixel intervals. The X-axis is a different story. Calculating that axis to determine what needs to be re-drawn can be done, but is messy. The ST7567 has only one controller for the entire screen, and it is very fast (no delays are needed). I would like to keep those advantages. Here's a draft of what the screen template will look like. Up to three lines of additional data will be added to the right half of the screen. Ignore the unit spacings.

Options I have considered:

1) Get a version that allows reading DDRAM. That will require a parallel interface for the ST7567.
2) Save an image of the display in program memory and update the whole display. The graph is currently 64x64 pixels. That is 8x64 = 512 bytes. A full write of the screen takes about 3.5 ms. Just half of the screen would be less.
3) Save only the bottom row and re-write that as needed. That will take a few more lines of code, but will be faster than doing option #2.
4) Move the abscissa to the "top" of a row and don't mess with it. That reduces the Y-axis range from about 64 to 56. That would allow a label of the axis.

I am tending toward options #1 or #4. Are there other alternatives? Opinions?

Regards, John

#### ericgibbs

Joined Jan 29, 2010 9,060

hi john,
When using a colour TFT version I added some special characters to the AlphaNumeric 'include' file. You could create a special symbol [or short string] that you call 'n' times to print a full horizontal line, with an X-axis scale.

E

#### jpanhalt

Joined Jan 18, 2008 8,314

Hi Eric,
Thanks for the input. That is basically how I drew the axes. The problem is that the X-axis is in the middle of a vertical byte. If I change one pixel, I have to know what the byte was originally, which I can only do by reading it or by calculating its position on the abscissa. It's not the line per se that is hard to do, it's the tick marks. After more thought, I think I will just push the line and tick marks to the top of those bytes and add a label ("TIME") made with a special, short font. I am also considering color for version 2, but first I need to get a prototype to my daughters.

John

#### jpanhalt

Joined Jan 18, 2008 8,314

Individual pixels can be identified, for example the tick marks are individual pixels, but I cannot write an individual pixel without writing the whole byte it is in. If I could read a byte, then set a bit and write it back, that would be easy, and is what I did on another GLCD. Calculating what is in a byte gets messy. I looked at the link you gave. I can't read C, but it appears to be a demo program for drawing shapes. It refers to a memory buffer. I am not sure, but that may serve the same purpose as my options 2 and 3. For just one row, it is not that much (about 64 bytes). I could then read, modify (set a pixel), and write from the row that has the X-axis and tick marks. Although the Y-axis template was the hardest to calculate, from the standpoint of this issue, it is the easiest to address. I just add 2 to whatever column I am writing to and avoid the Y-axis completely (i.e., the X-axis data will be offset by 2). The X-axis can be calculated (mod 5) to find the tick marks (or saved in program memory as mentioned above), but I think I will just add a label with a short font and lose a little spread for the Y-axis. I did find a 3x5 pixel font (4x6 if spaces included), which will work out fine for the max of 2 pixels used for the axis. We will see how it looks. A follow-up will follow.

John

Edit: Here's a link to the tiny font. Looks pretty good and is open-license: https://robey.lag.net/2010/01/23/tiny-monospace-font.html

#### MrChips

Joined Oct 2, 2009 19,729

Do the bytes correspond to x-rows or y-columns? Why not save an image of the bytes written (for the scale only) and then read/modify/write the bytes.

#### jpanhalt

Joined Jan 18, 2008 8,314

Bytes correspond to columns. A 1-pixel wide column is composed of 8 bytes to give the 64-pixel height. Individual columns can be addressed (it's a 9-bit value, so it requires a two-command sequence). That part is easy. Sixty-four bytes L to R make a row. Rows are easily accessed, but "lines" (say, one row of pixels across the screen) cannot be accessed/written. One must write the whole byte or 64 bytes in total. That arrangement is similar to other GLCD's I have (e.g., NHD12864MZ w/ KS0108B controllers). The difference is that with a parallel interface you can read the byte, modify and re-write it. That is effectively the same as being able to access individual pixels top to bottom.

#### Ian Rogers

Joined Dec 12, 2012 682

How much memory has the pic16 got... If it has enough, I would definitely use the 512 byte buffer and just re-write the graph area...

#### MrChips

Joined Oct 2, 2009 19,729

Hence, for your 64 x 64 graph, you need to access 64 bytes across the bottom of the graph. Do you have 64 bytes of RAM on the PIC to spare and simply rewrite all 64 bytes of the GLCD?

#### jpanhalt

Joined Jan 18, 2008 8,314

Right now, RAM is probably not an issue. I am using a 16F1829 with 1024 bytes linear RAM and 16K of program memory for this part of the project. For the thermometer part, I used the lowly 16F1783, because I had it and wanted 28 pins. That device is available up to the 16F1789 with 2048 bytes of linear RAM and 16K program flash. I am not suggesting writing to program memory during run time. It would be used for making a template or, like Eric suggested, as a very special font.

I added a label to the abscissa (nicknamed tom_thumb by its creator) just to see and modified the font a little, but am still not happy with the asymmetric "T" and fat "m" and "n." It was not too hard to do, since the line and tick marks are the lower nibble and the characters are mostly the high nibble. It was just a matter of writing the characters, then adding them to the bytes -- much as one would do if the display could be read. It's the "e" that forces the other l.c. characters to be bigger. Also, I did not bother adjusting the Y-axis. Here's a rough version:

This weekend, I will give a large RAM mirror a try or at least a mirror for the abscissa.

Thanks, John

#### ericgibbs

Joined Jan 29, 2010 9,060

hi john,
That small font makes it look awful. May I ask what type of user is viewing the display?

E

#### jpanhalt

Joined Jan 18, 2008 8,314

Type of user = 3 wonderful daughters, one son + me. They won't complain about it, but I agree. That font does not look good.

John

#### ericgibbs

Joined Jan 29, 2010 9,060

hi,
Will the TFT be mounted in a bezel surround? If so, and if the X-axis is always Time, perhaps a 'Time' label printed on the bezel may be an option. The actual scale will be written to the TFT.

E

#### Ian Rogers

Joined Dec 12, 2012 682

Here's my tiny font... If it helps

#### MrChips

Joined Oct 2, 2009 19,729

I don't like the squished fonts. What's wrong with using the better looking fonts?

#### jpanhalt

Joined Jan 18, 2008 8,314

A year or so ago, I got a box, buttons, and display from Pax Instruments: I got none of the electronics, but really liked the compactness and the way the screen fit into the box. Everything fit together very nicely. Adafruit sold it for awhile, but it is discontinued and Pax no longer seems to be in that business. Here's the Adafruit link: https://www.adafruit.com/product/3081 I believe that animation is done by scrolling. If I make more, I will have to get another enclosure and machine it. The soft button covers are really nice, but very hard to find. I think Pax had them molded in China. I was planning to play with the font a little this afternoon. The only reason to use squished fonts is to keep the abscissa template and characters in the same byte-high row. I am thinking of some other ways, including full size fonts and scrolling, to solve the problem of too small a range in both X and Y.

#### jpanhalt

Joined Jan 18, 2008 8,314

That 5x5 might be worth a try. The l.c. "e" and other rounded letters are the problem. All caps helps, as do smaller pixels that I don't have. I also played around and got rid of "Time" like this: Then squared up the remaining letters. Horizontal space is not a problem, so the 5x5 may work better.

John

#### jpanhalt

Joined Jan 18, 2008 8,314

UPDATE

After a lot of interruptions and a little "quality time for coding," I can report some progress:

1) Linear memory (RAM) to save and retrieve data points works great. Rather than have one file for each row, I have a single file of 540 bytes (9 hours at 1 sample per minute) starting at 0x2020.
2) A plan to pre-calculate the Y-rows and bytes before saving to linear RAM was scrapped, as it saved a minuscule amount of time/instructions. The linear file will be just sequential temperatures rounded to the nearest degree.
3) The original plan to keep X-data as part of the graph data was scrapped. X-data are time increments of 1 minute. The clock does that, and X increments with each write. I haven't finished the scroll routines yet, but my plan is just to add an offset, then re-calculate and write the entire graph (7 pages x 60 columns approx.). Scrolling will be in both X and Y as needed. The axes are created by a template that, for now, doesn't change.
4) The fact that pages only wrap end column to beginning column, not page up and down, is a bit of a pain. That requires that the bytes for each page be recalculated for each page change.

I am using rotates as the simplest way to convert a bit number to a byte with that bit set. For example, a value of 83° offset by 50 = 33° = b'0010 0011'. That needs to be converted to page = 2, byte = 0x08. In other words, the byte is mod 8 with the lsb at top. Let's say I scroll the graph part of the screen down 10°. Now, 83 becomes 23°, which is page = 4, byte = b'0000 0010'. Rotations make that pretty easy to visualize.

5) I also scrapped labeling the axes, as I just didn't like the way they looked. Besides, the right half of the display will show current temperature, final temperature, rate, elapsed time, and expected time to finish. I may add ordinary characters to the axes to show the current offsets, but am leaning against adding that clutter.
6) The last major challenge is making the graph look like a continuous line. This demo starts at 1°/min, then increases to 2°/min at about 15 on the Y-axis, and finally goes to 3°/min at about 30 on the Y-axis. Of course, the 1°/min can't be improved upon without making a very thick line. At 2°/min it is still readable. I am not really happy with 3°/min and am considering making elongated data points, say 2 or 3 pixels. Haven't done anything on that yet. Page crossings will make it more complicated. Three degrees per minute is pretty uncommon in my experience, except with small roasts, like a rack of lamb. Any opinions on the readability of 3°/min?

In case someone notices that I have not considered setting the start line for scrolling the Y-axis, that is because the right half of the display is being used for characters, and I don't want them to scroll too.

Since this sub-forum is about MCU's, here's the code I used to calculate pages/rows and bytes:

Code:
```
Mod8                            ;since tempF is in WREG, might re-write and save a step?
        movwf   tempF           ;f = rrrrrbbb
        lslf    tempF,f         ;f = rrrrbbb0 , discard msb
        movlw   0xF0            ;
        andwf   tempF,w         ;w = rrrr0000, rrrr =< 0x60
        xorwf   tempF,f         ;f = 0000bbb0
        lsrf    tempF,f         ;f = 00000bbb
        sublw   0x60            ;w = rrrr0000 w/reversed order, 6->0 ... 0->6
        btfss   STATUS,0        ;
        bra     ScrollY         ;rrrr > 0x60, increase Y offset, redraw LCD graph
;03/12 delete following 3 instructions & uncomment "xorwf" line, if saving byte
        swapf   WREG            ;w = rrrr0000 -> 0000rrrr
        call    PutCMD          ;sets page and returns in |B4 w/A0 clear
;       xorwf   tempF,w         ;w = rrrr0bbb, ready to put to RAM (if wanted)
;       movwf   tempF
MakeByte                        ;enter with b'00000bbb' in tempF
;       movlw   0x07            ;if tempF = rrrr0bbb, uncomment
;       andwf   tempF,f         ; "
        incf    tempF,f         ;f = bit # +1
        clrw                    ;
        bsf     STATUS,0        ;
ByteLoop
        rrf     WREG            ;@rot1 = f = 1000 0000
        decfsz  tempF,f         ;@rot3 = f = 0010 0000
        bra     ByteLoop        ;@rot5 = f = 0000 1000
        bra     PutDat          ;@rot8 = f = 0000 0001
;PutDat returns to original call
```

#### jpanhalt

Joined Jan 18, 2008 8,314

After more thought, I decided the excuse in post #19 is a bit lame. The display is readable, but was relatively easily changed to this: That looks a little more readable, and 3°/min is quite uncommon in my experience with roasts. The code turns out to be similar to the previous snippet in that rotations are used to calculate bytes and rows. The biggest difference is resetting the column when a single "byte" crosses a page boundary before putting the rest of that byte to the display. Sure would be nice to be able to read the display for that purpose. Instead, I used an FSRn to keep an index of the columns.

EDIT: Does anyone know of a graphical display controller where one can inhibit auto-increment for writes? (Inhibiting auto-increment for reads is common.)

John

Last edited:
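For readers who want to experiment with the thread's two key ideas off-hardware, here is a generic Python model: a RAM mirror for a write-only display, and splitting an ordinate into a page index plus a one-hot byte mask. This is only an illustration, not jpanhalt's PIC code; the names are made up, and the page ordering and bit direction on a real panel depend on how it is mounted (the assembly above also reverses the page order so the origin sits at the bottom):

```python
# A tiny model of a write-only GLCD graph area: 64 columns x 8 pages,
# mirrored in RAM because the SPI interface cannot read DDRAM back.
COLS, PAGES = 64, 8
mirror = [bytearray(COLS) for _ in range(PAGES)]   # our readable copy

def y_to_page_and_mask(y):
    """Map an ordinate 0..63 to (page, one-hot byte mask)."""
    if not 0 <= y <= 63:
        raise ValueError("ordinate out of range")
    return y >> 3, 1 << (y & 0x07)

def set_pixel(col, y):
    """Read/modify/write in the mirror; the changed byte would be re-sent over SPI."""
    page, mask = y_to_page_and_mask(y)
    mirror[page][col] |= mask      # OR keeps any axis/tick bits already in the byte
    return page, mirror[page][col]

print(y_to_page_and_mask(19))      # (2, 8): page 2, bit 3 of the byte
print(set_pixel(5, 19))            # (2, 8): first write to that byte
```

Because the mirror is the authoritative copy, a second pixel written into the same byte simply ORs in another bit, which is exactly the read-modify-write behavior a parallel interface would give for free.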
http://math.stackexchange.com/questions/207001/prove-log-bfx-is-big-theta-log-fx?answertab=active
# Prove $\log_b f(x)$ is big-theta of $\log f(x)$

How can I prove that $\log_b f(x)$ is big-theta of $\log f(x)$ for any constant $b > 1$?

Hint: Note that $$\log_b(y)=\frac{\log y}{\log b}.$$

Remark: Slightly more generally, we have the change of base formula $$\log_b(y)=\frac{\log_a y}{\log_a b}.$$ This can be rewritten as $\log_a y=(\log_a b)(\log_b y)$, and then verified by raising $a$ to the power of each side.
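Completing the hint's argument: for any constant $b > 1$, set $c = 1/\log b$, a positive constant. Then $$\log_b f(x) = \frac{\log f(x)}{\log b} = c\,\log f(x),$$ so $c\,\log f(x) \le \log_b f(x) \le c\,\log f(x)$ for all $x$. Taking $c_1 = c_2 = c$ in the definition of big-theta gives $\log_b f(x) = \Theta(\log f(x))$. (For the definition to apply cleanly, assume $f(x) > 1$ for large $x$, so that $\log f(x) > 0$.)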
http://mathhelpforum.com/algebra/110944-needing-help-question.html
# Math Help - Needing help in this question

1. ## Needing help in this question

Q. The number of distinct pairs $(x, y)$ of real numbers that satisfy $x=x^3+y^4$ and $y=2xy$.

A-5  B-12  C-3  D-7

Don't laugh, anyone, at such an elementary question... help!!

2. Originally Posted by findmehere.genius

Q. The number of distinct pairs $(x, y)$ of real numbers that satisfy $x=x^3+y^4$ and $y=2xy$.

A-5  B-12  C-3  D-7

I've found 5 pairs of real numbers (and 2 pairs of complex numbers).

$\left|\begin{array}{rcl}x&=&x^3+y^4 \\y&=&2 x y\end{array}\right.$ $\implies$ $\left|\begin{array}{rcl}x(1-x^2)-y^4&=&0 \\y(1-2x)&=&0\end{array}\right.$

All results with y = 0 satisfy the second equation. From the first equation you'll get 3 different values of x if y = 0. From the second equation you find out that $x = \frac12$ is the other possible x-value. Plug this value into the first equation and calculate the corresponding y-values.

3. Thanks for the answer... now some paperwork is needed to get them into my head.
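For anyone wanting to check the count, here is a quick numerical verification of the five real pairs (my own sketch, not part of the original thread):

```python
import math

# Case y = 0: x = x^3 gives x in {0, 1, -1}.
# Case x = 1/2: 1/2 = 1/8 + y^4 gives y^4 = 3/8, so y = +/-(3/8)**0.25.
y_star = (3.0 / 8.0) ** 0.25
solutions = [(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0),
             (0.5, y_star), (0.5, -y_star)]

for x, y in solutions:
    # Both equations should hold to floating-point accuracy.
    assert math.isclose(x, x**3 + y**4, abs_tol=1e-12)
    assert math.isclose(y, 2 * x * y, abs_tol=1e-12)

print(len(solutions))  # → 5, matching answer A
```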
http://perimeterinstitute.ca/fr/conferences/infrared-problems-qed-and-quantum-gravity?qt-pi_page_blocks_quicktabs=3
# Infrared Problems in QED and Quantum Gravity

Conference Date: Wednesday, December 7, 2016 (all day) to Thursday, December 8, 2016 (all day)
Scientific Areas: Quantum Gravity

Infrared problems in quantum field theory lead to many surprising phenomena in scattering theory (e.g., polarization of the vacuum, breaking of Lorentz symmetry, delocalization of the charge), due to the existence of massless gauge bosons (photons in QED). Although there are well-tested algorithms for computing the probabilities of experimental scattering processes, the conceptual understanding of the problem is still very poor. There are strong indications that the same, or even worse, conceptual problems would have to be solved in quantum gravity, due to the presence of the massless graviton. Recently, there have been some advances in understanding the infrared problem in QED, coming from different research communities. The aim of this workshop is to bring together key researchers from these communities who work on infrared problems and to give them the opportunity to discuss and start new interdisciplinary collaborations.
Speakers:
• Abhay Ashtekar, Pennsylvania State University
• Miguel Campiglia, Universidad de Montevideo
• Maximilian Duell, Technical University of Munich
• Wojciech Dybalski, Technical University of Munich
• Laurent Freidel, Perimeter Institute
• Barak Gabai, Perimeter Institute
• Andrzej Herdegen, Cracow Jagiellonian University
• Sebastian Mizera, Perimeter Institute
• Aldo Riello, Perimeter Institute
• Burkhard Schwab, Harvard University

Participants:
• Abhay Ashtekar, Pennsylvania State University
• Freddy Cachazo, Perimeter Institute
• Miguel Campiglia, Universidad de Montevideo
• Sylvain Carrozza, Perimeter Institute
• Lin-Qing Chen, Perimeter Institute
• Bianca Dittrich, Perimeter Institute
• Maximilian Duell, Technical University of Munich
• Wojciech Dybalski, Technical University of Munich
• Laurent Freidel, Perimeter Institute
• Barak Gabai, Perimeter Institute
• Marc Geiller, Perimeter Institute
• Henrique Gomes, Perimeter Institute
• Daniel Guariento, Perimeter Institute
• Andrzej Herdegen, Cracow Jagiellonian University
• Sebastian Mizera, Perimeter Institute
• Kasia Rejzner, Perimeter Institute & University of York
• Aldo Riello, Perimeter Institute
• Laura Sberna, Perimeter Institute
• Burkhard Schwab, Harvard University
• Barak Shoshany, Perimeter Institute
• Vasudev Shyam, Perimeter Institute
• Lee Smolin, Perimeter Institute

Wednesday, December 7, 2016
Time | Event | Location
8:30 – 9:00am | Registration | Reception
9:00 – 9:05am | Kasia Rejzner, Perimeter Institute & University of York: Welcome and Opening Remarks | Sky Room
9:05 – 10:05am | Abhay Ashtekar, Pennsylvania State University: Null Infinity, BMS Group and Infrared Sectors | Sky Room
10:05 – 10:30am | Coffee Break | Bistro – 1st Floor
10:30 – 11:30am | Burkhard Schwab, Harvard University: Large gauge symmetries and black hole absorption rates | Sky Room
11:30 – 12:30pm | Barak Gabai, Perimeter Institute: Large Gauge Symmetries and Asymptotic States in QED | Sky Room
12:30 – 2:30pm | Lunch | Bistro – 1st Floor (reserved table)
2:30 – 3:30pm | Wojciech Dybalski, Technical University of Munich: Non-relativistic QED in different gauges | Sky Room
3:30 – 4:30pm | Andrzej Herdegen, Cracow Jagiellonian University: Asymptotic structure of electrodynamics revisited | Sky Room
4:30 – 5:00pm | Coffee Break | Bistro – 1st Floor
5:00pm | Open Discussion | Sky Room

Thursday, December 8, 2016
Time | Event | Location
9:00 – 10:00am | Maximilian Duell, Technical University of Munich: Scattering of atoms and non-locality of the vacuum in QED | Sky Room
10:00 – 10:30am | Coffee Break | Bistro – 1st Floor
10:30 – 11:30am | Aldo Riello, Perimeter Institute: TBA | Sky Room
11:30 – 12:30pm | Miguel Campiglia, Universidad de Montevideo: $U(1)$ asymptotic charges and soft photons | Sky Room
12:30 – 2:30pm | Lunch | Bistro – 1st Floor (reserved table)
2:30 – 3:30pm | Sebastian Mizera, Perimeter Institute: Soft Theorems from Riemann Spheres | Sky Room
3:30 – 4:30pm | Laurent Freidel, Perimeter Institute: TBA | Sky Room
4:30 – 5:00pm | Coffee Break | Bistro – 1st Floor
5:00pm | Open Discussion | Sky Room

Abhay Ashtekar, Pennsylvania State University
Null Infinity, BMS Group and Infrared Sectors
I will provide a broad overview of the relation between the structure of null infinity and infrared sectors for the Maxwell theory and full, non-linear general relativity. I hope this talk will serve as an introduction for the talks that will follow.

Miguel Campiglia, Universidad de Montevideo
$U(1)$ asymptotic charges and soft photons
In the first part of the talk I will describe how the subleading soft photon theorem can be understood as a Ward identity of a singular $O(r)$ large gauge symmetry at null infinity. In the second part I will present a space-infinity description of the $O(1)$ large gauge symmetry that arises in the context of the leading soft photon theorem.

Maximilian Duell, Technical University of Munich
Scattering of atoms and non-locality of the vacuum in QED
In the setting of algebraic QFT we give a mathematically rigorous construction of the scattering matrix for massive Wigner particles in the presence of massless excitations.
Our analysis may be applied, in particular, to the scattering of electrically neutral particles in QED. In contrast to previous approaches, we do not impose any technical assumptions on the spectrum of the mass operator near the particle masses. Instead, our approach relies on non-local features of the relativistic vacuum state which are similar to the well-established Reeh-Schlieder property.

Wojciech Dybalski, Technical University of Munich
Non-relativistic QED in different gauges
Charges localized at spacelike infinity are a traditional ingredient of discussions of the infrared problems in QED in mathematical physics. It is an old conjecture that these charges depend on the gauge fixing in the quantization procedure. In this talk I will discuss this problem in a non-relativistic model of QED. I will show how to pass from the usual Coulomb gauge to the axial gauge, compute the charges in both cases, and give arguments in favour of the above conjecture. Rigorous mathematical conclusions are hindered (so far) by severe infrared problems in the axial gauge.

Barak Gabai, Perimeter Institute
Large Gauge Symmetries and Asymptotic States in QED
Large Gauge Transformations (LGT) are gauge transformations that do not vanish at infinity. Instead, they asymptotically approach arbitrary functions on the conformal sphere at infinity. Recently, it was argued that the LGT should be treated as an infinite set of global symmetries which are spontaneously broken by the vacuum. It was established that in QED, the Ward identities of their induced symmetries are equivalent to the Soft Photon Theorem. In this paper we study the implications of LGT on the S-matrix between physical asymptotic states in massive QED. As opposed to the naively free scattering states, physical asymptotic states incorporate the long-range electric field between asymptotic charged particles and were already constructed in 1970 by Kulish and Faddeev. We find that the LGT charge is independent of the particles' momenta and may be associated with the vacuum. The soft theorem's manifestation as a Ward identity turns out to be an outcome of not working with the physical asymptotic states.

Andrzej Herdegen, Cracow Jagiellonian University
Asymptotic structure of electrodynamics revisited
The lecture presents a personal view on the asymptotic structure of electrodynamics. Asymptotic variables form an algebra, in which infrared (long-range) degrees of freedom count among full-fledged observables, not merely superselection labels.

Sebastian Mizera, Perimeter Institute
Soft Theorems from Riemann Spheres
I will review the reformulation of the S-matrix in terms of Riemann spheres due to Cachazo, He, and Yuan. I will show how it sheds new light on the derivation of Weinberg soft theorems for General Relativity and Yang-Mills theory, as well as allowing the study of the soft behaviour of other quantum field theories.

Burkhard Schwab, Harvard University
Large gauge symmetries and black hole absorption rates
I will introduce Noether's second theorem as a way to derive the conserved currents associated with asymptotic symmetries. The large gauge symmetries of electromagnetism (along with conservation of energy) fully constrain the absorption rate of low-energy electromagnetic radiation by black holes. I will show this explicitly for non-evaporating, spherically symmetric black holes in arbitrary space-time dimensions larger than 3.

Scientific Organizers:
• Laurent Freidel, Perimeter Institute
• Henrique Gomes, Perimeter Institute
• Kasia Rejzner, Perimeter Institute & University of York
https://diabetesjournals.org/care/article/30/11/2880/4815/Skeletal-Muscle-Deoxygenation-After-the-Onset-of
OBJECTIVE—People with type 2 diabetes have impaired exercise responses even in the absence of cardiovascular complications. One key factor associated with the exercise intolerance is abnormally slowed oxygen uptake (V̇o2) kinetics during submaximal exercise. The mechanisms of this delayed adaptation during exercise are unclear but probably relate to impairments in skeletal muscle blood flow. This study was conducted to compare skeletal muscle deoxygenation (deoxygenated hemoglobin/myoglobin [HHb]) responses and estimated microvascular blood flow (Qm) kinetics in type 2 diabetic and healthy subjects after the onset of moderate exercise.

RESEARCH DESIGN AND METHODS—Pulmonary V̇o2 kinetics and [HHb] (using near-infrared spectroscopy) were measured in 11 type 2 diabetic and 11 healthy subjects during exercise transitions from unloaded to moderate cycling exercise. Qm responses were calculated using V̇o2 kinetics and [HHb] responses via rearrangement of the Fick principle.

RESULTS—V̇o2 kinetics were slowed in type 2 diabetic compared with control subjects (43.8 ± 9.6 vs. 34.2 ± 8.2 s, P < 0.05), and the initial [HHb] response after the onset of exercise exceeded the steady-state level of oxygen extraction in type 2 diabetic compared with control subjects. The mean response time of the estimated Qm increase was prolonged in type 2 diabetic compared with healthy subjects (47.7 ± 14.3 vs. 35.8 ± 10.7 s, P < 0.05).

CONCLUSIONS—Type 2 diabetic skeletal muscle demonstrates a transient imbalance of muscle O2 delivery relative to O2 uptake after the onset of exercise, suggesting a slowed Qm increase in type 2 diabetic muscle. Impaired vasodilatation due to vascular dysfunction in type 2 diabetes during exercise may contribute to this observation. Further study of the mechanisms leading to impaired muscle oxygen delivery may help explain the abnormal exercise responses in type 2 diabetes.

Exercise is highly recommended as a cornerstone of treatment for people with type 2 diabetes.
However, reduced peak exercise tolerance is common in type 2 diabetes (1–3) and is linked to mortality in individuals with type 2 diabetes as well as healthy individuals (4). Submaximal exercise responses are also affected in individuals with type 2 diabetes, as demonstrated by an abnormally slowed increase of oxygen uptake (V̇o2 kinetics) after the onset of exercise, and appear possibly related to abnormal cardiovascular responses (1,2). Clinically, these findings observed during submaximal exercise testing are significant because they indicate a greater perturbation of intramuscular homeostasis in response to any exercise challenge, with the potential to contribute to the premature muscular fatigue (5) and hence the reduced exercise tolerance in type 2 diabetic individuals (1,2). However, the mechanisms of impaired muscle oxygen delivery and/or oxidative metabolism responsible for the abnormal exercise responses in type 2 diabetes remain unclear. Recent work in rodent models of type 2 diabetes has demonstrated impaired skeletal muscle capillary hemodynamics (6) and abnormal capillary Po2 responses during exercise (7). These findings demonstrated a transient impairment of oxygen delivery relative to muscle oxygen uptake after the onset of exercise that may limit oxygen transfer and utilization (7,8). It is not known whether a similar defect exists in individuals with type 2 diabetes. Such an impairment of microvascular oxygen delivery and exchange in human skeletal muscle could similarly contribute to the observed limitation of V̇o2 and exercise performance in type 2 diabetes. Indeed, reductions in exercising leg blood flow (9) and baseline metabolic defects (10,11) are known to occur in human type 2 diabetic skeletal muscle.
Given that skeletal muscle plays an important role in the pathophysiology of insulin resistance and type 2 diabetes, the investigation of oxygen delivery and blood flow at the level of the exercising skeletal muscle in human type 2 diabetes would provide unique insight into the exercise limitations observed in this patient population. Near-infrared spectroscopy (NIRS) is a noninvasive technique that offers functional insight into changes in skeletal muscle oxygen status (12). This technique uses the absorption characteristics of near-infrared light directed into tissue to determine the concentration changes of oxygenated and deoxygenated hemoglobin/myoglobin ([HHb]) in the small vessels (arterioles, capillaries, and venules) and skeletal muscle. Thus, similar to capillary Po2, the time course of muscle [HHb] increase after onset of exercise reflects the local balance of O2 delivery and O2 uptake within the muscle region studied. Prior studies have demonstrated NIRS to be highly sensitive to muscle changes due to exercise, hypoxemia, and aging (13–15), and, thus, NIRS [HHb] provides a noninvasive surrogate of muscle oxygen extraction. Moreover, the measurement of [HHb] in parallel with V̇o2 during exercise can provide useful inferences regarding regional blood flow and allows for the estimation of the increase in muscle microvascular blood flow (Qm) via the Fick principle (16,17). In the present study we examined whether skeletal muscle [HHb] responses and estimated Qm kinetics are altered in individuals with type 2 diabetes compared with sedentary healthy control subjects. We hypothesized that individuals with type 2 diabetes would have altered skeletal muscle oxygen extraction responses and concordantly slowed estimates of Qm kinetics compared with healthy subjects after the onset of moderate-intensity constant work rate exercise.
If confirmed, these observations would further explain the potential mechanisms of exercise limitation and intolerance in people with type 2 diabetes as related to changes in skeletal muscle blood flow and oxygen delivery. Eleven subjects with type 2 diabetes (5 male and 6 female) and 11 healthy control subjects (6 male and 5 female) between the ages of 30 and 55 years volunteered to participate in this study (Table 1). The study was approved by the University of Colorado Multiple Institutional Review Board, and subjects provided informed consent before study participation. Subjects were sedentary, which was defined as participating in low- to moderate-intensity exercise <2 days/week in the preceding 3 months and confirmed using a low-level physical activity recall (1). Healthy control subjects were defined as taking no medications and did not have a direct family member (parent or sibling) with type 2 diabetes. Diabetes was documented in type 2 diabetic subjects by chart review and confirmed using fasting blood glucose and A1C at screening. Subjects were excluded from the study if they demonstrated 1) a history of stroke, congestive heart failure, hypertension, or cardiopulmonary disease; 2) current smoking or smoking within the last 12 months; 3) autonomic or distal neuropathy; 4) LDL cholesterol >130 mg/dl, total cholesterol >200 mg/dl, or triglycerides >250 mg/dl; or 5) A1C >9.0%; or 6) if they were taking exclusionary medicines (insulin, thiazolidinediones [pioglitazone or rosiglitazone], α-glucosidase inhibitors, β-blockers, or calcium channel blockers). Women who were included were premenopausal and were not taking birth control or hormone replacement therapy. Study participants completed three visits at the laboratory to obtain initial screening measurements, establish baseline peak exercise capacity, and perform constant work rate exercise protocols. 
Exercise was performed using a bicycle ergometer (Lode, Groningen, Netherlands), and subjects were instructed to avoid the consumption of alcohol, caffeine, and food within 4 h before each exercise visit. To assess peak exercise performance (peak V̇o2) and provide an estimate of lactate threshold, subjects performed an incremental exercise test (10–20 W/min) to volitional fatigue. On a separate day, subjects performed two 6-min constant work rate (CWR) exercise tests at a work rate equivalent to ∼85% of the individual's estimated lactate threshold. Each CWR test was preceded by a baseline resting period and 4 min of unloaded cycling before a step increase to the prescribed CWR was initiated. A 30-min period of seated rest separated each test.

### Measurements

For all exercise tests, V̇o2, carbon dioxide production (V̇co2), minute ventilation (V̇E), and other ventilatory variables were measured using a breath-by-breath metabolic system (Ultima CPX; Medical Graphics, St. Paul, MN). The O2 and CO2 analyzers were calibrated before each test, and pneumotach volumes were calibrated using a syringe of known volume (3.0 liters). Heart rate was monitored continuously by a 12-lead electrocardiogram (Q-stress; Quinton Instruments, Seattle, WA) and recorded synchronously with the ventilatory data for offline analysis. Arterial hemoglobin saturation was monitored and recorded during rest, exercise, and recovery for all experiments by an oximeter placed on the index finger of the dominant hand (Ohmeda, Louisville, CO). Skeletal muscle [HHb] was assessed by a frequency domain multidistance NIRS monitor (Optiplex TS; ISS, Champaign, IL) during each CWR exercise test. The use and limitations of NIRS have been extensively reviewed (18,19). The NIRS monitor uses two wavelengths of near-infrared light (690 and 830 nm) and four light source detector distances at 2.0, 2.5, 3.0, and 3.5 cm. Local muscle oxygen extraction was determined as the change in [HHb] as described previously (13,20).
The near-infrared data were sampled continuously and recorded at 50 Hz. The device probe was positioned on the distal third of the vastus lateralis of the dominant limb, secured using a Velcro strap, and covered with a cloth bandage to exclude ambient light. The NIRS monitor was calibrated before each visit using a calibration phantom of known scattering and optical properties.

### Data analysis

Breath-by-breath gas exchange data for each CWR exercise transition were processed using a software program as described previously (21). Data from the two CWR tests were time aligned and averaged to provide a single, average kinetic response for each subject (V̇o2, heart rate, and [HHb]). The kinetic responses were then evaluated by computerized nonlinear regression (Sigmaplot 9.0; SPSS, Chicago, IL) using standard techniques (20,21) to define the primary end point (i.e., the kinetic time constant reflecting the time to reach ∼63% of the exponential response). A one-component exponential model was used to describe the simple exponential increase in [HHb] and heart rate:

$$[\mathrm{HHb}](t) = [\mathrm{HHb}](\mathrm{baseline}) + A_1\left(1 - e^{-(t-\mathrm{TD1})/\tau}\right)$$

and a two-component model was used to describe the two phases of pulmonary V̇o2 (for determination of V̇o2muscle) and Qm kinetics, where for Qm or V̇o2 = X,

$$X(t) = X(\mathrm{baseline}) + \underbrace{A_1\left(1 - e^{-(t-\mathrm{TD1})/\tau_1}\right)}_{\text{phase 1}} + \underbrace{A_2\left[1 - e^{-(t-\mathrm{TD2})/\tau_2}\right]}_{\text{phase 2}}$$

In the exponential models, the curve fit provided estimates of the baseline and amplitude for each exponential phase (A1 and A2) as well as the time delays (TD1 and TD2) and time constants (τ1 and τ2) for each of the measured exponential phases (see Fig. 1E for a graphic definition of the two-component model fit).
Thus, the resulting curve fit described the time course and magnitude of increase of the respective phases from baseline to steady-state exercise. Qm responses were calculated using the measured [HHb] as a surrogate of muscle oxygen extraction and the phase 2 response of pulmonary V̇o2 kinetics (i.e., representing muscle V̇o2 kinetics, V̇o2muscle [22,23]) as described previously (16,17):

$$\mathrm{Qm}(t) = \dot{V}\mathrm{o}_2(\text{phase 2})(t)\,/\,[\mathrm{HHb}](t)$$

Because the absolute volume of muscle tissue represented in the NIRS signal is not known, the Qm responses were calculated in arbitrary units. However, this method has been previously compared and contrasted with that of conduit artery blood flow with qualitative agreement (17). The kinetics of Qm were then evaluated using the two-component exponential model. The mean response time (MRT) for [HHb] was calculated as the sum of TD and τ from the single exponential model. The MRT for Qm was calculated using a weighted model, adjusting for the amplitude, time delay, and time constant of each phase as described (17).

### Statistical analysis

Two-tailed independent Student's t tests were used for comparison of the kinetic responses between type 2 diabetic and healthy subjects (NCSS Statistical Software, Kaysville, UT). Pearson's r was used to evaluate the correlations between Qm kinetics and peak exercise capacity (V̇o2peak). Statistical significance was declared at P < 0.05.

Subject characteristics did not differ between groups (Table 1) except for habitual physical activity scores (using the Low Level Physical Activity Recall), which were higher in the type 2 diabetic group compared with those in control subjects (P < 0.05). As expected, fasting glucose levels and A1C were higher in the diabetic subjects (Table 1).
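The two-component model and the amplitude-weighted MRT described under Data analysis can be sketched as follows. This is a minimal illustration: the parameter values and the exact weighting are my assumptions, not the study's fitted data.

```python
import math

def two_component(t, baseline, A1, TD1, tau1, A2, TD2, tau2):
    """Two-phase exponential rise of the kind fitted to VO2 and Qm data.

    Each phase contributes only after its own time delay (TD); tau is the
    time constant (time to ~63% of that phase's amplitude).
    """
    phase1 = A1 * (1 - math.exp(-(t - TD1) / tau1)) if t > TD1 else 0.0
    phase2 = A2 * (1 - math.exp(-(t - TD2) / tau2)) if t > TD2 else 0.0
    return baseline + phase1 + phase2

def weighted_mrt(A1, TD1, tau1, A2, TD2, tau2):
    """Amplitude-weighted mean response time: each phase contributes its
    (TD + tau), weighted by its share of the total amplitude."""
    total = A1 + A2
    return (A1 * (TD1 + tau1) + A2 * (TD2 + tau2)) / total

# Illustrative parameters (arbitrary units, not the paper's fits):
params = dict(A1=0.3, TD1=2.0, tau1=6.0, A2=0.7, TD2=20.0, tau2=40.0)
print(round(weighted_mrt(**params), 1))  # → 44.4
```

In the actual analysis, the parameters would come from nonlinear least-squares fits to the time-aligned, averaged breath-by-breath data, not from assumed values.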
However, although numerically lower in type 2 diabetes, the peak V̇o2 from incremental exercise testing was not different between control and type 2 diabetic subjects (24.6 ± 4.8 vs. 20.9 ± 5.1 ml · kg−1 · min−1, NS). In all subjects, resting arterial hemoglobin saturation was ≥95% and revealed no changes in any subject during exercise testing. The time constants of V̇o2muscle in type 2 diabetic subjects were significantly slowed compared with those for control subjects (P < 0.05) (Table 2). Heart rate kinetics were not different between type 2 diabetic and healthy control subjects, and no differences were observed between groups for the initial kinetic parameters of [HHb] (Table 2). Representative plots of the V̇o2muscle, [HHb], and calculated Qm responses and curve fit for a control subject and a subject with type 2 diabetes are presented in Fig. 1. After the onset of exercise, the [HHb] response profile demonstrated a noticeable excursion of [HHb] above the level observed for steady-state exercise (e.g., during the first ∼90–100 s) in the majority of subjects with type 2 diabetes. There was no difference between groups for the time constant of phase 1 of the Qm response (Table 2). However, the type 2 diabetic subjects demonstrated a significantly slower time constant for phase 2 of the Qm response (P < 0.05), and the calculated MRT of Qm was significantly slower in the type 2 diabetic subjects compared with healthy control subjects (P < 0.05). Correlations between Qm kinetic parameters and V̇o2peak were not significant (P > 0.05). This study demonstrated differences in the pattern of skeletal muscle deoxygenation after the onset of exercise in humans with type 2 diabetes compared with healthy subjects.
Given the prolonged increase of oxygen uptake during exercise in type 2 diabetic subjects, these data indicate that the increase in microvascular blood flow with exercise is abnormally slow in type 2 diabetes and suggest that the limitation of oxygen uptake during submaximal exercise in type 2 diabetes may be related to impaired control or maldistribution of muscle blood flow. Impaired skeletal muscle oxygen delivery in response to exercise may thus contribute to the observed exercise deficit of type 2 diabetes. In the present study, we observed a transient excursion of [HHb] (e.g., "overshoot") above the level achieved for steady-state exercise in the majority of type 2 diabetic subjects. Because the increase in [HHb] is a measure of the increase in the local muscle deoxygenated hemoglobin/myoglobin concentration (hence reflecting tissue oxygen extraction), the overshoot [HHb] response observed in type 2 diabetic subjects provides evidence of an impaired increase of muscle blood flow relative to muscle oxygen uptake after onset of exercise (7,24,25). This reflects an increased dependence on oxygen extraction in type 2 diabetic muscle compared with muscle of control subjects that occurs early in exercise. This response is qualitatively equivalent to the capillary Po2 responses previously observed only in the exercising muscle of diabetic animals (7,8). The significance of this response is related to a transient lowering of capillary Po2 that, in turn, may impair capillary to myocyte O2 transport (via a lowered diffusion gradient) and constrain early increases in muscle oxygen uptake (7,8). Our [HHb] findings appear to support the concept that the early increase in muscle blood flow may be attenuated in type 2 diabetes, and this abnormality may contribute to the slowed V̇o2 kinetics and exercise deficit observed in type 2 diabetes.
Consistent with previous reports (16,17) and other measures of exercise blood flow in animals (26) and humans (27), our estimated Qm demonstrated a biphasic response in all type 2 diabetic and control subjects after the onset of moderate exercise. The phases of blood flow responses after exercise onset have previously been characterized (27,28) with the first phase of blood flow increase generally considered to result from muscle contractions (e.g., muscle pump) and rapid vasodilatation, although the precise factors responsible for the latter mechanism remain unclear (28,29). The second phase of blood flow increase is closely matched with metabolic demand, resulting from metabolic feedback control (e.g., H+, K+, prostaglandins, nitric oxide, and others). Although we found similar phase 1 kinetics of estimated Qm in type 2 diabetic and control subjects, the time constants for phase 2 of Qm were significantly longer in type 2 diabetic compared with healthy subjects. We acknowledge that our estimate of Qm is qualitative in nature and dependent on assumptions of homogeneous muscle [HHb] characteristics; however, there is evidence to support the assumption that the sampled muscle (vastus lateralis) reflects the predominant active muscles during cycling as a whole (30), and, therefore, the relative kinetics of estimated Qm responses should be preserved. Thus, the slower phase 2 time constant and MRT of Qm demonstrates the plausible notion that metabolic feedback control during exercise may be altered in type 2 diabetes. Indeed, there is evidence for macro- and microvascular dysfunction in type 2 diabetes (31,32) that could explain impaired microvascular blood flow responses during exercise. It is well established that nitric oxide–dependent endothelial function is impaired in the conduit arteries in type 2 diabetes (33,34), and this mechanism has been associated with reduced steady-state leg blood flow in type 2 diabetic subjects during submaximal exercise (9). 
However, it is unclear whether conduit artery blood flow dynamics after onset of exercise are altered in type 2 diabetes or to what extent vascular dysfunction or changes in microvascular architecture in type 2 diabetes may impair macro- or microvascular blood flow dynamics and the distribution of muscle blood flow after the onset of exercise. We have previously observed prolonged heart rate responses in subjects with type 2 diabetes (2). Thus, it is plausible that a central impairment of cardiac output could undermine blood flow and oxygen delivery to the skeletal muscle vascular beds during exercise. However, cardiac output during submaximal exercise appears normal in type 2 diabetes (35), and we observed no differences in heart rate kinetics between groups. Therefore, the putative Qm abnormality observed during submaximal exercise in type 2 diabetic subjects is probably specific to the control of blood flow of the exercising legs. The role of skeletal muscle in the impaired submaximal exercise response of type 2 diabetes has not been elucidated. However, the likelihood of an integral role is suggested by the available data. For example, capillary density appears reduced in type 2 diabetic skeletal muscle (36), and basement membrane structures are altered (37). These structural changes could directly contribute to alterations in microvascular hemodynamics, exacerbate potential mismatching of muscle blood flow-to-oxygen uptake, and impair O2 exchange from capillary to myocyte as suggested by the work in rodent models (6,8,38). However, the skeletal muscle of type 2 diabetic patients also demonstrates reduced mitochondrial content (10) and increased potential for mitochondrial dysfunction compared with healthy counterparts (5,10,11,39), although the functional evidence for this notion is controversial (40). 
Whether the changes observed in the skeletal muscle of individuals with type 2 diabetes are related to altered muscle fiber type composition (greater numbers of type IIb fibers relative to type I fibers) (41), detraining, or other factors is unclear. However, it appears that both the delivery of oxygen to the skeletal muscle and the use of oxygen by the muscle during exercise may be compromised in type 2 diabetes. The findings of the present study appear to support the importance of impaired skeletal muscle oxygen delivery as a significant determinant of the submaximal exercise impairment in type 2 diabetes. However, given the similar (and not faster) [HHb] kinetics in type 2 diabetic subjects compared with control subjects, these findings could also indicate, to a lesser extent, a potential contribution of muscle oxidative dysfunction or other factors in limiting the submaximal exercise response. In contrast to the long-recognized defects in V̇o2peak observed in type 2 diabetes, defects in the response at the onset of submaximal exercise represent a challenge that will be encountered during routine activities. Thus, the finding of impaired submaximal exercise responses in otherwise uncomplicated type 2 diabetes is clinically relevant. In the present study, we showed slowed V̇o2 kinetics in type 2 diabetic compared with healthy subjects, consistent with previous reports (2,42), which may be a mechanism for the exercise intolerance of type 2 diabetes. The significance of slowed V̇o2 kinetics is that they indicate a prolonged period of adaptation to any acute submaximal exercise demand, such as that regularly encountered during daily life. Importantly, the prolonged V̇o2 kinetics result in a greater oxygen deficit and, hence, greater dependence upon substrate-level phosphorylation (phosphocreatine degradation and glycolysis) to support even low and moderate levels of exercise.
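The link between a slower time constant and a larger oxygen deficit follows directly from the mono-exponential model: the deficit is the area between the step increase in demand and the rising V̇o2 response, which equals amplitude × τ. A sketch using the group-mean time constants from Table 2 and a made-up amplitude (the 1.0 L/min amplitude is an assumption for illustration only):

```python
# For a mono-exponential V̇o2 response with amplitude A above baseline and
# time constant tau, the O2 deficit is the area A * tau. The amplitude here
# is hypothetical; the time constants are the group means from Table 2.
def o2_deficit(amplitude_l_per_min, tau_s):
    return amplitude_l_per_min * tau_s / 60.0   # litres of O2

A = 1.0                                  # hypothetical amplitude, L/min
deficit_control  = o2_deficit(A, 34.2)   # control-group mean tau
deficit_diabetic = o2_deficit(A, 43.8)   # type 2 diabetic mean tau
```

For the same exercise demand, the longer diabetic time constant yields a larger deficit, which is the quantitative content of the statement above.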
This finding is significant because activities of daily life are carried out at these low levels of physical activity. Thus, the accumulated oxygen deficit that occurs with the initiation of exercise may ultimately affect the ability or willingness of individuals to sustain the activity, resulting in the limited exercise tolerance and reduced peak exercise capacity observed in type 2 diabetes. In summary, skeletal muscle [HHb] responses are altered in type 2 diabetes during the transition from light to moderate exercise, indicating a slowed increase of microvascular blood flow in response to exercise in type 2 diabetic patients. The prolonged kinetics of estimated Qm suggest that muscle V̇o2 during exercise may be constrained by an impairment of muscle oxygen delivery in type 2 diabetic skeletal muscle, potentially leading to diminished submaximal exercise function in type 2 diabetes. Impairments in skeletal muscle oxygen delivery due to abnormal vascular control or other abnormalities of type 2 diabetic skeletal muscle may explain, in part, the exercise deficit observed in individuals with type 2 diabetes.

Figure 1— Representative example of V̇o2muscle derived from pulmonary V̇o2 kinetics (A and B), [HHb] responses (C and D), and estimated Qm responses (E and F) during the transition from unloaded to moderate cycling in a healthy control subject (A, C, and E) and a type 2 diabetic subject (B, D, and F). Data are presented as a function of the end-exercise response. Note the transient overshoot of the [HHb] response in the type 2 diabetic subject. The estimated Qm profile was calculated from V̇o2muscle divided by [HHb] in arbitrary units. Loaded cycling exercise begins at time = 0. Qm kinetic model parameters: BSL, baseline; TD, time delay; τ1 and τ2, time constants; A1 and A2, response amplitudes; τ, time constant of V̇o2muscle; MRT, Qm MRT. Solid lines represent curve fitting of the Qm kinetic response (E and F).
Table 1— Subject characteristics

                          Control        Type 2 diabetic
Age (years)               47 ± 6         47 ± 4
Weight (kg)               81.5 ± 18.5    84.0 ± 8.8
BMI (kg/m²)               28.0 ± 3.0     30.8 ± 4.3
A1C (%)                   5.4 ± 0.2      6.75 ± 1.2*
Fasting insulin (μU/ml)   8.0 ± 3.6      13.6 ± 12.1
Fasting glucose (mg/dl)   96 ± 9         128 ± 42
Body fat (%)              32.6 ± 7       32.7 ± 7

Data are means ± SD. Body fat was calculated from a dual-energy X-ray absorptiometry scan. * P < 0.01; † P < 0.05, type 2 diabetic versus control subjects.
Table 2— Kinetic parameters for V̇o2, [HHb], heart rate, and Qm

                      Control        Type 2 diabetic
τV̇o2 (s)              34.2 ± 8.2     43.8 ± 9.6*
τHR (s)                45.1 ± 15.9    51.2 ± 14.1
τ[HHb] (s)             10.2 ± 4.4     8.8 ± 4.8
MRT [HHb] (s)          17.8 ± 5.5     18.5 ± 4.6
τ1 Qm phase 1 (s)      6.6 ± 2.8      5.1 ± 3.2
τ2 Qm phase 2 (s)      32.0 ± 9.7     43.6 ± 12.9*
MRT Qm (s)             35.8 ± 10.7    47.7 ± 14.3*

Data are means ± SD. Refer to the text for calculation of τ, τ1, τ2, and MRT. τV̇o2, time constant for V̇o2 kinetics; τHR, time constant for heart rate; τ[HHb], time constant for the increase in [HHb]; τ1 Qm, time constant for phase 1 of Qm; τ2 Qm, time constant for phase 2 of Qm. * P < 0.05, type 2 diabetic versus control subjects.

This work was supported by an American Diabetes Association Award to J.G.R. and by National Institutes of Health (NIH) Grant MO1 RR000051. T.A.B. was supported by NIH Grant T32 HL007822-10. J.E.B.R. is supported by VA Merit Review and NIH Grant DK064741. The authors thank ISS for use of the Optiplex TS spectrometer. In addition, we thank Vermed for the donation of electrocardiogram electrodes.

1. Regensteiner JG, Sippel J, McFarling ET, Wolfel EE, Hiatt WR: Effects of non-insulin-dependent diabetes on oxygen consumption during treadmill exercise. Med Sci Sports Exerc 27: 661–667, 1995
2. Regensteiner JG, Bauer TA, Reusch JE, Brandenburg SL, Sippel JM, Vogelsong AM, Smith S, Wolfel EE, Eckel RH, Hiatt WR: Abnormal oxygen uptake kinetic responses in women with type II diabetes mellitus. J Appl Physiol 85: 310–317, 1998
3. Kjaer M, Hollenbeck CB, Frey-Hewitt B, Galbo H, Haskell W, Reaven GM: Glucoregulation and hormonal responses to maximal exercise in non-insulin-dependent diabetes.
J Appl Physiol 68: 2067–2074, 1990
4. Wei M, Gibbons LW, Kampert JB, Nichaman MZ, Blair SN: Low cardiorespiratory fitness and physical inactivity as predictors of mortality in men with type 2 diabetes [see comments]. Ann Intern Med 132: 605–611, 2000
5. Scheuermann-Freestone M, Madsen PL, Manners D, Blamire AM, Buckingham RE, Styles P, Radda GK, Neubauer S, Clarke K: Abnormal cardiac and skeletal muscle energy metabolism in patients with type 2 diabetes. Circulation 107: 3040–3046, 2003
6. Padilla DJ, McDonough P, Behnke BJ, Kano Y, Hageman KS, Musch TI, Poole DC: Effects of type II diabetes on capillary hemodynamics in skeletal muscle. Am J Physiol 291: H2439–H2444, 2006
7. Padilla DJ, McDonough P, Behnke BJ, Kano Y, Hageman KS, Musch TI, Poole DC: Effects of type II diabetes on muscle microvascular oxygen pressures. Respir Physiol Neurobiol 156: 187–195, 2007
8. Behnke BJ, Kindig CA, McDonough P, Poole DC, Sexton WL: Dynamics of microvascular oxygen pressure during rest-contraction transition in skeletal muscle of diabetic rats. Am J Physiol 283: H926–H932, 2002
9. Kingwell BA, Formosa M, Muhlmann M, Bradley SJ, McConell GK: Type 2 diabetic individuals have impaired leg blood flow responses to exercise: role of endothelium-dependent vasodilation. Diabetes Care 26: 899–904, 2003
10. Ritov VB, Menshikova EV, He J, Ferrell RE, Goodpaster BH, Kelley DE: Deficiency of subsarcolemmal mitochondria in obesity and type 2 diabetes. Diabetes 54: 8–14, 2005
11. Kelley DE, He J, Menshikova EV, Ritov VB: Dysfunction of mitochondria in human skeletal muscle in type 2 diabetes. Diabetes 51: 2944–2950, 2002
12. Fantini S, Franceschini MA, Maier JS, Walker SA, Barbieri B, Gratton E: Frequency-domain multichannel optical detector for non-invasive tissue spectroscopy and oximetry. Opt Eng 34: 32–42, 1995
13.
Grassi B, Pogliaghi S, Rampichini S, Quaresima V, Ferrari M, Marconi C, Cerretelli P: Muscle oxygenation and pulmonary gas exchange kinetics during cycling exercise on-transitions in humans. J Appl Physiol 95: 149–158, 2003
14. DeLorey DS, Shaw CN, Shoemaker JK, Kowalchuk JM, Paterson DH: The effect of hypoxia on pulmonary O2 uptake, leg blood flow and muscle deoxygenation during single-leg knee-extension exercise. Exp Physiol 89: 293–302, 2004
15. DeLorey DS, Kowalchuk JM, Paterson DH: Effect of age on O2 uptake kinetics and the adaptation of muscle deoxygenation at the onset of moderate-intensity cycling exercise. J Appl Physiol 97: 165–172, 2004
16. Ferreira LF, Townsend DK, Lutjemeier BJ, Barstow TJ: Muscle capillary blood flow kinetics estimated from pulmonary O2 uptake and near-infrared spectroscopy. J Appl Physiol 98: 1820–1828, 2005
17. Harper AJ, Ferreira LF, Lutjemeier BJ, Townsend DK, Barstow TJ: Human femoral artery and estimated muscle capillary blood flow kinetics following the onset of exercise. Exp Physiol 91: 661–671, 2006
18. Boushel R, Langberg H, Olesen J, Gonzales-Alonzo J, Bulow J, Kjaer M: Monitoring tissue oxygen availability with near infrared spectroscopy (NIRS) in health and disease. Scand J Med Sci Sports 11: 213–222, 2001
19. McCully KK, Hamaoka T: Near-infrared spectroscopy: what can it tell us about oxygen saturation in skeletal muscle? Exerc Sport Sci Rev 28: 123–127, 2000
20. Ferreira LF, Lutjemeier BJ, Townsend DK, Barstow TJ: Dynamics of skeletal muscle oxygenation during sequential bouts of moderate exercise. Exp Physiol 90: 393–401, 2005
21. Bauer TA, Regensteiner JG, Brass EP, Hiatt WR: Oxygen uptake kinetics during exercise are slowed in patients with peripheral arterial disease. J Appl Physiol 87: 809–816, 1999
22. Grassi B, Poole DC, Richardson RS, Knight DR, Erickson BK, Wagner PD: Muscle O2 uptake kinetics in humans: implications for metabolic control. J Appl Physiol 80: 988–998, 1996
23.
Rossiter HB, Ward SA, Doyle VL, Howe FA, Griffiths JR, Whipp BJ: Inferences from pulmonary O2 uptake with respect to intramuscular [phosphocreatine] kinetics during moderate exercise in humans. J Physiol 518: 921–932, 1999
24. Ferreira LF, Poole DC, Barstow TJ: Muscle blood flow-O2 uptake interaction and their relation to on-exercise dynamics of O2 exchange. Respir Physiol Neurobiol 147: 91–103, 2005
25. Diederich ER, Behnke BJ, McDonough P, Kindig CA, Barstow TJ, Poole DC, Musch TI: Dynamics of microvascular oxygen partial pressure in contracting skeletal muscle of rats with chronic heart failure. Cardiovasc Res 56: 479–486, 2002
26. Kindig CA, Richardson TE, Poole DC: Skeletal muscle capillary hemodynamics from rest to contractions: implications for oxygen transfer. J Appl Physiol 92: 2513–2520, 2002
27. Radegran G, Saltin B: Muscle blood flow at onset of dynamic exercise in humans. Am J Physiol 274: H314–H322, 1998
28. Tschakovsky ME, Sheriff DD: Immediate exercise hyperemia: contributions of the muscle pump vs. rapid vasodilation. J Appl Physiol 97: 739–747, 2004
29. Tschakovsky ME, Rogers AM, Pyke KE, Saunders NR, Glenn N, Lee SJ, Weissgerber T, Dwyer EM: Immediate exercise hyperemia in humans is contraction intensity dependent: evidence for rapid vasodilation. J Appl Physiol 96: 639–644, 2004
30. Richardson RS, Frank LR, Haseler LJ: Dynamic knee-extensor and cycle exercise: functional MRI of muscular activity. Int J Sports Med 19: 182–187, 1998
31. Regensteiner JG, Popylisen S, Bauer TA, Lindenfeld J, Gill E, Smith S, Oliver-Pickett CK, Reusch JE, Weil JV: Oral l-arginine and vitamins E and C improve endothelial function in women with type 2 diabetes. Vasc Med 8: 169–175, 2003
32. Sachidanandam K, Harris A, Hutchinson J, Ergul A: Microvascular versus macrovascular dysfunction in type 2 diabetes: differences in contractile responses to endothelin-1. Exp Biol Med (Maywood) 231: 1016–1021, 2006
33.
McVeigh GE, Brennan GM, Johnston GD, McDermott BJ, McGrath LT, Henry WR, Andrews JW, Hayes JR: Impaired endothelium-dependent and independent vasodilation in patients with type 2 (non-insulin-dependent) diabetes mellitus. Diabetologia 35: 771–776, 1992
34. Williams SB, Cusco JA, Roddy MA, Johnstone MT, Creager MA: Impaired nitric oxide-mediated vasodilation in patients with non-insulin-dependent diabetes mellitus. J Am Coll Cardiol 27: 567–574, 1996
35. Baldi JC, Aoina JL, Oxenham HC, Bagg W, Doughty RN: Reduced exercise arteriovenous O2 difference in type 2 diabetes. J Appl Physiol 94: 1033–1038, 2003
36. He J, Watkins S, Kelley DE: Skeletal muscle lipid content and oxidative enzyme activity in relation to muscle fiber type in type 2 diabetes and obesity. Diabetes 50: 817–823, 2001
37. Williamson JR, Kilo C: Capillary basement membranes in diabetes. Diabetes 32 (Suppl. 2): 96–100, 1983
38. Kindig CA, Sexton WL, Fedde MR, Poole DC: Skeletal muscle microcirculatory structure and hemodynamics in diabetes. Respir Physiol 111: 163–175, 1998
39. Simoneau JA, Kelley DE: Altered glycolytic and oxidative capacities of skeletal muscle contribute to insulin resistance in NIDDM. J Appl Physiol 83: 166–171, 1997
40. Rabol R, Boushel R, Dela F: Mitochondrial oxidative function and type 2 diabetes. Appl Physiol Nutr Metab 31: 675–683, 2006
41. Marin P, Andersson B, Krotkiewski M, Bjorntorp P: Muscle fiber composition and capillary density in women and men with NIDDM. Diabetes Care 17: 382–386, 1994
42. Brandenburg SL, Reusch JE, Bauer TA, Jeffers BW, Hiatt WR, Regensteiner JG: Effects of exercise training on oxygen uptake kinetic responses in women with type 2 diabetes. Diabetes Care 22: 1640–1646, 1999

Published ahead of print at http://care.diabetesjournals.org on 3 August 2007. DOI: 10.2337/dc07-0843. A table elsewhere in this issue shows conventional and Système International (SI) units and conversion factors for many substances.
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
https://community.wolfram.com/groups/-/m/t/2132822?sortMsg=Replies
# Is causal relation dependent on planar embedding?

I am stuck reading a passage from Jonathan Gorard's paper about relativity in the Wolfram model. At page 11, there is the definition of a "downward planar embedding".

Definition 11. A "downward planar embedding" is an embedding of a directed, acyclic graph in the Euclidean plane, in which edges are represented as monotonic downwards curves without crossings.

Then he considers embedding a causal graph into the discrete "Minkowski lattice" $\mathbb{Z}^{n,1}$ and labelling the updating events according to their integer coordinates on the lattice. There are many ways to embed a planar graph into the lattice, so I expect that these coordinates depend on the chosen embedding. The paper then gives the definition of the discrete Minkowski norm and of the timelike/lightlike/spacelike separation of two events, and then, after a few sentences, comes the passage I cannot understand:

From our definition of the discrete Minkowski norm and the properties of layered graph embedding, we can see that a pair of updating events are causally related (i.e. connected by a directed edge in the causal graph) if and only if the corresponding vertices are timelike-separated in the embedding of the causal graph into the discrete Minkowski lattice $\mathbb{Z}^{n,1}$, as required.

I don't understand how this statement could be true. Consider for example a simple causal graph consisting only of two events, where event 1 has a directed edge towards event 2 (i.e. event 1 is "the cause" of event 2). This graph is planar, so it is possible to perform a downward planar embedding in the Minkowski lattice $\mathbb{Z}^{1,1}$ (just one spatial dimension to make things easier). Here I have drawn two possible embeddings of such a graph.
In the first case, the events are labelled by the coordinates (0,0) and (1,0), so their spacetime separation is -1 and they are timelike separated. But in the second case, which is still a perfectly fair "downward planar embedding" by the definition given, the events have coordinates (0,0) and (1,3) and are therefore spacelike separated, with norm 8. The fact that the sign of the spacetime separation depends on the chosen embedding seems in contradiction with the statement quoted. Can someone explain what I am missing here? Thank you.

3 Replies

Before proceeding, I would like to mention two complementary materials to the paper that you mention, where you can see a significant diversity of ideas concerning causality: (1) Gorard, Jonathan. Algorithmic Causal Sets and the Wolfram Model. arXiv preprint arXiv:2011.12174 (2020), and (2) Max Piskunov's Bulletin Confluence and Causal Invariance. The first remark is that in reference (1), which is basically a paper about causality, the phrases "Minkowski lattice" and "planar embedding" are not used anymore. We see some evolution in the development of the theory. Personally, I do not like "Minkowski lattice". A yes/no answer to your question is "no, causal relation does not depend on planar embedding". Indeed, we could imagine a spacetime that is the surface of a torus, where one dimension is a one-dimensional space (circumference) and the other dimension is a cyclic time (circumference). Locally, this spacetime looks like ordinary spacetime and the local causal graph is acyclic. Nevertheless, the surface of a torus cannot be embedded in a plane, and the global causal graph contains cycles. Even considering this question only locally, the combinatorial structure of the (local) causal graph should remain invariant under the composition of the embedding with a Lorentz transformation: this is the essence of special relativity. The core of the Wolfram Model is what was written by S. Wolfram in A New Kind of Science and in the Technical Introduction to the project, and complemented in his writings. The ways in which S. Wolfram's ideas are mathematically formalized and explored by other researchers may change with time. Therefore, if you are interested in learning about causality in the Wolfram Model, my advice is to read S. Wolfram first, e.g., "The Phenomenon of Causal Invariance", "The Role of Causal Graphs", "Event Horizons, Singularities and Other Exotic Spacetime Phenomena", and "Faster than Light in Our Model of Physics: Some Preliminary Thoughts", and then, after you are familiar with S. Wolfram's original ideas, you could read other authors, like references (1) and (2). This is just a personal opinion; other people may have other opinions.

Of course the answer is no; it would have been a problem if it were yes. Thank you for clarifying it. I was indeed looking for a more formal exposition of Wolfram's ideas. I am not familiar with your first reference; thank you for telling me about it.
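The norm convention in the question's example can be checked directly. Assuming the coordinate ordering implied by the example — first coordinate time, signature (−, +, …, +) — a short computation reproduces the −1 (timelike) and 8 (spacelike) of the two embeddings:

```python
# Discrete Minkowski norm on Z^{n,1} with signature (-, +, ..., +); the first
# coordinate is taken as time, which matches the question's example. Two
# events are timelike-separated when the squared norm of their difference is
# negative, lightlike when zero, and spacelike when positive.
def minkowski_norm_sq(event_a, event_b):
    dt = event_b[0] - event_a[0]
    dx_sq = sum((b - a) ** 2 for a, b in zip(event_a[1:], event_b[1:]))
    return -dt * dt + dx_sq

# The two embeddings from the question, written (t, x):
first  = minkowski_norm_sq((0, 0), (1, 0))   # timelike
second = minkowski_norm_sq((0, 0), (1, 3))   # spacelike
```

So the computation confirms the questioner's arithmetic: the sign of the separation really does differ between the two embeddings, which is exactly what the replies address.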
http://mathoverflow.net/questions/136381/product-of-fixed-points-and-kernel-of-frobenius-morphism
# Product of fixed points and kernel of Frobenius morphism

If $G$ is a reductive algebraic group over an algebraically closed field of positive characteristic $p$, and $G$ is defined over the prime field, we have the Frobenius morphism $F: G\to G$, which for each positive integer $r$ leads to two subgroup schemes of $G$: the Frobenius kernel $G_r$, which is the kernel of the $r$'th iteration of $F$, and the finite group $G^{F^r}$ of fixed points of the same. Now, $G^{F^r}$ normalizes $G_r$ (since $G_r$ is normal in $G$), and they clearly intersect trivially. So in $G$ the subgroup $G^{F^r}G_r$ is a semidirect product. My question is whether this subgroup and its representations have been studied. We certainly know some of its irreducible representations, as any irreducible representation of $G_r$ or $G^{F^r}$ extends to $G$ (and they even have the same irreducibles when these are seen as $G$-modules). So one of the main questions would be whether there are any other irreducible representations of this group. Of course, one could also consider $G^{F^r}G_{r'}$ with $r\neq r'$ and ask the same questions. The reason I am interested in this group is that it seems like it might provide a more direct link between the representation theories of $G_r$ and $G^{F^r}$, which are certainly very similar. It might also provide an additional stepping stone for comparing the representation theories of $G_r$ or $G^{F^r}$ with that of $G$, since one of the ways one often does such a comparison is via induction from either $G_r$ or $G^{F^r}$ to $G$, and both of these factor through this subgroup. (I asked this question in the representation theory chat room, trying to get the discussion going, but I realized it was focused enough that it might as well be an actual question here.)

- At first sight your formulation looks rather confusing, since you attempt to mix finite groups with finite group schemes.
Also, it is not so much irreducible modules which raise questions in these two settings, but rather projective modules and cohomology. Have you consulted local experts? – Jim Humphreys Jul 11 '13 at 10:52

@JimHumphreys By subgroup I mean subgroup scheme, and I suppose you are right that the question of projectives might well be a lot more interesting. I asked my advisor (Henning Haahr Andersen), and he had not heard of any study of this subgroup. The question may well be too naive to lead anywhere. – Tobias Kildetoft Jul 11 '13 at 10:59

Maybe what you mean by Frobenius is not the standard thing, as the standard Frobenius only maps $G$ to itself if $G$ is defined over the prime field. – Felipe Voloch Jul 11 '13 at 15:11

@FelipeVoloch Sorry, I forgot to add the assumption that the group is defined over the prime field. – Tobias Kildetoft Jul 12 '13 at 6:22

I definitely talked about this with Brian Parshall and Len Scott at one point. I think we even found a minor use for it. (Too minor to be worth following up.) They had someone in mind who'd worked with such things. I'll email them. As you say, the simples must be easy to work out. – David Stewart Jul 12 '13 at 8:48
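As a toy numeric aside — far below the group-scheme generality of the question — the simplest incarnation of the fixed-point construction $G^{F}$ is the Frobenius $x \mapsto x^p$ on a finite field, whose fixed points are exactly the prime field. A sketch over $\mathbb{F}_9 = \mathbb{F}_3[x]/(x^2+1)$, with elements represented as pairs $(a, b) = a + bx$:

```python
# Toy illustration (not the reductive-group setting above): over
# F_9 = F_3[x]/(x^2 + 1), the Frobenius z -> z^3 fixes exactly the prime
# field F_3, i.e. the three elements of the form a + 0*x.
P = 3

def mul(u, v):
    a, b = u
    c, d = v
    return ((a * c - b * d) % P, (a * d + b * c) % P)  # uses x^2 = -1

def frob(z):
    out = (1, 0)              # multiplicative identity
    for _ in range(P):        # z^3 by repeated multiplication
        out = mul(out, z)
    return out

elements = [(a, b) for a in range(P) for b in range(P)]
fixed = [z for z in elements if frob(z) == z]
```

Here `fixed` consists precisely of the prime-field elements, the baby case of the general fact that $F^r$-fixed points form the finite group of rational points over the field with $p^r$ elements.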
http://dimacs.rutgers.edu/TechnicalReports/abstracts/2004/2004-42.html
## Nonuniform Sparse Approximation with Haar Wavelet Basis

### Author: S. Muthukrishnan

ABSTRACT

Sparse approximation theory concerns representing a given signal on $N$ points as a linear combination of at most $B$ ($B\ll N$) elements from a dictionary so that the error of the representation is minimized; traditionally, the error is taken {\em uniformly} as the sum of squares of the errors at each point. In this paper, we initiate the study of {\em nonuniform} sparse approximation theory, where each point has an {\em importance} and we want to minimize the sum of individual errors weighted by their importance. In particular, we study this problem with the basic Haar wavelet dictionary, which has found many applications since being introduced in 1910. Parseval's theorem from 1799, which is central in solving uniform sparse approximation for Haar wavelets, does not help under nonuniform importance. We present the first known polynomial-time algorithm for the problem of finding $B$ wavelet vectors to represent a signal of length $N$ so that the representation has the smallest error, averaged over the given importance of the points. The algorithm takes time $O(N^2B/\log B)$. When the importance function is well-behaved, we present another algorithm that takes near-linear time. Our methods also give the first known efficient algorithms for a number of related problems with nonuniform importance.

Paper Available at: ftp://dimacs.rutgers.edu/pub/dimacs/TechnicalReports/TechReports/2004/2004-42.ps.gz
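For contrast with the nonuniform problem the report solves, the uniform case it alludes to is straightforward: with an orthonormal Haar basis, Parseval's theorem implies that keeping the $B$ largest-magnitude coefficients minimizes the sum-of-squares error. A minimal sketch of that uniform greedy rule (the dynamic program for the weighted case is in the report, not reproduced here; signal length is assumed to be a power of two):

```python
import math

# Orthonormal 1D Haar transform of a length-2^k signal, followed by the
# uniform B-term rule: by Parseval, keeping the B largest-magnitude
# coefficients minimizes the unweighted sum-of-squares error.
def haar(signal):
    coeffs = []
    s = list(signal)
    while len(s) > 1:
        avgs = [(s[i] + s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]
        difs = [(s[i] - s[i + 1]) / math.sqrt(2) for i in range(0, len(s), 2)]
        coeffs = difs + coeffs   # finest-scale details go last
        s = avgs
    return s + coeffs            # [overall coefficient, details...]

def best_b_term(signal, b):
    c = haar(signal)
    keep = sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)[:b]
    return [c[i] if i in keep else 0.0 for i in range(len(c))]

sig = [4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 1.0, 1.0]
approx = best_b_term(sig, 3)     # this signal needs only 3 Haar terms
```

The example signal is piecewise constant on dyadic blocks, so its Haar transform has only three nonzero coefficients and the 3-term approximation is exact — precisely the situation where the uniform greedy rule shines, and which weighted importance breaks.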
http://www.koreascience.or.kr/article/JAKO201818564287493.page
Design of a Radiation-Hardened Model of CMOS Digital Logic Circuits for Nuclear Power Plant ICs and Analysis of Cumulative Radiation Damage

• Received : 2018.03.07
• Accepted : 2018.05.29
• Published : 2018.06.01

Abstract

ICs (integrated circuits) for nuclear power plants that are exposed to a radiation environment suffer malfunctions and data errors from the TID (total ionizing dose) effect, one of the radiation-damage phenomena. To protect ICs from TID effects, this paper proposes a radiation-hardening of the logic circuit (D-latch) used for data synchronization and clock division in IC design. The radiation-hardening technology in the logic device (NAND) that constitutes the proposed RH (radiation-hardened) D-latch is structurally more advantageous than conventional technologies in that it keeps the device characteristics of the commercial process. Because of this, a unit-cell-based design of the RH logic device is possible, which makes it easier to design RH ICs, including digital logic circuits, and reduces the time and cost required in RH circuit design. In this paper, we design and model the structure of an RH D-latch based on a commercial 0.35 μm CMOS process using Silvaco's TCAD 3D tool. As a result of verifying the radiation characteristics by applying the radiation-damage M&S (modeling and simulation) technique, we have confirmed the radiation damage of the standard D-latch and the RH performance of the proposed D-latch under TID effects.

Acknowledgement

Grant : 원자력연구개발사업 (Nuclear Energy R&D Program)

Supported by : 한국연구재단 (National Research Foundation of Korea)

References

1. A. S. Sedra and K. C. Smith, Microelectronic Circuits, 5th ed. New York: Oxford, 2004.
2. H. M. Hashemian, "Maintenance of Process Instrumentation in Nuclear Power Plants," Springer Press, 2006.
3. G. C. Messenger and M. S. Ash, "The Effects of Radiation On Electronic Systems," Springer Press, 1992.
4. T. R. Oldham and F. B. Mclean, "Total Ionizing Dose Effects in MOS Oxides and Devices," IEEE Trans. Nucl. Sci., vol. 50, no. 3, pp. 483-496, Jun. 2003. https://doi.org/10.1109/TNS.2003.812927
5. H. J.
Barnaby, "Total-Ionizing-Dose Effects in Modern CMOS Technologies," IEEE Trans. Nul. Sci., vol. 53, no. 6, pp. 3103-3120, Dec. 2006. 6. T. R. Oldham and A. J. Lelis, "Post-Irradiation Effects in Field Oxide Isolation structures," IEEE Trans. Nul. Sci., vol. 34, no. 6, pp. 1184-1189, Dec. 1987. https://doi.org/10.1109/TNS.1987.4337450 7. D. M. Fleetwood, P. S. Winokur, R. A. Reber, T. L. Meisenheimer, J. R. Schwank, M. R. Shaneyfelt and L. C. Riewe, "Effects of oxide traps, interface traps, and 'border traps' on metal-oxide-semiconductor devices," J. Appl. Phys., vol. 73, pp. 5058-5074, May 1993. https://doi.org/10.1063/1.353777 8. S. C. Oh, N. H. Lee, and H. H. Lee, "The Study of Transient Radiation Effects on Commercial Electronic Devices", Trans. KIEE, vol. 61, no. 10, pp. 1448-1453, Oct. 2012. 9. J. Y. Kim, N. H. Lee, H. K. Jung, S. C. Oh, "The study of radiation hardened common sensor circuits using COTS semiconductor devices for the nuclear power plant", Trans. KIEE, vol. 63, no. 9, pp. 1248-1252, Sep. 2014. 10. W. J. Snoeys, "A New NMOS Layout Structure for Radiation Tolerance," IEEE Trans. Nul. Sci., vol. 49, no. 4, Aug. 2002. 11. Li Chen and D. M. Gingrich. "Study of N-Channel MOSFETs with an Enclosed-Gate Layout in a 0.18um CMOS technology," IEEE Trans. Nul. Sci., vol. 52, no. 4, pp. 861-867, Oct. 2005. https://doi.org/10.1109/TNS.2005.852652 12. M. S. Lee and H. C. Lee, "Dummy Gate-Assisted n-MOSFET Layout for a Radiation-Tolerant Integrated Circuit", IEEE Trans. Nul. Sci., vol. 60, no. 4, pp. 3084-3091, Aug. 2013. 13. Y. Li et al. "Anomalous radiation effects in fully depleted SOI MOSFETs fabricated on SIMOX," IEEE Trans. Nucl. Sci., vol. 48, pp. 2146-2151, Dec. 2001. https://doi.org/10.1109/23.983187 14. M. W. Lee, N. H. Lee, S. H. Jeong, S. M. Kim and S. I. Cho, "Implementation of a radiation-hardened I-gate n-MOSFET and analysis of its TID(Total Ionizing Dose) effects" Journal of Electrical Engineering & Technology, vol. 12, pp. 1619-1626, Jun. 2017. 
15. N. Saks, M. Ancona and J. Modolo, "Generation of interface states by ionizing radiation in very thin MOS oxides," IEEE Trans. Nucl. Sci., vol. 33 no. 6, pp. 1185-1190, Nov. 1986. https://doi.org/10.1109/TNS.1986.4334576
2020-08-15 09:18:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.749338686466217, "perplexity": 9685.882318364776}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740733.1/warc/CC-MAIN-20200815065105-20200815095105-00494.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-mathematics-for-calculus-7th-edition/chapter-4-section-4-3-logarithmic-functions-4-3-exercises-page-352/74
## Precalculus: Mathematics for Calculus, 7th Edition $(-\infty, 4)$ RECALL: The logarithmic function $f(x) = \log_a{x}$ is defined only when $x$ is greater than $0$. This means that the given function is only defined when $8 - 2x$ is greater than zero. Thus, $8-2x \gt 0 \\-2x \gt 0-8 \\-2x \gt -8$ Divide both sides of the inequality by $-2$. Note that the inequality symbol flips to the opposite direction since both sides were divided by a negative number. $x \lt \frac{-8}{-2} \\x \lt 4$ Therefore, the domain of the given function is $x \lt 4$. In interval notation, the domain is $(-\infty, 4)$.
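The domain can also be sanity-checked numerically; the sketch below (plain Python, not part of the textbook solution) shows that $\log(8-2x)$ evaluates inside $(-\infty, 4)$ and fails at the boundary:

```python
import math

def f(x):
    # f(x) = log(8 - 2x) is defined only while its argument 8 - 2x > 0
    return math.log(8 - 2 * x)

print(f(3.9))  # inside the domain: the argument is 0.2 > 0

try:
    f(4)       # at the boundary the argument is 0, and log is undefined
except ValueError:
    print("undefined at x = 4")
```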
2018-05-22 17:54:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9515352845191956, "perplexity": 108.23924411964634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864837.40/warc/CC-MAIN-20180522170703-20180522190703-00230.warc.gz"}
http://mymathforum.com/calculus/10761-calculate-f-x.html
My Math Forum Calculate F'(x) Calculus Math Forum January 17th, 2010, 03:51 PM #1 Senior Member   Joined: Nov 2009 Posts: 169 Thanks: 0 Calculate F'(x) $F(x) = \int_1^{\cos x}\!\sqrt{1-t^2}\,dt$ Why can't I just sub $\cos x$ in and get $\sqrt{1-\cos^2 x}$? January 17th, 2010, 05:58 PM   #2 Global Moderator Joined: May 2007 Posts: 6,806 Thanks: 716 Re: Calculate F'(x) Quote: Originally Posted by 450081592 $F(x) = \int_1^{\cos x}\!\sqrt{1-t^2}\,dt$ Why can't I just sub $\cos x$ in and get $\sqrt{1-\cos^2 x}$? Back to basics. You want the $x$ derivative of something of the form $f(u(x))$. From elementary calculus you have $df/dx = (df/du)(du/dx)$. For your problem, you have $u=\cos x$. $\sqrt{1-\cos^2 x}$ is the $df/du$ part. $du/dx = -\sin x$ is needed to get the correct result. January 18th, 2010, 03:52 PM #3 Newbie   Joined: Jan 2010 Posts: 1 Thanks: 0 Re: Calculate F'(x) If $F(x)=\int_{a}^{g(x)}f(t)\,dt$ then $F'(x)=f(g(x))\,g'(x)$, where $a$ is a constant.
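The chain-rule answer from the thread can be confirmed numerically; the sketch below (plain Python, my own illustration rather than anything from the posts) approximates F with the trapezoid rule and compares a central-difference derivative against $-\sin x\,\sqrt{1-\cos^2 x}$:

```python
import math

def integrand(t):
    return math.sqrt(1 - t * t)

def F(x, n=20000):
    # F(x) = integral from 1 to cos(x) of sqrt(1 - t^2) dt, trapezoid rule
    a, b = 1.0, math.cos(x)
    h = (b - a) / n
    s = 0.5 * (integrand(a) + integrand(b))
    for i in range(1, n):
        s += integrand(a + i * h)
    return s * h

x0, h = 0.7, 1e-5
numeric = (F(x0 + h) - F(x0 - h)) / (2 * h)            # central difference
chain_rule = -math.sin(x0) * math.sqrt(1 - math.cos(x0) ** 2)
naive = math.sqrt(1 - math.cos(x0) ** 2)               # forgets du/dx

print(abs(numeric - chain_rule) < 1e-3)  # True
print(abs(numeric - naive) < 1e-3)       # False: the -sin(x) factor matters
```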
2019-08-20 14:52:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7161754965782166, "perplexity": 8610.827676935145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315544.11/warc/CC-MAIN-20190820133527-20190820155527-00062.warc.gz"}
https://www.physicsforums.com/threads/potential-central-field.198951/
# Potential central field 1. Nov 18, 2007 ### neworder1 1. The problem statement, all variables and given/known data The problem is to find the motion of a body in a central potential field with potential given by: $$V(r)=-\frac{\alpha}{r}+\frac{\beta}{r^{2}}$$ where $$\alpha$$ and $$\beta$$ are positive constants. 2. Relevant equations 3. The attempt at a solution I used the fact that energy and angular momentum are conserved in this field, and after separating variables in the equation for $$\dot{\vec{r}}$$ I got an integral of the form: ($$\phi$$ is the angle) $$\phi = \int{\frac{dr}{\sqrt{Ar^{3}-Br^{2}+C}}}$$ where A, B, C are constants dependent on mass, energy and angular momentum of the body. Is there a simpler method to find the motion $$r(\phi)$$, without having to calculate such awful integrals? And if not, how to calculate it? 2. Nov 18, 2007 ### noospace Yes, I think there is. Note that you have two differential equations: one first order and one second order (the Lagrange equation). Hint: use the second order. But since you are interested in the shape, you need to change from time derivatives to derivatives wrt $\phi$. Question: what is the relationship between $\dot{r}$ and $r'(\phi)$? Answering this question will lead you to a differential equation for your trajectory. 3. Nov 18, 2007 ### neworder1 Could you be more specific? I don't see how we can get beyond what I've written above using the second order equation. 4. Nov 18, 2007 ### jdstokes You need to use the fact that $\dot{r} = \dot{\phi}r'(\phi)$. Use this to eliminate all derivatives wrt time in your Lagrange equation. But before you do, what is $\dot{\phi}$?
2017-09-23 02:29:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612322807312012, "perplexity": 241.16843579910042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689413.52/warc/CC-MAIN-20170923014525-20170923034525-00111.warc.gz"}
http://www.adras.com/NaN-to-f-revisited.t40474-48.html
From: Ryo on 10 Aug 2010 20:26 Hi all, I thought ".to_s.to_f" should be an identity operation on any floating point number, apart from a potential truncation error. On Ruby 1.8.7, a = 0.0/0.0 a.to_s # => "NaN" "NaN".to_f #=> 0.0 Therefore, a.to_s.to_f #=> 0.0, which isn't acceptable to me. Consider writing floating point numbers to a text file and reading them back in. Is this a bug in 1.8.x? I searched Google but couldn't get a definitive answer. Regards, Ryo From: Rob Biedenharn on 10 Aug 2010 21:00 On Aug 10, 2010, at 8:30 PM, Ryo wrote: > Hi all, > > I thought ".to_s.to_f" should be an identity operation on any floating > point number, apart from a potential truncation error. > > On Ruby 1.8.7, > > a = 0.0/0.0 > a.to_s # => "NaN" > "NaN".to_f #=> 0.0 > > Therefore, > > a.to_s.to_f #=> 0.0, > > which isn't acceptable to me. Consider writing floating point numbers > to a text file and reading them back in. > > Is this a bug in 1.8.x? I searched Google but couldn't get a > definitive answer. > > Regards, > Ryo > If you want to round-trip to a file, use something like Marshal or YAML a = 0.0/0.0 irb> Marshal.dump(a) => "\x04\bf\bnan" b = Marshal.load(Marshal.dump(a)) a.nan? #=> true b.nan? #=> true irb> require 'yaml' => true irb> a.to_yaml => "--- .NaN\n" irb> c=YAML.load(a.to_yaml) => NaN irb> c.nan? => true -Rob Rob Biedenharn Rob(a)AgileConsultingLLC.com http://AgileConsultingLLC.com/ rab(a)GaslightSoftware.com http://GaslightSoftware.com/ From: Yukihiro Matsumoto on 10 Aug 2010 21:16 Hi, In message "Re: "NaN".to_f revisited" on Wed, 11 Aug 2010 09:30:14 +0900, Ryo writes: |I thought ".to_s.to_f" should be an identity operation on any floating |point number, apart from a potential truncation error. That would be a good property. The reason behind String#to_f not supporting NaN and Inf is that I couldn't give up non IEEE754 floating point architecture such as VAX at the time I implemented. 
But we no longer see any non-IEEE754 architecture around, and the current implementation of Ruby would require IEEE754 anyway, so it might be a good chance to introduce the round-trip property. matz. From: Brian Candler on 11 Aug 2010 06:43 Might give some strange behaviour though: "1$".to_f => 1.0 "nonsense".to_f => 0.0 "nanotubes".to_f => Nan ? "inferiority".to_f => Inf ? -- Posted via http://www.ruby-forum.com/. From: Caleb Clausen on 11 Aug 2010 11:18 On 8/11/10, Brian Candler wrote: > Might give some strange behaviour though: > > "1$".to_f => 1.0 > "nonsense".to_f => 0.0 > "nanotubes".to_f => Nan ? > "inferiority".to_f => Inf ? Normally if the string starts with non-numeric characters, converting to numbers yields a zero result. This is to remain consistent with atof and strtod from the standard C library. On my machine, the man page for strtod contains this paragraph: Alternatively, if the portion of the string following the optional plus or minus sign begins with "INFINITY" or "NAN", ignoring case, it is interpreted as an infinity or a quiet NaN, respectively. In practice, strtod seems to actually recognize INF rather than the fully-spelled-out INFINITY as the trigger for returning infinity. So, your last 2 examples would return NaN and Inf if passed to strtod.
2022-01-17 22:56:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29905274510383606, "perplexity": 6173.664598068742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00244.warc.gz"}
http://barronstestprep.com/blog/gre/quant/
# Prime Problem Solving What is the largest prime number that is less than 220? It seems like a pretty simple question. Maybe you think it's something similar to "What is the capital of Missouri?" And if what you're expecting is a snap answer, those are very similar questions. They both leave two possible responses: a memorized answer or a guess. However, the ability to memorize lists of facts isn't a great indicator of intelligence. That makes it a less than ideal test question, as the test maker wants to gauge your ability to succeed academically at the next level. You won't be asked about the capital of Missouri (Jefferson City) on your test, but you may be asked the question I posed above. Why? Well, unlike asking about a capital city, there's a logical way to go about solving the problem above, even if you haven't memorized a list of primes. Let's break it down. Looking for primes is looking for something that isn't there. The way to show a number is prime is to show that it doesn't have any factors other than 1 and itself. While checking all possible factors may seem like a daunting task, let's take our current problem and see a shortcut. The nice thing about factors is that they come in pairs: a larger factor and a smaller factor (or two equally sized factors in the case of a square). Since we have no need to find all factors of the number, simply finding one member of a pair is sufficient. If you take the square root of a number, you know that the smaller member of the pair can be no greater than the square root. In the case of 220, we should know that 225 is 15 squared, so any non-prime number less than 220 will have a factor less than 15. So that leaves 15 numbers to check, right? Not really. 15- Don't have to check because it isn't prime. If 15 is a factor, 3 and 5 will also be factors. 14- Like above, if 14 is a factor, 2 will also be a factor, so no need to check. 
13- Prime, so CHECK 12- Not prime, don't check 11- Prime, so CHECK 10- Not prime, don't check 9- Not prime, don't check 8- Not prime, don't check 7- Prime, so CHECK 6- Not prime, don't check 5- Prime, so CHECK 4- Not prime, don't check 3- Prime, so CHECK 2- Prime, so CHECK 1- One will always be a factor, so no need to check. So really we only need to check the possible prime factors less than 15 against our answer choices: A) 219 B) 217 C) 211 D) 209 E) 201 A- Since the digits add to a multiple of 3, we know this number is a multiple of 3, so it is not prime. B- This one's less obvious, but when we go to check whether 7 is a factor we see that 30*7=210, and 217 is one more 7 (31*7=217), so 217 is not prime. C- After checking our six possible prime factors we find that 211 is prime, and since it is the greatest of our remaining answer choices, it is the correct answer. Success on your test isn't based on memorization. It's based on finding logical ways to break down big problems. # Trick or Treat The town of Halloweenville welcomes hundreds of trick-or-treaters every Halloween. There are 1000 houses in the town, with address numbers 1-1000. All of the houses are black except for houses with the following characteristics: Prime numbered houses have an orange stripe on them. Houses whose address numbers end in 7 have an orange door. Houses whose address numbers have an integer as a square root have white ghosts painted on them. If a trick-or-treater picks a house at random, what is the probability that that house has both an orange stripe and white ghosts? A) 0 B) 1/1000 C) 1/500 D) 1/100 E) It cannot be determined Trick! Sometimes there are problems that require more logic than math. This is one of those problems. Orange-striped houses have prime numbers as addresses. Houses with white ghosts are square numbers. Any square number (other than one) MUST have a factor other than 1 and itself, so there cannot be a house with both features, and the probability is 0 (choice A). Three things are meant to throw you off. First, it's a word problem. 
Second, there’s irrelevant information thrown in there. Third, the question asks for a probability which throws many students off. Keep your cool and analyze what the question is asking for, and you’ll avoid these tricks and grab a nice treat: a correct answer. # Snap a Picture, Get a Result There’s a new app out there that allows you to take a picture of an equation with your smartphone, and the app will solve it. Before you start grabbing your phone to download it, here are three important things to consider. 1. It’s a Tool, Not a Magic Cure- Much like a calculator, this app is a tool that can be very useful when used correctly and under the right circumstances. Need to figure out whether the giant TV you want to buy will fit on the wall space you have? Sure, jot the equation and solve away.Trying to figure out how many jelly beans are in the jar to win a prize at the local fair? Write out an equation for your volume estimates and snap the picture. Trying to learn how to solve quadratic equations? Stop right there. If you don’t have the proper underlying knowledge first, you aren’t just taking a shortcut, you’re potentially setting yourself up for disaster. 2. Technology isn’t Perfect- Neither are we! The problem with that combination is that we often expect technology to be perfect and that can lead to bad consequences. If you attempted to type 9 + 2 into your calculator and the result came up 101 you wouldn’t take that answer as truth. You’d recognize that the sort of answer you should get should be around 10, and you might even be able to work backward to find that the mistake you made was entering 99 + 2 on accident. However, if you’re snapping a picture of an equation you don’t really understand and the software interprets the numbers wrong, or you don’t take the picture correctly, you may end up proceeding off a bad result! 3. 
You’re Never Going to Be Able to Use That on Your Test- We want to test the processor that’s mounted above your shoulders, not carried in your pocket. Using a shortcut when time is of the essence makes sense, but knowing that you don’t need the shortcut when it comes right down to it is even better. # An Exponent Problem Consider the following question: $x=2^{y}-(8^{7}-8^{5})$ Which of the following values of y produces an x that is closest to 0? A) 24 B) 21 C) 20 D) 16 E) 14 Here’s a question designed for a calculator, you might say. But let’s set the calculator down for a moment and break this down. Realize that a problem like this is designed to be solvable. Let’s look at it step-by-step. Step 1: Ask yourself how could this be possible? It’s a useful question to ask whenever you encounter a difficult problem, whether inside the test prep world or outside, but it’s an especially useful question to ask in a situation such as this where you know there is some possible and not extremely difficult solution. Here, the answer to the question is that this must be true: $2^{y}-2^{y}=0$ That’s our path toward a solution. Step 2: Make the information you have look as much like your projected solution as possible. How do we make a base of 8 into a base of 2? $8^{7}=(2^{3})^{7}=2^{21}$ So, we can start to break down the second piece of the equation into: $8^{7}-8^{5}=2^{21}-2^{15}$ Step 3: Analyze. Now we must find what value of $2^{y}$ is closest to the value in the equation above. Again, you might be tempted to pull out a calculator, but resist the urge. $2^{21}-2^{15}$ appears to be an ugly jumble of numbers and symbols. What does it mean? Think about what powers of 2 mean. It means $2^{21}$ is twice as big as $2^{20}$ which is in turn twice as big as $2^{19}$ and so on. $2^{15}$ is only 1/64th as big as $2^{21}$ so subtracting it out doesn’t move us very close to $2^{20}$$2^{21}$ is still the value of y that gets us closest to 0, and B is your correct answer.
2014-11-28 08:20:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 13, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5284720063209534, "perplexity": 548.4632930716573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009900.4/warc/CC-MAIN-20141125155649-00081-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.quantopian.com/posts/error-computing-super-trend-indicator
Error computing super trend indicator Hello All, I'm trying to compute a SuperTrend indicator, but I cannot manage to correctly compute the final lower and upper bands. I don't understand why the np.where formula does not give the desired output, i.e.: IF basic_upper_band(n) < final_upper_band(n-1) OR close_price(n-1) > final_upper_band(n-1): final_upper_band(n) = basic_upper_band(n) ELSE final_upper_band(n) = basic_upper_band(n-1) I systematically get final_upper_band(n) = basic_upper_band(n), as if the first condition were always true, while it's not...
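The recurrence above is inherently sequential: the condition at step n references the *final* band at step n-1, which a single vectorized np.where over pre-shifted arrays cannot express. A minimal loop-based sketch (my own, mirroring the post's pseudocode exactly, including its ELSE branch):

```python
def final_upper_band(basic_ub, close):
    # Sequential evaluation: fub[n-1] must already be known before the
    # condition for step n can be tested, which is why a pre-shifted
    # np.where one-liner cannot reproduce it.
    fub = [basic_ub[0]]
    for n in range(1, len(basic_ub)):
        if basic_ub[n] < fub[n - 1] or close[n - 1] > fub[n - 1]:
            fub.append(basic_ub[n])
        else:
            fub.append(basic_ub[n - 1])  # ELSE branch as written in the post
    return fub

print(final_upper_band([10.0, 11.0, 12.0], [9.0, 9.0, 9.0]))  # [10.0, 10.0, 11.0]
```

Note that common SuperTrend formulations carry forward the previous *final* band in the ELSE branch; the sketch keeps the post's version as stated.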
2019-01-18 23:47:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8136849403381348, "perplexity": 4483.38029219015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660877.4/warc/CC-MAIN-20190118233719-20190119015719-00304.warc.gz"}
http://knotebook.org.s3-website-us-west-2.amazonaws.com/knotebook/racah/21/weights/weights.htm
## Highest weight vectors The simplest way to calculate Racah matrices is through construction of the highest weights, see arXiv:1112.2654 and arXiv:1508.02870. Special: • $[r]\otimes [r] \otimes \overline{[r]}\longrightarrow [r]$ • $[r]\otimes \overline{[r]} \otimes [r]\longrightarrow [r]$ • $[2,1]\otimes [2,1] \otimes \overline{[2,1]}\longrightarrow [2,1]$ • $[2,1]\otimes \overline{[2,1]} \otimes [2,1]\longrightarrow [2,1]$ Inclusive:
2018-01-17 18:18:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8864638209342957, "perplexity": 3044.4461787431082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886952.14/warc/CC-MAIN-20180117173312-20180117193312-00433.warc.gz"}
https://openehr.atlassian.net/browse/SPECRM-69
# Add missing ISM transitions and improve documentation of states. ## Description See for description. ## Activity Diego Bosca December 3, 2018, 9:01 AM After reading the change log, it feels wrong that these allowed states are only defined in diagrams and not in a processable format anywhere. Thomas Beale December 3, 2018, 10:35 AM Well, they are in the UML (XMI). Do you think we should export state machine specifications in a different format? Diego Bosca December 3, 2018, 10:44 AM I don't really know; maybe a table of sorts in the specs or something like that is enough. Ian McNicoll December 3, 2018, 10:45 AM Agree - some sort of tabular view that can be clicked through from a link. Thomas Beale December 3, 2018, 11:44 AM In the document - yes, I had thought of that before. The trouble is that it requires someone (probably me) upgrading the UML extractor to generate an Asciidoctor table for the state machine in the same way we do for the classes. Maybe the thing to do is simulate it manually for now and replace it later on with the automated version. I think that would be doable. I'll investigate.
2020-10-28 22:46:43
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8431108593940735, "perplexity": 3145.9933219381587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107902038.86/warc/CC-MAIN-20201028221148-20201029011148-00368.warc.gz"}
https://socratic.org/questions/what-is-the-equation-of-the-line-that-is-normal-to-f-x-2x-1-e-x-at-x-1-2
# What is the equation of the line that is normal to f(x)=(2x-1)/e^x at x=1/2 ?

Equation of the normal line: $y = -\frac{1}{2}e^{1/2}\left(x - \frac{1}{2}\right)$

#### Explanation:

$f(x) = \frac{2x-1}{e^x}$ at $x = \frac{1}{2}$

Solve for the point $(x_1, y_1)$: let $x_1 = \frac{1}{2}$, so $y_1 = f(x_1) = \frac{2x_1 - 1}{e^{x_1}} = \frac{2(\frac{1}{2}) - 1}{e^{1/2}} = \frac{1-1}{e^{1/2}} = 0$. Our point is $(x_1, y_1) = \left(\frac{1}{2}, 0\right)$.

Solve for the slope $m$ using the quotient rule:

$f'(x) = \frac{e^x \cdot \frac{d}{dx}(2x-1) - (2x-1) \cdot \frac{d}{dx}(e^x)}{(e^x)^2} = \frac{2e^x - (2x-1)e^x}{(e^x)^2} = \frac{2e^x - 2xe^x + e^x}{(e^x)^2}$

$m = f'\left(\tfrac{1}{2}\right) = \frac{2e^{1/2} - 2(\frac{1}{2})e^{1/2} + e^{1/2}}{(e^{1/2})^2} = \frac{2e^{1/2} - e^{1/2} + e^{1/2}}{e} = 2e^{-1/2}$

For the perpendicular (normal) line we need $m_p = -\frac{1}{m} = -\frac{1}{2e^{-1/2}} = -\frac{1}{2}e^{1/2}$.

Solve for the equation of the line using Point-Slope Form: $y - y_1 = m_p(x - x_1)$, so $y - 0 = -\frac{1}{2}e^{1/2}\left(x - \frac{1}{2}\right)$, that is, $y = -\frac{1}{2}e^{1/2}\left(x - \frac{1}{2}\right)$.

Kindly see the graph of $f(x) = \frac{2x-1}{e^x}$ (colored red) and the normal line $y = -\frac{1}{2}e^{1/2}\left(x - \frac{1}{2}\right)$ (colored blue). God bless....I hope the explanation is useful.
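The computed slopes can be double-checked numerically; the sketch below (plain Python, not part of the original answer) compares a central-difference derivative of f at x = 1/2 with $2e^{-1/2}$ and prints the normal slope:

```python
import math

def f(x):
    return (2 * x - 1) / math.exp(x)

x0, h = 0.5, 1e-6
tangent_slope = (f(x0 + h) - f(x0 - h)) / (2 * h)   # approximates f'(1/2)

print(abs(tangent_slope - 2 * math.exp(-0.5)) < 1e-6)  # True
print(-1 / tangent_slope)  # normal slope, about -0.8244 = -(1/2) e^(1/2)
```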
2021-06-23 15:56:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 26, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7573122382164001, "perplexity": 1455.4673347285122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539480.67/warc/CC-MAIN-20210623134306-20210623164306-00515.warc.gz"}
http://math.stackexchange.com/questions/147528/how-can-you-have-a-density-function-that-depends-on-a-parameter
# How can you have a density function that depends on a parameter? Let $X_1,...,X_n$ be a random sample from a distribution with mass/density function $f_X$ that depends on a (possibly vector) parameter $\theta$. Then $f_{X_1}(x_1) = f_X(x_1;\theta)$ so that $f_{X_1,...,X_k}(x_1,...,x_k) = \prod_{i=1}^kf_X(x_i;\theta).$ Could someone please explain what the significance of $\theta$ is in the above definition. I've never seen this before. Is it the mass/density function that depends on the parameter, or is it the random sample that depends on the parameter? - $\theta$ is called a parameter. It is an unknown constant that is part of the formula for the density function. So, for example, if you have a normal distribution with variance 1 and unknown mean, the density would be $f_X(x_1;\theta) = \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{(x_1-\theta)^2}{2}\right)$. $\theta$ is the unknown mean in this case. This is very common in statistics. You will need to get familiar with things like this if you want to be able to follow what is being said on this site about statistics.
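To make this concrete, here is a small sketch of my own (not from the thread; the sample values are made up) of the normal density with unknown mean $\theta$ from the answer, and of the joint density product from the question:

```python
import math

def normal_density(x, theta):
    """N(theta, 1) density: theta is a fixed unknown constant that is
    part of the formula, not a random quantity."""
    return math.exp(-(x - theta) ** 2 / 2) / math.sqrt(2 * math.pi)

def joint_density(xs, theta):
    # The i.i.d. product from the question: prod_i f_X(x_i; theta)
    return math.prod(normal_density(x, theta) for x in xs)

sample = [0.8, 1.2, 1.1]           # hypothetical observations
print(joint_density(sample, 0.0))  # smaller: theta = 0 explains the data poorly
print(joint_density(sample, 1.0))  # larger: theta = 1 fits better
```

The same data have different (joint) density values under different values of $\theta$; the data do not depend on $\theta$, the formula does.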
https://gmatclub.com/forum/winston-created-a-function-f-such-that-f-1-4-f-4-3-and-f-f-x-f-x-288563.html
# Winston created a function F such that F(1)=4, F(4)=3 and F(F(x))=F(x+2)-3

GMATH Teacher (GMATH founder), 12 Feb 2019, 14:45:

GMATH practice exercise (Quant Class 4)

Winston created a function F such that F(1)=4, F(4)=3 and F(F(x))=F(x+2)-3, for all integer values of x. What is the value of F(5)?

(A) 0
(B) 2
(C) 4
(D) 12
(E) Not necessarily one above

Math Expert, 12 Feb 2019, 18:39:

The function and the values given tell you that you have to find F(5) by rearranging the equation and substituting F(1) and F(4). Let us do it step-wise.
We are given F(x) and F(x+2), whose arguments differ by 2, so let us first find F(3).

Let x=1: F(F(1))=F(1+2)-3, so F(4)=F(3)-3, giving F(3)=3+3=6.

Now take x=3 to find F(5): F(F(3))=F(5)-3, i.e. F(6)=F(5)-3.

But F(6) can itself be found by taking x=4: F(F(4))=F(6)-3, so F(3)=F(6)-3, giving 6=F(6)-3 and F(6)=9.

Substitute into F(6)=F(5)-3: 9=F(5)-3, so F(5)=12.

D

GMATH Teacher, 12 Feb 2019, 18:51:

$$F(F(x)) = F(x+2) - 3\,,\quad x\ \text{integer}\quad (*)$$

$$F(1)=4\,,\qquad F(4)=3\quad (**)$$

$$? = F(5)$$

$$x = 3\ \ \overset{(*)}{\Rightarrow}\ \ F(F(3)) = F(5) - 3\ \ \Rightarrow\ \ ? = F(F(3)) + 3$$

$$x = 1\ \ \overset{(*)}{\Rightarrow}\ \ F(F(1)) = F(3) - 3\ \ \Rightarrow\ \ F(3) = F(F(1)) + 3 \overset{(**)}{=} F(4) + 3 \overset{(**)}{=} 3 + 3 = 6$$

$$? = F(6) + 3$$

$$x = 4\ \ \overset{(*)}{\Rightarrow}\ \ F(F(4)) = F(6) - 3\ \ \Rightarrow\ \ F(6) = F(F(4)) + 3 \overset{(**)}{=} F(3) + 3 = 6 + 3 = 9$$

$$? = F(6) + 3 = 9 + 3 = 12$$

We follow the notations and rationale taught in the GMATH method.

Regards,
Fabio
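The chain of substitutions above can be checked mechanically. This is a sketch of my own (not from the thread): store the known values of F and apply the recurrence rewritten as F(x+2) = F(F(x)) + 3.

```python
# Known values of Winston's function
F = {1: 4, 4: 3}

def extend(x):
    """Use F(x+2) = F(F(x)) + 3 to derive a new value from known ones."""
    F[x + 2] = F[F[x]] + 3

extend(1)   # F(3) = F(F(1)) + 3 = F(4) + 3 = 6
extend(4)   # F(6) = F(F(4)) + 3 = F(3) + 3 = 9
extend(3)   # F(5) = F(F(3)) + 3 = F(6) + 3 = 12

print(F[5])  # 12, answer (D)
```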
https://benknoble.github.io/blog/2019/11/07/git-stat/
# Junk Drawer

For all those little papers scattered across your desk

# Re: Dumpster Diving through Dotfiles

D. Ben Knoble on 07 Nov 2019 in Blog

I learned a lot after some advice from thoughtbot.

Update 2019-11-19: slides for a presentation on this performance mystery

Update 2019-11-29: Both HTML traces are available on Google Drive: 1, 2

## Dumpster-diving?

Thoughtbot apparently started a new series on finding tricks in folks' Dotfiles. As you may know by now, I maintain my own over on GitHub. I peruse others for fun and often learn a lot. Well, in the course of adjusting their git-branches alias, I found a new trick.

## g for git

I have long had `alias g='git'` in my bash files, along with the appropriate completion definition. But I saw that thoughtbot had a `g` function which does a git-status if no arguments are provided! This is genius, says I, and I convert.

Aside: In doing so, I forgot to `unalias g` prior to reloading my files. That caused the function definition `g() { ... }` to expand to `git() { ... }`; worse, since that definition calls git, I had made every invocation of git (including that of my PS1 prompt) a recursive stack overflow. Lesson: prefer functions if the complexity can grow.

## More performance problems

Anyways, as you may remember from earlier, my Dotfiles repo is big. At time of writing:

```
λ g cstat
count 1707
λ du -hd1 .git
 16K    .git/gitweb
1,4M    .git/objects
  0B    .git/rr-cache
8,0K    .git/info
 84K    .git/logs
 24K    .git/hooks
 12K    .git/refs
 40M    .git/modules
 42M    .git
```

So the sucker is, well, large (and note how big `.git/modules` is; we'll be coming back to that). Testing out my new `g` function, I found that `git status -sb` was taking half a second to complete, whereas on other repos it ran 10-100x faster. What gives?

Naïvely, I tried `git gc` (even with `--aggressive`, which git admits you probably don't need). This shaved off 6MB to get the numbers you see now. But the performance didn't waver.
Thinking back to those numbers (and a trace of git code I'd accidentally done recently, wherein there were calls to submodule code), I remembered this little gem:

```
λ g submodule | wc -l
47
```

A quick experiment revealed the issue. After issuing `git submodule deinit --all`, speeds were back to normal. Having a lot of vim plugins is catching up to me…

Long story short, turning off submodule stuff in the status brings me back down to the right time. You can do this with a flag on status, but I ended up doing

```
λ g config diff.ignoresubmodules all
```

So only my Dotfiles, where the submodules are a problem, ignores them. Lesson: if your repo has a lot of submodules, you can speed up some operations with this local configuration. But check `git submodule summary` every now and then, just to be sure. In my workflow, I almost always only make changes by updating the submodules from upstream, so this can be less of an issue.
https://math.stackexchange.com/questions/936077/solving-first-order-differential-equations
# Solving first order differential equations

So for this one I'm having trouble isolating for y. If that's not possible, then I'd like it in separated form, with dy and the y variable on one side and dx and the x variable on the other. $$\frac{dy}{dx}-2xy=e^{x^{2}}.$$

• This isn't a separable equation. A better strategy is to look for an integrating factor. – Semiclassical Sep 18 '14 at 5:13

If you have a differential equation of the form $$\frac{dy}{dx} + a(x) y(x) = b(x) \tag{1}$$ we call the equation a first-order linear ODE and we can obtain its solution using the following method. First, we multiply both sides of $(1)$ by a function $f(x)$ (called the integrating factor) and we obtain $$f y' + fay = fb \tag{2}$$ Using the product rule $(fy)' = fy' + f'y$, we can rewrite $(2)$ as $$(fy)' - f'y + fay = fb \Longrightarrow (fy)' + y \left(fa - f' \right) = fb \tag{3}$$ If it is the case that $fa - f' = 0$, then the LHS of $(3)$ is just $(fy)'$, and integrating both sides would yield an expression for $fy$. So let's solve for the $f$ that guarantees that $f' = fa$.

Solving this separable differential equation for $f$, we get that $$\frac{df}{dx} = f(x)a(x) \Longrightarrow \frac{df}{f} = a(x)\, dx \Longrightarrow \log(f(x)) = \int a(x)\, dx \Longrightarrow f(x) = e^{\int a(x)\, dx}$$ Using the $f$ we just found, $(3)$ therefore reduces to $$(fy)' = fb \Longrightarrow fy = \int fb \: dx \Longrightarrow y = \frac{\int fb \: dx}{f}$$ Plugging in our formula for $f(x)$, we get that the solution to $(1)$ is $$y(x) = \displaystyle\frac{\displaystyle\int \left(b(x) \: e^{\int a(x)\, dx} \right) dx}{e^{\int a(x)\, dx}}$$ Now, noting that $a(x) = -2x$ and $b(x) = e^{x^2}$ in your example, we have $f(x) = e^{\int -2x\, dx} = e^{-x^2}$, so $$y(x) = \frac{\displaystyle\int \left(e^{x^2} e^{-x^2}\right)dx}{e^{-x^2}} = \frac{\displaystyle\int 1\, dx}{e^{-x^2}} = (x+c)\,e^{x^2} \Longrightarrow \boxed{y(x) = xe^{x^2} + c e^{x^2}}$$

Hint: Multiply throughout by the integrating factor, $e^{\int -2x\, dx} = e^{-x^2}$. Then notice that \begin{align}\frac{d}{dx}(e^{-x^2}y) &= e^{-x^2}\frac{dy}{dx} - 2e^{-x^2}xy\\&=e^{-x^2}\cdot\text{LHS}\end{align}

Rewrite the equation like this: $$e^{-x^2}\frac{dy}{dx}-2xe^{-x^2}y=1$$ Notice that if we apply the product rule in differentiating $ye^{-x^2}$ with respect to $x$, we get exactly the left-hand side. In other words, the equation is equivalent to: $$\frac{d(e^{-x^2}y)}{dx}=1$$ Integrating both sides yields: $$e^{-x^2}y=x+c\implies y=xe^{x^2}+ce^{x^2}.$$ This kind of technique can be generalised to the method of 'integrating factors'; however, it happens to work out nicely enough here that you can just follow your nose.

Hint: Because of the rhs, suppose that you define $y(x)=z(x)e^{x^2}$; then the differential equation becomes $$z'(x)=1,$$ which is quite easy to integrate for $z(x)$. I am sure that you can take it from here.
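As a numerical sanity check (my own sketch, not from the answers), one can verify with a finite difference that $y(x) = (x+c)e^{x^2}$ satisfies $y' - 2xy = e^{x^2}$:

```python
import math

def y(x, c=1.0):
    # candidate solution y = (x + c) * e^(x^2)
    return (x + c) * math.exp(x * x)

def residual(x, c=1.0, h=1e-6):
    # y'(x) - 2x*y(x) - e^(x^2), with y' approximated by a central difference;
    # should be (numerically) zero if y solves the ODE
    dy = (y(x + h, c) - y(x - h, c)) / (2 * h)
    return dy - 2 * x * y(x, c) - math.exp(x * x)

for x in (0.0, 0.5, 1.0):
    assert abs(residual(x)) < 1e-4  # holds for any c; c = 1 is arbitrary
```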
https://antescofo-doc.ircam.fr/Library/Functions/rnd_geometric/
@rnd_geometric(p:int) returns a random generator that produces integers following a geometric discrete distribution: P(i | p) = p (1 - p)^i, i ≥ 0. See @rnd_bernoulli for a description of random generators.
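For illustration only (a plain Python sketch of my own, not Antescofo code and not its implementation), a generator with this distribution can be built by counting the failures before the first success of a Bernoulli(p) trial:

```python
import random

def make_rnd_geometric(p, rng):
    """Sampler for P(i | p) = p * (1 - p)**i, i >= 0: count the
    failures before the first success of a Bernoulli(p) trial."""
    def sample():
        i = 0
        while rng.random() >= p:
            i += 1
        return i
    return sample

g = make_rnd_geometric(0.5, random.Random(0))
draws = [g() for _ in range(100_000)]
# The mean of this distribution is (1 - p) / p, i.e. 1 for p = 0.5
print(sum(draws) / len(draws))
```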
https://chaos.if.uj.edu.pl/~karol/Maestro7/preprints.html
Preprints

8) D. Goyeneche, W. Bruzda, O. Turek, D. Alsina, K. Życzkowski
The powerfulness of classical correlations

Determination of classical and quantum values of bipartite Bell inequalities plays a central role in quantum nonlocality. In this work, we characterize in a simple way bipartite Bell inequalities, free of marginal terms, for which the quantum value can be achieved by considering a classical strategy, for any number of measurement settings and outcomes. These findings naturally generalize known results about nonlocal computation and quantum XOR games. Additionally, our technique allows us to determine the classical value for a wide class of Bell inequalities, having quantum advantage or not, in any bipartite scenario.

7) F. Shahbeigi, D. Amaro-Alcala, Z. Puchała and K. Życzkowski
Log-Convex set of Lindblad semigroups acting on N-level system

We analyze the set $\mathcal{A}^Q_N$ of mixed unitary channels represented in the Weyl basis and accessible by a Lindblad semigroup acting on an $N$-level quantum system. General necessary and sufficient conditions for a mixed Weyl quantum channel of an arbitrary dimension to be accessible by a semigroup are established. The set $\mathcal{A}^Q_N$ is shown to be log-convex and star-shaped with respect to the completely depolarizing channel. A decoherence supermap acting in the space of Lindblad operators transforms them into the space of Kolmogorov generators of classical semigroups. We show that for mixed Weyl channels the hyper-decoherence commutes with the dynamics, so that decohering a quantum accessible channel we obtain a bistochastic matrix from the set $\mathcal{A}^C_N$ of classical maps accessible by a semigroup. Focusing on 3-level systems we investigate the geometry of the sets of quantum accessible maps, its classical counterpart and the support of their spectra.
We demonstrate that the set $\mathcal{A}^Q_3$ is not included in the set $\mathcal{U}^Q_3$ of quantum unistochastic channels, although an analogous relation holds for $N=2$. The set of transition matrices obtained by hyper-decoherence of unistochastic channels of order $N \geq 3$ is shown to be larger than the set of unistochastic matrices of this order, and yields a motivation to introduce the larger sets of $k$-unistochastic matrices.

6) P. Horodecki, Ł. Rudnicki and K. Życzkowski

Five selected problems in the theory of quantum information are presented. The first four concern existence of certain objects relevant for quantum information, namely mutually unbiased bases in dimension six, an infinite family of symmetric informationally complete generalized measurements, absolutely maximally entangled states for four subsystems with six levels each and bound entangled states with negative partial transpose. The last problem requires checking whether a certain state of a two-ququart system is 2-copy distillable. Finding a correct answer to any of them will be rewarded by the Golden KCIK Award established by the National Quantum Information Centre (KCIK) in Poland. A detailed description of the problems in question, the motivation to analyze them, as well as the rules for the open competition are provided.

5) W. Bruzda, S. Friedland, K. Życzkowski
Tensor rank and entanglement of pure quantum states

The rank of a tensor is analyzed in the context of the description of entanglement of pure states of multipartite quantum systems. We discuss the notions of the generic rank of a tensor with $d$ indices and $n$ levels in each mode and the maximal rank of a tensor of these dimensions. Another variant of this notion, called the border rank of a tensor, is shown to be relevant for the characterization of orbits of quantum states generated by the group of special linear transformations.
As entanglement of a given quantum state depends on the way the total system is divided into subsystems, we introduce a notion of 'partitioning rank' of a tensor, which depends on how the entries forming the tensor are treated. In particular, we analyze the tensor product of several copies of the $n$-qubit state $|W_n\rangle$ and study its partitioning rank for various splittings of the entire space. Some results concerning the generic rank of a tensor are also provided.

4) K. Korzekwa, Z. Puchała, M. Tomamichel, K. Życzkowski
Encoding classical information into quantum resources

We introduce and analyse the problem of encoding classical information into different resources of a quantum state. More precisely, we consider a general class of communication scenarios characterised by encoding operations that commute with a unique resource destroying map and leave free states invariant. Our motivating example is given by encoding information into coherences of a quantum system with respect to a fixed basis (with unitaries diagonal in that basis as encodings and the decoherence channel as a resource destroying map), but the generality of the framework allows us to explore applications ranging from super-dense coding to thermodynamics. For any state, we find that the number of messages that can be encoded into it using such operations in a one-shot scenario is upper-bounded in terms of the information spectrum relative entropy between the given state and its version with erased resources. Furthermore, if the resource destroying map is a twirling channel over some unitary group, we find matching one-shot lower bounds as well. In the asymptotic setting where we encode into many copies of the resource state, our bounds yield an operational interpretation of resource monotones such as the relative entropy of coherence and its corresponding relative entropy variance.

3) B. Jonnadula, P. Mandayam, K. Życzkowski, A.
Lakshminarayan
Thermalization of entangling power with arbitrarily weak interactions

The change of the entangling power of n fixed bipartite unitary gates, describing interactions, when interlaced with local unitary operators describing monopartite evolutions, is studied as a model of the entangling power of generic Hamiltonian dynamics. A generalization of the local unitary averaged entangling power for arbitrary subsystem dimensions is derived. This quantity shows an exponential saturation to the random matrix theory (RMT) average of the bipartite space, indicating thermalization of quantum gates that could otherwise be very non-generic and have arbitrarily small, but nonzero, entanglement. The rate of approach is determined by the entangling power of the fixed bipartite unitary, which is invariant with respect to local unitaries. The thermalization is also studied numerically via the spectrum of the reshuffled and partially transposed unitary matrices, which is shown to tend to the Girko circle law expected for random Ginibre matrices. As a prelude, the entangling power $e_p$ is analyzed along with the gate typicality $g_t$ for bipartite unitary gates acting on two qubits and some higher dimensional systems. We study the structure of the set representing all unitaries projected into the plane $(e_p,g_t)$ and characterize its boundaries, which contain distinguished gates including the Fourier gate, CNOT and its generalizations, swap and its fractional powers. In this way, a family of gates with extreme properties is identified and analyzed. We remark on the use of these operators as building blocks for many-body quantum systems.

2) I. Bengtsson and K. Życzkowski
On discrete structures in finite Hilbert spaces

We present a brief review of discrete structures in a finite Hilbert space, relevant for the theory of quantum information.
Unitary operator bases, mutually unbiased bases, Clifford group and stabilizer states, discrete Wigner function, symmetric informationally complete measurements, projective and unitary t-designs are discussed. Some recent results in the field are covered and several important open questions are formulated. We advocate a geometric approach to the subject and emphasize numerous links to various mathematical problems.

1) I. Bengtsson and K. Życzkowski
A brief introduction to multipartite entanglement

A concise introduction to quantum entanglement in multipartite systems is presented. We review entanglement of pure quantum states of three-partite systems analyzing the classes of GHZ and W states and discussing the monogamy relations. Special attention is paid to equivalence with respect to local unitaries and stochastic local operations, invariants along these orbits, momentum map and spectra of partial traces. We discuss absolutely maximally entangled states and their relation to quantum error correction codes. An important case of a large number of parties is also analysed and entanglement in spin systems is briefly reviewed.
http://math.stackexchange.com/questions/92724/boundedness-in-probability
# Boundedness in probability Suppose we have a set $M$ of random variables on a probability space. Then we defined boundedness of $M$ as: $M$ is bounded if $\sup_{X\in M}P(|X|>N) \to 0$ as $N \to \infty$. This definition means that the measure of the set where the elements of $M$ are big is very small, and in fact tends to zero. I have three questions about this definition and some further conclusions. Now suppose we have an unbounded set. My questions are: 1. Unbounded would mean that for every $N>0$ there exists an $\epsilon >0$ and an $X \in M$ such that $P(|X| > N) \ge \epsilon$. Is this conclusion right? 2. If there's a sequence $(X_n)$ of random variables, unbounded and positive, then there's a subsequence $(X_{n_k})$ and a $\lambda>0$ such that $P(|X_{n_k}| > k)\ge \lambda$ for every $k \in \mathbb{N}$. My observations so far on 2. After the comment of Srivatsan (see below), we therefore have: $\exists \epsilon > 0$ such that for all $N>0$ there exists an $X_n$ such that $P(|X_n| > N) \ge \epsilon$. Put $N=1$; hence there is an $X_{n_1}$ such that $P(|X_{n_1}| > 1 ) \ge \epsilon$. Now put $N=2$; hence there is an $X_i$ such that $P(|X_i| > 2) \ge \epsilon$. Now the problem is: why do I know that $i> n_1$? Otherwise it isn't a subsequence. hulik - "Unbounded" means that there exists an $\varepsilon > 0$ such that for all $N$, there exists $X \in M$ with $P(|X| \geqslant N) \geqslant \varepsilon$. –  Srivatsan Dec 19 '11 at 11:40 In question 2, you probably meant "$P(|X_{n_k}|>k)\geq\lambda$", since with $X_k=k/2$ we get a counter-example. But what does question 3 become? –  Davide Giraudo Dec 19 '11 at 12:43 You're right, I edited that. I'm very sorry but I don't know what you mean with: "but what does question 3 become?" –  user20869 Dec 19 '11 at 12:46 If I understand well, question 3 is: do we have, for almost every $\omega$ and infinitely many $k$, $|X_{n_k}(\omega)|\geq k$?
It's true by the Borel-Cantelli lemma if the $X_{n_k}$ are independent, but not in general; for example, take $\Omega=[0,1]$ with Lebesgue measure and $X_n=(n+1)\mathbf 1_{[0,\lambda)}$. –  Davide Giraudo Dec 19 '11 at 12:54 I do not know that the $(X_n)$ are independent. The reason for question 3 was $(1)$. As I wrote, I try to prove $(1)$ and I do not see why this is true unless we know that for almost all $\omega$ and infinitely many $k$, it's true that $|X_{n_k}(\omega)| \ge k$. Since I cannot assume independence, I have to find a different argument why $(1)$ is true. –  user20869 Dec 19 '11 at 13:00 (1) is not correct. Let $Y$ be a single random variable such that, for every $N$, we have $P(|Y| > N) > 0$; for example, a normal random variable. (I would call such a random variable "unbounded", but that conflicts with your terminology.) Then let $M = \{Y\}$. $M$ is bounded in your sense (exercise: verify this, using continuity of probability) but it is still true that for every $N$ there exists $X \in M$ (namely $X=Y$) and $\epsilon > 0$ (namely $\epsilon = \frac{1}{2}P(|Y|>N)$) such that $P(|X| > N) > \epsilon$. Srivatsan's comment above gives a corrected statement; I just wanted to show explicitly that the statement in the original question (for every $N>0$ there exists an $\epsilon >0$ and $X \in M$ such that $P(|X|>N) \ge \epsilon$) is not correct. Regarding (2): You know that there is an $\epsilon$ such that for any $N$ there is an $X_n$ with $P(|X_n|>N) > \epsilon$. In fact, for any $N$ there are infinitely many such $X_n$; thus you can always choose one that occurs later in the sequence than all the ones chosen so far. To see that there must be infinitely many such $X_n$, suppose there were only finitely many and show that in fact we would have to have $\sup_n P(|X_n|>N) \to 0$ as $N \to \infty$. (Warmup: what if there were only one such $X_n$? Now what if there were only two?) Hint: Fix a random variable $X$.
This is a measurable function $X : \Omega \to \mathbb{R}$; for every $\omega \in \Omega$, $|X(\omega)|$ is some real number, and so there is an integer (depending on $\omega$) which is greater than it. It follows that $\bigcap_{N \in \mathbb{N}} \{|X| \ge N\} = \emptyset$. Now "continuity from above" (which follows from countable additivity) implies that $\lim_{N \to \infty} P(|X| \ge N) = 0$. This is the key step you need. - Thanks Nate Eldredge for your answer! With "(1) is not correct" you mean: for every $N >0$ there exists an $\epsilon >0$ and $X \in M$ such that $P(|X|> N)\ge \epsilon$, right? As you can see in the comments above, Srivatsan corrected this already. At this point I just have trouble with 2); see also my observations so far for 2) above. –  user20869 Dec 20 '11 at 18:44 @hulik: Right. I've edited to address this, and also your question (2). –  Nate Eldredge Dec 20 '11 at 20:27 Thanks for answering 2)! I have a question about that: We know there exists $\epsilon$ such that for every $N>0$ there's an $X_n$ with $P(|X_n|>N) > \epsilon$. You claim that for every $N$ there are infinitely many $X_n$ with this property. So let $N>0$ and suppose there's just one such $X_n$. First it's clear: $\mathbf1\{|X_n|>N\} \le \mathbf1\{|X_n|>N-1\} \le \dots \le \mathbf1\{|X_n|>1\}$. So if I can prove that there's no $X_n$ such that for this fixed $X_n$: $P(|X_n|> N ) > \epsilon$ for all $N$, then I'm done. But I do not see why this could not be the case.... –  user20869 Dec 21 '11 at 8:44 Perhaps there's an element $\tilde{X}$ of the sequence which is, on a set with large enough measure, as big as I want. If I can prove that this doesn't hold for 1), then using induction I can prove your claim. However, I do not see how to prove this for one, as mentioned above. Again thanks for your help! –  user20869 Dec 21 '11 at 8:47 @hulik: I added another hint. –  Nate Eldredge Dec 21 '11 at 19:02
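To see the definitions in action, here is a small sketch of my own (not from the thread) comparing a bounded family, the single normal variable $M=\{Y\}$ from the answer, with Davide's unbounded family $X_n = (n+1)\mathbf 1_{[0,\lambda)}$ on $\Omega=[0,1]$; for the first, the supremum of the tail probabilities shrinks to 0, while for the second it stays at $\lambda$ for every $N$:

```python
import math

lam = 0.3  # lambda, the Lebesgue measure of [0, lam) in Omega = [0, 1]

def tail_normal(N):
    # P(|Y| > N) for a standard normal Y
    return math.erfc(N / math.sqrt(2))

def tail_Xn(n, N):
    # X_n = (n+1) * indicator([0, lam)), so P(X_n > N) is lam or 0
    return lam if n + 1 > N else 0.0

for N in (1, 10, 100):
    sup_bounded = tail_normal(N)                       # shrinks toward 0
    sup_unbounded = max(tail_Xn(n, N) for n in range(2 * N))  # stays at lam
    print(N, sup_bounded, sup_unbounded)
```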
https://www.gamedev.net/forums/topic/328818-2d-map-file-syntax/
# 2D map file syntax

This topic is 4800 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

## Recommended Posts

I know this question has come up before, but I haven't really been able to get a good answer from any of the threads that show up on this topic. What I'm looking for is a good file syntax for storing a map for a 2D, tiled game. I've been playing around with a couple of ideas, neither of which really appeals to me. The data I store in the file will be loaded into a structure that looks like this:

struct Tile { size_t x, y; char* graphic; int bitflags; }

One idea I've tried is to adopt a pseudo C++ syntax, and provide all the data through text, something like this:

tile { graphic = "FOREST.TREE" posx = 5 posy = 5 extras = ... (whatever) }

This has an obvious disadvantage: if my map is, say, 20x20 tiles I would need 400 entries, all of this length, and a 20x20 map is a very moderate size. Another possibility I've toyed with is this:

WIDTH = 5
HEIGHT = 5
define T "FOREST.TREE"
define P "FOREST.PATH"
TTPTT
TTPTT
TTTPT
TTPTT
TTPTT

I like this because it stores the position of tiles implicitly, cutting down on filesize. However, this format is limited to about 26 tile types (one for each letter). Also, I have no way to store "extra" data this way; I would probably have to store extras (like sprite footprints, or treasure chests) as I stored information in my first example, specifying coordinates and data. A third possibility I've heard is to use binary I/O: I've never worked with this, although I'll gladly learn it if I'm told it's the best way. However, one short-term problem I could see with this solution is that it would be difficult to edit a binary file without a map editor, which would probably be a lot of extra work to write.
This isn't insurmountable (I'll probably have to write a map editor at some point anyway) but I'd prefer not to have to design an editor at the same time I'm trying to design a file structure. Does anyone have an opinion on which idea is better (or, better yet, a better idea)?

##### Share on other sites

You could use XML. Since there are a number of XML libraries available (like TinyXML) it would save you from having to write as much code as you would if you were to adopt the pseudo-C++ format. However, using XML would suffer from the same problem as the pseudo-C++ format: too many entries. In both formats it's also difficult to see how the changes you make to the file affect the map as a whole, while your second format (which unfortunately doesn't store all the information you need) does quite well here. Personally, I would bite the bullet and design a map editor. It will make editing maps for you much easier even if it is a lot of work in the beginning.

##### Share on other sites

Thanks for the response; I feared that creating a map editor would be the way to go. That being said, what would be the easiest way to go about creating this editor? Most of my experience has been with strict C++, and I would hate to try to program stuff like scrollbars and input boxes without some sort of library. I have a little experience with Visual Basic and it seems like a possible option since this looks to be a RAD project. Or should I look into learning something like MFC--this might allow me to preserve a lot of the code I've already written for loading and rendering tiles/sprites since I could (presumably?) just stick the MFC code on top of my already written classes?

##### Share on other sites

I've used the 2nd option you suggest with a dungeon romp game I once made in college, in which a single character indicates what type of tile you are using and so forth. While this will of course work, it isn't very extensible.
For instance, if you want several things going on in a tile, you have to have several instances of the map for the different types of elements that can be in each tile. This of course breaks down because, let's say, you have one map for floor tiles, such as

TTTPTTPTT
TTTTPTPTT
TTTTTPTTT
TTTTTPTTT
TTTTPPPTT

for trees and path. Now, how about a monster?

.........
.........
.........
.........
......M..

And objects (treasure, items, etc.)?

...I.....
.........
.........
.....T...
.........

This quickly breaks down because you will have too many maps to parse. Your idea for key-value pairs with item and location is better. But I find this approach in general flawed because it's not very extensible or scalable. The last game I wrote I used a strategy similar to your first idea, in which I enumerate the tile, and list all the extras or "objects" that are in that tile, such as enemies, background objects, etc. This approach works very well, and I haven't had any problems with it. I didn't go the XML route, but I certainly could have. It makes your flat text file more human-readable, but beyond that, if you have a set file format, it won't gain you anything (other than ease of parsing if you use a pre-made XML parser, which I would highly recommend). So using XML in combination with your tile structure format seems logical. I hand-edited flat text files for a long time while developing my last game, and found it to be VERY tedious. Whatever strategy you end up using (character blocks, tile descriptions, XML entities), I would highly recommend taking the time to write a level editor before/during writing the game itself. It will make you think about how you structure your map data, and will make creating levels much, much simpler and much faster. I wasted so many hours manually editing tiles in my level files. I wrote my level editor in MFC, but then again at the time I knew MFC and it wasn't too much of a big deal.
MFC can be quite a bear sometimes, but it will help you develop a GUI level editor with (relative) ease.

##### Share on other sites

Don't worry too much about file size. Worst case you end up with something huge - then you just write a converter that changes to a binary format for use when you actually distribute the game. Text based (XML is good) is great during development because you can easily hand-alter it, and it tends to be robust when you add extra data to your file format. Even if you do a map editor (I do) it's still good to have an XML-based file.

##### Share on other sites

There's a Java tile editor called "Tiled", amazingly enough, that outputs XML, where the map data is either a table or a base64 string, depending on what you pick. I found it's plenty easy to yoink the base64 string and put it in my own data file (a Lua script file to be exact), and then drop that data file into a zip package file and load it from there using the miniunzip contrib that came with zlib. Sounds complicated, but it all works out.

##### Share on other sites

You could switch from a single-character map to a numeric map.

00 00 00 01 01 01
00 00 01 01 01 00
...

If you run out of space (i.e. hit 99), just add more permitted digits to the format. I liked your code / pseudo code of #define T "Forest.Tree"

##### Share on other sites

Test Map //map name
5 //map width
3 //map height
@setTileID 0 tiles\rock
@setTileID 1 tiles\grass
@setTileID 2 tiles\trees
0,1,0,1,0
2,0,1,0,1
2,2,0,1,0
@setObjectID 0 items\gold
@setObjectID 1 monsters\orc
@placeObject 0, 3, 2
@placeObject 1, 3, 2
@placeObject 1, 3, 3
@placeObject player\spawn, 0, 0

##### Share on other sites

For Blob Wars : Metal Blob Solid I saved map data as,

0 0 0 0 1 0
0 2 0 0 0 3
0 1 1 2 0 0
2 0 1 2 0 4
...

The numbers would refer to an image in an image index. Because the map was saved as a 2D array of chars it meant I could use numbers between 0 and 255 for the tile indexes. This was more than enough.
The type of tile would be assumed. So,

0 would be air
1 would be water
2 would be slime
3 would be lava
4 - 100 would be solid tiles

etc. Next, objects would be stored in the same map file:

ITEM 100 320 256 "Key Card" RedKeyCard

The first parameter would tell the map loader this is an Item. The second parameter would define the type of item. The third would be the x coordinate on the map. The fourth would be the y coordinate on the map. The fifth would be the name of the object. The sixth would be the sprite name. Using a format like this would mean you can edit your map items easily in a text editor and also understand what they are. It's also quite scalable.
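To make the directive-style format from the "Test Map" post concrete, here is a rough loader sketch. It's written in Python purely for brevity (the thread is about C++), the directive names are simply taken from that sample post, and a real loader would of course need error handling:

```python
def load_map(text):
    """Parse a tiny directive-based map format:
    '@setTileID <id> <path>' registers a tile graphic,
    '@placeObject <id>, <x>, <y>' drops an object on the map,
    and bare comma-separated rows form the tile grid."""
    tiles, objects, grid = {}, [], []
    for raw in text.splitlines():
        line = raw.split('//')[0].strip()   # strip // comments
        if not line:
            continue
        if line.startswith('@setTileID'):
            _, tid, path = line.split(None, 2)
            tiles[int(tid)] = path
        elif line.startswith('@placeObject'):
            args = line[len('@placeObject'):].split(',')
            objects.append((args[0].strip(), int(args[1]), int(args[2])))
        elif ',' in line:
            grid.append([int(v) for v in line.split(',')])
    return tiles, objects, grid

sample = """\
@setTileID 0 tiles/rock
@setTileID 1 tiles/grass
0,1,0
1,0,1
@placeObject 0, 1, 2
"""
tiles, objects, grid = load_map(sample)
assert tiles[1] == 'tiles/grass'
assert grid[1] == [1, 0, 1]
assert objects[0] == ('0', 1, 2)
```

The nice property of this kind of format, as the posts above note, is that positions are implicit in the grid while objects carry explicit coordinates, so the two can coexist in one file.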
https://www.iqtestpreparation.com/daily-test/2552
# IQ Contest, Daily, Weekly & Monthly IQ & Mathematics Competitions

#### Question No 1

(854 x 854 x 854 - 276 x 276 x 276) / (854 x 854 + 854 x 276 + 276 x 276) = ?

Solution! Using a³ - b³ = (a - b)(a² + ab + b²), the expression reduces to (a - b) = 854 - 276 = 578.

#### Question No 2

Average marks of 36 students are 92.5 and of 12 students are 95. What will be the average of all students?

Solution! No explanation available for this question.

#### Question No 3

Mustang : ______ : : jaguar : cat

Solution! A mustang is a type of horse; a jaguar is a type of cat.

#### Question No 4

2, 3, 5, 7, 11, ___, 17, 19 Find the missing.

Solution! All are prime numbers.

#### Question No 5

_______ : draw :: list : slit

Solution! "Ward" is an anagram of "draw" and "list" is an anagram of "slit".

#### Question No 6

Choose or find the odd pair of words: (Cat : Paw), (Lizard : Pad), (Horse : Hoof), (Man : Leg)

Solution! In all other pairs, the second is the name given to the foot of the first.

#### Question No 7

Solution! No explanation available for this question.

#### Question No 8

A farmer bought a harvester for Rs. 1500 and sold it for Rs. 1550. Find its profit percentage.

Solution! Profit = 1550 - 1500 = 50. Profit percentage = 50/1500 x 100 = 3.3%.

#### Question No 9

Soldier is to bullet as Student is to

Solution! No explanation available for this question.

#### Question No 10

John can swim at a speed of 15 km per hour in still water, and the speed of the water is 5 km per hour. What is his speed with the water?

Solution! John's speed in still water = 15. Speed of water = 5. John's speed with the water = 15 + 5 = 20.

#### Question No 11

The unit digit in the product (784 x 618 x 917 x 463) is:

Solution! Unit digit in the given product = unit digit in (4 x 8 x 7 x 3) = (672) = 2.

#### Question No 12

In a fashion show, 20% of the models have blue eyes. If there are 30 blue-eyed models, how many models are in the show?

Solution! Let x be the total number of models. 20% of x = 30, so (20/100) x x = 30, giving x = 30 x 100/20 = 150.
#### Question No 13

1, 3, 9, 27, ____, 243 Find the missing.

Solution! Powers of 3: 3^0, 3^1, 3^2, 3^3, 3^4, 3^5.

#### Question No 14

The reporter gave a ________ and accurate account of the event.

Solution! "Concise" is the correct one.

#### Question No 15

Idioms and phrases: Turn the corner

Solution! Description is not available for this question.

#### Question No 16

A vendor bought chocolates at 6 for a rupee. How many for a rupee must he sell to gain 20%?

Solution! To gain 20%, S.P. of 6 chocolates = 120% of Re. 1 = Rs. 6/5. For Rs. 6/5, chocolates sold = 6. For Re. 1, chocolates sold = 6 x 5/6 = 5.

#### Question No 17

Find the odd one out:

Solution! No explanation available for this question.

#### Question No 18

In the following questions, one term in the number series is incorrect. Find out the incorrect number: 10, 14, 28, 32, 64, 68, 132

Solution! Alternately, the numbers are increased by four and doubled to get the next number. Thus, 10 + 4 = 14; 14 x 2 = 28; 28 + 4 = 32; 32 x 2 = 64 and so on. So, 132 is wrong and must be replaced by (68 x 2), i.e. 136.

#### Question No 19

(2-1) + (3-2) + (4-3) + ..... + (10-9) = ?

Solution! No explanation available for this question.

#### Question No 20

The ratio between my age and my brother's age is 3:1 in 2015. In 2013, the ratio was 4:1. What would be my age in 2014?

Solution! No explanation available for this question.

#### Question No 21

Bill runs faster than Bernard. Bernard runs faster than Xerses. Xerses runs faster than Bill. If the first two statements are true, then the third will be:

Solution! No explanation available for this question.

#### Question No 22

A passenger train takes 2 hours less for a journey of 300 km if its speed is increased by 5 kilometres per hour from its normal speed. The normal speed is:

Solution! Let the normal speed be "S" kilometres/hour. New speed = (S + 5) km/hr. (300/S) - 2 = 300/(S + 5). On solving this equation we get: S = 25 km/hr.

#### Question No 23

The average of 5 quantities is 6. The average of 3 of them is 8.
What is the average of the remaining two numbers?

Solution! The average of 5 quantities is 6. Therefore, the sum of the 5 quantities is 5 x 6 = 30. The average of 3 of these 5 quantities is 8. Therefore, the sum of these 3 quantities = 3 x 8 = 24. The sum of the remaining two quantities = 30 - 24 = 6. Average of these two quantities = 6 / 2 = 3.

#### Question No 24

_______ : bedrock : : cement : foundation

Solution! Mica makes up bedrock, on which skyscrapers are built; cement makes up a foundation, on which houses are built.

#### Question No 25

Fear is to Threat as Anger is to

Solution! No explanation available for this question.

#### Question No 26

In the following alphabet series, one term is missing, as shown by the question mark. Choose the missing term from the options. BEH, KNQ, TWZ, ?

Solution! All the letters of each term are moved nine steps forward to obtain the corresponding letters of the next term.

#### Question No 27

The price of a dozen eggs increases from $60 to $65. What is the percentage increase?

Solution! Increase in price = 65 - 60 = 5. Percentage increase = (5/60) x 100 = 8.3%.

#### Question No 28

Henry is 4 years older than Margaret. Next year Henry will be twice as old as Margaret. How old is Henry?

Solution! Let Margaret = x, so Henry = x + 4. Given condition: x + 4 + 1 = 2(x + 1), so x = 3 and Henry = 3 + 4 = 7.

#### Question No 29

Choose the word that has the same or nearly the same meaning as compensate.

Solution! To compensate means to provide an adequate substitute or recompense, as to pay appropriately.

#### Question No 30

A police car is chasing a girl who is running at a speed of 5 m/s, and the distance between them is decreasing by 3 m every second. The speed of the police car would be:

Solution! No explanation available for this question.
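Several of the arithmetic answers above are easy to machine-check. For instance, Question 1 rests on the identity a³ - b³ = (a - b)(a² + ab + b²), and Question 23 on totals recovered from averages; a quick verification sketch (my own illustration, not part of the quiz site):

```python
# Question 1: (a^3 - b^3) / (a^2 + a*b + b^2) should equal a - b exactly.
a, b = 854, 276
numerator = a**3 - b**3
denominator = a**2 + a*b + b**2
assert numerator % denominator == 0          # divides with no remainder
assert numerator // denominator == a - b == 578

# Question 23: sum of 5 quantities is 5*6, sum of 3 of them is 3*8,
# so the remaining two average (30 - 24) / 2 = 3.
assert (5 * 6 - 3 * 8) / 2 == 3.0
```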
https://physics.stackexchange.com/questions/67118/static-as-opposed-to-kinetic-friction-in-rolling-motion
# Static as opposed to Kinetic Friction in Rolling Motion

During analysis of rolling motion, why do we consider the coefficient of friction to be that of static friction and not kinetic friction?

• Are we talking about rolling or about circular motion? I suspect you're talking about rolling, and if you are, think about how tires behave on ice... – Jerry Schirmer Jun 5 '13 at 10:49
• Since you're talking about rolling motion, it might be useful to understand how gears work. – Mike Dunlavey Jun 5 '13 at 15:29

Like @Jerry said, this is all to do with how a wheel works (or similar rolling object; a giant meatball for example). This figure should help: As you can see from the image, we consider static friction because while the whole meatball may be in motion, the part touching the ground (the only part friction can act on) is not moving relative to the ground. If it were moving, this would be considered sliding. If you've ever driven a meatball on ice, you'd agree that sliding is definitely a bad thing. In fact, the entire reason your meatball can roll around is due to static friction. When you apply a torque on the meatball, it wants to rotate on the spot. Without static friction, it would do just that and there would be no net forward motion. However, static friction causes the part of the meatball touching the ground to stay in one place. But the meatball still has a torque, so it will rotate; it rotates with a forward motion as long as it can't overcome static friction where it "meats" the ground. If your meatball ever starts to slip, then we discuss kinetic friction, which is usually less than static friction. This means the forward force you could potentially apply to your meatball is severely reduced and it would go nowhere (but it would go there fast!). Static friction. That's how I roll!

• Minor nitpick: Road tires have tread only to displace surface rainwater. This is why racing drivers use slicks on dry days.
On dry days, treads reduce static friction, not increase it. – RedGrittyBrick Jun 5 '13 at 14:25 • @RedGrittyBrick I stand corrected. Thanks, I'll get rid of that. Is it then why they're made of rubber? – Jim Jun 5 '13 at 15:07 Static friction is for when two surfaces do not move relative to one another (no rubbing), whereas kinetic friction is for when two surfaces do move relative to one another (rub against one another). For rolling motion a good starting place for understanding which is relevant is to think about a tire on a car. Kinetic friction is appropriate when the tire on a car is not getting enough traction and it spins without the car moving forward, as say can happen on ice, because then there is rubbing of the tire tread on the ice (which are the two relevant surfaces in this case). But if the tire "rolls without slipping", on say a dry road, then static friction is appropriate because there is no relative motion between the tire tread and the road (which are the two relevant surfaces in this case). To see that there is no relative motion in this case think of a tire rolling without slipping as being like the tracks on a tank (compare this picture of a flat tire to this track simulation video): where the tire meets the road it is slightly deformed to be flat on the road and this flat part and the road do not have any motion between them (there is no rubbing of these two surfaces). You might also find it helpful to look at the two animated GIFs here. The idea is that the bottom of the wheel is actually stationary, which is often hard to get your head around. Here's a great analogy, though: walking. When you walk, do both of your feet move at the same speed? Of course not! You wouldn't be walking if they were. If you really pay attention to your feet, one foot is actually stationary (relative to the ground) while the other one is moving much faster than your overall motion. The same thing happens between the top and bottom of a tire. 
The top point is moving in the same direction (relative to the car) as the car is moving (relative to the ground), so it's moving twice as fast as the car (relative to the ground). The bottom point is moving in the opposite direction, so it's stationary (relative to the ground). On average, every point on the wheel will move at the same velocity as the car, so don't worry about logic breaking your tires. :)

I usually consider a rolling friction coefficient which is different from the static and the sliding friction. The reason is that the elastic deflection of a rolling element imparts a resistance which is non-linearly proportional to the applied (contact) load. For steel over steel, depending on lubrication conditions, a typical value is $\mu_{roll}=0.008$. The static friction is used to determine the available traction, beyond which slipping will occur. When pushing a bowling ball, if you push slightly it will roll, but if you push with more force than static friction times weight, then the ball will slide as well as roll.

• Rolling friction and static friction both exist in this scenario and both do very different things. The question pertains to the friction (resistance to sliding) between the rolling object and the surface, not the combined forces from resistance to being compressed then separated from the surface and resistance from the bearings/axle, which is closer to what rolling friction is – Jack Dozer Jun 5 '13 at 13:49
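To put a number on "pushing harder than the available traction": for a solid ball of mass m pushed horizontally through its centre, Newton's laws plus the rolling constraint require a friction force of 2F/7, so rolling without slipping holds only while F ≤ (7/2)·μs·m·g. A small sketch of that check (my own illustration, not from the posts above):

```python
G = 9.81  # m/s^2

def ball_motion(force, mass, mu_s):
    """Solid ball pushed horizontally through its centre on a flat floor.
    Rolling without slipping needs friction f = 2F/7 (from m*a = F - f and
    I*alpha = f*R with I = (2/5) m R^2 and alpha = a/R); this is only
    available while f <= mu_s * m * g."""
    f_needed = 2 * force / 7
    f_max = mu_s * mass * G
    if f_needed <= f_max:
        return 'rolls', force / (1.4 * mass)   # a = 5F/(7m)
    # Slipping: kinetic friction takes over; using mu_s here as an
    # upper bound, since the kinetic coefficient is usually smaller.
    return 'slips', (force - f_max) / mass

state, accel = ball_motion(force=10.0, mass=5.0, mu_s=0.3)
assert state == 'rolls'
assert abs(accel - 10.0 / 7.0) < 1e-12
```

A gentle 10 N push on a 5 kg ball rolls it; scale the force up past 7/2 of the traction limit and the same function reports slipping.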
https://learn.careers360.com/engineering/question-need-explanation-for-three-dimensional-geometry-jee-main/
If the line $\frac{x-3}{2}=\frac{y+2}{-1}=\frac{z+4}{3}$ lies in the plane $lx+my-z=9$, then $l^2+m^2$ is equal to:

Option 1) 26
Option 2) 18
Option 3) 5
Option 4) 2

As we learnt in "Plane passing through a point and a line (vector form)": if a line lies in a plane, every point of the line satisfies the plane equation and the line's direction vector is perpendicular to the plane's normal.

The line lies in the plane lx+my-z=9, so (3, -2, -4) satisfies lx+my-z=9:
3l-2m+4=9, i.e. 3l-2m=5                ----------(i)
The direction vector (2, -1, 3) is perpendicular to the normal (l, m, -1):
2l-m-3=0, i.e. 2l-m=3            -------------(ii)
Solving (i) and (ii): l=1, m=-1, so l²+m²=2.

Option 1) 26 This option is incorrect
Option 2) 18 This option is incorrect
Option 3) 5 This option is incorrect
Option 4) 2 This option is correct
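The two linear conditions from the worked solution can be checked mechanically; a small illustrative sketch:

```python
# Conditions from the worked solution:
#   point (3, -2, -4) on the plane:  3l - 2m = 5      (i)
#   direction (2, -1, 3) _|_ normal: 2l - m  = 3      (ii)
# Solve the 2x2 system by substitution:
# from (ii), m = 2l - 3; plug into (i): 3l - 2(2l - 3) = 5  ->  l = 1.
l = 1
m = 2 * l - 3            # m = -1
assert 3 * l - 2 * m == 5
assert 2 * l - m == 3
assert l**2 + m**2 == 2  # matches Option 4
```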
http://simple.wikipedia.org/wiki/Voltage
# Voltage

Voltage is the change in electric potential (meaning potential energy per unit charge) between two positions. The voltage is always measured between two points, for example between the positive and negative ends of a battery, or between a wire and ground. It is measured in volts. The volt was named after the Italian physicist Alessandro Volta, who made the first chemical battery. Voltage is also called electric tension. The voltage can be seen as the pressure on the electrons to move out of the source. It is directly proportional to the pressure exerted on the electrons. In other words, the higher the voltage, the higher the pressure. For example, a battery of 3 volts will exert pressure on the electrons twice as hard as a battery of 1.5 volts. The voltage can push the electrons into a component, like a resistor, creating a current. Usually, the voltage and the current are related by a formula (see impedance). For a fixed resistance, the current is directly proportional to the voltage: if the voltage increases, the current increases proportionally. Note that there must be both voltage and current to get power (energy). For example, a wire can have a high voltage on it, but unless it's connected, nothing will happen (that's why birds can land on high voltage lines such as 12-kV and 16-kV without problems). There are two types of voltage, DC voltage and AC voltage. The DC voltage (Direct Current voltage) always has the same polarity (positive or negative), such as in a battery. The AC voltage (Alternating Current voltage) alternates between positive and negative. For example, the voltage from the wall socket goes through 60 cycles per second (in America). DC is typically used for electronics and AC for motors.

## Mathematical definition

Mathematically, the voltage is the amount of work needed to move a charge of 1 coulomb from one position to the other. Voltage, current and resistance are related by Ohm's law:

$V = I R$

Where V = voltage, I = current, R = total resistance.
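As a concrete example of the V = IR relation (a quick illustrative sketch, not part of the original article):

```python
def current(voltage, resistance):
    """Ohm's law rearranged: I = V / R (amps, volts, ohms)."""
    return voltage / resistance

# A 9 V battery across a 450-ohm resistor drives 20 mA.
assert current(9.0, 450.0) == 0.02
# Doubling the voltage doubles the current: direct proportionality
# for a fixed resistance, as described above.
assert current(18.0, 450.0) == 2 * current(9.0, 450.0)
```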
## Measuring tools

Some of the tools for measuring voltage are the voltmeter and the oscilloscope. The voltmeter measures the voltage between two points and can be set to DC mode or AC mode. The voltmeter can measure the DC voltage of a battery, for example (typically 1.5 V or 9 V), or the AC voltage from the power socket on the wall (typically 120 V). For more complex signals, an oscilloscope can be used to measure the DC and/or AC voltage, for example to measure the voltage across a speaker.
https://www.transtutors.com/questions/determining-minimum-maximum-and-negotiated-transfer-prices-coleman-company-is-a-lumb-2644467.htm
# Determining Minimum, Maximum, and Negotiated Transfer Prices

Coleman Company is a lumber company that also manufactures custom cabinetry. It is made up of two divisions, Lumber and Cabinetry. Division L is responsible for harvesting and preparing lumber for use; Division C produces custom-ordered cabinetry. The lumber produced by Division L has a variable cost of $1.00 per linear foot and a full cost of $1.50. Comparable quality wood sells on the open market for $3.00 per linear foot.

Required:

1. Assume you are the manager of Division C. Determine the maximum amount you would pay for lumber from Division L.
2. Assume you are the manager of Division L. Determine the minimum amount you would charge for the lumber sold to Division C if you have excess capacity. Repeat assuming you have no excess capacity.
3. Assume you were the president of Coleman and wanted to set a mutually beneficial transfer price. Determine the mutually beneficial transfer price.
4. Explain the possible consequences of simply letting the two division managers negotiate a price.
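The usual cost-accounting rule of thumb (this is general textbook logic, not an official solution to the exercise) is: the selling division's floor is its variable cost plus any opportunity cost (a lost market sale when at full capacity), and the buying division's ceiling is the outside market price. Sketched with the numbers above:

```python
def transfer_price_range(variable_cost, market_price, excess_capacity):
    """Minimum acceptable price for the seller and maximum for the buyer,
    per the standard transfer-pricing rule of thumb."""
    floor = variable_cost if excess_capacity else market_price
    ceiling = market_price
    return floor, ceiling

# Division L with excess capacity: anything above $1.00/ft adds contribution.
assert transfer_price_range(1.00, 3.00, excess_capacity=True) == (1.00, 3.00)
# At full capacity each internal foot forgoes a $3.00 market sale,
# so the floor rises to the market price.
assert transfer_price_range(1.00, 3.00, excess_capacity=False) == (3.00, 3.00)
```

Note the full cost of $1.50 does not appear in either bound; it matters for cost-based transfer-pricing policies, which is part of what question 4 invites you to discuss.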
https://www.hpmuseum.org/forum/thread-16473-page-2.html
[VA] SRC #009 - Pi Day 2021 Special 03-16-2021, 10:29 PM (This post was last modified: 03-17-2021 03:37 AM by Gerson W. Barbosa.) Post: #21 Gerson W. Barbosa Senior Member Posts: 1,473 Joined: Dec 2013 RE: [VA] SRC #009 - Pi Day 2021 Special (03-15-2021 08:59 PM)J-F Garnier Wrote:  I don't know -and don't think there is - any relation that can be used to get $$\pi$$ from e. Et pourtant, il y en a – and yet there is at least one: http://oeis.org/wiki/A_remarkable_formula_of_Ramanujan Really remarkable, isn’t it? ${\int_{-\infty}^{ \infty}\rm{e}^{-{{x}}^{2}}\rm{d}x}=\sqrt{\pi}$ 03-17-2021, 12:49 AM Post: #22 Valentin Albillo Senior Member Posts: 869 Joined: Feb 2015 RE: [VA] SRC #009 - Pi Day 2021 Special (03-16-2021 08:28 PM)robve Wrote: I never posted "every programmer should write their own integration procedure". as the quotation suggests. What post are you referring to? This one.  I quote: "Also, what is the fun of doing math and calc exercises if we don't implement numerical integration ourselves?" Regards. V. All My Articles & other Materials here:  Valentin Albillo's HP Collection 03-17-2021, 02:31 AM Post: #23 robve Senior Member Posts: 332 Joined: Sep 2020 RE: [VA] SRC #009 - Pi Day 2021 Special (03-16-2021 07:17 PM)Albert Chan Wrote: (03-14-2021 07:00 PM)Valentin Albillo Wrote:   [*]d.  Conversely, the volume enclosed by the n-dimensional sphere of radius R is given by: Go on and evaluate the $$\pi$$-th root of the summation for even dimensions from 0 to $$\infty$$ of the volumes enclosed by the respective n-dimensional unit spheres (R = 1). sum = 1 + pi/1! + pi²/2! + pi³/3! + ... = e^pi sum ^ (1/pi) = e Comment: formula give 1 for volume of 0-dimensional sphere, which seems weird. Yes, it is kind of weird. But this is connecting two seemingly unrelated formulae, which is nice. 1. Taylor series: $$e^x = 1+\frac{x}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots$$ 2. 
The volume of an n-ball with radius R: $$V_n = \frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)}R^n$$ The latter simplifies to $$\frac{\pi^k}{k!}$$ for even n=2k and R=1 (the conditions stated in the challenge). Therefore, the answer is e as the π root of the sum: $$\sum_{k=0}^\infty \frac{\pi^k}{k!} = \mathrm{e}^\pi$$ I recognized the Taylor series after simplifying the sum's terms. Whenever you see a factorial in a denominator in a term of a series sum, there may be a Taylor series lurking beneath. Nice piece of natural pie... - Rob "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 03-17-2021, 02:32 AM (This post was last modified: 03-17-2021 11:26 AM by Albert Chan.) Post: #24 Albert Chan Senior Member Posts: 2,000 Joined: Jul 2018 RE: [VA] SRC #009 - Pi Day 2021 Special (03-16-2021 09:09 PM)robve Wrote:  $$\arctan u - \arctan v = \arctan\frac{u-v}{1+uv}$$ This is not quite right, LHS has possible range of ±pi, RHS is limited to ±pi/2 Correct identity is more complicated: see Sum of ArcTangents We could use this instead: atan(u) ± atan(v) = atan2(u±v , 1∓uv)       (*) y = atan(x) - atan((x-1)/(x+1))                       // y undefined when x = -1 = atan2(x - (x-1)/(x+1), 1 + x*(x-1)/(x+1)) = atan2((x²+1)/(x+1), (x²+1)/(x+1))             // numerator always positive. If x > -1, 4*y = 4*atan(1) = pi If x < -1, 4*y = 4*(atan(1) - pi) = -3*pi (*) Proof is trivial: (1+u*i) * (1±v*i) = (1∓u*v) + (u±v)*i Phase angle of two sides matched, and we have above atan2 identity 03-17-2021, 03:03 AM (This post was last modified: 03-17-2021 02:14 PM by robve.) Post: #25 robve Senior Member Posts: 332 Joined: Sep 2020 RE: [VA] SRC #009 - Pi Day 2021 Special (03-17-2021 02:32 AM)Albert Chan Wrote: (03-16-2021 09:09 PM)robve Wrote:  $$\arctan u - \arctan v = \arctan\frac{u-v}{1+uv}$$ This is not quite right, LHS has possible range of ±pi, RHS is limited to ±pi/2 Right.
I did not include the two necessary conditions since these hold, note the (mod π): $$\arctan u - \arctan v = \arctan\frac{u-v}{1+uv}\quad \pmod \pi,\quad uv\neq 1$$ EDIT: the arctan identity comes from $$\tan(\alpha \pm \beta) = \frac{\tan \alpha \pm \tan \beta}{1\mp \tan \alpha\, \tan \beta}$$ Since 0<=e<π and 0<=(e-1)/(e+1)<π we have (e.g. verify numerically) $$\tan\arctan\mathrm{e}=\mathrm{e}; \quad \tan\arctan\frac{\mathrm{e}-1}{\mathrm{e}+1} = \frac{\mathrm{e}-1}{\mathrm{e}+1}$$ Then $$\alpha=\arctan\mathrm{e};\quad \beta=\arctan \frac{\mathrm{e}-1}{\mathrm{e}+1};\quad \frac{\tan \alpha - \tan \beta}{1+ \tan \alpha\, \tan \beta} = \frac{\mathrm{e} - \frac{\mathrm{e}-1}{\mathrm{e}+1}}{1+\mathrm{e}\frac{\mathrm{e}-1}{\mathrm{e}+1}} = 1$$ Generally, tan has period π: $$\tan(k\pi+\theta) = \tan\theta$$ for any integer k. - Rob "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 03-17-2021, 03:42 AM Post: #26 robve Senior Member Posts: 332 Joined: Sep 2020 RE: [VA] SRC #009 - Pi Day 2021 Special (03-17-2021 12:49 AM)Valentin Albillo Wrote: (03-16-2021 08:28 PM)robve Wrote: I never posted "every programmer should write their own integration procedure". as the quotation suggests. What post are you referring to? This one.  I quote: "Also, what is the fun of doing math and calc exercises if we don't implement numerical integration ourselves?" "Me/ourselves" and "implement integration" therefore "all programmers (should) write integration"? As in "Socrates is a man, Socrates is mortal, therefore all men are mortal"? Quotations matter. Not a programmer, a D in Ph'y after passing some BS, humbly not wanting to be a P in the A, just taking some late snacks on vintage stuff to honor those that came before us. - Rob "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx...
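The tangent identity discussed above is easy to sanity-check numerically. A minimal Python sketch (the test values and tolerance are arbitrary choices, not from the thread) confirms that 4·(arctan x − arctan((x−1)/(x+1))) equals π for any x > −1, and −3π on the other branch x < −1, matching Albert's atan2 analysis earlier in the thread:

```python
import math

def gap(x):
    # 4*(atan(x) - atan((x-1)/(x+1))); undefined at x = -1
    return 4.0 * (math.atan(x) - math.atan((x - 1.0) / (x + 1.0)))

# For any x > -1 the expression equals pi ...
for x in (math.e, math.pi, (1 + math.sqrt(5)) / 2, 2021.0, 0.0):
    assert abs(gap(x) - math.pi) < 1e-12
# ... while for x < -1 the arctan branch shifts and it equals -3*pi
for x in (-2.0, -10.0, -1000.0):
    assert abs(gap(x) + 3 * math.pi) < 1e-12
print("identity verified")
```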
03-17-2021, 08:27 AM Post: #27 J-F Garnier Senior Member Posts: 705 Joined: Dec 2013 RE: [VA] SRC #009 - Pi Day 2021 Special (03-16-2021 10:29 PM)Gerson W. Barbosa Wrote: (03-15-2021 08:59 PM)J-F Garnier Wrote:  I don't know -and don't think there is - any relation that can be used to get $$\pi$$ from e. Et pourtant, il y en a – and yet there is at least one: http://oeis.org/wiki/A_remarkable_formula_of_Ramanujan Really remarkable, isn’t it? ${\int_{-\infty}^{ \infty}\rm{e}^{-{{x}}^{2}}\rm{d}x}=\sqrt{\pi}$ OK, I see what you (and Valentin probably too) mean and I agree of course. By relation, I was (wrongly) limiting myself to finite expressions, like the arctan expression in Valentin's post. There are obviously many infinite sums and integrals involving e and producing pi in one way or another. J-F 03-17-2021, 10:32 AM Post: #28 Ángel Martin Senior Member Posts: 1,368 Joined: Dec 2013 RE: [VA] SRC #009 - Pi Day 2021 Special (03-14-2021 07:00 PM)Valentin Albillo Wrote:  [*]e.   $$\pi$$ also features in a song by Kate Bush (included in her 2005's album "Aerial") about a man who's utterly obsessed with the calculation of $$\pi$$ (that could describe some of us here at the MoHPC). She sings more than a hundred digits of $$\pi$$ and Love that song ;-) 03-17-2021, 01:31 PM Post: #29 Massimo Gnerucci Senior Member Posts: 2,491 Joined: Dec 2013 RE: [VA] SRC #009 - Pi Day 2021 Special I love Kate Bush. Greetings, Massimo -+×÷ ↔ left is right and right is wrong 03-18-2021, 02:56 AM Post: #30 robve Senior Member Posts: 332 Joined: Sep 2020 RE: [VA] SRC #009 - Pi Day 2021 Special VA's posts are fascinating and responses are brilliant. His challenges encourage sleuthing using our advanced and vintage HP calculators and perhaps by writing some code to figure this all out. To return the favor I hereby post two small and related challenges.
These two "counter" challenges "invert" VA's common objective (if I may so): instead of writing code and (CAS) expressions, let's figure out what the given code does, find its formula and finally investigate who discovered it (online searching is permitted!). The first question of each of these two should be easy to answer. If you do not have a HP-71B (I recently acquired mine ), then any Basic-capable machine can be used instead. This code is simple enough to easily convert to HPPL, RPN, Forth. a. Consider the following HP-71B program: 10 P=SQR(2) 20 Q=P/2 30 DISP 2/Q 40 P=SQR(2+P) 50 Q=Q*P/2 60 IF P<2 GOTO 30 1. What constant does it compute? 2. What is the algebraic formula computed by this code for this constant? 3. Who discovered the formula? b. Consider the following HP-71B program: 10 P=1 20 FOR I=2 TO 1000 STEP 2 30 P=P*I/(I-1) 40 DISP 2*P*P/(I+1) 50 NEXT I 1. What constant does it compute? 2. What is the algebraic formula computed by this code for this constant? 3. Who discovered the formula? I will reply with the answers after VA posted his conclusions of the pi day challenge. - Rob "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 03-18-2021, 02:44 PM Post: #31 Albert Chan Senior Member Posts: 2,000 Joined: Jul 2018 RE: [VA] SRC #009 - Pi Day 2021 Special Hi, robve Thanks for spending the time to add code challenges. Both codes are calculating pi by 2/(sin(x)/x) at x=pi/2, in 2 different ways. (a) pi is via Viète's formula: sin(x)/x = cos(x/2) cos(x/4) cos(x/8) ... At x = pi/2: cos(x/2) = cos(pi/4) = √(2) / 2 cos(x/4) = √((1 + cos(x/2))/2) = √(2 + √(2)) / 2 cos(x/8) = √((1 + cos(x/4))/2) = √(2 + √(2 + √(2))) / 2 ... Generated outputs, 2/Q = 2^k * sin(pi/2^k), k = 2, 3, 4, ... 
limit(2^k * sin(pi/2^k), k=∞) = 2^k * (pi/2^k) = pi             // sin(ε) ≈ ε (b) pi is via Wallis products With roots of sin(x) = 0, ±pi, ±2pi, ±3pi, ..., and limit(sin(x)/x, x=0) = 1: sin(x)/x = (1-(x/pi)²) * (1-(x/(2pi))²) * (1-(x/(3pi))²) ... At x=pi/2: LHS = 2/pi ≈ 0.63662 RHS = (1-1/4) * (1-1/16) * (1-1/36) ... = (1*3)/(2*2) * (3*5)/(4*4) * (5*7)/(6*6) ... BTW, it is more efficient (and accurate !) to calculate P=sin(x)/x, at x=pi/2: Bonus: code is shorter, and easier to understand. 10 P=1 20 FOR I=2 TO 1000 STEP 2 30 P=P-P/(I*I) 40 DISP 2/P 50 NEXT I 03-20-2021, 09:36 PM (This post was last modified: 03-21-2021 01:33 AM by robve.) Post: #32 robve Senior Member Posts: 332 Joined: Sep 2020 RE: [VA] SRC #009 - Pi Day 2021 Special (03-17-2021 02:32 AM)Albert Chan Wrote:  We could use this instead: atan(u) ± atan(v) = atan2(u±v , 1∓uv)       (*) y = atan(x) - atan((x-1)/(x+1))                       // y undefined when x = -1 = atan2(x - (x-1)/(x+1), 1 + x*(x-1)/(x+1)) = atan2((x²+1)/(x+1), (x²+1)/(x+1))             // numerator always positive. If x > -1, 4*y = 4*atan(1) = pi If x < -1, 4*y = 4*(atan(1) - pi) = -3*pi (*) Proof is trivial: (1+u*i) * (1±v*i) = (1∓u*v) + (u±v)*i Phase angle of two sides matched, and we have above atan2 identity A bit late to reply. Initially I also thought about matching angles between complex numbers in polar coordinates with atan2. 
Subtracting the angles gives the resulting angle of the complex quotient expressed in polar coordinates: $$\arctan \mathrm{e} - \arctan\frac{\mathrm{e} - 1}{\mathrm{e} + 1} = \mathrm{atan2}(\mathrm{e},1) - \mathrm{atan2}(\mathrm{e}-1,\mathrm{e}+1)$$ then simplifying the quotient of the corresponding complex coordinates $$\frac{\sqrt{\mathrm{e}^2+1}}{\sqrt{(\mathrm{e}-1)^2+(\mathrm{e}+1)^2}} \cdot \frac{1+\mathrm{e}\mathrm{i}}{\mathrm{e}+1+(\mathrm{e}-1)\mathrm{i}} = \frac{1}{\sqrt{2}}\cdot \frac{1}{1-\mathrm{i}}$$ where the modulus of the denominator is $$\sqrt{2}$$ and angle $$\mathrm{atan2}(-1,1)$$ i.e. representing $$\frac{1\angle 0}{\sqrt{2}\angle\arctan{-}1}$$. This gives $$\arctan \mathrm{e} - \arctan\frac{\mathrm{e} - 1}{\mathrm{e} + 1} = 0 - \arctan {-}1 = \arctan 1 = \frac{\pi}{4}$$. Just another way to prove this equation, which is an identity that holds for other values than e by the way (with constraints). (03-17-2021 02:32 AM)Albert Chan Wrote:  We could use this instead: atan(u) ± atan(v) = atan2(u±v , 1∓uv)       Sure, but this is the same formula I had used in my previous post, albeit yours is in disguise using atan2 instead of atan, i.e. atan(u) ± atan(v) = atan((u±v)/(1∓uv)) = atan2(u±v , 1∓uv) the latter by definition and only if 1∓uv is nonzero, i.e. the necessary constraint I mentioned before. - Rob Minor edit: fix typo and $$\LaTeX$$. Second edit: comment on atan2 versus arctan. "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 03-21-2021, 06:48 PM (This post was last modified: 03-21-2021 09:38 PM by robve.) Post: #33 robve Senior Member Posts: 332 Joined: Sep 2020 RE: [VA] SRC #009 - Pi Day 2021 Special (03-18-2021 02:44 PM)Albert Chan Wrote:  Thanks for spending the time to add code challenges. Both codes are calculating pi by 2/(sin(x)/x) at x=pi/2, in 2 different ways. You're very welcome! 
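Both derivations above, Albert's atan2 phase-matching and Rob's polar-form quotient, can be replayed with Python's `cmath`; a small sketch (the tolerance and extra (u, v) pairs are arbitrary):

```python
import cmath
import math

e = math.e
# Rob's quotient: (1 + e*i) / ((e+1) + (e-1)*i) has phase pi/4,
# which is exactly arctan(e) - arctan((e-1)/(e+1))
z = (1 + e * 1j) / ((e + 1) + (e - 1) * 1j)
assert abs(cmath.phase(z) - math.pi / 4) < 1e-12

# Albert's identity: atan(u) - atan(v) = atan2(u - v, 1 + u*v),
# i.e. the phase of (1 + u*i) * (1 - v*i) = (1 + u*v) + (u - v)*i
for u, v in ((e, (e - 1) / (e + 1)), (0.5, -0.25), (3.0, 2.0)):
    lhs = math.atan(u) - math.atan(v)
    rhs = math.atan2(u - v, 1 + u * v)
    assert abs(lhs - rhs) < 1e-12
print("phase identities verified")
```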
Because this is going to be a long post, I will post the two parts a and b separately. a. 10 P=SQR(2) 20 Q=P/2 30 DISP 2/Q 40 P=SQR(2+P) 50 Q=Q*P/2 60 IF P<2 GOTO 30 As Albert replied correctly, the code computes Viète's formula, published in 1593: $$\frac{2}{\pi} = \frac{\sqrt{2}}{2} \cdot \frac{\sqrt{2+\sqrt{2}}}{2} \cdot \frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2} \cdots$$ In recurrence form: $$\lim_{n\rightarrow\infty} \prod_{i=1}^n \frac{a_i}{2} = \frac{2}{\pi};\qquad a_1 = \sqrt{2},\quad a_n = \sqrt{2+a_{n-1}}$$ In functional form: $$\pi \approx \mathrm{viete}(n) = 2/\prod_{i=1}^n v(i)/2;\qquad v(n) = \begin{cases} \sqrt{2} & n=1 \\ \sqrt{2+v(n-1)} & n>1 \end{cases}$$ Viète obtained his formula by comparing the areas of regular polygons with $$2^n$$ and $$2^{n+1}$$ sides inscribed in a circle. The first term in the product, $$\frac{\sqrt{2}}{2}$$, is the ratio of areas of a square and an octagon, the second term is the ratio of areas of an octagon and a hexadecagon, etc. - Wikipedia This directly relates to Archimedes's famous work (ca. 225 BC) on approximating the area of a circle by polygons inside and outside the circle squeezing the circle: "At each stage, he needed to approximate sophisticated square roots, yet he never faltered. When he reached the 96-gon, his estimate was $$\frac{6336}{2017\frac{1}{4}} > 3\frac{10}{71}$$ ." - "Journey Through Genius" by William Dunham. Note that Euclid's Elements does not prove the ratio of the radius of the circle to its circumference $$2\pi$$. Sometimes confused with Euclid VI.33 that proves an important property of angles and arcs but does not compare them to circles with different radii "in equal circles [emphasis mine], angles, whether at the center or circumference, are in the same ratio to one another as the arcs on which they stand."
- Oliver Byrne's Euclid 1847 VI.33 [image credit: Oliver Byrne's Euclid 1847] Viète's formula can also be written as $$\frac{\pi}{2} = \frac{1}{\sqrt{\frac{1}{2}}} \cdot \frac{1}{\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}}}} \cdot \frac{1}{\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}}}}} \cdots$$ In recurrence form: $$\lim_{n\rightarrow\infty} \prod_{i=1}^n \frac{1}{a_i} = \frac{\pi}{2};\qquad a_1 = \sqrt{\frac{1}{2}},\quad a_n = \sqrt{\frac{1+a_{n-1}}{2}}$$ In functional form: $$\pi \approx \mathrm{viete}(n) = 2/\prod_{i=1}^n v(i);\qquad v(n) = \begin{cases} \sqrt{1/2} & n=1 \\ \sqrt{(1+v(n-1))/2} & n>1 \end{cases}$$ While not based on Viète's formula, with our calculators we can quickly integrate the unit quarter circle defined by $$1=x^2+y^2$$, i.e. integrate $$y = f(x) = \sqrt{1-x^2}$$ to get $$\pi$$, using a square root: $$\pi = 4\int_0^1 \sqrt{1-x^2} \,dx$$ Proof (I've simplified this somewhat): $$4 \int_0^1 \sqrt{1-x^2} \,dx = 4 \int_0^{\frac{\pi}{2}} \sqrt{1-\sin^2 \theta} \cos \theta \,d\theta = 4 \int_0^{\frac{\pi}{2}} \sqrt{\cos^2 \theta} \cos \theta \,d\theta = 4 \int_0^{\frac{\pi}{2}} \cos^2 \theta \,d\theta = \pi$$ Programs I wrote today to illustrate Viète's formula My own functional programming language Husky: v(1) := (r,r) where r := sqrt(2)/2; v(n) := (t,q) where t := p*q where q := sqrt(2+2*r)/2 where (p,r) := v(n-1). viete(n) := 2/p where (p,r) := v(n). > viete(20). and a list-based version: prod := foldr(*, 1). v(1) := [sqrt(2)/2]; v(n) := [sqrt(2+2*x)/2, x | xs] where x.xs := v(n-1). viete(n) := 2/prod(v(n)). > viete(20). Haskell: v 1 = (r,r) where r = (sqrt 2)/2 v n = (t,q) where t = p*q q = (sqrt (2+2*r))/2 (p,r) = v (n-1) viete n = 2/r where (r,_) = v n main = putStrLn (show (viete 20)) and a list-based version: prod = foldr (*) 1 viete n = 2/(prod (v n)) v 1 = [(sqrt 2)/2] v n = (sqrt (2+2*x))/2 : x : xs where x:xs = v (n-1) main = putStrLn (show (viete 20)) Prolog: viete(N, P) :- v(N, R, _), P is 2/R.
v(1, R, R) :- R is sqrt(2)/2, !. v(N, T, Q) :- M is N-1, v(M, P, R), Q is sqrt(2+2*R)/2, T is P*Q. ?- viete(20, P). C (version with convergence check): #include <stdio.h> #include <math.h> int main() { double p = sqrt(2), q = p/2; while (p < 2) q *= (p = sqrt(2+p))/2; printf("%.17g\n", 2/q); } My own MiniC C-like language: int main() { float p, q; p = sqrt(2.0); q = p/2; while (p < 2.0) q *= (p = sqrt(2+p))/2; print 2/q; } $ ./minic viete.c $ java viete Java (version with convergence check): import java.lang.*; public class Viete { public static void main(String[] arg) { double p = Math.sqrt(2), q = p/2; while (p < 2) q *= (p = Math.sqrt(2+p))/2; System.out.println(2/q); } } Python (version with convergence check): from math import sqrt def viete(): p = sqrt(2) q = p/2 while p < 2: p = sqrt(2+p) q *= p/2 print(2/q) if __name__ == "__main__": viete() HPPL (version with convergence check): EXPORT viete() BEGIN LOCAL p = √2, q = p/2; REPEAT p := √(2+p); q := q*p/2; UNTIL p >= 2; RETURN 2/q; END; HP-71B FORTH with MultiMod (forthcoming: where can I find the pdf manual?) Edit: Thanks to rprosperi's help to locate the missing FORTH manual, here is my HP-71B FORTH program: FVARIABLE P FVARIABLE Q : VIETE 2. SQRT P STO 2. F/ Q STO BEGIN 2. P RCL F+ SQRT P STO 2. F/ Q RCL F* Q STO 2. P RCL X>=Y? UNTIL 2. Q RCL F/ F. ; VIETE HP-71B FORTH significantly extends ANS FORTH. Couldn't be happier with MultiMod installed on my HP-71B! - Rob Minor edit: fixed typo and added list-based functional versions of Husky and Haskell programs and HP-71B FORTH. "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 03-21-2021, 07:00 PM Post: #34 rprosperi Super Moderator Posts: 5,438 Joined: Dec 2013 RE: [VA] SRC #009 - Pi Day 2021 Special (03-21-2021 06:48 PM)robve Wrote:  HP-71B FORTH with MultiMod (forthcoming: where can I find the pdf manual?) MultiMod Manual is here.
The 71B Forth/Assembler Manual is part of the MoHPC Document Set (which I presume you have by now) but is also available here if you don't have that. --Bob Prosperi 03-21-2021, 09:31 PM Post: #35 robve Senior Member Posts: 332 Joined: Sep 2020 RE: [VA] SRC #009 - Pi Day 2021 Special (03-21-2021 07:00 PM)rprosperi Wrote: (03-21-2021 06:48 PM)robve Wrote:  HP-71B FORTH with MultiMod (forthcoming: where can I find the pdf manual?) The 71B Forth/Assembler Manual is part of the MoHPC Document Set (which I presume you have by now) but is also available here if you don't have that. Great, many thanks! I had done some searches online but didn't find it before. I immediately ordered the MoHPC Document Set and used the link to the pdf to figure out how to use floating point in HP-71B FORTH and its editor. It's very easy. Within minutes after skimming the documentation I had my program entered as SCREEN, running and spitting out $$\pi$$. The program: FVARIABLE P FVARIABLE Q : VIETE 2. SQRT P STO 2. F/ Q STO BEGIN 2. P RCL F+ SQRT P STO 2. F/ Q RCL F* Q STO 2. P RCL X>=Y? UNTIL 2. Q RCL F/ F. ; VIETE - Rob "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 03-22-2021, 04:06 PM Post: #36 John Keith Senior Member Posts: 806 Joined: Dec 2013 RE: [VA] SRC #009 - Pi Day 2021 Special Thanks, robve, nice to see the algorithm expressed in so many languages. For completeness here is an RPL version: \<< 2 \v/ DUP 2 / DO SWAP 2 + \v/ SWAP OVER * 2 / UNTIL OVER 2 \>= END SWAP DROP 2 SWAP / \>> It gets the result 3.1415926536 after 19 iterations. Takes about 1.2 seconds on an HP-28S.
03-22-2021, 05:33 PM Post: #37 Albert Chan Senior Member Posts: 2,000 Joined: Jul 2018 RE: [VA] SRC #009 - Pi Day 2021 Special (03-21-2021 06:48 PM)robve Wrote:  Proof (I've simplified this somewhat): $$4 \int_0^1 \sqrt{1-x^2} \,dx = 4 \int_0^{\frac{\pi}{2}} \sqrt{1-\sin^2 \theta} \cos \theta \,d\theta = 4 \int_0^{\frac{\pi}{2}} \sqrt{\cos^2 \theta} \cos \theta \,d\theta = 4 \int_0^{\frac{\pi}{2}} \cos^2 \theta \,d\theta = \pi$$ A comment about the last (missing) step. Instead of using half-angle formula, cos(x/2)^2 = (1+cos(x))/2, then integrate, fold the integral. $$\displaystyle \int _a^b f(x)\;dx = \int _a^b {f(x) + f(a+b-x) \over 2}\;dx$$ $$\displaystyle 4 \int_0^{\pi \over 2} \cos^2 θ\;dθ = 2 \int_0^{\pi \over 2} (\cos^2 θ \;+\; \sin^2 θ)\;dθ = 2 \int_0^{\pi \over 2} 1\;dθ = \pi$$ This is the same trick used in [VA] Short & Sweet Math Challenge #25. (02-28-2021 02:18 AM)Valentin Albillo Wrote:  My original solution for "Concoction the Third: Weird Integral" 03-23-2021, 10:20 PM Post: #38 Valentin Albillo Senior Member Posts: 869 Joined: Feb 2015 RE: [VA] SRC #009 - Pi Day 2021 Special Hi, all: Thanks to all of you who posted messages and/or comments to this SRC #009 - $$\pi$$ Day 2021 Special (namely J-F Garnier, Gerson W. Barbosa, Albert Chan, PeterP, robve, Maximilian Hohmann, Ángel Martín and Massimo Gnerucci), for your interest and valuable contributions, much appreciated. Now, these are my original results and comments for the various parts of this SRC: • Caveat: Don't expect anything ground-breaking here, this is a simple SRC, not a full-fledged challenge or even a challenge proper, and some of you already gave the correct results and explained them thoroughly as well, so there's not much to add, this will be brief. a.   The root of this equation in [3,4] is indeed X = $$\pi$$ ~ 3.14159265359. The base integral (reconstructed here from the INTEGRAL expression quoted below, the original image being lost) is $$I(X)=\int_0^\pi \left(\frac{\sin t}{t}\,\mathrm{e}^{\,t/\tan t}\right)^{X} dt = \pi\,\frac{X^X}{X!}$$ so that we have: I($$\pi$$) = $$\pi\,\pi^{\pi}/\pi!$$ = 15.9359953238 ,    I(e) = $$\pi\,\mathrm{e}^{\mathrm{e}}/\mathrm{e}!$$
= 11.1735566407 and this simple HP-71B command-line expression will evaluate I(X) and  $$\pi X^X/X!$$ for any given X ≥ 0: >INPUT X @ INTEGRAL(0,PI,0,(SIN(IVAR)*EXP(IVAR/TAN(IVAR))/IVAR)^X);PI*X^X/GAMMA(X+1) ?1       3.14159265359      3.14159265359     {  $$\pi$$    } ?2       6.28318530717      6.2831853072      {  2 $$\pi$$  } ?EXP(1)  11.1735566407      11.1735566407     {  $$\pi\,\mathrm{e}^{\mathrm{e}}/\mathrm{e}!$$  } ?PI      15.9359953238      15.9359953238     {  $$\pi\,\pi^{\pi}/\pi!$$  } • robve asked which quadrature procedures I have written that are better than Romberg. Well, actually several for various machines, and using them I've been able to compute definite integrals (even difficult ones) with high precision (say 100 digits and more) very fast. For the HP-71B's implementation I've achieved speeds faster than the ones achieved by the Math ROM's Romberg-based quadrature, despite the latter having the tremendous advantage of being implemented in assembly language (vs. my BASIC code) and using 15-digit/50,000-exponent precision for extended accuracy (vs. the 12-digit/499-exponent available to my BASIC code). I'll say no more about it as my implementation is the subject matter of my article "VA036 - Boldly Going - Outsmarting INTEGRAL", to be published soon. b.  The root of this equation in [3,4] is also X = $$\pi$$ ~ 3.14159265359. This time the base integral is: which is a representation of the ubiquitous Lambert W function as a parametric definite integral, which though not new (there are other various integral representations) seems to me rather awesome nevertheless: a definite integral solves a transcendental equation: $$W_0(X)\,\mathrm{e}^{W_0(X)} = X$$ Particularizing its value for X = 1 and exchanging W0(1) and $$\pi$$ we get: from where the equation is then obtained.
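The value W0(1) used above is the omega constant, the unique real solution of w·e^w = 1, and a handful of Newton steps reproduce it independently of any root finder. A minimal Python sketch (the starting guess and the iteration count are arbitrary choices):

```python
import math

def omega(iterations=50):
    # Newton's method on f(w) = w*exp(w) - 1, with f'(w) = exp(w)*(w + 1)
    w = 0.5
    for _ in range(iterations):
        ew = math.exp(w)
        w -= (w * ew - 1.0) / (ew * (w + 1.0))
    return w

w = omega()
print(w)  # W0(1), the omega constant, approximately 0.5671432904
```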
This HP-71B program lets us try different values of X and returns the difference between the LHS and the RHS, to see how it approaches 0 when X approaches $$\pi$$: 1  DEF FNF(T)=LN(1+SIN(T)/T*EXP(T/TAN(T))) 2  DEF FNI(X)=INTEGRAL(0,X,0,FNF(IVAR)) 3  DESTROY ALL @ W=FNROOT(0,1,FVAR*EXP(FVAR)-1) 4  INPUT X @ DISP FNI(X)/W-X @ GOTO 4 [RUN] ? .1      .131342478087 ? .5      .63098466783 ? 1      1.1030252775 ? 1.5    1.27459847697 ? 2      1.08227897742 ? 2.5     .6404117652 ? 3       .14159265358 ? 3.14    .00159265358 ? 3.1415  .00009265358 ? PI     -.00000000001 For X > $$\pi$$, it gives an error:  ERR L1:LOG(neg). As expected, for X = $$\pi$$ it returns ~ 0 but notice that for X ≥ 3 it returns ~ $$\pi$$ - X. Using a graphing calculator to plot the above values into a continuous graph will show why. c.   Though this seems to be a striking finite evaluation which gives $$\pi$$ in terms of e in a much simpler and direct way than Euler's formula and without involving complex values, the magic is tarnished somewhat by the fact that e isn't really needed here, as the basic identity is: $$\pi$$ = 4 * ( Arctan X - Arctan $$\frac{X - 1}{X + 1}$$ ) which is valid for all X > -1, as some of you explained and proved. That said, there are many interesting particular cases which you can use to trick your colleagues into believing you've discovered a brand-new, amazing evaluation for $$\pi$$. For instance: - Using $$\pi$$ itself (!) 
to get $$\pi$$:  $$\pi$$ = 4 * (Arctan $$\pi$$ - Arctan $$\frac{\pi - 1}{\pi + 1}$$) >4*(ATN(PI)-ATN((PI-1)/(PI+1))) -> 3.14159265359 - Using the Golden Ratio $$\phi$$ to get $$\pi$$:  $$\pi$$ = 4 * (Arctan $$\phi$$ - Arctan $$\frac{\phi - 1}{\phi + 1}$$) >P=(1+SQR(5))/2 @ 4*(ATN(P)-ATN((P-1)/(P+1))) -> 3.14159265360 - Using the current year (2021) to get $$\pi$$:  $$\pi$$ = 4 * (Arctan 2021 - Arctan $$\frac{2020}{2022}$$) >4*(ATN(2021)-ATN(2020/2022)) -> 3.14159265358 You can also trick them by using your phone number, your birthday or any number personally related to you or the person being tricked ! • J-F Garnier commented that he doesn't think there's a relation which can be used to get $$\pi$$ from e but added he was thinking about finite formulae. Indeed, there are various relatively simple infinite series involving e and returning $$\pi$$ that would do fine. He also insisted that deriving $$\pi$$ from Euler's formula as a logarithm base e (namely $$\pi = \log_{\mathrm{e}}(-1)\,/\,\mathrm{i}$$ ) doesn't mean that e is involved because he can use arctan instead and this is the method used to compute log(z) in various HP calculators. To this I say that the computing methods used are irrelevant, we're talking here about theoretical math, not practical implementation details, and theoretically the derivation of $$\pi$$ from Euler's formula holds and fundamentally involves e as the log base. I could give a pertinent example related to cubic equations to make it clearer but it would make this too long.
Gerson suggested replacing e by c (the speed of light in whatever units) in the formula and correctly gave the range of constants and resulting values, as well as a link to an interesting formula which, well, links e and $$\pi$$ but I don't consider that a bona-fide way to derive $$\pi$$ from e or vice versa, and the other example he gave is a definite integral, also quite unsatisfactory to me for the purpose: all sorts of integrals have $$\pi$$ as a result, thus demonstrating that $$\pi$$ is ubiquitous but nothing else. Just change $$\pi$$ to 5 and you'll see what I mean: a definite integral is no way to "derive" 5 from anything. Last, Albert Chan and robve also explained several times and in various ways why the identity works. d.  The summation for even dimensions from 0 to infinity of the volumes enclosed by the respective n-dimensional unit spheres (R = 1) is  $$\mathrm{e}^{\pi}$$ ~ 23.1406926328 (aka Gelfond's constant) and thus its $$\pi$$-th root is e, or conversely we could say that $$\pi$$ is its natural logarithm. The $$\pi$$-th root of the summation is readily obtained with the HP-71B by executing this from the command line: >V=0 @ FOR D=0 TO 50 STEP 2 @ V=V+PI^(D/2)/FACT(D/2) @ NEXT D @ V^(1/PI) 2.71828182846 which agrees with  e ~ 2.71828182846. For the 12-digit HP-71B we stop at dimension 50 because  $$\pi^{25}/25! \approx 1.73\cdot10^{-13}$$, so adding further terms won't affect the result. Also, as I read for the first time in one of Martin Gardner's books many decades ago, the volume enclosed by the n-dimensional unit sphere tends to 0 with growing n, and reaches a maximum for a (fractional) dimension between 5 (Vol = $$8\pi^2/15$$ ~ 5.264) and 6 (Vol = $$\pi^3/6$$ ~ 5.168). See if you can find this unique dimension and the corresponding maximum volume. • Albert Chan commented that the formula gives 1 for the volume of a 0-dimensional sphere (whatever its "radius"), which seems weird to him. Well, that's by definition.
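Valentin's mini-exercise about the maximal-volume fractional dimension can be attacked numerically with a plain ternary search on $$V(n) = \pi^{n/2}/\Gamma(n/2+1)$$, which is unimodal on the bracket [5, 6] given in the text above. A minimal Python sketch (the bracket and iteration count are the only assumptions):

```python
import math

def unit_ball_volume(n):
    # V(n) = pi^(n/2) / Gamma(n/2 + 1); math.gamma accepts fractional n
    return math.pi ** (n / 2.0) / math.gamma(n / 2.0 + 1.0)

lo, hi = 5.0, 6.0
for _ in range(200):  # ternary search for the maximum of a unimodal function
    m1 = lo + (hi - lo) / 3.0
    m2 = hi - (hi - lo) / 3.0
    if unit_ball_volume(m1) < unit_ball_volume(m2):
        lo = m1  # maximum lies in [m1, hi]
    else:
        hi = m2  # maximum lies in [lo, m2]

n_star = (lo + hi) / 2.0
print(n_star, unit_ball_volume(n_star))
```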
Come to that, the formula also gives 2 for the volume of the 1-dimensional unit "sphere", which is but a line segment that obviously has no 3D "volume", and it also gives $$\pi$$ for the volume of the 2-dimensional unit "sphere", which is but a 2D circle with no 3D "volume" either. We tend to think of volume in terms of dimensions 3 or greater but mathematically that's not necessarily so. e.  The song "$$\pi$$" by Kate Bush is indeed awesome, as is most of her music ("Cloudbusting", mentioned in this thread, certainly is, as is the video for it, almost a whole tragic movie told in a few minutes), and part of it appears in The Simpsons' 26th-season finale, "Mathlete's Feat", featuring about one minute of the song or so. $$\pi$$ also appears in at least two other episodes which I watched at the time: in one of them, Prof. Frink unexpectedly says aloud that $$\pi$$ is exactly equal to 3 as a way to get some much needed attention (he sure gets it !), and in another Homer and Marge are visiting some school for "Snotty Girls and Mama's Boys" (i.e., gifted children) and two of them are singing a hand-clapping song with lyrics they've concocted to help them remember a few digits of $$\pi$$. There may be many more ... (episodes featuring $$\pi$$, that is). • Ángel Martín briefly visited the thread to express his love for the song (and Massimo Gnerucci did likewise for the artist herself), as did Maximilian Hohmann, who also mentioned "Cloudbusting" and its pseudo-scientific background, which I knew about from reading Martin Gardner's and James Randi's books in the distant past. f.  First, those 31,415,926,535,897 decimal places were reported by Emma Haruka Iwao on 2019's $$\pi$$ Day after 121 days of computation. Then, in January 2020 Tim Mullican computed 50 trillion digits in some 300 days, which, if I'm not wrong, is the current world record as of 2021.
The resulting string of digits passes all normality tests, where we consider a real number R to be normal in some base B if any string of N base-B digits appears in the base-B representation of R with frequency $$B^{-N}$$, e.g.: in base 10 every digit 0-9 appears 1/10-th of the time, every 2-digit sequence 00-99 appears 1/100-th of the time, etc. It can be proved that almost all real numbers are normal in any and all bases B, but rigorously proving that a "naturally-occurring" real R (i.e., not artificially defined), say $$\pi$$, is normal for even just one base B (say base 2 or 10) is excruciatingly difficult and not a single result is known so far, though the expectations are that all irrational numbers actually are, e.g. if you compute a trillion bits of $$\sqrt{2}$$, you'll find about the same number of 0's and 1's and about the same number of 00's, 01's, 10's and 11's, etc. Not all is lost, however. If we can't (yet) prove that $$\pi$$ is normal in base 2 or 10, say, we can try to estimate the credibility of the decision "$$\pi$$ is not normal", which somewhat resembles probabilistic primality testing, where we can't rigorously prove that a certain number is prime in reasonable time but we may certify it as a probable prime if we can quickly prove that the probability of it being composite is extremely small. In the case of $$\pi$$'s normality, this has been done by analyzing the first 16 trillion bits of $$\pi$$, and the result is that the decision "$$\pi$$ is not normal" has credibility $$4.3497\cdot10^{-3064}$$, which in my dictionary is akin to "impossible". Second, the bilingual joke about "Fear of number $$\pi$$" being called "Trescatorcephobia" is a pun on 3.14 being usually said as "tres catorce" in Spanish. Finally, re the "peer reviewed" papers demonstrating that $$\pi$$'s value is some algebraic number, I'm astonished that anyone would give them credit (let alone publish them) except for non-academic reasons.
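The frequency test described above can be tried in miniature: Gibbons' unbounded spigot algorithm produces decimal digits of π with exact integer arithmetic, and counting them shows the digit frequencies already hovering near 1/10 after a thousand digits. A toy sketch (this illustrates the counting, of course it proves nothing about normality):

```python
from collections import Counter

def pi_digits(n):
    # Gibbons' unbounded spigot algorithm: streams decimal digits of pi
    # using exact (arbitrary-precision) integer arithmetic only.
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)  # m is a confirmed digit
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

d = pi_digits(1000)
freq = Counter(d)
print(freq)  # each of the ten digits should occur roughly 100 times
```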
Next category: papers succinctly proving Riemann's Hypothesis or Fermat's Last Theorem in a few pages, almost casually.

• Peter P commented "where was my simple proof of Fermat's Last Theorem again? Gotta get that published lest someone steals my brilliant idea...". LOL ! FLT is almost false, in the sense that there are infinitely many almost-counterexamples where the arbitrarily large LHS and RHS differ by just 1 (two other almost-counterexamples appeared in The Simpsons as well though not so close, namely $$1782^{12} + 1841^{12} = 1922^{12}$$ and $$3987^{12} + 4365^{12} = 4472^{12}$$). Also, a nice "flipped" variant of FLT can actually be proved using a truly "brilliant" idea.

robve gave very interesting comments and links re so-called predatory journals, as well as a related personal experience while attending an IEEE conference.

That's all for now. Again, thanks for your interest and participation in this SRC, glad you enjoyed it ! Best regards. V.

All My Articles & other Materials here: Valentin Albillo's HP Collection

03-24-2021, 05:53 PM (This post was last modified: 03-24-2021 08:57 PM by robve.) Post: #39
robve Senior Member Posts: 332 Joined: Sep 2020
RE: [VA] SRC #009 - Pi Day 2021 Special

(03-23-2021 10:20 PM)Valentin Albillo Wrote: Again, thanks for your interest and participation in this SRC, glad you enjoyed it !

Thank you for posting the pi day special and for the conclusion! I moved the remainder of the long post as a separate topic on Adaptive Simpson and Romberg methods, as suggested: https://www.hpmuseum.org/forum/thread-16523.html

- Rob
"I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx...

03-26-2021, 03:48 AM (This post was last modified: 03-26-2021 01:25 PM by robve.) Post: #40
robve Senior Member Posts: 332 Joined: Sep 2020
RE: [VA] SRC #009 - Pi Day 2021 Special

b.
answer to what this program computes:

10 P=1
20 FOR I=2 TO 1000 STEP 2
30 P=P*I/(I-1)
40 DISP 2*P*P/(I+1)
50 NEXT I

As Albert correctly points out and explains in his reply, the program computes the Wallis Product: $$\frac{\pi}{2} = \prod_{n=1}^\infty \frac{4n^2}{4n^2-1} = \frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdot 8\cdots}{1\cdot 3\cdot 3\cdot 5\cdot 5\cdot 7\cdot 7\cdot 9\cdots}$$ The proof is "typically featured in modern calculus textbooks" according to this Wikipedia article. Another way to express the Wallis Product in compact form is with double factorials: $$\frac{\pi}{2} \approx \begin{cases} \displaystyle\frac{n\,(n-1)!!^2}{n!!^2} & \text{if } n \ge 3 \text{ is odd} \\ \displaystyle\frac{n!!^2}{(n+1)\,(n-1)!!^2} & \text{if } n \ge 2 \text{ is even} \end{cases}$$ where $$n!!$$ is the double factorial. Of course, we would never want our programs to evaluate such factorials in the numerator and denominator, because of overflow and loss of floating point precision. There are many ways this product can be implemented in code. The code shown above builds the numerator directly as 2*P*P to make the relation of the code to the product form more readily apparent.

The Wallis product converges to $$\pi$$ very slowly, much more slowly than Viète's formula and other historical infinite series formulas for $$\pi$$. It is not a practical algorithm to compute $$\pi$$ to many decimals. Efficient algorithms exist to compute $$\pi$$ to many decimals, such as the following short C program (160 characters when compacted) written by Dik T.
Winter at CWI (Centrum voor Wiskunde en Informatica - the national research institute for mathematics and computer science) Amsterdam to compute $$\pi$$ up to 800 decimal digits:

int a = 10000, b, c = 2800, d, e, f[2801], g;
main() {
    for (; b - c;)
        f[b++] = a / 5;
    for (; d = 0, g = c * 2; c -= 14, printf("%.4d", e + d / a), e = d % a)
        for (b = c; d += f[b] * a, f[b] = d % --g, d /= g--, --b; d *= b)
            ;
}

Its output, 800 digits of $$\pi$$:

31415926535897932384626433832795028841971693993751058209749445923078164062862089
98628034825342117067982148086513282306647093844609550582231725359408128481117450
28410270193852110555964462294895493038196442881097566593344612847564823378678316
52712019091456485669234603486104543266482133936072602491412737245870066063155881
74881520920962829254091715364367892590360011330530548820466521384146951941511609
43305727036575959195309218611738193261179310511854807446237996274956735188575272
48912279381830119491298336733624406566430860213949463952247371907021798609437027
70539217176293176752384674818467669405132000568127145263560827785771342757789609
17363717872146844090122495343014654958537105079227968925892354201995611212902196
08640344181598136297747713099605187072113499999983729780499510597317328160963185

Other programs to compute the Wallis Product:

Haskell (functional PL):

main = putStrLn (show (wallis 1000))
  where
    wallis n = 2 * p ^ 2 / (n + 1)
      where
        (_, p) = until done next (n, 1.0)
        next (k, p) = (k - 2, p * k / (k - 1))
        done (k, p) = k == 0

C (imperative PL):

#include <stdio.h>

int main() {
    int i, n = 1000; // must be even
    double p = 1;
    for (i = 2; i <= n; i += 2)
        p *= (double)i / (i - 1);
    printf("%g\n", 2 * p * p / (n + 1));
}

Prolog (logic PL):

wallis(W) :- prod(1000, P), W is 2*P*P/1001.
prod(0, 1) :- !.
prod(K, Q) :- M is K-2, prod(M, P), Q is P*K/(K-1).

- Rob
"I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx...
# Pull-back of a fibration along a homotopy equivalence and homotopy classes of sections

Let $p:E\rightarrow B$ be a fibration (i.e., having the homotopy lifting property with respect to all spaces), and $f: B'\rightarrow B$ and $g:B\rightarrow B'$ be homotopy inverses. Denote by $\pi_0\Gamma(B,E)$ the set of homotopy classes of sections of $p$, and likewise for other fibrations. I am interested in the following

Conjecture: There is a bijection $\beta:\pi_0\Gamma(B,E) \rightarrow \pi_0\Gamma(B',f^*E)$.

This would be a generalization of the elementary result $[B,X] \underset{\approx}{\xrightarrow{f^*}} [B',X]$, which is the case of trivial fibrations. Some vague ideas:

1. It was pointed out by an author of [R. Brown and P.R. Heath, "Coglueing homotopy equivalences", Math. Z. 113 (1970) 313-362] that the canonical projection $f':f^*E \rightarrow E$ is a homotopy equivalence (Corollary 1.4). Furthermore, there exists a map $g':E \rightarrow f^*E$, making the obvious diagram involving $g$ commute, such that $g'\circ f'$ and $f'\circ g'$ are homotopic to the identities via maps that factor through the bases (Theorem 3.4). This is an interesting result, but the issue is, unlike $f'$, that $g'$ doesn't seem to induce a map between sections in a natural way, so I don't know how this may be applied to my conjecture.

2. There's an induced map $f^*:\pi_0\Gamma(B,E) \rightarrow \pi_0\Gamma(B',f^*E)$ sending $\left[s:B\rightarrow E\right]$ to $\left[({\rm id}_{B'},s\circ f): B' \rightarrow f^*E\right]$, recalling that $f^*E=B'\times_{f,p}E$. One may try to prove $f^*$ is bijective. To do so, it would suffice to prove that the compositions $g^* \circ f^*$ and $f^* \circ g^*$ below are bijective: $$\pi_0\Gamma(B,E) \xrightarrow{f^*} \pi_0\Gamma(B',f^*E) \xrightarrow{g^*} \pi_0\Gamma(B,g^*f^*E) \xrightarrow{f^*} \pi_0\Gamma(B',f^*g^*f^*E).$$ Since $E\rightarrow B$ and $g^*f^*E\rightarrow B$ are pull-backs along homotopic maps, they are fiber homotopy equivalent (i.e.
there exist fiber-preserving maps between the total spaces, the compositions of which are homotopic to the identities via fiber-preserving maps), by e.g. Proposition 4.62 of Hatcher's "Algebraic Topology." It follows that $$\pi_0\Gamma(B,E)\approx\pi_0\Gamma(B,g^*f^*E).$$ Similarly, $$\pi_0\Gamma(B',f^*E) \approx \pi_0\Gamma(B',f^*g^*f^*E).$$ However, it is not known whether these bijections are given by $g^*\circ f^*$ and $f^* \circ g^*$.

EDIT 6/5/2015: Upon encouragement by Dan Ramras, I made a renewed effort to carry my second idea further. I think the conjecture holds at least in "favorable cases," but I'm not sure how to conveniently characterize such cases, or if a more general proof is possible. Our task boils down to the following. Let $p:E\rightarrow B$ be a fibration, and $F_t: B\rightarrow B$ be a homotopy such that $F_0={\rm id}_B$. I shall use $p_t$ to denote the pull-back fibration $F_t^*E\rightarrow B$. On the one hand, to each section $s\in \Gamma(B,E)$ of $p$ we can associate the section $({\rm id}_B, s \circ F_1)\in \Gamma(B, F_1^*E)$ of $p_1$. On the other hand, by the aforementioned Proposition 4.62 there is a fiber homotopy equivalence $\Phi:E\rightarrow F_1^*E$, so to each $s\in \Gamma(B,E)$ we can also associate the section $\Phi\circ s\in \Gamma(B, F_1^*E)$. The second way of associating sections is guaranteed to induce a bijection $\pi_0\Gamma(B,E)\xrightarrow{\approx} \pi_0\Gamma(B, F_1^*E)$, since $\Phi$ is a fiber homotopy equivalence. The task now is to show that the first way of associating sections induces the same map $\pi_0\Gamma(B,E)\rightarrow \pi_0\Gamma(B, F_1^*E)$. It suffices to show, given each $s\in \Gamma(B,E)$, that $\Phi\circ s$ and $({\rm id}_B, s \circ F_1)$ are in the same homotopy class of sections.

Let us recall the construction of $\Phi$. Regarding $F$ as a map $B\times I \rightarrow B$, there is the pull-back $\pi:F^*E\rightarrow B\times I$ of $p$ along $F$.
Let \begin{eqnarray} L: E\times I &\rightarrow& B\times I \\ (e,t) &\mapsto& (p(e), t), \end{eqnarray} which can be thought of as a homotopy of maps $E\rightarrow B\times I$. Now consider the homotopy lifting problem \begin{eqnarray} E\times \{0\} &\xrightarrow{\widetilde L_0}&~ F^*E \\ \downarrow~~~~~~& &~~~\downarrow\pi \\ E\times I ~~~& \xrightarrow{~L~} & B\times I \end{eqnarray} where $\widetilde L_0$ is the obvious injection $E\times\{0\} \xrightarrow{\approx} F_0^*E \hookrightarrow F^*E$. Let $\widetilde L: E\times I \rightarrow F^*E$ be the lift of $L$ extending $\widetilde L_0$. Then we define $\Phi$ as the restriction of $\widetilde L$ to $t=1$, i.e. \begin{eqnarray} \Phi: E &\rightarrow& F_1^*E \\ e &\mapsto& \widetilde L(e,1). \end{eqnarray} Remarkably, by the proof of Proposition 4.62, the homotopy class $[\Phi]$ of $\Phi$ is independent of the choice of the lift $\widetilde L$. Therefore, it suffices to prove the following: given each $s\in \Gamma(B,E)$, there exists such a choice of $\widetilde L$ that $\widetilde L(s(-),1) = ({\rm id}_B, s\circ F_1) \in \Gamma(B, F_1^*E)$. The nice thing is this choice can depend on $s$.

Thus suppose $s$ is given, and we will construct an $\widetilde L$ in two steps. In the first step, define \begin{eqnarray} \psi: s(B) \times I &\rightarrow& F^*E \\ (s(b), t) &\mapsto& \left((b, (s\circ F_t)(b)), t\right). \end{eqnarray} This is well-defined, as one can verify $(b, (s\circ F_t)(b))$ is indeed in $F_t^*E$. Noting that $\psi(s(b),0) = ((b,s(b)), 0)$, we paste $\psi$ and $\widetilde L_0$ to obtain \begin{eqnarray} \widetilde L_0 \cup \psi: (E\times\{0\}) \cup (s(B)\times I) \rightarrow F^*E. \end{eqnarray} $\widetilde L_0 \cup \psi$ certainly extends $\widetilde L_0$, and it lifts $L$ because $\pi \left((b, (s\circ F_t)(b)), t\right) = (b,t) = L(s(b),t)$.
In the second step, we have to solve the following homotopy lifting problem for the pair $(E,s(B))$ (or "homotopy lifting extension problem"): \begin{eqnarray} (E\times\{0\}) \cup (s(B)\times I) &\xrightarrow{\widetilde L_0 \cup \psi}&~ F^*E \\ \downarrow~~& &~~~\downarrow\pi \\ E\times I & \xrightarrow{~~~L~~~} & B\times I \end{eqnarray} This is where I had to make some favorable assumptions. Let us assume that every element of $\pi_0\Gamma(B,E)$ has a representative $s$ such that $(E,s(B))$ can be given a CW pair structure. By using a different representative if necessary we can assume that the given $s$ has this property. Now, a fibration is a Serre fibration, and a Serre fibration has the homotopy lifting property with respect to all CW pairs. Therefore the desired $\widetilde L:E\times I \rightarrow F^*E$ exists. (To complete the proof of the original conjecture, of course, the same favorable assumptions should be made about $f^*E\rightarrow B'$ as well as $E\rightarrow B$.) • Have you tried carefully tracing through the proof of 4.62 to extract formulas for the bijections it gives you? This might take some effort, but I would expect it to lead to a proof of your conjecture. – Dan Ramras Jun 4 '15 at 18:51 • @DanRamras Thanks for your comment! Following your advice I made another attempt to relate the construction in 4.62 to that mentioned in my second idea. However, I was only able to prove the conjecture under certain assumptions, which might have been unnecessary. I've just edited my post. Let me know if you have further ideas, or if you spot any gap in my partial proof. Thanks again! – user46652 Jun 6 '15 at 5:50 • I only skimmed what you wrote, but it looks to me like you only really need s to be a closed cofibration at the end (using the Strom model structure on Top). – Dan Ramras Jun 6 '15 at 6:10 • @DanRamras I guess this is what you were pointing at. 
$s$ is a closed cofibration iff $(E,s(B))$ is an NDR pair, which in turn implies $(E\times I, (E\times\{0\})\cup(s(B)\times I))$ is a DR pair, and a fibration has the lifting property wrt all DR pairs. But I'm confused as to whether all these hold for arbitrary topological spaces. The source where I read this only dealt with compactly generated Hausdorff spaces (Felix et al., "Rational Homotopy Theory," Proposition 2.1). I can certainly compromise if necessary, but it'd be better to be free of such restrictions. – user46652 Jun 7 '15 at 2:41

• @DanRamras I realized I wouldn't need to check all those mentioned in my previous comment. It'd be enough to show that $(E\times\{0\})\cup(s(B)\times I)\rightarrow E\times I$ (which is already acyclic/a homotopy equivalence) is a (Hurewicz) cofibration. Then we could use the axioms for model structure. – user46652 Jun 7 '15 at 2:54

Here is one way of proving the conjecture is true in general, using the modern method of weak factorization systems. A weak factorization system has at its core two classes of maps, the left class and the right class, which satisfy lifting properties with respect to each other. Here the right class is the class of Hurewicz fibrations. The left class is the class of trivial Hurewicz cofibrations (i.e. DR pairs). These classes define each other. The left class consists of all maps with the left lifting property with respect to all maps in the right class; the right class consists of all maps with the right lifting property with respect to the left class. So trivial Hurewicz cofibrations have the left lifting property with respect to the Hurewicz fibrations. They are exactly the Hurewicz cofibrations which are also homotopy equivalences. We won't really need the general notion, just a few special cases. You already know some trivial Hurewicz cofibrations.
For example, for any space $B$, the inclusion $$B \times \{0\} \to B \times [0,1]$$ has the left lifting property with respect to any Hurewicz fibration (by definition), hence this is the first example of a trivial Hurewicz cofibration. Also the trivial Hurewicz cofibrations are closed under several operations: composition, taking retracts, and cobase change (aka pushout). Using these we can form new examples of trivial Hurewicz cofibrations. Let $f: B' \to B$ be any map. Then the inclusion of $B$ in the mapping cylinder $$B \to B' \times [0,1] \cup^{B'\times \{1\}} B$$ is an example (a pushout of our previous example). We can also consider the inclusion on the other side $$B' \times \{0\} \to B' \times [0,1] \cup^{B'\times \{1\}} B$$ If the map $f$ is a homotopy equivalence then this too is a trivial Hurewicz cofibration, though this takes a bit more work to see. The hard part is proven in prop 7 of these notes, as well as in many textbooks.

Now let us first consider the special case of your conjecture where $f: B' \to B$ is a trivial Hurewicz cofibration.

Lemma: If $f: B' \to B$ is a trivial Hurewicz cofibration and $E \to B$ is a Hurewicz fibration, then we get an induced bijection: $$f^*: \pi_0 \Gamma(B, E) \to \pi_0 \Gamma(B', f^*E)$$

Proof: First let's show surjectivity. A section of $f^*E$ is the same as a map $s: B' \to E$ such that $ps = f$. This is a triangle which we can enlarge into a square where the left edge is $f$, the right edge is $p: E \to B$ and the bottom is the identity on $B$. This square is a "lifting problem". Now we use the property that trivial Hurewicz cofibrations have the left lifting property with respect to Hurewicz fibrations to solve the lifting problem. This solution is a map $B \to E$ which is exactly a section of $p$ which restricts on $B'$ to the original section.
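For concreteness (my own rendering, in the notation of the proof), the surjectivity square and its diagonal filler look like this: $$\begin{matrix} B' & \xrightarrow{s} & E \\ f \downarrow & \nearrow & \downarrow p \\ B & \xrightarrow{1_B} & B \end{matrix}$$ The diagonal arrow $B \to E$ supplied by the lifting property is exactly the section of $p$ extending the given section over $B'$.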
Injectivity is proven by the same argument but using the fact that $$B \times \{0,1\} \cup^{B' \times \{0,1\}} B' \times I \to B \times I$$ is also a trivial Hurewicz cofibration. [We leave this part as an exercise]. QED.

Now using this lemma we can prove the general conjecture as follows. We will construct a space $Z$ with two trivial Hurewicz cofibrations $$i:B \to Z$$ $$j:B' \to Z$$ and a map $k:Z \to B$ such that the composite $$B \to Z \to B$$ is the identity map on $B$ and the composite $$B' \to Z \to B$$ is our map $f$. Once we have a space with these maps, the previous lemma shows that the maps between homotopy classes of sections $i^*$ and $j^*$ are bijections. Since $i^* k^* = (k \circ i)^* = 1^*$ is also a bijection, this means that $k^*$ is a bijection too, and hence $f^* = j^* k^*$ is a bijection, which is what we wanted to show.

Such a space $Z$ can be constructed as $$B' \times [0,1] \cup^{B' \times 1} B \times [1,2]$$ that is, we glue $B' \times I$ to $B \times I$ at one end via the map $f$. The maps $i, j$ are given by including $B$ at 2 and $B'$ at 0 (the two ends of the "cylinder"). The map $k$ is given by projecting onto $B$ (using $f$ for points on the first half of the cylinder).

• Thanks for writing out this beautiful proof with such clarity! I have two minor questions. 1) In the proof of the lemma, did you mean to say, for injectivity, we need the fact that $(B\times\{0,1\})\cup(B'\times I)\rightarrow B\times I$ is a trivial Hurewicz cofibration? This would follow from Corollary 8 of the notes you cited. 2) Does the proof essentially depend on the compactly generated weakly Hausdorff assumption? I'm guessing the answer is yes, because that appears to be the working assumption of the notes, where Prop. 5 says every Hurewicz cofibration is closed. – user46652 Jul 11 '15 at 23:28

• Yes on the first point. I will edit.
For the second point, my point-set topology here is a little rusty, but I don't think we need the compactly generated weak Hausdorff assumption here, though many references will make that assumption. The point is that there is a similar weak factorization system on the category of all topological spaces where instead of just Hurewicz cofibrations we use closed Hurewicz cofibrations. Proving this is part of constructing the "Strom model structure". Finally, all of the cofibrations we use above are variations on mapping cylinders and so should be closed. – Chris Schommer-Pries Jul 13 '15 at 7:05

Rather than write out a solution I would like to make some suggestions that I hope will enable you to solve the problem. I am open to further discussion.

It seems to me that one of the difficulties you have is the lack of a notation for some of the concepts that are likely to crop up in the solution. Thus you have a notation for classes of sections, but not for more general situations. So here is my suggestion, including changing a little the notation in the cited paper. Let $p:D \to A, u: Z \to A$ be maps. Write $[Z,D;u]$ for the set of homotopy classes of maps $f:Z \to D$ such that $pf=u$ and each homotopy projects down to the constant homotopy on $u$. Thus classes of sections are the case $Z=A, u=1_A$.

The first result is that if $p$ is a fibration and $\theta: u \simeq v$ is a homotopy then we have a bijection $\theta_{\#}: [Z,D;u] \to [Z,D;v]$, giving an operation of the groupoid of such homotopies on the sets of homotopy classes. Now look at the results in the paper and see if you can see how this operation changes if you replace $p$ by a homotopy equivalent $q: E \to B$. The paper's reference 1 is available in a new edition titled Topology and Groupoids. The dual problem to yours could also be of interest. That will involve retractions.
Later: What your question exposes is that the paper cited does not discuss at all how $[Z,D;u]$ depends on maps $(D \to A) \to (E \to B)$, and homotopies of such. So this is something you could try to write up.

Consider a diagram as follows: $$\begin{matrix} && D & \xrightarrow{e} & E & \xrightarrow{d} & D \\ & f \nearrow & \downarrow p & & \downarrow q && \downarrow p\\ Z & \xrightarrow{u} &A & \xrightarrow{b} & B & \xrightarrow{a} & A \end{matrix}$$ We suppose $p,q$ are fibrations and $e,b$ give a fibre homotopy equivalence with homotopy inverse $d,a$. We are considering lifts $f$ of $u$. Note that if $u=1: A \to A$ then $f$ will be a section of $p$. We then get an induced diagram $$\begin{matrix} [Z,D;u] & \xrightarrow{\alpha} & [Z,E;bu] && \\ & \cong \searrow & \downarrow \rho & \searrow \cong & \\ && [Z,D;abu] & \xrightarrow{\gamma} & [Z,E;babu] \end{matrix}$$ where $\alpha$ is induced by $e,b$ and $\gamma$ is induced by $d,a$. The first slanting arrow comes from the homotopy $u \simeq abu$ and the second from the homotopy $1 \simeq ba$. The diagram is commutative because the homotopies are part of the fibre homotopy equivalence. So $\rho \alpha$ and $\gamma \rho$ are isomorphisms. It is now a standard elementary categorical result that $\alpha, \rho, \gamma$ are isomorphisms. When one examines the last part of the argument, one finds that curious "conjugacies" come in; see the cited paper.

As you see, to get the result on sections, i.e. when $u=1: A \to A$, one has to introduce a more general concept and notation.

• Thanks for continuing to help! This does give me some insights into what's going on. I'll have to see what I can do with it. – user46652 Jun 7 '15 at 8:46

• You can contact me by email if you want. One seems to need the old trick: if $abc$ is defined in a category, and $ab, bc$ are isomorphisms, then $a,b,c$ are iso also. – Ronnie Brown Jun 7 '15 at 13:00
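The "old trick" mentioned in the last comment is sometimes called the two-out-of-six property; for completeness, a quick sketch of why it works: if $ab$ and $bc$ are isomorphisms (with $abc$ defined), then $$b\,\big(c\,(bc)^{-1}\big) = 1, \qquad \big((ab)^{-1}a\big)\,b = 1,$$ so $b$ has both a right and a left inverse and is therefore an isomorphism, whence $a = (ab)\,b^{-1}$ and $c = b^{-1}(bc)$ are isomorphisms as well.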
# adespatial: Moran's Eigenvector Maps and related methods for the spatial multiscale analysis of ecological communities

#### 2018-09-26

The package adespatial contains functions for the multiscale analysis of spatial multivariate data. It implements some new functions and reimplements existing functions that were available in packages of the sedaR project hosted on R-Forge (spacemakeR, packfor, AEM, etc.). It can be seen as a bridge between packages dealing with multivariate data (e.g., ade4, Dray and Dufour (2007)) and packages that deal with spatial data (spdep). In adespatial, the spatial information is handled as a spatial weighting matrix, an object of class listw provided by the spdep package (Figure 1). It allows one to build Moran's Eigenvector Maps (MEM, Dray, Legendre, and Peres-Neto (2006)), orthogonal vectors maximizing the spatial autocorrelation (measured by Moran's index of autocorrelation). These spatial predictors can be used in multivariate statistical methods to provide spatially-explicit multiscale tools (Dray et al. 2012). This document provides a description of the main functionalities of the package.

Figure 1: Schematic representation of the functioning of the adespatial package. Classes are represented in pink frames and functions in blue frames. Classes and functions provided by adespatial are in bold.
To run the different analyses described, several packages are required and are loaded:

```r
library(adespatial)
library(ade4)
## Attaching package: 'ade4'
## The following object is masked from 'package:adespatial':
##
##     multispati
library(adegraphics)
## Attaching package: 'adegraphics'
## The following objects are masked from 'package:ade4':
##
##     kplotsepan.coa, s.arrow, s.class, s.corcircle, s.distri,
##     s.image, s.label, s.logo, s.match, s.traject, s.value,
##     table.value, triangle.class
library(spdep)
## Loading required package: sp
## Loading required package: Matrix
## Loading required package: spData
## To access larger datasets in this package, install the spDataLarge
## package with: install.packages('spDataLarge',
## repos='https://nowosad.github.io/drat/', type='source')
## Attaching package: 'spdep'
## The following object is masked from 'package:ade4':
##
##     mstree
library(maptools)
## Checking rgeos availability: TRUE
```

# 1 Spatial Neighborhood

Spatial neighborhoods are managed in spdep as objects of class nb. This corresponds to the notion of connectivity matrices discussed in Dray, Legendre, and Peres-Neto (2006) and can be represented by an unweighted graph. Various functions are devoted to creating nb objects from geographic coordinates of sites. We present different alternatives according to the design of the sampling scheme.

## 1.1 Surface data

The function poly2nb allows one to define a neighborhood when the sampling sites are polygons and not points (two regions are neighbors if they share a common boundary).
```r
data(mafragh)  # dataset shipped with ade4
class(mafragh$Spatial)
## [1] "SpatialPolygons"
## attr(,"package")
## [1] "sp"
par(mar = c(0, 0, 3, 0))
xx <- poly2nb(mafragh$Spatial)
plot(mafragh$Spatial, border = "grey")
plot(xx, coordinates(mafragh$Spatial), add = TRUE, pch = 20, col = "red")
title(main = "Neighborhood for polygons")
```

## 1.2 Regular grids

If the sampling scheme is based on a grid of 10 rows and 8 columns, spatial coordinates can be easily generated:

```r
xygrid <- expand.grid(x = 1:10, y = 1:8)
plot(xygrid, pch = 20, asp = 1)
```

For a regular grid, a spatial neighborhood can be created with the function cell2nb. Two types of neighborhood can be defined. The queen specification considers horizontal, vertical and diagonal edges:

```r
nb1 <- cell2nb(10, 8, type = "queen")
plot(nb1, xygrid, col = "red", pch = 20)
title(main = "Queen neighborhood")
nb1
## Neighbour list object:
## Number of regions: 80
## Number of nonzero links: 536
## Percentage nonzero weights: 8.375
## Average number of links: 6.7
```

The rook specification considers only horizontal and vertical edges:

```r
nb2 <- cell2nb(10, 8, type = "rook")
plot(nb2, xygrid, col = "red", pch = 20)
title(main = "Rook neighborhood")
nb2
## Neighbour list object:
## Number of regions: 80
## Number of nonzero links: 284
## Percentage nonzero weights: 4.4375
## Average number of links: 3.55
```

## 1.3 Transects

The easiest way to deal with transects is to consider them as grids with only one row:

```r
xytransect <- expand.grid(1:20, 1)
nb3 <- cell2nb(20, 1)
plot(nb3, xytransect, col = "red", pch = 20)
title(main = "Transect of 20 sites")
summary(nb3)
## Neighbour list object:
## Number of regions: 20
## Number of nonzero links: 38
## Percentage nonzero weights: 9.5
## Average number of links: 1.9
##
## 1  2
## 2 18
## 2 least connected regions:
## 1:1 20:1 with 1 link
## 18 most connected regions:
## 2:1 3:1 4:1 5:1 6:1 7:1 8:1 9:1 10:1 11:1 12:1 13:1 14:1 15:1 16:1 17:1 18:1 19:1 with 2 links
```

All sites have two neighbors except the first and the last one.
## 1.4 Irregular samplings

There are many ways to define a neighborhood in the case of irregular samplings. We consider a random sampling with 10 sites:

```r
set.seed(3)
xyir <- matrix(runif(20), 10, 2)
plot(xyir, pch = 20, main = "Irregular sampling with 10 sites")
```

The most intuitive way is to consider that sites are neighbors (or not) according to the distances between them. This definition is provided by the dnearneigh function:

```r
nbnear1 <- dnearneigh(xyir, 0, 0.2)
nbnear2 <- dnearneigh(xyir, 0, 0.3)
nbnear3 <- dnearneigh(xyir, 0, 0.5)
nbnear4 <- dnearneigh(xyir, 0, 1.5)
plot(nbnear1, xyir, col = "red", pch = 20)
title(main = "neighbors if 0<d<0.2")
plot(nbnear2, xyir, col = "red", pch = 20)
title(main = "neighbors if 0<d<0.3")
plot(nbnear3, xyir, col = "red", pch = 20)
title(main = "neighbors if 0<d<0.5")
plot(nbnear4, xyir, col = "red", pch = 20)
title(main = "neighbors if 0<d<1.5")
```

Using a distance-based criterion could lead to unbalanced graphs. For instance, if the maximum distance is too low, some points have no neighbors:

```r
nbnear1
## Neighbour list object:
## Number of regions: 10
## Number of nonzero links: 14
## Percentage nonzero weights: 14
## Average number of links: 1.4
## 3 regions with no links:
## 2 7 10
```

On the other hand, if the maximum distance is too high, all sites could be connected to the 9 others:

```r
nbnear4
## Neighbour list object:
## Number of regions: 10
## Number of nonzero links: 90
## Percentage nonzero weights: 90
## Average number of links: 9
```

It is also possible to define a neighborhood by a criterion based on nearest neighbors. However, this option can lead to non-symmetric neighborhoods: if site A is the nearest neighbor of site B, it does not mean that site B is the nearest neighbor of site A. The function knearneigh creates an object of class knn. It can be transformed into a nb object with the function knn2nb. This function has an argument sym which can be set to TRUE to force the output neighborhood to symmetry.
```r
knn1 <- knearneigh(xyir, k = 1)
nbknn1 <- knn2nb(knn1, sym = TRUE)
knn2 <- knearneigh(xyir, k = 2)
nbknn2 <- knn2nb(knn2, sym = TRUE)
plot(nbknn1, xyir, col = "red", pch = 20)
title(main = "Nearest neighbors (k=1)")
plot(nbknn2, xyir, col = "red", pch = 20)
title(main = "Nearest neighbors (k=2)")
```

This definition of neighborhood can lead to unconnected subgraphs. The function n.comp.nb finds the number of disjoint connected subgraphs:

```r
n.comp.nb(nbknn1)
## $nc
## [1] 3
##
## $comp.id
## [1] 1 2 1 1 3 3 1 1 3 2
n.comp.nb(nbknn2)
## $nc
## [1] 1
##
## $comp.id
## [1] 1 1 1 1 1 1 1 1 1 1
```

More elaborate procedures are available to define a neighborhood. For instance, a Delaunay triangulation is obtained with the function tri2nb. It requires the package deldir. Other graph-based procedures are also available:

```r
nbtri <- tri2nb(xyir)
## PLEASE NOTE: The components "delsgs" and "summary" of the
## object returned by deldir() are now DATA FRAMES rather than
## matrices (as they were prior to release 0.0-18).
## See help("deldir").
##
## PLEASE NOTE: The process that deldir() uses for determining
## duplicated points has changed from that used in version
## 0.0-9 of this package (and previously). See help("deldir").
nbgab <- graph2nb(gabrielneigh(xyir), sym = TRUE)
nbrel <- graph2nb(relativeneigh(xyir), sym = TRUE)
nbsoi <- graph2nb(soi.graph(nbtri, xyir), sym = TRUE)
plot(nbtri, xyir, col = "red", pch = 20)
title(main = "Delaunay triangulation")
plot(nbgab, xyir, col = "red", pch = 20)
title(main = "Gabriel Graph")
plot(nbrel, xyir, col = "red", pch = 20)
title(main = "Relative Neighbor Graph")
plot(nbsoi, xyir, col = "red", pch = 20)
title(main = "Sphere of Influence Graph")
```

The function chooseCN provides a simple way to build spatial neighborhoods. It is a wrapper around many of the spdep functions presented above. The function createlistw discussed in section XX is an interactive graphical interface that allows one to generate R code to build neighborhood objects.
## 1.5 Manipulation of nb objects

A `nb` object is a list of neighbors. The neighbors of the first site are in the first element of the list:

```r
nbgab[[1]]
## [1] 4 7
```

Various tools are provided by spdep to deal with these objects. For instance, it is possible to identify differences between two neighborhoods:

```r
diffnb(nbsoi, nbrel)
## Neighbour list object:
## Number of regions: 10
## Number of nonzero links: 16
## Percentage nonzero weights: 16
## Average number of links: 1.6
## 2 regions with no links:
## 4 5
```

In some cases, it can be useful to remove connections due to edge effects. The function `edit.nb` provides an interactive tool to add or delete connections. The function `include.self` allows a site to be included in its own list of neighbors:

```r
str(nbsoi)
## List of 10
##  $ : int [1:4] 3 4 7 8
##  $ : int 10
##  $ : int [1:3] 1 4 8
##  $ : int [1:3] 1 3 8
##  $ : int [1:2] 6 9
##  $ : int [1:2] 5 9
##  $ : int [1:2] 1 10
##  $ : int [1:3] 1 3 4
##  $ : int [1:2] 5 6
##  $ : int [1:2] 2 7
##  - attr(*, "region.id")= chr [1:10] "1" "2" "3" "4" ...
##  - attr(*, "call")= language soi.graph(tri.nb = nbtri, coords = xyir)
##  - attr(*, "class")= chr "nb"
##  - attr(*, "sym")= logi TRUE

str(include.self(nbsoi))
## List of 10
##  $ : int [1:5] 1 3 4 7 8
##  $ : int [1:2] 2 10
##  $ : int [1:4] 1 3 4 8
##  $ : int [1:4] 1 3 4 8
##  $ : int [1:3] 5 6 9
##  $ : int [1:3] 5 6 9
##  $ : int [1:3] 1 7 10
##  $ : int [1:4] 1 3 4 8
##  $ : int [1:3] 5 6 9
##  $ : int [1:3] 2 7 10
##  - attr(*, "region.id")= chr [1:10] "1" "2" "3" "4" ...
##  - attr(*, "call")= language soi.graph(tri.nb = nbtri, coords = xyir)
##  - attr(*, "class")= chr "nb"
##  - attr(*, "sym")= logi TRUE
##  - attr(*, "self.included")= logi TRUE
```

The spdep package provides many other tools to manipulate `nb` objects:

```r
intersect.nb(nb.obj1, nb.obj2)
union.nb(nb.obj1, nb.obj2)
setdiff.nb(nb.obj1, nb.obj2)
complement.nb(nb.obj)
nblag(neighbours, maxlag)
```

# 2 Spatial weighting matrices

Spatial weighting matrices are computed by a transformation of the spatial neighborhood objects. In R, they are not stored as matrices but as objects of class `listw`. This format is more efficient than a matrix representation for managing large data sets. An object of class `listw` can easily be created from an object of class `nb` with the function `nb2listw`.

Different `listw` objects can be obtained from the same `nb` object. The argument `style` defines a transformation of the matrix, such as standardization by row sums, by the total sum, or binary coding. General spatial weights can be introduced through the argument `glist`. This makes it possible to introduce, for instance, weights based on the distances between points. For this task, the function `nbdists` is very useful, as it computes the Euclidean distances between neighboring sites defined by an `nb` object.

To obtain a simple row-standardization, the function is simply called by:

```r
nb2listw(nbgab)
## Characteristics of weights list object:
## Neighbour list object:
## Number of regions: 10
## Number of nonzero links: 26
## Percentage nonzero weights: 26
## Average number of links: 2.6
##
## Weights style: W
## Weights constants summary:
##    n  nn S0       S1       S2
## W 10 100 10 8.513889 41.04167
```

More sophisticated forms of spatial weighting matrices can be defined. For instance, it is possible to weight edges between neighbors as functions of geographic distances.
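The `style` argument corresponds to simple transformations of the binary connectivity matrix $$\mathbf{C}$$ (where $$c_{ij} = 1$$ if sites $$i$$ and $$j$$ are neighbors and $$0$$ otherwise). For the two styles used in this document:

$$\text{style "B" (binary): } w_{ij} = c_{ij} \qquad\qquad \text{style "W" (row-standardized): } w_{ij} = \frac{c_{ij}}{\sum_{k} c_{ik}}$$

With style `"W"`, each row of the weighting matrix sums to one, so that $$S_0 = \sum_{i,j} w_{ij} = n$$; this is consistent with the output above, where $$S_0 = 10$$ for the $$n = 10$$ sites.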
In a first step, distances between neighbors are obtained by the function `nbdists`:

```r
distgab <- nbdists(nbgab, xyir)
str(distgab)
## List of 10
##  $ : num [1:2] 0.166 0.403
##  $ : num [1:3] 0.424 0.383 0.286
##  $ : num [1:4] 0.4236 0.0617 0.3682 0.3538
##  $ : num [1:3] 0.166 0.0617 0.1501
##  $ : num [1:2] 0.0383 0.0384
##  $ : num [1:4] 0.383 0.3682 0.0383 0.3344
##  $ : num [1:2] 0.403 0.534
##  $ : num [1:2] 0.15 0.334
##  $ : num 0.0384
##  $ : num [1:3] 0.286 0.354 0.534
##  - attr(*, "class")= chr "nbdist"
##  - attr(*, "call")= language nbdists(nb = nbgab, coords = xyir)
```

Then, spatial weights are defined as a function of distance (e.g. $$1-d_{ij}/\max(d_{ij})$$):

```r
fdist <- lapply(distgab, function(x) 1 - x/max(dist(xyir)))
```

And the spatial weighting matrix is then created:

```r
listwgab <- nb2listw(nbgab, glist = fdist, style = "B")
listwgab
## Characteristics of weights list object:
## Neighbour list object:
## Number of regions: 10
## Number of nonzero links: 26
## Percentage nonzero weights: 26
## Average number of links: 2.6
##
## Weights style: B
## Weights constants summary:
##    n  nn       S0       S1       S2
## B 10 100 18.19528 27.02395 148.4577

names(listwgab)
## [1] "style"      "neighbours" "weights"

listwgab$neighbours[[1]]
## [1] 4 7

listwgab$weights[[1]]
## [1] 0.8170501 0.5558821
```

The matrix representation of a `listw` object can also be obtained:

```r
print(listw2mat(listwgab), digits = 3)
##     [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]  [,8]  [,9] [,10]
## 1  0.000 0.000 0.000 0.817 0.000 0.000 0.556 0.000 0.000 0.000
## 2  0.000 0.000 0.533 0.000 0.000 0.578 0.000 0.000 0.000 0.685
## 3  0.000 0.533 0.000 0.932 0.000 0.594 0.000 0.000 0.000 0.610
## 4  0.817 0.000 0.932 0.000 0.000 0.000 0.000 0.835 0.000 0.000
## 5  0.000 0.000 0.000 0.000 0.000 0.958 0.000 0.000 0.958 0.000
## 6  0.000 0.578 0.594 0.000 0.958 0.000 0.000 0.631 0.000 0.000
## 7  0.556 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.412
## 8  0.000 0.000 0.000 0.835 0.000 0.631 0.000 0.000 0.000 0.000
## 9  0.000 0.000 0.000 0.000 0.958 0.000 0.000 0.000 0.000 0.000
## 10 0.000 0.685 0.610 0.000 0.000 0.000 0.412 0.000 0.000 0.000
```

To facilitate the building of spatial neighborhoods (`nb` objects) and associated spatial weighting matrices (`listw` objects), the package adespatial provides an interactive graphical interface. The interface is launched by the call `listw.explore()`, assuming that the spatial coordinates are stored in an object of the R session (Figure 2).

Figure 2: The interactive interface provided by the function listw.explore.

# 3 Spatial predictors

The package adespatial provides different tools to build spatial predictors that can be incorporated in multivariate analyses. They are orthogonal vectors stored in an object of class `orthobasisSp`. Orthogonal polynomials of geographic coordinates can be computed by the function `orthobasis.poly`, whereas traditional principal coordinates of neighbour matrices (PCNM, Borcard and Legendre (2002)) are obtained by the function `dbmem`. The more flexible Moran's eigenvector maps (MEMs) of a spatial weighting matrix are computed by the functions `scores.listw` or `mem` of the adespatial package. These two functions are identical and return an object of class `orthobasisSp`.

```r
mem.gab <- mem(listwgab)
mem.gab
## Orthobasis with 10 rows and 9 columns
## Only 6 rows and 4 columns are shown
##          MEM1       MEM2        MEM3        MEM4
## 1 -0.99150068  1.1963752 -0.93642712 -0.04953977
## 2 -0.03655431 -1.6549057  0.09973653 -0.23657908
## 3 -0.66077128 -0.9284951  0.82861853  1.35172328
## 4 -1.26547947  1.0414066  0.91626372  1.02967682
## 5  1.84724812  0.4858047 -0.09173118  0.18858500
## 6  0.96155231 -0.3553900  1.15183204 -1.28527087
```

This object contains the MEMs, stored as a data.frame, plus some attributes:

```r
str(mem.gab)
## Classes 'orthobasisSp', 'orthobasis' and 'data.frame': 10 obs. of 9 variables:
##  $ MEM1: num -0.9915 -0.0366 -0.6608 -1.2655 1.8472 ...
##  $ MEM2: num 1.196 -1.655 -0.928 1.041 0.486 ...
##  $ MEM3: num -0.9364 0.0997 0.8286 0.9163 -0.0917 ...
##  $ MEM4: num -0.0495 -0.2366 1.3517 1.0297 0.1886 ...
##  $ MEM5: num 1.441 0.341 0.636 -0.027 0.558 ...
##  $ MEM6: num -1.13 -2.0264 1.4244 0.0542 0.4736 ...
##  $ MEM7: num -0.8913 1.0131 0.7105 -0.0329 -1.0514 ...
##  $ MEM8: num -1.057 0.947 -0.705 1.404 1.619 ...
##  $ MEM9: num -0.664 -0.222 -1.323 1.562 -1.022 ...
##  - attr(*, "values")= num 0.12766 0.09508 0.0749 0.00496 -0.02487 ...
##  - attr(*, "weights")= num 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
##  - attr(*, "call")= language mem(listw = listwgab)
```

The eigenvalues associated with the MEMs are stored in the attribute called `values`:

```r
barplot(attr(mem.gab, "values"),
        main = "Eigenvalues of the spatial weighting matrix", cex.main = 0.7)
```

A plot method is provided to represent MEMs. By default, eigenvectors are represented as a table (sites as rows, MEMs as columns):

```r
plot(mem.gab)
```

This representation is not really informative; MEMs can instead be represented as maps in geographical space if the argument `SpORcoords` is provided:

```r
plot(mem.gab, SpORcoords = xyir, nb = nbgab)
```

Moran's I can be computed and tested for each eigenvector with the `moran.randtest` function:

```r
moranI <- moran.randtest(mem.gab, listwgab, 99)
moranI
## class: krandtest lightkrandtest
## Monte-Carlo tests
## Call: moran.randtest(x = mem.gab, listw = listwgab, nrepet = 99)
##
## Number of tests: 9
##
## Adjustment method for multiple comparisons: none
## Permutation number: 99
##   Test        Obs    Std.Obs   Alter Pvalue
## 1 MEM1  0.7015926  3.4453729 greater   0.01
## 2 MEM2  0.5225529  2.5618940 greater   0.01
## 3 MEM3  0.4116201  2.3076371 greater   0.01
## 4 MEM4  0.0272521  0.5418607 greater   0.29
## 5 MEM5 -0.1366833  0.2527610 greater   0.45
## 6 MEM6 -0.2873389 -0.6194100 greater   0.73
## 7 MEM7 -0.4986655 -1.5993989 greater   0.94
## 8 MEM8 -0.7296825 -2.2195544 greater   1.00
## 9 MEM9 -1.0106474 -3.4273999 greater   1.00
```

By default, the function `moran.randtest` tests against the alternative hypothesis of positive autocorrelation (`alter = "greater"`), but this can be modified by setting the argument `alter` to `"less"` or `"two-sided"`. The function is not restricted to MEMs and can be used to compute the spatial autocorrelation of any kind of variable.

As demonstrated in Dray, Legendre, and Peres-Neto (2006), eigenvalues and Moran's I values are equal up to a multiplicative constant:

```r
attr(mem.gab, "values") / moranI$obs
## MEM1.statistic MEM2.statistic MEM3.statistic MEM4.statistic MEM5.statistic
##      0.1819528      0.1819528      0.1819528      0.1819528      0.1819528
## MEM6.statistic MEM7.statistic MEM8.statistic MEM9.statistic
##      0.1819528      0.1819528      0.1819528      0.1819528
```

Then, it is possible to map only the significant eigenvectors with positive autocorrelation (i.e., MEMs with significant positive spatial autocorrelation). The p-values of a `krandtest` object are stored in its `pvalue` element; here, MEM1 to MEM3 are retained:

```r
signi <- which(moranI$pvalue < 0.05)
signi
## [1] 1 2 3
plot(mem.gab[, signi], SpORcoords = xyir, nb = nbgab)
```

# References

Borcard, D., and P. Legendre. 2002. "All-scale spatial analysis of ecological data by means of principal coordinates of neighbour matrices." Ecological Modelling 153: 51–68.

Dray, S., and A. B. Dufour. 2007. "The ade4 package: implementing the duality diagram for ecologists." Journal of Statistical Software 22 (4): 1–20.

Dray, S., P. Legendre, and P. R. Peres-Neto. 2006. "Spatial modeling: a comprehensive framework for principal coordinate analysis of neighbor matrices (PCNM)." Ecological Modelling 196: 483–93.

Dray, S., R. Pélissier, P. Couteron, M. J. Fortin, P. Legendre, P. R. Peres-Neto, E. Bellier, et al. 2012. "Community ecology in the age of multivariate multiscale spatial analysis." Ecological Monographs 82 (3): 257–75.
https://support.bioconductor.org/p/55783/
Summarization by gene or exon or transcript

Question (Reema Singh): I just want to know: what is the ideal feature for summarizing read counts after alignment? Gene, transcript and exon features from GFF/GTF files are frequently used.

Answer (Steve Lianoglou, 31 Oct 2013): If you are asking what the "ideal" format for storing summarized read counts is, I would have to say that in "the R world" that would be to use a SummarizedExperiment (it is a class defined in the GenomicRanges package). The rowData() of the SummarizedExperiment would contain the GRanges (or GRangesList) that define where the counts in each row of your assay are from, and the columns would tell you the counts for a given experiment. You could store your relevant sample data in colData, i.e. phenotypic data for each experiment (column), like cell type, perturbation, whatever. See ?SummarizedExperiment for more info. If you were asking something else: sorry, I'm still not getting what the question is and perhaps someone else can chime in.

Answer (Michael Stadler, 1 Nov 2013): Hi Reema, if I understand your question correctly, I think the answer is: it depends. Counting alignments per exon may allow you to pick up differential splicing or differential isoform usage unrelated to splicing (e.g. alternative promoter usage or alternative termination). However, robust estimation of exon levels will require much greater sequencing depth; assuming that a gene has on average about ten exons, you would need about ten times more reads to get a similar magnitude of counts. If you don't have that data or are not interested in within-gene structural differences, gene-level estimates may be the better choice. Of course, you could try out both and compare results. You can easily get such counts from a bam file using countOverlaps (see the workflow at http://www.bioconductor.org/help/workflows/high-throughput-sequencing/), or with the QuasR package, where getting gene and exon counts is as simple as:

```r
gn <- qCount(proj, txdb, reportLevel = "gene")
ex <- qCount(proj, txdb, reportLevel = "exon")
```

Reply (Reema Singh): Hi Michael, yes, this is what I wanted to know. Thank you. :)
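The SummarizedExperiment layout described in the thread can be sketched as follows. All feature names, counts and sample annotations below are made up for illustration; in current Bioconductor releases the class lives in the SummarizedExperiment package (it was part of GenomicRanges when this thread was written):

```r
suppressPackageStartupMessages(library(SummarizedExperiment))

# Fabricated counts: 3 features (rows) x 2 samples (columns)
counts <- matrix(c(10L, 0L, 25L, 7L, 3L, 40L), nrow = 3,
                 dimnames = list(c("geneA", "geneB", "geneC"),
                                 c("ctrl", "treated")))

# Row ranges: where the counts of each row come from
rr <- GRanges(seqnames = "chr1",
              ranges = IRanges(start = c(100, 500, 900), width = 100))

# colData: phenotypic data for each sample (column)
cd <- DataFrame(cellType = c("wt", "wt"),
                perturbation = c("none", "drug"))

se <- SummarizedExperiment(assays = list(counts = counts),
                           rowRanges = rr, colData = cd)
assay(se, "counts")["geneC", "treated"]  # 40
```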
https://numberdb.org/advanced-search?expression=RIF(10,11)
NumberDB (beta): advanced search. Only the first 100 results are shown.

Search tips: the advanced searchbar can be used to search for:

- Lists of real numbers: Use Sage syntax to enter a number, a list of numbers, or a dictionary of the form {param: number}. The lists and dictionaries may be nested. The numbers might be given via formulas involving standard functions such as sin and sqrt, as well as standard constants such as e and pi.
- Lists of complex numbers: Similarly one can search for complex numbers. Recommended parents are CIF (complex interval field), CBF (complex ball field), and SR (symbolic ring), although CC (complex field via floats) should work as well unless too much precision is lost during the computation.
- Lists of p-adic numbers: Similarly one can search for p-adic numbers in $\mathbb{Z}_p$ and $\mathbb{Q}_p$.
- Lists of polynomials over $\mathbb{Q}$: Similarly one can search for multivariate polynomials over $\mathbb{Q}$. As only expressions are accepted (mainly for safety reasons), we can define the variables as in the following example:

| Number | Table |
| --- | --- |
| 10 | Integers (#10) |
| 10.24377030416656? | Zeros of Dirichlet L-series (#4,3,2) |
| 10.73611998749340? | Zeros of Dirichlet L-series (#7,2,3) |
| 10.80658816386172? | Zeros of Dirichlet L-series (#8,5,3) |
| 10.01655042256534? | Zeros of Dirichlet L-series (#9,7,3) |
| 10.919366913981313? | Zeros of Dirichlet L-series (#11,3,4) |
| 10.45005363820224? | Zeros of Dirichlet L-series (#11,7,4) |
| 10.108337357392796? | Zeros of Dirichlet L-series (#11,10,4) |
| 10.50587485607869? | Values of the Gamma function at rational numbers (#1/11) |
| 10.00952340921334? | Values of the Gamma function at rational numbers (#2/21) |
| 10.13610185115514? | Values of the Gamma function at rational numbers (#22/5) |
| 10.70248769061425? | Values of the Gamma function at rational numbers (#-12/11) |
| 10.208355672757599? | Values of the Gamma function at rational numbers (#-23/21) |
| 10.68732706902200? | Values of the Riemann zeta function at rational numbers (#-37/2) |
| 10.58444846495081? | Values of the Riemann zeta function at rational numbers (#11/10) |
| 10.08482641708041? | Values of the Riemann zeta function at rational numbers (#21/19) |
| 10.17346813506273? | Zeros of Bessel functions of the first kind $J_\alpha$ (#1,3) |
| 10.90412165942890? | Zeros of Bessel functions of the first kind $J_\alpha$ (#3/2,3) |
| 10.41711854737937? | Zeros of Bessel functions of the first kind $J_\alpha$ (#7/2,2) |
| 10.51283540809400? | Zeros of Bessel functions of the first kind $J_\alpha$ (#13/2,1) |
| 10.222345043496417? | Zeros of Bessel functions of the second kind $Y_\alpha$ (#0,4) |
| 10.02347797936004? | Zeros of Bessel functions of the second kind $Y_\alpha$ (#2,3) |
| 10.597176726782032? | Zeros of Bessel functions of the second kind $Y_\alpha$ (#5,2) |
| 10.99557428756428? | Zeros of Bessel functions of the second kind $Y_\alpha$ (#1/2,4) |
| 10.71564737579152? | Zeros of Bessel functions of the second kind $Y_\alpha$ (#5/2,3) |
| 10.529989417459217? | Zeros of Bessel functions of the second kind $Y_\alpha$ (#17/2,1) |
| 10.17346813506273? | Local extrema of Bessel functions of the first kind $J_\alpha$ (#0,4) |
| 10.51986087377231? | Local extrema of Bessel functions of the first kind $J_\alpha$ (#5,2) |
| 10.71143397069995? | Local extrema of Bessel functions of the first kind $J_\alpha$ (#9,1) |
| 10.94994364854116? | Local extrema of Bessel functions of the first kind $J_\alpha$ (#1/2,4) |
| 10.66356139048201? | Local extrema of Bessel functions of the first kind $J_\alpha$ (#5/2,3) |
| 10.180054255143447? | Local extrema of Bessel functions of the first kind $J_\alpha$ (#17/2,1) |
| 10.12340465543662? | Local extrema of Bessel functions of the second kind $Y_\alpha$ (#1,3) |
| 10.96515210524298? | Local extrema of Bessel functions of the second kind $Y_\alpha$ (#7,1) |
| 10.85653060352760? | Local extrema of Bessel functions of the second kind $Y_\alpha$ (#3/2,3) |
| 10.35637345659091? | Local extrema of Bessel functions of the second kind $Y_\alpha$ (#7/2,2) |
| 10.39162063100188? | Local extrema of Bessel functions of the second kind $Y_\alpha$ (#13/2,1) |
| 10.99557428756428? | Rational multiples of pi (#7/2) |
| 10.47197551196598? | Rational multiples of pi (#10/3) |
| 10.21017612416683? | Rational multiples of pi (#13/4) |
| 10.05309649148734? | Rational multiples of pi (#16/5) |
| 10.68141502220530? | Rational multiples of pi (#17/5) |
| 10.32237586179504? | Rational multiples of pi (#23/7) |
| 10.771174812307863? | Rational multiples of pi (#24/7) |
| 10.49864813493488? | Keiper-Li coefficients (#22) |
| 10.09995831280678? | Values of the arithmetic-geometric mean (#1,31) |
| 10.35767170909892? | Values of the arithmetic-geometric mean (#1,32) |
| 10.61415429116742? | Values of the arithmetic-geometric mean (#1,33) |
| 10.869453430598770? | Values of the arithmetic-geometric mean (#1,34) |
| 10.02627900405723? | Values of the arithmetic-geometric mean (#2,25) |
| 10.32472056339295? | Values of the arithmetic-geometric mean (#2,26) |
| 10.62120699612754? | Values of the arithmetic-geometric mean (#2,27) |
| 10.91583125167801? | Values of the arithmetic-geometric mean (#2,28) |
| 10.19428452104371? | Values of the arithmetic-geometric mean (#3,22) |
| 10.522067508984402? | Values of the arithmetic-geometric mean (#3,23) |
| 10.84726853279209? | Values of the arithmetic-geometric mean (#3,24) |
| 10.060754856642851? | Values of the arithmetic-geometric mean (#4,19) |
| 10.41603276212377? | Values of the arithmetic-geometric mean (#4,20) |
| 10.76792896589407? | Values of the arithmetic-geometric mean (#4,21) |
| 10.09012702766991? | Values of the arithmetic-geometric mean (#5,17) |
| 10.46920753262055? | Values of the arithmetic-geometric mean (#5,18) |
| 10.84411659878340? | Values of the arithmetic-geometric mean (#5,19) |
| 10.39028624247301? | Values of the arithmetic-geometric mean (#6,16) |
| 10.78838664594748? | Values of the arithmetic-geometric mean (#6,17) |
| 10.19753721732835? | Values of the arithmetic-geometric mean (#7,14) |
| 10.62013781043853? | Values of the arithmetic-geometric mean (#7,15) |
| 10.34846881834219? | Values of the arithmetic-geometric mean (#8,13) |
| 10.79049543550088? | Values of the arithmetic-geometric mean (#8,14) |
| 10.44608302907872? | Values of the arithmetic-geometric mean (#9,12) |
| 10.908134304824416? | Values of the arithmetic-geometric mean (#9,13) |
| 10.49404339582215? | Values of the arithmetic-geometric mean (#10,11) |
| 10.97721376252386? | Values of the arithmetic-geometric mean (#10,12) |
| 10.023669915416444? | Complete elliptic integral of the third kind $\Pi(n,m)$ (#8/9,8/9) |
| 10.350912070776018? | Complete elliptic integral of the third kind $\Pi(n,m)$ (#8/9,9/10) |
| 10.30295246527556? | Complete elliptic integral of the third kind $\Pi(n,m)$ (#9/10,7/8) |
| 10.692058029590620? | Complete elliptic integral of the third kind $\Pi(n,m)$ (#9/10,8/9) |
| 11 | $abc$-triples of high quality (#1.428323,a) |
| 10.554035292536633? | Regulators of elliptic curves over $\mathbb{Q}$ of rank $1$ (#441,16596153,67610032371) |
| 10.160349459213232? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#4806,8001,-599265) |
| 10.21573980662838? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5391,-1287,-425709) |
| 10.24617179144742? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5678,433,-50921) |
| 10.58695751886449? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5715,-25911,5132187) |
| 10.59684471738088? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5741,249,-2349) |
| 10.26174114339238? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5794,66913,-17311121) |
| 10.319601125763012? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5834,7633,-338201) |
| 10.27089481740000? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5862,24817,-3845321) |
| 10.31404428290508? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#5982,1297,-100601) |
| 10.26617852223285? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6162,4417,-307169) |
| 10.24463017204407? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6193,-2375,29107) |
| 10.25170921561049? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6307,366745,-222096797) |
| 10.52461267088438? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6334,817,-19241) |
| 10.53183644967465? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6378,7057,-620729) |
| 10.81762449126265? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6486,368353,-219860945) |
| 10.78290939622088? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6538,744577,-642228929) |
| 10.85046797118253? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6566,826490497,-23760566813249) |
| 10.522405823808060? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6598,-1103,-67049) |
| 10.85311046392770? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6726,124033,-43806401) |
| 10.49916722044227? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6751,1129,-37781) |
| 10.69492043358623? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6766,3697,-196361) |
| 10.59135421560168? | Special $L$-value of elliptic curves over $\mathbb{Q}$ of rank $2$ (#6782,25297,-4023449) |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-7-exponents-and-exponential-functions-7-1-zero-and-negative-exponents-practice-and-problem-solving-exercises-page-421/26
## Algebra 1: Common Core (15th Edition)

$\frac{1}{k^{4}}$

Recall the rule for negative exponents: $a^{-n}= \frac{1}{a^{n}}$. Thus, we simplify this expression: $j^{0} \times k^{-4}= \frac{j^{0}}{k^{4}}$. Recall that any nonzero quantity raised to the zeroth power is 1. Thus, this expression simplifies to: $\frac{1}{k^{4}}$
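As a quick numerical check, substitute the illustrative values $j=2$ and $k=3$ (chosen here for verification; they are not part of the exercise):

$j^{0} \times k^{-4} = 2^{0} \times 3^{-4} = 1 \times \frac{1}{81} = \frac{1}{81}$

which agrees with $\frac{1}{k^{4}} = \frac{1}{3^{4}} = \frac{1}{81}$.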
https://codepainters.wordpress.com/tag/s60/
## Symbian: separate builds for different S60 editions

Symbian's build environment was never really my favorite one. Recently I was working on a project using different APIs on S60 3rd, 3rd FP1, 3rd FP2 and 5th editions. A typical Symbian project contains a single `bld.inf` file and a corresponding `.mmp` file, so it is not immediately obvious how to compile conditionally, based on the exact S60 release you're targeting. I've found a solution at Forum Nokia Wiki, but it requires adding a custom file to the SDK, and I'm not really happy about that. However, a closer look at the build process reveals that `bld.inf` is preprocessed using the following call:

```
cpp.EXE -undef -nostdinc -+ -I "C:\Symbian\9.2\S60_3rd_FP1\epoc32\include" -I . -I "C:\Users\czajnik\work\test\TestApp\group\" -I "C:\Symbian\9.2\S60_3rd_FP1\epoc32\include\variant" -include "C:\Symbian\9.2\S60_3rd_FP1\epoc32\include\variant\Symbian_OS_v9.2.hrh" "C:\Users\czajnik\work\test\TestApp\group\BLD.INF"
```

The most important observation here is the inclusion of `Symbian_OS_v9.2.hrh`. Every SDK release comes with such a file, containing a bunch of `#define` macros denoting features available on a particular platform. With a little help from `grep`, `sort`, `sed` and `diff` I've eventually got the following `bld.inf` file:

```
PRJ_PLATFORMS
WINSCW ARMV5 GCCE

PRJ_MMPFILES
gnumakefile Icons_scalable_dc.mk

#ifdef SYMBIAN_C32_SERCOMMS_V2
TestApp_S60_5th.mmp
#else
#ifdef SYMBIAN_APPARC_APPINFO_CACHE
TestApp_S60_3rd_FP2.mmp
#else
#ifdef SYMBIAN_FLEXIBLE_ALARM
TestApp_S60_3rd_FP1.mmp
#else
TestApp_S60_3rd.mmp
#endif
#endif
#endif
```

Works like a charm 🙂
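The macro-hunting step (the `grep`/`sort`/`diff` pass over each SDK's `.hrh` file) can be sketched in a few lines of Python. The file contents below are hypothetical stand-ins for two SDKs' feature files; the idea is just to diff the sets of defined macros and pick one unique to each release:

```python
import re

def hrh_macros(text):
    """Collect the macro names from the #define lines of an .hrh file's text."""
    return set(re.findall(r'^\s*#define\s+(\w+)', text, flags=re.MULTILINE))

# Hypothetical contents of two SDK feature files.
fp1_hrh = "#define SYMBIAN_FLEXIBLE_ALARM\n#define SYMBIAN_COMMON\n"
fp2_hrh = "#define SYMBIAN_APPARC_APPINFO_CACHE\n#define SYMBIAN_COMMON\n"

# Macros present only in FP2 -- candidates for an #ifdef that detects FP2.
only_fp2 = hrh_macros(fp2_hrh) - hrh_macros(fp1_hrh)
print(sorted(only_fp2))  # ['SYMBIAN_APPARC_APPINFO_CACHE']
```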
2018-12-15 09:49:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8370550274848938, "perplexity": 2045.830010041542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826842.56/warc/CC-MAIN-20181215083318-20181215105318-00483.warc.gz"}
https://math.stackexchange.com/questions/2613220/possible-typos-in-the-proof-of-recursion-theorem
# Possible typos in the proof of the Recursion Theorem

Below is the proof I took from the textbook *A Course in Mathematical Analysis* by Prof. D. J. H. Garling. I think there are some typos, which I highlighted in the attached picture.

1. $g(s(n))= f(n)$

Since the theorem states $a_{s(n)}=f(a_{n})$, I think the right version should be $g(s(n))=f(g(n))$, not $g(s(n))= f(n)$.

2. $(n,a)\in P$ and $(n,a')\in P$

I think the right version should be $(n,a)\in g$ and $(n,a')\in g$.

3. $(s(m),f(b))\in g$

Since the author is trying to prove $g'\in S$, I think the right version should be $(s(m),f(b))\in g'$, not $(s(m),f(b))\in g$.

4. $(s(m),f(b))\notin g$

As in 3., I think the right version should be $(s(m),f(b))\notin g'$, not $(s(m),f(b))\notin g$.

• For (1) don't you want $g(s(n))=f(g(n))$? Jan 20, 2018 at 10:28
• Yes, we use the mapping $g$, not $a$. I agree with you, it should be $g(s(n))=f(g(n))$ ^^ Jan 20, 2018 at 10:44

Ad 1. Indeed, $n\in P$, not $n\in A$, hence $f(n)$ would not even make sense. But the correct formula should be $$g(s(n))=f(g(n)).$$ It is only after finding this nice $g$ that we can set $a_n=g(n)$ and have our recursively defined sequence.

Ad 2. Yes. Again, $\in P$ would not even make sense. If you follow the argument, you will notice that $\in g$ is used and shown, so at least the typo is no show-stopper for the proof.

Ad 3 and 4. No, and then yes. That step of the argument should read: Then $(s(m),f(b))\in g$. Thus if $(s(m),f(b))\notin g'$ then $(s(m),f(b))=(s(n),a')$.

There seem to be more typos, e.g., there is a superfluous $S$ in what should read: Since $(0,\bar a)\in R$ for all $R\in S$, $(0,\bar a)\in g$.

• I think the author is trying to prove $g'\in S$. As a result, he assumes $(m,b)\in g'$, then he proves $(s(m),f(b))\in g'$ too. This is why I think it should be "Then $(s(m),f(b))\in g'$". And to prove $(s(m),f(b))\in g'$, he assumes the contrary, i.e. $(s(m),f(b))\notin g'$. 
If "Then $(s(m),f(b))\in g$" is not a typo, I don't know what the role of "Then $(s(m),f(b))\in g$" is? Jan 20, 2018 at 11:03
• Hi @AsafKaragila, can you have a check on my above reasoning? Jan 21, 2018 at 1:53

3 is not a typo, and 4 should read: "Thus if $(s(m),f(b))\notin g'$..." The author is indeed proving that $g'\in S$. This is done by induction. From $g'=g-\left\{ \left(s\left(n\right),a'\right)\right\}$ and $\left(0,\overline{a}\right)\in g$ it follows that $\left(0,\overline{a}\right)\in g'$. Then it remains to prove that $\left(m,b\right)\in g'\implies\left(s\left(m\right),f\left(b\right)\right)\in g'$. Suppose that this is not true for some $m\in P$, so that $\left(m,b\right)\in g'\wedge\left(s\left(m\right),f\left(b\right)\right)\notin g'$. This route is taken by the author, but unfortunately a typo was made ($g$ instead of $g'$). From $\left(m,b\right)\in g'$ it follows that $\left(m,b\right)\in g$, so that $\left(s\left(m\right),f\left(b\right)\right)\in g$. Then from $\left(s\left(m\right),f\left(b\right)\right)\notin g'$ it follows that $\left(s\left(m\right),f\left(b\right)\right)=\left(s\left(n\right),a'\right)$, and consequently $m=n$ and $f\left(b\right)=a'$. However, it was assumed that $n\in U$ and that for a unique $a\in A$ we had $\left(n,a\right)\in g$. So $\left(n,b\right)=\left(m,b\right)\in g$ tells us that $b=a$. Then $a'=f\left(b\right)=f\left(a\right)$ and a contradiction is found. This proves that $g'\in S$. [This leads to a contradiction again, which allows the conclusion that $n\in U\implies s(n)\in U$.]

• I'm now clear, many thanks @drhab! Jan 21, 2018 at 14:47
• You are welcome. Jan 21, 2018 at 14:47
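The object the theorem constructs — the graph $g$ of the sequence, i.e. the set of pairs $(n,a_n)$ with $a_0=\bar a$ and $a_{s(n)}=f(a_n)$ — can be illustrated concretely. A sketch in Python, with $f(x)=2x$ and $\bar a=1$ as arbitrary choices:

```python
def sequence_graph(f, a_bar, n_max):
    """Build the set of pairs (n, a_n) with a_0 = a_bar and a_{n+1} = f(a_n),
    i.e. (an initial segment of) the graph g from the recursion theorem."""
    g = {(0, a_bar)}
    a = a_bar
    for n in range(n_max):
        a = f(a)            # a_{s(n)} = f(a_n)
        g.add((n + 1, a))
    return g

g = sequence_graph(lambda x: 2 * x, 1, 4)
print(sorted(g))  # [(0, 1), (1, 2), (2, 4), (3, 8), (4, 16)]
```

Uniqueness of the pair $(n,a_n)$ for each $n$ — exactly what the proof's induction on $U$ establishes — is what makes this set the graph of a function.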
2023-02-04 17:47:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9725145101547241, "perplexity": 347.0659756954306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00860.warc.gz"}
http://mathhelpforum.com/differential-geometry/159609-help-choosing-branch-cuts.html
## Help choosing branch cuts

Q: Define a single-valued branch of the function $f(z)=z^{z}$ on an open set $U$ in $\mathbb{C}$, and show $f$ is analytic on $U$.

Here is my work: $f(z)=z^{z}=e^{z\log(z)}$. Let $\log(z)$ range over the principal branch. Thus, $\log(z)$ is holomorphic on $\mathbb{C}-\{x+iy : y=0,\ x\leq 0\}$. Now, if I let $w(z)=\log(z)$ we have $f(z)=e^{zw(z)}$. Now, $e^{\text{anything}}$ is defined on all of $\mathbb{C}$; to make sure it's bijective, though, we need to restrict the domain to a $2\pi$-period strip. So, let $e^{zw}$ be defined on $A=\{x+iy : y_{0}<y<y_{0}+2\pi\}$. Now, I am not sure if I am going in the right direction, because I am stuck. I am not sure if I need to consider a period strip like I said above; I was also thinking I may just have to consider $z\log(z)$ on its own. Since this is multiplication by a complex number, I am stretching and rotating whatever $\log(z)$ is, so when $z$ is positive, I am moving faster around, I would think. I am not sure how to handle this, though. I have a hard time visualizing this stuff; any help would be appreciated. Thank you
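As a concrete check, the branch built from the principal logarithm can be evaluated numerically (a sketch in Python; `cmath.log` is the principal branch, with its cut along the negative real axis, matching the domain above):

```python
import cmath

def zz(z):
    """Principal branch of z^z, i.e. exp(z * Log z) with Log the principal log."""
    return cmath.exp(z * cmath.log(z))

# On the positive real axis this agrees with the real function x^x: zz(2) ~ 4.
print(zz(2))

# A classic value: i^i = exp(i * Log i) = exp(i * i*pi/2) = exp(-pi/2), a real number.
print(zz(1j))
```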
2016-08-27 21:07:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9411829710006714, "perplexity": 215.5084115391149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982925602.40/warc/CC-MAIN-20160823200845-00275-ip-10-153-172-175.ec2.internal.warc.gz"}
http://www.math.gatech.edu/seminars-colloquia/series/combinatorics-seminar/hehui-wu-20110225
## Longest Cycles in Graphs with Given Independence Number and Connectivity

Series: Combinatorics Seminar
Friday, February 25, 2011 - 15:05
1 hour (actually 50 minutes)
Location: Skiles 006
, University of Illinois at Urbana-Champaign
Organizer:

The Chvátal–Erdős Theorem states that every graph whose connectivity is at least its independence number has a spanning cycle. In 1976, Fouquet and Jolivet conjectured an extension: If $G$ is an $n$-vertex $k$-connected graph with independence number $a$, and $a \ge k$, then $G$ has a cycle of length at least $\frac{k(n+a-k)}{a}$. We prove this conjecture. This is joint work with Suil O and Douglas B. West.
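The conjectured bound has a simple sanity check: when $a = k$ (the Chvátal–Erdős hypothesis) it equals $n$, i.e. it guarantees a spanning cycle. A small sketch:

```python
from fractions import Fraction

def cycle_length_bound(n, k, a):
    """Fouquet-Jolivet lower bound k(n + a - k)/a on the longest cycle
    in an n-vertex k-connected graph with independence number a (a >= k)."""
    return Fraction(k * (n + a - k), a)

# When a == k the bound is n: the Chvatal-Erdos spanning-cycle case.
print(cycle_length_bound(10, 3, 3))  # 10

# A larger independence number weakens the guarantee: 3*(10+5-3)/5 = 36/5.
print(cycle_length_bound(10, 3, 5))  # 36/5
```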
2018-01-21 02:41:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7616778612136841, "perplexity": 1913.7999849446335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889917.49/warc/CC-MAIN-20180121021136-20180121041136-00128.warc.gz"}