https://docs.intelligence-modding.de/peripherals/colony_integrator/
# Colony Integrator

The colony integrator can interact with a colony from MineColonies.

**Requirement:** Requires the MineColonies mod to be installed.

| Peripheral Name | Interfaces with | Has events | Introduced in |
| --- | --- | --- | --- |
| colonyIntegrator | Mine Colony | No | 0.7r |

## Functions

### getCitizens

`getCitizens() -> table`

Returns a list of information about every citizen in the colony.

#### Citizen Properties

| Property | Description |
| --- | --- |
| id: string | The citizen's id |
| name: string | The citizen's name |
| age: string | The age of the citizen, either "child" or "adult" |
| gender: string | The citizen's gender, either "male" or "female" |
| location: table | The current location of the citizen (has x, y, z) |
| bedPos: table | The position of the citizen's bed (has x, y, z) |
| saturation: number | The citizen's food saturation |
| happiness: number | An indicator of how happy the citizen is |
| health: number? | The citizen's current health |
| maxHealth: number? | The citizen's max health |
| armor: number? | The citizen's current number of armor points |
| toughness: number? | The citizen's armor toughness |
| betterFood: boolean | Whether the citizen needs better food |
| isAsleep: boolean | If the citizen is currently asleep |
| isIdle: boolean | If the citizen is currently idle |
| state: string | A string representing the citizen's current state |
| children: table | A list of the ids of this citizen's children |
| skills: table | A table of skill names to skills, where each skill has a level and xp number |
| work: table? | A table of info about the citizen's job |
| home: table? | A table of info about the citizen's house |

#### Work Properties

| Property | Description |
| --- | --- |
| name: string | The name of the job |
| location: table | The work location (has x, y, z) |
| type: string | The type of job |
| level: number | The citizen's current job level |

#### Home Properties

| Property | Description |
| --- | --- |
| location: table | The home location (has x, y, z) |
| type: string | The building type |
| level: number | The building's level |

### getVisitors

`getVisitors() -> table`

Returns a list of information about all of the visitors in your colony's tavern. This information is the same as the citizen table, but with an additional recruitCost string value.

### getBuildings

`getBuildings() -> table`

Returns a list of details about every building in the colony.

#### Building Properties

| Property | Description |
| --- | --- |
| name: string | The name of the building |
| location: table | The building's location (has x, y, z) |
| type: string | The building type |
| level: number | The building's level |
| maxLevel: number | The building's max level |
| style: string | The building's style |
| storageBlocks: number | The number of storage blocks in the building |
| storageSlots: number | The number of storage slots |
| guarded: boolean | If the building is currently being guarded |
| built: boolean | If the building is built or is under construction |
| isWorkingOn: boolean | Whether the building is being worked on |
| priority: number | The building's construction priority |
| structure: table | A table defining the bounds of the structure |
| citizens: table | A list of citizens' names and ids |

#### Structure Properties

| Property | Description |
| --- | --- |
| cornerA: table | The first corner of the bounds (has x, y, z) |
| cornerB: table | The second corner of the bounds (has x, y, z) |
| rotation: number | The structure's rotation |
| mirror: boolean | If the structure has been mirrored |
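Putting the citizen tables above to use, here is a minimal sketch (untested, assuming a colony integrator peripheral is attached and the computer stands inside a colony) that lists each citizen and their job:

```lua
-- Minimal sketch: print each citizen's name, age group and job.
-- Assumes an attached colony integrator and an active colony.
local integrator = peripheral.find("colonyIntegrator")

for _, citizen in ipairs(integrator.getCitizens()) do
    -- work is optional (table?), so fall back when it is absent
    local job = citizen.work and citizen.work.name or "unemployed"
    print(citizen.name .. " (" .. citizen.age .. "): " .. job)
end
```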
### getResearch

`getResearch() -> table`

Returns a table of all possible colony research as a tree, where the root table contains the branch names and their respective research.

#### Properties

| Property | Description |
| --- | --- |
| id: string | The research id |
| name: string | The name of the research |
| status: number | The current research status |
| researchEffects: table | A list of effects provided by the research |
| children: table? | A list of any child research |

### getRequests

`getRequests() -> table`

Returns a list of the colony's current requests.

#### Request Properties

| Property | Description |
| --- | --- |
| id: string | The request's id |
| name: string | The name of the request |
| desc: string | A description of the request |
| state: string | The state of the request |
| count: number | The number of items requested |
| minCount: number | The minimum count needed for the request |
| target: string | The request's target |
| items: table | A list of items attached to the request |

#### Item Properties

| Property | Description |
| --- | --- |
| name: string | The registry name of the item |
| count: number | The amount of the item |
| maxStackSize: number | Maximum stack size for the item type |
| displayName: string | The item's display name |
| tags: table | A list of item tags |
| nbt: table | The item's nbt data |

### getWorkOrders

`getWorkOrders() -> table`

Returns a list of active work orders in the colony.

#### Properties

| Property | Description |
| --- | --- |
| id: string | The work order's id |
| priority: number | The priority of the work order |
| workOrderType: string | The type of work order |
| changed: boolean | If the work order changed |
| isClaimed: boolean | Whether the work order has been claimed |
| builder: table | The position of the builder (has x, y, z) |
| buildingName: string | The name of the building |
| type: string | The type of the building |
| targetLevel: number | The building's target level |

### getWorkOrderResources

`getWorkOrderResources(workOrderId: number) -> table | nil`

Returns a list of all of the required resources for a work order, or nil if the work order does not exist.

#### Properties

| Property | Description |
| --- | --- |
| item: string | The registry name for the item |
| displayName: string | The display name for the item |
| status: string | The status of this resource |
| needed: number | How much of the resource is needed for the job |
| available: boolean | If the resource is currently available |
| delivering: boolean | If the resource is currently being delivered |

### getBuilderResources

`getBuilderResources(position: table) -> table | nil`

Returns the resources required by the given builder's hut. The position table must contain:

- x: number
- y: number
- z: number

### getColonyID

`getColonyID() -> number`

Returns the id of the colony.

### getColonyName

`getColonyName() -> string`

Returns the name of the colony.

### getColonyStyle

`getColonyStyle() -> string`

Returns the style of the colony.

### getLocation

`getLocation() -> table`

Returns the position of the townhall.

#### Properties

| Property | Description |
| --- | --- |
| x: number | The x coordinate |
| y: number | The y coordinate |
| z: number | The z coordinate |

### getHappiness

`getHappiness() -> number`

Returns the overall happiness of the colony.

### isActive

`isActive() -> boolean`

Returns true if the colony is active. This is true when trusted players are online.

### isUnderAttack

`isUnderAttack() -> boolean`

Returns true if the colony is currently under attack.

### isInColony

`isInColony() -> boolean`

Returns true if the block is in a colony.

```lua
local integrator = peripheral.find("colonyIntegrator")

if integrator.isInColony() then
    print("Block is inside a colony!")
else
    print("Not in a colony!")
end
```

### isWithin

`isWithin(position: table) -> boolean`

Returns true if the given coordinates are in a colony. The position table must contain:

- x: number
- y: number
- z: number
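The position-table functions all take the same shape of argument; here is a hedged sketch combining isWithin and getBuilderResources (the coordinates are hypothetical placeholders):

```lua
-- Sketch: check a block position and, if it lies inside the colony,
-- list what the builder's hut there still needs.
-- The coordinates below are hypothetical placeholders.
local integrator = peripheral.find("colonyIntegrator")
local pos = { x = 100, y = 64, z = -200 }

if integrator.isWithin(pos) then
    local resources = integrator.getBuilderResources(pos)
    if resources then
        for _, res in ipairs(resources) do
            print(res.displayName .. ": " .. res.needed .. " needed")
        end
    end
end
```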
### amountOfCitizens

`amountOfCitizens() -> number`

Returns the number of citizens in the colony.

### maxOfCitizens

`maxOfCitizens() -> number`

Returns the maximum number of citizens the colony can currently hold.

### amountOfGraves

`amountOfGraves() -> number`

Returns the current number of graves.

### amountOfConstructionSites

`amountOfConstructionSites() -> number`

Returns the current number of active construction sites.

## Examples

### Citizen Monitor

We made a script to show every citizen and their gender on a monitor. You can view and download the script on Github.

### Colony Stats

And here we have a script made for a pocket computer to show statistics about a colony. You can view and download the script on Github.

## Changelog/Trivia

0.7r: Added the colony integrator.
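In the spirit of the Colony Stats example, a minimal census sketch (untested) built from the counting functions above:

```lua
-- Census sketch: a few one-line colony statistics.
-- Assumes an attached colony integrator and an active colony.
local integrator = peripheral.find("colonyIntegrator")

print("Colony: " .. integrator.getColonyName())
print("Happiness: " .. integrator.getHappiness())
print("Citizens: " .. integrator.amountOfCitizens()
        .. " / " .. integrator.maxOfCitizens())
print("Construction sites: " .. integrator.amountOfConstructionSites())
```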
https://web2.0calc.com/questions/help-pls_38664
# HELP pls

Let $$f(x) = \begin{cases} k(x) &\text{if }x>0, \\ -\frac1{2x}&\text{if }x< 0,\\ 0&\text{if }x=0. \end{cases}$$

Find the function $$k(x)$$ such that $$f(x)$$ is its own inverse function.

Aug 21, 2022
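A sketch of one way to find $$k$$ (my own working, not from the original thread): $$f$$ is its own inverse exactly when $$f(f(x))=x$$ for all $$x$$. Apply this to $$x<0$$, where $$f(x)=-\frac{1}{2x}>0$$:

$$f(f(x))=k\!\left(-\frac{1}{2x}\right)=x.$$

Setting $$y=-\frac{1}{2x}>0$$, so that $$x=-\frac{1}{2y}$$, gives $$k(y)=-\frac{1}{2y}$$, i.e. $$k(x)=-\frac{1}{2x}$$ for $$x>0$$. One can check directly that with this choice $$f(f(x))=x$$ in all three cases.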
http://limefriends.com/healthy-avocado-aib/two-wavelength-of-sodium-light-14ff9e
# Two Wavelengths of Sodium Light

The sodium spectrum is dominated by the bright doublet known as the sodium D-lines, at 588.9950 nm and 589.5924 nm. These lines, designated the D2 and D1 Fraunhofer lines, have wavelengths of approximately 589.0 nm and 589.6 nm; the energy level diagram shows that they are emitted in a transition from the 3p to the 3s levels, and the weak splitting of the yellow line arises from the quantum spin of the sodium atom's valence electron (its spin-orbit interaction, i.e. fine structure). Because the separation is only about 0.6 nm, sodium light is treated as monochromatic for most purposes, yet the two lines are easily differentiated on a diffraction grating. The D-lines are also used to measure Doppler shifts, and since street lighting is dominated by sodium lamps, observatories such as Lick Observatory above San Jose can remove that light pollution with a narrow-band filter.

## Worked questions

**Wavelength from frequency.** Calculate the wavelength of the yellow light emitted by a sodium lamp if the frequency is 5.10 x 10^14 Hz. Using wavelength = speed of light / frequency: λ = c/ν = (3.00 x 10^8 m/s)/(5.10 x 10^14 Hz) ≈ 5.9 x 10^-7 m. (The asker's answer of 5.9 x 10^-5 is the value in centimetres; in metres it is 5.9 x 10^-7.)

**Period of green light.** Green light has a wavelength of 4.96 x 10^-7 m and travels through the air at a speed of 3.00 x 10^8 m/s. Its period is T = λ/v = (4.96 x 10^-7 m)/(3.00 x 10^8 m/s) ≈ 1.65 x 10^-15 s.

**Single-slit diffraction with the doublet.** Two wavelengths of sodium light, 590 nm and 596 nm, are used in turn to study the diffraction taking place at a single slit of aperture 2 x 10^-6 m, with the screen 1.5 m from the slit. (The page quotes the aperture inconsistently, also as 4 mm, 2 x 10^-4 m and 1 x 10^-4 m; the value used here is the one consistent with the worked answer.) The first secondary maximum lies at x = 3λD/2a, so the separation between the positions of the first maxima of the diffraction patterns in the two cases is Δx = (3D/2a)(λ2 - λ1) = (3 x 1.5 m)/(2 x 2 x 10^-6 m) x (6 x 10^-9 m) ≈ 6.75 x 10^-3 m.

**Slit width from the first minimum.** A parallel beam of light of wavelength 500 nm falls on a narrow slit, and the resulting diffraction pattern is observed on a screen 1 m away. The first minimum is at a distance of 2.5 mm from the centre of the screen. Using x = nλD/d with n = 1: d = λD/x = (5 x 10^-7 m x 1 m)/(2.5 x 10^-3 m) = 2 x 10^-4 m = 0.2 mm, the required width of the slit.

**Sodium light entering water.** Monochromatic light of wavelength 589 nm is incident from air on a water surface (refractive index of water is 1.33). For the reflected light, nothing changes: λ = 589 nm, ν = c/λ = 5.09 x 10^14 Hz, v = 3 x 10^8 m/s. For the refracted light the frequency is unchanged at ν = 5.09 x 10^14 Hz, while the speed falls to v' = c/μ = 2.25 x 10^8 m/s and the wavelength shortens to λ' = λ/μ = 4.42 x 10^-7 m; the speed of the light decreases when it enters the denser medium.

**Huygens' construction for a plane mirror.** Huygens' principle leads to the laws of reflection and refraction, and the same principle shows directly that a point object placed in front of a plane mirror produces a virtual image whose distance from the mirror equals the object distance. Let O be a point object in front of plane mirror XY at normal distance OP; a spherical wavefront RPQ from O touches the mirror at P. While the disturbances from R and Q continue forward to A and C, the disturbance from P reflects back to B', so the reflected wavefront AB'C appears to start from a point I behind the mirror; I becomes the virtual image of the real point object O. Drawing AN normal to XY, ∠OAN = ∠DAN = θ (angle of incidence equals angle of reflection), hence ∠AOP = ∠AIP. Triangles AOP and AIP are then congruent (∠AOP = ∠AIP, right angles at P, AP common), so PI = PO: the virtual image is formed as far behind the mirror as the object is in front of it.

**Newton's corpuscular theory.** According to Newton's corpuscular theory of light, when corpuscles of light strike an interface XY separating a denser medium from a rarer medium, the component of their velocity along XY is unchanged, so v1 sin i = v2 sin r, where v1 is the speed in the rarer medium (air), v2 the speed in the denser medium (water), i the angle of incidence and r the angle of refraction. Since i > r, this gives v2 > v1, i.e. light should travel faster in water than in air. This prediction is opposite to the experimental result; Huygens' wave theory predicts v2 < v1, which is consistent with experiment.

**Newton's rings.** To determine the wavelength of light from a sodium lamp by Newton's rings, a plano-convex lens L is placed on a flat glass plate G, so that an air film of varying thickness is formed between the curved surface of the lens and the flat surface of the plate. For a ring of radius r where the film thickness is t, a lens of radius of curvature R satisfies R² = (R - t)² + r² = R² - 2tR + t² + r², so r² = 2tR - t² ≈ 2tR for t ≪ R. Measuring the diameter of the central ring and the positions of the rings on the left-hand and right-hand sides then yields λ from the interference condition (2t = mλ for the dark rings). The variation in the clarity of the rings is due to the fact that sodium light is not monochromatic but consists of two wavelengths close to one another.

**Diffraction grating.** The wavelength of a spectral line can be very accurately determined with the help of a diffraction grating and spectrometer; the spectrometer is set with its collimator towards the source of light, and the angle of emergence θ for a wavelength λ in order n is given by d sin θ = nλ. The sodium doublet lines, λ1 = 588.995 nm and λ2 = 589.592 nm, are easily differentiated this way; one reported measurement gives λ1 = (589.01 ± 0.01) nm and λ2 = (589.59 ± 0.02) nm, consistent with the known average of 589.3 nm.

**A double-slit comparison.** In a double-slit experiment, two parallel slits are illuminated first by light of wavelength 400 nm and then by light of unknown wavelength; the fourth-order dark fringe from the known wavelength falls in the same place on the screen as the second-order bright fringe from the unknown wavelength. Counting the m-th dark fringe at path difference (m - 1/2)λ, the fourth dark fringe sits at 3.5 x 400 nm = 1400 nm, and equating this to 2λ' gives λ' = 700 nm.

**Fringe visibility of the doublet.** Sodium light has two wavelengths, 589 nm and 589.6 nm. As the path difference increases, when is the visibility of the fringes a minimum? See the sketch following this section.
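The visibility question above is posed but not answered on the page; here is a hedged sketch of the standard calculation, using the quoted wavelengths (the arithmetic is mine). The two fringe systems wash out when the path difference $$\Delta$$ contains half a fringe more of one wavelength than of the other:

$$\frac{\Delta}{\lambda_1}-\frac{\Delta}{\lambda_2}=\frac{1}{2}\quad\Longrightarrow\quad\Delta=\frac{\lambda_1\lambda_2}{2(\lambda_2-\lambda_1)}\approx\frac{(589.0\,\text{nm})(589.6\,\text{nm})}{2(0.6\,\text{nm})}\approx 2.9\times10^{5}\,\text{nm}\approx 0.29\,\text{mm}.$$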
https://www.themathdoctors.org/limits-recognizing-indeterminate-forms/
# Limits: Recognizing Indeterminate Forms

#### (A new question of the week)

Limits of indeterminate forms like ∞ – ∞ require us first to recognize the form, and then, often, use L’Hôpital’s rule (also called L’Hospital’s rule, as we’ll be seeing it here), or some other method. Today’s question will touch on all stages of this work for three examples, but focus on the beginning.

## Finding the form: How do you know it’s infinity?

Here is Anwar’s question, from mid-July:

Hello doctor. Can I get the form (infinity – infinity) without knowing the graph of secant? Here is my question:

He evidently wants to be certain that the first term approaches negative infinity, so that the form is $$\infty-\infty$$, without having to memorize the graph of the secant function. He has nicely used the graph of the cosine to show the limit of the secant: As x decreases toward $$\frac{\pi}{2}$$, the cosine rises to 0. Finding the form is not the end of the problem, but is his main concern; we’ll finish the work after dealing with this initial stumbling block.

Hi Anwar, I’m not quite sure what you are asking, but if you are asking whether you can say that lim(x→π/2+) sec x = -∞ from knowing that lim(x→π/2+) cos x = 0 and that for π/2 < x < π, cos x < 0, I would say yes. In the interval [π/2, π], sec x = 1/cos x, so the fact that lim(x→π/2+) cos x = 0 means that sec x will be a large negative number, unbounded, for values of x in the second quadrant near π/2, i.e. lim(x→π/2+) sec x = -∞. Then lim(x→π/2+) {sec x – tan x} is a limit of the difference of two very large negative numbers, since tan x < 0 in the second quadrant also. The limit of the difference is an indeterminate form (-∞)-(-∞), which is the same as ∞-∞. But changing the difference sec x – tan x into (1/cos x) - (sin x/cos x) and combining the two terms into (1-sin x)/cos x allows you to use L’Hospital on the resulting indeterminate form 0/0.

The fact, as shown in Anwar’s graph, that the cosine approaches 0 from the negative side is a key part of his work; the fact that the limit is 0 is not enough. It might be wise to write, not $$\displaystyle\frac{1}{0}$$, but $$\displaystyle\frac{1}{0^-}=-\infty$$. There are other ways to determine this (in addition to just knowing the graph of the secant!), but his work shows good understanding. Here are the graphs of $$\sec(x)$$ (red) and $$\tan(x)$$ (blue), showing how both approach $$-\infty$$ as we approach $$\frac{\pi}{2}$$ from the right:

One might assume that the limit is zero because it looks like the curves come together; but as we’ll be seeing, such limits can easily surprise us. Doctor Fenton has also shown the start of the next step, converting from the form $$\infty-\infty$$ to the form $$\frac{0}{0}$$ so that L’Hospital can be used.

Thank you Doctor Fenton for replying. This is powerful information I had forgotten. Yes, you did answer me but I have to ask again to be sure. Is my understanding below right?

This time, rather than use the graph of the cosine, he has used the unit circle, which is very useful for this sort of thinking. As the angle decreases toward $$\frac{\pi}{2}$$, in the second quadrant, the cosine remains negative but approaches 0; so its reciprocal will be negative but becoming infinite, which we describe as approaching $$-\infty$$. But we should keep in mind that it is not really equal to $$-\infty$$; the notation Anwar is using indicates only the form of the limit, and tells us we can’t yet evaluate the limit. That will be the task of L’Hospital (or an alternative).
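In the suggested notation, the conclusion so far compresses to one line (my transcription of the argument above, not a formula from the original post):

$$\lim_{x\to\frac{\pi}{2}^+}\sec(x)=\frac{1}{0^-}=-\infty,\qquad\lim_{x\to\frac{\pi}{2}^+}\tan(x)=\frac{1}{0^-}=-\infty,$$

so the difference has the form $$(-\infty)-(-\infty)=\infty-\infty$$.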
## Applying L’Hospital’s rule

Doctor Rick interjected a question and comment, observing that Anwar had titled the question “Limit Question About l’Hospital Rule”, and this form is not ready for the rule:

Hi, Anwar. While you’re waiting for Doctor Fenton’s response, I have a question for you: What do you intend to do next? So far you have only established that the expression whose limit you want is of the form ∞ – ∞, so it is indeterminate. How will you find its limit? I can think of two ways to solve the problem that do not involve an expression of the form ∞ – ∞.

We’ll see a couple of these later. Doctor Fenton now responded to Anwar’s work:

Yes, your argument agrees with what I was saying. You still cannot use L’Hospital directly on this expression, although you can change it to an equivalent expression to which L’Hospital applies, and as Dr. Rick points out, there are other ways to find the limit without using L’Hospital at all.

Anwar answered each. First, for Doctor Rick:

Hello Doctor Rick, thanks for replying. I’m afraid that I know only one way to solve this limit, which I did by handwriting below; I’m afraid that I do not know a way to solve it which does not involve the form ∞ – ∞.

This is in fact what Doctor Fenton had initially suggested; we can’t ignore the initial form $$\infty-\infty$$, but can change it to another form, $$\frac{0}{0}$$, and then use L’Hospital. After rewriting and confirming the form, which makes it suitable for L’Hospital, he differentiated the numerator and the denominator and found that the form is now $$\frac{0}{1}$$, which implies that the limit is 0. In symbols, his computation amounts to $$\lim_{x\to\frac{\pi}{2}^+}\frac{1-\sin(x)}{\cos(x)}\overset{L’H}=\lim_{x\to\frac{\pi}{2}^+}\frac{-\cos(x)}{-\sin(x)}=\frac{0}{1}=0.$$ Another way to express this would be to say that the new expression, $$\displaystyle\frac{-\cos(x)}{-\sin(x)}$$, is continuous at $$\frac{\pi}{2}$$, so that we can just replace x with $$\frac{\pi}{2}$$ and get the limit. The graph shows that the limit of the difference (green) is in fact 0:

At the end, we’ll be seeing methods that do not require L’Hospital; one of them is suggested by the shape of the green graph.

## Two more problems: Infinity as summary

Then Anwar responded to Doctor Fenton with two new problems:

Hello Doctor Fenton, thanks for replying. The doctor who is teaching me and will make the exam needs me to be very clear about the indeterminate form before solving the question, but this step is my main problem with L’Hospital’s Rule, especially for two-sided limits that do not involve a direction. I know the graphs of the simple functions and how to obtain the limit using the graphs, but I do not know why in some cases in the textbook ln(0) = ∞ and (1/0) = ∞; the textbook does not give an explanation of how it did that. I hope you have the time to see my handwriting here:

Part of his confusion about the second problem may lie in the fact that he graphed $$\frac{1}{x}$$ rather than $$\frac{1}{x^2}$$, which is more relevant; the latter approaches $$+\infty$$ because it is positive on both sides of zero, whereas his graph approaches different limits on each side, so that limit simply “Does Not Exist”. In the first problem, the argument of each log is approaching 0 from above, and that one-sided limit is $$-\infty$$.

Doctor Fenton replied at length, starting with the notation issue:

∞ is not a number, so strictly speaking, a statement “1/0 = ∞” is meaningless. However, it can be a useful summary of a certain situation. You ask whether 1/0 is always ∞ in limits. First of all, 1/0 is only supposed to indicate that we are dealing with a situation that involves a number close to 1 divided by a very small number.
If the small number is always positive, then the quotient will always be a very large positive number, so 1/0 = ∞ just denotes this fact; while if the denominator is always negative, 1/0 would always be a very large negative number, so it would be better to write 1/0 as -∞; and if the denominator could be either a very small positive number or a very small negative number, 1/0 isn’t very useful: it will always be a very large number, but you can’t predict from just this information whether that number is positive or negative. These indeterminate forms arise when you try to use direct substitution to evaluate the limit, just as you evaluate lim(x→2) 2x+3 = 7 by direct substitution. Often, such “direct substitution” gives undefined expressions 0/0 or ∞/∞, which again simply indicates a need for more analysis.

Using the notation Anwar is using, it would be appropriate to write $$1/0^+=+\infty$$ or $$1/0^-=-\infty$$, when possible. The last comment is akin to what is said in Division by Zero and the Derivative, that an indeterminate form like these is “a sign saying ‘bridge out – road closed ahead’, that forces us to take a detour to get to our goal.”

### Problem 1: Another difference of infinities

Looking at the first new problem, he says,

lim(x→1+) [ ln(x³-1) – ln(x⁵-1) ] is written as ∞-∞. If x is a number close to 1 but slightly larger than 1, x³-1 and x⁵-1 are both positive numbers close to 0, so ln(x³-1) is the logarithm of a positive number close to 0, and you know that as y→0 from the right, ln(y) has a very large negative value, which we indicate with a symbolic statement ln(y) ≈ -∞, or in your case, ln(0) = -∞. It would be more accurate to write something like ln(0⁺) = -∞, so it would be better in my opinion to write lim(x→1+) [ ln(x³-1) – ln(x⁵-1) ] = ln(0⁺) – ln(0⁺) = -∞ – (-∞). But this means the same thing as ∞-∞, i.e. the difference of two very large quantities with the same algebraic sign. All that does is to alert you to the fact that a more careful analysis is needed.

So this tells us that we will need to use L’Hospital’s rule, or some other alternative. If the form had turned out to be $$\infty+\infty$$ or $$-\infty-\infty$$, the limit would just be $$\infty$$ or $$-\infty$$. Here is the graph of the two logarithm functions:

This looks much like our first graph; do you think the limit will again be 0? Let’s find out. First, we can simplify: $$\lim_{x\to1^+}\ln\left(x^3-1\right)-\ln\left(x^5-1\right)=\lim_{x\to1^+}\ln\left(\frac{x^3-1}{x^5-1}\right)$$ We’ve transformed the limit into the log of an expression of the form $$\frac{0}{0}$$, so we can apply L’Hospital’s rule to that: $$\lim_{x\to1^+}\frac{x^3-1}{x^5-1}\overset{L’H}=\lim_{x\to1^+}\frac{3x^2}{5x^4}=\lim_{x\to1^+}\frac{3}{5x^2}=\frac{3}{5}$$ So $$\lim_{x\to1^+}\ln\left(x^3-1\right)-\ln\left(x^5-1\right)=\lim_{x\to1^+}\ln\left(\frac{x^3-1}{x^5-1}\right)=\ln\left(\lim_{x\to1^+}\frac{x^3-1}{x^5-1}\right)=\ln\left(\frac{3}{5}\right)\approx-0.5108$$ Here I’ve added the graph of the difference (in green), which agrees: Without the work, though, the graph might suggest it’s exactly -0.5!
We could also solve this without L’Hospital, by simplifying further: $$\lim_{x\to1^+}\ln\left(\frac{x^3-1}{x^5-1}\right)=\lim_{x\to1^+}\ln\left(\frac{(x-1)(x^2+x+1)}{(x-1)(x^4+x^3+x^2+x+1)}\right)=\\\lim_{x\to1^+}\ln\left(\frac{x^2+x+1}{x^4+x^3+x^2+x+1}\right)=\ln\left(\frac{1+1+1}{1+1+1+1+1}\right)=\ln\left(\frac{3}{5}\right)$$

### Problem 2: An infinite power

Moving to the second problem,

Similarly, if you try to use direct substitution for lim(x→0) [cos(x)]^(1/x²), the result is 1^∞, or better yet 1^(+∞), since that just means a number close to 1 raised to a very large positive exponent. ∞ does not represent a specific value, so these are just symbolic statements.

We’ll come back to this after examining the general concepts.

### How indeterminate forms work

When we write an ordinary limit statement, lim(x→a) f(x) = L, we are saying that if we evaluate f(x) for a value of x which is very close to a, the value of f(x) will be very close to L. In the statement lim(x→0+) 1/x = ∞, we are actually saying that “the limit of 1/x as x approaches 0 from the right (i.e. through positive values) does not exist: that is, there is no finite number L such that 1/x will approach L more and more closely as x approaches 0 from the right. However, the behavior of the function is predictable, because as x gets closer to 0 from the right, the values of 1/x become larger and larger positive numbers, which will exceed all bounds.” Ordinary limits indicate predictable behavior. lim(x→2) 2x+3 = 7 means that as I evaluate 2x+3 for values of x getting closer and closer to 2, the values will be closer and closer to 7. So while lim(x→0+) 1/x does not exist, the symbolic statement lim(x→0+) 1/x = ∞ does provide useful information about the behavior of the function (1/x) as x→0+: the function values become larger and larger positive numbers.

When we call a limit “infinity”, we are saying a lot! But we are not saying that the limit is a number called infinity! This is quite different from situations where there is no limit at all:

You can compare this situation to trying to find lim(x→0+) sin(2π/x)/x. In every interval [1/(n+1), 1/n], the numerator sin(2π/x) goes through a complete oscillation: going left from x = 1/n, the function goes up to 1 at x = 1/(n+(1/4)), back down to 0 at x = 1/(n+(1/2)), on down to -1 at x = 1/(n+(3/4)), and back up to 0 at x = 1/(n+1). Meanwhile, the amplitude of the oscillation is steadily increasing, so that the values in this interval can be anywhere between -(n+1) and n. The values of sin(2π/x)/x are completely unpredictable from the knowledge only that “x is a positive number close to 0”.

Here is a graph of this function, showing the interval $$\left[\frac{1}{n+1},\frac{1}{n}\right]$$ for $$n=1$$ in blue: Similar oscillations repeat over and over as we approach zero, getting narrower and taller, so that the function approaches no value (or every value!).

Indeterminate forms arise because there is sort of a “competition” between different parts of a formula. For example, in a limit lim(x→a) f(x)/g(x),

• if f(x) approaches 0 while g(x) approaches a finite value L, the fraction is being driven toward 0 by the small numerator, while
• if f(x) approaches a finite positive limit L while the denominator g(x) approaches 0 (taking only values of one sign, for example, all values of g(x) are positive), then the fraction takes larger and larger positive values.
It is useful to describe this by saying that lim(x→a) f(x)/g(x) = +∞, so that we know that the graph of f(x)/g(x) has a vertical asymptote, and the function goes off to +∞. If f(x) and g(x) both approach 0, the result depends upon the relative sizes of the small quantities:

• if f(x) = x³ and g(x) = x², f(x) “wins” and drives the fraction towards 0, while
• if f(x) = |x| and g(x) = x², g(x) “wins” the competition to approach 0, and drives the fraction to “+∞” (i.e. larger and larger values without any bound on how large they can be).

So the fact that both numerator and denominator approach 0 sets up a competition in which either might “win”. What if the competitors are equal? But if f(x) = 3x² and g(x) = x², then the competition is a stalemate, and the fraction approaches a finite limit, 3. Writing the limit lim(x→a) f(x)/g(x) as 0/0 simply indicates that the numerator and denominator are both approaching 0, and more analysis is needed to determine what happens, i.e. what the relative sizes of the two functions are. The same is true if both f and g have a vertical asymptote: which function becomes relatively larger than the other, or are they comparable (i.e. one function becomes approximately a constant multiple of the other)? Similar situations arise with other expressions, leading to other indeterminate forms. ∞-∞ means that two functions are both becoming unbounded, but have the same sign, so their values at least partly cancel. Does one function’s values dominate the other’s (e.g. x²-x, x-x²), or are they more comparable (e.g. (x²+arctan x) – (x²-arctan x))?

### Back to problem 2

If you have an exponential f(x)^g(x), where f(x)→1 and g(x)→∞, and f(x) > 1, then since 1^y = 1 for any real y, the base approaching 1 tends to drive the exponential to the value 1, but raising a number larger than 1 to a large exponent tends to make the exponential large, so again there is a competition: the base approaching 1 drives the exponential to 1, while the large exponent can drive the exponential to +∞. This situation is indicated by the indeterminate form 1^∞.

Whereas 1 raised to any (finite) power is 1, and any (finite) number greater than 1 raised to a very large power is very large (while a number less than 1 raised to a large power approaches 0), raising 1 to an infinite power (in the form of a limit) might be anything. A good example of this form is the limit $$\lim_{n\to\infty}(1+\frac{1}{n})^n=e,$$ or equivalently $$\lim_{x\to0}(1+x)^\frac{1}{x}=e,$$ which both have the form $$1^\infty$$. Let’s work out that limit, $$\lim_{x\to0}[\cos(x)]^\frac{1}{x^2}$$.
The trick here is the opposite of the last one: We’ll take the log of our function, which we can express by writing the function in exponential form: $$\lim_{x\to0}[\cos(x)]^\frac{1}{x^2}=\lim_{x\to0}e^{\ln[\cos(x)]^\frac{1}{x^2}}=\lim_{x\to0}e^{\frac{1}{x^2}\ln\cos(x)}=\lim_{x\to0}e^{\frac{\ln\cos(x)}{x^2}}$$ The exponent is now a fraction in the form $$\frac{0}{0}$$, so we can apply L’Hospital’s rule to it: $$\lim_{x\to0}\frac{\ln\cos(x)}{x^2}\overset{L’H}=\lim_{x\to0}\frac{\frac{-\sin(x)}{\cos(x)}}{2x}=\lim_{x\to0}\frac{-\sin(x)}{2x\cos(x)}$$ This still has the form $$\frac{0}{0}$$, so we repeat: $$\lim_{x\to0}\frac{-\sin(x)}{2x\cos(x)}\overset{L’H}=\lim_{x\to0}\frac{-\cos(x)}{2\cos(x)-2x\sin(x)}=\frac{-\cos(0)}{2\cos(0)-2(0)\sin(0)}=-\frac{1}{2}$$ Now we can insert that back into our exponential form: $$\lim_{x\to0}e^{\frac{\ln\cos(x)}{x^2}}=e^{-\frac{1}{2}}\approx0.6065$$ Here is the graph, showing the power in green, approaching the limit at 0:

### Skipping L’Hospital for the original problem

In your solution of the problem of lim(x→π/2+) sec x – tan x, you used the approach of combining 1/cos x and -sin x/cos x into a single fraction, which gives an indeterminate form 0/0, to which L’Hospital’s Rule applies. (You must always convert indeterminate forms into either 0/0 or ∞/∞ in order to apply L’Hospital.) But in this case, you can also just “rationalize” and multiply (1-sin x)/(cos x) by (1+sin x)/(1+sin x) and simplify, leading to a fraction which does not give 0/0 by direct substitution, avoiding L’Hospital.

Here is the work: $$\lim_{x\to\frac{\pi}{2}^+}\frac{1-\sin(x)}{\cos(x)}=\\\lim_{x\to\frac{\pi}{2}^+}\frac{(1-\sin(x))(1+\sin(x))}{\cos(x)(1+\sin(x))}=\\\lim_{x\to\frac{\pi}{2}^+}\frac{1-\sin^2(x)}{\cos(x)(1+\sin(x))}=\\\lim_{x\to\frac{\pi}{2}^+}\frac{\cos^2(x)}{\cos(x)(1+\sin(x))}=\\\lim_{x\to\frac{\pi}{2}^+}\frac{\cos(x)}{1+\sin(x)}=\\\frac{\cos\left(\frac{\pi}{2}\right)}{1+\sin\left(\frac{\pi}{2}\right)}=\frac{0}{2}=0$$

Doctor Rick had mentioned two ways to solve it “that do not involve an expression of the form ∞ – ∞;” he may have meant the two methods we’ve seen (both involving the same transformation), or he may have had something like this in mind: $$\lim_{x\to\frac{\pi}{2}^+}(\sec(x)-\tan(x))=\\\lim_{x\to\frac{\pi}{2}^+}\frac{\sec(x)-\tan(x)}{1}\cdot\frac{\sec(x)+\tan(x)}{\sec(x)+\tan(x)}=\\\lim_{x\to\frac{\pi}{2}^+}\frac{\sec^2(x)-\tan^2(x)}{\sec(x)+\tan(x)}=\\\lim_{x\to\frac{\pi}{2}^+}\frac{1}{\sec(x)+\tan(x)}=0$$ because the denominator becomes (negatively) infinite.

For yet another approach, you may be reminded of the half-angle formula $$\tan\left(\frac{x}{2}\right)=\frac{1-\cos(x)}{\sin(x)}$$ Replacing x with $$\frac{\pi}{2}-x$$, we see that $$\frac{1-\sin(x)}{\cos(x)}=\tan\left(\frac{\pi}{4}-\frac{x}{2}\right)$$ So $$\lim_{x\to\frac{\pi}{2}^+}(\sec(x)-\tan(x))=\lim_{x\to\frac{\pi}{2}^+}\tan\left(\frac{\pi}{4}-\frac{x}{2}\right)=\tan\left(\frac{\pi}{4}-\frac{\pi}{4}\right)=0$$

Problem 2 above would be much harder to do without L’Hospital.
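As a quick numerical sanity check of the Problem 2 result (my own arithmetic, not from the original post): at $$x=0.1$$, $$\cos(0.1)^{1/0.01}\approx(0.995004)^{100}=e^{100\ln(0.995004)}\approx e^{-0.5008}\approx 0.606,$$ already close to $$e^{-1/2}\approx0.6065$$.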
http://www.theinfolist.com/php/SummaryGet.php?FindGo=unified_atomic_mass_unit
unified atomic mass unit

The dalton or unified atomic mass unit (symbols: Da or u) is a non-SI unit of mass widely used in physics and chemistry. It is defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. The atomic mass constant, denoted ''m''u, is defined identically, giving ''m''u = 1 Da ≈ 1.66054×10⁻²⁷ kg. This unit is commonly used in physics and chemistry to express the mass of atomic-scale objects, such as atoms, molecules, and elementary particles, both for discrete instances and multiple types of ensemble averages. For example, an atom of helium-4 has a mass of about 4.0026 Da. This is an intrinsic property of the isotope, and all helium-4 atoms have the same mass. Acetylsalicylic acid (aspirin), C9H8O4, has an average mass of approximately 180.16 Da. However, there are no acetylsalicylic acid molecules with this mass. The two most common masses of individual acetylsalicylic acid molecules are 180.04 Da, having the most common isotopes, and 181.05 Da, in which one carbon is carbon-13.
The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the units kilodaltons (kDa), megadaltons (MDa), etc. Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of about 650 Da, or roughly 160 GDa total.

The mole is a unit of amount of substance, widely used in chemistry and physics, which was originally defined so that the mass of one mole of a substance, measured in grams, would be numerically equal to the average mass of one of its constituent particles, measured in daltons. That is, the molar mass of a chemical compound was meant to be numerically equal to its average molecular mass. For example, the average mass of one molecule of water is about 18.0153 daltons, and one mole of water is about 18.0153 grams. A protein whose molecules have an average mass of, say, 64 kDa would have a molar mass of 64 kg/mol.
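The gram–dalton correspondence is easy to play with numerically; a minimal sketch (my own addition, not from the article), using the exact post-2019 Avogadro number:

```python
N_A = 6.02214076e23        # Avogadro constant (exact since 2019), entities per mole

water_da = 18.0153         # average mass of one water molecule, in daltons
molar_mass_g = water_da    # g/mol, numerically equal to a very good approximation
print(molar_mass_g / N_A)  # mass of a single water molecule in grams, ~2.99e-23 g
```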
However, while this equality can be assumed for almost all practical purposes, it is now only approximate, because of the way the mole was redefined on 20 May 2019.

In general, the mass in daltons of an atom is numerically close, but not exactly equal, to the number of nucleons contained in its nucleus. It follows that the molar mass of a compound (grams per mole) is numerically close to the average number of nucleons contained in each molecule. By definition, the mass of an atom of carbon-12 is 12 daltons, which corresponds with the number of nucleons that it has (6 protons and 6 neutrons). However, the mass of an atomic-scale object is affected by the binding energy of the nucleons in its atomic nuclei, as well as the mass and binding energy of its electrons. Therefore, this equality holds only for the carbon-12 atom in the stated conditions, and will vary for other substances. For example, the mass of one unbound atom of the common hydrogen isotope (hydrogen-1, protium) is 1.007825 Da, the mass of the proton is 1.007276 Da, the mass of one free neutron is 1.008665 Da, and the mass of one hydrogen-2 (deuterium) atom is 2.014102 Da.
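To make the comparison with the nucleon count concrete, a tiny sketch (my own addition) that computes the relative deviation for the two atoms just quoted:

```python
# Relative deviation between atomic mass (Da) and mass number A,
# using the values quoted above.
atoms = {"hydrogen-1": (1.007825, 1), "deuterium": (2.014102, 2)}
for name, (mass_da, a) in atoms.items():
    print(f"{name}: {100 * (mass_da - a) / a:+.2f}%")
# hydrogen-1 comes out near +0.8%, matching the exception noted just below
```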
In general, the difference (absolute mass excess) is less than 0.1%; exceptions include hydrogen-1 (about 0.8%), helium-3 (0.5%), lithium-6 (0.25%) and beryllium (0.14%).

The dalton differs from the unit of mass in the atomic units systems, which is the electron rest mass (''m''e).

# Energy equivalents

The atomic mass constant can also be expressed as its energy-equivalent, ''m''u''c''2. The 2018 CODATA recommended values are ''m''u''c''2 = 1.49241808560(45)×10⁻¹⁰ J = 931.49410242(28) MeV. The megaelectronvolt mass-equivalent (MeV/''c''2) is commonly used as a unit of mass in particle physics, and these values are also important for the practical determination of relative atomic masses.

# History

## Origin of the concept

The interpretation of the law of definite proportions in terms of the atomic theory of matter implied that the masses of atoms of various elements had definite ratios that depended on the elements. While the actual masses were unknown, the relative masses could be deduced from that law. In 1803 John Dalton proposed to use the (still unknown) atomic mass of the lightest atom, that of hydrogen, as the natural unit of atomic mass. This was the basis of the atomic weight scale. For technical reasons, in 1898, chemist Wilhelm Ostwald and others proposed to redefine the unit of atomic mass as 1/16 of the mass of an oxygen atom. That proposal was formally adopted by the International Committee on Atomic Weights (ICAW) in 1903. That was approximately the mass of one hydrogen atom, but oxygen was more amenable to experimental determination. This suggestion was made before the discovery of the existence of elemental isotopes, which occurred in 1912. The physicist Jean Perrin had adopted the same definition in 1909 during his experiments to determine the atomic masses and the Avogadro constant. This definition remained unchanged until 1961. Perrin also defined the "mole" as an amount of a compound that contained as many molecules as 32 grams of oxygen (O2). He called that number the Avogadro number
in honor of physicist Amedeo Avogadro.

## Isotopic variation

The discovery of isotopes of oxygen in 1929 required a more precise definition of the unit. Unfortunately, two distinct definitions came into use. Chemists chose to define the AMU as 1/16 of the average mass of an oxygen atom as found in nature; that is, the average of the masses of the known isotopes, weighted by their natural abundance. Physicists, on the other hand, defined it as 1/16 of the mass of an atom of the isotope oxygen-16 (16O).

## Definition by the IUPAC

The existence of two distinct units with the same name was confusing, and the difference (about 1.000282 in relative terms) was large enough to affect high-precision measurements. Moreover, it was discovered that the isotopes of oxygen had different natural abundances in water and in air. For these and other reasons, in 1961 the International Union of Pure and Applied Chemistry (IUPAC), which had absorbed the ICAW, adopted a new definition of the atomic mass unit for use in both physics and chemistry; namely, 1/12 of the mass of a carbon-12 atom. This new value was intermediate between the two earlier definitions, but closer to the one used by chemists (who would be affected the most by the change). The new unit was named the "unified atomic mass unit" and given a new symbol "u", to replace the old "amu" that had been used for the oxygen-based units. However, the old symbol "amu" has sometimes been used, after 1961, to refer to the new unit, particularly in lay and preparatory contexts. With this new definition, the standard atomic weight of carbon is approximately 12.011, and that of oxygen is approximately 15.999. These values, generally used in chemistry, are based on averages of many samples from Earth's crust, its atmosphere, and organic materials. The IUPAC 1961 definition of the unified atomic mass unit, with that name and symbol "u", was adopted by the International Bureau for Weights and Measures (BIPM) in 1971 as a non-SI unit accepted for use with the SI.

## Unit name

In 1993, the IUPAC proposed the shorter name "dalton" (with symbol "Da") for the unified atomic mass unit. As with other unit names such as watt and newton, "dalton" is not capitalized in English, but its symbol, "Da", is capitalized.
The name was endorsed by the International Union of Pure and Applied Physics (IUPAP) in 2005. In 2003 the name had been recommended to the BIPM by the Consultative Committee for Units, part of the CIPM, as it "is shorter and works better with [the] SI prefixes". In 2006, the BIPM included the dalton in its 8th edition of the formal definition of the SI. The name was also listed as an alternative to "unified atomic mass unit" by the International Organization for Standardization in 2009. It is now recommended by several scientific publishers, and some of them consider "atomic mass unit" and "amu" deprecated. In 2019, the BIPM retained the dalton in its 9th edition of the formal definition of the SI while dropping the unified atomic mass unit from its table of non-SI units accepted for use with the SI, but secondarily notes that the dalton (Da) and the unified atomic mass unit (u) are alternative names (and symbols) for the same unit.

## 2019 redefinition of the SI base units

The definition of the dalton was not affected by the 2019 redefinition of SI base units; that is, 1 Da in the SI is still 1/12 of the mass of a carbon-12 atom, a quantity that must be determined experimentally in terms of SI units. However, the definition of the mole was changed to be the amount of substance consisting of exactly 6.02214076×10²³ entities, and the definition of the kilogram was changed as well. As a consequence, the molar mass constant is no longer exactly 1 g/mol, meaning that the number of grams in the mass of one mole of any substance is no longer exactly equal to the number of daltons in its average molecular mass.

# Measurement

Although relative atomic masses are defined for neutral atoms, they are measured (by mass spectrometry) for ions: hence, the measured values must be corrected for the mass of the electrons that were removed to form the ions, and also for the mass equivalent of the electron binding energy, ''E''b/''m''u''c''2. The total binding energy of the six electrons in a carbon-12 atom is 1030.1089 eV = 1.65042×10⁻¹⁶ J, giving ''E''b/''m''u''c''2 ≈ 1.106×10⁻⁶, or about one part in 10 million of the mass of the atom. Before the 2019 redefinition of SI units, experiments were aimed at determining the value of the Avogadro constant for finding the value of the unified atomic mass unit.

## Josef Loschmidt

A reasonably accurate value of the atomic mass unit was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas.

## Jean Perrin

Perrin estimated the Avogadro number by a variety of methods, at the turn of the 20th century. He was awarded the 1926 Nobel Prize in Physics, largely for this work.
## Coulometry

The electric charge per mole of elementary charges is a constant called the Faraday constant, ''F'', whose value had been essentially known since 1834, when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan obtained the first measurement of the charge on an electron, −''e''. The quotient ''F''/''e'' provided an estimate of the Avogadro constant. The classic experiment is that of Bower and Davis at NIST, and relies on dissolving silver metal away from the anode of an electrolysis cell, while passing a constant electric current ''I'' for a known time ''t''. If ''m'' is the mass of silver lost from the anode and ''A'' the atomic weight of silver, then the Faraday constant is given by:

''F'' = ''A''·''I''·''t''/''m''

The NIST scientists devised a method to compensate for silver lost from the anode by mechanical causes, and conducted an isotope analysis of the silver used to determine its atomic weight. Their value for the conventional Faraday constant was ''F'' = 96485.39(13) C/mol, which corresponds to a value for the Avogadro constant of 6.0221449(78)×10²³ mol⁻¹; both values have a relative standard uncertainty of 1.3×10⁻⁶.
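Dividing that conventional Faraday value by the elementary charge reproduces the quoted Avogadro estimate; a minimal sketch (my own addition; note the elementary charge used here is the modern exact value, not the one available to Bower and Davis):

```python
F = 96485.39         # conventional Faraday constant, C/mol (Bower & Davis)
e = 1.602176634e-19  # elementary charge, C (exact since the 2019 SI redefinition)
print(F / e)         # ~6.0221e23 per mole
```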
## Electron mass measurement

In practice, the atomic mass constant is determined from the electron rest mass ''m''e and the electron relative atomic mass ''A''r(e) (that is, the mass of the electron divided by the atomic mass constant). The relative atomic mass of the electron can be measured in cyclotron experiments, while the rest mass of the electron can be derived from other physical constants:

$m_{\rm u} = \frac{m_{\rm e}}{A_{\rm r}({\rm e})} = \frac{2 R_\infty h}{A_{\rm r}({\rm e})\, c\, \alpha^2}$

where ''c'' is the speed of light, ''h'' is the Planck constant, ''α'' is the fine-structure constant, and ''R''∞ is the Rydberg constant. As may be observed from the old values (2014 CODATA), the main limiting factor in the precision of the Avogadro constant was the uncertainty in the value of the Planck constant, as all the other constants that contribute to the calculation were known more precisely. [The tables of 2014 and 2018 CODATA values are not reproduced here.]

## X-ray crystal density methods

Silicon single crystals may be produced today in commercial facilities with extremely high purity and with few lattice defects. This method defined the Avogadro constant as the ratio of the molar volume, ''V''m, to the atomic volume ''V''atom:

$N_{\rm A} = \frac{V_{\rm m}}{V_{\rm atom}},$ where $V_{\rm atom} = \frac{V_{\rm cell}}{n}$

and ''n'' is the number of atoms per unit cell of volume ''V''cell. The unit cell of silicon has a cubic packing arrangement of 8 atoms, and the unit cell volume may be measured by determining a single unit cell parameter, the length ''a'' of one of the sides of the cube. The 2018 CODATA value of ''a'' for silicon is 5.431020511(89)×10⁻¹⁰ m. In practice, measurements are carried out on a distance known as ''d''220(Si), which is the distance between the planes denoted by the Miller indices (220), and is equal to ''a''/√8.
The isotopic composition of the sample used must be measured and taken into account. Silicon occurs in three stable isotopes (28Si, 29Si, 30Si), and the natural variation in their proportions is greater than other uncertainties in the measurements. The atomic weight ''A''r for the sample crystal can be calculated, as the standard atomic weights of the three nuclides are known with great accuracy. This, together with the measured density ''ρ'' of the sample, allows the molar volume ''V''m to be determined:

$V_{\rm m} = \frac{A_{\rm r} M_{\rm u}}{\rho},$

where ''M''u is the molar mass constant. The 2018 CODATA value for the molar volume of silicon is 1.205883199(60)×10⁻⁵ m³ mol⁻¹, with a relative standard uncertainty of about 5×10⁻⁸.

# See also

* Mass (mass spectrometry)
** Kendrick mass
** Monoisotopic mass
* Mass-to-charge ratio
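Putting the quoted silicon figures together reproduces the Avogadro constant; a minimal sketch (my own addition, using only the 2018 CODATA values cited above):

```python
a = 5.431020511e-10    # silicon lattice parameter, m
n = 8                  # atoms per cubic unit cell
V_m = 1.205883199e-5   # molar volume of silicon, m^3/mol
print(n * V_m / a**3)  # N_A = n * V_m / V_cell, ~6.0221e23 per mole
```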
http://mathoverflow.net/questions/96860/identifying-lattices?sort=newest
# Identifying lattices

I wrote a program that numerically searches for lattices in $\mathbb{R}^d$ with high sphere packing densities. As I have been running the program, it has been able to find, in addition to well-known lattices such as the laminated lattices $\Lambda_d$ and the Coxeter–Todd-related lattices $K_d$, a few interesting-looking lattices, which I have been unable to identify. The lattices I have found so far are not as dense as $\Lambda_d$ or $K_d$, but are reasonably dense, and are nice integral lattices. Since I found them through a sort of local optimization, I suppose they are probably perfect. I looked through the lattices listed on the Sloane–Nebe Catalog of Lattices and did not find any matches, but there do not seem to be many lattices listed there.

Here is an example of one of the lattices I find in $\mathbb{R}^{11}$. The Gram matrix is given by

$$G = \left(\begin{array}{ccccccccccc} 8&3&3&2&3&4&4&4&4&4&4\\ 3&8&4&4&4&-1&4&-1&4&4&4\\ 3&4&8&0&0&-1&0&-1&0&4&4\\ 2&4&0&8&2&2&2&1&2&4&0\\ 3&4&0&2&8&3&4&-1&4&1&1\\ 4&-1&-1&2&3&8&0&4&0&0&0\\ 4&4&0&2&4&0&8&0&4&2&2\\ 4&-1&-1&1&-1&4&0&8&0&0&0\\ 4&4&0&2&4&0&4&0&8&2&2\\ 4&4&4&4&1&0&2&0&2&8&4\\ 4&4&4&0&1&0&2&0&2&4&8 \end{array}\right)\text.$$

The numbers of spheres in successive shells (equivalently, the theta function) are: norm 8, 308; norm 10, 320; norm 12, 680; norm 14, 1472. The packing density is $1/(14\sqrt{7})=0.02699\ldots$ (number density for non-overlapping spheres of radius 1; compare to $0.03208\ldots$ for $K_{11}$ and $0.03125$ for $\Lambda_{11}$).

Does anybody know where I might be able to find out whether these lattices have been studied before?

- You might write Sloane and/or Nebe. Meanwhile Magma says that this lattice has $1536 = 2^9 \cdot 3$ automorphisms. – Noam D. Elkies May 14 '12 at 1:27
- @Noam: Thanks, I will try that. – Yoav Kallus May 14 '12 at 15:20

## 1 Answer

(Not an answer, but I could not find in the FAQ how one leaves a comment.) Your lattice is perfect.

- Thanks for your comment. In the FAQ section on rep, it says you need to have 50 rep points, which you'll soon have, to leave comments. – Yoav Kallus May 14 '12 at 15:08
- @Yoav, Martinet has a book on perfect lattices that Nebe reviewed, maybe I can find a link. Meanwhile: personal, math.u-bordeaux.fr/~martinet and then perfect, math.u-bordeaux.fr/~martinet/Lattices/index.html ; book review, math.rwth-aachen.de/~Gabriele.Nebe/papers/BookMartinet.pdf – Will Jagy May 14 '12 at 20:12
- @Will, thanks for the reference. I checked the book out of the library, and I am enjoyably perusing it now. – Yoav Kallus May 14 '12 at 21:44
- @Yoav, just watch out for police. Your ordinary driver's license allows you to read, study, review, contemplate, examine, or pore over, but you need a trucker's license to peruse. – Will Jagy May 14 '12 at 22:02
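A quick way to double-check the quoted density from the Gram matrix (my own sketch, not from the thread; it assumes the minimal norm is 8, as the shell counts indicate, so spheres of radius 1 correspond to rescaling the lattice by $1/\sqrt{2}$):

```python
import numpy as np

G = np.array([
    [8, 3, 3, 2, 3, 4, 4, 4, 4, 4, 4],
    [3, 8, 4, 4, 4, -1, 4, -1, 4, 4, 4],
    [3, 4, 8, 0, 0, -1, 0, -1, 0, 4, 4],
    [2, 4, 0, 8, 2, 2, 2, 1, 2, 4, 0],
    [3, 4, 0, 2, 8, 3, 4, -1, 4, 1, 1],
    [4, -1, -1, 2, 3, 8, 0, 4, 0, 0, 0],
    [4, 4, 0, 2, 4, 0, 8, 0, 4, 2, 2],
    [4, -1, -1, 1, -1, 4, 0, 8, 0, 0, 0],
    [4, 4, 0, 2, 4, 0, 4, 0, 8, 2, 2],
    [4, 4, 4, 4, 1, 0, 2, 0, 2, 8, 4],
    [4, 4, 4, 0, 1, 0, 2, 0, 2, 4, 8],
])

d = G.shape[0]
assert np.allclose(G, G.T)                # a Gram matrix must be symmetric
assert np.all(np.linalg.eigvalsh(G) > 0)  # ... and positive definite

# Minimal vectors have norm 8, i.e. length 2*sqrt(2); scaling by 1/sqrt(2)
# makes the minimal distance 2, so unit spheres centred on lattice points
# just touch.  The number density is then 1/covolume of the scaled lattice.
density = np.sqrt(2**d / np.linalg.det(G))
print(density, 1 / (14 * np.sqrt(7)))  # should both be ~0.02699 if the quoted density is right
```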
https://mathstrek.blog/2011/11/16/quadratic-residues-part-ii/
## Quadratic Residues – Part II

Recall what we’re trying to do here: to show that if p > 2 is prime and a is not a square modulo p, then $a^{(p-1)/2} \equiv -1 \pmod p$. As mentioned at the end of the previous part, we will need…

Primitive Roots

Again, let p > 2 be prime. For any a which is not divisible by p, recall that we have d = ord_p(a), which is the smallest positive integer d for which $a^d \equiv 1 \pmod p$. We had proven earlier that d must divide p-1.

Definition: a primitive root is an element g mod p such that $ord_p(g) = p-1$ exactly. In other words, among the elements $1, g, g^2, g^3, \dots, g^{p-2}$ (modulo p), only the first is equal to 1.

In fact, we can say more: all these p-1 elements are distinct mod p! Indeed, if $g^i \equiv g^j \pmod p$ for some $0 \le i < j \le p-2$, then since g is coprime to p, we can do cancellation on both sides to obtain $g^{j-i} \equiv 1 \pmod p$ where $0 < j-i \le p-2$, which contradicts our assumption that g has order exactly p-1. So the p-1 elements $1, g, g^2, \dots, g^{p-2}$ are distinct and each is congruent to one of {1, 2, …, p-1}. In short:

If g is a primitive root, then the successive powers $1, g, g^2, \dots, g^{p-2}$ form a permutation of {1, 2, …, p-1}.

E.g.

• p = 7: pick g = 3; then the successive powers of 3 give: $1, 3, 3^2 \equiv 2, 3^3 \equiv 6, 4, 5$.
• p = 13: pick g = 2; then the powers of 2 give 1, 2, 4, 8, 3, 6, 12, 11, 9, 5, 10, 7.

Now, we shall state a theorem without proof (with apology in advance … but it’s a very classical result and the reader should have no difficulty finding a book with a proof).

Theorem. Every prime p has at least one primitive root.

Exercise 1: exactly how many primitive roots are there modulo p?

Exercise 2: prove that if m is a positive integer not divisible by p-1, then $1^m + 2^m + \dots + (p-1)^m$ is a multiple of p.

Ok, now we’re ready to explain why $a^{(p-1)/2} \equiv -1 \pmod p$ for non-square a modulo p. Pick a primitive root g mod p. Since the powers of g (mod p) run through the entire set {1, 2, …, p-1} in some order, we have $g^i \equiv a \pmod p$ for some i. We had already seen that $a^{(p-1)/2} \equiv \pm 1 \pmod p$ must hold, so it suffices to show $a^{(p-1)/2} \not\equiv +1 \pmod p$. But if $a^{(p-1)/2} \equiv +1 \pmod p$, then:

$g^{i(p-1)/2} \equiv +1 \pmod p$.

Since the multiplicative order of g modulo p is exactly p-1, this just means that i(p-1)/2 is a multiple of p-1, i.e. i is a multiple of 2, so we write i = 2j. Thus, $a \equiv g^{2j}\equiv (g^j)^2 \pmod p$ is a square modulo p, which is a contradiction. ♦

Exercise 3: prove that if p > 2 is prime, then (-1) is a square modulo p if and only if p is 1 modulo 4. [Thanks to our theory of quadratic residues, this result is utterly trivial.]

Exercise 4: prove that if p > 2 is prime, then 2 is a square modulo p if and only if p ≡ ±1 (mod 8).

The last result is extremely important and not too easy, so we’ll guide you along with it. To read the hint, highlight the text starting from here … [Hint: consider the elements 1, 2, …, (p-1)/2 modulo p. Upon multiplying them by 2, we obtain 2, 4, …, (p-1). Note that 2 × 4 × … × (p-3) × (p-1) ≡ (-1) × 2 × (-3) × 4 …. Now, multiply these (p-1)/2 numbers together, consider the various cases modulo 8, and you’re done.] … to here.

In the next chapter, we’ll introduce the Legendre symbol notation, an extremely handy notation for checking which numbers are squares mod p. Also, you’ll answer questions like “for what primes p is 3 a square mod p?”.
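To see the dichotomy concretely for a small prime (my own illustration, not part of the post), Python's three-argument pow makes the check immediate:

```python
# For p = 13 (as in the example above), check that a^((p-1)/2) mod p is
# 1 exactly when a is a square mod p, and p-1 (i.e. -1) otherwise.
p = 13
squares = {x * x % p for x in range(1, p)}
for a in range(1, p):
    euler = pow(a, (p - 1) // 2, p)
    assert euler in (1, p - 1)             # always congruent to +1 or -1
    assert (euler == 1) == (a in squares)  # +1 exactly for the squares
print("Euler's criterion verified for p =", p)
```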
https://puzzling.stackexchange.com/questions/92425/make-28-with-the-numbers-2020
# Make 28 with the numbers 2020

Try to make 28 from the numbers 2020. Allowed operations: +, -, x, ÷, ! (factorial), exponentiation, square root, squaring, parentheses.

• Just curious, is there a significance to you choosing "28"? Jan 2 '20 at 22:13

Here's an attempt, since you allow squaring as an operation:

(2²)! + 0! + 2 + 0! = 24 + 1 + 2 + 1 = 28

• You used 2 three times; you must use it just twice, and 0 too. Jan 2 '20 at 11:15
• No, the small 2 is the 'squaring' operation. Jan 2 '20 at 11:16
• Oh yeah, like square root, thanks :3 Jan 2 '20 at 11:27
• A similar point to you as I just made to Dmitry: swap the order of your second and third terms and you keep all 4 digits in their original ordering too, for a more satisfyingly perfect solution :) – Stiv Jan 2 '20 at 13:58
• @Stiv thanks; I wouldn't mind if you did such an edit yourself. (But I understand other users might mind.) Jan 2 '20 at 14:03

Based on my previous answer to a similar question, I created a Python program to brute-force all the possible solutions. To calculate the number of possible solutions:

1. We have $$5$$ possible parsing trees. [The figure with the five binary tree shapes is not reproduced.]
2. We have $$6$$ possible permutations of numbers: $$0022$$, $$0202$$, $$0220$$, $$2002$$, $$2020$$ and $$2200$$.
3. Each leaf node may hold the plain number or the number with one of the following unary operations applied: unary minus, factorial, square or square root. This is $$5$$ possibilities for each leaf node, so $$5^4 = 625$$.
4. Each interior node may hold addition, subtraction, multiplication, division or exponentiation, possibly combined with one of the unary operations. This is $$5 \times 5 = 25$$ possibilities for each interior node. Since we have $$3$$ interior nodes, that is $$25^3 = 15625$$.
5. So the total of possible parse trees is $$5 \times 6 \times 625 \times 15625 = 292,968,750$$.

That is a big number, but not too big for a computer with some time. However, there are two caveats:

1. I had to prune a few problematic cases in my program so it doesn't freeze trying to compute some very large numbers that show up. Namely, anything relying on a factorial or an exponent larger than 100 is pruned. I think it is extremely unlikely that 28 would ever be the eventual answer when any of those large numbers shows up.
2. Factorials, square roots, squares and unary minus might be applied multiple times to the same operand. However, my program applies them at most once for each node in the tree. I will leave those out for now, because the program already generates a large number of results and already takes a lot of time as it is. In fact, the number of possible parse trees is infinite, because you can always insert another unary operator (although pruning problematic cases is certainly possible). Adding those possibilities would only make the number of possible solutions go higher (possibly exponentially).

Here is the code:

```python
from typing import List, Union

number = Union[int, float]


class Op:
    def op(self) -> number:
        raise Exception("Should override")

    def __str__(self):
        return "Junk"


class Num(Op):
    def __init__(self, a: number) -> None:
        self.__a = a

    def op(self) -> number:
        return self.__a

    def __str__(self):
        return str(self.__a)


# Not currently used. But I'll leave it here if you want to play with it.
class Concat(Op):
    def __init__(self, a: Op, b: Op) -> None:
        self.__a = a
        self.__b = b

    def op(self) -> number:
        a: number = self.__a.op()
        b: number = self.__b.op()
        if int(a) == float(a):
            a = int(a)
        if int(b) == float(b):
            b = int(b)
        x: str = str(a) + str(b)
        try:
            return int(x)
        except Exception:
            return float(x)

    def __str__(self):
        return f"({self.__a} c {self.__b})"


class Add(Op):
    def __init__(self, a: Op, b: Op) -> None:
        self.__a = a
        self.__b = b

    def op(self) -> number:
        return self.__a.op() + self.__b.op()

    def __str__(self):
        return f"({self.__a} + {self.__b})"


class Sub(Op):
    def __init__(self, a: Op, b: Op) -> None:
        self.__a = a
        self.__b = b

    def op(self) -> number:
        return self.__a.op() - self.__b.op()

    def __str__(self):
        return f"({self.__a} - {self.__b})"


class Times(Op):
    def __init__(self, a: Op, b: Op) -> None:
        self.__a = a
        self.__b = b

    def op(self) -> number:
        return self.__a.op() * self.__b.op()

    def __str__(self):
        return f"({self.__a} * {self.__b})"


class Div(Op):
    def __init__(self, a: Op, b: Op) -> None:
        self.__a = a
        self.__b = b

    def op(self) -> number:
        return self.__a.op() / self.__b.op()

    def __str__(self):
        return f"({self.__a} / {self.__b})"


class Pow(Op):
    def __init__(self, a: Op, b: Op) -> None:
        self.__a = a
        self.__b = b

    def op(self) -> number:
        e: number = self.__b.op()
        if e > 100:
            raise ValueError('Too large')
        return self.__a.op() ** e

    def __str__(self):
        return f"({self.__a} ^ {self.__b})"


class UnaryMinus(Op):
    def __init__(self, a: Op) -> None:
        self.__a = a

    def op(self) -> number:
        return -self.__a.op()

    def __str__(self):
        return f"-{self.__a}"


class Square(Op):
    def __init__(self, a: Op) -> None:
        self.__a = a

    def op(self) -> number:
        return self.__a.op() ** 2

    def __str__(self):
        return f"{self.__a}²"


class SquareRoot(Op):
    def __init__(self, a: Op) -> None:
        self.__a = a

    def op(self) -> number:
        return self.__a.op() ** (1 / 2)

    def __str__(self):
        x: str = f"{self.__a}"
        if x[0] == '(' and x[-1] == ')':
            return f"sqrt{x}"
        return f"sqrt({x})"


fat_table: List[int] = [1]


def factorial(x: number) -> int:
    z: int = int(x)  # note: non-integer operands are truncated
    if z < 0:
        raise ValueError('No factorial of negative numbers!')
    if z > 100:
        raise ValueError('Too large')
    if len(fat_table) > z:
        return fat_table[z]
    f: int = factorial(z - 1) * z
    fat_table.append(f)
    return f


class Factorial(Op):
    def __init__(self, a: Op) -> None:
        self.__a = a

    def op(self) -> number:
        return factorial(self.__a.op())

    def __str__(self):
        return f"{self.__a}!"


# Not currently used. But I'll leave it here if you want to play with it.
class Dot(Op):
    def __init__(self, a: Op, b: Op) -> None:
        self.__a = a
        self.__b = b

    def op(self) -> number:
        a: number = self.__a.op()
        b: number = self.__b.op()
        if int(a) == float(a):
            a = int(a)
        if int(b) == float(b):
            b = int(b)
        x: str = str(a) + '.' + str(b)
        return float(x)

    def __str__(self):
        return f"({self.__a} d {self.__b})"


def leafs(op1: int) -> List[Op]:
    return [Num(op1), UnaryMinus(Num(op1)), Square(Num(op1)),
            SquareRoot(Num(op1)), Factorial(Num(op1))]


def combine(op: str, op1: Op, op2: Op) -> Op:
    if len(op) == 2:
        if op[0] == '-':
            return UnaryMinus(combine(op[1], op1, op2))
        if op[0] == '2':
            return Square(combine(op[1], op1, op2))
        if op[0] == 'R':
            return SquareRoot(combine(op[1], op1, op2))
        if op[0] == '!':
            return Factorial(combine(op[1], op1, op2))
        raise Exception("WTF!?")
    if op == '+':
        return Add(op1, op2)
    if op == '-':
        return Sub(op1, op2)
    if op == '*':
        return Times(op1, op2)
    if op == '/':
        return Div(op1, op2)
    # if op == 'c': return Concat(op1, op2)
    if op == '^':
        return Pow(op1, op2)
    # if op == 'd': return Dot(op1, op2)
    raise Exception("WTF!?")


def join(p: str, na: Op, nb: Op, nc: Op, nd: Op, x: str, y: str, z: str) -> Op:
    if p == 'balanced':
        return combine(z, combine(x, na, nb), combine(y, nc, nd))
    if p == 'lefty':
        return combine(z, combine(y, combine(x, na, nb), nc), nd)
    if p == 'righty':
        return combine(x, na, combine(y, nb, combine(z, nc, nd)))
    if p == 'zigzag':
        return combine(z, na, combine(y, combine(x, nb, nc), nd))
    if p == 'zagzig':
        return combine(z, combine(y, na, combine(x, nb, nc)), nd)
    raise Exception("WTF!?")


def do_it_all() -> None:
    nums: List[List[int]] = [
        [0, 0, 2, 2],
        [0, 2, 0, 2],
        [0, 2, 2, 0],
        [2, 0, 0, 2],
        [2, 0, 2, 0],
        [2, 2, 0, 0],
    ]
    trees: List[str] = ['balanced', 'lefty', 'righty', 'zigzag', 'zagzig']
    ops1: List[str] = ['+', '-', '*', '/', '^']
    ops2: List[str] = ['-', '2', 'R', '!']
    ops: List[str] = ops1.copy()
    for o1 in ops1:
        for o2 in ops2:
            ops.append(o2 + o1)
    q: int = 0
    for p in trees:
        for a in nums:
            for x in ops:
                for y in ops:
                    for z in ops:
                        for a0 in leafs(a[0]):
                            for a1 in leafs(a[1]):
                                for a2 in leafs(a[2]):
                                    for a3 in leafs(a[3]):
                                        q += 1
                                        # if q % 1000000 == 0: print(q)
                                        t: Op = join(p, a0, a1, a2, a3, x, y, z)
                                        try:
                                            n: number = t.op()
                                            if n == 28:
                                                print(str(t))
                                        except Exception:
                                            pass
    # print(q)


do_it_all()
```

After some minutes, it spits out a total of $$12,668$$ solutions that give $$28$$ as the answer. Here are a few of them that I picked at random from the output:

$$(0! + 0!)^{2} + (\sqrt{2} ^ {(2^2)})!$$

$$(0 + 2^2)! + (0^2 + 2!)^2$$

$$(2^2 \div (0 + 0)!)! + 2^2$$

$$\sqrt{((2^2 - 0)! + 2^2)^2 + 0}$$

$$2^2 + ((0 + 0)! \times 2^2)!$$

$$(0^2 + (2 - \sqrt{0})^2)! + 2^2$$

The complete output is here. Of course I couldn't put it all in this post because there are too many solutions found (as stated above).

• Great work! Were there any solutions that didn't involve squaring? Jan 7 '20 at 23:44
• @DmitryKamenetsky I didn't look carefully for that, but from a quick eye-scan of the results, I found none. Jan 8 '20 at 0:23
• Thank you. That confirms my suspicion that it is impossible without squaring. Jan 8 '20 at 1:27

Here is a solution that keeps the order of the digits:

squared(2)*(0!+(2+0!)!) = 4 * 7 = 28.

• In fact, swap around the two terms in the bracket to the right of the multiplication symbol and you keep all 4 digits in their original ordering too, for a more perfect solution :) – Stiv Jan 2 '20 at 13:55
• @Stiv excellent idea! Fixed. Jan 2 '20 at 13:57

Additional solution, keeping the digits in order:

$$(2^2 + 0!)^2 + 2 + 0! = 5^2 + 3 = 28$$

$$(2^{2^2}-2)\times(0!+0!)=(2^4-2)\times2=14\times2=28$$. Uses the 'square' operation doubled up. The digits are out of order, but the sum still works.
The given digits are on the base line. Operations: square, multiply, subtract, add, factorial.

$${2^2}^2 \cdot 2 - (0! + 0!)^2 = 16 \cdot 2 - 2^2 = 28$$

• 2^2^2 = 16, not 8, no matter which order I try ... Jan 4 '20 at 8:55
• @Glorfindel Oops. Here's a better version. Jan 4 '20 at 11:02

$$square(2)!+0!+2+0!=28$$

$$square(2)\cdot((0!+2)!+0!)=28$$

If we use the parentheses $$\left(\!\!{\choose }\!\!\right)$$ with interpretation as the multiset coefficient:

$$\left(\!\!{n\choose k}\!\!\right) = {n+k-1\choose k} = \frac{(n+k-1)!}{(n-1)!\space k!}$$

$$\left(\!\!{(2+0!)!+0!\choose 2}\!\!\right) = \left(\!\!{7\choose 2}\!\!\right) = {8\choose 2} = \frac{8!}{6!\space 2!} = 28$$
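All the closed-form answers above can be verified mechanically; a one-off check in Python (my own addition, using math.factorial and, for the last one, math.comb for the multiset coefficient):

```python
from math import comb, factorial

print(factorial(2**2) + factorial(0) + 2 + factorial(0))            # (2²)! + 0! + 2 + 0! = 28
print((2**2 + factorial(0))**2 + 2 + factorial(0))                  # (2² + 0!)² + 2 + 0! = 28
print(((2**2)**2 - 2) * (factorial(0) + factorial(0)))              # (2^2^2 - 2)(0! + 0!) = 28
print(comb(factorial(2 + factorial(0)) + factorial(0) + 2 - 1, 2))  # multichoose(7, 2) = 28
```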
http://mathonline.wikidot.com/cauchy-s-integral-theorem-examples-1
Cauchy's Integral Theorem Examples 1

Recall from the Cauchy's Integral Theorem page the following two results:

The Cauchy-Goursat Integral Theorem for Open Disks:

• If $f$ is analytic on an open disk $D(z_0, r)$ then for any closed, piecewise smooth curve $\gamma$ in $D(z_0, r)$ we have that:

(1) \begin{align} \quad \int_{\gamma} f(z) \: dz = 0 \end{align}

• Furthermore, there exists a function $F$ that is analytic on $D(z_0, r)$ such that $F'(z) = f(z)$.

The Cauchy Integral Theorem:

• If $A \subseteq \mathbb{C}$ is open, $f : A \to \mathbb{C}$ is analytic on $A$, and if $\gamma$ is a closed, piecewise smooth curve in $A$ homotopic to a point in $A$ then:

(2) \begin{align} \quad \int_{\gamma} f(z) \: dz = 0 \end{align}

We will now use these theorems to evaluate some seemingly difficult integrals of complex functions. That said, it should be noted that these examples are somewhat contrived. In practice, knowing when (and if) either of Cauchy's integral theorems can be applied is a matter of checking whether the conditions of the theorems are satisfied.

Example 1

Evaluate the integral $\displaystyle{\int_{\gamma} \frac{e^z}{z} \: dz}$ where $\gamma$ is given parametrically for $t \in [0, 2\pi)$ by $\gamma(t) = e^{it} + 3$.

Note that the function $\displaystyle{f(z) = \frac{e^z}{z}}$ is analytic on $\mathbb{C} \setminus \{ 0 \}$. The curve $\gamma$ is the circle of radius $1$ shifted $3$ units to the right. This circle is homotopic to any point in $D(3, 1)$, which is contained in $\mathbb{C} \setminus \{ 0 \}$. So by Cauchy's integral theorem we have that:

(3) \begin{align} \quad \int_{\gamma} \frac{e^z}{z} \: dz = 0 \end{align}

Example 2

Consider the function $\displaystyle{f(z) = \left\{\begin{matrix} z^2 & \mathrm{if} \: \mid z \mid \leq 3 \\ \mid z \mid & \mathrm{if} \: \mid z \mid > 3 \end{matrix}\right.}$ and let $\gamma$ be the unit square. Evaluate $\displaystyle{\int_{\gamma} f(z) \: dz}$.

Note that $f$ is analytic on $D(0, 3)$ but $f$ is not analytic on $\mathbb{C} \setminus D(0, 3)$ (we have already proved that $\mid z \mid$ is not analytic anywhere). So since $f$ is analytic on the open disk $D(0, 3)$, for any closed, piecewise smooth curve $\gamma$ in $D(0, 3)$ we have by the Cauchy-Goursat integral theorem that $\displaystyle{\int_{\gamma} f(z) \: dz = 0}$. In particular, the unit square $\gamma$ is contained in $D(0, 3)$. Thus:

(4) \begin{align} \quad \displaystyle{\int_{\gamma} f(z) \: dz} = 0 \end{align}
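As a numerical sanity check for Example 1 (my own sketch, not part of the page), one can discretize $\int_{\gamma} f(z) \: dz = \int_0^{2\pi} f(\gamma(t)) \, \gamma'(t) \: dt$ with $\gamma(t) = e^{it} + 3$:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 20001)
gamma = np.exp(1j * t) + 3           # the circle |z - 3| = 1
dgamma = 1j * np.exp(1j * t)         # gamma'(t)
f = np.exp(gamma) / gamma            # e^z / z, analytic on a region containing gamma
integral = np.trapz(f * dgamma, t)   # should vanish by Cauchy's theorem
print(abs(integral))                 # tiny, at round-off level
```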
https://www.nature.com/articles/s41598-019-51393-5
Tough and Functional Cross-linked Bioplastics from Sheep Wool Keratin

Abstract

Novel bioplastic films derived from wool keratins were prepared by protein dissolution in an alkaline mild oxidative method that splits disulphide (-S-S-) bonds. The native structure of the keratin macromolecules was partially modified upon extraction, as revealed by the decrease of the β-sheet to α-helices/coils ratio, but high molecular weight fractions (31, 22 and 13 kDa) were retained, permitting film formation and plastic behaviour of the films. Keratin films were plasticised with glycerol and sodium dodecyl sulphonic acid (SDS), which provided different hydrophobic character to the bioplastics. The water content in the films depends on the relative humidity (RH), with the films able to absorb up to 35 wt% H2O at an ambient of 80% RH. The films were mechanically, thermally and optically analysed. The spectroscopic analyses revealed that these bioplastic films absorb UV light, which is interesting for packaging applications. Thermogravimetric and thermomechanical analysis revealed high stability of the keratin macromolecules up to 200 °C, with no inherent thermal transitions. Tough bioplastics (19 ± 4 MJ∙m−3) were obtained after thermal cross-linking with glycerol and formaldehyde, outperforming mechanical properties previously reported for protein films.

Introduction

The advances in petrochemical sciences during the 20th century promoted a brand new scientific field known as Polymer Science. This provided the world with a big family of new materials: plastics. Apart from their advantageous and unique properties, plastics also carry a large CO2 footprint associated with their production, transportation and management1. In addition, once their life cycle ends they are usually incinerated, which causes further release of CO2, or discarded into landfills, if not into seas, where they will stay for decades or hundreds of years1. Their use will keep on growing exponentially1, and it will also be the task of polymer scientists to amend the problems arising from plastic production and disposal. The transition from a fossil to a circular economy calls for the development of sustainable plastics. One attempt to tackle this situation is the substitution of oil-based plastics by compostable bioplastics constituted by biomacromolecules derived from plants, as in the case of bioplastics based on potato or corn starch2,3. Other natural biopolymers being studied as possible candidates for bioplastics include polysaccharides like chitin from shellfish4, alginates from seaweeds5,6, cellulose7, or proteins derived from soy8,9,10, sunflower11, milk12,13,14, whey15, or feather16 and fish17 residues. These last two examples are attempts at a Circular Economy, in which residues are provided with a value18,19. Bioplastics from plants have the advantage of absorbing CO2 during photosynthesis. In addition, after their life cycle they can be easily integrated into the environment, since they are usually biodegradable. In principle, these two factors might minimize their energy consumption and CO2 footprint in comparison to common fossil plastics1.
However, the problem with certain bioplastics, like those derived from plants, is that primary human resources might be diverted to bioplastic production. This situation would be similar to that of plant-derived biodiesels20,21, leading to a rise in food prices and motivating deforestation. It is for these reasons that the raw material for future bioplastics should be sought among the residues of current human activities. This approach would allow a better management of these residues and would mitigate the problems associated with the end of the plastics' life cycle. In certain countries wool has been discarded with no use due to the impossibility of competing with "tailor-made" synthetic fibers. The management of these residues constitutes a serious challenge, and authorities have already pointed out the need to find an outlet for them22,23. More recently, a directive for a Sustainable Bioeconomy has been published, highlighting the importance of developing bioplastics for a Circular Bioeconomy24. A small portion of keratin oligopeptide hydrolysates has been used in cosmetics for humans and as low-value nurturing additives for pets25. These hydrolysates are also being considered as a nitrogen source, in the form of amino acids and oligopeptides, for structuring land soil26,27. Here we explore the use of keratin-derived macromolecules for preparing sustainable, biodegradable bioplastics. Wool is composed of up to 90 wt% keratin fibres, fibrous proteins characterised by the high presence of cystine (R1-S-S-R2) residues28,29. These act as cross-linking points that provide stiffness and strength to the fibres, but they also render the fibroins difficult to extract by dissolution in common solvents30. In comparison to other proteins like silk, keratins do not dissolve in aqueous solutions of concentrated salts like LiBr, since the R1-S-S-R2 cross-links keep the macromolecules insoluble31. Therefore, keratin extraction and dissolution requires a reactive extraction in which the disulphides are split apart. Earlier attempts at keratin extraction were based on traditional lab-scale methods for protein denaturation, using disulphide-reducing agents like mercaptoethanol (HS-CH2-CH2-OH) or thioglycolic acid (HS-CH2-COOH) together with high concentrations of hydrogen-bond disrupting agents such as urea, O=C(NH2)2, and thiourea32, S=C(NH2)2. More recent attempts include the use of cysteine33,34 or ionic liquids35 for keratin extraction. The method used here is based on our previous advances in mild oxidative methodologies, in which keratin derivatives are obtained in high yields and are easily purified, and no toxic reactants or residues are used or generated36. The keratin derivatives were analysed and their aqueous solutions were used to prepare tough, plasticised and cross-linked films that were characterised morphologically and optically and analysed by thermal and mechanical means.

Results and Discussion

Keratin extraction and characterization

The morphology of wool has different structural orders and elements, as depicted in Fig. 1a. Keratin macromolecules are entangled in coiled-coil-like protofibrils that form microfibrils bundled into bunches of macrofibrils within the cells, associated through different intermolecular interactions such as covalent disulphide bridges, ionic interactions or hydrogen bonds (Fig. 1b)37. The alkaline oxidation of wool applied here modifies this range of interactions, promoting macromolecular disentanglement, separation and protein dissolution.
The extraction of keratin from wool was stopped after 2 h, when almost all of the wool had dissolved into the extractive liquor, as can be seen in Fig. S1a. The mixture was filtered and the yield was estimated at 90% of solubilised protein, as determined gravimetrically. Titration of the filtrate (Fig. S1b) shows that a dramatic drop in pH occurs before keratin precipitation at pH = 4.4. This can be related to the protonation of basic amino acids and the approach to the isoelectric point of the oxidised keratin macromolecules. As shown before, keratins obtained by reduction with thioglycolic acid exhibit higher isoelectric points. As has been proposed, the introduction of sulphenic (R-SOH), sulphinic (R-SO2H) or sulphonic (R-SO3H) acid groups38 might be the reason for the drop in the isoelectric point, the macromolecular disentanglement and the dissolution. In particular, it has been proposed26 that the reaction taking place would be:

$$K\text{-}S\text{-}S\text{-}K + 6\,\mathrm{H_2O_2} = 2\,K\text{-}\mathrm{SO_3^-} + 6\,\mathrm{H_2O} \qquad (1)$$

where K stands for the keratin macromolecular backbone. With this extraction strategy the main by-products are water, oxygen and slight amounts of hydroxides26. No toxic by-products are generated, and the water consumption of the washing or dialysis purification steps typical of other keratin extraction routes39,40 is minimized. At pH < 4.4 the keratin derivative precipitated and was collected as a white polymer precipitate (Fig. S1c). As can be seen in Table 1, elemental analysis of the extracted keratin reveals that during the oxidative extraction process oxygen is incorporated in a larger proportion than expected from Eq. (1). This can be explained by the susceptibility towards oxidation of amino acids such as arginine41, tyrosine42, and amino terminals43, a fact that is also depicted in Fig. 1b,c. SDS-PAGE electropherograms (Fig. S2) of wool keratins extracted with the reductive and oxidative methods indicate that the reductive extraction exhibits, in the low-density 10% Acryl/Bis gel, two distinct bands of 61 and 49 KDa, which can be related to the typical Type I (acidic) and Type II (basic) bands attributed to mammalian keratins44, such as wools from other breeds28,29, or human hair45. These bands are not detectable in keratins obtained by the oxidation method employed here for bioplastic preparation (Fig. S2a,b), due to partial hydrolysis of the keratin macromolecules through peptide bond cleavage. Yet other relatively high molecular weight fractions, with maximum intensities at 31, 22 and 13 KDa, were detected in the denser 18% Acryl/Bis gel fraction (Fig. S2a,c). These results are in accordance with previously reported data obtained with other gel configurations36. Therefore, this allows us to conclude that, although a relatively harsh oxidative method is employed, under the adequate conditions reported here keratin-derived macromolecules with high enough molecular weights (13–31 KDa) were obtained for bioplastic development. This is promising, since the process can be considered to comply with Green Chemistry principles and because the chemicals employed can be derived from renewable, non-fossil resources. A recent analysis of keratin extraction methodologies also revealed that mild oxidative methods, such as treatments with percarbonate (PCC) or peracetic acid (PAA), are economically attractive and yield keratins with relatively high molecular weights46.
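Band molecular weights such as the 31, 22 and 13 KDa values above are conventionally estimated from a log-linear calibration of marker migration: SDS-PAGE mobility is approximately linear in log10(MW) over the resolving range of the gel. The Python sketch below illustrates the idea; all relative mobility (Rf) values are hypothetical, and only the 10–245 KDa marker range mirrors the ladder named in the Methods.

```python
import numpy as np

# Marker bands of a 10-245 kDa ladder (the range named in the Methods);
# the relative mobilities (Rf) below are hypothetical values.
marker_mw = np.array([245, 180, 100, 75, 63, 48, 35, 25, 17, 11])   # kDa
marker_rf = np.array([0.08, 0.16, 0.31, 0.39, 0.43, 0.50,
                      0.58, 0.67, 0.77, 0.88])

# Mobility ~ linear in log10(MW), so a first-degree fit gives the
# calibration line of the gel.
slope, intercept = np.polyfit(marker_rf, np.log10(marker_mw), 1)

def estimate_mw(rf):
    """Estimate a band's molecular weight (kDa) from its relative mobility."""
    return 10 ** (slope * rf + intercept)

# Hypothetical Rf readings of three unknown keratin bands:
for rf in (0.61, 0.70, 0.84):
    print(f"Rf = {rf:.2f} -> ~{estimate_mw(rf):.0f} kDa")  # ~31, ~22, ~13 kDa
```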
Another feature of the keratins obtained from the oxidative process is that they are more water soluble than keratins obtained by the reductive method, as the disulphide bridges and secondary structures do not tend to re-form36. In the reductive process the reformation of K-S-S-K bridges renders the polypeptides thermodynamically incompatible with water, as determined previously by SDS-PAGE, solubility tests36 and static light scattering47.

Macromolecular characterisation of wool and films by FTIR

Wool keratin bioplastic films were analysed by Fourier transform infrared spectroscopy (FTIR) and the spectra were compared to that of native wool. Figure 1d shows the FTIR spectrum of a keratin film along with that of native wool in the region of 1800–1000 cm−1. Common features observed correspond to those typical of proteins48,49, such as C=O stretching at 1600–1700 cm−1 (Amide I), N-H bending at 1480–1575 cm−1 (Amide II), C-N stretching in the range 1230–1300 cm−1 (Amide III) and the Amide A band, centred at 3268 cm−1, related to N-H stretching. In addition, the spectral region between 1200–1000 cm−1, sensitive to the presence of oxidized sulphur derivatives, shows modifications upon the oxidative extraction, as observed by comparing the wool and keratin spectra. The keratin films present absorption peaks at 1042 and 1173 cm−1. In the vicinity of 1090 and 1045 cm−1, bands associated with S=O stretching in sulphinic acids (R-SO2H) and their salts have been reported50,51. Other authors have also reported that the oxidation of wool52,53,54 leads to the increase of peaks at ~1045 cm−1 and 1173 cm−1, attributed to cysteic acid (K-SO3H). Therefore, the increase in intensities, relative to the band at 1261 cm−1 (related to the Amide III mode), of the wide bands with peaks at 1173 and 1042 cm−1 observed for films in Fig. 1d can be related to the introduction of oxidized sulphur derivatives such as sulphinic or sulphonic (R-SO3H) acids. The analysis of the Amide I (1600–1700 cm−1) region, sensitive to the secondary structure of proteins, also reveals important modifications upon extraction. As can be seen in Fig. 1b, the band centred at 1647 cm−1, associated with amorphous coils and α-helices, increases upon extraction. Data extracted from the Amide I deconvolution are gathered in Table 1. The proportions of β-sheets, random coils/α-helices and β-turns change from 54, 42 and 4%, respectively, in the case of wool, to 28, 65 and 7% in the case of extracted keratin. In particular, the β-sheet to coils/helices ratio goes from 1.3 for wool to 0.4 for keratin films.

Bioplastic films from wool keratin

Films with different plasticizers were prepared by solution casting into silicone moulds. As can be seen in Fig. 2a, the extracted keratin derivative exhibited excellent film-forming ability under ambient conditions. The films have a high degree of transparency and were slightly yellowish, presumably due to the presence of amino acids such as tryptophan (6 wt%), histidine (1 wt%), arginine (10 wt%) and proline (5 wt%)29. This tendency to exhibit yellowish or brownish colours has been previously reported for other protein films, such as those based on human hair keratin45, chicken feather keratin35, soy proteins55, or fish gelatine56.
Also, as in previous reports in the literature for films from many different protein sources8,16,17,35,45,57,58 and for other biopolymers59, the pure keratin films produced here tend to be brittle under normal ambient conditions (RH < 70%), and therefore different amounts of glycerol, as well as other plasticizers, were used in order to modify the mechanical behaviour of the films. With this approach, materials with a wide range of flexibility, stiffness, plasticity and strength could be obtained, as reported below.

Optical properties and UV absorption

Analysis of the films by UV-Vis spectroscopy (Fig. 2b) showed that wool keratins have a high absorbance in the UV range (<390 nm). The absorption of proteins in the UV spectrum is related to the presence of conjugated species susceptible to electronic resonance, such as peptide bonds (200–250 nm), tryptophan and histidine, or aromatic amino acids such as phenylalanine and tyrosine (200–300 nm) present in wool29. As can be seen by comparing the data gathered in Table 2, wool keratin films present a relatively low opacity (i.e., high transparency) compared to other protein films prepared by casting, such as fish gelatine or soy protein films, or to oil-derived synthetic plastics such as low-density polyethylene (LDPE). The high absorption in the UV range combined with a relatively high transparency in the visible spectrum is a very interesting property for packaging, since the packaged goods are protected from solar UV radiation while remaining visible to the user.

Thermal behaviour

Thermogravimetric analysis (Fig. 2c) of a set of materials was performed in order to understand the impact of composition and keratin modification on film stability. Washed wool fibroins present a maximum in degradation rate at around 280 °C, similar to the pure keratin and 16 wt% SDS containing films. These processes can be related to the beginning of protein degradation, as has also been reported for keratin extracted from Merino wool28,29. The film containing glycerol presents a lower-temperature degradation process starting at ~180 °C with a peak at 234 °C, which can be related to the evaporation of glycerol (Tb = 290 °C), and a higher-temperature process with a maximum degradation rate at 320 °C. A very similar degradation pattern has been previously observed for glycerol-plasticized soy protein plastics60. Thermo-mechanical analysis of a keratin film (Fig. 2d) showed the dramatic influence of the water plasticizing effect on the mechanical properties of keratin bioplastics (film with ~10 wt% H2O). The elastic modulus of the film decreases continuously as the temperature rises from −100 °C, and just around 0 °C, when water molecules gain mobility, the modulus drops by almost a decade. In addition, an immediate increase in the modulus was observed around 45 °C, after which the modulus increase was more moderate. These two observations have also been previously reported for soy protein films containing keratin61 and were attributed to the reformation of chemical cross-links between protein macromolecules. Another plausible explanation would be the restoration of physical intermolecular associations between keratin macromolecules upon regaining mobility after water melting. The moderate increase in modulus after 50 °C could be related to the loss of the water plasticizing effect due to evaporation, and to degradation reactions leading to cross-linking.
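For the TGA results above, the temperature of maximum degradation rate is read from the derivative thermogravimetric (DTG) curve. A minimal Python sketch, using a synthetic mass-loss curve rather than the paper's data:

```python
import numpy as np

# Illustrative TGA data (NOT the paper's measurements): temperature in
# degrees C and remaining mass in percent, modelled as a sigmoidal step.
temp = np.linspace(25, 600, 1151)                     # 0.5 degC steps
mass = 100 - 80 / (1 + np.exp(-(temp - 280) / 15))    # synthetic mass loss

# DTG curve: rate of mass change with temperature; its minimum (steepest
# mass loss) marks the maximum-degradation-rate temperature.
dtg = np.gradient(mass, temp)
t_max_rate = temp[np.argmin(dtg)]
print(f"Maximum degradation rate at ~{t_max_rate:.0f} degC")  # ~280 degC here
```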
Impact of film chemistry on ambient water absorption

Given the importance of the presence of water in the bulk of biopolymer-based materials such as protein films, we performed a kinetic analysis of water absorption (T = 24 ± 3 °C; RH = 65 ± 3%) into wool, as-extracted keratin, and derived bioplastic films compounded with either glycerol or SDS (Fig. 2e). Table 3 gathers the water absorption kinetic constants obtained from the analysis of the data shown in Fig. 2e. It can be seen that the pure keratin film already exhibits significant differences with respect to the original wool fibroins. Wool fibres absorb less water despite their higher porosity compared to the films. The more hydrophilic character of the films can be associated with the chemical modifications arising from the extraction procedure, which introduces hydrophilic groups such as oxidized sulphur species, as described in Eq. (1). The results also show significant differences depending on the chemical modification introduced to the bioplastic. The pure keratin film exhibits an intermediate behaviour between the films with glycerol and SDS. The lower rate and amount of water absorbed by films modified with SDS can be related to the amphiphilic nature of this additive. The SDS hydrophilic head (-SO4−) might interact with cationic groups of the keratin macromolecules, such as those of histidines, lysines and arginines62. On the other hand, the hydrophobic hydrocarbon tails might reduce the hydrophilic nature of the complex by hindering H2O penetration. The opposite situation occurs when hydrophilic glycerol is used as an additive, and a higher proportion of H2O is absorbed than in the pure keratin film. These results indicate that with the addition of small proportions of additives (i.e., <20 wt%) it is possible to significantly tune the hygroscopic behaviour of the bioplastic. The addition of SDS resulted in ca. 40% less water absorbed in comparison with the film with glycerol. As discussed below, water molecules act as plasticizers. Therefore, the use of amphiphilic additives such as SDS would tune water absorption and allow control of the bioplastic's mechanical properties.

Relative humidity influence on mechanical properties

The presence of H2O in the environment (i.e., relative humidity, RH) has a dramatic influence on the water content of the keratin bioplastics, as shown in Table 4. By modifying the RH from 16 to 65%, the amount of water within the bioplastic goes from 4.3 ± 0.4 to 33.7 ± 0.6% H2O, which has a direct influence on the strength and strain at break of the bioplastics, as also shown in Table 4. The strength varies inversely with water content, decreasing by more than an order of magnitude from 17.9 ± 2.1 MPa at 16% RH to 0.6 ± 0.2 MPa at 65% RH. The strain at break increases with water content up to ~15% H2O, but at higher H2O contents the strain decreases again, probably due to a lack of cohesion within the material.

Plasticized keratin films

Keratin was combined with either glycerol or SDS up to a concentration of 28 wt%. In the case of SDS the resulting bioplastics were virtually as brittle as the pure keratin films and could not be tested. In Fig. 3a, representative stress-strain curves of bioplastics with different glycerol contents are shown, where the high impact of plasticizer content on mechanical properties can be seen.
With a glycerol content of 11 wt%, the bioplastic behaves as a relatively brittle material, with a small breaking strain, a high elastic modulus and a relatively high breaking stress. By increasing the glycerol content to over 16 wt%, the material properties were modified substantially (Fig. 3b), the breaking strain increasing up to 50% and the breaking strength decreasing down to 5 MPa. In addition, the material plasticized with more than 16 wt% glycerol presented the typical stress-strain behaviour of a plastic, with a yield at about 10% deformation and 6 MPa of stress, followed by plastic deformation up to failure. The toughness of the bioplastics tends to increase with glycerol content. The brittle nature of non-plasticized keratin films in comparison to wool fibres, which are typically strong (150–200 MPa), compliant (extensibility = 30–60%) and therefore relatively tough63, can be related to the chemical and morphological modifications described above, as determined by FTIR spectroscopy and SDS-PAGE. Splitting of disulphide bridges, disruption of β-sheets and partial hydrolysis of the macromolecules reduce the capacity for mechanical stress transfer and decrease fracture toughness. Plastics containing 28% glycerol were thermally cross-linked and the effect of formaldehyde addition was analysed. The action of glycerol is not clear at this stage, but it is plausible that its cross-linking ability is based on the esterification of amide functions by the hydroxyls of glycerol, thereby promoting network formation between three different keratin macromolecules (Fig. 4a). Formaldehyde is known to act as a protein cross-linker with the ability to react with two different secondary or primary amino groups of a protein structure, favouring chain extension and cross-linking through the formation of bridges based on functional groups such as ethers, methylenes or imines (Fig. 4b)64,65. Other plausible cross-linking reactions comprise the formation of sulphonyl formamides upon the reaction of sulphonic acids with terminal amides in the presence of aldehyde66. The cross-linking is demonstrated by solubility tests in basic aqueous solutions, in which it is observed that after the thermal cross-linking only the pure keratin film remains soluble. The glycerol and formaldehyde render the material insoluble in water, leading to hydrogels (Fig. S3). Figure 4c shows representative stress-strain curves of keratin films containing 28% glycerol before and after thermal treatment at 80 °C for 24 h. Figure 4d shows the elastic moduli and toughness of materials with 28% glycerol prepared from film-forming solutions containing 0, ~2 and 6% of formaldehyde, respectively, before and after treatment at 80 °C. It can be seen that the effect of the chemical cross-linking is not revealed before the thermal treatment at 80 °C. On the contrary, the presence of free formaldehyde might have a certain plasticizing effect, as evidenced by the lower moduli and tensile strengths exhibited by materials with increased amounts of formaldehyde. As can be seen, the thermal treatment leads to unique materials characterized by an unprecedented toughness for this type of protein film. The material cross-linked only with glycerol exhibited a modulus comparable to that of the bioplastic with 16 wt% glycerol, but with a deformation about five times higher and a significantly superior tensile strength. Thermal cross-linking of plasticized keratin films thus leads to materials with unprecedented toughness.
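Toughness, as reported below in MJ·m−3, is the mechanical work to failure per unit volume, i.e., the area under the tensile stress-strain curve; since 1 MPa equals 1 MJ·m−3 when strain is dimensionless, integrating stress over strain gives the value directly. A minimal sketch with illustrative data, not the paper's measurements:

```python
import numpy as np

# Illustrative stress-strain data (NOT the paper's measurements):
# strain as a fraction, stress in MPa.
strain = np.array([0.00, 0.05, 0.10, 0.20, 0.40, 0.80, 1.20, 1.60])
stress = np.array([0.0,  3.0,  6.0,  6.5,  7.0,  8.0,  9.5, 11.0])

# Toughness = area under the stress-strain curve. Integrating stress (MPa)
# over dimensionless strain yields the energy to failure in MJ/m^3.
toughness = np.trapz(stress, strain)
print(f"Toughness ~ {toughness:.1f} MJ/m^3")  # ~12.9 MJ/m^3 for this curve
```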
Films with 28 wt% glycerol cross-linked at 80 °C reach a toughness of 19 ± 4 MJ·m−3. In general, the toughness values of the thermally cross-linked bioplastics are much higher than those obtained by other authors for protein films. For example, Song et al. recently obtained bioplastics from chicken feather keratins (CFK) reinforced with dialdehyde cellulose nanocrystals (DCNC)34. They obtained the best performance with 5 wt% of DCNC, exhibiting a strength of 22 MPa and a deformation at break of 30%, resulting in a toughness of about 5.4 MJ·m−3. In Fig. 5, the mechanical data of the bioplastic films analysed here are summarized and presented along with data from the literature. It can be seen that the properties of the materials analysed here lie on the tough side of the chart, with high deformations and good strengths. For example, the bioplastics developed here exhibit a higher strength than that reported for CFK. Only CFK cross-linked with 5 wt% of DCNC presented slightly higher strengths. In contrast, while the plasticized wool keratin film with 28 wt% glycerol thermally cross-linked with formaldehyde shows about half of the strength of the DCNC cross-linked CFK, it presents an almost an order of magnitude higher deformation at break, and therefore a much higher toughness. The mechanical properties and, particularly, the toughness of other films based on sunflower protein isolate (SFPI), soy protein isolate (SPI), whey protein isolate (WPI), fish gelatin (FG) or pea protein isolate (PPI) have also been reported to be generally inferior.

Biodegradation and life cycle analysis of wool-derived bioplastics

Life cycle analysis

Breeding lambs or chickens for the sole purpose of producing keratin-based bioplastics from their wool and feathers would not be beneficial in terms of sustainability, given the extent of land use. It is for this reason that materials such as leather carry a very high energy footprint25. Similarly, Life Cycle Analyses (LCA) of plant protein bioplastics, such as those derived from soy, have cast doubts on their present sustainability due to the extent of land and fertilizers used for their production67. But at present, due to the poor quality of certain wools and feathers, they are discarded as residues. Sheep are a source of milk, cheese and meat, and wool is a side-product. Therefore, when performing an LCA this consideration should be taken into account, together with the benefits of an appropriate management of these residues and of a partial substitution of currently available oil-derived plastics. In addition, LCA analyses of other protein-based films have pointed out that biodegradable plastics might have a highly positive impact at the end of their life cycle in a composting scenario, due to their fast integration into the soil by microorganisms and the reduction of the amount of waste sent to other treatments such as incineration, avoiding the emissions associated with these treatments17,67.

Conclusions

Wool residues can be considered an attractive source of macromolecules for small productions of bioplastics with specific applications. Wool keratins can be converted into water-soluble proteins with high molecular weight by means of simple chemical treatments and can be turned into films with plastic properties by using different ratios of plasticizers. The use of amphiphilic plasticizers such as SDS leads to a more hydrophobic material but with a smaller plasticizing effect.
The materials show good transparency, UV barrier properties, thermal stability in the range of 50–200 °C, sensitivity to humidity and mechanical properties comparable with the best reported protein-based plastics. Specifically, when cross-linked, the plasticised bioplastics exhibited superb fracture energy absorption, as determined by their high intrinsic toughness. These protein-based films might find applications in a diverse range of fields such as regenerative medicine, bioplastics, coatings or packaging.

Materials and Methods

Materials

Wool from Lacha black face sheep (Spanish: Lacha caranegra; Basque: Latxa mutturbeltz; French: Manech; Latin: Ovis aries) was kindly provided by Quesería-Etxetxipia-Gaztegieta, a cheese farm in Elizondo, in the middle north of Spain. The fibres consisted of ~30 cm long strands in which the tips were slightly more yellowish than the stems.

Keratin extraction

Wool fibres were washed twice in a washing machine with a solid surfactant, and two more times with water only. The fibres were then dried at 25 °C for several days. The keratin extraction with H2O2 as the disulphide-splitting agent was carried out as explained elsewhere36. Briefly, wool (75 g) was cut into ~2 cm long fibres and suspended in 2 M H2O2 aqueous solution (1.2 L). To aid oxidation, NaClO (75 mL) was added, although this might reduce the extraction yield due to the possible formation of cross-links such as lysinoalanines, lanthionines or sultones36,68,69. The pH of this suspension was raised to ~10, the solution volume was completed to 1.5 L with ultrapure deionized water (Wasserlab, Pamplona, Spain) and the temperature was taken to 50 °C. The reaction was held for 2 h (see Fig. S1a), when an additional 2 L of ultrapure water was added to quench the reaction. The solution was filtered through filter paper and a yield of about 81% was obtained, as measured from the solid residue remaining in the filter. An aliquot of 15 mL of the filtrate was titrated by adding HCl (3 N) drop-wise and measuring the pH with an electronic pH-meter. The pH of the filtrate was lowered by adding drops of HCl (3 N) until pH ~4, when keratin precipitated as a white polymer (Fig. S1b,c). The precipitate was washed with an isopropanol/water mixture and stored in the fridge. One oxidative extraction with H2O2 (0.5 N) was carried out to compare the molecular weight distribution with that obtained by a reductive extraction with thioglycolic acid (HS-CH2-COOH, 0.5 N) as the disulphide-splitting agent.

Determination of the molecular weight distribution (SDS-PAGE)

The precipitates were inserted into cellophane tubes (Viscofán S.A., Spain) and dialyzed against 16 MΩ ultrapure water (Wasserlab, Pamplona, Spain) for 3 days, changing the water regularly. The molecular weight of the keratins was determined by sodium dodecyl sulphate (SDS) polyacrylamide gel electrophoresis (PAGE). The gels were made up of three distinct fractions in order to have a good resolution of the high and low molecular weight keratin fractions. The gel fractions contained 4, 10 and 18 wt% of acrylamide/bisacrylamide (Acryl/Bis) and were buffered with Tris-HCl, 0.375 M at pH = 8.8. Tris-HCl stands for tris(hydroxymethyl)aminomethane (Panreac, Spain) neutralized with HCl to the indicated pH. Keratin samples (~3 mg) were dispersed in a buffer based on Tris-HCl 20 mM, pH = 8.8, with 2% SDS, urea 3.6 M, 15% glycerol and 5% β-mercaptoethanol (β-M, Panreac), up to a concentration of 4 mg·mL−1.
Bromophenol blue was pipetted to a concentration of 0.05 wt% and the samples were incubated in 2 mL centrifuge (Eppendorf) tubes at 97 °C for 3 min. Then, the samples were sonicated in a bath for 10 min and centrifuged for 10 s. A volume of 7 μL of sample and protein marker (Protein Marker VI, 10–245 KDa, Panreac, Spain) was loaded into the bottom of the wells. The electrophoresis was run at 200–240 V in a Mini-PROTEAN (Bio-Rad) system with a Tris-glycine-SDS (0.375 M, pH = 8.8) buffer, until the bromophenol reached the bottom of the third gel. The gels were stained with a 5 wt% Coomassie Blue solution in a mixture of methanol and acetic acid. The gels were developed by washing with a mixture of methanol and acetic acid, stored in plastic bags in the fridge with part of the developing solvent, scanned using an HP-Scanjet-5590 scanner and analysed using the free software ImageJ.

Preparation of keratin bioplastic films

The keratin precipitate was concentrated in a mixture of isopropanol/water (~50:50) to a concentration of 63 g·L−1. Keratin was dissolved by drop-wise addition of concentrated NaOH up to pH ~10. Aliquots (15 mL, ~0.95 g of keratin) of this solution were mixed with different volumes of glycerol (50 vol% aqueous solution) and SDS (10 wt% aqueous solution), and were sonicated in a bath for ~10 s before being cast into silicone moulds. These were placed in a lab hood and the water was evaporated with a constant air flow at ambient temperature (24 ± 2 °C). Blank samples were cast with no additives and, once dried, were weighed to determine the keratin concentration in the stock solutions. With this methodology, bioplastic films containing 4, 6, 11, 14 and 16 wt% of glycerol and 6, 11 and 16 wt% of SDS were obtained.

Elemental analysis

The atomic composition of wool fibroin and keratin films was determined by analysing ~5 mg of dried samples in a TruSpec-Micro (LECO) analyser.

Fourier transform infrared spectroscopy (FTIR)

Keratin-based bioplastics were analysed by attenuated-total-reflection (ATR) FTIR spectroscopy in a Jasco FTIR spectrophotometer, recording 25 scans with 4 cm−1 resolution in the range of 4000–400 cm−1. A spectrum of washed wool fibres was recorded for comparison.

Optical characterization by UV-Vis spectroscopy

Ultraviolet and visible light spectra (UV-Vis) were recorded in the wavelength range of 800–200 nm using a DH-mini UV-Vis-NIR light source from Ocean Optics. Films were placed across the beam and the spectra were recorded. Film opacity was defined as the absorbance at 600 nm divided by the film thickness (measured with a digital micrometer).

Thermogravimetric analysis (TGA)

Thermogravimetric analyses (HI-RES 2950, TA Instruments) of washed wool and bioplastic films were performed in the temperature range of 25–600 °C at a heating rate of 10 °C·min−1 in an air atmosphere with a constant 60 mL·min−1 air flow. Sample weights were about 2.5 mg.

Dynamic-mechanical thermal analysis (DMTA)

Tests were carried out in a DMA-Eplexor (Gabo Qualimeter) dynamic analyser. Specimens were rectangular, with a ~5 × 0.15 mm2 cross section, and the initial cross-head distance was 2.5 cm. The initial contact force was 0.5 N, the dynamic strain 0.03%, the heating rate 3 °C·min−1, and the scanning frequency 1 Hz. The temperature range analysed was from −100 °C to 200 °C.

Impact of plasticizer nature on H2O absorption kinetics

Three pieces, with weights in the range of 20–60 mg, of films of pure keratin and of bioplastics with 16% of SDS or glycerol were dried to constant weight at 90 °C under vacuum. They were weighed again (m0) and set into individual wells of a plastic blister. This was inserted into a 10 cm × 15 cm zip plastic bag (maximum volume of ~315 cm3) containing a saturated NaCl solution (65 ± 3% RH, T = 24 ± 3 °C). The RH was checked using an electronic pocket humidity meter (HDT-305E) provided with a tip probe. The RH dropped to ambient conditions when the zip was opened, but recovered its original value in about 10 min once the zip was closed again. Samples were withdrawn from the bag and weighed at different periods of time (mt). The same experiment was repeated with three bunches (0.2–0.9 g) of washed and dried wool. The absorbed H2O was expressed as:

$$\mathrm{Absorbed\ H_2O}\,(\%) = 100 \cdot (m_t - m_0)/m_0 \qquad (2)$$
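Eq. (2) translates directly into code; the weighings in this sketch are hypothetical:

```python
def absorbed_water_pct(m_t, m_0):
    """Absorbed H2O (%) relative to the dry mass m_0, per Eq. (2)."""
    return 100.0 * (m_t - m_0) / m_0

# Hypothetical weighings (grams): a dried film regaining ambient moisture.
print(absorbed_water_pct(m_t=0.0542, m_0=0.0471))  # ~15.1 %
```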
Impact of RH on absorbed H2O at equilibrium

The maximum water absorbed into films with 14 wt% glycerol was determined under environments with different humidity at a temperature of 24 ± 3 °C. Three dried pieces were inserted into zip plastic bags containing saturated salt solutions, achieving atmospheres with different RH (LiCl, 16 ± 2% RH; desiccator, 30 ± 3% RH; ambient, 53 ± 2% RH; NaCl, 65 ± 3% RH; KCl, 70 ± 3% RH; Na2CO3, 79 ± 3% RH).

Tensile testing

Samples for tensile testing were conditioned for a minimum of 48 h at 24 ± 3 °C in a desiccator (RH ~30 ± 3%). In addition, samples of the bioplastic film with 14 wt% glycerol were conditioned in the other environments described above, in order to assess the impact of the water plasticizing effect. The specimens had a cross section of ~3.5 × 0.5 mm2 and the initial cross-head distance was 8 mm. Tensile tests were performed in a Shimadzu tensile tester equipped with a 200 N load cell. The cross-head speed was set to 5 mm·min−1. A minimum of three specimens were measured per sample.

Biodegradation tests

Representative samples conditioned in a desiccator (~40% RH) were cut into ~1.5 × 2 cm2 pieces, weighed and immersed into commercial compost-derived soil (Abonos Naturales, Hnos. Aguado S.L.), with 37.3% organic matter, 13.9% organic carbon, 0.8% organic nitrogen, 1.3% total nitrogen and a pH of 8.2. The soil (20 kg) was mixed with deionized water (8 kg) in a 25 cm deep plastic bag with a surface of 30 × 50 cm2. Samples were submerged at about 5 cm from the surface and water was sprinkled on top regularly to keep the water content constant. Samples were withdrawn at different times, conditioned in a desiccator (~40% RH) and weighed. Experiments were performed in triplicate.

References

1. Ashby, M. F. Materials and the Environment, 2nd Edition. BH-Elsevier, Waltham, USA (2013).
2. MATER-BI, http://materbi.com, accessed (2018).
3. Ortega-Toro, R., Bonilla, J., Talens, P. & Chiralt, A. Future of Starch-Based Materials in Food Packaging. In "Starch-Based Materials in Food Packaging", Ed. Villar, M. A., Barbosa, S. E., García, M. A., Castillo, L. A. & López, O. V., Elsevier (2018).
4. He, M. et al. Biocompatible and Biodegradable Bioplastics Constructed from Chitin via a "Green" Pathway for Bone Repair. ACS Sustainable Chem. Eng. 5, 9126–9135 (2017).
5. Rinaudo, M. Biomaterials based on a natural polysaccharide: alginate. TIP 17, 92–96 (2014).
6. Fernández-d'Arlas, B., Castro, C. & Eceiza, A. Obtención de fibras de alginato mediante hilado por coagulación con sulfatos de metales multivalentes. Rev. Latinoam. Met. Mat. 35, 189–200 (2015).
7. Wang, Q. et al. A bioplastic with high strength constructed from a cellulose hydrogel by changing the aggregated structure. J. Mater. Chem. A 1, 6678–6686 (2013).
8. Guerrero, P., Retegi, A., Gabilondo, N. & de la Caba, K. Mechanical and thermal properties of soy protein films processed by casting and compression. J. Food Eng. 100, 145–151 (2010).
9. Garrido, T. et al. Valorization of soya by-products for sustainable packaging. J. Clean. Prod. 64, 228–233 (2014).
10. Liu, X. et al. Development of Eco-friendly Soy Protein Isolate Films with High Mechanical Properties through HNTs, PVA, and PTGE Synergism Effect. Sci. Rep. 7, 44289 (2017).
11. Orliac, O., Silvestre, F., Rouilly, A. & Rigal, L. Rheological studies, production, and characterization of injection-molded plastics from sunflower protein isolate. Ind. Eng. Chem. Res. 42, 1674–1680 (2003).
12. Tromborn, C. Jewelry Stone Made of Milk. GZ Art+Design (2004), accessed (2018).
13. Bendettaieb, N., Gay, J. P., Kaarbowiak, T. & Debeaufort, F. Tuning the Functional Properties of Polysaccharide–Protein Bio-Based Edible Films by Chemical, Enzymatic, and Physical Cross-Linking. Compreh. Rev. Food Sci. Saf. 15, 739–752 (2016).
14. Belyamani, I., Prochazka, F. & Assezat, G. Production and characterization of sodium caseinate edible films made by blown-film extrusion. J. Food Eng. 121, 39–47 (2014).
15. Pérez-Gago, M. B., Nadaud, P. & Krochta, J. M. Water vapor permeability, solubility, and tensile properties of heat-denatured versus native whey protein films. J. Food Sci. 64, 1034–1037 (1999).
16. Ma, B., Qiao, X., Hou, X. & Yang, Y. Pure keratin membrane and fibers from chicken feather. Intern. J. Biol. Macrom. 89, 614–621 (2016).
17. Etxabide, A., Leceta, I., Cabezudo, S., Guerrero, P. & de la Caba, K. Sustainable fish gelatin films: from food processing waste to compost. ACS Sust. Chem. Eng. 4, 4626–4634 (2016).
18. Geissdoerfer, M., Savaget, P., Bocken, N. M. P. & Hultink, E. J. The Circular Economy – A new sustainability paradigm? J. Clean. Prod. 143, 757–768 (2017).
19. Fundación para la Economía Circular (FEC). ¿Por qué y cómo elaborar estrategias de economía circular en el ámbito regional? (2017).
20. Paniagua, E. Biocombustibles no tan "bio". El Mundo-Papel (2016).
21. Colchester et al. Promised Land. Palm oil and land acquisition in Indonesia: Implications for local communities and indigenous peoples. HuMA & World Agroforestry Centre (2006).
22. Marsal Amenós, F., Morral Romeu, E. & Palet Alsina, D. Puesta en valor de lanas y pieles de producción nacional. Ministerio de Medio Ambiente y Medio Rural y Marino, Dirección General de Recursos Agrícolas y Ganaderos, Subdirección General de Productos Ganaderos, Madrid (2009).
23. Pérez, L. La lana, un problema para las granjas vizcaínas: Nadie la quiere, ni regalada. El Correo, December 2017.
24. European Commission. A sustainable bioeconomy for Europe: strengthening the connection between economy, society and the environment. Publications Office of the European Union, Brussels, October 2018.
25. Coward-Kelly, G., Chang, V. S., Agbogbo, F. K. & Holtzapple, M. T. Lime treatment of keratinous materials for the generation of highly digestible animal feed: 1. Chicken feathers. Biores. Techn. 97, 1337–1343 (2006).
26. Jie, M., Raza, W., Xu, Y. & Shen, Q. Preparation and optimization of amino acid chelated micronutrient fertilizer by hydrolyzation of chicken waste feathers and the effects on growth of rice. J. Plant Nutr. 31, 571–582 (2008).
27. Vega-Zavaleta, L. et al. Aplicación de residuos sólidos hidrolizados del proceso de pelambre enzimático como fuente de aminoácidos libres en el crecimiento de plántulas de maíz. Saber y Hacer 2, 24–32 (2014).
28. Zoccola, M., Aluigi, A. & Tonin, C. Characterization of keratin biomass from butchery and wool industry wastes. J. Molec. Struc. 938, 35–40 (2009).
29. Aluigi, A. et al. Study on the structure and properties of wool keratin regenerated from formic acid. Intern. J. Biol. Macromol. 41, 266–273 (2007).
30. Goddard, D. R. & Michaelis, L. A study on Keratin. J. Biol. Chem. 106, 605–614 (1934).
31. Vasconcelos, A., Freddi, G. & Cavaco-Paulo, A. Biodegradable Materials Based on Silk Fibroin and Keratin. Biomacromolecules 9, 1299–1305 (2008).
32. Tonin, C. et al. Thermal and structural characterization of poly(ethylene-oxide)/keratin blend films. J. Therm. Anal. Cal. 89, 601–608 (2007).
33. Xu, H., Ma, Z. & Yang, Y. Dissolution and regeneration of wool via controlled disintegration and disentanglement of highly crosslinked keratin. J. Mater. Sci. 49, 7513–7521 (2014).
34. Song, K., Xu, H., Xie, K. & Yang, Y. Keratin-based biocomposites reinforced and cross-linked with dual-functional cellulose nanocrystals. ACS Sust. Chem. Eng. 5, 5669–5678 (2017).
35. Tran, C. D. & Mututuvari, T. M. Cellulose, Chitosan and Keratin Composite Materials: Facile and Recyclable Synthesis, Conformation and Properties. ACS Sustainable Chem. Eng. 4, 1850–1861 (2016).
36. Fernández-d'Arlas, B. Improved aqueous solubility and stability of wool and feather proteins by reactive-extraction with H2O2 as bisulfide (-S-S-) splitting agent. Eur. Polym. J. 103, 187–197 (2018).
37. Bradbury, J. H. The morphology and chemical structure of wool. Pure & Appl. Chem. 46, 247–253 (1976).
38. Poole, L. B. The Basics of Thiols and Cysteines in Redox Biology and Chemistry. Free Radic. Biol. Med. 80, 148–157 (2015).
39. Xu, H. & Yang, Y. Controlled De-Cross-Linking and Disentanglement of Feather Keratin for Fiber Preparation via a Novel Process. ACS Sustainable Chem. Eng. 2, 1404–1410 (2014).
40. Zheng, S., Nie, Y., Zhang, S., Zhang, X. & Wang, L. Highly Efficient Dissolution of Wool Keratin by Dimethylphosphate Ionic Liquids. ACS Sustainable Chem. Eng. 3, 2925–2932 (2015).
41. Mukherjee, M. & Ray, A. R. Biomimetic oxidation of L-arginine with hydrogen peroxide catalyzed by the resin-supported iron (III) porphyrin. J. Molecul. Cat. A: Chem. 266, 207–214 (2007).
42. Verrastro, I., Pasha, S., Jensen, K. T., Pitt, A. R. & Spickett, C. M. Mass Spectrometry-Based Methods for Identifying Oxidized Proteins in Disease: Advances and Challenges. Biomolecules 5, 378–411 (2015).
43. Stadtman, E. R. Oxidation of free amino acids and amino acid residues in proteins by radiolysis and by metal-catalyzed reactions. Annu. Rev. Biochem. 62, 797–822 (1993).
44. Fuchs, E. & Marchuk, D. Type I and Type II Keratins Have Evolved from Lower Eukaryotes to Form the Epidermal Intermediate Filaments in Mammalian Skin. Proc. Natl. Acad. Sci. USA 80, 5857–5861 (1983).
45. Reichel, L. S. & Müller-Goymann, C. C. Keratin film made of human hair as nail plate model for studying drug permeation. Eur. J. Pharm. Biopharm. 78, 432–440 (2011).
46. Brown, E. M., Pandya, K., Taylor, M. M. & Liu, C. K. Comparison of Methods for Extraction of Keratin from Waste Wool. Agricul. Sci. 7, 670–679 (2016).
47. Fernández-d'Arlas, B., Peña, C. & Eceiza, A. Extracción de la queratina de la lana de oveja "Latxa". Rev. Iberoam. Polím. 17, 110–121 (2016).
48. Kong, J. & Yu, S. Fourier Transform Infrared Spectroscopic Analysis of Protein Secondary Structures. Acta Biochim. Biophys. Sin. 39, 549–559 (2007).
49. Barth, A. Infrared spectroscopy of proteins. Biochim. Biophys. Acta 1767, 1073–1101 (2007).
50. Lindberg, B. J. Studies on Sulfinic Acids. Acta Chem. Scand. 21, 2215–2234 (1967).
51. Noguchi, T., Nojiri, M., Takei, K., Odaka, M. & Kamiya, N. Protonation structures of Cys-sulfinic and Cys-sulfenic acids in the photosensitive nitrile hydratase revealed by Fourier transform infrared spectroscopy. Biochemistry 42, 11642–11650 (2003).
52. Cardamone, J. M., Nuñez, R., Garcia, R. A. & Aldema-Ramos, M. Characterizing wool keratin. Res. Lett. Mat. Sci. 147175, 1–5 (2009).
53. Fan, J. & Yu, W. High Yield Preparation of Keratin Powder from Wool Fiber. Fibers and Polymers 13, 1044–1049 (2012).
54. Yu, Y., Yang, W. & Meyers, M. A. Viscoelastic properties of α-keratin fibers in hair. Acta Biomaterialia 64, 15–28 (2017).
55. Guerrero, P., Hanani, Z. A. N., Kerry, J. P. & de la Caba, K. Characterization of soy protein-based films prepared with acids and oils by compression. J. Food Eng. 107, 41–49 (2011).
56. Etxabide, A., Urdanpilleta, M., Gómez-Arriaran, I., de la Caba, K. & Guerrero, P. Effect of pH and lactose on the cross-linking extension and structure of fish gelatin films. Reac. Funct. Polym. 117, 140–146 (2017).
57. Pérez, V., Felix, M., Romero, A. & Guerrero, A. Characterization of pea protein-based bioplastics processed by injection molding. Food Bioprod. Process. 97, 100–108 (2016).
58. Leceta, I., Urdanpilleta, M., Zugasti, I., Guerrero, P. & de la Caba, K. Assessment of gallic acid-modified fish gelatin formulation to optimize the mechanical performance of films. Inter. J. Biol. Macromol. 120, 2131–2136 (2018).
59. Vieira, M. G. A., da Silva, M. A., dos Santos, L. O. & Beppu, M. M. Natural-based plasticizers and biopolymer films: A review. Eur. Polym. J. 47, 254–263 (2011).
60. Tian, H. et al. Microstructure and properties of glycerol plasticized soy protein plastics containing castor oil. J. Food Eng. 109, 496–500 (2012).
61. Garrido, T., Leceta, I., de la Caba, K. & Guerrero, P. Chicken feather as natural source of sulphur to develop sustainable protein films with enhanced properties. Inter. J. Biol. Macromol. 106, 523–531 (2018).
62. Schrooyen, P. M. M., Dijkstra, P. J., Oberthür, R. C., Bantjes, A. & Feijen, J. Stabilization of Solutions of Feather Keratins by Sodium Dodecyl Sulfate. J. Coll. Interf. Sci. 240, 30–39 (2001).
63. Wang, B., Yang, W., McKittrick, J. & Meyers, M. A. Keratin: Structure, mechanical properties, occurrence in biological organisms, and efforts at bioinspiration. Progr. Mat. Sci. 76, 229–318 (2016).
64. Feairheller, S. H., Taylor, M. M., Gruber, H. A., Mellon, E. F. & Filachione, E. M. Some properties of collagen modified by the Mannich reactions. J. Polym. Sci. Part C 24, 163–177 (1968).
65. Puchtler, H. & Meloan, S. N. On the chemistry of formaldehyde fixation and its effects on immunohistochemical reactions. Histochemistry 82, 201–204 (1985).
66. Lujan-Montelongo, J. A., Oeda-Estevez, A. & Fleming, F. F. Alkyl Sulfinates: Formal Nucleophiles for Synthesizing TosMIC Analogs. European J. Org. Chem. 7, 1602–1605 (2015).
67. Leceta, I., Etxabide, A., de la Caba, K. & Guerrero, P. Bio-based films prepared with by-products and wastes: environmental assessment. J. Clean. Prod. 64, 218–227 (2014).
68. Gacén-Guillen, J. Aspectos químicos del blanqueo de la lana con peróxido de hidrógeno: Modificación química de la queratina. Bol. Instit. Invest. Text. Coop. Indust. 39, 43–70 (1964).
69. Alexander, P., Fox, M. & Hudson, R. F. The Reaction of Oxidizing Agents with Wool. Biochem. J. 49, 129–138 (1951).
70. Kowalczyk, D. & Baraniak, B. Effects of plasticizers, pH and heating of film-forming solution on properties of pea protein isolate films. J. Food Eng. 105, 295–305 (2011).
71. Sun, Q., Sun, C. & Xiong, L. Mechanical, barrier and morphological properties of pea starch and peanut protein isolate blend films. Carbohyd. Polym. 98, 630–637 (2013).
72. Shiku, Y., Hamaguchi, P. Y. & Tanaka, M. Effect of pH on the preparation of edible films based on fish myofibrillar proteins. Fish. Sci. 69, 1026–1032 (2003).

Acknowledgements

BFD acknowledges "Fundación Caja Navarra" and "Obra Social La Caixa", in the framework of UPNA's program "Captación de Talento", for funding the project "Biomateriales a partir de queratinas de lanas y plumas". BFD also acknowledges Dr. Santiago Reinoso, Dr. Ismael Pellejero and Dr. Iñaki Pérez de Landazábal from INAMAT for sharing research infrastructure and for discussions, as well as UPNA for providing independence in the research. The author would also like to acknowledge Prof. Arantxa Eceiza from the Universidad del País Vasco (UPV/EHU) for help in determining the thermo-mechanical character of the keratin films.

Author information

B.F.D. conceived and performed the experiments and wrote the manuscript. Correspondence to Borja Fernández-d'Arlas.

Ethics declarations

The author declares no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Fernández-d'Arlas, B. Tough and Functional Cross-linked Bioplastics from Sheep Wool Keratin. Sci Rep 9, 14810 (2019). https://doi.org/10.1038/s41598-019-51393-5
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-2-functions-and-their-graphs-2-1-functions-2-1-assess-your-understanding-page-56/8
## Precalculus (10th Edition)

$[0,5]$

The domain of the sum of two functions is the intersection of the two functions' domains. Note that the common elements of $[0, 7]$ and $[-2, 5]$ are the numbers in $[0, 5]$. Hence, the answer here is $[0,5]$.
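A small Python sketch of the same intersection rule (the function name and the tuple representation of closed intervals are illustrative, not from the textbook):

```python
def domain_of_sum(dom_f, dom_g):
    """Domain of f + g as the intersection of two closed intervals.

    Intervals are (lo, hi) tuples; returns None when they do not overlap.
    """
    lo = max(dom_f[0], dom_g[0])
    hi = min(dom_f[1], dom_g[1])
    return (lo, hi) if lo <= hi else None

print(domain_of_sum((0, 7), (-2, 5)))  # -> (0, 5)
```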
https://danshved.wordpress.com/2013/04/15/a-problem-about-polynomial-functions/
### A problem with polynomial functions

Here is a problem that popped up in my head about a year ago.

Problem 1. A function $f: \mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$ is polynomial of each argument, i.e. for every $x_0 \in \mathbb{Z}$ the function $y \to f(x_0, y)$ can be represented by a polynomial with integer coefficients, and the same goes for $x \to f(x, y_0)$ for every $y_0$. Is it true then that $f$ must be polynomial of both arguments simultaneously, i.e. representable by an element of $\mathbb{Z}[x,y]$?

If you've ever been a freshman in an analysis course, you are bound to be aware of a common trap with the notion of differentiability. The trap is like this: you are asked to prove that a function $g: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is differentiable everywhere. You have several months of single-variable analysis behind your back. So your first instinct may be to prove that $\partial g / \partial x$ and $\partial g / \partial y$ exist at every point and be done with it. But this is not enough, as demonstrated by

$\displaystyle g(x,y) = \left\{ \begin{array}{ll} 0 & \quad \mathrm{at\ point}\ (0,0) \\ \frac{xy}{\sqrt{x^2+y^2}} & \quad \mathrm{everywhere\ else.} \end{array} \right.$

Problem 1 asks if there is the same trap with polynomiality (not a word?) as there is with differentiability. When I came up with problem 1 I was busy with more "serious" math, so the question didn't get any real thought. I always suspected that it wouldn't take long to solve, but somehow I was too lazy to do the job. That is, I was until this morning, when I finally put in the required several minutes and got this thing sorted out. If you're curious, you might want to do the same before moving on to the next paragraph.

Without further ado, the answer to problem 1: no. A function can be polynomial of each argument, but not polynomial overall. Here is an example:

$\displaystyle f(x,y) = \sum_{n=0}^\infty \prod_{k=0}^{n} (x^2-k^2)(y^2-k^2).$

The sum is actually finite at every point $(x,y)$: all the terms with $n \geqslant \min(|x|,|y|)$ are zero. Also, if we fix $x_0 \in \mathbb{Z}$, then $f(x_0,y)$ becomes a polynomial of $y$ with degree at most $2|x_0|$. It remains to prove that $f(x,y)$ is not polynomial itself. But if it were, then $g(x)=f(x,x)$ would too. (I wonder if this was grammatically correct). And it is clear that $g(x)$ grows faster than any polynomial, so we are done.

Now, what if we replace $\mathbb{Z}$ with another ring? Something larger maybe, like all rational numbers, or all real numbers. Will the trick still work? In case of the rationals it will, with minor modifications.

Exercise. Problem 1 still has a negative answer when $\mathbb{Z}$ is replaced with $\mathbb{Q}$.

But if we move on from $\mathbb{Z}$ to the uncountable $\mathbb{R}$, the trick stops working. In fact, the problem changes its answer.

Fact of life. If a function $f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is polynomial of each argument, then it is polynomial.

Proof. Let's introduce some new notation for convenience. For every $x \in \mathbb{R}$, define $l_x: \mathbb{R} \to \mathbb{R}$ by $l_x(y) = f(x,y)$. Similarly, for every $y \in \mathbb{R}$ let $r_y: \mathbb{R} \to \mathbb{R}$, $r_y(x) = f(x,y)$. We know that all the functions $l_x$ and $r_y$ are polynomial. First, we prove that there is a natural $n$ and an infinite set $X \subset \mathbb{R}$ such that $\deg l_x \leqslant n$ for every $x \in X$.
To do this, for every $k \in \mathbb{N}$ define $X_k = \{ x \in \mathbb{R} \,|\, \deg l_x \leqslant k \}$. Clearly, $\mathbb{R} = \bigcup_{k \in \mathbb{N}} X_k$. $\mathbb{R}$ is uncountably infinite and $\mathbb{N}$ is countable. It follows that there is a $k$ such that $X_k$ is infinite, which gives us the desired $n$ and $X$.

OK, so for every $x \in X$ we know that $l_x$ is a polynomial function with degree at most $n$. This means that there are functions $a_0, a_1, \ldots, a_n: X \to \mathbb{R}$ such that

$\displaystyle l_x(y) = f(x,y) = a_0(x) + a_1(x)y + \ldots + a_n(x)y^n$

for all $x \in X$, $y \in \mathbb{R}$. The next order of business is to prove that each $a_i$ is representable by a polynomial, i.e. that there is a $b_i \in \mathbb{R}[x]$ such that $a_i(x) = b_i(x)$ for all $x \in X$. This is immediately obvious for $a_0$, because $a_0 = r_0|_X$. It is only a bit more difficult for an arbitrary $a_i$. It is an easy exercise to show that the coefficients of a polynomial with degree at most $n$ can be found as linear combinations of its values at $n+1$ distinct points. This means that for every $i = \overline{0,n}$ there exist constants $c_0, c_1, \ldots, c_n \in \mathbb{R}$ such that

$\displaystyle a_i(x) = \sum_{j=0}^{n} c_j f(x,j)$

for all $x \in X$, and thus $a_i$ is a linear combination of the functions $r_j|_X$. Therefore, $a_i$ is polynomial.

And we are almost done. We have polynomials $b_i \in \mathbb{R}[x]$, $i = \overline{0,n}$, that have the same values as $a_i$ on the set $X$. Now build a new polynomial function $g$ defined on all of $\mathbb{R} \times \mathbb{R}$ like this:

$\displaystyle g(x,y) = b_0(x) + b_1(x)y + \ldots + b_n(x)y^n.$

It is obvious that $g|_{X \times \mathbb{R}} = f|_{X \times \mathbb{R}}$. If we fix an arbitrary $y_0 \in \mathbb{R}$, we see that the functions $x \to g(x,y_0)$ and $x \to f(x,y_0)$ are both polynomial and have the same restriction to $X$. Since $X$ is infinite, it follows that these two functions coincide everywhere, i.e. $g(x,y_0) = f(x,y_0)$ for every $x \in \mathbb{R}$. But then $g$ and $f$ are the same function, and the proof is complete.
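A short Python sketch (not part of the original post) makes the counterexample concrete: it evaluates the finite sum defining $f$, prints the values of $f(2, y)$, which match the degree-at-most-$2|x_0|$ claim above, and shows the super-polynomial growth of $g(x) = f(x, x)$:

```python
def f(x, y):
    """f(x, y) = sum_{n>=0} prod_{k=0}^{n} (x^2 - k^2)(y^2 - k^2).

    For integer x, y every term with n >= min(|x|, |y|) contains a zero
    factor, so the apparently infinite sum is actually finite.
    """
    total = 0
    for n in range(min(abs(x), abs(y))):
        term = 1
        for k in range(n + 1):
            term *= (x**2 - k**2) * (y**2 - k**2)
        total += term
    return total

# Fixing x = 2, f(2, y) agrees with the polynomial 12*y**4 - 8*y**2,
# of degree 4 = 2*|2|, at every integer y:
print([f(2, y) for y in range(-4, 5)])
# On the diagonal, g(x) = f(x, x) outgrows any polynomial:
print([f(x, x) for x in range(7)])  # 0, 1, 160, 134865, ...
```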
https://proofwiki.org/wiki/Definition:Hurwitz_Zeta_Function
Definition:Hurwitz Zeta Function

Definition

The Hurwitz zeta function is a generalisation of the Riemann zeta function. It is defined for $\map \Re s > 1$ and $\map \Re q > 0$ by:

$\displaystyle \map \zeta {s, q} = \sum_{n \mathop = 0}^\infty \frac 1 {\paren {n + q}^s}$

Source of Name

This entry was named for Adolf Hurwitz.
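As a quick numeric sanity check (my addition; it assumes the Python mpmath library, whose `zeta(s, q)` with a second argument implements the Hurwitz zeta function):

```python
# Compare the defining series with mpmath's built-in Hurwitz zeta.
from mpmath import mp, zeta, nsum, inf

mp.dps = 30                       # 30 decimal digits of working precision
s, q = 2, mp.mpf(3) / 4

direct = nsum(lambda n: 1 / (n + q) ** s, [0, inf])  # sum_{n>=0} 1/(n+q)^s
print(direct)
print(zeta(s, q))                 # agrees with the direct summation
```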
http://celeblives.net/saoirse-ronan-net-worth-movies-family-boyfriend-pictures-and-wallpapers/
# Saoirse Ronan Net Worth, Movies, Family, Boyfriend, Pictures and Wallpapers

Saoirse Ronan is an American actress of Irish origin who has appeared in many films by now, including Brooklyn, Lady Bird and many others. She is also known as one of the youngest actresses ever to be nominated for an Oscar. In our article you will find all the facts about this brilliant actress.

## Saoirse Ronan Net Worth

Saoirse Ronan's net worth is estimated to be \$2.5 million. Critics praise her for her films, so there is a good chance she will earn much more.

## Saoirse Ronan Movies

Despite her young age, Saoirse Ronan has already appeared in a number of movies, and many more are awaiting her in the future.

Mary Queen of Scots – 2018
The Seagull – 2017
On Chesil Beach – 2017
Lady Bird – 2017
Loving Vincent – 2017
Brooklyn – 2015
Lost River – 2014
Muppets Most Wanted – 2014
The Grand Budapest Hotel – 2014
How I Live Now – 2013
The Host – 2013
Byzantium – 2012
Violet and Daisy – 2011
Hanna – 2011
The Way Back – 2010
The Lovely Bones – 2009
City of Ember – 2008
Death Defying Acts – 2007
Atonement – 2007
I Could Never Be Your Woman – 2007

## Saoirse Ronan Family

Saoirse Una Ronan was born on April 12, 1994 in New York, USA, to the Irish family of Monica and Paul Ronan. Her father is an actor and a producer, while her mother tried acting only as a child. When Saoirse was three, her family moved back to Dublin, where the future actress grew up.

### Saoirse Ronan Awards

Saoirse first appeared on TV at the age of nine, in The Clinic in 2003. After that she got a role in I Could Never Be Your Woman (2007), which was well received by both critics and the audience. In 2008 she earned an Oscar nomination for her role in Atonement, becoming one of the youngest actresses ever nominated for the award. Her role in Brooklyn brought her a second nomination, and in 2018 her third nomination, for Lady Bird, made her the second youngest actress (after Jennifer Lawrence) to collect three Oscar nominations before the age of 24.
## Saoirse Ronan Boyfriends

In May 2017 Saoirse was reported to be dating the singer and musician Hozier, whom she met while working on the video for one of his songs. But the rumours soon proved to be wrong, and the two are said to be just good friends.

In 2013 Saoirse worked on the film How I Live Now, where she met George MacKay. They began a romantic relationship which lasted until 2016.

That is all we wanted to tell you about one of the most talented young actresses of our time, Saoirse Ronan. Now you know about her net worth, family, movies, boyfriends and awards. If you liked our article, please share our content. And don't forget to view the pictures of Saoirse Ronan that we picked for you.
https://codegolf.stackexchange.com/questions/185674/pwas-eht-tirsf-dna-tasl-setterl-fo-hace-dorw?page=2&tab=votes
# pwaS eht tirsf dna tasl setterl fo hace dorw

Or, "Swap the first and last letters of each word"

Your challenge is to, given a string of alphabetical ASCII characters as well as one other character to use as a delimiter (to separate each word), swap the first and last letters of each word. If there is a one-character word, leave it alone.

The examples/testcases use the lowercase letters and the space as the delimiter. You do not need to handle punctuation; all of the inputs will only consist of the letters a through z, separated by a delimiter, all of a uniform case.

For example, with the string "hello world":

Input string: "hello world"
Identify each word: "[hello] [world]"
Identify the first and last letters of each word: "[[h]ell[o]] [[w]orl[d]]"
Swap the first and last letters of each word: "[[o]ell[h]] [[d]orl[w]]"
Final string: "oellh dorlw"

NOTE: the delimiter does not need to be inputted separately. The delimiter is just the character used to separate words in the input string. It can be anything. I wanted to leave options open for creative golfers, so I did not want to limit it to just spaces or new lines.

Test cases:

"swap the first and last letters of each word" -> "pwas eht tirsf dna tasl setterl fo hace dorw"
"hello world" -> "oellh dorlw"
"test cases" -> "test sasec"
"programming puzzles and code golf" -> "grogramminp suzzlep dna eodc folg"
"yay racecar" -> "yay racecar"

• How should punctuation be treated? Hello, world! becomes ,elloH !orldw (swapping punctuation as a letter) or oellH, dorlw! (keeping punctuation in place)? – Phelype Oleinik May 16 at 19:23
• @PhelypeOleinik You do not need to handle punctuation; all of the inputs will only consist of the letters a through z, and all a uniform case. – Comrade SparklePony May 16 at 19:24
• Second paragraph reads as well as one other character to use as a delimiter while the fourth reads separated by spaces. Which one is it? – Adám May 16 at 19:46
• @Adám Any non-alphabetic character. I'll edit to clarify. – Comrade SparklePony May 16 at 21:36
• @BenjaminUrquhart Yes. You can take input as a function argument if you want as well. – Comrade SparklePony May 16 at 21:44

# SNOBOL4 (CSNOBOL4), 90 bytes

```
I	R =
	INPUT LEN(1) . L REM . M	:F(END)
	M ARB . M RPOS(1) REM . R
	OUTPUT =R M L	:(I)
END
```

Try it online! Takes input separated by newlines; can be either uppercase or lowercase.

```
I	R =	;* set R to empty string
	INPUT LEN(1) . L REM . M	:F(END)	;* take first character and set to L, and set the
					;* REMainder to M
	M ARB . M RPOS(1) REM . R	;* match an ARBitrary (possibly empty) run
					;* of characters to M up to but excluding the last character
					;* and save the last character to R
					;* if M is empty (i.e., a one-letter word), then this fails
					;* and nothing happens, so M remains empty and R remains empty
	OUTPUT =R M L	:(I)		;* output Right, Middle, Left, then goto I.
END
```

(previous version)

# SNOBOL4 (CSNOBOL4), 92 bytes

```
I	R =M =
	INPUT LEN(1) . L ('' | ARB . M LEN(1) . R) RPOS(0)	:F(END)
	OUTPUT =R M L	:(I)
END
```

Try it online! This is thematically the same, clearly, but suffers from using FAILURE as the termination status, preventing us from using FAILURE as a no-op as we do in the above. This then forces us to set M = as well as R =, which is 3 bytes.

# PowerShell, 50 bytes

```
-split"$args"|%{$_-replace'^(.)(.*)(.)$','$3$2$1'}
```

Try it online! Uses regex to replace each word with a captured first and last letter surrounding the original core.
If it's a single character, replace will find nothing and leave it alone.

# Python 2, 67 66 64 61 60 bytes

```python
lambda s:' '.join(w[1:][-1:]+w[1:-1]+w[0]for w in s.split())
```

Try it online!

-1 byte, thanks to squid
-1 byte, thanks to Erik the Outgolfer

# Python 3, 63 61 58 57 bytes

```python
print(*[w[1:][-1:]+w[1:-1]+w[0]for w in input().split()])
```

Try it online!

• You can remove the whitespace before ">" – squid May 16 at 19:40
• @squid thanks. I'm on mobile, so whitespace is hard on tio. – TFeld May 16 at 19:43
• Your 58-byte version is now an exact (except for the variable name) copy of alexz02's latest edit from 20 minutes ago – Kevin Cruijssen May 16 at 20:15
• True. I was mentioning it merely as a FYI. :) There isn't any rule disqualifying duplicated answers when both users independently came to the same solutions. – Kevin Cruijssen May 16 at 20:32
• The way you've chosen to account for 1-letter words might prevent you from seeing that this also works, since getting there involves two steps: 1) replace w[:w>w[0]] with w[:-1][:1]; this works because it firstly removes the last character, and, if the word only has one letter, the last character is also the first 2) you then move the mechanism to the other side, so that you have a [0] instead of a [-1]. – Erik the Outgolfer May 17 at 21:29

# PHP, 73 bytes

```php
foreach(explode(' ',$argn)as$w){[$w[0],$w[-1]]=[$w[-1],$w[0]];echo"$w ";}
```

Try it online! Using PHP 7.1's square bracket syntax for array destructuring to swap values.

Ungolfed:

```php
foreach( explode( ' ', $argn ) as $w ) {
    [ $w[0], $w[-1] ] = [ $w[-1], $w[0] ];
    echo $w, ' ';
}
```

# Jelly, 9 bytes

```
ḲṪ;ṙ1$Ɗ€K
```

Try it online!

### Explanation

```
ḲṪ;ṙ1$Ɗ€K | monadic link taking the string as input
Ḳ         | split at spaces
      Ɗ€  | for each word, do the following:
 Ṫ        | - pop the last letter
  ;ṙ1$    | - concatenate to the remaining letters rotated left once
        K | finally, join with spaces
```

# Z80Golf, 43 bytes

```
00000000: 2100 c0cd 0380 3822 5745 cd03 8038 09fe  !.....8"WE...8..
00000010: 2028 0570 4723 18f2 736b 707e 23a7 2803   (.pG#..skp~#.(.
00000020: ff18 f87a ff3e 20ff 18d6 76               ...z.> ...v
```

Try it online!

Corresponding assembly:

```
mainloop:
	ld hl, $c000
	call $8003
	jr c, hlt
	ld d, a
	ld b, l        ; handle 1-character words
inputloop:
	call $8003
	jr c, endinput
	cp $20
	jr z, endinput
	ld (hl), b     ; don't store the last character
	ld b, a
	inc hl
	jr inputloop
endinput:
	ld (hl), e     ; always 0
	ld l, e
	ld (hl), b
outputloop:
	ld a, (hl)
	inc hl
	and a
	jr z, endoutput
	rst $38
	jr outputloop
endoutput:
	ld a, d
	rst $38
	ld a, $20
	rst $38
	jr mainloop
hlt:
	halt
```

# Rust, 110 bytes

```rust
|a:&str|{for b in a.split(' '){let l=b.len()-1;if l>0{print!("{}{}",&b[l..],&b[1..l])}print!("{} ",&b[0..1])}}
```

Try it online!

### A surprisingly long, but nicer solution that returns instead of printing (126 bytes):

```rust
|a:&str|->String{a.split(' ').flat_map(|b|{let mut q:Vec<_>=b.chars().collect();q.swap(0,b.len()-1);q.push(' ');q}).collect()}
```

Try it online!

## C (gcc), 62 bytes

```c
t;*s;f(int*i){for(s=i;*++i;i[1]>64||(t=*i,*i=*s,*s=t,s=i+2));}
```

I wanted to use a xor swap but that fails if a word is only one character long.

Try it online

• Suggest i[1]<65?t=*i,*i=*s,*s=t,s=i+2:0 instead of i[1]>64||(t=*i,*i=*s,*s=t,s=i+2) – ceilingcat Sep 13 at 17:38

## Brainf**k, 126

```
>+[,[->>>+<+<+<]>----------[[-]<+>]>[[-]<<+>>]<<--[+<.<[[<]>>>[<.>>]<[<]>.<]>[>]>->>[.[-]<+>]<<<]>+[->>[-<<<+>>>]<+<]>[-<+>]<]
```

Try it online! Could definitely be golfed further.

# Julia 1.0, 75 bytes

```julia
x->join([(n=length(s);n<2 ? s : s[n]*s[2:n-1]*s[1]) for s in split(x)]," ")
```

Try it online!
# brainfuck, 71 67 49 47 bytes

```
+[-[+>>]>[<<.[-]<[<]>>[.>]<[<]>[.[>]].>>>]<+<,]
```

Try it online!

This code uses a few cheats, so I don't know if it is competing. The separator in the input is a 0x01. The input needs an extra trailing separator, otherwise the last word won't be printed.

code:

```
+[              enter the loop / the first round only the last three commands of the main loop are interesting
 -[             if input is not 0x01
  +             restore character
  >>            go to exit if
 ]
 >[             else
  <<.[-]        print and delete last character
  <[<]>>[.>]    print all characters starting at the second
  <[<]>[.[>]]   print first character and go to end if word is longer than one character
  .             print null (space)
  >>>           leave a zero cell and go to exit else
 ]
 <+             set new else marker
 <              go to new input location
 ,              input next character
]
```

If halting is not necessary:

# Aheui (esotope), 174 bytes

```
삭붵뱷뛰빠쇡붷뼤쎄투@싼사쑫
ByLe쪼gen@처쇠모코커
DUST오멓@@@@푸셴쒼섣
@@@@쇡뽀@@@삳멓@샨@맣
```

Try it online! Press the 'start' button in TIO again to halt manually.

If halting is necessary:

# Aheui (esotope), 225 bytes

```
살뷕볙눠쀄삭붵뱷뛰빠쇡붷빠쎄투@싼사쑫
By@@@@@야빠속@@@수처쇠모오어
Legen@@먷초더헤셜썰뻐푸쉰썬@셛
DUST@@솩뽀사뫼섁쀠우삳멓산멓
```

Try it online! In this version, a following space after the input is necessary. Fixed. Now it is OK to not finish with a space.

# JavaScript (ES6), 68 bytes

```javascript
s=>s.split(" ").map(e=>e[a=e.length-1]+e.substr(1,a)+e[0]).join(" ")
```

Ungolfed:

```javascript
s=>                    //function declaration
  s                    //input
  .split(" ")          //array of words
  .map(                //for each word
    e=>                //function declaration
      e[a=e.length-1]+ //connect the last character,
      e.substr(1,a)+   //middle substring,
      e[0]             //and the first character.
  )                    //then,
  .join(" ")           //connect modified words back
```

• This does not work: TIO. The middle substr should have one less character: s=>s.split(' ').map(e=>e[a=e.length-1]+e.substr(1,a-1)+e[0]).join(' '). This still does not deal with single letter words. – Johan du Toit May 20 at 9:57
• fixed: s=>s.split(' ').map(e=>e[a=e.length-1]+e.substr(1,a-1)+(a?e[0]:"")).join(' ') – Johan du Toit May 20 at 11:48
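For readers following along, here is a plain, ungolfed reference implementation of the task (my addition, not one of the posted answers), in Python:

```python
# Swap the first and last letters of each delimiter-separated word,
# leaving one-letter words alone.
def swap_ends(text, delimiter=" "):
    words = []
    for word in text.split(delimiter):
        if len(word) > 1:
            word = word[-1] + word[1:-1] + word[0]
        words.append(word)
    return delimiter.join(words)

assert swap_ends("hello world") == "oellh dorlw"
assert swap_ends("yay racecar") == "yay racecar"
```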
https://math.stackexchange.com/questions/3062006/delta-distribution-and-the-schr%C3%B6dinger-equation
# Delta distribution and the Schrödinger equation

While studying the lecture notes of my quantum mechanics course I came across something that seemed a bit odd. There we want to solve the Schrödinger equation for the potential $V(x)=V_0 \delta(x)$, where $\delta(x)$ represents the "delta-function". We therefore get the equation

$$-\frac{\hbar^2}{2m}\psi''(x)+V_0\delta(x)\psi(x)=E\psi(x),$$

where the solutions are of the form $\psi_{\rm left}(x)=A\exp(ikx/\hbar)+B\exp(-ikx/\hbar)$ with the energy $E= k^2/2m$ for $x<0$, and analogously for $x>0$ we find $\psi_{\rm right}$. We know that $\lim\limits_{x\to 0^-} \psi_{\rm left} = \lim\limits_{x\to 0^+}\psi_{\rm right}$ should hold, but we also need a second condition. The professor then proceeded to write the following.

We integrate eq. (1) [the first equation in this post] from $-\epsilon$ to $\epsilon$ and get

$$-\frac{\hbar^2}{2m}(\psi'(\epsilon)-\psi'(-\epsilon))+V_0\psi(0)=E\int^\epsilon_{-\epsilon} \psi(x)\, dx.$$

For $\epsilon \to 0$ we get

$$\psi'(0^+) - \psi'(0^-) = \frac{2 m V_0}{\hbar^2}\, \psi(0),$$

where

$$f(0^+) = \lim_{x \rightarrow 0,\, x > 0} f(x), \quad f(0^-) = \lim_{x \rightarrow 0,\, x < 0} f(x).$$

My main confusion here comes from the fact that apparently

$$\int_{-\epsilon}^{\epsilon} \delta(x)\psi(x)\,dx = \psi(0).$$

Could someone explain this?

Notes on my background in math: In a course on mathematical physics we talked briefly about (tempered) distributions, and there we saw the definition

$$\delta[\varphi] =\int_{\mathbb{R}^n}\varphi(x)\delta(x)\,dx = \varphi(0),$$

for $\varphi \in \mathcal{S}(\mathbb{R}^n)$ and

$$\mathcal{S}(\mathbb{R}^n) := \Big\{ \phi \in C^\infty(\mathbb{R}^n) \,\Big|\, \forall \alpha, \beta \in \mathbb{N}_0^n : \sup_{x \in \mathbb{R}^n} |x^\alpha D^\beta \phi(x)| < \infty \Big\}.$$

I'm aware of the fact that in physics we often use the $\delta$-"function" rather sloppily and tend to brush problematic arguments away with some kind of intuition (at least that's how it was presented to me up until this point), but I fail to see how it can be considered the same to integrate over an interval $[-\epsilon, \epsilon]$ and over $\mathbb{R}$ in this case.

• For intuition, you can think of the delta function as being an ordinary nonnegative smooth function which has a very sharp peak near $0$, and which is zero outside of a tiny neighborhood of $0$, and such that the area under the curve is $1$. So if $\Psi$ is continuous at $0$ then $\int_{-\epsilon}^\epsilon \delta(x) \Psi(x) \, dx \approx \int_{-\epsilon}^\epsilon \delta(x) \Psi(0) \, dx =\Psi(0) \int_{-\epsilon}^\epsilon \delta(x) \, dx = \Psi(0) \cdot 1$. – littleO Jan 4 '19 at 19:59
• In addition to @littleO 's comment, note that the delta function is zero outside the $[-\epsilon,\epsilon]$ interval, so integrating over $\mathbb R$ you can split into three parts: one less than $-\epsilon$ (integral $0$), one between $-\epsilon$ and $\epsilon$, and one interval from $\epsilon$ to $\infty$. The last one is also zero. – Andrei Jan 4 '19 at 20:52
• @FrederikvomEnde I like your comment a lot! Would mark it as answer if you post it as one! – Sito Jan 5 '19 at 16:24
• Alright, just shifted my comment to the answer section!
– Frederik vom Ende Jan 5 '19 at 16:55

One may define $\delta$ as a measure on $\mathbb R$ which on some $A\subseteq\mathbb R$ evaluates to $\delta(A)=1$ if $0\in A$ and $\delta(A)=0$ else. Thus, assuming $0\in A$, one gets

$$\int_{\mathbb R}\psi(x)\delta(x)\,dx=\int_{\mathbb R}\psi(x)\,d\delta(x)=\int_A\psi(x)\,d\delta(x)+\int_{\mathbb R\setminus A}\psi(x)\,d\delta(x)$$

where the second integral vanishes as $\mathbb R\setminus A$ has $\delta$-measure zero. By choosing $A=(-\varepsilon,\varepsilon)$ we obtain

$$\psi(0)=\int_{\mathbb R}\psi(x)\delta(x)\,dx=\int_{-\varepsilon}^\varepsilon \psi(x)\delta(x)\,dx$$

for all $\varepsilon>0$.

Define $\phi(x):=\psi(x)[|x|\le\epsilon]$, where the square bracket is an Iverson bracket. Then $\int_{-\epsilon}^\epsilon\delta(x)\psi(x)\,dx=\int_{\Bbb R}\phi(x)\delta(x)\,dx=\phi(0)=\psi(0)$.
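To complement the two answers, here is a small numeric illustration of littleO's comment (my addition; it approximates $\delta$ by narrow Gaussians and assumes numpy is available):

```python
# Replace delta by Gaussians delta_sigma of shrinking width sigma and watch
# int_{-eps}^{eps} delta_sigma(x) psi(x) dx approach psi(0) as sigma -> 0.
import numpy as np

psi = lambda x: np.cos(x) + x**2      # any function continuous at 0; psi(0) = 1
eps = 0.5
x = np.linspace(-eps, eps, 400001)
dx = x[1] - x[0]

for sigma in (0.1, 0.01, 0.001):
    delta_sigma = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    print(sigma, np.sum(delta_sigma * psi(x)) * dx)   # tends to 1.0
```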
https://txt.binnyva.com/2007/12/show-disk-usage-du/
# Show Disk Usage – du

du shows the 'disk usage' – the space taken up by the files and folders in the current directory.

```
du -bh | more
```

If you want to see how much space each folder in the current folder takes up, use this command…

```
du -hs *
```

Author: Binny V A
A philosopher programmer who specializes in backend development and stoicism.
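One small extension of the post (my addition, not in the original): GNU sort's -h flag understands the same human-readable suffixes that du -h emits, so the folders can be ordered by size:

```
du -hs * | sort -h
```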
http://blog.sciencenet.cn/blog-1565-744956.html
# System Engineering for Dummies

Posted 2013-11-26 23:00 | 7383 reads | Category: Overseas Observations

(For new readers and those who send friend requests, please read my bulletin board first)

There is a best-selling series of how-to books with the title "XXX for Dummies". The idea is to simplify useful knowledge and tools to an easily digestible level and instruct ordinary readers to do seemingly difficult but useful tasks. Recently I discovered one of these books with the title "System Engineering for Dummies" written by the staff of IBM. An added bonus is that the book is free to download and available from https://www14.software.ibm.com/webapp/iwm/web/signup.do?source=swg-rtl-sd-wp&S_PKG=500016937&S_CMP=Google-Search-SWG-Rational-EB-1015&csr=wwwus_rationalsystemengsrcheb&cm=k&cr=google&ct=102G55MW&S_TACT=102G55MW&ck=+systems_+engineering_for_dummies&cmp=102G5&mkwid=sTo9TWAcD-dc_21505144676_43246d30503.

The introduction carries the following description:

This IBM Limited Edition ebook, "Systems Engineering For Dummies," explains what systems engineering is and how it can help you simplify the development of smart, connected products. If you're looking for ways to expedite time-to-market, ensure business agility, and deliver high-quality smart products while cutting costs, this ebook is for you.

Here is the chapter content:

Introduction
Chapter 1: Generating Smarter Products
Chapter 2: Taming the Tiger with Systems Engineering
Chapter 3: Revolutionizing Requirements
Chapter 4: Getting Abstract with System Modeling
Chapter 5: Ensuring Tip-Top Quality
Chapter 6: Enabling Large Teams to Collaborate and Manage Changes
Chapter 7: Ten Ways to Win with Systems Engineering

The book is primarily written from a user and application point of view. It has very little of the theory (http://blog.sciencenet.cn/home.php?mod=space&uid=1565&do=blog&id=38683 and http://blog.sciencenet.cn/home.php?mod=space&uid=1565&do=blog&id=382212).
http://mathhelpforum.com/calculus/30386-reversing-order-integration.html
# Math Help - reversing order of integration

1. ## reversing order of integration

Hi, how do you know, for 0 to 2 (1st integral) and x^2 to 2x (2nd integral) of (x^3) dydx, whether you should have x now going from 0 to y^0.5 or from 0 to y/2, and likewise whether y goes from 0 to 2x or from 0 to x^2?

2. Originally Posted by candi

Hi, how do you know, for 0 to 2 (1st integral) and x^2 to 2x (2nd integral) of (x^3) dydx, whether you should have x now going from 0 to y^0.5 or from 0 to y/2, and likewise whether y goes from 0 to 2x or from 0 to x^2?

Have you drawn a sketch of the region of integration? It is very clear from such a sketch that the reversed integral limits are x = y/2 to $x = +\sqrt{y}$ and y = 0 to y = 4:

$I = \int_{y=0}^{y=4} \int_{x = y/2}^{x = \sqrt{y}} x^3 \,dx \, dy$.

3. I did draw a sketch, and the graphs both pass through the origin. But I'll look at what you wrote again to see if I'll understand it better. Thanks.

4. It's pretty clear having a sketch; you'll see mr fantastic's result is correct.
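To double-check the reversal (my addition, not part of the original thread), both orders of integration can be evaluated with sympy and give the same value:

```python
# Verify that swapping the order of integration leaves the value unchanged.
import sympy as sp

x, y = sp.symbols('x y')
original = sp.integrate(x**3, (y, x**2, 2*x), (x, 0, 2))   # dy dx order
swapped  = sp.integrate(x**3, (x, y/2, sp.sqrt(y)), (y, 0, 4))  # dx dy order
print(original, swapped)   # both print 32/15
```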
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-9th-edition/chapter-6-thermochemistry-exercises-page-287/52
## Chemistry 9th Edition

a. $900\text{ J}$
b. $26.\ \frac{\text{J}}{^\circ \text{C}\cdot\text{mol}}$
c. $1.63\times10^{3}\text{ grams}$

a. $q=mc\Delta T$, so $q=150.0\times0.24\times25=900\text{ J}$
b. $\frac{0.24\text{ J}}{^\circ \text{C}\cdot\text{g}}\times\frac{107.8682\text{ g}}{1\text{ mol}}=26.\ \frac{\text{J}}{^\circ \text{C}\cdot\text{mol}}$
c. $1250=m\times0.24\times3.2$, so $m=1.63\times10^{3}\text{ grams}$
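A quick arithmetic check of the three parts (my addition; the numbers are those used above, with 107.8682 g/mol as the molar mass):

```python
# Recompute each part from the quantities in the worked solution.
m, c, dT = 150.0, 0.24, 25.0
print(m * c * dT)            # (a) 900.0 J
print(0.24 * 107.8682)       # (b) 25.888..., i.e. 26. J/(deg C * mol) to 2 sig figs
print(1250 / (0.24 * 3.2))   # (c) 1627.6..., i.e. 1.63e3 grams
```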
http://math.stackexchange.com/questions/157301/limit-finding-of-an-indeterminate-form
# Limit finding of an indeterminate form

Here is the limit I'm trying to find out:

$$\lim_{x\rightarrow 0} \frac{x^3}{\tan^3(2x)}$$

Since it is an indeterminate form, I simply applied l'Hopital's Rule and I ended up with:

$$\lim_{x\rightarrow 0} \frac{x^3}{\tan^3(2x)} = \lim_{x\rightarrow 0}\frac{6\cos^3(2x)}{48\cos^3(2x)} = \frac{6}{48} = 0.125$$

Unfortunately, as far as I've tried, I haven't been able to solve this limit without using l'Hopital's Rule. Is it possible to algebraically manipulate the expression so as to have a determinate form?

- L'Hopital's rule is a really useful tool, why not use it? – Alex Chamberlain Jun 12 '12 at 9:00
- @AlexChamberlain take for example: $\lim_{x\to 0} \frac{\sin x}{x}$. When you're applying l'Hopital's rule, you're using the fact that $(\sin x)' = \cos x$, but that is a consequence of $\frac{\sin x}{x} \to 1$ when $x\to 0$. – qoqosz Jun 12 '12 at 9:04

Use $\lim_{x \to 0} \frac{\sin x}{x} = 1$:

$$\lim_{x \to 0} \frac{x^3}{\tan^3 2x} = \lim_{x\to 0} \left( \frac{(2x)^3}{\sin^3 2x} \cdot \frac{\cos^3 2x}{8} \right) \stackrel{[1 \cdot \frac{1}{8}]}{=} \frac{1}{8}$$

- What? I thought it was $1$. – Gigili Jun 12 '12 at 9:06
- @Gigili It is, but you can't correct 1 character... – Alex Chamberlain Jun 12 '12 at 9:07
- @Gigili just a typo :) – qoqosz Jun 12 '12 at 9:09
- Good, good. +1 now! – Gigili Jun 12 '12 at 9:11

$$\lim_{x \to 0} \frac{x^3}{\tan^3 (2x)}=\lim_{x \to 0} \frac{x^3}{(2x)^3}=\frac18$$

- There is no justification in that line whatsoever! – Alex Chamberlain Jun 12 '12 at 9:08
- @AlexChamberlain: only the equivalence of $\tan x$ and $x$ around $0$ is used; should it be justified? Certainly a better choice than l'Hopital's rule. – Ilya Jun 12 '12 at 9:11
- Both equalities are false as written: the limit on the left is not the function in the middle, and the function in the middle is not the number $1/8$. The middle item requires $\lim\limits_{x\to 0}$. – Brian M. Scott Jun 12 '12 at 9:16
- -1. It's quite misleading to write the solution like this (making approximations without showing the error terms), even though strictly speaking the equalities are true. If you show this to students, they will soon start making mistakes like $\lim_{n \to \infty} (1+1/n)^n = \lim_{n \to \infty} (1+0)^n = 1$. – Hans Lundmark Jun 12 '12 at 10:16
- My point is that some justification is needed for why replacing $\tan x$ by $x$ gives the correct result in this case, but not for $\lim_{x \to 0}(x - \tan x)/x^3$, for example. – Hans Lundmark Jun 12 '12 at 12:24

$$\lim_{x \rightarrow 0 }\frac{x^3}{\tan^3 (2x) } = \lim_{x \rightarrow 0 } \left ( \frac{2x}{\tan (2x) } \right )^3 \frac{1}{2^3} = \frac{1}{2^3}$$

$\tan x = x + o(x)$, so $\tan (2x) \sim 2x$, and then: $\frac{x^3}{\tan^3 (2x)} \sim \frac{x^3}{8x^3} = \frac 18$

Another idea, using $\,\,\displaystyle{\frac{\sin x}{x}\underset{x\to 0}{\longrightarrow}1\,,\,\cos kx\underset{x\to 0}\longrightarrow 1\,\,(k=\text{a constant})\,\,,\,\sin 2x=2\sin x\cos x}$:

$$\frac{x^3}{\tan^3 2x}=\frac{x^3}{\frac{\sin^32x}{\cos^32x}}=\cos^32x\frac{x^3}{\left(2\sin x\cos x\right)^3}=\frac{1}{8}\frac{\cos^32x}{\cos^3x}\left(\frac{x}{\sin x}\right)^3\underset{x\to 0}\longrightarrow \frac{1}{8}\cdot\frac{1}{1}\cdot 1^3=\frac{1}{8}$$

We may resort to $\sin(x)<x<\tan(x),\space 0< x <\frac{\pi}{2}$ and solve it elementarily.
By the squeeze theorem we get that:

$$\lim_{x\rightarrow0}\frac{x^3 \cos^3(2x)}{{(2x)}^3}\leq \lim_{x\rightarrow0}\frac{x^3}{\tan^3(2x)}\leq \lim_{x\rightarrow0}\frac{x^3}{(2x)^3}$$

Therefore, taking also into account the symmetry, the limit is $\frac{1}{8}$. The proof is complete.

$$\lim_{x\rightarrow 0} \frac{x^3}{\tan^3(2x)}=\lim_{x\rightarrow 0} \frac{x^3}{\frac{\sin^3(2x)}{\cos^3(2x)}}=\lim_{x\rightarrow 0} \frac{x^3\cos^3(2x)}{\sin^3(2x)}=\lim_{x\rightarrow 0}\frac{2x\cdot 2x\cdot 2x \cdot \frac{1}{8} \cos^3(2x)}{\sin(2x)\cdot\sin(2x)\cdot\sin(2x)}.$$

Since $\lim_{x\rightarrow 0}\frac{\sin x}{x}=1$ implies $\lim_{x\rightarrow 0}\frac{x}{\sin x}=1$, and hence $\lim_{x\rightarrow 0}\frac{2x}{\sin 2x}=1$, this equals

$$\lim_{x\rightarrow 0}\frac{2x}{\sin(2x)}\cdot\lim_{x\rightarrow 0}\frac{2x}{\sin(2x)}\cdot\lim_{x\rightarrow 0}\frac{2x}{\sin(2x)}\cdot\frac{1}{8}\lim_{x\rightarrow 0}{\cos^3(2x)}=1\cdot 1\cdot 1\cdot \frac{1}{8}\cos^3\!\Big(\lim_{x\rightarrow 0} 2x\Big)=\frac{1}{8}\cos^3 0=\frac{1}{8}\cdot 1=\frac{1}{8}.$$
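As a modern cross-check (my addition, not part of the original thread), sympy confirms the value:

```python
# Symbolic confirmation of the limit discussed above.
import sympy as sp

x = sp.symbols('x')
print(sp.limit(x**3 / sp.tan(2*x)**3, x, 0))   # prints 1/8
```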
http://www.wptricks.com/question/inject-pagination-parameters-into-permalink-structure/
## Inject pagination parameters into permalink structure

Question

My goal is to add a parameter to every pagination permalink: domain.com/blog/page/2?pageid=2, domain.com/blog/page/3?pageid=3, in order to tell Google to index all of the above as domain.com/blog/ without the use of canonical meta tags.

In Google Search Console, in order to avoid indexing pagination pages, I use the Google Crawl URLs Parameters tool (see image below), and my permalink structure is, and needs to stay as: domain.com/%postname%/

My pagination at the moment is: domain.com/blog/page/2/, domain.com/blog/page/3/

Since the only way to tell Google about pagination pages is via parameters (see below image of the Google Crawl URLs Parameters tool), I came up with the idea of adding an additional pagination parameter to my current permalink structure.

I have tried to use "paginate_links" in functions.php but it's not working at all:

```php
function add_page_params($link) {
    return array(
        'after_page_number' => '&param'
    );
}
add_filter('paginate_links', 'add_page_params', 10, 2);
```

I was expecting this result from the above: domain.com/blog/page/2?pageid=2&param

The above function is not working as expected.
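For what it's worth, here is a hedged sketch (my addition, not an accepted answer) of one way to get the URLs the asker expects. WordPress core's 'paginate_links' filter passes each generated link as a string and expects a string back, so the filter callback should append the query argument rather than return an array; the helper name add_pageid_param is made up for illustration.

```php
<?php
// Sketch only: append ?pageid=N to each pagination link.
function add_pageid_param( $link ) {
    // Pull the page number out of .../page/N/ style permalinks.
    if ( preg_match( '#/page/(\d+)#', $link, $m ) ) {
        $link = add_query_arg( 'pageid', $m[1], $link );
    }
    return $link;
}
add_filter( 'paginate_links', 'add_pageid_param' );
```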
https://www.dsemth.com/static/%E7%9F%A5%E8%AD%98%E9%87%8D%E9%BB%9E%E6%A8%A3%E6%9C%AC/%E7%AD%89%E5%B7%AE%E6%95%B8%E5%88%97
### Arithmetic Sequences (等差數列)

An arithmetic sequence is a sequence having a common difference between any term (except the first term) and its preceding term, i.e. $T_2 - T_1 = T_3 - T_2 = \cdots = T_n - T_{n-1} = \ldots$, where $T_n$ is the general term.

(a) If the first term is $a$ and the common difference is $d$, then the general term is given by $T_n = a + (n-1)d$, where $n$ is a positive integer.

(b) Properties of arithmetic sequences

(i) If $T_{n-1}$, $T_n$ and $T_{n+1}$ are three consecutive terms of an arithmetic sequence, then $T_n = \dfrac{T_{n-1} + T_{n+1}}{2}$.

(ii) If $\{T_1, T_2, T_3, \ldots\}$ is an arithmetic sequence, then $\{kT_1 + c, kT_2 + c, kT_3 + c, \ldots\}$ is also an arithmetic sequence, where $k$ and $c$ are constants.

Example. Consider the arithmetic sequence $4$, $10$, $16$, $22$, ...

(a) Find the general term $T(n)$ of the sequence.
(b) Hence, show that the sequence with general term $P(n) = 6n - 7$ is also an arithmetic sequence.

Solution.

(a) Let $a$ and $d$ be the first term and the common difference respectively. Then $a = 4$ and $d = 10 - 4 = 6$, so

$T(n) = a + (n-1)d = 4 + (n-1)(6) = 6n - 2.$

(b) Since $P(n) = 6n - 7 = (6n - 2) - 5 = T(n) - 5$, and $T_1$, $T_2$, $T_3$, … is an arithmetic sequence, it follows (by property (b)(ii) with $k = 1$, $c = -5$) that $P(1)$, $P(2)$, $P(3)$, ... is also an arithmetic sequence.
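A tiny sanity check of the worked example (my addition):

```python
# T(n) = 6n - 2 reproduces 4, 10, 16, 22, ..., and P(n) = 6n - 7 has the
# same constant common difference, so it is arithmetic as well.
T = lambda n: 6 * n - 2
P = lambda n: 6 * n - 7

print([T(n) for n in range(1, 5)])                  # [4, 10, 16, 22]
print({P(n + 1) - P(n) for n in range(1, 100)})     # {6}: constant difference
```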
http://physics.stackexchange.com/questions/64704/question-on-the-hagedorn-tower-in-type-i-string-theory
# Question on the Hagedorn tower in Type I string theory

In a previous question (Mass spectrum of Type I string theory), I had asked about the mass spectrum of Type I string theory. I got a response saying that it is a Hagedorn tower. However, my source does not discuss much about Type I string theory (it only discusses Type IIA and Type IIB), so I searched for more sources. Yet all the sources I found (all some lecture notes), for some reason, stopped at Type IIA and Type IIB string theories.

So my question is, what is really the mass spectrum defining the Hagedorn tower? Is it something analogous to the Type II mass spectrums?

$m= \sqrt{\frac{2\pi T}{c_0}\left(N+\tilde N -a-\tilde a\right)}$

Thanks!

• Please don't self-close, such posts may be useful to others. If you managed to answer it yourself I suggest you post your findings as an answer here. – Manishearth May 16 '13 at 16:20

Hagedorn spectrum just means that the density of states varies exponentially with the energy/mass. $m^2$ is (asymptotically) given by the "level" ($N$) of the state (up to a square root). The number of states at level $N$ corresponds to the possible partitions of $N$ into different oscillator modes. That means that the number of states at level $N$ will increase exponentially (for large $N$). Taking a square root (since we want how the number of states scales with $m$) still leaves an exponential, giving us a Hagedorn spectrum.

Here is my solution. For the Ramond-Ramond sector,

$$m_{\rm I} = \frac{\left[ \hat a_+^\mu, \hat{\tilde a}_+^\nu \right]}{2} m_{\rm IIB}$$

For the Neveu-Schwarz Neveu-Schwarz sector,

$$m_{\rm I} = \frac{\left[ \hat d_{-1/2}^\mu, \hat{\tilde d}_{-1/2}^\nu \right]}{2} m_{\rm IIB}$$

For the Ramond Neveu-Schwarz sector or the Neveu-Schwarz Ramond sector,

$$m_{\rm I} = \frac{\hat a_+ \hat d_{-1/2}^\mu - \hat{\tilde a}_+ \hat{\tilde d}_{-1/2}^\mu}{2} m_{\rm IIB}$$

My reasoning is that the mass spectrum is linear in the state vectors.
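To illustrate suresh's point about partitions numerically (my addition; a plain Python sketch), the partition counts really do track the Hardy–Ramanujan exponential $\exp(\pi\sqrt{2N/3})$, and since $m \sim \sqrt{N}$ this is exponential growth in the mass:

```python
# Count integer partitions p(N) by dynamic programming and compare their
# growth with exp(pi*sqrt(2N/3)); the exact asymptotic carries an extra
# 1/(4N*sqrt(3)) prefactor, so only the exponential scale should match.
from math import exp, pi, sqrt

def partitions(N):
    p = [1] + [0] * N
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p[N]

for N in (10, 50, 100, 200):
    print(N, partitions(N), exp(pi * sqrt(2 * N / 3)))
```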
http://terrytao.wordpress.com/category/mathematics/mathds/page/2/
Ben Green, Tamar Ziegler, and I have just uploaded to the arXiv our paper “An inverse theorem for the Gowers U^{s+1}[N] norm”, which was previously announced on this blog.  We are still planning one final round of reviewing the preprint before submitting the paper, but it has gotten to the stage where we are comfortable with having the paper available on the arXiv.

The main result of the paper is to establish the inverse conjecture for the Gowers norm over the integers, which has a number of applications, in particular to counting solutions to various linear equations in primes.  In spirit, the proof of the paper follows the 21-page announcement that was uploaded previously.  However, for various rather annoying technical reasons, the 117-page paper has to devote a large amount of space to setting up various bits of auxiliary machinery (as well as a dozen or so pages worth of examples and discussion).  For instance, the announcement motivates many of the steps of the argument by heuristically identifying nilsequences $n \mapsto F(g(n) \Gamma)$ with bracket polynomial phases such as $n \mapsto e( \{ \alpha n \} \beta n )$.  However, a rather significant amount of theory (which was already worked out to a large extent by Leibman) is needed to formalise the “bracket algebra” needed to manipulate such bracket polynomials and to connect them with nilsequences.  Furthermore, the “piecewise smooth” nature of bracket polynomials causes some technical issues with the equidistribution theory for these sequences.  Our original version of the paper (which was even longer than the current version) set out this theory.  But we eventually decided that it was best to eschew almost all use of bracket polynomials (except as motivation and examples), and run the argument almost entirely within the language of nilsequences, to keep the argument a bit more notationally focused (and to make the equidistribution theory easier to establish).  But this was not without a tradeoff; some statements that are almost trivially true for bracket polynomials required some “nilpotent algebra” to convert to the language of nilsequences.  Here are some examples of this:

1. It is intuitively clear that a bracket polynomial phase e(P(n)) of degree k in one variable n can be “multilinearised” to a polynomial $e(Q(n_1,\ldots,n_k))$ of multi-degree $(1,\ldots,1)$ in k variables $n_1,\ldots,n_k$, such that $e(P(n))$ and $e(Q(n,\ldots,n))$ agree modulo lower order terms.  For instance, if $e(P(n)) = e(\alpha n \{ \beta n \{ \gamma n \} \})$ (so k=3), then one could take $e(Q(n_1,n_2,n_3)) = e( \alpha n_1 \{ \beta n_2 \{ \gamma n_3 \} \})$.   The analogue of this statement for nilsequences is true, but required a moderately complicated nilpotent algebra construction using the Baker-Campbell-Hausdorff formula.

2. Suppose one has a bracket polynomial phase e(P_h(n)) of degree k in one variable n that depends on an additional parameter h, in such a way that exactly one of the coefficients in each monomial depends on h.  Furthermore, suppose this dependence is bracket linear in h.  Then it is intuitively clear that this phase can be rewritten (modulo lower order terms) as e( Q(h,n) ) where Q is a bracket polynomial of multidegree (1,k) in (h,n).  For instance, if $e(P_h(n)) = e( \{ \alpha_h n \} \beta n )$ and $\alpha_h = \{\gamma h \} \delta$, then we can take $e(Q(h,n)) = e(\{ \{\gamma h\} \delta n\} \beta n )$.
The nilpotent algebra analogue of this claim is true, but requires another moderately complicated nilpotent algebra construction based on semi-direct products.

3. A bracket polynomial has a fairly visible concept of a “degree” (analogous to the corresponding notion for true polynomials), as well as a “rank” (which, roughly speaking, measures the number of parentheses in the bracket monomials, plus one).  Thus, for instance, the bracket monomial $\{\{ \alpha n^4 \} \beta n \} \gamma n^2$ has degree 7 and rank 3.  Defining degree and rank for nilsequences requires one to generalise the notion of a (filtered) nilmanifold to one in which the lower central series is replaced by a filtration indexed by both the degree and the rank.

There are various other tradeoffs of this type in this paper.  For instance, nonstandard analysis tools were introduced to eliminate what would otherwise be quite a large number of epsilons and regularity lemmas to manage, at the cost of some notational overhead; and the piecewise discontinuities mentioned earlier were eliminated by the use of vector-valued nilsequences, though this again caused some further notational overhead.  These difficulties may be a sign that we do not yet have the “right” proof of this conjecture, but one will probably have to wait a few years before we get a proper amount of perspective and understanding on this circle of ideas and results.

A (smooth) Riemannian manifold is a smooth manifold ${M}$ without boundary, equipped with a Riemannian metric ${{\rm g}}$, which assigns a length ${|v|_{{\rm g}(x)} \in {\bf R}^+}$ to every tangent vector ${v \in T_x M}$ at a point ${x \in M}$, and more generally assigns an inner product

$\displaystyle \langle v, w \rangle_{{\rm g}(x)} \in {\bf R}$

to every pair of tangent vectors ${v, w \in T_x M}$ at a point ${x \in M}$. (We use Roman font for ${g}$ here, as we will need to use ${g}$ to denote group elements later in this post.) This inner product is assumed to be symmetric, positive definite, and smoothly varying in ${x}$, and the length is then given in terms of the inner product by the formula

$\displaystyle |v|_{{\rm g}(x)}^2 := \langle v, v \rangle_{{\rm g}(x)}.$

In coordinates (and also using abstract index notation), the metric ${{\rm g}}$ can be viewed as an invertible symmetric rank ${(0,2)}$ tensor ${{\rm g}_{ij}(x)}$, with

$\displaystyle \langle v, w \rangle_{{\rm g}(x)} = {\rm g}_{ij}(x) v^i w^j.$

One can also view the Riemannian metric as providing a (self-adjoint) identification between the tangent bundle ${TM}$ of the manifold and the cotangent bundle ${T^* M}$; indeed, every tangent vector ${v \in T_x M}$ is then identified with the cotangent vector ${\iota_{TM \rightarrow T^* M}(v) \in T_x^* M}$, defined by the formula

$\displaystyle \iota_{TM \rightarrow T^* M}(v)(w) := \langle v, w \rangle_{{\rm g}(x)}.$

In coordinates, ${\iota_{TM \rightarrow T^* M}(v)_i = {\rm g}_{ij} v^j}$.

A fundamental dynamical system on the tangent bundle (or equivalently, the cotangent bundle, using the above identification) of a Riemannian manifold is that of geodesic flow. Recall that geodesics are smooth curves ${\gamma: [a,b] \rightarrow M}$ that minimise the length

$\displaystyle |\gamma| := \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}\ dt.$

There is some degeneracy in this definition, because one can reparameterise the curve ${\gamma}$ without affecting the length.
In order to fix this degeneracy (and also because the square of the speed is a more tractable quantity analytically than the speed itself), it is better if one replaces the length with the energy

$\displaystyle E(\gamma) := \frac{1}{2} \int_a^b |\gamma'(t)|_{{\rm g}(\gamma(t))}^2\ dt.$

Minimising the energy of a parameterised curve ${\gamma}$ turns out to be the same as minimising the length, together with an additional requirement that the speed ${|\gamma'(t)|_{{\rm g}(\gamma(t))}}$ stay constant in time. Minimisers (and more generally, critical points) of the energy functional (holding the endpoints fixed) are known as geodesic flows. From a physical perspective, geodesic flow governs the motion of a particle that is subject to no external forces and thus moves freely, save for the constraint that it must always lie on the manifold ${M}$.

One can also view geodesic flows as a dynamical system on the tangent bundle (with the state at any time ${t}$ given by the position ${\gamma(t) \in M}$ and the velocity ${\gamma'(t) \in T_{\gamma(t)} M}$) or on the cotangent bundle (with the state then given by the position ${\gamma(t) \in M}$ and the momentum ${\iota_{TM \rightarrow T^* M}( \gamma'(t) ) \in T_{\gamma(t)}^* M}$). With the latter perspective (sometimes referred to as cogeodesic flow), geodesic flow becomes a Hamiltonian flow, with Hamiltonian ${H: T^* M \rightarrow {\bf R}}$ given as

$\displaystyle H( x, p ) := \frac{1}{2} \langle p, p \rangle_{{\rm g}(x)^{-1}} = \frac{1}{2} {\rm g}^{ij}(x) p_i p_j$

where ${\langle ,\rangle_{{\rm g}(x)^{-1}}: T^*_x M \times T^*_x M \rightarrow {\bf R}}$ is the inverse inner product to ${\langle, \rangle_{{\rm g}(x)}: T_x M \times T_x M \rightarrow {\bf R}}$, which can be defined for instance by the formula

$\displaystyle \langle p_1, p_2 \rangle_{{\rm g}(x)^{-1}} = \langle \iota_{TM \rightarrow T^* M}^{-1}(p_1), \iota_{TM \rightarrow T^* M}^{-1}(p_2)\rangle_{{\rm g}(x)}.$

In coordinates, geodesic flow is given by Hamilton’s equations of motion

$\displaystyle \frac{d}{dt} x^i = {\rm g}^{ij} p_j; \quad \frac{d}{dt} p_i = - \frac{1}{2} (\partial_i {\rm g}^{jk}(x)) p_j p_k.$

In terms of the velocity ${v^i := \frac{d}{dt} x^i = {\rm g}^{ij} p_j}$, we can rewrite these equations as the geodesic equation

$\displaystyle \frac{d}{dt} v^i = - \Gamma^i_{jk} v^j v^k$

where

$\displaystyle \Gamma^i_{jk} = \frac{1}{2} {\rm g}^{im} (\partial_k {\rm g}_{mj} + \partial_j {\rm g}_{mk} - \partial_m {\rm g}_{jk} )$

are the Christoffel symbols; using the Levi-Civita connection ${\nabla}$, this can be written more succinctly as

$\displaystyle (\gamma^* \nabla)_t v = 0.$

If the manifold ${M}$ is an embedded submanifold of a larger Euclidean space ${{\bf R}^n}$, with the metric ${{\rm g}}$ on ${M}$ being induced from the standard metric on ${{\bf R}^n}$, then the geodesic flow equation can be rewritten in the equivalent form

$\displaystyle \gamma''(t) \perp T_{\gamma(t)} M,$

where ${\gamma}$ is now viewed as taking values in ${{\bf R}^n}$, and ${T_{\gamma(t)} M}$ is similarly viewed as a subspace of ${{\bf R}^n}$. This is intuitively obvious from the geometric interpretation of geodesics: if the curvature of a curve ${\gamma}$ contains components that are transverse to the manifold rather than normal to it, then it is geometrically clear that one should be able to shorten the curve by shifting it along the indicated transverse direction. It is an instructive exercise to rigorously formulate the above intuitive argument.
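As a concrete illustration of the embedded picture (my addition, not from the original post): on the unit sphere the condition $\gamma'' \perp T_\gamma M$ becomes $\gamma'' = -|\gamma'|^2 \gamma$ (the acceleration is purely centripetal), and a few lines of Python can integrate this while checking the conserved quantities. The integration scheme and step sizes here are arbitrary choices for the demonstration.

```python
# Integrate geodesic flow on the unit sphere S^2 embedded in R^3 and check
# that the trajectory stays on the sphere and the speed stays constant.
import numpy as np

def step(x, v, dt):
    a = -np.dot(v, v) * x            # normal (centripetal) acceleration
    return x + dt * v + 0.5 * dt**2 * a, v + dt * a

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.7, 0.7])        # initial speed sqrt(0.98) ~ 0.9899
for _ in range(20000):
    x, v = step(x, v, 1e-4)
    x /= np.linalg.norm(x)           # project the point back onto the sphere
    v -= np.dot(v, x) * x            # keep the velocity tangent

print(np.linalg.norm(x), np.linalg.norm(v))   # ~1.0 and ~0.9899
```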
This fact also conforms well with one’s physical intuition of geodesic flow as the motion of a free particle constrained to be in ${M}$; the normal quantity ${\gamma''(t)}$ then corresponds to the centripetal force necessary to keep the particle lying in ${M}$ (otherwise it would fly off along a tangent line to ${M}$, as per Newton’s first law). The precise value of the normal vector ${\gamma''(t)}$ can be computed via the second fundamental form as ${\gamma''(t) = \Pi_{\gamma(t)}( \gamma'(t), \gamma'(t) )}$, but we will not need this formula here.

In a beautiful paper from 1966, Vladimir Arnold (who, sadly, passed away last week) observed that many basic equations in physics, including the Euler equations of motion of a rigid body, and also (by what is a priori a remarkable coincidence) the Euler equations of fluid dynamics of an inviscid incompressible fluid, can be viewed (formally, at least) as geodesic flows on a (finite or infinite dimensional) Riemannian manifold. And not just any Riemannian manifold: the manifold is a Lie group (or, to be truly pedantic, a torsor of that group), equipped with a right-invariant (or left-invariant, depending on one’s conventions) metric. In the context of rigid bodies, the Lie group is the group ${SE(3) = {\bf R}^3 \rtimes SO(3)}$ of rigid motions; in the context of incompressible fluids, it is the group ${Sdiff({\bf R}^3)}$ of measure-preserving diffeomorphisms. The right-invariance makes the Hamiltonian mechanics of geodesic flow in this context (where it is sometimes known as the Euler-Arnold equation or the Euler-Poisson equation) quite special; it becomes (formally, at least) completely integrable, and also indicates (in principle, at least) a way to reformulate these equations in a Lax pair formulation. And indeed, many further completely integrable equations, such as the Korteweg-de Vries equation, have since been reinterpreted as Euler-Arnold flows. From a physical perspective, this all fits well with the interpretation of geodesic flow as the free motion of a system subject only to a physical constraint, such as rigidity or incompressibility. (I do not know, though, of a similarly intuitive explanation as to why the Korteweg-de Vries equation is a geodesic flow.)

One consequence of being a completely integrable system is that one has a large number of conserved quantities. In the case of the Euler equations of motion of a rigid body, the conserved quantities are the linear and angular momentum (as observed in an external reference frame, rather than the frame of the object). In the case of the two-dimensional Euler equations, the conserved quantities are the pointwise values of the vorticity (as viewed in Lagrangian coordinates, rather than Eulerian coordinates). In higher dimensions, the conserved quantity is now (the Hodge star of) the vorticity, again viewed in Lagrangian coordinates. The vorticity itself then evolves by the vorticity equation, and is subject to vortex stretching as the diffeomorphism between the initial and final state becomes increasingly sheared.

The elegant Euler-Arnold formalism is reasonably well-known in some circles (particularly in Lagrangian and symplectic dynamics, where it can be viewed as a special case of the Euler-Poincaré formalism or Lie-Poisson formalism respectively), but not in others; I for instance was only vaguely aware of it until recently, and I think that even in fluid mechanics this perspective on the subject is not always emphasised.
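For the reader who has not seen the formalism in action, it may help to record the classical Euler equations of a rigid body, which arise from the Euler-Arnold picture in the simplest nontrivial case (standard material, included here purely for illustration): writing ${\Omega = (\Omega_1,\Omega_2,\Omega_3)}$ for the angular velocity in the body frame and ${I_1, I_2, I_3}$ for the principal moments of inertia, the equations read $\displaystyle I_1 \dot \Omega_1 = (I_2 - I_3) \Omega_2 \Omega_3; \quad I_2 \dot \Omega_2 = (I_3 - I_1) \Omega_3 \Omega_1; \quad I_3 \dot \Omega_3 = (I_1 - I_2) \Omega_1 \Omega_2,$ and a short computation verifies that they conserve both the kinetic energy ${\frac{1}{2}(I_1 \Omega_1^2 + I_2 \Omega_2^2 + I_3 \Omega_3^2)}$ and the squared magnitude ${(I_1 \Omega_1)^2 + (I_2 \Omega_2)^2 + (I_3 \Omega_3)^2}$ of the angular momentum as seen in the body frame.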
Given the circumstances, I thought it would therefore be appropriate to present Arnold’s original 1966 paper here. (For a more modern treatment of these topics, see the books of Arnold-Khesin and Marsden-Ratiu.) In order to avoid technical issues, I will work formally, ignoring questions of regularity or integrability, and pretending that infinite-dimensional manifolds behave in exactly the same way as their finite-dimensional counterparts. In the finite-dimensional setting, it is not difficult to make all of the formal discussion below rigorous; but the situation in infinite dimensions is substantially more delicate. (Indeed, it is a notorious open problem whether the Euler equations for incompressible fluids even form a global continuous flow in a reasonable topology in the first place!) However, I do not want to discuss these analytic issues here; see this paper of Ebin and Marsden for a treatment of these topics.

Ben Green, Tamar Ziegler, and I have just uploaded to the arXiv the note “An inverse theorem for the Gowers norm ${U^{s+1}[N]}$ (announcement)“, not intended for publication. This is an announcement of our forthcoming solution of the inverse conjecture for the Gowers norm, which roughly speaking asserts that the ${U^{s+1}[N]}$ norm of a bounded function is large if and only if that function correlates with an ${s}$-step nilsequence of bounded complexity. The full argument is quite lengthy (our most recent draft is about 90 pages long), but this is in large part due to the presence of various technical details which are necessary in order to make the argument fully rigorous. In this 20-page announcement, we instead sketch a heuristic proof of the conjecture, relying on a number of “cheats” to avoid the above-mentioned technical details. In particular:

• In the announcement, we rely on somewhat vaguely defined terms such as “bounded complexity” or “linearly independent with respect to bounded linear combinations” or “equivalent modulo lower step errors” without specifying them rigorously. In the full paper we will use the machinery of nonstandard analysis to rigorously and precisely define these concepts.
• In the announcement, we deal with the traditional linear nilsequences rather than the polynomial nilsequences that turn out to be better suited for finitary equidistribution theory, but require more notation and machinery in order to use.
• In a similar vein, we restrict attention to scalar-valued nilsequences in the announcement, though due to topological obstructions arising from the twisted nature of the torus bundles used to build nilmanifolds, we will have to deal instead with vector-valued nilsequences in the main paper.
• In the announcement, we pretend that nilsequences can be described by bracket polynomial phases, at least for the sake of making examples, although strictly speaking bracket polynomial phases only give examples of piecewise Lipschitz nilsequences rather than genuinely Lipschitz nilsequences.

With these cheats, it becomes possible to shorten the length of the argument substantially. Also, it becomes clearer that the main task is a cohomological one; in order to inductively deduce the inverse conjecture for a given step ${s}$ from the conjecture for the preceding step ${s-1}$, the basic problem is to show that a certain (quasi-)cocycle is necessarily a (quasi-)coboundary.
This in turn requires a detailed analysis of the top order and second-to-top order terms in the cocycle, which requires a certain amount of nilsequence equidistribution theory and additive combinatorics, as well as a “sunflower decomposition” to arrange the various nilsequences one encounters into a usable “normal form”. It is often the case in modern mathematics that the informal heuristic way to explain an argument looks quite different (and is significantly shorter) than the way one would formally present the argument with all the details. This seems to be particularly true in this case; at a superficial level, the full paper has a very different set of notation than the announcement, and a lot of space is invested in setting up additional machinery that one can quickly gloss over in the announcement. We hope though that the announcement can provide a “road map” to help navigate the much longer paper to come. In Notes 5, we saw that the Gowers uniformity norms on vector spaces ${{\bf F}^n}$ in high characteristic were controlled by classical polynomial phases ${e(\phi)}$. Now we study the analogous situation on cyclic groups ${{\bf Z}/N{\bf Z}}$. Here, there is an unexpected surprise: the polynomial phases (classical or otherwise) are no longer sufficient to control the Gowers norms ${U^{s+1}({\bf Z}/N{\bf Z})}$ once ${s}$ exceeds ${1}$. To resolve this problem, one must enlarge the space of polynomials to a larger class. It turns out that there are at least three closely related options for this class: the local polynomials, the bracket polynomials, and the nilsequences. Each of the three classes has its own strengths and weaknesses, but in my opinion the nilsequences seem to be the most natural class, due to the rich algebraic and dynamical structure coming from the nilpotent Lie group undergirding such sequences. For reasons of space we shall focus primarily on the nilsequence viewpoint here. Traditionally, nilsequences have been defined in terms of linear orbits ${n \mapsto g^n x}$ on nilmanifolds ${G/\Gamma}$; however, in recent years it has been realised that it is convenient for technical reasons (particularly for the quantitative “single-scale” theory) to generalise this setup to that of polynomial orbits ${n \mapsto g(n) \Gamma}$, and this is the perspective we will take here. A polynomial phase ${n \mapsto e(\phi(n))}$ on a finite abelian group ${H}$ is formed by starting with a polynomial ${\phi: H \rightarrow {\bf R}/{\bf Z}}$ to the unit circle, and then composing it with the exponential function ${e: {\bf R}/{\bf Z} \rightarrow {\bf C}}$. To create a nilsequence ${n \mapsto F(g(n) \Gamma)}$, we generalise this construction by starting with a polynomial ${g \Gamma: H \rightarrow G/\Gamma}$ into a nilmanifold ${G/\Gamma}$, and then composing this with a Lipschitz function ${F: G/\Gamma \rightarrow {\bf C}}$. (The Lipschitz regularity class is convenient for minor technical reasons, but one could also use other regularity classes here if desired.) These classes of sequences certainly include the polynomial phases, but are somewhat more general; for instance, they almost include bracket polynomial phases such as ${n \mapsto e( \lfloor \alpha n \rfloor \beta n )}$. (The “almost” here is because the relevant functions ${F: G/\Gamma \rightarrow {\bf C}}$ involved are only piecewise Lipschitz rather than Lipschitz, but this is primarily a technical issue and one should view bracket polynomial phases as “morally” being nilsequences.) 
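To see how the “almost” arises in a concrete case (a standard computation, sketched here with details suppressed): take ${G}$ to be the Heisenberg group of upper triangular unipotent ${3 \times 3}$ real matrices, and ${\Gamma}$ the subgroup of such matrices with integer entries, so that ${G/\Gamma}$ is a ${2}$-step nilmanifold. The linear (hence polynomial) orbit $\displaystyle g(n) := \begin{pmatrix} 1 & \alpha n & 0 \\ 0 & 1 & \beta n \\ 0 & 0 & 1 \end{pmatrix},$ after multiplying on the right by a suitable element of ${\Gamma}$ to place the coset ${g(n) \Gamma}$ inside the fundamental domain in which all three coordinates lie in ${[0,1)}$, acquires the top-right coordinate ${-\alpha n \lfloor \beta n \rfloor \hbox{ mod } 1}$. Composing with the function ${F}$ that reads off ${e(z)}$ of this coordinate thus produces the bracket quadratic phase ${n \mapsto e(-\alpha n \lfloor \beta n \rfloor)}$; but ${F}$, while Lipschitz in the interior of the fundamental domain, is discontinuous across its boundary, which is precisely the piecewise Lipschitz issue just mentioned.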
In these notes we set out the basic theory for these nilsequences, including their equidistribution theory (which generalises the equidistribution theory of polynomial flows on tori from Notes 1) and show that they are indeed obstructions to the Gowers norm being small. This leads to the inverse conjecture for the Gowers norms, which asserts that the Gowers norms on cyclic groups are indeed controlled by these sequences.

A (complex, semi-definite) inner product space is a complex vector space ${V}$ equipped with a sesquilinear form ${\langle, \rangle: V \times V \rightarrow {\bf C}}$ which is conjugate symmetric, in the sense that ${\langle w, v \rangle = \overline{\langle v, w \rangle}}$ for all ${v,w \in V}$, and non-negative in the sense that ${\langle v, v \rangle \geq 0}$ for all ${v \in V}$. By inspecting the non-negativity of ${\langle v+\lambda w, v+\lambda w\rangle}$ for complex numbers ${\lambda \in {\bf C}}$, one obtains the Cauchy-Schwarz inequality $\displaystyle |\langle v, w \rangle| \leq |\langle v, v \rangle|^{1/2} |\langle w, w \rangle|^{1/2};$ if one then defines ${\|v\| := |\langle v, v \rangle|^{1/2}}$, one then quickly concludes the triangle inequality $\displaystyle \|v + w \| \leq \|v\| + \|w\|$ which then soon implies that ${\| \|}$ is a semi-norm on ${V}$. If we make the additional assumption that the inner product ${\langle,\rangle}$ is positive definite, i.e. that ${\langle v, v \rangle > 0}$ whenever ${v}$ is non-zero, then this semi-norm becomes a norm. If ${V}$ is complete with respect to the metric ${d(v,w) := \|v-w\|}$ induced by this norm, then ${V}$ is called a Hilbert space.

The above material is extremely standard, and can be found in any graduate real analysis course; I myself covered it here. But what is perhaps less well known (except inside the fields of additive combinatorics and ergodic theory) is that the above theory of classical Hilbert spaces is just the first case of a hierarchy of higher order Hilbert spaces, in which the binary inner product ${f, g \mapsto \langle f, g \rangle}$ is replaced with a ${2^d}$-ary inner product ${(f_\omega)_{\omega \in \{0,1\}^d} \mapsto \langle (f_\omega)_{\omega \in \{0,1\}^d} \rangle}$ that obeys an appropriate generalisation of the conjugate symmetry, sesquilinearity, and positive semi-definiteness axioms. Such inner products then obey a higher order Cauchy-Schwarz inequality, known as the Cauchy-Schwarz-Gowers inequality, and then also obey a triangle inequality and become semi-norms (or norms, if the inner product was non-degenerate). Examples of such norms and spaces include the Gowers uniformity norms ${\| \|_{U^d(G)}}$, the Gowers box norms ${\| \|_{\Box^d(X_1 \times \ldots \times X_d)}}$, and the Gowers-Host-Kra seminorms ${\| \|_{U^d(X)}}$; a more elementary example is the family of Lebesgue spaces ${L^{2^d}(X)}$ when the exponent is a power of two. They play a central role in modern additive combinatorics and in certain aspects of ergodic theory, particularly those relating to Szemerédi’s theorem (or its ergodic counterpart, the Furstenberg multiple recurrence theorem); they also arise in the regularity theory of hypergraphs (which is not unrelated to the other two topics).
A simple example to keep in mind here is the order two Hilbert space ${L^4(X)}$ on a measure space ${X = (X,{\mathcal B},\mu)}$, where the inner product takes the form $\displaystyle \langle f_{00}, f_{01}, f_{10}, f_{11} \rangle_{L^4(X)} := \int_X f_{00}(x) \overline{f_{01}(x)} \overline{f_{10}(x)} f_{11}(x)\ d\mu(x).$

In this brief note I would like to set out the abstract theory of such higher order Hilbert spaces. This is not new material, being already implicit in the breakthrough papers of Gowers and Host-Kra, but I just wanted to emphasise the fact that the material is abstract, and is not particularly tied to any explicit choice of norm so long as a certain axiom is satisfied. (Also, I wanted to write things down so that I would not have to reconstruct this formalism again in the future.) Unfortunately, the notation is quite heavy and the abstract axiom is a little strange; it may be that there is a better way to formulate things. In this particular case it does seem that a concrete approach is significantly clearer, but abstraction is at least possible. Note: the discussion below is likely to be comprehensible only to readers who already have some exposure to the Gowers norms.

(Linear) Fourier analysis can be viewed as a tool to study an arbitrary function ${f}$ on (say) the integers ${{\bf Z}}$, by looking at how such a function correlates with linear phases such as ${n \mapsto e(\xi n)}$, where ${e(x) := e^{2\pi i x}}$ is the fundamental character, and ${\xi \in {\bf R}}$ is a frequency. These correlations control a number of expressions relating to ${f}$, such as the expected behaviour of ${f}$ on arithmetic progressions ${n, n+r, n+2r}$ of length three. In this course we will be studying higher-order correlations, such as the correlation of ${f}$ with quadratic phases such as ${n \mapsto e(\xi n^2)}$, as these will control the expected behaviour of ${f}$ on more complex patterns, such as arithmetic progressions ${n, n+r, n+2r, n+3r}$ of length four. In order to do this, we must first understand the behaviour of exponential sums such as $\displaystyle \sum_{n=1}^N e( \alpha n^2 ).$

Such sums are closely related to the distribution of expressions such as ${\alpha n^2 \hbox{ mod } 1}$ in the unit circle ${{\bf T} := {\bf R}/{\bf Z}}$, as ${n}$ varies from ${1}$ to ${N}$. More generally, one is interested in the distribution of polynomials ${P: {\bf Z}^d \rightarrow {\bf T}}$ of one or more variables taking values in a torus ${{\bf T}}$; for instance, one might be interested in the distribution of the quadruplet ${(\alpha n^2, \alpha (n+r)^2, \alpha(n+2r)^2, \alpha(n+3r)^2)}$ as ${n,r}$ both vary from ${1}$ to ${N}$. Roughly speaking, once we understand these types of distributions, then the general machinery of quadratic Fourier analysis will then allow us to understand the distribution of the quadruplet ${(f(n), f(n+r), f(n+2r), f(n+3r))}$ for more general classes of functions ${f}$; this can lead for instance to an understanding of the distribution of arithmetic progressions of length ${4}$ in the primes, if ${f}$ is somehow related to the primes. More generally, to find arithmetic progressions such as ${n,n+r,n+2r,n+3r}$ in a set ${A}$, it would suffice to understand the equidistribution of the quadruplet ${(1_A(n), 1_A(n+r), 1_A(n+2r), 1_A(n+3r))}$ in ${\{0,1\}^4}$ as ${n}$ and ${r}$ vary.
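As a first illustration of how such quadratic exponential sums are brought under control, let us record the classical Weyl differencing computation (standard material, included as an example): squaring the sum and substituting ${n = m + h}$, one obtains $\displaystyle |\sum_{n=1}^N e( \alpha n^2 )|^2 = \sum_{|h| < N} e( \alpha h^2 ) \sum_{m \in I_h} e( 2 \alpha h m ),$ where ${I_h \subset [N]}$ is an interval of integers depending on ${h}$. The inner sum is a geometric series of magnitude ${O( \min( N, \|2\alpha h\|^{-1} ) )}$, where ${\|x\|}$ denotes the distance from ${x}$ to the nearest integer, and so the distribution of the quadratic phase ${\alpha n^2}$ is reduced to that of the linear expressions ${2 \alpha h \hbox{ mod } 1}$. With this example in hand, we return to the general theme of counting patterns via equidistribution.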
This is the starting point for the fundamental connection between combinatorics (and more specifically, the task of finding patterns inside sets) and dynamics (and more specifically, the theory of equidistribution and recurrence in measure-preserving dynamical systems, which is a subfield of ergodic theory). This connection was explored in one of my previous classes; it will also be important in this course (particularly as a source of motivation), but the primary focus will be on finitary, and Fourier-based, methods.

The theory of equidistribution of polynomial orbits was developed in the linear case by Dirichlet and Kronecker, and in the polynomial case by Weyl. There are two regimes of interest: the (qualitative) asymptotic regime in which the scale parameter ${N}$ is sent to infinity, and the (quantitative) single-scale regime in which ${N}$ is kept fixed (but large). Traditionally, it is the asymptotic regime which is studied, which connects the subject to other asymptotic fields of mathematics, such as dynamical systems and ergodic theory. However, for many applications (such as the study of the primes), it is the single-scale regime which is of greater importance. The two regimes are not directly equivalent, but are closely related: the single-scale theory can usually be used to derive analogous results in the asymptotic regime, and conversely the arguments in the asymptotic regime can serve as a simplified model to show the way to proceed in the single-scale regime. The analogy between the two can be made tighter by introducing the (qualitative) ultralimit regime, which is formally equivalent to the single-scale regime (except for the fact that explicitly quantitative bounds are abandoned in the ultralimit), but resembles the asymptotic regime quite closely. We will view the equidistribution theory of polynomial orbits as a special case of Ratner’s theorem, which we will study in more generality later in this course.

For the finitary portion of the course, we will be using asymptotic notation: ${X \ll Y}$, ${Y \gg X}$, or ${X = O(Y)}$ denotes the bound ${|X| \leq CY}$ for some absolute constant ${C}$, and if we need ${C}$ to depend on additional parameters then we will indicate this by subscripts, e.g. ${X \ll_d Y}$ means that ${|X| \leq C_d Y}$ for some ${C_d}$ depending only on ${d}$. In the ultralimit theory we will use an analogue of asymptotic notation, which we will review later in these notes.

Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the arithmetic regularity lemma, which can be viewed as the counterpart of the Szemerédi regularity lemma, in which the object being regularised is a function ${f: [N] \rightarrow [0,1]}$ on a discrete interval ${[N] = \{1,\ldots,N\}}$ rather than a graph, and the type of patterns one wishes to count are additive patterns (such as arithmetic progressions ${n,n+d,\ldots,n+(k-1)d}$) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be decomposed as a degree ${\leq s}$ nilsequence (or more precisely, a variant of a nilsequence that we call a virtual irrational nilsequence), plus a small error, plus a third error which is extremely tiny in the Gowers uniformity norm ${U^{s+1}[N]}$.
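To illustrate the contrast between the two regimes with the simplest example (standard facts, stated loosely): in the asymptotic regime, the orbit ${(\alpha n \hbox{ mod } 1)_{n \in {\bf N}}}$ is equidistributed in ${{\bf T}}$ if and only if ${\alpha}$ is irrational. The single-scale analogue is quantitative: roughly speaking, the finite orbit ${(\alpha n \hbox{ mod } 1)_{n \in [N]}}$ fails to be equidistributed to accuracy ${\delta}$ only when ${\alpha}$ lies close to a rational of small height, in the sense that ${\|q\alpha\| \ll \delta^{-O(1)}/N}$ for some ${1 \leq q \ll \delta^{-O(1)}}$. Letting ${N \rightarrow \infty}$ (or taking ultralimits) recovers the qualitative criterion from the quantitative one.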
In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an arithmetic counting lemma that gives an integral formula for counting additive patterns weighted by such nilsequences.

The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree ${1}$ case ${s=1}$, this result is essentially due to Green. It is powered by the inverse conjecture for the Gowers norms, which we and Tamar Ziegler have recently established (paper to be forthcoming shortly; the ${k=4}$ case of our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.

The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if ${A \subset [N]}$ has density ${\alpha}$, and ${\epsilon > 0}$, then there exist ${\gg_{\alpha,\epsilon} N}$ shifts ${h}$ for which ${A}$ contains at least ${(\alpha^4 - \epsilon)N}$ arithmetic progressions of length ${k=4}$ of spacing ${h}$. (The ${k=3}$ case of this conjecture was established earlier by Green; the ${k=5}$ case is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms over ${[N]}$ indeed matches the conjectured value predicted in their first paper.

In all three applications, the scheme of proof can be described as follows:

• Apply the arithmetic regularity lemma, and decompose a relevant function ${f}$ into three pieces, ${f_{nil}, f_{sml}, f_{unf}}$.
• The uniform part ${f_{unf}}$ is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
• The contribution of the (virtual, irrational) nilsequence ${f_{nil}}$ can be controlled using the arithmetic counting lemma.
• Finally, one needs to check that the contribution of the small error ${f_{sml}}$ does not overwhelm the main term ${f_{nil}}$. This is the trickiest bit; one often needs to use the counting lemma again to show that one can find a set of arithmetic patterns for ${f_{nil}}$ that is sufficiently “equidistributed” that it is not impacted by the small error.

To illustrate the last point, let us give the following example. Suppose we have a set ${A \subset [N]}$ of some positive density (say ${|A| = 10^{-1} N}$) and we have managed to prove that ${A}$ contains a reasonable number of arithmetic progressions of length ${5}$ (say), e.g. it contains at least ${10^{-10} N^2}$ such progressions. Now we perturb ${A}$ by deleting a small number, say ${10^{-2} N}$, elements from ${A}$ to create a new set ${A'}$. Can we still conclude that the new set ${A'}$ contains any arithmetic progressions of length ${5}$?
Unfortunately, the answer could be no; conceivably, all of the ${10^{-10} N^2}$ arithmetic progressions in ${A}$ could be wiped out by the ${10^{-2} N}$ elements removed from ${A}$, since each such element of ${A}$ could be associated with up to ${|A|}$ (or even ${5|A|}$) arithmetic progressions in ${A}$. But suppose we knew that the ${10^{-10} N^2}$ arithmetic progressions in ${A}$ were equidistributed, in the sense that each element in ${A}$ belonged to the same number of such arithmetic progressions, namely ${5 \times 10^{-9} N}$. Then each element deleted from ${A}$ only removes at most ${5 \times 10^{-9} N}$ progressions, and so one can safely remove ${10^{-2} N}$ elements from ${A}$ and still retain some arithmetic progressions. The same argument works if the arithmetic progressions are only approximately equidistributed, in the sense that the number of progressions that a given element ${a \in A}$ belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.

A succinct (but slightly inaccurate) summary of the regularity+counting lemma strategy would be that in order to solve a problem in additive combinatorics, it “suffices to check it for nilsequences”. But this should come with a caveat, due to the issue of the small error above; in addition to checking it for nilsequences, the answer in the nilsequence case must be sufficiently “dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.

One last “production note”. Like our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost as we could work on different parts of the paper simultaneously (this was particularly important this time round as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts, and to help coordinate the writing process. I’m also looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (and of plain old email) in the future.

Tim Austin, Tanja Eisner, and I have just uploaded to the arXiv our joint paper Nonconventional ergodic averages and multiple recurrence for von Neumann dynamical systems, submitted to Pacific Journal of Mathematics. This project started with the observation that the multiple recurrence theorem of Furstenberg (and the related multiple convergence theorem of Host and Kra) could be interpreted in the language of dynamical systems of commutative finite von Neumann algebras, which naturally raised the question of the extent to which the results hold in the noncommutative setting. The short answer is “yes for small averages, but not for long ones”.
The Furstenberg multiple recurrence theorem can be phrased as follows: if ${X = (X, {\mathcal X}, \mu)}$ is a probability space with a measure-preserving shift ${T:X \rightarrow X}$ (which naturally induces an isomorphism ${\alpha: L^\infty(X) \rightarrow L^\infty(X)}$ by setting ${\alpha a := a \circ T^{-1}}$), ${a \in L^\infty(X)}$ is non-negative with positive trace ${\tau(a) := \int_X a\ d\mu}$, and ${k \geq 1}$ is an integer, then one has $\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0.$ In particular, ${\tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0}$ for all ${n}$ in a set of positive upper density. This result is famously equivalent to Szemerédi’s theorem on arithmetic progressions.

The Host-Kra multiple convergence theorem makes the related assertion that if ${a_0,\ldots,a_{k-1} \in L^\infty(X)}$, then the scalar averages $\displaystyle \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )$ converge to a limit as ${N \rightarrow \infty}$; in fact, the function averages $\displaystyle \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})$ converge in (say) ${L^2(X)}$ norm.

The space ${L^\infty(X)}$ is a commutative example of a von Neumann algebra: an algebra of bounded linear operators on a complex Hilbert space ${H}$ which is closed in the weak operator topology, and under taking adjoints. Indeed, one can take ${H}$ to be ${L^2(X)}$, and identify each element ${m}$ of ${L^\infty(X)}$ with the multiplier operator ${a \mapsto ma}$. The operation ${\tau: a \mapsto \int_X a\ d\mu}$ is then a finite trace for this algebra, i.e. a linear map from the algebra to the scalars ${{\mathbb C}}$ such that ${\tau(ab)=\tau(ba)}$, ${\tau(a^*) = \overline{\tau(a)}}$, and ${\tau(a^* a) \geq 0}$, with equality iff ${a=0}$. The shift ${\alpha: L^\infty(X) \rightarrow L^\infty(X)}$ is then an automorphism of this algebra (preserving the trace and conjugation).

We can generalise this situation to the noncommutative setting. Define a von Neumann dynamical system ${(M, \tau, \alpha)}$ to be a von Neumann algebra ${M}$ with a finite trace ${\tau}$ and an automorphism ${\alpha: M \rightarrow M}$. In addition to the commutative examples generated by measure-preserving systems, we give three other examples here:

• (Matrices) ${M = M_n({\mathbb C})}$ is the algebra of ${n \times n}$ complex matrices, with trace ${\tau(a) = \frac{1}{n} \hbox{tr}(a)}$ and shift ${\alpha(a) := UaU^{-1}}$, where ${U}$ is a fixed unitary ${n \times n}$ matrix.
• (Group algebras) ${M = \overline{{\mathbb C} G}}$ is the closure of the group algebra ${{\mathbb C} G}$ of a discrete group ${G}$ (i.e. the algebra of finite formal complex combinations of group elements), which acts on the Hilbert space ${\ell^2(G)}$ by convolution (identifying each group element with its Kronecker delta function). A trace is given by ${\tau(a) = \langle a \delta_0, \delta_0 \rangle_{\ell^2(G)}}$, where ${\delta_0 \in \ell^2(G)}$ is the Kronecker delta at the identity. Any automorphism ${T: G \rightarrow G}$ of the group induces a shift ${\alpha: M \rightarrow M}$.
• (Noncommutative torus) ${M}$ is the von Neumann algebra acting on ${L^2(({\mathbb R}/{\mathbb Z})^2)}$ generated by the multiplier operator ${f(x,y) \mapsto e^{2\pi i x} f(x,y)}$ and the shifted multiplier operator ${f(x,y) \mapsto e^{2\pi i y} f(x+\alpha,y)}$, where ${\alpha \in {\mathbb R}/{\mathbb Z}}$ is fixed.
A trace is given by ${\tau(a) = \langle 1, a1\rangle_{L^2(({\mathbb R}/{\mathbb Z})^2)}}$, where ${1 \in L^2(({\mathbb R}/{\mathbb Z})^2)}$ is the constant function.

Inspired by noncommutative generalisations of other results in commutative analysis, one can then ask the following questions, for a fixed ${k \geq 1}$ and for a fixed von Neumann dynamical system ${(M,\tau,\alpha)}$:

• (Recurrence on average) Whenever ${a \in M}$ is non-negative with positive trace, is it true that $\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0?$
• (Recurrence on a dense set) Whenever ${a \in M}$ is non-negative with positive trace, is it true that $\displaystyle \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0$ for all ${n}$ in a set of positive upper density?
• (Weak convergence) With ${a_0,\ldots,a_{k-1} \in M}$, is it true that $\displaystyle \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )$ converges?
• (Strong convergence) With ${a_1,\ldots,a_{k-1} \in M}$, is it true that $\displaystyle \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})$ converges in the Hilbert-Schmidt norm ${\|a\|_{L^2(M)} := \tau(a^* a)^{1/2}}$?

Note that strong convergence automatically implies weak convergence, and recurrence on average automatically implies recurrence on a dense set.

For ${k=1}$, all four questions can trivially be answered “yes”. For ${k=2}$, the answer to the above four questions is also “yes”, thanks to the von Neumann ergodic theorem for unitary operators. For ${k=3}$, we were able to establish a positive answer to the “recurrence on a dense set”, “weak convergence”, and “strong convergence” results assuming that ${M}$ is ergodic. For general ${k}$, we have a positive answer to all four questions under the assumption that ${M}$ is asymptotically abelian, which roughly speaking means that the commutators ${[a,\alpha^n b]}$ converge to zero (in an appropriate weak sense) as ${n \rightarrow \infty}$. Both of these proofs adapt the usual ergodic theory arguments; the latter result generalises some earlier work of Niculescu-Stroh-Zsido, Duvenhage, and Beyers-Duvenhage-Stroh. For the ${k=3}$ result, a key observation is that the van der Corput lemma can be used to control triple averages without requiring any commutativity; the “generalised von Neumann” trick of using multiple applications of the van der Corput trick to control higher averages, however, relies much more strongly on commutativity.

In most other situations we have counterexamples to all of these questions. In particular:

• For ${k=3}$, recurrence on average can fail on an ergodic system; indeed, one can even make the average negative. This example is ultimately based on a Behrend example construction and a von Neumann algebra construction known as the crossed product.
• For ${k=3}$, recurrence on a dense set can also fail if the ergodicity hypothesis is dropped. This also uses the Behrend example and the crossed product construction.
• For ${k=4}$, weak and strong convergence can fail even assuming ergodicity. This uses a group theoretic construction, which amusingly was inspired by Grothendieck’s interpretation of a group as a sheaf of flat connections, which I blogged about recently, and which I will discuss below the fold.
• For ${k=5}$, recurrence on a dense set fails even with the ergodicity hypothesis.
This uses a fancier version of the Behrend example due to Ruzsa in this paper of Bergelson, Host, and Kra. This example only applies for ${k \geq 5}$; we do not know for ${k=4}$ whether recurrence on a dense set holds for ergodic systems.

Ben Green, Tamar Ziegler and I have just uploaded to the arXiv our paper “An inverse theorem for the Gowers $U^4$ norm“. This paper establishes the next case of the inverse conjecture for the Gowers norm for the integers (after the $U^3$ case, which was done by Ben and myself a few years ago). This conjecture has a number of combinatorial and number-theoretic consequences; for instance, by combining this new inverse theorem with previous results, one can now get the correct asymptotic for the number of arithmetic progressions of primes of length five in any large interval $[N] = \{1,\ldots,N\}$.

To state the inverse conjecture properly requires a certain amount of notation. Given a function $f: {\Bbb Z} \to {\Bbb C}$ and a shift $h \in {\Bbb Z}$, define the multiplicative derivative $\Delta_h f(x) := f(x+h) \overline{f(x)}$ and then define the Gowers $U^{s+1}[N]$ norm of a function $f: [N] \to {\Bbb C}$ to (essentially) be the quantity

$\| f\|_{U^{s+1}[N]} := ({\Bbb E}_{h_1,\ldots,h_{s+1} \in [-N,N]} {\Bbb E}_{x \in [N]} |\Delta_{h_1} \ldots \Delta_{h_{s+1}} f(x)|)^{1/2^{s+1}},$

where we extend f by zero outside of $[N]$. (Actually, we use a slightly different normalisation to ensure that the function 1 has a $U^{s+1}$ norm of 1, but never mind this for now.)

Informally, the Gowers norm $\|f\|_{U^{s+1}[N]}$ measures the amount of bias present in the $(s+1)^{st}$ multiplicative derivatives of $f$. In particular, if $f = e(P) := e^{2\pi i P}$ for some polynomial $P: {\Bbb Z} \to {\Bbb R}$ of degree at most $s$, then the $(s+1)^{st}$ derivative of $f$ is identically 1, and so is the Gowers norm.

However, polynomial phases are not the only functions with large Gowers norm. For instance, consider the function $f(n) := e( \lfloor \sqrt{2} n \rfloor \sqrt{3} n )$, which is what we call a quadratic bracket polynomial phase. This function isn’t quite quadratic, but it is close enough to being quadratic (because one has the approximate linearity relationship $\lfloor x+y \rfloor = \lfloor x \rfloor + \lfloor y \rfloor$ holding a good fraction of the time) that it turns out that the third derivative is trivial fairly often, and the Gowers norm $\|f\|_{U^3[N]}$ is comparable to 1. This bracket polynomial phase can be modeled as a nilsequence $n \mapsto F( g(n) \Gamma )$, where $n \mapsto g(n) \Gamma$ is a polynomial orbit on a nilmanifold $G/\Gamma$, which in this case has step 2. (The function F is only piecewise smooth, due to the discontinuity in the floor function $\lfloor \rfloor$, so strictly speaking we would classify this as an almost nilsequence rather than a nilsequence, but let us ignore this technical issue here.) In fact, there is a very close relationship between nilsequences and bracket polynomial phases, but I will detail this in a later post.

The inverse conjecture for the Gowers norm, GI(s), asserts that such nilsequences are the only obstruction to the Gowers norm being small. Roughly speaking, it goes like this:

Inverse conjecture, GI(s). (Informal statement) Suppose that $f: [N] \to {\Bbb C}$ is bounded but has large $U^{s+1}[N]$ norm. Then there is an s-step nilsequence $n \mapsto F( g(n) \Gamma )$ of “bounded complexity” that correlates with f.
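For orientation, here is the standard Fourier-analytic computation in the ${s=1}$ case, phrased on a cyclic group ${{\Bbb Z}/N{\Bbb Z}}$ for simplicity (the interval setting ${[N]}$ is similar after embedding): expanding the definition of the ${U^2}$ norm and using orthogonality of characters, one has the identity

$\| f \|_{U^2}^4 = {\Bbb E}_{n,h,h'} f(n) \overline{f(n+h)} \overline{f(n+h')} f(n+h+h') = \sum_\xi |\hat f(\xi)|^4,$

where $\hat f(\xi) := {\Bbb E}_n f(n) e(-n\xi/N)$. If ${|f| \leq 1}$ then $\sum_\xi |\hat f(\xi)|^2 = {\Bbb E}_n |f(n)|^2 \leq 1$, whence $\| f \|_{U^2}^4 \leq \sup_\xi |\hat f(\xi)|^2$. Thus a large ${U^2}$ norm forces ${f}$ to correlate with a linear phase ${n \mapsto e(n\xi/N)}$, which is exactly a ${1}$-step nilsequence; the inverse conjecture extrapolates this phenomenon to higher ${s}$.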
This conjecture is trivial for s=0, is a short consequence of Fourier analysis when s=1, and was proven for s=2 by Ben and myself. In this paper we establish the s=3 case. An equivalent formulation in this case is that any bounded function $f$ of large $U^4$ norm must correlate with a “bracket cubic phase”, which is the product of a bounded number of phases from the following list

$e( \alpha n^3 + \beta n^2 + \gamma n), e( \lfloor \alpha n \rfloor \beta n^2 ), e( \lfloor \alpha n \rfloor \lfloor \beta n \rfloor \gamma n ), e( \lfloor \alpha n \rfloor \beta n )$ (*)

for various real numbers $\alpha,\beta,\gamma$.

It appears that our methods also work in higher step, though for technical reasons it is convenient to make a number of adjustments to our arguments to do so, most notably a switch from standard analysis to non-standard analysis, about which I hope to say more later. But there are a number of simplifications available in the s=3 case which make the argument significantly shorter, and so we will be writing the higher s argument in a separate paper.

The arguments largely follow those for the s=2 case (which in turn are based on this paper of Gowers). Two major new ingredients are a deployment of a normal form and equidistribution theory for bracket quadratic phases, and a combinatorial decomposition of frequency space which we call the sunflower decomposition. I will sketch these ideas below the fold.

In a previous post, we discussed the Szemerédi regularity lemma, and how a given graph could be regularised by partitioning the vertex set into random neighbourhoods. More precisely, we gave a proof of

Lemma 1 (Regularity lemma via random neighbourhoods) Let ${\varepsilon > 0}$. Then there exist integers ${M_1,\ldots,M_m}$ with the following property: whenever ${G = (V,E)}$ is a graph on finitely many vertices, if one selects one of the integers ${M_r}$ at random from ${M_1,\ldots,M_m}$, then selects ${M_r}$ vertices ${v_1,\ldots,v_{M_r} \in V}$ uniformly from ${V}$ at random, then the ${2^{M_r}}$ vertex cells ${V^{M_r}_1,\ldots,V^{M_r}_{2^{M_r}}}$ (some of which can be empty) generated by the vertex neighbourhoods ${A_t := \{ v \in V: (v,v_t) \in E \}}$ for ${1 \leq t \leq M_r}$, will obey the regularity property $\displaystyle \sum_{(V_i,V_j) \hbox{ not } \varepsilon-\hbox{regular}} |V_i| |V_j| \leq \varepsilon |V|^2 \ \ \ \ \ (1)$ with probability at least ${1-O(\varepsilon)}$, where the sum is over all pairs ${1 \leq i \leq j \leq 2^{M_r}}$ for which ${G}$ is not ${\varepsilon}$-regular between ${V_i}$ and ${V_j}$.

[Recall that a pair ${(V_i,V_j)}$ is ${\varepsilon}$-regular for ${G}$ if one has $\displaystyle |d( A, B ) - d( V_i, V_j )| \leq \varepsilon$ for any ${A \subset V_i}$ and ${B \subset V_j}$ with ${|A| \geq \varepsilon |V_i|, |B| \geq \varepsilon |V_j|}$, where ${d(A,B) := |E \cap (A \times B)|/|A| |B|}$ is the density of edges between ${A}$ and ${B}$.]

The proof was a combinatorial one, based on the standard energy increment argument. In this post I would like to discuss an alternate approach to the regularity lemma, which is an infinitary approach passing through a graph-theoretic version of the Furstenberg correspondence principle (mentioned briefly in this earlier post of mine). While this approach superficially looks quite different from the combinatorial approach, it in fact uses many of the same ingredients, most notably a reliance on random neighbourhoods to regularise the graph.
This approach was introduced by myself back in 2006, and used by Austin and by Austin and myself to establish some property testing results for hypergraphs; more recently, a closely related infinitary hypergraph removal lemma developed in the 2006 paper was also used by Austin to give new proofs of the multidimensional Szemerédi theorem and of the density Hales-Jewett theorem (the latter being a spinoff of the polymath1 project). For various technical reasons we will not be able to use the correspondence principle to recover Lemma 1 in its full strength; instead, we will establish the following slightly weaker variant.

Lemma 2 (Regularity lemma via random neighbourhoods, weak version) Let ${\varepsilon > 0}$. Then there exists an integer ${M_*}$ with the following property: whenever ${G = (V,E)}$ is a graph on finitely many vertices, there exists ${1 \leq M \leq M_*}$ such that if one selects ${M}$ vertices ${v_1,\ldots,v_{M} \in V}$ uniformly from ${V}$ at random, then the ${2^{M}}$ vertex cells ${V^{M}_1,\ldots,V^{M}_{2^{M}}}$ generated by the vertex neighbourhoods ${A_t := \{ v \in V: (v,v_t) \in E \}}$ for ${1 \leq t \leq M}$, will obey the regularity property (1) with probability at least ${1-\varepsilon}$.

Roughly speaking, Lemma 1 asserts that one can regularise a large graph ${G}$ with high probability by using ${M_r}$ random neighbourhoods, where ${M_r}$ is chosen at random from one of a number of choices ${M_1,\ldots,M_m}$; in contrast, the weaker Lemma 2 asserts that one can regularise a large graph ${G}$ with high probability by using some integer ${M}$ from ${1,\ldots,M_*}$, but the exact choice of ${M}$ depends on ${G}$, and it is not guaranteed that a randomly chosen ${M}$ will be likely to work. While Lemma 2 is strictly weaker than Lemma 1, it still implies the (weighted) Szemerédi regularity lemma (Lemma 2 from the previous post).
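To illustrate the notion of ${\varepsilon}$-regularity recalled above (a standard pair of examples, not from the original post): if ${G}$ is complete bipartite between ${V_i}$ and ${V_j}$ then ${d(A,B) = 1}$ for all non-empty ${A \subset V_i}$ and ${B \subset V_j}$, so the pair ${(V_i,V_j)}$ is ${\varepsilon}$-regular for every ${\varepsilon > 0}$ (and similarly for an empty bipartite graph). By contrast, if one splits ${V_i = X_1 \cup X_2}$ and ${V_j = Y_1 \cup Y_2}$ into equal halves and joins ${X_1}$ completely to ${Y_1}$ with no other edges, then ${d(V_i,V_j) = 1/4}$ while ${d(X_1,Y_1) = 1}$, so the pair fails to be ${\varepsilon}$-regular for every ${\varepsilon \leq 1/2}$.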
https://www.physicsforums.com/threads/finding-vector-potential.531128/
# Homework Help: Finding vector potential

1. Sep 17, 2011

### Idoubt

In the problem of finding the vector potential of a vector F = yz i + xz j + xy k, the solution given in Griffiths' solution manual is http://img843.imageshack.us/img843/2725/vectorpotential.jpg [Broken] But I don't understand how we can integrate $\frac{\partial Az}{\partial y}$ = yz + $\frac{ \partial Ay}{\partial z}$ and get only f(x,z); why can't the partial w.r.t. z be a function of y? E.g. if A = xyz then the partial w.r.t. z is xy, which is a function of y.

Last edited by a moderator: May 5, 2017

2. Sep 17, 2011

### vanhees71

I don't know how Griffiths comes to his solution, but I'd use the following simpler idea. The equation $$\vec{F}=\vec{\nabla} \times \vec{A}$$ has a solution $\vec{A}$ for a given $\vec{F}$, if and only if $$\vec{\nabla} \cdot \vec{F}=0,$$ which is fulfilled for your example. This solution is not unique, but only determined up to a potential field, i.e., a gradient of a scalar field. Thus, we can impose one constraint. Here, I use the "axial gauge condition" $$A_z=0.$$ Then we have $$\vec{\nabla} \times \vec{A}=\begin{pmatrix} -\partial_z A_y \\ \partial_z A_x \\ \partial_x A_y - \partial_y A_x \end{pmatrix} \stackrel{!}{=} \begin{pmatrix} yz \\ xz \\ xy \end{pmatrix}$$ From the first line we get $$A_y=-\int_0^z \mathrm{d} z F_x + A_y'(x,y) = -\frac{1}{2} y z^2 + A_y'(x,y)$$ and from the second line $$A_x=\int_0^z \mathrm{d} z F_y+A_x'(x,y)=\frac{1}{2}x z^2 + A_x'(x,y).$$ Plugging these into the third line gives $$F_z=-\int_0^z \mathrm{d} z (\partial_x F_x + \partial_y F_y)+\partial_x A_y'-\partial_y A_x'.$$ Because of $\vec{\nabla} \cdot \vec{F}=0$, we have $$F_z=\int_0^z \mathrm{d} z \partial_z F_z+\partial_x A_y'-\partial_y A_x' =F_z(x,y,z)-F_z(x,y,0) + \partial_x A_y'(x,y) - \partial_y A_x'(x,y) .$$ Now we can again arbitrarily set $A_x'(x,y)=0$. To fulfill the above equation, we just have to set $$\partial_x A_y'=F_z(x,y,0)=xy \; \Rightarrow A_y'=\frac{1}{2} x^2 y+A_y''(y)$$. Of course, $A_y''=0$ is good enough since it doesn't contribute to the curl at all. Plugging everything together leads to $$\vec{A}=\begin{pmatrix} x z^2/2 \\ (x^2 y-y z^2)/2 \\ 0 \end{pmatrix}.$$ Finally, it's good to check whether everything is fine. Thus we take the curl $$\vec{\nabla} \times \vec{A}=\begin{pmatrix} -\partial_z A_y \\ \partial_z A_x \\ \partial_x A_y - \partial_y A_x \end{pmatrix}=\begin{pmatrix} yz \\ xz \\ xy \end{pmatrix}=\vec{F}.$$ Thus, we have found a vector potential for $\vec{F}$.

3. Sep 18, 2011

### Idoubt

Can you explain what the axial gauge condition is and how it makes Az = 0?

4. Sep 18, 2011

### vanhees71

The vector potential for a given solenoidal vector field is determined up to a gradient of a scalar field since $\vec{\nabla} \times \vec{\nabla} \chi=0$ for any scalar field, $\chi$. Now, suppose you have a solution to the equation $$\vec{\nabla} \times \vec{A}=\vec{F}.$$ Now, any field $$\vec{A}'=\vec{A}-\vec{\nabla} \chi$$ also fulfills this equation and is as good as the original $\vec{A}$. Thus, to make our life easier, we can impose one additional constraint on our vector potential. Since it's easier to solve for two components rather than three components, we make one component vanish. So suppose for a moment you have found a solution $\vec{A}$ and you would like to find another representation such that the gauge transformed field obeys the axial-gauge condition $$A_z'=0.$$ Thus, I have to find a scalar field $\chi$ such that $$A_z'=A_z-\partial_z \chi=0.$$
It's very easy to see that one possible solution of this equation is $$\chi(x,y,z)=\int_{z_0}^z \mathrm{d} z' A_z(x,y,z').$$ Of course, $\chi$ is not completely determined by this condition. You can still add an arbitrary function that depends only on $x$ and $y$. That's why we could choose arbitrarily $A_x'(x,y)=0$ in the solution presented yesterday.

5. Sep 19, 2011

### Bill_K

It's an ansatz. As a trial solution he splits each equation up into a pair of equations:

∂Az/∂y = ½ yz and ∂Ay/∂z = −½ yz
∂Ax/∂z = ½ xz and ∂Az/∂x = −½ xz
∂Ay/∂x = ½ xy and ∂Ax/∂y = −½ xy

from which

Az = ¼ z (y² − x²)
Ay = ¼ y (x² − z²)
Ax = ¼ x (z² − y²)

I suppose you could say there's no logic to it; but it works, and leads to a nice symmetrical solution.

6. Sep 19, 2011

### dynamicsolo

Another way of saying what vanhees71 has said is that the given vector field is supposed to be the result of taking curl A = F. (As it happens, this particular F is also conservative, since F = ∇(xyz); but here we are asked for a vector potential rather than a scalar one.) The equations Griffiths shows are the components of this curl result, which are supposed to equal F. (And, as Bill K notes, it appears Griffiths, or the solver for the manual, has omitted factors of 1/2 somewhere...)

We are doing the inverse problem of trying to figure out what the components of A must look like in order to have produced F. We have to integrate each of those components in two ways, since we have information only about the partial derivatives of the components. Integrating $\frac{\partial Az}{\partial y} = \frac{1}{2}yz + \frac{ \partial Ay}{\partial z}$ with respect to y definitely gets us only the $\frac{1}{4} y^{2}z$ term, but leaves an "arbitrary integration function" dependent on the two variables not involved in that integration (analogous to the "arbitrary constant" in single variable integration). There is also a second integration taking place by rearranging the curl component differential equation as $\frac{\partial Ay}{\partial z} = \frac{ \partial Az}{\partial y} - \frac{1}{2}yz$ and then integrating with respect to z. This gives Griffiths' second result for this equation, and the "arbitrary function" now depends on x and y. We go through the same process for all three curl components, which gets us two pieces of information for each component of A. So, for example, on the Ax component, we have $$A_{x} = \frac{1}{4} xz^{2} + h( x , y ) \quad \hbox{and} \quad A_{x} = - \frac{1}{4} xy^{2} + l( x , z ).$$ [And, incidentally, there is a typo for that arbitrary function l ("el" of x and z).] We see that Ax has a term with powers of x and z, a term with powers of x and y, an arbitrary function of x and y, and another arbitrary function of x and z. So the arbitrary function of one of the integrations is apparently the result for the integration with respect to the other variable. There's nothing left over, so we must have $A_{x} = \frac{1}{4} xz^{2} - \frac{1}{4} xy^{2}$, and similar results for the other components of A, as Bill K lists. Also, if the expression you gave for F is correct, then all of these (1/4)'s should be (1/2)'s. (And I'll keep in mind that the solution manual for Griffiths is rife with typos...)

Last edited: Sep 19, 2011
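As a quick sanity check of the two vector potentials found in this thread, one can verify both curls symbolically; the following sympy snippet is an editorial addition (not part of the original posts), and since the two potentials differ only by a gradient, both should reproduce F:

import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(Ax, Ay, Az):
    # Cartesian curl of the vector field (Ax, Ay, Az).
    return (sp.diff(Az, y) - sp.diff(Ay, z),
            sp.diff(Ax, z) - sp.diff(Az, x),
            sp.diff(Ay, x) - sp.diff(Ax, y))

# vanhees71's axial-gauge potential:
A1 = (x*z**2/2, (x**2*y - y*z**2)/2, sp.Integer(0))
# Bill_K's symmetric ansatz:
A2 = (x*(z**2 - y**2)/4, y*(x**2 - z**2)/4, z*(y**2 - x**2)/4)

for A in (A1, A2):
    print([sp.simplify(c) for c in curl(*A)])   # both print [y*z, x*z, x*y]

The difference A1 − A2 is curl-free (it is the gradient of (x²y² + x²z² − y²z²)/8), consistent with the gauge freedom described in post #4.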
https://physics.stackexchange.com/questions/153121/determine-reaction-forces-on-square-frame
# Determine reaction forces on square frame

The square frame consists of four identical homogeneous rods of mass $$m$$. It lies in the vertical plane and can move in it due to the wheels situated at A and B. These wheels can slide frictionlessly along the "white" horizontal and vertical tracks respectively. The frame is released from rest in the position as depicted above. The task is to determine the instantaneous reaction forces in A and B precisely when the frame is released. Since the tracks are frictionless the (net) reaction force in B is strictly horizontal and in A strictly vertical. Below is my free-body diagram. (source: draw.to) Let's begin with $$\textbf{F} = m \boldsymbol{a}_{\text{G}}$$. Since the total mass of the frame is $$4m$$, we have $$4m \boldsymbol{a}_{\text{G}} = (\text{N}_{\text{A}} - 4mg) \textbf{e}_y + \text{N}_{\text{B}} \textbf{e}_x \ .$$ Next we will use the moment of inertia about an axis through the center of mass $$\text{G}$$, parallel to the $$z$$-axis (coming out from the screen). The moment of inertia about $$G$$ for the entire frame is $$I_G = 16ml^2/3$$. The torque then is $$I_G \dot{\omega} = l ( N_A - N_B) \ ,$$ which will give us an expression for the angular acceleration (note that $$\omega = 0$$ at the instant of release). This is unfortunately all the progress I've been able to make and I am really clueless as to how I should proceed from here. I think the equation for accelerations in a rigid body may come in handy $$\boldsymbol{a}_P = \boldsymbol{a}_Q + \dot{\omega} \textbf{e}_z \times \textbf{r}_{QP} - \underbrace{\omega^2}_0 \textbf{r}_{QP} \ ,$$ where $$P$$ and $$Q$$ are two points on the rigid body; the question is which points to choose judiciously. $$G$$ definitely is one of them but I have no information on the acceleration of any other point of the frame. The answer key gives $$\textbf{N}_A = \frac {14}5 mg \textbf{e}_y , \quad \textbf{N}_B = \frac 65 mg \textbf{e}_x$$

• This is a neat problem. Converting the two sliders (two degrees of freedom) into a single degree of freedom problem is the key to solving it. – ja72 Dec 14 '14 at 1:10

One thing you need to consider is the kinematics: that is, what are the position, velocity and acceleration of the center of mass G as the frame rotates? I set the coordinate system origin where the two slots intersect to get \begin{aligned} \vec{r}_A & = (x_A,0,0) & \vec{r}_B & = (0,y_B,0) \end{aligned} Then I recognized that the two positions are related by a rigid body transformation $$\vec{r}_B = \vec{r}_A + \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} - 2\ell \\ 2 \ell \\ 0 \end{pmatrix}$$ which gives $x_A = 2 \ell (\cos \theta+\sin \theta)$ and $y_B = 2 \ell (\cos \theta-\sin \theta)$. So now we have the positions A, B, and G as a function of the rotation angle $\theta$.
\begin{aligned} \vec{r}_A & = (2 \ell (\cos \theta+\sin \theta),0,0) \\ \vec{r}_B & = (0,2 \ell (\cos \theta-\sin \theta),0) \\ \vec{r}_G & = (\ell (\cos \theta+\sin \theta), \ell (\cos \theta-\sin \theta),0) \end{aligned} More importantly, we have the influence of the angle: using the chain rule we get \begin{aligned} \vec{v}_A =(\dot{x}_A,0,0) & = (y_B \dot{\theta},0,0) \\ \vec{v}_B =(0,\dot{y}_B,0) & = (0,-x_A \dot{\theta} ,0) \\ \end{aligned} The velocity kinematics of the center of mass follow the rigid body rule \begin{aligned} \vec{v}_G & = \vec{v}_A + \vec{\omega} \times (\vec{r}_G - \vec{r}_A) \\ \vec{v}_G =(\dot{x}_G,\dot{y}_G,0) & = (\frac{y_B}{2} \dot{\theta},-\frac{x_A}{2} \dot{\theta} ,0) \\ \vec{\omega} & = (0,0,\dot{\theta}) \end{aligned} and \begin{aligned} \vec{a}_A =(\ddot{x}_A,0,0) & = (y_B \ddot{\theta}-x_A \dot{\theta}^2,0,0) \\ \vec{a}_B =(0,\ddot{y}_B,0) & = (0,-x_A \ddot{\theta}-y_B \dot{\theta}^2 ,0) \\ \end{aligned} The acceleration kinematics of the center of mass follow the rigid body rule \begin{aligned} \vec{a}_G & = \vec{a}_A +\dot{\vec{\omega}}\times (\vec{r}_G - \vec{r}_A)+ \vec{\omega} \times (\vec{v}_G - \vec{v}_A) \\ \vec{a}_G =(\ddot{x}_G,\ddot{y}_G,0) & = (\frac{y_B}{2} \ddot{\theta}-\frac{x_A}{2} \dot{\theta}^2,-\frac{x_A}{2} \ddot{\theta}+\frac{y_B}{2} \dot{\theta}^2 ,0) \end{aligned} Now we can look at the sum of forces and moments about G \begin{aligned} \vec{N}_A + \vec{N}_B & = 4 m \vec{a}_G \\ (\vec{r}_A-\vec{r}_G) \times \vec{N}_A +(\vec{r}_B-\vec{r}_G) \times \vec{N}_B & = I \dot{\vec{\omega}} \end{aligned} which expanded out by component yields the following system of 3 equations. \begin{aligned} N_B & = 4 m \ddot{x}_G =4 m \left(\frac{y_B}{2} \ddot{\theta}-\frac{x_A}{2} \dot{\theta}^2\right) \\ N_A-4 m g & =4 m \ddot{y}_G =4 m \left(-\frac{x_A}{2} \ddot{\theta}+\frac{y_B}{2} \dot{\theta}^2 \right)\\ N_A \frac{x_A}{2} - N_B \frac{y_B}{2} & = I_z \ddot{\theta} \end{aligned} where $I_z = \frac{16}{3} m \ell^2$ The answer is found when $\theta=0$, $\dot\theta=0$, $x_A=2 \ell$ and $y_B=2 \ell$ since the above is 3 equations with 3 unknowns ($N_A$, $N_B$ and $\ddot{\theta}$). The solution I get is \begin{aligned} N_A & = \frac{g 4 m (I_z+4 m \ell^2)}{I_z+8 m \ell^2} = \frac{14 m g}{5}\\ N_B & = \frac{g 16 m^2 \ell^2}{I_z + 8 m \ell^2} = \frac{6 m g}{5} \\ \ddot{\theta} &= \frac{g 4 m \ell}{I_z + 8 m \ell^2} = \frac{3 g}{10 \ell} \end{aligned} The answer here is what you expect. • Sorry - no. The answer key is correct. You made a mistake. – Floris Dec 13 '14 at 23:53 • What I did wrong is $m$, which for me is the total mass of the 4 rods. Hence the $I_z$ and $m$ are not correct above. – ja72 Dec 14 '14 at 0:45 • That doesn't explain the discrepancy. $F_B=\frac37 F_A$ regardless of the mass. You get $F_B=\frac{3}{19}F_A$... there's something else going on. – Floris Dec 14 '14 at 0:51 • I have updated the answer. The method was correct, I just assumed a different mass variable. – ja72 Dec 14 '14 at 0:52 • It is not sufficient to just change $m \rightarrow 4m$; you also need to have a consistent $I_z$. You need to make the substitution before the value of $I_z = \frac{16}{3} m \ell^2$ is used since it contains the rod mass, and not the net mass. – ja72 Dec 14 '14 at 0:52 Draw the position of the frame a very small time later. From this you can determine the relative rotation (due to torque - the difference between the forces multiplied by their perpendicular distance to the center of gravity) and linear acceleration (due to net horizontal and vertical forces).
That should give you all the equations you need and you can then solve for $F_A$ and $F_B$. A few hours later... Here is the diagram I had in mind - drawing a "large" $\theta$ because it's easier to see what is going on: The distance $D=L\sqrt{2}$ is a convenient quantity to help compute the new position (x,y) at time t. Next we need to calculate the moment of inertia of the square: from the parallel axis theorem, we find it is $$I = 4(\frac{1}{12}m(2L)^2 + mL^2)=\frac{16}{3} mL^2$$ Now we have the following equations of motion (taking $y$ measured upward and $\theta$ the polar angle of G as seen from the track intersection, which decreases as the frame falls): $$4m\ddot{x}=F_B\\ 4m\ddot{y}=F_A-4mg\\ I\ddot{\theta}=(F_B-F_A)L$$ And finally, when $\theta=\frac{\pi}{4}$ (start of motion) we can write $$\frac{dy}{d\theta}=D\cos(\frac{\pi}{4}) = L\\ \frac{dx}{d\theta}=-D\sin(\frac{\pi}{4}) = -L$$ This means (valid at release, since $\dot\theta=0$ there) $$\ddot{x}=-L\ddot{\theta}\\ \ddot{y}=L\ddot{\theta}$$ We can now eliminate $\ddot{x}$ and $\ddot{y}$ and are left with three equations in three unknowns: $$4mL\ddot{\theta}=-F_B \tag1$$ $$4mL\ddot{\theta}=F_A-4mg \tag2$$ $$\frac{16}{3} m L \ddot{\theta} = F_B-F_A \tag3$$ Subtracting $(2)$ from $(1)$ gives $$4mg=F_A+F_B\tag{4}$$ and $(1) - \frac34 (3)$ gives $$F_B=\frac34(F_A-F_B)$$ $$F_B=\frac37 F_A\tag{5}$$ $(5)$ into $(4)$ gives $$4mg = \frac{10}{7} F_A\\ F_A = \frac{14}{5} mg$$ and then it follows that $$F_B = \frac65 mg$$

• The only thing confusing for me is that a positive $\theta$ is a clockwise rotation which means that a negative torque is needed to increase $\theta$. This makes it difficult to ensure consistency. – ja72 Dec 14 '14 at 1:09
• Why is the center of mass force 4mg not considered in the torque equation $I \ddot{\theta}$? – larrydavid Dec 16 '14 at 18:09
• Ah ok! How subtle indeed that $\theta$ was unconventionally oriented. Quite complex and unintuitive stuff to digest here, I'll have to give it some time and thought. Thanks for all the help – larrydavid Dec 16 '14 at 18:49
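Both answers land on the same numbers. As a quick cross-check, here is a small sympy sketch (the variable names are mine, not from the thread) that solves ja72's three release-instant equations with $\theta=0$, $\dot\theta=0$, $x_A=y_B=2\ell$:

```python
from sympy import symbols, Rational, solve

m, g, l = symbols('m g l', positive=True)
N_A, N_B, alpha = symbols('N_A N_B alpha')  # alpha = theta-double-dot at release

I_z = Rational(16, 3) * m * l**2   # moment of inertia of the frame about G
x_A, y_B = 2*l, 2*l                # slider positions at the instant of release

eqs = [
    N_B - 4*m*(y_B/2)*alpha,            # horizontal: N_B = 4 m x''_G
    (N_A - 4*m*g) + 4*m*(x_A/2)*alpha,  # vertical:   N_A - 4mg = -4 m l alpha
    N_A*x_A/2 - N_B*y_B/2 - I_z*alpha,  # moment about G
]
print(solve(eqs, [N_A, N_B, alpha]))
# {N_A: 14*g*m/5, N_B: 6*g*m/5, alpha: 3*g/(10*l)}
```

This reproduces $N_A = \frac{14}{5}mg$, $N_B = \frac65 mg$ and $\ddot\theta = \frac{3g}{10\ell}$, matching the answer key.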
2019-08-23 20:03:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9915286898612976, "perplexity": 905.7963757648575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00076.warc.gz"}
https://www.statsmodels.org/stable/generated/statsmodels.discrete.discrete_model.DiscreteResults.aic.html
# statsmodels.discrete.discrete_model.DiscreteResults.aic¶

DiscreteResults.aic

Akaike information criterion: -2*(llf - p), where llf is the log-likelihood of the fitted model and p is the number of regressors including the intercept.
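As a quick illustration that the attribute matches the stated formula, here is a minimal sketch using the Spector dataset bundled with statsmodels (the dataset choice is just for the example):

```python
import statsmodels.api as sm

# Load a small example dataset shipped with statsmodels
data = sm.datasets.spector.load_pandas()
X = sm.add_constant(data.exog)        # add the intercept explicitly
res = sm.Logit(data.endog, X).fit(disp=0)

p = res.df_model + 1                  # regressors including the intercept
print(res.aic)                        # attribute value
print(-2 * (res.llf - p))             # same number, computed by hand
```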
2020-04-04 18:40:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6179566383361816, "perplexity": 2930.9898382721585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524604.46/warc/CC-MAIN-20200404165658-20200404195658-00167.warc.gz"}
https://stacks.math.columbia.edu/tag/0DHZ
## 36.35 Relatively perfect objects

In this section we introduce a notion from the literature.

Definition 36.35.1. Let $f : X \to S$ be a morphism of schemes which is flat and locally of finite presentation. An object $E$ of $D(\mathcal{O}_ X)$ is perfect relative to $S$ or $S$-perfect if $E$ is pseudo-coherent (Cohomology, Definition 20.44.1) and $E$ locally has finite tor dimension as an object of $D(f^{-1}\mathcal{O}_ S)$ (Cohomology, Definition 20.45.1). Please see Remark 36.35.14 for a discussion.

Example 36.35.2. Let $k$ be a field. Let $X$ be a scheme of finite presentation over $k$ (in particular $X$ is quasi-compact). Then an object $E$ of $D(\mathcal{O}_ X)$ is $k$-perfect if and only if it is bounded and pseudo-coherent (by definition), i.e., if and only if it is in $D^ b_{\textit{Coh}}(X)$ (by Lemma 36.10.3). Thus being relatively perfect does not mean “perfect on the fibres”.

The corresponding algebra concept is studied in More on Algebra, Section 15.82. We can link the notion for schemes with the algebraic notion as follows.

Lemma 36.35.3. Let $f : X \to S$ be a morphism of schemes which is flat and locally of finite presentation. Let $E$ be an object of $D_\mathit{QCoh}(\mathcal{O}_ X)$. The following are equivalent

1. $E$ is $S$-perfect,
2. for any affine open $U \subset X$ mapping into an affine open $V \subset S$ the complex $R\Gamma (U, E)$ is $\mathcal{O}_ S(V)$-perfect,
3. there exists an affine open covering $S = \bigcup V_ i$ and for each $i$ an affine open covering $f^{-1}(V_ i) = \bigcup U_{ij}$ such that the complex $R\Gamma (U_{ij}, E)$ is $\mathcal{O}_ S(V_ i)$-perfect.

Proof. Being pseudo-coherent is a local property and “locally having finite tor dimension” is a local property. Hence this lemma immediately reduces to the statement: if $X$ and $S$ are affine, then $E$ is $S$-perfect if and only if $K = R\Gamma (X, E)$ is $\mathcal{O}_ S(S)$-perfect. Say $X = \mathop{\mathrm{Spec}}(A)$, $S = \mathop{\mathrm{Spec}}(R)$ and $E$ corresponds to $K \in D(A)$, i.e., $K = R\Gamma (X, E)$, see Lemma 36.3.5. Observe that $K$ is $R$-perfect if and only if $K$ is pseudo-coherent and has finite tor dimension as a complex of $R$-modules (More on Algebra, Definition 15.82.1). By Lemma 36.10.2 we see that $E$ is pseudo-coherent if and only if $K$ is pseudo-coherent. By Lemma 36.10.5 we see that $E$ has finite tor dimension over $f^{-1}\mathcal{O}_ S$ if and only if $K$ has finite tor dimension as a complex of $R$-modules. $\square$

Lemma 36.35.4. Let $f : X \to S$ be a morphism of schemes which is flat and locally of finite presentation. The full subcategory of $D(\mathcal{O}_ X)$ consisting of $S$-perfect objects is a saturated[1] triangulated subcategory.

Lemma 36.35.5. Let $f : X \to S$ be a morphism of schemes which is flat and locally of finite presentation. A perfect object of $D(\mathcal{O}_ X)$ is $S$-perfect. If $K, M \in D(\mathcal{O}_ X)$, then $K \otimes _{\mathcal{O}_ X}^\mathbf {L} M$ is $S$-perfect if $K$ is perfect and $M$ is $S$-perfect.

Proof. First proof: reduce to the affine case using Lemma 36.35.3 and then apply More on Algebra, Lemma 15.82.3. $\square$

Lemma 36.35.6. Let $f : X \to S$ be a morphism of schemes which is flat and locally of finite presentation. Let $g : S' \to S$ be a morphism of schemes. Set $X' = S' \times _ S X$ and denote $g' : X' \to X$ the projection. If $K \in D(\mathcal{O}_ X)$ is $S$-perfect, then $L(g')^*K$ is $S'$-perfect.

Proof. First proof: reduce to the affine case using Lemma 36.35.3 and then apply More on Algebra, Lemma 15.82.5.
Second proof: $L(g')^*K$ is pseudo-coherent by Cohomology, Lemma 20.44.3 and the bounded tor dimension property follows from Lemma 36.22.8. $\square$ Situation 36.35.7. Let $S = \mathop{\mathrm{lim}}\nolimits _{i \in I} S_ i$ be a limit of a directed system of schemes with affine transition morphisms $g_{i'i} : S_{i'} \to S_ i$. We assume that $S_ i$ is quasi-compact and quasi-separated for all $i \in I$. We denote $g_ i : S \to S_ i$ the projection. We fix an element $0 \in I$ and a flat morphism of finite presentation $X_0 \to S_0$. We set $X_ i = S_ i \times _{S_0} X_0$ and $X = S \times _{S_0} X_0$ and we denote the transition morphisms $f_{i'i} : X_{i'} \to X_ i$ and $f_ i : X \to X_ i$ the projections. Lemma 36.35.8. In Situation 36.35.7. Let $K_0$ and $L_0$ be objects of $D(\mathcal{O}_{X_0})$. Set $K_ i = Lf_{i0}^*K_0$ and $L_ i = Lf_{i0}^*L_0$ for $i \geq 0$ and set $K = Lf_0^*K_0$ and $L = Lf_0^*L_0$. Then the map $\mathop{\mathrm{colim}}\nolimits _{i \geq 0} \mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{O}_{X_ i})}(K_ i, L_ i) \longrightarrow \mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{O}_ X)}(K, L)$ is an isomorphism if $K_0$ is pseudo-coherent and $L_0 \in D_\mathit{QCoh}(\mathcal{O}_{X_0})$ has (locally) finite tor dimension as an object of $D((X_0 \to S_0)^{-1}\mathcal{O}_{S_0})$ Proof. For every quasi-compact open $U_0 \subset X_0$ consider the condition $P$ that $\mathop{\mathrm{colim}}\nolimits _{i \geq 0} \mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{O}_{U_ i})}(K_ i|_{U_ i}, L_ i|_{U_ i}) \longrightarrow \mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{O}_ U)}(K|_ U, L|_ U)$ is an isomorphism where $U = f_0^{-1}(U_0)$ and $U_ i = f_{i0}^{-1}(U_0)$. If $P$ holds for $U_0$, $V_0$ and $U_0 \cap V_0$, then it holds for $U_0 \cup V_0$ by Mayer-Vietoris for hom in the derived category, see Cohomology, Lemma 20.33.3. Denote $\pi _0 : X_0 \to S_0$ the given morphism. Then we can first consider $U_0 = \pi _0^{-1}(W_0)$ with $W_0 \subset S_0$ quasi-compact open. By the induction principle of Cohomology of Schemes, Lemma 30.4.1 applied to quasi-compact opens of $S_0$ and the remark above, we find that it is enough to prove $P$ for $U_0 = \pi _0^{-1}(W_0)$ with $W_0$ affine. In other words, we have reduced to the case where $S_0$ is affine. Next, we apply the induction principle again, this time to all quasi-compact and quasi-separated opens of $X_0$, to reduce to the case where $X_0$ is affine as well. If $X_0$ and $S_0$ are affine, the result follows from More on Algebra, Lemma 15.82.7. Namely, by Lemmas 36.10.1 and 36.3.5 the statement is translated into computations of homs in the derived categories of modules. Then Lemma 36.10.2 shows that the complex of modules corresponding to $K_0$ is pseudo-coherent. And Lemma 36.10.5 shows that the complex of modules corresponding to $L_0$ has finite tor dimension over $\mathcal{O}_{S_0}(S_0)$. Thus the assumptions of More on Algebra, Lemma 15.82.7 are satisfied and we win. $\square$ Lemma 36.35.9. In Situation 36.35.7 the category of $S$-perfect objects of $D(\mathcal{O}_ X)$ is the colimit of the categories of $S_ i$-perfect objects of $D(\mathcal{O}_{X_ i})$. Proof. For every quasi-compact open $U_0 \subset X_0$ consider the condition $P$ that the functor $\mathop{\mathrm{colim}}\nolimits _{i \geq 0} D_{S_ i\text{-perfect}}(\mathcal{O}_{U_ i}) \longrightarrow D_{S\text{-perfect}}(\mathcal{O}_ U)$ is an equivalence where $U = f_0^{-1}(U_0)$ and $U_ i = f_{i0}^{-1}(U_0)$. 
We observe that we already know this functor is fully faithful by Lemma 36.35.8. Thus it suffices to prove essential surjectivity. Suppose that $P$ holds for quasi-compact opens $U_0$, $V_0$ of $X_0$. We claim that $P$ holds for $U_0 \cup V_0$. We will use the notation $U_ i = f_{i0}^{-1}U_0$, $U = f_0^{-1}U_0$, $V_ i = f_{i0}^{-1}V_0$, and $V = f_0^{-1}V_0$ and we will abusively use the symbol $f_ i$ for all the morphisms $U \to U_ i$, $V \to V_ i$, $U \cap V \to U_ i \cap V_ i$, and $U \cup V \to U_ i \cup V_ i$. Suppose $E$ is an $S$-perfect object of $D(\mathcal{O}_{U \cup V})$. Goal: show $E$ is in the essential image of the functor. By assumption, we can find $i \geq 0$, an $S_ i$-perfect object $E_{U, i}$ on $U_ i$, an $S_ i$-perfect object $E_{V, i}$ on $V_ i$, and isomorphisms $Lf_ i^*E_{U, i} \to E|_ U$ and $Lf_ i^*E_{V, i} \to E|_ V$. Let $a : E_{U, i} \to (Rf_{i, *}E)|_{U_ i} \quad \text{and}\quad b : E_{V, i} \to (Rf_{i, *}E)|_{V_ i}$ be the maps adjoint to the isomorphisms $Lf_ i^*E_{U, i} \to E|_ U$ and $Lf_ i^*E_{V, i} \to E|_ V$. By full faithfulness, after increasing $i$, we can find an isomorphism $c : E_{U, i}|_{U_ i \cap V_ i} \to E_{V, i}|_{U_ i \cap V_ i}$ which pulls back to the identifications $Lf_ i^*E_{U, i}|_{U \cap V} \to E|_{U \cap V} \to Lf_ i^*E_{V, i}|_{U \cap V}.$ Apply Cohomology, Lemma 20.42.1 to get an object $E_ i$ on $U_ i \cup V_ i$ and a map $d : E_ i \to Rf_{i, *}E$ which restricts to the maps $a$ and $b$ over $U_ i$ and $V_ i$. Then it is clear that $E_ i$ is $S_ i$-perfect (because being relatively perfect is a local property) and that $d$ is adjoint to an isomorphism $Lf_ i^*E_ i \to E$. By exactly the same argument as used in the proof of Lemma 36.35.8 using the induction principle (Cohomology of Schemes, Lemma 30.4.1) we reduce to the case where both $X_0$ and $S_0$ are affine. (First work with opens in $S_0$ to reduce to $S_0$ affine, then work with opens in $X_0$ to reduce to $X_0$ affine.) In the affine case the result follows from More on Algebra, Lemma 15.82.7. The translation into algebra is done by Lemma 36.35.3. $\square$

Lemma 36.35.10. Let $f : X \to S$ be a morphism of schemes which is flat, proper, and of finite presentation. Let $E \in D(\mathcal{O}_ X)$ be $S$-perfect. Then $Rf_*E$ is a perfect object of $D(\mathcal{O}_ S)$ and its formation commutes with arbitrary base change.

Proof. The statement on base change is Lemma 36.22.5. Thus it suffices to show that $Rf_*E$ is a perfect object. We will reduce to the case where $S$ is Noetherian affine by a limit argument. The question is local on $S$, hence we may assume $S$ is affine. Say $S = \mathop{\mathrm{Spec}}(R)$. We write $R = \mathop{\mathrm{colim}}\nolimits R_ i$ as a filtered colimit of Noetherian rings $R_ i$. By Limits, Lemma 32.10.1 there exists an $i$ and a scheme $X_ i$ of finite presentation over $R_ i$ whose base change to $R$ is $X$. By Limits, Lemmas 32.13.1 and 32.8.7 we may assume $X_ i$ is proper and flat over $R_ i$. By Lemma 36.35.9 we may assume there exists an $R_ i$-perfect object $E_ i$ of $D(\mathcal{O}_{X_ i})$ whose pullback to $X$ is $E$. Applying Lemma 36.27.1 to $X_ i \to \mathop{\mathrm{Spec}}(R_ i)$ and $E_ i$ and using the base change property already shown we obtain the result. $\square$

Lemma 36.35.11. Let $f : X \to S$ be a morphism of schemes. Let $E, K \in D(\mathcal{O}_ X)$. Assume

1. $S$ is quasi-compact and quasi-separated,
2. $f$ is proper, flat, and of finite presentation,
3. $E$ is $S$-perfect,
4. $K$ is pseudo-coherent.
Then there exists a pseudo-coherent $L \in D(\mathcal{O}_ S)$ such that $Rf_*R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (K, E) = R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (L, \mathcal{O}_ S)$ and the same is true after arbitrary base change: given a cartesian diagram $$\xymatrix{ X' \ar[r]_{g'} \ar[d]_{f'} & X \ar[d]^ f \\ S' \ar[r]^ g & S }$$ we have $Rf'_*R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (L(g')^*K, L(g')^*E) = R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (Lg^*L, \mathcal{O}_{S'})$.

Proof. Since $S$ is quasi-compact and quasi-separated, the same is true for $X$. By Lemma 36.19.1 we can write $K = \text{hocolim} K_ n$ with $K_ n$ perfect and $K_ n \to K$ inducing an isomorphism on truncations $\tau _{\geq -n}$. Let $K_ n^\vee$ be the dual perfect complex (Cohomology, Lemma 20.47.5). We obtain an inverse system $\ldots \to K_3^\vee \to K_2^\vee \to K_1^\vee$ of perfect objects. By Lemma 36.35.5 we see that $K_ n^\vee \otimes _{\mathcal{O}_ X} E$ is $S$-perfect. Thus we may apply Lemma 36.35.10 to $K_ n^\vee \otimes _{\mathcal{O}_ X} E$ and we obtain an inverse system $\ldots \to M_3 \to M_2 \to M_1$ of perfect complexes on $S$ with $M_ n = Rf_*(K_ n^\vee \otimes _{\mathcal{O}_ X}^\mathbf {L} E) = Rf_*R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (K_ n, E)$ Moreover, the formation of these complexes commutes with any base change, namely $Lg^*M_ n = Rf'_*((L(g')^*K_ n)^\vee \otimes _{\mathcal{O}_{X'}}^\mathbf {L} L(g')^*E) = Rf'_*R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (L(g')^*K_ n, L(g')^*E)$. As $K_ n \to K$ induces an isomorphism on $\tau _{\geq -n}$, we see that $K_ n \to K_{n + 1}$ induces an isomorphism on $\tau _{\geq -n}$. It follows that $K_{n + 1}^\vee \to K_ n^\vee$ induces an isomorphism on $\tau _{\leq n}$ as $K_ n^\vee = R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (K_ n, \mathcal{O}_ X)$. Suppose that $E$ has tor amplitude in $[a, b]$ as a complex of $f^{-1}\mathcal{O}_ S$-modules. Then the same is true after any base change, see Lemma 36.22.8. We find that $K_{n + 1}^\vee \otimes _{\mathcal{O}_ X} E \to K_ n^\vee \otimes _{\mathcal{O}_ X} E$ induces an isomorphism on $\tau _{\leq n + a}$ and the same is true after any base change. Applying the right derived functor $Rf_*$ we conclude the maps $M_{n + 1} \to M_ n$ induce isomorphisms on $\tau _{\leq n + a}$ and the same is true after any base change. Choose a distinguished triangle $M_{n + 1} \to M_ n \to C_ n \to M_{n + 1}[1].$ Take $S'$ equal to the spectrum of the residue field at a point $s \in S$ and pull back to see that $C_ n \otimes _{\mathcal{O}_ S}^\mathbf {L} \kappa (s)$ has nonzero cohomology only in degrees $\geq n + a$. By More on Algebra, Lemma 15.74.6 we see that the perfect complex $C_ n$ has tor amplitude in $[n + a, m_ n]$ for some integer $m_ n$. In particular, the dual perfect complex $C_ n^\vee$ has tor amplitude in $[-m_ n, -n - a]$. Let $L_ n = M_ n^\vee$ be the dual perfect complex. The conclusion from the discussion in the previous paragraph is that $L_ n \to L_{n + 1}$ induces isomorphisms on $\tau _{\geq -n - a}$. Thus $L = \text{hocolim} L_ n$ is pseudo-coherent, see Lemma 36.19.1. Since we have $R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (K, E) = R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (\text{hocolim} K_ n, E) = R\mathop{\mathrm{lim}}\nolimits R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (K_ n, E) = R\mathop{\mathrm{lim}}\nolimits K_ n^\vee \otimes _{\mathcal{O}_ X} E$
(Cohomology, Lemma 20.48.1) and since $R\mathop{\mathrm{lim}}\nolimits$ commutes with $Rf_*$ we find that $Rf_*R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (K, E) = R\mathop{\mathrm{lim}}\nolimits M_ n = R\mathop{\mathrm{lim}}\nolimits R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (L_ n, \mathcal{O}_ S) = R\mathop{\mathcal{H}\! \mathit{om}}\nolimits (L, \mathcal{O}_ S)$ This proves the formula over $S$. Since the construction of $M_ n$ is compatible with base change, the formula continues to hold after any base change. $\square$

Remark 36.35.12. The reader may have noticed the similarity between Lemma 36.35.11 and Lemma 36.28.3. Indeed, the pseudo-coherent complex $L$ of Lemma 36.35.11 may be characterized as the unique pseudo-coherent complex on $S$ such that there are functorial isomorphisms $\mathop{\mathrm{Ext}}\nolimits ^ i_{\mathcal{O}_ S}(L, \mathcal{F}) \longrightarrow \mathop{\mathrm{Ext}}\nolimits ^ i_{\mathcal{O}_ X}(K, E \otimes _{\mathcal{O}_ X}^\mathbf {L} Lf^*\mathcal{F})$ compatible with boundary maps for $\mathcal{F}$ ranging over $\mathit{QCoh}(\mathcal{O}_ S)$. If we ever need this we will formulate a precise result here and give a detailed proof.

Lemma 36.35.13. Let $f : X \to S$ be a morphism of schemes which is flat and locally of finite presentation. Let $E$ be a pseudo-coherent object of $D(\mathcal{O}_ X)$. The following are equivalent

1. $E$ is $S$-perfect, and
2. $E$ is locally bounded below and for every point $s \in S$ the object $L(X_ s \to X)^*E$ of $D(\mathcal{O}_{X_ s})$ is locally bounded below.

Proof. Since everything is local we immediately reduce to the case that $X$ and $S$ are affine, see Lemma 36.35.3. Say $X \to S$ corresponds to $\mathop{\mathrm{Spec}}(A) \to \mathop{\mathrm{Spec}}(R)$ and $E$ corresponds to $K$ in $D(A)$. If $s$ corresponds to the prime $\mathfrak p \subset R$, then $L(X_ s \to X)^*E$ corresponds to $K \otimes _ R^\mathbf {L} \kappa (\mathfrak p)$ as $R \to A$ is flat, see for example Lemma 36.22.5. Thus we see that our lemma follows from the corresponding algebra result, see More on Algebra, Lemma 15.82.10. $\square$

Remark 36.35.14. Our Definition 36.35.1 of a relatively perfect complex is equivalent to the one given in the literature whenever our definition applies[2]. Next, suppose that $f : X \to S$ is only assumed to be locally of finite type (not necessarily flat, nor locally of finite presentation). The definition in the paper cited above is that $E \in D(\mathcal{O}_ X)$ is relatively perfect if

(A) locally on $X$ the object $E$ should be quasi-isomorphic to a finite complex of $S$-flat, finitely presented $\mathcal{O}_ X$-modules.

On the other hand, the natural generalization of our Definition 36.35.1 is

(B) $E$ is pseudo-coherent relative to $S$ (More on Morphisms, Definition 37.53.2) and $E$ locally has finite tor dimension as an object of $D(f^{-1}\mathcal{O}_ S)$ (Cohomology, Definition 20.45.1).

The advantage of condition (B) is that it clearly defines a triangulated subcategory of $D(\mathcal{O}_ X)$, whereas we suspect this is not the case for condition (A). The advantage of condition (A) is that it is easier to work with, in particular in regards to limits.

[1] Derived Categories, Definition 13.6.1.
[2] To see this, use Lemma 36.35.3 and More on Algebra, Lemma 15.82.4.
2021-09-22 15:23:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9828283190727234, "perplexity": 157.32079291611302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057366.40/warc/CC-MAIN-20210922132653-20210922162653-00013.warc.gz"}
https://learn.careers360.com/ncert/question-two-long-and-parallel-straight-wires-a-and-b-carrying-currents-of-80-a-and-50-a-in-the-same-direction-are-separated-by-a-distance-of-40-cm-estimate-the-force-on-a-10-cm-section-of-wire-a/
# Q7. Two long and parallel straight wires A and B carrying currents of 8.0 A and 5.0 A in the same direction are separated by a distance of 4.0 cm. Estimate the force on a 10 cm section of wire A.

The magnitude of the magnetic field at a distance r from a long straight wire carrying current I is given by $|B|=\frac{\mu _{0}I}{2\pi r}$ In this case the magnetic field at a distance of 4.0 cm from wire B will be $|B|=\frac{4\pi \times 10^{-7}\times 5}{2\pi\times 0.04}$             (I = 5 A, r = 4.0 cm = 0.04 m) $\\=2.5\times10^{-5}\ T$ The force on a straight wire of length l carrying current I in a uniform magnetic field B is given by $F=BIl\sin\theta$, where $\theta$ is the angle between the direction of current flow and the magnetic field. The force on a 10 cm section of wire A will be $F=2.5\times10^{-5}\times8\times0.1 \times \sin 90^\circ$       (B = $2.5\times10^{-5}$ T, I = 8 A, l = 10 cm = 0.1 m, $\theta$ = 90°) $F=2\times10^{-5}\ N$
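As a quick numeric check of the arithmetic above (a plain-Python sketch, no external data assumed):

```python
import math

mu0 = 4 * math.pi * 1e-7  # T*m/A, permeability of free space
I_a, I_b = 8.0, 5.0       # currents in wires A and B (A)
r = 0.04                  # separation (m)
l = 0.10                  # length of the section of wire A (m)

B = mu0 * I_b / (2 * math.pi * r)   # field of wire B at wire A
F = B * I_a * l                     # sin(90 degrees) = 1
print(B)  # 2.5e-05 T
print(F)  # 2.0e-05 N
```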
2020-02-19 04:15:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7772572040557861, "perplexity": 465.104848931159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144027.33/warc/CC-MAIN-20200219030731-20200219060731-00063.warc.gz"}
http://meta.stackexchange.com/questions/69171/why-doesnt-the-stack-overflow-team-fix-the-firesheep-style-cookie-theft?answertab=active
# Why doesn't the Stack Overflow team fix the Firesheep style cookie theft?

Firesheep sniffs the network looking for session IDs and makes it very easy for an attacker to hijack this authenticated session. It should be noted that Firesheep is nothing new; it just makes this attack very easy. Many websites like Facebook (EDIT: Actually Facebook has patched this vulnerability) and Stack Overflow violate OWASP A9 - Insufficient Transport Layer Protection. A user can protect themselves by using a plugin like HTTPS Everywhere, but stackoverflow.com doesn't even have a valid certificate. Thus it is trivial to MITM https://stackoverflow.com and users have no way of protecting themselves. Does the Stack Overflow team not understand the threat of OWASP A9? Do they not care enough to spend the $20 on a certificate to give users the option to protect themselves? Google has not experienced a significant increase in resource consumption by switching to HTTPS. At least give people the option to secure themselves.

Gmail switched to using HTTPS for everything by default. Previously it had been introduced as an option, but now all of our users use HTTPS to secure their email between their browsers and Google, all the time. In order to do this we had to deploy no additional machines and no special hardware. On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Source: VeriSign

• For sites the size of SO/SF/SU it is A LOT more than a $20 certificate – Zypher Nov 1 '10 at 22:47
• @Zypher true story, but my complaint is that I can't even use https everywhere. – Rook Nov 1 '10 at 23:33
• Someone is already trying to hijack stackoverflow using firesheep (and asking a question on stackoverflow about it) stackoverflow.com/questions/4089665/… – Zach Johnson Nov 3 '10 at 17:38
• @Zach Johnson very interesting. – Rook Nov 3 '10 at 18:17
• @Zach I'm definitely not trying to hijack Stack Overflow. Using this plugin you can only hijack one person's account, and only if there happens to be another Stack Overflow user coding in the same Starbucks and using the WiFi. – nevan king Nov 3 '10 at 19:58
• This is a reasonable question, but the hyperbole is unnecessary. – Craig Stuntz Nov 17 '10 at 15:18
• @Craig Stuntz yes making sure your users don't get hacked is frivolous and decadent. Such a luxury is only enjoyed by twitter, github, gmail... etc. – Rook Nov 17 '10 at 17:10
• @Rook, I hadn't realized that hyperbole was your first language. Sorry! – Craig Stuntz Nov 17 '10 at 17:55
• Related historical incident: Protecting Your Cookies: HttpOnly (hoping Jeff, a moderator, or maybe even any 10k user will never be fooled into logging in to SO using some wifi on some conference). – Arjan Jan 23 '11 at 14:59
• @Arjan Well, two things here. For one, httponly cookies do not prevent an attacker from sniffing your HTTP traffic and obtaining the cookie. Furthermore, httponly cookies can still be exploited by using XHR to "ride" on the session in an attack similar to CSRF. HttpOnly cookies are by no means a complete solution. – Rook Jan 23 '11 at 17:09
• (@Rook, I know, but I was merely referring to the SO Über Admin account having been compromised two and a half years ago...) – Arjan Jan 23 '11 at 17:12
• @Arjan oooooooah, that's a good point. – Rook Jan 23 '11 at 17:37
• @DanBeale - even with WPA you can still ARP spoof, ICMP redirect spoof or run a rogue DHCP server to trick users into routing everything via yourself.
Coffee shops are typically open too, and a "password on the wall" WPA approach would allow you to run a rogue AP with the same SSID... – Flexo Oct 13 '11 at 21:54
• I would like to see HTTPS on StackExchange. You can get 2 years of all-you-can-sign universally trusted extended validation certificates including unlimited alternative names and wildcards for about $200 nowadays. The notion that TLS is resource intensive simply isn't true anymore. – aef Nov 10 '12 at 10:34
• @aef I'd even accept trusting an SE root CA certificate, but as Zypher said the problem is more about performance than about the certification. Although I'd argue that shouldn't be that much trouble after all. – Tobias Kienzler Feb 12 '13 at 12:31

## 10 Answers

We will be purchasing certificates for the network this week (dev is already in place), but there's still a lot left to do on the move to SSL. If you're curious about the details, you can read a recent blog post I wrote about it here. It's far from a trivial task, but we're moving towards making SSL available and then the default. I'll continue to blog about the SSL implementation as we go (websockets may be interesting for example). If there are current questions, I'm happy to answer them. As for this one: why aren't we doing anything? Well we are now... Why weren't we? Because it's complicated and wasn't even a possibility before now. Third party content (ads, MathJax, etc.) had to support it, and it didn't until very, very recently.

Update #1 Jun 20th, 2013: The setup/test procedures of the migration to SSL configs were much more involved once we saw how to do it safely on production. We will be testing our internal/virtual load balancers very early next week, then prepping to deploy certs to production. The site code will take longer (making https:// the default, 301, canonical, etc.), but this will make SSL available as soon as we can. Notes: websockets are an almost complete unknown on our intended setup; you may see intermittent connectivity with real-time as we figure out what fun times are involved with our setup and how SSL sockets will best work.

• +1 awesome news! – Rook Jun 5 '13 at 2:06
• @Nick, any update on this... Just now, while using Ask Ubuntu, I noticed that it defaulted to https:// for me on Firefox 26 (Win 7) but defaulted to http:// on IE 11 (Win 7)... I am not using any such add-ons which enforce https.. – Aditya Jan 20 '14 at 11:20
• @Aditya we only use https:// for a login form, you have an addon in play if you're getting https:// links. – Nick Craver Jan 20 '14 at 13:44
• @NickCraver: I don't use any such addons which enforce https://, it automatically defaulted to https for me for that session while the browser was open.. I have restarted the browser and it has gone back to its normal ways - only using http:// by default... I don't know what happened at that point of time :-) – Aditya Jan 20 '14 at 13:51
• What's the status on this as of Feb 2014? – bmike Feb 11 '14 at 16:04

## Why doesn't the Stack Overflow team fix the Firesheep style cookie theft?

Because even the high-rep users have rate limits, so if the accounts are broken into, there's very little damage that can be done to Stack Overflow at all, and what damage is done is easy to remedy. It's likely something that will be addressed more completely at a later date, but it would take a concerted effort to pull something like this off. For instance, go to CES with a few friends and set up their notebooks to sniff and report the cookie information to a central server.
That server then figures out which accounts it's gathered, and sets up a controller so that a single user can insta-close questions, post questions and insta-vote them up, etc. using several accounts. However, Stack Overflow already has significant monitoring, rate limiting, and firewall blocking for abusers, and chances are good such a simple setup (as above) would be blocked with the current setup. The biggest danger is if someone gets the cookie from a moderator, at which point they can do some damage that can't easily be rolled back.

• No more CES and the like for moderators and Jeff ;-) – Arjan Feb 2 '11 at 20:17
• I'm still left wishing that SO cared about security. – Rook Apr 26 '11 at 16:42
• Boo for a response that says "It only causes damage to the hacked user, so SE et al. doesn't give a crap". Glad my bank doesn't take that approach to security. – Lawrence Dol Jan 19 '13 at 21:08

My first guess is that advertising won't support HTTPS, therefore making a mixed session and the user having to deal with the browser "do you want to continue" dialogs. More information: Google Ads doesn't support HTTPS. What alternatives are there? Perhaps SO had a different way of dealing with advertisements.

For what it's worth, Jeff now appears to believe that "maybe encrypted connections should be the default for all web sites." I know that Jeff is leaving Stack Overflow in March 2012, but this post of his may be one indication that full HTTPS support is not all that far off.

• it is a hairy technical problem.. not a 6-8 weeks thing, will probably eventually happen – waffles Feb 24 '12 at 2:37
• @waffles, 6-8 weeks?! He's only got 5 days! :) – Benjol Feb 24 '12 at 6:28

I was going to post this as a comment, but ran out of space. For @Kop and @Rook: For a site the size of Stack Overflow/Server Fault/Super User as well as the Stack Exchange network, you CANNOT just slap a $20 certificate onto your web servers and call it a day. You would kill the performance of the websites as SSL processing is a network-overhead intensive operation. Even though it is not as CPU intensive as I once knew to be true, you still do need to account for CPU in your planning and implementation - because it does add overhead, and when you are dealing with 10MM monthly uniques that can start adding up quick. To do this properly we would need to implement a highly available SSL load balancer/proxy that could handle the inbound SSL connections and not choke. To handle the load of the Trilogy in Four parts alone that would probably require (and I'm guessing here because we haven't run the numbers for obvious reasons) at least 4-6 very beefy servers, at about 6-8k a piece, plus Kyle's and my time to design, implement and test the solution. Running SSL on a large website is NOT a cheap $20 certificate, and you don't just go slapping SSL certs onto your web servers and call it a day. For the amount of traffic we receive it is a lot more expensive and involved to get SSL running properly without degrading the performance of the site. EDIT: Just to clarify, BUYING THE CERTIFICATE IS NOT THE ISSUE.

• It should be noted that you are one of SE's sysadmins. blog.stackoverflow.com/2010/09/… – jjnguy Nov 1 '10 at 23:28
• to play devil's advocate: SSL isn't that expensive any more. – Jeff Atwood Nov 2 '10 at 0:18
• Maybe you misunderstood my comment, but I simply meant the possibility of using https, not it being automatic on every page for any user.
– Andreas Bonini Nov 2 '10 at 11:16
• @Kop no I understood you, but you have to realize that engineering for the possibility of using SSL is the same as for forcing SSL. You need to solve the latter problem in case you find a large number of your users opting in to using SSL. – Zypher Nov 2 '10 at 11:45
• I wonder if it would be an option to enable HTTPS for users with a reputation over 1000? That would solve the large number of users problem and be an incentive to use the site. It's like turning off ads for users with a higher rep. – nevan king Nov 3 '10 at 21:07
• @nevan except the SSL connection is negotiated before anything else, so we would still have to design for a large amount of SSL traffic, and then redirect those under 10k – Zypher Nov 3 '10 at 21:18
• @Jeff indeed, just ask google when they implemented https for everyone in gmail... without adding new hardware etc. the total performance degradation was a whopping 1%! – alexanderpas Dec 10 '10 at 1:38
• @alexanderpas please re-read my post; the majority of it had very little to do with CPU overhead, most of it was about following the 7P's. I'm not sure how much more I can emphasize you do not just throw certificates onto a very busy server and say "cool, all done let me rake in the money." The cost of the cert is definitely not the issue here. – Zypher Dec 10 '10 at 2:17
• @Zypher, true, but isn't that cost associated with every change in hardware/software, and/or new features? Adding SSL should be handled just like adding any other feature -- also: The belief that switching to using pure SSL/TLS is any burden was obsoleted years ago with the addition of SSL/TLS Session Resume. "Session Resume allows a particular client and server to perform the high-overhead public key negotiation just once" quoted from the link in @BlueRaja's post, meaning the more a user uses the site, the less of an impact the negotiation has on the server. – alexanderpas Dec 12 '10 at 6:40
• So what is preventing you from picking one of the newer low volume sites, perhaps security.stackexchange, and collecting some performance data? You seem to think it is a huge problem, but if you could actually provide proof, you would be a lot more convincing. – Zoredache Jan 3 '11 at 22:16
• BTW, buying a $20 certificate completely and totally fixes my problem because I can use HTTPSEverywhere and not get hacked. Everyone else is still screwed. – Rook Jan 8 '11 at 18:15
• It's not as simple as buying a $20 cert, sure. But in order to run a site like SO, you've no doubt had to solve a lot harder problems. Why is HTTPS proving to be such an obstacle for you? – intgr Mar 30 '11 at 12:17
• It's worth noting, from Google again: "On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead." So, the increased network usage was also insignificant. – Brendan Long Nov 9 '12 at 22:45
• If 2% of your effort is not worth spending on security, then you deserve to be hacked. This is disgusting. – Rook Feb 18 '13 at 18:11
• This post makes SO look bad, perhaps it needs an update. – Rook Apr 14 '13 at 21:14

May I suggest a compromise? I think that the information which is not available publicly should be served over secure connections. These can include:

• when a moderator visits a page which is only accessible to moderators
• when a user visits his/her own userpage
• when a user logs in using the log-in page

This shouldn't create a performance problem since the number of visits to these pages is limited.
Also this is consistent with what large companies like Google used to do: user account setting pages were served over a secure connection even when https was not the default for GMail. Let me add that I think these pages should be served securely since they contain private information about users and SE should make sure that the information on these pages remains private.

• Nice, but I think that would imply one needs to explicitly log in when first accessing the HTTPS pages? (Or, more likely: make the login pages HTTPS too, killing 2 thingies with one stone.) But when done, it would indeed disallow others from locking you out or from adding an additional OpenID. Above all: I very much like the idea to protect moderator actions. – Arjan Apr 25 '11 at 9:41
• Right, but what I care about is having my session hijacked. If someone does this then they will view all of my information in https. – Rook Apr 25 '11 at 15:53
• @Kaveh you are missing the point of OWASP A9 and Firesheep. Every request you make to SO has your session id; if you are on an open wifi network (which I am right now), then everyone else on the network can see this session id. Firesheep sniffs the network and lists every session id it finds. You just click on the hijacked session and then you become them. HTTPS for the entire life of the session is the only way to prevent this. – Rook Apr 25 '11 at 17:54
• -1 I fail to see how this improves the security of SO. – Rook Apr 25 '11 at 17:55
• Also gmail uses https for everything because of attacks like firesheep. – Rook Apr 25 '11 at 17:56
• @Rook, as I said this was the practice of GMail up until a few months ago, and their switch to complete https was quite recent. I don't know the exact reason but I don't think firesheep like attacks were the main reason; I think issues with suppressive middle eastern governments may have played more of a part in their decision. – Kaveh Apr 26 '11 at 2:53
• @Kaveh It is all or nothing. You either prevent eavesdropping style attacks, or you succumb to them. Google switched to https for many of their services because governments were engaging in firesheep style attacks. – Rook Apr 26 '11 at 2:59
• What you are proposing is an owasp violation. On SO I always downvote violations. – Rook Apr 26 '11 at 3:11
• @Rook, of course you can, that is your vote! :) Btw, I am not suggesting violating anything, I would personally prefer if every page was served on https. But since it seems that SE is not ready to do it at the moment, I would prefer an improvement. And what I said is also correct, what I suggested is a significant improvement over the current situation and is the current practice of many super large sites like Amazon. – Kaveh Apr 26 '11 at 3:15
• @Rook, I'm quite sure that every page in Gmail has privacy sensitive information on it: email! Here on SE, not so much. Even the SE Inbox shows public information. Apparently, the team thinks that the traffic of logged-in users is not small enough to use HTTPS for every logged-in request. Then I agree with Kaveh that a limited set of data and actions (that are deemed personal or dangerous) could be secured by enforcing HTTPS for those pages (obviously requiring a secure cookie too). – Arjan Apr 26 '11 at 11:48
• @Arjan @Kaveh Wow so you guys could care less if a child could hijack your account? Really? Am I really having this conversation? – Rook Apr 26 '11 at 15:48
• @Arjan I know that any time you transfer an authentication credential over plain text you have a vulnerability; it doesn't matter the protocol.
The most expensive part of SSL/TLS is the handshake; all further requests use the cached "master secret" and the overhead is minimal. It is all or nothing. – Rook Apr 26 '11 at 16:39
• @Kaveh The name is rook, like the bird in my icon. The most expensive part of TLS/SSL is the handshake to establish a master secret. This master secret is then cached and used for the life of the session. Facebook is now pure https because Mark Zuckerberg was hacked by Firesheep. When a large company does this they buy an appliance that just takes care of handshakes. This does cost money, and the SO team likes money more than security. – Rook Apr 26 '11 at 16:50
• @Kaveh what you propose is only a waste of resources. What is more concerning is that you think that it helps. No doubt you are a respected and well accomplished developer, who unfortunately is very convincing and doesn't understand security. Please read the OWASP top 10. – Rook Apr 27 '11 at 8:20
• @Arjan it boggles my mind that you think that this helps at all. Do you have like a 2 foot fence around your house that people just step over? – Rook Apr 27 '11 at 8:23

I agree that you should fix this right, with TLS/SSL. In the meantime, Ben Adida's proposal/code for "SessionLock Lite" offers an inexpensive interim approach that looks like it at least protects against persistent hijacks by passive attackers. Of course it offers no protection against eavesdropping. There is also a short and worrisome window of vulnerability to a tool like firesheep, which you should take other server-side approaches to mitigating. But it can reduce the exposure while you engineer your SSL solution: http://benlog.com/articles/2010/10/25/keep-your-hands-off-my-session-cookies/ By the way - do you at least actually expire cookies on the server side when users log out?

Here's a setting I made for getting Stack Overflow cookies. Please note that I don't even know how to write "leet" and I made this by just looking at other settings in Firesheep, and asking on Stack Overflow about how to traverse the page HTML to get the user name. If I create an open Wi-Fi network at home, sign out of Stack Overflow on one computer (with Firesheep running) and then look at a Stack Overflow page on another computer (which is already signed in) I can get access without the login. I don't think it's a huge deal for Stack Overflow. The worst you can do is impersonate a user and call other people names. Even deletes can be rolled back. It's much more serious for Gmail. That said, I'd like to be able to see what IP addresses have accessed my account. Using this plugin, I've noticed that it's hard to use Google search on an open Wi-Fi without exposing your Google cookie. Even going to https://google.com redirects you to the insecure site. Worse, Google seems to ping home, exposing your Gmail account (even without visiting gmail, even if you have the "always use https" set in Gmail). If you want to search securely, you can use HTTPS anywhere, but I found it very buggy. I hope you don't think I'm being irresponsible for posting this. Don't forget, ANYONE CAN DO THIS. Look at how simple the code is. It took me very little time to create, and very little knowledge (copying the template from another setting, looking at cookie names in Firefox, some JavaScript help). I actually started thinking about this after seeing this question on Meta Stack Overflow and finding nothing under the Firesheep tag on Stack Overflow.
```js
register({
  name: 'Stack Overflow',
  url: 'http://stackoverflow.com/',
  domains: [ 'stackoverflow.com' ],
  sessionCookieNames: [ 'usr', '__utmz', '__utma', '__qca' ],
  identifyUser: function () {
    var resp = this.httpGet(this.siteUrl);
    this.userName = resp.body.querySelectorAll('a')[3].textContent;
  }
});
```

• +1 great work, hopefully this will persuade the developers to take responsibility for their design. – Rook Nov 3 '10 at 21:50
• "I'd like to be able to see what IPs have accessed my account." this has been visible on your user page for 2 years already -- search your user page for the words "last activity:" – Jeff Atwood Nov 3 '10 at 22:10
• I'd prefer a list of the IPs, more like the way gmail shows the last 10 IPs that accessed your mailbox. With only 1 IP shown, you have to always remember to check it. – nevan king Nov 3 '10 at 22:36
• @Jeff Atwood but if you are on a wireless network then you'll have the same IP and there will be no evidence of the attack. – Rook Nov 4 '10 at 0:28
• @Rook I don't think Stack Overflow can do anything about that. – Yuhong Bao Nov 4 '10 at 7:30
• @Yuhong Bao this is 100% caused by developers. gmail solved this problem years ago, github fixed this last week, facebook is soon to follow. – Rook Nov 5 '10 at 5:43
• @Rook Um, you seem to have interpreted something wrong... I'm pretty sure Yuhong Bao meant that SO can't do anything to show the user which of the computers from the same wireless network (with the same IP) have accessed your account. – Ilari Kajaste Nov 19 '10 at 13:00
• @Ilari Kajaste you're right, that's why SO should at least have the option of SSL. – Rook Nov 19 '10 at 16:01
• @Jeff: All I see from last activity is the obvious - my own activity from 43 seconds ago. What about a list of previous IP addresses. Maybe the last 5. I use SOFU at home and work. So anything other than those IP addresses would cause concern. – IAbstract Dec 10 '10 at 12:59
• encrypted.google.com securing against firesheep will imply making the website access https only like gmail. an opt-in trial run for interested SO or metaSO users won't hurt. – abel Jan 7 '11 at 20:36

There are ways to prevent cookie leaks without using SSL and that will add very little load. When a session is created, generate a random number (R) and associate it with the session. Pass this number back to the browser as part of the login process (so it's covered by SSL, and is therefore secret). Then for each request over HTTP:

• browser generates random number Q.
• browser calculates S = SHA1(Q + R)
• browser includes Q and S in the request along with the session cookie
• server receives request, verifies SHA1(Q + R) = S and if so, session is valid.

Not hard to do, adds about 40 bytes per request header, and increased server load is negligible (a code sketch of this scheme appears at the end of this thread). Breaking this requires some sort of XSS flaw which reveals R, breaking SHA1 or guessing the random number sequence, none of which are going to be quick.

• So the way to prevent using SSL is... to use SSL? I don't get it. – Aarobot Nov 8 '10 at 16:38
• SO never uses SSL, not even for the login process. They rely on a 3rd party OpenID provider to handle the secure login, so your solution still adds a new SSL piece to the mix. – heavyd Nov 8 '10 at 16:45
• Ah, well, there's me not using StackOverflow much. Actually at all. The solution allows only the original login to be done over SSL but the rest to be done over HTTP, which is a lot cheaper than moving the whole site to use SSL. But yes, it still needs the initial connection to be done via SSL.
– user153246 Nov 8 '10 at 16:47
• -1 you're transmitting the secret used in a MAC in plain text, thus defeating the whole point of a MAC. Furthermore, the hacker can modify the html and javascript used to generate the page and thus obtain your secret Q. Don't build security systems, this suggestion is horrifying. – Rook Nov 17 '10 at 17:13
• You need ssl and don't let anyone tell you otherwise. – Rook Nov 17 '10 at 17:14
• But how would that secret number R be stored in the browser for subsequent requests...? – Arjan Dec 10 '10 at 9:18
• Intriguing. Though this does not fully protect against a MITM attack since the attacker can just pass along the value for SHA1(Q+R), and do anything else they want with the request. – nealmcb Jan 2 '11 at 1:17
• @rook - Q is only in the browser, fresh for each request, not sent over the network. A MITM attacking the javascript sent to the browser is a clever idea - I wonder if it is possible in the browser to somehow rely on javascript sent in the initial secure session, saved in local storage via HTML5. But I agree - at best a pretty risky, fragile and tenuous proposal. See more discussion at security.stackexchange.com/questions/1322/… – nealmcb Jan 2 '11 at 1:39
• @nealmcb a message digest function cannot solve this problem. As the top answerer of cryptography questions on SO I'm telling you that SSL is the only solution to this problem, do not tell people otherwise. – Rook Jan 2 '11 at 4:02
• @rook yes - we agree this is certainly no substitute for ssl, and overall a foolish direction to go at best. I'm just curious to know if the server with html5 could prevent the MITM from stealing a reusable session identifier. Not that that would help the user very much given all the stuff an MITM could still get away with. And I was just clarifying that the proposal here didn't "transmit the secret in plain text", even though most likely the MITM could also pry it out of the browser as noted earlier unless html5 has good facilities to allow a great javascript programmer to prevent that. – nealmcb Jan 2 '11 at 6:41
• @nealmcb your solution makes the attack window smaller. There is still the problem of loading the html/js to begin with. If there were a way to sign this content then this solution may hold water. But as it stands, if a protocol is insecure at any step, it is unusable. – Rook Jan 2 '11 at 18:45
• @rook: yup - it would need to be secure all the way, I've seen no evidence it could be made secure, and there be dragons at all turns.... Plain SSL would surely make more sense. Thanks for helping clarify this stuff. – nealmcb Jan 4 '11 at 3:27

If you have a "man in the middle" then there are deeper problems, like, you're using a compromised network. We do actually cycle part of the cookie every so often, so if someone has an old cookie of yours it probably won't work. That's assuming they stop listening, and you keep using the site. Also, we don't think the risk of someone stealing your Stack Overflow account is very dangerous compared to, say, your online banking account, or your gmail (remember email is the de-facto skeleton key for all online logins that I know of).

• While I agree, I also agree with the OP's point about the $20. How can having a valid certificate hurt anything? It will make the paranoid people happy. =) – Andreas Bonini Nov 1 '10 at 23:05
• @rook why don't you direct your outrage to the other 99.99% of the web which also has this "problem"?
– Jeff Atwood Nov 1 '10 at 23:55
• @Jeff Atwood "but there's lots of sql injection on the web, why should I patch my application?" – Rook Nov 2 '10 at 0:26
• @rook not the same thing; you're complaining about the way the web works, not the way our code is written – Jeff Atwood Nov 2 '10 at 1:23
• @Jeff Atwood What is so bothersome is that this issue doesn't affect you, it affects every single one of your users. This callous attitude towards others' safety is what is so deeply offensive to me. It is trivial to provide https to the ones who want to use tools like https everywhere to protect themselves, but your response is so iconic of people who are hacked: "Why would anyone hack me?". – Rook Nov 2 '10 at 1:36
• @rook so use a https proxy -- you already have a way of doing this. stackexchange.com/search?q=https+secure+proxy – Jeff Atwood Nov 2 '10 at 1:46
• @Jeff Atwood I wish I could vote down comments. – Rook Nov 2 '10 at 1:50
• "If you have a "man in the middle" then there are deeper problems, like, you're using a compromised network." - That is, if you assume every internet backbone, and all the links between them, are trustworthy and uncompromised. :) Just sayin' – BlueRaja Nov 5 '10 at 15:34
• Do the airport and Starbucks count as compromised networks? Seriously, though, SO is not exactly my bank. If you can hack my SO account the worst thing you could do is post garbage that the mods would have to clean up. As far as this being "the way the web works", I think companies like Twitter and Facebook should be leading the way in this department. SO will be here to answer their technical questions while they're working on it. And once the web works a different way, SO can follow suit. – user27414 Nov 8 '10 at 16:28
• I agree with Rook on this. It doesn't matter how the site is coded if my identity is transmitted in clear text. Sure the impact of a stolen account is relatively small, but that's not the point. The fact that SO could easily combat this and choose to do nothing while telling people to lobby the website operators is, quite frankly, baffling. – NotMe Nov 15 '10 at 17:09
• @chris it's a fundamental architecture issue with the web, and we are not your bank. Read more: codinghorror.com/blog/2010/11/breaking-the-webs-cookie-jar.html – Jeff Atwood Nov 15 '10 at 20:27
• @Jeff: I read the blog, which is what drove me here. Two things you said come to mind. First: "I sure do care if they somehow sniff out my cookie and start running around doing stuff as me!" Second, "this is not an unreasonable thing to ask your favorite website for." – NotMe Nov 15 '10 at 20:35
• @Jeff: "there is nothing social here" Er, what? I would consider this exact thing we're doing here, discussing, quite social indeed. Sure, we don't connect to other identities here, but so what - I've done a lot more social activity here than in LinkedIn. Maybe you have a different definition of social? – Ilari Kajaste Nov 19 '10 at 12:51
• @jeff I read your blog; why are you recommending "Lobby the websites you use to offer HTTPS browsing." when this is exactly what I'm doing and you shot me down? – Rook Nov 19 '10 at 16:05
• This thread is kind of like finding out Santa Claus isn't real. – Dubyaohohdee Mar 22 '12 at 17:50
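For concreteness, here is a minimal Python sketch (hypothetical names, not from the thread) of the hash-based scheme proposed in one of the answers above: a per-session secret R is shared once over SSL, and each plain-HTTP request carries a fresh nonce Q plus SHA1(Q + R). As the comments point out, this is no substitute for TLS; an active attacker who can tamper with the page or relay requests defeats it, and SHA-1 itself is deprecated.

```python
import hashlib
import secrets

# Server side: per-session secret R, shared with the browser once over SSL.
R = secrets.token_hex(16)

def client_sign(R):
    """Browser side: fresh nonce Q per request, S = SHA1(Q + R)."""
    Q = secrets.token_hex(16)
    S = hashlib.sha1((Q + R).encode()).hexdigest()
    return Q, S  # sent along with the session cookie; R itself stays secret

def server_verify(R, Q, S):
    """Server side: recompute SHA1(Q + R) and compare."""
    return hashlib.sha1((Q + R).encode()).hexdigest() == S

Q, S = client_sign(R)
print(server_verify(R, Q, S))  # True for a legitimate request
```

Note that a passive sniffer sees Q and S but not R, so it cannot forge new requests; it can, however, still read everything and replay what it captured, which is exactly the criticism raised in the comments.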
2015-11-26 20:30:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3072291910648346, "perplexity": 2940.652397874632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447773.21/warc/CC-MAIN-20151124205407-00285-ip-10-71-132-137.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2041610/find-the-angle-of-triangle-inside-circle
Find the angle of triangle inside circle Let's say I have a triangle ABC inscribed in a circle, with AB a diameter and an angle of 26 at A. Now I have to find the angle CBA, given just that 26 shown above. • Think about the measure of $\angle ACB$ – user261263 Dec 3 '16 at 11:06 Hint: $\angle ACB$ is 90 degrees because AB is the diameter of the circle. $\angle CAB=26$ and $AB$ is a diameter, hence $\angle ACB=90$. So $\angle CBA=180-(90+26)=64$.
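As a quick numeric sanity check (my addition, not part of the original post), Thales' theorem can be verified directly: for any point C on a circle with diameter AB, the vectors from C to A and from C to B are perpendicular, so the inscribed angle at C is always 90 degrees.

import math

# AB is a diameter of the unit circle; C is any point on the circle.
A, B = (-1.0, 0.0), (1.0, 0.0)
for t in (0.3, 1.0, 2.5):                  # a few sample positions for C
    C = (math.cos(t), math.sin(t))
    CA = (A[0] - C[0], A[1] - C[1])
    CB = (B[0] - C[0], B[1] - C[1])
    print(round(CA[0] * CB[0] + CA[1] * CB[1], 12))  # dot product: 0.0
print(180 - (90 + 26))                     # 64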
2019-08-23 04:33:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9482201337814331, "perplexity": 655.7304768643359}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317847.79/warc/CC-MAIN-20190823041746-20190823063746-00006.warc.gz"}
https://forum.azimuthproject.org/plugin/ViewComment/18873
> **Puzzle 106**. What are some reasonable equations between morphisms that we might want to impose?

Maybe we want to limit the number of layers of management to, say, 4: $\text{Manager} \circ \text{Manager} \circ \text{Manager} \circ \text{Manager} = 1_\text{Employee}$

**Puzzle 106-FPE** We want to ensure management has a strict hierarchy, i.e. a tree. What equations will enforce that?

> **Puzzle 107** How many morphisms does this category have? How is it related to the square?

The morphisms are $\lbrace [1_z = s \circ s \circ s \circ s], [s], [s \circ s], [s \circ s \circ s] \rbrace$, which makes 4. If $s$ represents a $90^\circ$ spin of the square, then we have a morphism from the initial pose to each symmetric pose of the square [without flipping/turning], of which there are 4.

$\begin{array}{c|c} \text{Pose} & \text{Morphism} \\ \hline 0^\circ & 1_z = s \circ s \circ s \circ s \\ 90^\circ & s \\ 180^\circ & s \circ s \\ 270^\circ & s \circ s \circ s \end{array}$
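A tiny sketch (my addition, not from the thread) of the composition law these four morphisms obey: powers of $s$ compose by adding exponents modulo 4, so $s \circ s \circ s \circ s$ lands back on the identity.

def compose(a, b):
    return (a + b) % 4          # morphisms stored as exponents of s

s = 1
assert compose(compose(s, s), compose(s, s)) == 0   # s∘s∘s∘s = 1_z
table = {(a, b): compose(a, b) for a in range(4) for b in range(4)}
print(table[(1, 3)], table[(2, 2)])                 # both 0: each power pairs with its inverse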
2019-08-19 19:12:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881298303604126, "perplexity": 1756.6640289247212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314904.26/warc/CC-MAIN-20190819180710-20190819202710-00171.warc.gz"}
https://math.stackexchange.com/questions/1650612/determinant-of-block-matrix-with-off-diagonal-blocks-conjugate-of-each-other
# Determinant of block matrix with off-diagonal blocks conjugate of each other. I am working on finding the determinant of the following block matrix $$\begin{pmatrix} C & D \\ D^* & C \\ \end{pmatrix},$$ where $C$ and $D$ are $4 \times 4$ matrices with complex entries that do not commute. I have looked up a theorem that states $$\det\begin{pmatrix} A & B \\ C & D \\ \end{pmatrix}=\det(A-B)\det(A+B),$$ when $A=D$ and $B=C$, but does there exist a similar simplification for my situation? Any and all help is much appreciated! • Was that theorem found by you? – Ganesh Feb 11 '16 at 14:19 Hint: Suppose $C$ is invertible [otherwise use the matrix $C - \lambda I$ in place of $C$, since that will certainly be invertible for infinitely many $\lambda \in \mathbb{C}$]. Write $\begin{pmatrix} C & D \\ D^{*} & C \end{pmatrix} = \begin{pmatrix} I & 0 \\ D^{*}C^{-1} & I \end{pmatrix} \begin{pmatrix} C & D \\ 0 & C - D^{*}C^{-1}D \end{pmatrix}$ The first factor is unit lower triangular, so its determinant is 1, giving $\det\begin{pmatrix} C & D \\ D^{*} & C \end{pmatrix}=\det(C)\det(C-D^{*}C^{-1}D)$.
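A quick numeric check of that factorization (my addition; random complex $4\times4$ blocks):

import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
D = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# det([[C, D], [D*, C]]) should equal det(C) * det(C - D* C^{-1} D)
M = np.block([[C, D], [D.conj().T, C]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(C) * np.linalg.det(C - D.conj().T @ np.linalg.inv(C) @ D)
print(np.isclose(lhs, rhs))   # True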
2019-04-21 16:51:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8373958468437195, "perplexity": 167.62724772780385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578531994.14/warc/CC-MAIN-20190421160020-20190421182020-00224.warc.gz"}
https://math.stackexchange.com/questions/1180678/inverse-fourier-transform-int-infty-infty-exp-left-x-x-02-over2
# Inverse Fourier transform: $\int_{-\infty}^{+\infty}\exp\left(-{(x-x_0)^2\over2\sigma_x}\right)\exp\left(-i\phi\right)\exp(-ix t)dx$ To compute the inverse Fourier transform I need to evaluate the following integral $$\int_{-\infty}^{\infty}\exp\left(-{(\omega-\omega_0)^2\over2\sigma_\omega}\right)\cdot\exp\left(-i\phi\right)\cdot\exp(-i\omega t)\;\mathrm{d}\omega$$ Can I ask you guys for a hint on how I should proceed? What are the common methods for tackling such problems? You may just recall the Gaussian integral result: $$\int_{-\infty}^{+\infty}\exp\left(-a{(x-x_0)^2}\right)\mathrm{d}x= \sqrt{\frac{\pi}{a}},\quad \Re a>0,\,x_0 \in \mathbb{C},$$ and rewrite your initial integral $$I=\int_{-\infty}^{+\infty}\exp\left(-{(\omega-\omega_0)^2\over2\sigma_\omega}\right)\cdot\exp\left(-i\phi\right)\cdot\exp(-i\omega t)\;\mathrm{d}\omega$$ as $$I=\exp\left(-i\phi\right)\cdot\exp\left(-\frac{t^2\sigma_\omega}{2}-i\omega_0 t\right)\cdot\int_{-\infty}^{+\infty}\exp\left(-{(\omega+it\sigma_\omega-\omega_0)^2\over2\sigma_\omega}\right)\mathrm{d}\omega$$ leading to $$I=\sqrt{2\pi\sigma_\omega}\cdot\exp\left(-i\phi\right)\cdot\exp\left(-\frac{t^2\sigma_\omega}{2}-i\omega_0 t\right).$$ (Note the minus sign on the $t^2\sigma_\omega/2$ term: the transform of a Gaussian must decay, not blow up, as $|t|\to\infty$.)
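For completeness (my addition), the completing-the-square step behind that rewrite is $$-\frac{(\omega-\omega_0)^2}{2\sigma_\omega}-i\omega t=-\frac{(\omega-\omega_0+it\sigma_\omega)^2}{2\sigma_\omega}-\frac{t^2\sigma_\omega}{2}-i\omega_0 t,$$ after which the shifted Gaussian integrates to $\sqrt{2\pi\sigma_\omega}$ by the quoted result with $a=1/(2\sigma_\omega)$ (shifting the contour off the real axis contributes nothing, since the integrand is entire and decays).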
2019-07-17 19:22:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9705913066864014, "perplexity": 116.54352926391427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525374.43/warc/CC-MAIN-20190717181736-20190717203736-00307.warc.gz"}
https://www.physicsforums.com/threads/rotational-motion.78850/
Homework Help: Rotational motion 1. Jun 12, 2005 john_halcomb I have a question. The diameter of a rotor is 7.60 m and its rotational speed is 450 rev/min. Calculate the speed of the tip. Any help would be welcomed 2. Jun 12, 2005 Pyrrhus What equation relates both? $$v = \omega r$$ 3. Jun 12, 2005 john_halcomb the answer in the back of the book says 179 m/s. but when I try that formula it comes out as 311.6 4. Jun 12, 2005 Pyrrhus Oh sorry, yes the book is right. A rev is equal to 2 pi radians, and a minute is equal to 60 seconds. You need to homogenize (sp?) your units. 5. Jun 12, 2005 john_halcomb so the 450 rev/min works out to 47.12 rad/sec
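The conversion spelled out (a small added script, not from the thread):

import math

diameter = 7.60                      # m
rpm = 450                            # rev/min
omega = rpm * 2 * math.pi / 60       # angular speed in rad/s
tip_speed = omega * diameter / 2     # v = omega * r
print(round(omega, 2), round(tip_speed, 1))   # 47.12 rad/s, 179.1 m/s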
2018-09-25 08:54:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.767692506313324, "perplexity": 8346.90159517167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161350.69/warc/CC-MAIN-20180925083639-20180925104039-00497.warc.gz"}
https://www.gamedev.net/forums/topic/687396-where-can-i-get-sound-effects/
Where can I get sound effects? Recommended Posts What are some good places to get sound effects, and what should I avoid? Share on other sites Pay attention to the license requirements on each item. You may want to grab Audacity (http://www.audacityteam.org/) as well for tweaking things and possibly for recording your own effects. With a little patience and creativity you can make most simple effects yourself. Share on other sites http://soundimage.org/ This guy has some good sound fx but doesn't have a lot of them since he mainly composes music. Share on other sites It depends greatly on what kind of sounds you are looking for. Synthesizers are good for some sounds. Some sounds can be found on the internet, as some have pointed out. You may have to record your own. A digital recorder is probably a good place to start for that. I'm sure your setup can become as expensive and as involved as you want it to be. In addition, you will probably want at least some simple audio editing software for cutting out parts of the track you don't want, if nothing else. And recording is a skill like most everything else. It takes time and practice to get good at it. And more equipment. Share on other sites Sound Ideas Sound Ideas is the repository of one of the largest commercially available sound effects libraries in the world. https://www.sound-ideas.com/ On the flight home from CGDC '96 I finally broke down and bought The General 6000 Series (50 CDs!) for $1000. Collections start at about $100.
2018-01-17 05:45:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21216802299022675, "perplexity": 1142.4753335633372}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886815.20/warc/CC-MAIN-20180117043259-20180117063259-00226.warc.gz"}
https://math.stackexchange.com/questions/610609/uniqueness-of-a-solution-of-a-system-of-equations
# Uniqueness of a solution of a system of equations While studying quantum mechanics, I encountered the following algebraic problem: We know that if $l$ is a non-negative integer: $$2l+1 = \sum_{-l}^l{c_m(-1)^m}$$ $$2l+1 = \sum_{-l}^l{\vert c_m\vert^2}$$ where $c_m$ are coefficients that may be complex. Obviously $$c_m = (-1)^m$$ is a solution of this system of equations. I'm not sure, though, how I can see whether this is the only solution. This is needed for proving the addition theorem of spherical harmonics. Use the Cauchy-Schwarz inequality: $$\sum_{m=-l}^l c_m(-1)^m \leq \sqrt{\textstyle\sum_{m=-l}^l \lvert c_m \rvert^2 \sum_{m=-l}^l 1} = 2l + 1.$$ Because the two sides are in fact equal, the vector $(c_m)_{m=-l}^l$ must be a scalar multiple of the vector $((-1)^m)_{m=-l}^l$. Denote this scalar factor by $a$. Then $\sum_{m=-l}^l c_m(-1)^m = \sum_{m=-l}^l a = (2l+1)a \implies a=1$.
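A quick numeric sanity check (my addition) that $c_m = (-1)^m$ does satisfy both constraint sums, here for $l = 3$:

import numpy as np

l = 3
m = np.arange(-l, l + 1)
c = (-1.0) ** m
print(np.sum(c * (-1.0) ** m))   # 7.0, i.e. 2l + 1
print(np.sum(np.abs(c) ** 2))    # 7.0, i.e. 2l + 1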
2022-06-30 03:45:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9761138558387756, "perplexity": 72.997214175237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103661137.41/warc/CC-MAIN-20220630031950-20220630061950-00040.warc.gz"}
https://newbedev.com/can-something-again-ever-fall-through-the-event-horizon
# Can something (again) ever fall through the event horizon? Since another answer claims that a massive magic device would form in finite time, I have to disagree. You have to wait forever, but only because your device is magic. The simplest problems are the spherically symmetric ones. And if you can get things close to an event horizon and magically bring them away as long as they stay outside, then it is possible to not even know if the black hole forms. It is widely known that it takes finite time for two black holes to merge into a single black hole; this has been demonstrated in the corresponding numerical computations. This question wasn't about the real world, it was about a world where there are magic devices that can move on timelike curves whenever they feel like it. Which is a useful thought experiment for understanding the geometry of a black hole.

Step one. Draw a Kruskal-Szekeres diagram for a star of mass M+m and pick an event at Schwarzschild $r=r_0$ and Schwarzschild $t=t_0.$

Step two. Draw a timelike curve heading to the event horizon. Consider the region that has Schwarzschild t bigger than $t_0$ and has $r$ bigger than that curve at that Schwarzschild $t.$ This is a region of spacetime that sees a spherical shell of mass $m$ starting at $r=r_0$ and $t=t_0$ and heading down into an event horizon of a mass $M$ black hole.

Step three. Now pick any event in this region of spacetime. Which is any point outside the black hole event horizon, provided it is farther out than the thing lower down. So it is ancient, waiting for the new bigger black hole to form. Say it has an $r=r_{old}$ and a $t=t_{old}.$

Step four. Trace its past light cone. Now pick any $\epsilon>0$ and trace that cone back until it reaches the surface of Schwarzschild $r=(M+m)(2+\epsilon).$ And find the Schwarzschild $t_{young}$ where that event (the past cone intersecting the surface Schwarzschild $r=(M+m)(2+\epsilon)$) occurs.

As long as the magic spherically symmetric shell of mass $m$ stays at Schwarzschild r smaller than $r=(M+m)(2+\epsilon)$ until after Schwarzschild $t=t_{young}$, then it can engage its magic engines, come back up, and say hi to the person at $r=r_{old}.$ And the person won't see it until after the event $r=r_{old},$ $t=t_{old}.$ Which means: no matter how long you wait outside, the magic spherical shell of mass $m$ could still return to you, so it most definitely has not crossed the event horizon of the original mass $M$ black hole, and not even the larger event horizon for the mass $M+m$ black hole of it plus the original black hole. We do use the magic ability to come up. If you are willing to leave some of the substance behind, it could shoot off a large fraction of itself and use that to have the rest of it escape. But real everyday substances can't get thin enough to fit into that small region just outside the horizon, so you can't make a device that does this out of ordinary materials. But as far as your logic goes, this process would take infinite time and therefore is impossible. We want to know if you can tell whether the magic device joined the black hole. The answer is no, exactly because it takes an infinite amount of Schwarzschild time. For instance, imagine a bunch of thin shells of matter. You can have flat space on the inside and then have a little bit of curvature between the two innermost shells.
And have it get more and more curved on the outside of each successive shell until, outside all of them, it looks like a star of mass $M.$ Each shell is like two funnels sewn together where they have the same circumference, with a deeper funnel always on the outside. So now, how do I know we can never know if anything crosses an event horizon? If things crossed an event horizon then the last bit to cross has a final view, what they see with their eyes or cameras as they cross. And if there is something they see that hasn't crossed yet when they cross, that thing can run away and wait as many millions or billions of years as you want. And wherever and whenever they are, the people outside will still see the collapsing shells from before they crossed the event horizon. So now imagine a different universe. One where they didn't form a black hole or cross an event horizon. But all the shells got really close, so close that everything up to that point looks the same to the person in the future. Then they turn around and come back. So we never saw a single thing cross the event horizon. And if there are magic ways to get away as long as you haven't crossed the event horizon, then there is no amount of time to wait before you know they crossed. Because no matter how long you wait, they still might not cross the horizon, or they might cross it and you don't know yet. With the spherical symmetry it is easy to see that what I say works, because there are really nice pictures for the spherically symmetric case where you can see what is and isn't possible. So you can pick a radius and a time, and I can draw a point on a graph and trace back to find out how close the magic device has to get before it turns around. As long as things can wait until they are really, really close, then you can't tell if they have crossed an event horizon. The other answer is just plain wrong. If you take a collapsing star of mass $M+m$ then you can find where an arbitrarily distant time sees the infalling body. And as long as you waited until that point, the magic device can escape.
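To put a number on "really, really close" (an added aside, not part of the original answer; units with $G=c=1$): an outgoing radial light ray in the Schwarzschild geometry obeys $dt/dr = (1-2M/r)^{-1}$, so the coordinate time to climb from $r = 2M(1+\epsilon)$ to some fixed $R$ is $$t = \int_{2M(1+\epsilon)}^{R}\frac{dr}{1-2M/r} = R - 2M(1+\epsilon) + 2M\ln\frac{R-2M}{2M\epsilon},$$ which diverges like $2M\ln(1/\epsilon)$ as $\epsilon\to 0$. A shell hovering close enough above the horizon can therefore postpone its reappearance at $r=R$ by any desired Schwarzschild time, which is exactly why no finite wait settles whether it crossed.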
2023-03-22 01:05:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4545366168022156, "perplexity": 319.47524086088305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00074.warc.gz"}
http://python.steno3d.com/en/stable/content/api/projects.html
Project API Steno3D projects organize resources so they can be viewed. The steps to construct the project pictured above can be found online in the example notebooks. class steno3d.project.Project(**metadata) Steno3D top-level project Required Properties: Optional Properties:
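A hypothetical usage sketch; the page above only documents the steno3d.project.Project(**metadata) constructor, so the keyword names below are assumptions, not confirmed API:

# Only Project(**metadata) is documented here; 'title' and 'description'
# are assumed keyword names, not confirmed by this page.
import steno3d

proj = steno3d.project.Project(title='Demo', description='Example container')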
2019-04-24 08:09:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21808873116970062, "perplexity": 6721.274683427091}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578636101.74/warc/CC-MAIN-20190424074540-20190424100540-00367.warc.gz"}
https://tex.stackexchange.com/questions/601693/commutative-diagram-for-kernel-trick-definition
# Commutative diagram for kernel trick definition I'm trying to draw a commutative diagram for the kernel trick. I'm trying the following code:

$\begin{tikzcd}
&X \arrow{r}{\Phi} & H& \\%
X \times X \arrow[swap]{dr}{\pi_X} \arrow[swap]{ur}{\pi_X} \arrow{r}{k} & \mathbb{R} && \mathcal{H} \times \mathcal{H} \arrow[swap]{ul}{\pi_\mathcal{H}} \arrow[swap]{dl}{\pi_\mathcal{H}} \arrow{l}{\langle \cdot, \cdot \rangle_\mathcal{H}}\\%
&X \arrow{r}{\Phi} & H&
\end{tikzcd}$

This code produces the diagram, but with 2 errors: No shape named `tikz@f@5-2-3' is known. I think the culprit is a tikzcd arrow in cell 2-4. No shape named `tikz@f@5-2-3' is known. I think the culprit is a tikzcd arrow in cell 2-4. Actually, the diagram is almost nice, except that the k arrow is too short and \mathbb{R} is not centered. I guess the errors are all about this, but I'm quite new to TikZ, so I don't fully understand these errors. Can someone shed some light on the errors and ways to improve the diagram? Thanks. You should use one more column, but you can also shorten the arrows that go over this middle column. I used the "modern" syntax for arrows, which I find much handier.

\documentclass{article}
\usepackage{amsmath,amssymb,tikz-cd}
\begin{document}
$\begin{tikzcd}
&X \arrow[rr,"\Phi"] &[-1.5em] &[-1.5em] H \\
X \times X \arrow[dr,"\pi_X"'] \arrow[ur,"\pi_X"] \arrow[rr,"k"] && \mathbb{R} && \mathcal{H} \times \mathcal{H} \arrow[ul,"\pi_\mathcal{H}"'] \arrow[dl,"\pi_\mathcal{H}"] \arrow[ll,"{\langle \cdot, \cdot \rangle_\mathcal{H}}"'] \\
&X \arrow[rr,"\Phi"] && H
\end{tikzcd}$
\end{document}

Note: I left the two plain H's, but I believe they should be \mathcal{H} as well. • Thank you for the provided answer and code! Yeah, I made a typo with H; those should of course be \mathcal{H}! Jun 17 '21 at 17:33
2022-01-17 05:15:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8788577914237976, "perplexity": 1638.9407843231238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300289.37/warc/CC-MAIN-20220117031001-20220117061001-00640.warc.gz"}
https://aviation.stackexchange.com/questions/23927/how-over-engineered-were-old-planes-and-how-over-engineered-are-they-now-tha
# How “over-engineered” were old planes and how “over-engineered” are they now that we have more advanced testing and research options? I've seen several times on television shows, and even heard from older pilots, that old planes were "over-engineered" to add some margin for error on top of the designed engineering calculations. Of course, no specifics are given, and so I was thinking: in this current age of super-fast computers, better wind tunnels, new software tools, and decades of actual experience, when people design new planes, how much of a margin do they design into them now? It seems like confidence in computer modeling is pretty high, and "over-engineering" seems to have typically meant making things stronger than they "need" to be, with more reinforced elements, all things which add to the cost and weight of an airplane, factors that are at odds with profitability and performance. • @vsz I think it would be fair to consider the large number of Cessnas, Pipers, and similar still being flown that were built in the 1940s-1970s “old”, and everything after that (which has been able to take advantage of computers, wind tunnels, etc.) as mid-aged or new. – Slipp D. Thompson Dec 27 '15 at 2:39 • @vsz I don't think that pre-1920 airplanes were, as a class, as incapable as that: Many of the aircraft of WW1 (1914-18) were capable of acrobatics and had ranges of hundreds of miles. – Wayne Conrad Dec 27 '15 at 16:16 • Consider the number of engines used on a plane - older planes tended to have more than necessary (4) simply because they wanted the plane to keep flying even if two engines failed, vs modern planes that will generally keep flying with only one engine out (with two engines normally) – user2813274 Dec 27 '15 at 16:30 • @WayneConrad : if you look at the statistics, you'll see that almost half of the aircrew casualties were from accidents instead of enemy fire. – vsz Dec 27 '15 at 23:55 • @SlippD.Thompson I guess I was a little vague. I was thinking the 1930-1970s. What prompted the question was Smithsonian's series on "Planes that Changed the World" where they featured the DC-3 and talked about how it was "over-engineered" but just threw that term around without definition. – Canuk Dec 28 '15 at 20:45 You are right that older designs used (and needed!) higher safety margins. I would, however, not call this over-engineering. But that is beside the point. The biggest advances have been made in materials! Not just new alloys, but much better quality control in manufacturing. The aluminium sheets and slabs you would get from a factory 50 or 80 years ago had much greater variations in local strength, and in strength between different production batches. The same goes for fasteners, forgings, whatever. Computer-controlled manufacturing and the relentless effort to improve quality and consistency have made this possible. Only wood is still the same as it was then, but Basil Bourque is right: Processing the wood has also improved by leaps and bounds. But now consider what we do when we stress a wing: We use the maximum static lift coefficient, add aileron deflection and use that bending moment at $v_A$ to size our wing spar. We do this by applying a safety factor of 1.5 (§25.303). For parts with a more error-prone production process, additional factors need to be added (see §25.619).
And now consider what happens in the real world: The dynamic maximum lift coefficient, which does not obey the rules and regulations imposed by the FAA, will easily be 1.3 times bigger than the static lift coefficient used for stressing the wing spar. In tests, up to 150% has been achieved. Makes you think how sensible that safety factor really is. While we understand the static loads very well (because they are easy to study), dynamic and fatigue loads still happen to surprise the engineers from time to time. A little extra margin sometimes can be quite useful. On the Eurofighter the structural safety factor was indeed reduced to 1.4, with the reasoning that more precise modeling and simulation would allow a smaller margin. • If you want to add a real-life example of exceeding the 1.5 safety factor in a dynamic situation, you could refer to American Airlines 587 (pdf). Page 61 shows the limit and ultimate design loads, and the loads created by the crew inputs. – DeltaLima Dec 27 '15 at 0:55 • @DeltaLima: This is a good example of dynamic loads which would have been almost trivial to calculate, but were not covered by regulations. The safety factor is fine; what was missing was the consequence of repeated inputs at the eigenfrequency of the yawing motion. Airbus complied with the letter, but not the spirit of the (flawed!) law in this case. A good engineer could and would anticipate the higher loads. – Peter Kämpf Dec 27 '15 at 7:38 • @JustSid: Yes, most pilots know what they are doing. On smaller aircraft without power boost they feel from the control forces what they do to their aircraft. But some are overconfident and do not realize how close they operate to the limits. Like the British certification pilot for the Do-228 who allowed the trim to run all the way forward and then could not pull out from the resulting dive. You are supposed to react within 1.5 s from the start of the trim malfunction. If you wait too long, that's it. His last words were "help me on the stick!". This is a general characteristic of complex ... – Peter Kämpf Dec 27 '15 at 12:45 • @JustSid: ... technical systems that they are more complex under the hood than most operators assume. If they don't stick to the rules, they can die, be it in a nuclear power station (Chernobyl is a classic case) or in aircraft. Only very few people have the detailed knowledge to define the rules, but they don't operate the equipment. – Peter Kämpf Dec 27 '15 at 12:48 • Only wood is still the same as it was then. As for wood, hasn’t even that improved dramatically in recent decades as well with modern fabrication of plywoods? Better process control, better shaping, better glues and resins? (I am no expert) – Basil Bourque Dec 28 '15 at 2:26 The legal "over-engineering" factor has not been changed since 1970: FAR Part 25, section 303: Unless otherwise specified, a factor of safety of 1.5 must be applied to the prescribed limit load which are considered external loads on the structure. When a loading condition is prescribed in terms of ultimate loads, a factor of safety need not be applied unless otherwise specified. [Amdt. 25-23, 35 FR 5672, Apr. 8, 1970] Over-engineering is a concept in value engineering, not aircraft design. In general, over-engineering means making the design more robust than necessary or unnecessarily complicated. In that sense, I would say that aircraft are/were rarely over-engineered. Older aircraft were more conservative in determining the loads.
In structural design of any aircraft, the designer tries to make the aircraft safe in a number of steps: • Safety factor (which, for transport aircraft, is 1.5) • Conservative material properties The process followed now is pretty much the same; however, the engineers are able to determine the loads acting on the structures to a greater precision compared to earlier times, and are able to optimize the structure accordingly. The safety factor and the taking of conservative material properties have not changed much. Though one can say that, for example, making the wing spar thicker than necessary to account for unforeseen loads is good from a structural point of view, it is bad from the performance, weight, and fuel-consumption points of view. The problem is that if the designer knowingly pads the safety factor more than necessary, he's actually reducing the aircraft's performance. As a result, over-engineering (beyond design or regulatory requirements) is plain bad design and is not recommended. In short, the process hasn't changed over time: it is simply that the tools have improved and more data is available for use. There are other issues here: aircraft like the F-35 Lightning II are repeatedly called over-engineered, but the point is that they were designed for the given specifications. For example, the F-35 was required by design to perform the tasks of a number of aircraft and as such is having a significantly difficult engineering period. • +1 for pointing out the difference between "over-engineering" and safety margin. All mechanical and civil engineers dealing with safety-critical systems use safety margins in their calculations. The aerospace industry is no different. "Over-engineering" normally means making a system more complicated than it really needs to be to get the same job done. This usually tends to result in less robust systems, since more complex systems have more things that can go wrong. – reirab Dec 27 '15 at 20:13 On a different note from the other answers, there have been a few articles indicating that newer 'bone'-like designs are being considered and implemented by companies like Airbus. Airbus has partnered with Autodesk to rethink the design of those lowly partitions. Its new partition debuted today at the Autodesk University conference in Las Vegas and, thanks to 3-D printing and some wild new algorithms based on slime mold and bone growth, it weighs in at just 66 pounds. Airbus’s current partitions weigh 143 pounds apiece. “Our goal was to reduce the weight by 30 percent, and we altogether achieved weight reduction by 55 percent,” says Bastian Schaefer, innovation manager at Airbus. “And we’re right at the beginning.” http://www.wired.com/2015/12/airbuss-newest-design-is-based-on-slime-mold-and-bones/ There are even some pretty impressive futuristic designs: http://www.airbus.com/innovation/future-by-airbus/the-concept-plane/ http://www.smithsonianmag.com/arts-culture/aircraft-design-inspired-by-nature-and-enabled-by-tech-25222971/?no-ist • Rest assured, this concept plane pictured is as likely to become reality as those prognoses about the year 2000 from the Fifties. Writing this from my underwater habitat which can be reached by flying car only. Yeah, right! – Peter Kämpf Dec 27 '15 at 8:09 • @PeterKämpf sure, which is why I led with an article showing how these techniques are being considered (at small scale) today. The point isn't that this is THE FUTURE but rather that there is room to improve current designs, in part using new and improved CAD techniques.
I figure the OP might be interested in such developments. – NPSF3000 Dec 27 '15 at 8:11 • There are many small parts in today's aircraft which can benefit mightily from getting a little more attention. Most are left-over compromises which worked well enough not to be looked at in detail later, so they were never analyzed or optimized properly. – Peter Kämpf Dec 27 '15 at 8:49 • Is that structure really futuristic, or is it revisiting Barnes Wallis' WW2-era geodesics? – Brian Drummond Dec 27 '15 at 16:22 • @PavelPetrman check this example out: wired.com/2015/12/… – NPSF3000 Dec 28 '15 at 23:21 I think calculating stresses and loads is no better now than 50 years ago. Engineers understood forces and physics no differently then than now. They just used a slide rule instead of software, but they got the same results. Instead of modeling things, they built real-life ones and tested them. They got the same answers. They built a wing, bent it till it broke, discovered its strength, and were able to build one to any load specification needed without overbuilding it. Modern software just gets results cheaper and faster. The big differences between then and now are materials and system evolution. Not calculating forces and stresses. • You cannot calculate stresses/strains/displacements in an arbitrary mechanical system, at least not in an analytical way (closed-form solution) and not very accurately. But you can get very accurate results with a known margin of error with numerical mechanics, e.g. with the Finite Element Method (FEM). In short, I strongly disagree with your calculations statement. On a side note, I do think that experiments are still useful and still used in order to validate your mechanical models. However, I do agree with your materials and construction evolution statement. – user12485 Dec 26 '15 at 16:41 • Please note that there are still a lot of systems which cannot be modelled and simulated satisfactorily - not even with the Finite Element Method. Examples include contact mechanics, fluid-structure-thermo interactions, damage modelling (e.g. wear, fatigue) and many more! – user12485 Dec 27 '15 at 16:13
2020-01-26 09:18:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4286460876464844, "perplexity": 1540.1788759414023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687958.71/warc/CC-MAIN-20200126074227-20200126104227-00316.warc.gz"}
https://www.gamedev.net/forums/topic/117139-this-and-inheritance/
# this* and inheritance ## Recommended Posts Hi, I have run into a spot of trouble with my this pointers and wonder if anyone here can help me out... I want to create a class which inherits a function from another class. The function needs to know what size the current object is, so it uses "sizeof(*this)". When the function is then used in the derived object I also want the size of the derived object, but the size I get is the size of the base class where the function is defined... I guess this is the way things are, but is there any other way to do what I want without overloading the function or in any other way changing every derived class? The reason behind this is that I want the function to be able to handle all objects as long as they are derived from the base class. All data needed is in the base class except the size of the inherited object, which I don't see how to get as I don't know what objects might be used in the future. Here is some code to demonstrate what I mean:

***************************************************************
#include <iostream>
using namespace std;

class ca {
public:
    ca() {}
    ~ca() {}
    void getSize() { cout << sizeof(*this) << " "; }
    int a;
    long b;
};

class cb : public ca {
public:
    cb() {}
    ~cb() {}
    long c;
};

int main() {
    ca* a = new ca();
    cb* b = new cb(); // or even better: ca* b = new cb();
    // Both these return the size of the class ca...
    // but I want the size of cb too!
    a->getSize();
    b->getSize();
    delete a;
    delete b;
}
***************************************************************

End of Code I use MSVCPP6... Regards Anomaly_E ##### Share on other sites I would suggest you check the virtual keyword. What happens is that the method getSize is declared in your base class but will only return the size of class ca. To have the size of cb, you need to declare the method getSize virtual in ca and create a virtual method getSize in cb. I let you experiment with it. Have fun. Red Ghost ##### Share on other sites Expanding on what he said, and sticking with your example...

#include <iostream>
using namespace std;

class ca {
public:
    ca() {}
    ~ca() {}
    virtual void getSize() { cout << sizeof(*this) << " "; }
    int a;
    long b;
};

class cb : public ca {
public:
    cb() {}
    void getSize() { cout << sizeof(*this) << " "; }
    ~cb() {}
    long c;
};

int main() {
    ca* a = new ca();
    cb* b = new cb(); // or even better: ca* b = new cb();
    a->getSize();
    b->getSize();
    delete a;
    delete b;
}

Now it works. That was a basic example of virtual functions. Hope it helps. Billy - BillyB@mrsnj.com ##### Share on other sites Thanks for the feedback but my problem is still there... I don't want to do any "sizeof" in the derived classes... in my real code the function that needs "sizeof" is big and ugly... and I want it to be easy for other people to add their own derived classes. In the ideal case you just derive your class from the base class and do nothing more... As a little background information, the function(s) in question are some low-level memory management that handles conversion to/from C-compatible char*; that's why I need the size... please keep shooting :-) ##### Share on other sites quote: Original post by Anomaly_E Thanks for the feedback but my problem is still there... I don't want to do any "sizeof" in the derived classes... This has nothing to do with where you call sizeof. sizeof is an operator, and is evaluated separately for each class.
It returns the number of bytes reserved for objects of a particular class. For inheritance hierarchies, it is important to make member functions and the destructor virtual. This is what allows for polymorphic dispatch, and affects the size of the internal representation of the class. quote: in my real code the function that needs "sizeof" is big and ugly... and I want it to be easy for other people to add their own derived classes. In the ideal case you just derive your class from the base class and do nothing more... If the classes in question are complex, add a size() method. It's much more reliable, particularly if dynamically allocated data is involved (sizeof will return the size of a pointer to data rather than the size of the data pointed to). ##### Share on other sites quote: Original post by Oluseyi For inheritance hierarchies, it is important to make member functions and the destructor virtual. This is what allows for polymorphic dispatch, and affects the size of the internal representation of the class. Hm, the destructors too, you say... Now I tried to declare all destructors and the function as virtual in my original example but it still doesn't work... I didn't add the virtual function to the derived class as I don't want it there... but I don't know if I understood that last bit correctly... were you suggesting that it is possible to call the base class function containing sizeof(*this) through a derived object with the desired result without altering the derived class...? Otherwise I guess it is acceptable to let users make their own size functions as you said... that is perhaps something to fall back on after all... [edited by - Anomaly_E on October 2, 2002 2:14:34 PM] [edited by - Anomaly_E on October 2, 2002 2:17:07 PM] [edited by - Anomaly_E on October 2, 2002 2:19:29 PM] ##### Share on other sites Short answer? You can't do what you want to do without adding a virtual size_of() or getSize() function. Period. The language doesn't work that way. What happens is this: when you do *this, you're dereferencing the this pointer, which is a pointer to the class that the method is being called on. Now, what *this returns is dependent on where it is called. *this has absolutely nothing to do with the vtable, which is where all polymorphic behavior is implemented. In other words, you can't have polymorphism without the virtual table (vtbl). If you call *this in a method of MyClass, you'll get a reference to an instance of MyClass. See? So, since sizeof() is determined at compile time, if you call sizeof(*this) inside a method of MyClass, even if that method is being called on an instance of MyDerivedClass, you'll get the size of MyClass. The solution? Add this function to every derived class (and make it virtual in the base): virtual size_t size_of() { return sizeof(*this); } That way, it's in the vtbl, and will be dispatched to the correct member function when called, regardless of where (as long as you call it from a pointer or reference type). ##### Share on other sites Ok, thanks. Then I know where I stand... it will be the virtual function then...
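For what it's worth, a modern-C++ sketch (my addition, not from the 2002 thread) that avoids hand-writing size_of() in every derived class by letting a CRTP base template generate the override; note that as written it only covers one level of derivation:

// CRTP: each concrete class derives from Sized<Itself>, which supplies
// a correct size_of() override automatically.
#include <cstddef>
#include <iostream>

struct Base {
    virtual ~Base() {}
    virtual std::size_t size_of() const = 0;
};

template <class Derived>
struct Sized : Base {
    std::size_t size_of() const override { return sizeof(Derived); }
};

struct Ca : Sized<Ca> { int a; long b; };
struct Cb : Sized<Cb> { int a; long b; long c; };

int main() {
    Base* p = new Cb;
    std::cout << p->size_of() << '\n';  // size of Cb, not of Base
    delete p;
}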
2017-11-19 21:58:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18015480041503906, "perplexity": 2245.166765709929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805809.59/warc/CC-MAIN-20171119210640-20171119230640-00758.warc.gz"}
https://stats.stackexchange.com/questions/422134/what-is-the-correct-number-of-observations-to-report-for-an-arima-arimax-model
# What is the correct number of observations to report for an ARIMA/ARIMAX model? This might be due to my relative inexperience with time series modelling, but I am confused about the correct number of observations to report for an ARIMA/ARIMAX model. I couldn't find any post that directly gets at this (though Number of observations used for ARIMA modeling comes close). Say I run the following model: fit1 <- arima(lh, order = c(0,1,0)) And then check the number of “used” observations (wording from the documentation): fit1$nobs length(lh) The number of observations is one less than the total length of the time series, because we difference it once (ARIMA(0,1,0)). Fair enough. But if I then add a lag: fit2 <- arima(lh, order = c(1,1,0)) fit2$nobs The number of “used” observations is the same, which is confusing to me, since I would have expected to lose an additional observation at the beginning of the series. How can we have a value for the lag at the first observation? The same thing goes for MA terms: fit3 <- arima(lh, order = c(0,1,1)) fit3$nobs How can we have a value for the lag of the error at the first observation? Clearly I'm missing something. It gets even a little more confusing if I want to incorporate transfer functions with the arimax function from the TSA package, since arimax doesn't return a nobs object nor does it have a nobs method. I would greatly appreciate some help on this! Best, Bertel The issue here is examining the number of estimable equations. When you introduce AR structure in the errors this CAN act to reduce the number of estimable equations. Lag structures in predictors have no effect if they are each less than or equal to the model-implied lag of Y. If they exceed the model-implied lag of Y, based upon the differencing in Y and the AR structure of the error process, then the number of estimable equations is appropriately reduced by the differential. Degrees of freedom = number of estimable equations less the number of parameters estimated. For example, if we have NOB observations and have a first difference operator for the error structure, we have NOB-1 estimable equations. If we introduce one lag of X in the model this doesn't change the number of estimable equations. If we introduce a lag of 2 for the X variable, this reduces the number of estimable equations to NOB-2. • This is incorrect: "observations" count data, not model degrees of freedom. – whuber Aug 14 at 12:37 • this is not a clear sentence, please clarify. I have added more content to my answer. – IrishStat Aug 14 at 12:48 • +1 Your clarification resolved my misunderstanding of your original answer--thank you. – whuber Aug 14 at 12:53 • @IrishStat : Thanks for the reply and the explanation! (and sorry for my slow response). So what would you report for N (the number of observations) in, let's say, an ARIMA(3,1,1) model estimated on a time series of length 100? Or an ARIMA(1,1,3)? Or would you not report N at all? – Bertel Aug 31 at 13:19 • The sample size is always N. The degrees of freedom associated with the error process is based upon the number of estimable relationships (say M) minus the number of estimated parameters (say J). For a (3,1,1) model this would be J=4 and M=N-4, thus degrees of freedom = M-J. ... For a (1,1,3) model this would be J=4 and M=N-2, and thus degrees of freedom = M-J. I would present all 4, i.e. N, M, J, degrees of freedom – IrishStat Aug 31 at 15:58
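A quick R sketch (my addition; it assumes the built-in lh series used in the question) showing that only the differencing order changes nobs, while AR/MA orders do not:

# length(lh) is 48; each difference costs one observation,
# while adding AR or MA terms does not.
for (ord in list(c(0,1,0), c(1,1,0), c(0,1,1), c(0,2,1))) {
  cat(paste(ord, collapse = ","), ":", arima(lh, order = ord)$nobs, "\n")
}
# Expect 47, 47, 47, 46.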
2019-09-21 05:07:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7139883637428284, "perplexity": 1158.5251768150297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574265.76/warc/CC-MAIN-20190921043014-20190921065014-00018.warc.gz"}
https://indico.ihep.ac.cn/event/2996/session/4/contribution/34
# International Workshop on Neutrino Factories, Super Beams and Beta Beams 19-24 August 2013 IHEP Asia/Shanghai timezone # Contribution IHEP - B326 Neutrino Oscillation Physics # Status of the NOvA Experiment ## Speakers • Dr. Jonathan PALEY ## Abstract content The NuMI Off-Axis $\nu_e$ Appearance (NOvA) experiment, currently under construction, is a long-baseline neutrino oscillation experiment optimized for the measurement of $\nu_\mu \rightarrow \nu_e$ appearance. The experiment consists of two nearly identical fully-active liquid-scintillator tracking calorimeter detectors separated by 810 km and exposed to an upgraded 700 kW NuMI beam from Fermi National Accelerator Laboratory. Goals of the experiment include measurements of $\theta_{13}$, resolution of the neutrino mass hierarchy, measurement of the CP-violating angle $\delta_{\mathrm{CP}}$, and the octant of the $\theta_{23}$ mixing angle. This talk will provide an overview of the detectors, physics goals and sensitivities of the experiment, and a first look at commissioning data from the far detector.
2022-01-23 00:13:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1751943826675415, "perplexity": 5817.646469037524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303917.24/warc/CC-MAIN-20220122224904-20220123014904-00442.warc.gz"}
https://codegolf.stackexchange.com/questions/243449/resize-the-image
# Resize the image Given an $n\times m$ matrix $A$ and two integers $w,h$, output a $w\times h$ matrix $B$, such that $$B_{i,j} = \int_{i-1}^i\mathrm dx\int_{j-1}^j A_{\left\lceil \frac xw\cdot n\right\rceil,\left\lceil \frac yh\cdot m\right\rceil}\mathrm dy\text{ (1-index),}$$ $$B_{i,j} = \int_i^{i+1}\mathrm dx\int_j^{j+1} A_{\left\lfloor \frac xw\cdot n\right\rfloor,\left\lfloor \frac yh\cdot m\right\rfloor}\mathrm dy\text{ (0-index),}$$ or "split a square into $n\times m$ smaller rectangles, fill each with the value given in $A$, then re-split it into $w\times h$ rectangles and take the average over each small rectangle" (which is a simple image-rescaling algorithm, and that's why this title is used). Shortest code in each language wins. You can assume a reasonable input range, though this may advantage a few languages. Test cases: $$\begin{matrix}1&1&1\\ 1&0&1\\ 1&1&1\end{matrix}, (2,2) \rightarrow \begin{matrix}\frac 89&\frac 89\\ \frac 89&\frac 89\end{matrix}$$ $$\begin{matrix}1&1&1\\ 1&0&1\\ 1&1&0\end{matrix}, (2,2) \rightarrow \begin{matrix}\frac 89&\frac 89\\ \frac 89&\frac 49\end{matrix}$$ $$\begin{matrix}1&0\\0&1\end{matrix}, (3,3) \rightarrow \begin{matrix}1&\frac 12&0\\ \frac 12&\frac 12&\frac 12\\ 0&\frac 12&1\end{matrix}$$ $$\begin{matrix}1&0\\0&1\end{matrix}, (3,2) \rightarrow \begin{matrix}1&\frac 12&0\\ 0&\frac 12&1\end{matrix}$$ Sample solution just by definition

# Jelly, 20 → 14 bytes I can't help but think that more of the manipulation could be put inside the reduction ƒ - done

xs€⁴Z¹ƭL¤ZÆmðƒ

A full program accepting [w, h] and A that prints a representation of B. Try it online! Or see the test-suite (this uses the register ® in place of the 2nd program argument, ⁴ to make a reusable Link, and formats each result as a grid). ### How? We repeat row elements w times, split these into chunks of length n, transpose the result, and take the means; then we do the same to the result, but with h and m instead of w and n.

xs€⁴Z¹ƭL¤ZÆmðƒ - Main Link: [w,h]; A
ðƒ  - reduce [A,w,h] by:
x   - repeat (row elements) ([[1,2]]x3->[[1,1,1,2,2,2]])
⁴   - 2nd program argument, A
ƭ   - call in turn:
Z   - ...1st time: transpose
¹   - ...2nd time: do nothing
L   - length -> 1st=n; 2nd=m
s€  - split each (row) into chunks of length (n or m)
Z   - transpose
Æm  - arithmetic mean (vectorises)
    - implicit print

# Charcoal, 28 bytes

FE²N≔EιEζ∕ΣEμ§μ÷⁺×κLμπιLμζIζ

Try it online! Link is to verbose version of code. Takes input in the order w, h, A. Explanation:

FE²N
Repeat twice, once for the width, once for the height.
≔EιEζ∕ΣEμ§μ÷⁺×κLμπιLμζ
Rescale and transpose the array.
»Iζ
Output the final array.
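For reference, a straightforward (non-golfed) NumPy sketch of the definition, added for illustration: up-sample each axis by repetition to a common grid, then block-average. It follows the $w\times h$ indexing of the formulas, so the displayed test cases come out as the transpose of this output.

import numpy as np
from math import lcm

def box_resize(A, w, h):
    # A has shape (n, m); the result has shape (w, h) per the definition.
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    R, C = lcm(n, w), lcm(m, h)                      # common refinements
    up = A.repeat(R // n, axis=0).repeat(C // m, axis=1)
    return up.reshape(w, R // w, h, C // h).mean(axis=(1, 3))

print(box_resize([[1, 1, 1], [1, 0, 1], [1, 1, 1]], 2, 2))  # all 8/9
print(box_resize([[1, 0], [0, 1]], 3, 2))  # [[1, 0], [.5, .5], [0, 1]]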
2022-06-26 14:21:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4801747500896454, "perplexity": 3717.6761247444406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103269583.13/warc/CC-MAIN-20220626131545-20220626161545-00075.warc.gz"}
https://codereview.stackexchange.com/questions/84656/converting-a-dropdownlist-retrieval-service-to-generics-or-delegate-methods/84659#84659
# Converting a DropDownList retrieval service to generics or delegate methods

`mappingService` below is actually just AutoMapper; I have a wrapper around it so that I can inject it. I want to refactor the following code to be cleaner:

```csharp
public class DropDownListService : IDropDownListService
{
    private readonly IMappingService _mappingService;
    private readonly ICompanyRepository _companyRepository;
    private readonly IProjectTypeRepository _projectTypeRepository;
    private readonly IStatusRepository _statusRepository;
    private readonly IProjectManagerRepository _projectManagerRepository;
    private readonly IProjectArchitectRepository _projectArchitectRepository;
    private readonly IFeeTypeRepository _feeTypeRepository;
    private readonly IStateRepository _stateRepository;
    private readonly IPersonTypeRepository _personTypeRepository;
    private readonly IMarketRepository _marketRepository;

    public DropDownListService(IMappingService mappingService,
                               ICompanyRepository companyRepository,
                               IProjectTypeRepository projectTypeRepository,
                               IStatusRepository statusRepository,
                               IProjectManagerRepository projectManagerRepository,
                               IProjectArchitectRepository projectArchitectRepository,
                               IFeeTypeRepository feeTypeRepository,
                               IStateRepository stateRepository,
                               IPersonTypeRepository personTypeRepository)
    {
        _mappingService = mappingService;
        _companyRepository = companyRepository;
        _projectTypeRepository = projectTypeRepository;
        _statusRepository = statusRepository;
        _projectManagerRepository = projectManagerRepository;
        _projectArchitectRepository = projectArchitectRepository;
        _feeTypeRepository = feeTypeRepository;
        _stateRepository = stateRepository;
        _personTypeRepository = personTypeRepository;
    }

    public IEnumerable<IDropDownList> RetrieveCompanies()
    {
        return _mappingService.Map(_companyRepository.GetAll(), new List<Company>());
    }

    public IEnumerable<IDropDownList> RetrieveJobTypes()
    {
        return _mappingService.Map(_projectTypeRepository.GetAll(), new List<ProjectType>());
    }

    public IEnumerable<IDropDownList> RetrieveStatuses()
    {
        return _mappingService.Map(_statusRepository.GetAll(), new List<Status>());
    }

    public IEnumerable<IDropDownList> RetrieveProjectManagers()
    {
        return _mappingService.Map(_projectManagerRepository.GetAll(), new List<ProjectManager>());
    }

    public IEnumerable<IDropDownList> RetrieveProjectArchitects()
    {
        return _mappingService.Map(_projectArchitectRepository.GetAll(), new List<ProjectArchitect>());
    }

    public IEnumerable<IDropDownList> RetrieveFeeTypes()
    {
        return _mappingService.Map(_feeTypeRepository.GetAll(), new List<FeeType>());
    }

    public IEnumerable<IDropDownList> RetrievePhases()
    {
        return _mappingService.Map(_feeTypeRepository.GetAll(), new List<Phase>());
    }

    public IEnumerable<IDropDownList> RetrieveStates()
    {
        return _mappingService.Map(_stateRepository.GetAll(), new List<State>());
    }

    public IEnumerable<IDropDownList> RetrievePeopleTypes()
    {
        return _mappingService.Map(_personTypeRepository.GetAll(), new List<PersonType>());
    }

    public IEnumerable<IDropDownList> RetrieveMarkets()
    {
        return _mappingService.Map(_marketRepository.GetAll(), new List<Market>());
    }
}
```

As you can see, I am injecting way too many things into this class. I am not currently doing a SPA application, so I cannot do this separately on the presentation layer. What I would really like to do is use generics. The problem is that the DB value returned to me looks like this:

```csharp
public int StateID { get; set; }
public string StateName { get; set; }
```

Each entity has the ID field and the name field, only they are named after the table. I am unable to write one map (as far as I know how) with AutoMapper to map them to the correct type. How can I clean this class up?

- The title of your post should be the function/purpose of your code. – Mar 21 '15 at 14:36
- Also, you should list what language this is; that's your most important tag. – Mar 21 '15 at 14:43
- @SirPython Fixed that, sorry; I haven't posted on this site in a while and should have re-read the rules for posting. – Mar 21 '15 at 15:01

I may have gotten your question wrong... Will the following implementation work for you? Can you show what your `IMappingService` looks like?
```csharp
public interface IRepository<T>
{
    IEnumerable<T> GetAll();
}

public interface ICompanyRepository : IRepository<Company> { }

public class DropDownListService<T> : IDropDownListService
{
    private readonly IMappingService _mappingService;
    private readonly IRepository<T> _repository;

    public DropDownListService(IMappingService mappingService, IRepository<T> repository)
    {
        _mappingService = mappingService;
        _repository = repository;
    }

    public IEnumerable<IDropDownList> RetrieveDropDownList()
    {
        return _mappingService.Map(_repository.GetAll(), new List<T>());
    }
}
```

- Is it possible to pass `IRepository<T>` to a constructor? I thought you had to specify the type in order to inject it. – Mar 22 '15 at 15:15
- It depends on how you register it in your DI container. With Autofac you can register a generic `DropDownListService<T>` with the dependency on `IRepository<T>` like this: `builder.RegisterGeneric(typeof (IEnumerable<>)).AsImplementedInterfaces();`, and it will resolve the proper `IRepository<T>`. – Mar 23 '15 at 10:42

The number of dependencies suggests this class is doing too much work and violates the single responsibility principle. Instead of having one class fetch all the different enumerations, have multiple classes, e.g., one for project members, one for states, etc.

- I considered that, except then my MVC controller will have the same number of injections. I was hoping for a clean solution that would allow me to inject one class and decide from there. – Mar 21 '15 at 19:34
- Please provide an example along with your point/opinion to add context to your post, giving the author of the original post (and everyone else that reads this answer) a better grasp of what you are saying. – Mar 21 '15 at 20:37
- @Robert, that wouldn't change the underlying problem. I'd argue such a solution would be easy, but not clean (think unit tests). However, you could still combine similar services into smaller facades (e.g., a TypeService, PersonService, ...) and let your master class bundle them. At least you can reduce the number of dependencies per class then. – Mar 22 '15 at 12:35
2022-01-29 12:10:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25023677945137024, "perplexity": 5707.027128997346}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304883.8/warc/CC-MAIN-20220129092458-20220129122458-00090.warc.gz"}
https://engineering.purdue.edu/~mark/puthesis/faq/bibliography-missing-item/
A "! Latex Error: Something's wrong--perhaps a missing \item" error message is printed when using BibTeX October 29, 2009 Mark Senn A "! Latex Error: Something's wrong--perhaps a missing \item" error message is printed when using BibTeX. This error will be produced if BibTeX is being used but there are no \cite commands in your document.
2014-04-24 09:41:30
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9295589327812195, "perplexity": 12938.455303060175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
https://arcadianfunctor.wordpress.com/2007/08/06/blogrolling-on-iii/
## Blogrolling On III

Note that if we always set $x = 1$, $J(x,y)$ has a non-zero imaginary part when $-1 < y < 3$. If $y = 2$, and thus $z = 2x$, the coefficients take the form $C_{i} = 2^i S_i$ for the Schroeder numbers $S_i$. Then $J$ is a cube root of unity $\omega$, and the generating function corresponds to the rule

$$J = 1 + 2J + J^2.$$

Can we prove a law $J^4 = J$? Similarly, if $y = 3$ and $J = -1$, the rule would be $J = 1 + 3J + J^2$. Can we interpret these rules in terms of trees?

Let's write the first rule as $J = 1 + J + J + J^2$. This looks a lot like the Motzkin rule, except for the extra factor of $J$. What if we distinguished left and right branches for Motzkin trees? That is, take a full binary rooted tree template and count whole trees with 0, 1 or 2 branches that may be fitted to the template. Then the desired rule works by differentiating left and right unary branches from the root. Instead of a fivefold bijection of the set of Motzkin trees, there is now a fourfold mapping, and the series

$$1 + 4 + 24 + 176 + 1440 + \cdots = \omega.$$

Motzkin numbers $M_i$ are also given in terms of trinomial coefficients $T(n,1)$. The $T(n,0)$ coefficients go back to Euler. These are the number of permutations of $n$ ternary symbols ($-1$, $0$, or $1$) which sum to $0$. A general $T(n,k)$ is the number of permutations of these symbols that sum to $k$.
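The coefficient claim $C_i = 2^i S_i$ is easy to check numerically. The following short Python sketch is an editorial illustration (not part of the original post): it computes the large Schroeder numbers by fixed-point iteration on their standard functional equation $S = 1 + tS + tS^2$, truncated as power series, and then confirms that $2^i S_i$ reproduces $1, 4, 24, 176, 1440, \ldots$

```python
N = 6  # number of coefficients to check

# Fixed-point iteration for the large Schroeder series S = 1 + t*S + t*S^2;
# coefficient k becomes exact after k+1 iterations, so N passes suffice.
S = [0] * N
for _ in range(N):
    S_sq = [sum(S[i] * S[k - i] for i in range(k + 1)) for k in range(N)]
    S = [(1 if k == 0 else S[k - 1] + S_sq[k - 1]) for k in range(N)]

print(S)                                # [1, 2, 6, 22, 90, 394]
print([2**i * S[i] for i in range(N)])  # [1, 4, 24, 176, 1440, 12608]
```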
## 5 Responses so far »

### 1. Matt Noonan said,

Sure enough, the substitution rule J = 1 + 2J + J^2 results in an isomorphism J = J^4! It is essentially the same as in the Motzkin case: expand until you get to a J^5 term, then start collapsing terms with the lowest power of J. You seem to need an extra expansion at one point to "borrow" some powers of J. I want to think about this more after I get all these calculus tests graded...

### 2. Kea said,

Hi Matt! Good to see you here, and thanks for working through it. I'd also love to spend more time on this, because I'm sure it's related to the physics we do here, but then there are so many things to do...

### 3. Doug said,

Hi Kea, the series 1, 4, 24, 176, 1440 can be found, intermixed, among a larger sequence: Sloane's A036912, "Maximum inverse of phi(n) increases" [author David W. Wilson]. http://www.research.att.com/~njas/sequences/?q=1+4+24+176+1440&sort=0&fmt=0&language=english [see the list and graph as well]. There does not appear to be any exact Sloane match for the shorter series. I do not know if this is significant.

### 4. Doug said,

Hi Kea, the list of my August 08, 2007 12:29 AM comment correlates:

1 -> 1
4 -> 3
24 -> 9
176 -> 22
1440 -> 43

There does not appear to be any exact Sloane match for 1, 3, 9, 22, 43. There are, however, 2914 listings which include 1 3 9 22 43, but not necessarily in that order, and many have duplicates of the individual numbers. The first 1-10 of the 2914 examples at http://www.research.att.com/~njas/sequences/?q=1+3+9+22+43&sort=0&fmt=0&language=english:

1. Array read by antidiagonals, generated by the matrix M = [1,1,1;1,N,1;1,1,1].
2. Triangular array T read by rows: T(i,0)=T(i,2i)=1 for i >= 0, T(i,1)=T(i,2i-1)=[ i/2 ] for i >= 1, and for i >= 2 and 2<=j<=2i-2, T(i,j)=T(i-1,j-2)+T(i-1,j-1)+T(i-1,j) if i+j is odd, T(i,j)=T(i-1,j-2)+T(i-1,j) if i+j is even.
3. Continued fraction expansion of Pi^e.
4. Continued fraction for Li(2).
5. Inverse permutation to A084491.
6. Inverse permutation to A084495.
7. Inverse permutation to A084499.
8. Continued fraction for cube root of 13.
9. 3^n mod 89.
10. a(1) = 1, a(2) = 2, and a(n) = smallest number not included earlier that divides the sum of two previous terms.

This is interesting, relating one sequence to 2914 sequences, but I do not know what it means. I did look at 11-30 [and the last three pages, 290-292] but not beyond.
2021-10-24 15:00:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7001253962516785, "perplexity": 926.1326420630138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00253.warc.gz"}
http://degital.net/trapezoidal-rule/trapezium-rule-error.html
# Trapezium Rule Error

## Question

I am stuck on the error bound for the trapezoidal rule. I used

$$|E_{T}| \le \frac{K(b-a)^3}{12n^2}.$$

In the process, I took the third derivative of the given function, which was $x\cos x$, in order to find the maximum of the second derivative.

## Answer

For "nice" functions, the error bound you were given is unduly pessimistic. Here's why. Let $f(x)=x\cos x$. We get

$$f''(x)=-x\cos x-\sin x-\sin x=-(2\sin x+x\cos x).$$

Now in principle, to find the best value of $K$, we should find the maximum of the absolute value of the second derivative. Usually, though, $f''$ will be more unpleasant still, and finding the maximum of its absolute value could be very difficult. Note that at $\pi$ the cosine is $-1$ and the sine is $0$, so the absolute value of the second derivative can be as large as $\pi$. Thus, if we use $K=2+\pi$, we can be sure that we are taking a pessimistically large value for $K$.

In the interval from $0$ to $\pi/2$, our second derivative is less than $2+\pi/2$. So we have reduced our upper bound on the absolute value of the second derivative to $2+\pi/2$, say about $3.6$. Then we know that the error has absolute value less than or equal to

$$\frac{3.6\pi^3}{12n^2}.$$

We want to make sure that the above quantity is $\le 0.0001$. I get something like $n=305$.

These bounds give the largest possible error in the estimate, but it should also be pointed out that the actual error may be significantly smaller than the bound.

## Background: the trapezoidal rule

The area of the trapezoid on the interval $[x_{i-1},x_i]$ is $\frac{\Delta x}{2}\bigl(f(x_{i-1})+f(x_i)\bigr)$. So, if we use $n$ subintervals of width $\Delta x = \frac{b-a}{n}$, the integral is approximately the sum of these areas; upon doing a little simplification,

$$\int_a^b f(x)\,dx \approx \frac{\Delta x}{2}\bigl(f(x_0)+2f(x_1)+\cdots+2f(x_{n-1})+f(x_n)\bigr).$$

The sign of the error can also be seen from the geometric picture: for a convex function, the trapezoids include all of the area under the curve and extend over it. In general, three techniques are used in the analysis of the error: Fourier series, residue calculus, and the Euler-Maclaurin summation formula, which yields an asymptotic error estimate as $N \to \infty$.

## Trapezoidal integration of experimental data

When working with experimental data, there is no known underlying function, so the bound above cannot be applied directly. If the measurements are sufficiently accurate, the curvature of the underlying function will be manifestly evident in the data, and the second differences can be used to estimate the error being made by the trapezoidal rule. If, on the other hand, nothing stops you from assuming your discrete samples came from a piecewise-linear function, then your area calculation was already exact. As a sanity check from a related MATLAB Answers thread: computing the integral of $\sin x$ from $0$ to $\pi$ (exact answer $2$) this way works well when the data are precise. A separate question is how to obtain the uncertainty in the integrated area given the uncertainty in each data point; one approach is to propagate the error of each segment and then continue propagating the errors as the segments are added together.
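A small Python check of the worked example above (an editorial addition, not part of the original page): it compares the actual trapezoidal error for $\int_0^\pi x\cos x\,dx = -2$ against the bound $\frac{3.6\,\pi^3}{12n^2}$ used in the answer.

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

f = lambda x: x * math.cos(x)
a, b = 0.0, math.pi
exact = -2.0  # integral of x*cos(x) over [0, pi]

for n in (10, 100, 305):
    err = abs(trapezoid(f, a, b, n) - exact)
    bound = 3.6 * (b - a) ** 3 / (12 * n ** 2)
    print(n, err, bound)  # the observed error stays below the bound;
                          # at n = 305 the bound is about 0.0001
```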
2018-04-22 14:19:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7518249154090881, "perplexity": 1573.449719241372}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945604.91/warc/CC-MAIN-20180422135010-20180422155010-00240.warc.gz"}
https://testbook.com/question-answer/the-maximum-work-is-done-in-compressing-air-when-t--6246d9ef2e51469a6278afca
# The maximum work is done in compressing air when the compression is

This question was previously asked in BPSC AE Paper 5 (Mechanical), 25th March 2022 (Official Paper).

1. adiabatic
2. isothermal
3. Both (A) and (B)
4. None of the above

## Detailed Solution

Concept: The work done during compression of air is given by

$$dw = \int v\,dP,$$

that is, the area under the curve projected onto the $P$-axis (on a $P$-$V$ plot).

Now, between the same two pressure limits $P_1$ and $P_2$ (with $P_2 > P_1$), the final volume of the air under the various processes is as follows. Let $\frac{P_2}{P_1}=r$.

Process 1 (Isothermal): $P_1V_1 = P_2V_2$, so

$${V_2} = \frac{{{V_1}}}{r}.$$

Process 2 (Adiabatic): $P_1V_1^{\gamma}=P_2V_2^{\gamma}$, so

$${V_2} =\frac{V_1}{r^{\frac{1}{\gamma}}}.$$

Since $\gamma > 1$, we have $r^{1/\gamma} < r$, so the adiabatic curve lies at a larger volume than the isothermal one at every intermediate pressure. The area projected onto the $P$-axis, and hence the compression work, is therefore greatest for adiabatic compression.
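A quick numeric comparison of the compression work $\int V\,dP$ for the two processes (an editorial sketch; the pressure ratio and $\gamma$ below are arbitrary illustrative values, not from the source):

```python
import math

# Compare flow work W = ∫ V dP between the same pressure limits for an
# ideal gas; P1, P2, V1 and gamma are illustrative values only.
P1, P2, V1, gamma = 1.0, 8.0, 1.0, 1.4

# isothermal: P V = const  =>  V(P) = P1*V1/P  =>  W = P1*V1*ln(P2/P1)
W_iso = P1 * V1 * math.log(P2 / P1)

# reversible adiabatic: P V^gamma = const  =>  V(P) = V1*(P1/P)**(1/gamma)
# =>  W = gamma/(gamma-1) * P1*V1 * ((P2/P1)**((gamma-1)/gamma) - 1)
W_adi = gamma / (gamma - 1) * P1 * V1 * ((P2 / P1) ** ((gamma - 1) / gamma) - 1)

print(W_iso, W_adi)  # W_adi > W_iso: adiabatic compression takes the most work
```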
2022-12-05 12:28:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8081257939338684, "perplexity": 8414.091692837867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00326.warc.gz"}
https://www.riverpublishers.com/journal_read_html_article.php?j=JCSM/5/3/2
## Journal of Cyber Security and Mobility

Vol. 5, Issue 3, July 2016

### Variety of Scalable Shuffling Countermeasures against Side Channel Attacks

Article No: 2    Page: 195-232    doi: 10.13052/jcsm2245-1439.532

Nikita Veshchikov, Stephane Fernandes Medeiros and Liran Lerman

Department of Computer Sciences, Université libre de Bruxelles, Brussels, Belgium

E-mail: {nveshchi; stfernan; llerman}@ulb.ac.be

Received 1 April 2017; Accepted 14 June 2017; Publication 12 July 2017

## Abstract

IoT devices have very strong requirements on all resources such as memory, randomness, energy and execution time. This paper proposes a number of scalable shuffling techniques as countermeasures against side channel analysis. Some extensions of an existing technique called Random Start Index (RSI) are suggested in this paper. Moreover, two new shuffling techniques, Reverse Shuffle (RS) and Sweep Swap Shuffle (SSS), are described along with their possible extensions. Extensions of RSI, RS and SSS might be implemented in a constrained environment with small data and time overheads. Each of them might be implemented using a different amount of randomness and thus might be fine-tuned according to the requirements and constraints of a cryptographic system, such as time, memory, available number of random bits, etc. RSI, RS, SSS and their extensions are described using the SubBytes operation of the AES-128 block cipher as an example, but they might be used with other operations of AES as well as with other algorithms. This paper also analyses RSI, RS and SSS by comparing properties such as the total number of permutations that might be generated using a fixed number of random bits, data complexity and time overhead, and it evaluates their resistance against some known side-channel attacks such as correlation power analysis and template attacks. Several of the proposed shuffling schemes are implemented on an 8-bit microcontroller that uses them to shuffle the first and the last rounds of AES-128.

## Keywords

- Side channel analysis
- countermeasures
- hiding techniques
- shuffling countermeasure
- microcontroller
- AES
- lightweight shuffling

## 1 Introduction

One of the fastest and most widely spreading domains in the modern digital world is the domain of mobile connected devices called the Internet of Things (IoT). This world is composed of embedded systems and small portable devices such as smartcards or microcontrollers. These interconnected devices are distributed among their users, who can also be potential attackers; thus the security of the IoT is an important issue. A new type of attack becomes very important in this context: attacks where the adversary has access to the attacked device. These attacks are among the most powerful attacks on cryptographic implementations, and they represent a real-world threat to IoT devices. Side channel attacks (SCA) are among the most efficient and strongest attacks of this type. Instead of targeting the algorithms (abstractions), they focus on their implementations (real, physical devices). Since side channel attacks on implementations of cryptographic algorithms were introduced to the scientific community [8, 9], a number of different countermeasures have been suggested and studied in the literature. Reordering of independent operations, generally referred to as shuffling, was suggested as one of the possible countermeasures against side channel attacks [7, 18].
Small embedded devices have strong constraints on resources such as time, power consumption, the amount of memory that might be used, and the number of random bits that a Random Number Generator (RNG) can generate per encryption. Thus, lightweight security and lightweight countermeasures are often preferred for implementations in such environments. Even less constrained environments often have strong requirements on parameters such as high throughput. Some countermeasures might heavily affect the execution speed of an algorithm (e.g., increase it by a factor of 6 or 7) [13], so even in such environments a relatively lightweight countermeasure that does not heavily affect the performance of the device would often be the designer's choice. This work proposes a set of scalable shuffling techniques that can be fine-tuned to fit the specific requirements and constraints of a given application. The designer can use our schemes according to the resources available in the system, including random numbers, timing constraints and available memory.

### 1.1 Related Works

Shuffling has already been studied in the literature. Random Start Index (RSI) was applied to the AES block cipher by Herbst et al. [7] and by Tillich et al. [21]. In their implementation of AES, operations were executed in a sequential order but with a different, randomly chosen starting index: RSI starts by choosing a column index, which will be the index of the first column to be processed (the other columns are processed sequentially). Furthermore, a second index is chosen for the starting line, common to all of the columns. In their schemes, shuffling was combined with masking and applied only to the initial AddRoundKey, the SubBytes and MixColumns of the first round, the MixColumns and AddRoundKey of round 9, and the SubBytes and AddRoundKey of the last round. The RSI shuffling technique is very lightweight and may easily be implemented with virtually no overhead [14, 15]. The basic version of RSI, as used by Tillich et al. [21], requires 4 random bits. We suggest several extensions of the RSI shuffling technique; our extensions are scalable, i.e., they are flexible in terms of the number of random bits that might be used in order to shuffle instructions (e.g., from 1 up to 10 random bits in the case of AES-128).

Fully random permutation inside AES operations was suggested by Tillich et al. [21] and applied by Rivain et al. [18]. They also combined the masking and shuffling countermeasures. Veyrat-Charvillon et al. [23] also studied the basic RSI shuffling technique and suggested improvements to the Random Permutation (RP) shuffling technique by manipulating the program's control flow. The RP technique allows reordering all 16 bytes of one operation (such as SubBytes) of AES-128; it can generate all 16! possible permutations (roughly $2^{44.25}$) using 64 random bits [23]. This technique is not as lightweight as RSI, but it is able to generate many more permutations (all of them). Our shuffling schemes require less randomness than RP, but they also produce fewer permutations; our shuffling techniques are as lightweight as RSI, but they allow producing more than 16 different orderings. The RP shuffling technique was also suggested as an improvement (to be used in combination with masking) for the DPA contest v4 [1, 17] and was used for the DPA contest v4.2.
Two different algorithms were suggested for the generation of the random permutation: one of them generates full entropy but has higher big-O complexity, while the other is less computationally expensive but results in lower entropy [1]. Fernandes Medeiros [13] introduced a shuffling technique that he called SchedAES; it randomizes the sequence of instructions of AES over several operations. This countermeasure takes precedence relations between operations into account in order to decide which instruction could be executed next, and it allows generating many different orderings of operations. Unfortunately, this technique requires additional data structures and a lot of random bits per execution (up to 3 000), and so could not be used in very constrained environments.

All shuffling algorithms require randomness as input in order to generate a permutation, and most of them are rigid, requiring a fixed amount of random bits: for example, the technique proposed by Veyrat-Charvillon et al. [23] for Random Permutation shuffling requires 64 random bits, and the algorithm used by Tillich et al. [21] needs 4 random bits. Our shuffling algorithms allow choosing the number of random bits and thus allow tuning the implementation according to the constraints and requirements of the system, as well as to the amount of available resources.

### 1.2 Notations

We are going to use the term shuffling technique to denote a method or an algorithm that allows reordering the operations (instructions) of an algorithm without changing its final result. The term shuffle will be used to represent one possible ordering (one permutation) of all operations (or instructions) that are being shuffled. Number of shuffles will represent the total number of different shuffles that can be generated using a particular shuffling technique.

The abbreviations LSB and MSB denote the least significant bit and the most significant bit respectively; LSBs and MSBs will be used for several least and most significant bits. We use the terms second LSB and second MSB to denote the bits next to the LSB and MSB respectively; e.g., if the index of the LSB of a byte is 0 and the index of the MSB is 7, then the index of the second LSB is 1 and the index of the second MSB is 6. The terms third LSB (MSB) or fourth LSB (MSB) are used in the same manner. We will refer to a random number generator used by a cryptographic system as an RNG.

Since it is possible to shuffle one or several rounds of an algorithm, as well as several operations per round, and each of these operations might be done in one or several clock cycles (depending on the hardware used), we are going to use the term random bits available per unit of time, where the unit of time might represent one clock cycle, one operation, one round or one encryption.

### 1.3 Structure of This Paper

The rest of this paper is organized as follows. Section 2 presents our new shuffling schemes as well as their extensions, using the SubBytes operation of the AES-128 block cipher as an illustration. Section 3 compares our shuffling techniques among themselves, as well as with a couple of other techniques, from several points of view. Section 4 sums up the analysis and discusses our results. Finally, Section 5 concludes this paper and gives a list of suggestions for further improvements and future works.

## 2 Scalable Shuffling Techniques

Here we are going to describe 3 scalable shuffling techniques.
For the sake of simplicity, all examples presented in this section are given for the SubBytes operation (application of the S-box) on the state of the 128-bit version of the AES block cipher. This section presents the shuffling techniques on the example of the first round of AES, but the same techniques might be applied to any number of rounds, depending on the system's requirements and the amount of available resources (time, memory, amount of random bits, etc.). All presented shuffling schemes might easily be adapted to other operations of AES as well as to other algorithms. Most of the shuffling techniques suggested in this section are based on the fact that the internal state might be seen as a vector or as a matrix. Indeed, in the memory of a computer, a vector of size 16, a 4 × 4 matrix or even a 2 × 8 matrix are all just tables of 16 memory units (in our case, bytes).

### 2.1 Random Start Index

**Basic version.** Random Start Index (RSI) shuffling for AES-128 represents the AES state as a vector of 16 elements. The S-box is applied to all 16 bytes one by one, starting from a randomly chosen index (between 0 and 15). This shuffling technique requires 4 random bits and gives us 16 possible starting indexes (and 16 different shuffles in total). Two variations of the basic RSI technique might be implemented; these generalize RSI so that it might be applied with fewer or more random bits (between 1 and 10 bits in our AES-128 examples).

**Vector-RSI.** The Vector RSI (V-RSI) extension uses the same representation of the AES-128 state as the basic RSI: the state is used as a vector, and a random start index might be chosen with fewer than 4 bits of randomness. This might be done by giving a fixed value to all missing bits, by reusing some of the available random bits (possibly by combining them), or even by combining these two approaches; see Figure 1.

Figure 1 Structure of V-RSI index generation. The order of bits coming from different parts might be chosen arbitrarily.

For example, if we have only 3 available random bits for RSI, we can fix the position of the missing one as the LSB of the starting index and always assign it the value 0. In this case we will have 8 possible shuffles, with only even numbers as starting indexes; see Figure 2(a).

Figure 2 Example of V-RSI use with 2 and 3 available random bits.

Here is another example. Say we have only 2 random bits per unit of time and we would like to use V-RSI shuffling. We can fix those random bits as the two LSBs of the index, fix the value of the MSB of the index to 1, and assign the value of the second MSB to the exclusive-or (XOR) of the two available random bits. If we use this algorithm to generate the starting index, we will be able to generate 4 different shuffles that might start with indexes 8, 11, 13 or 14; see Figure 2(b). This last example is not practical, especially in software implementations, since it requires additional computations. Nevertheless, it is worth noting that we can choose any set of starting indexes for an implementation by choosing how to assign values to the missing bits of the random index. The idea of using a 3-part computation (fixed part, random part and combined part) to generate a random index might be used in a scenario where the shuffling scheme is the same for all devices, but each instance is different thanks to different combination functions and different fixed parts. In such a scenario each device will use a different shuffling; thus an attacker will have less chance of being able to profile one device in order to attack another one.
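As a concrete illustration, here is a minimal Python sketch of V-RSI with 3 random bits (an editorial addition, not the authors' code), matching the Figure 2(a) example where the missing LSB is fixed to 0 so that only even start indexes occur. The identity `SBOX` table is a placeholder for the real AES S-box.

```python
import secrets

SBOX = list(range(256))  # placeholder: a real implementation uses the AES S-box

def v_rsi_subbytes(state, rnd3):
    """Apply the S-box to all 16 bytes, starting at an index built from
    3 random bits with the missing LSB fixed to 0 (Figure 2(a))."""
    start = (rnd3 & 0b111) << 1        # 8 possible even start indexes
    for k in range(16):
        i = (start + k) & 0x0F         # sequential order, wrapping at 16
        state[i] = SBOX[state[i]]
    return state

state = list(secrets.token_bytes(16))
v_rsi_subbytes(state, secrets.randbits(3))
```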
**Matrix-RSI.** The second extension of the RSI technique, which we call Matrix RSI (M-RSI), handles the internal state of AES as a matrix and treats it row by row. Since the state is handled row by row, we can simply apply V-RSI to each row. Since all rows are handled separately, we can also start with any row, i.e., we can reuse the V-RSI technique in order to choose the starting row. This technique allows us to shuffle the SubBytes operation with 1 to 10 random bits and can give us from 2 to 1024 possible shuffles. Table 1 shows how M-RSI might be applied to the AES-128 state depending on the number of available random bits per unit of time. The table is structured as follows: the All rows columns show how we can go through all rows, and the Cells in a row columns show how we may handle all cells in one row. The following notations are used: Fixed - a normal, non-random order is used (e.g., 0, 1, 2, 3); Rand(n) - the starting index is chosen using n random bits; S - the same random bits are used to get the starting index in each row; D - different random bits are used to generate the starting indexes in different rows.

Table 1 Examples of M-RSI use on the 4 × 4 AES-128 state using different numbers of random bits

| Available Random Bits | All Rows: Bits | All Rows: Handling | Cells in a Row: Bits | Cells in a Row: Handling | Number of Shuffles |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | Rand(1) | 0 | Fixed | 2 |
| 2 | 2 | Rand(2) | 0 | Fixed | 4 |
| 3 | 2 | Rand(2) | 1 | Rand(1), S | 8 |
| 4 | 0 | Fixed | 4 | Rand(1), D | 16 |
| 4* | 2 | Rand(2) | 2 | Rand(2), S | 16 |
| 5 | 1 | Rand(1) | 4 | Rand(1), D | 32 |
| 6 | 2 | Rand(2) | 4 | Rand(1), D | 64 |
| 8 | 0 | Fixed | 8 | Rand(2), D | 256 |
| 9 | 1 | Rand(1) | 8 | Rand(2), D | 512 |
| 10 | 2 | Rand(2) | 8 | Rand(2), D | 1024 |

*The second version with 4 bits offers more security; see Section 3 and Table 5.

For example, if we have 6 available random bits and we want to use M-RSI, according to the table we might use 2 bits to choose a random row to start with, and we can also use 1 bit per row to choose a random starting index in each row (using V-RSI with 1 bit on a vector of 4 bytes). Notice that this table only gives some examples of how to use M-RSI per number of available bits; multiple combinations might be implemented for some numbers. For example, with 4 bits we might also use 2 bits to choose a starting row and then use 2 random bits to choose a random starting cell (the same in each row). Some of these choices might be more efficient and/or more secure than others. Unfortunately, we were not able to find a "nice" combination for 7 available random bits, i.e., one that could be implemented efficiently without special cases.

### 2.2 Reverse Shuffle

The idea behind the simplest version of the Reverse Shuffle (RS) technique is the following: the AES-128 state is used as a vector of 16 bytes, and the S-box is applied to all bytes of the state in forward or reversed order, depending on the value of 1 random bit. For example, if the value of the random bit is 0 we may go through the state from byte 0 to byte 15, and if the value of the random bit is 1 we can go through the bytes in the reversed order (from 15 to 0).
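A minimal Python sketch of this simple RS (again an editorial illustration with a placeholder identity `SBOX`); the traversal index is computed with an XOR rather than a branch, in the spirit of the branch-free index calculations the authors describe in Section 3.3:

```python
SBOX = list(range(256))  # placeholder S-box

def rs_subbytes(state, rnd1):
    """One random bit selects forward (rnd1 = 0) or reversed (rnd1 = 1) order."""
    for k in range(16):
        i = k ^ (0x0F * rnd1)  # k when rnd1 == 0, 15 - k when rnd1 == 1
        state[i] = SBOX[state[i]]
    return state
```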
**Matrix-RS.** RS might be extended by using the state of AES-128 as an m × n matrix instead of a vector (where m·n is the size of the original vector, 16 in our case); we are going to call this extension Matrix-RS (M-RS). We will specify the exact M-RS version by using the notation M-RS m × n. Note that M-RS 1 × 16 gives us the original simple RS.

The idea behind M-RS 4 × 4 is the following: we can use RS on each row (of 4 bytes) as well as for all rows (start from row 0 or row 3 of the matrix). This allows us to use from 1 up to 5 random bits for shuffling. For example, if we have 4 random bits, we can go through all rows in forward order (no randomness required) while going through the cells in each row in forward or reversed order (a different order for each row, 4 bits of randomness); see the example in Figure 3, and also Table 2.

Figure 3 Example of M-RS 4 × 4 with 4 available random bits.

Table 2 Examples of M-RS use on the 4 × 4 AES-128 state using different numbers of random bits

| Available Random Bits | All Rows: Bits | All Rows: Handling | Cells in a Row: Bits | Cells in a Row: Handling | Number of Shuffles |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | Rand | 0 | Fixed | 2 |
| 2 | 1 | Rand | 1 | Rand, S | 4 |
| 3 | 1 | Rand | 2 | Rand, 2S | 8 |
| 4 | 0 | Fixed | 4 | Rand, D | 16 |
| 5 | 1 | Rand | 4 | Rand, D | 32 |

Table 2 shows how M-RS 4 × 4 might be applied to AES-128 depending on the number of available random bits. The table uses the following notations: Rand means that the indexes are handled in forward or reversed order at random; Fixed means that the same fixed order is always used to go through the cells in a row (or the rows in the matrix); S means that the same random bits are used on several rows; D means that different random bits are used for all rows.

Since a 16-byte AES-128 state might be represented as a matrix in several different ways (matrices of different sizes), we may use this to our advantage when using more or fewer random bits for shuffling. If we want to use more than 5 random bits and generate more shuffles, we can use the M-RS 8 × 2 shuffle: it allows us to use up to 9 random bits (1 bit per row and 1 bit for all rows) and generate 512 shuffles.

### 2.3 Sweep Swap Shuffle

The idea of the Sweep Swap Shuffle (SSS) is based on the fact that the state of AES-128 might be represented as an m × n matrix (e.g., a 4 × 4 or a 2 × 8 matrix). A matrix might be handled row-by-row or column-by-column. SSS might be implemented, e.g., by swapping the pieces of code that go through row and column indexes. In order to specify how a vector is represented as a matrix, we will use the notation SSS m × n. Figure 4 shows the two possible orders of SSS 4 × 4.

Figure 4 Going through bytes of the AES-128 state matrix with SSS 4 × 4.

**Part SSS.** The idea behind the Part-SSS (P-SSS) extension of the SSS technique is the following: the state of AES-128 might be broken into several equal parts, e.g., 2 parts of 8 bytes. An SSS technique can then be applied to each part separately, which allows us to create more different shuffles (by using more than 1 random bit). We will use the notation Pk-SSS m × n in order to specify the number k of identical parts that we want to use. Note that P1-SSS 4 × 4 gives us the original SSS 4 × 4. See the example of P2-SSS 2 × 4 in Figure 5.

Figure 5 Possible shuffles of the AES-128 state with the P2-SSS 2 × 4 technique.

By using P-SSS on AES-128 we can generate up to 16 shuffles by using 1 to 4 random bits; see Table 3.

Table 3 Examples of P-SSS use on the AES-128 state using different numbers of random bits

| Random Bits (and k) | Technique | Shuffles |
| --- | --- | --- |
| 1 | P1-SSS 4 × 4 | 2 |
| 2 | P2-SSS 2 × 4 | 4 |
| 4 | P4-SSS 2 × 2 | 16 |
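A minimal Python sketch of the basic SSS 4 × 4 sweep (editorial, with a placeholder `SBOX` again): one random bit decides whether the flat 16-byte state is swept row by row or column by column. Note how the diagonal bytes 0, 5, 10 and 15 are visited at the same instants either way, which is exactly the unrandomized-instruction weakness discussed in Section 3.1.

```python
SBOX = list(range(256))  # placeholder S-box

def sss_subbytes(state, rnd1):
    """Sweep the 4x4 state row-major (rnd1 = 0) or column-major (rnd1 = 1)."""
    for outer in range(4):
        for inner in range(4):
            # swapping the roles of the two loop indexes swaps rows and columns
            i = 4 * outer + inner if rnd1 == 0 else 4 * inner + outer
            state[i] = SBOX[state[i]]
    return state
```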
**Multidimensional SSS.** The idea behind the Multidimensional SSS (MD-SSS) extension of the SSS technique is based on the fact that a vector might be seen as a multidimensional matrix. For example, the state of AES-128 might be seen as a 2 × 4 × 2 matrix; see the examples in Figure 6. This allows us to go through the dimensions in any order; e.g., in 2 dimensions the state might be handled row by row or column by column (go through the first dimension then the second one, or the other way around).

Figure 6 Examples of representations of the AES-128 state as multidimensional matrices.

To specify a version of SSS we are going to use the notation MD-SSS d1 × d2 × ⋯ × dD, where D is the number of dimensions and di is the size of the state in dimension i. The number of shuffles that can be generated with MD-SSS depends on the number of dimensions used to represent the state. Since we can choose any ordering of the dimensions when handling the state, the number of different shuffles that might be generated is given by D!, and thus the number of necessary random bits is given by ⌈log₂(D!)⌉. Table 4 gives several examples of MD-SSS used with the AES-128 state for different numbers of available random bits.

Table 4 Examples of MD-SSS use on the AES-128 state with different numbers of random bits

| D | Random Bits | State Representation | Shuffles |
| --- | --- | --- | --- |
| 2 | 1 | 2 × 8 | 2 |
| 3 | 3 | 2 × 4 × 2 | 6 |
| 4 | 5 | 2 × 2 × 2 × 2 | 24 |

## 3 Analysis

In order to analyse our shuffling algorithms, as well as to compare them to existing schemes, we introduce a couple of new terms and definitions.

### 3.1 Randomization

The randomization range of a shuffling technique is the biggest interval in which the shuffling algorithm operates and in which the shuffled operations might be reordered. The randomization interval of a shuffling technique might be one operation (e.g., AddRoundKey), one round (or several operations of one round), several rounds or the entire algorithm. If the same shuffling technique is applied to the SubBytes operation of all rounds of AES, the randomization interval of this technique is still one operation (SubBytes), since instructions belonging to different SubBytes are not reordered among them. The randomization range of all our shuffling techniques is one operation (SubBytes, as presented in Section 2). RP also has a randomization range of one operation. SchedAES has a very wide randomization range and allows generating many different shuffles, but it requires a huge amount of randomness.

A fully randomized instruction (or operation) is an instruction that might be reordered (and executed) at any instant in time by a given shuffling technique inside its randomization range without changing the final result of the algorithm being shuffled. A partially randomized instruction (or operation) is an instruction that is not fully randomized, but that might be reordered and executed at two or more different instants in time by a given shuffling technique inside its randomization range without changing the final result of the algorithm being shuffled. An unrandomized instruction (or operation) is an instruction that is always executed at the same moment in time inside the randomization range when using a shuffling technique. A shuffling algorithm is fully randomized if all instructions inside its randomization range are fully randomized. If at least one instruction is unrandomized or only partially randomized, then the shuffling technique is partially randomized. RSI applied to SubBytes is fully randomized in its basic version, but if we use fewer random bits (as in V-RSI) it becomes only partially randomized.
Different versions of M-RSI might be fully randomized or partially randomized, depending on choices made during the implementation (the different numbers of random bits used to choose the start index for the rows and inside each row). RS and its extensions are always partially randomized, but they have no unrandomized instructions when used with AES. Unfortunately, SSS is partially randomized and has unrandomized instructions, since some bytes are always used at the same moment in time: indeed, the S-box is always applied to the first byte at the beginning and to the last byte at the end of the SubBytes operation. Moreover, if we use e.g. SSS 4 × 4 on AES-128, the S-boxes on bytes 0, 5, 10 and 15 are unrandomized, since these bytes are situated on the diagonal of the 4 × 4 matrix. In general, handling a square matrix row by row or column by column does not change the position of the elements on the diagonal. Thus, SSS k × k will have k unrandomized instructions.

In order to analyse all of the proposed shuffling schemes, we executed each one of them over the entire range of possible random inputs that each algorithm could receive as a parameter. For every algorithm we generated a heatmap of all possible positions in time at which a SubBytes operation can take place for every byte index. See examples of such plots in Figure 7; other figures are available in Appendix 9. We can see that for each scheme the available positions for each byte are uniformly randomly distributed, with the exception of 4 bytes of SSS (the bytes that are situated on the diagonal of the matrix). The exact patterns that we can observe on these heatmaps depend on the way the scheme was implemented (i.e., which bits were chosen to be fixed and which are random; recall Figures 1 and 2).

Figure 7 Examples of heatmaps of positions when the SubBytes operation takes place for every byte.

We also generated the same type of heatmap for the RP shuffling scheme; see Figure 8. We used the implementation of the RP shuffling scheme from DPA Contest v4.2 [17]. It is currently impossible to enumerate all possible inputs (randomness) required by this shuffling scheme in a reasonable amount of time, so the heatmap in Figure 8 is generated using $2^{35}$ permutations. Using this approximation, we can see that the ratio between the highest and the lowest number of occurrences of a byte at a given position equals 1.000116, which is less than 0.01% of difference (approximately $2^{-13}$).

Figure 8 Heatmap of the Random Permutation (RP) shuffling scheme. Implementation from DPA Contest v4.2 [1].

### 3.2 Number of Shuffles

An optimal shuffling algorithm is an algorithm that is able to generate $2^n$ different shuffles using n random bits. If we have n random bits of information, we will be able to generate at most $2^n$ different values. If a shuffling algorithm uses n bits of randomness and generates fewer than $2^n$ different shuffles, then it is not an optimal shuffling algorithm (from the point of view of information theory). RS, RSI and all of their extensions use n bits in order to generate $2^n$ shuffles (see Tables 1 and 2), thus these schemes are optimal; however, that is not always the case for SSS. The simple version of SSS is optimal, as is P-SSS, but not MD-SSS, since an MD-SSS yields D! shuffles (where D is the number of dimensions) and $\forall a, b \in \mathbb{N}$, $a > 2 \Rightarrow a! \neq 2^b$. RP is able to generate all k! possible permutations of the state (where k is the number of bytes that have to be shuffled), but it is not optimal, since it requires more than $\log_2(k!)$
random bits; the implementation proposed by Veyrat-Charvillon et al. [23] requires 64 bits of randomness.

Doubling the number of instants at which an operation could be executed increases the number of traces required for a successful attack roughly by a factor of four [12]. Thus, in a perfect scenario, a shuffling algorithm that generates more different permutations offers more security (because there should be more possibilities of different operations being performed at a given moment in time); however, this is not always true. It is very important to notice that some particular cases of the RSI and RS extensions do not always improve the strength of the countermeasure when more random bits are used. For example, the simplest version of RS can generate only two permutations (forward and reversed), so we know that there are only two possible indexes at each moment in time. If one were to use 4 × 4 M-RS with 4 random bits as suggested in Table 2, where rows are always handled in forward order while each row might be handled in forward or reversed order, we would still have only two possible indexes at each moment in time. The same reasoning applies in some other cases; thus not all versions of each scheme give a security increase when more random bits are used. For more details see Table 5.

Table 5 Min and max number of different SubBytes operations that might occur at a fixed moment in time, i.e., the number of different bytes of the state that might be handled at a given moment in time during shuffling. Results for the different techniques are given according to the examples in this paper. The row RP• represents the theoretical lower bound on the number of necessary random bits, ⌈log₂(16!)⌉ = 45.

| Technique | Random Bits | Min Operations | Max Operations | Total Shuffles |
| --- | --- | --- | --- | --- |
| RP [23] | 64 | 16 | 16 | 16! |
| RP• | 45 | 16 | 16 | 16! |
| RSI | 4 | 16 | 16 | 16 |
| V-RSI | 1 | 2 | 2 | 2 |
|  | 2 | 4 | 4 | 4 |
|  | 3 | 8 | 8 | 8 |
|  | 4 | 16 | 16 | 16 |
| M-RSI 4 × 4 | 1 | 2 | 2 | 2 |
|  | 2 | 4 | 4 | 4 |
|  | 3 | 8 | 8 | 8 |
|  | 4 | 2 | 2 | 16 |
|  | 4* | 16 | 16 | 16 |
|  | 5 | 4 | 4 | 32 |
|  | 6 | 8 | 8 | 64 |
|  | 8 | 4 | 4 | 256 |
|  | 9 | 8 | 8 | 512 |
|  | 10 | 16 | 16 | 1024 |
| RS | 1 | 2 | 2 | 2 |
| M-RS 4 × 4 | 1 | 2 | 2 | 2 |
|  | 2 | 4 | 4 | 4 |
|  | 3 | 4 | 4 | 8 |
|  | 4 | 2 | 2 | 16 |
|  | 5 | 4 | 4 | 32 |
| P1-SSS 4 × 4 | 1 | 1 | 2 | 2 |
| P2-SSS 2 × 4 | 2 | 1 | 2 | 4 |
| P4-SSS 2 × 2 | 4 | 1 | 2 | 16 |
| MD-SSS | 1 | 1 | 2 | 2 |
|  | 3 | 1 | 3 | 6 |
|  | 5 | 1 | 4 | 24 |

Nevertheless, when we increase the number of random bits in a scheme, we always increase the number of different shuffles that can be generated. Thus, from this perspective, the security of the scheme increases: when an attacker learns the position of one byte, it gives him less information about the positions of all other bytes.

### 3.3 Resources

In addition to randomness, shuffling algorithms also require some additional memory and time. In order to support RP, one needs an additional data structure of the same size as the internal state of the algorithm, so its memory overhead is O(k), where k is the size of the state. Depending on the algorithm, RP might also require a time overhead of O(k) up to O(k³) [1]. Our extensions of RS, RSI and SSS do not require as much memory: their memory overhead is limited to a couple of variables (generally, to recompute and hold new indexes); in other words, their memory overhead is O(1). The only exception is MD-SSS, where we need to store a small table whose size equals the number of dimensions, which is always smaller than the size of the original internal state; in this case the memory overhead is O(log(k)).

Shuffling as a countermeasure also results in a time overhead. The exact time overhead might vary depending on the implementation and on the available hardware.
We performed several experiments on an ATMega 328P 8-bit microcontroller; all our code was written in C++. The microcontroller used an external 16 MHz clock. We implemented some of the variations of the shuffling techniques described in Section 2 and applied several techniques to the SubBytes operations of the first and the last round of AES-128. The only detail that changed between the different implementations was the two calls to the functions that implement the different versions of SubBytes. In order to measure the time we encrypted 10 000 random plaintexts with different random bits as inputs to our shuffling techniques. Table 6 presents our results including and excluding the time needed for the generation of the random bits (for shuffling). The first and the last rounds used the same random bits for shuffling. We can see that most of the overhead comes from the RNG. The source code is available on our website3. None of the calculations of indexes for memory accesses during shuffling used conditional branching that depends in any way on the random bits used for the shuffling, in order to prevent possible SPA and timing attacks.

Table 6 Execution time of 10 000 AES-128 encryptions with different shuffling techniques applied to the SubBytes operation of the first and the last rounds. The columns Including RNG and Excluding RNG give results including and excluding the time for the generation of the random numbers required for shuffling. Time is given in milliseconds, overhead in %.

| Algorithm | RND Bits | Time (incl. RNG) | Overhead (incl. RNG) | Time (excl. RNG) | Overhead (excl. RNG) |
|---|---|---|---|---|---|
| No shuffling | 0 | 13197 | 0.0 | 13197 | 0.0 |
| RS | 1 | 14500 | 9.9 | 13547 | 2.7 |
| M-RS 4 × 4 | 1 | 14201 | 7.6 | 13246 | 0.3 |
| | 2 | 14438 | 9.4 | 13486 | 2.1 |
| | 3 | 14480 | 9.7 | 13528 | 2.5 |
| | 4 | 14362 | 8.8 | 13412 | 1.6 |
| | 5 | 14465 | 9.6 | 13514 | 2.4 |
| SSS 4 × 4 | 1 | 14394 | 9.0 | 13441 | 1.8 |
| V-RSI | 1 | 14426 | 9.3 | 13473 | 2.1 |
| | 2 | 14426 | 9.3 | 13473 | 2.1 |
| | 3 | 14424 | 9.3 | 13473 | 2.1 |
| | 4 | 14423 | 9.3 | 13472 | 2.1 |
| M-RSI 4 × 4 | 1 | 14186 | 7.5 | 13232 | 0.3 |
| | 2 | 14185 | 7.5 | 13232 | 0.3 |
| | 3 | 14322 | 8.5 | 13369 | 1.3 |
| | 4 | 14323 | 8.5 | 13372 | 1.3 |
| | 5 | 14425 | 9.3 | 13474 | 1.3 |
| | 6 | 14423 | 9.3 | 13475 | 1.3 |
| | 8 | 14319 | 8.5 | 13373 | 1.3 |
| | 9 | 14397 | 9.1 | 13452 | 1.9 |
| | 10 | 14398 | 9.1 | 13452 | 1.9 |

We can see that the overhead is relatively small and reasonable, but not negligible. Different techniques give slightly different time overheads, which gives the ability to choose depending on the timing constraints imposed on a system. These results could probably be improved by implementing other variants of our shuffling techniques or by implementing them in assembly (while taking into account the architecture of the microcontroller).

### 3.4 Resistance against Side-Channel Attacks

We analyse 3 different scenarios, representing 3 attackers of different strength, in order to show the differences between the presented shuffling schemes.

Unprofiled attack Correlation Power Analysis (CPA) [3, 6] is considered to be among the strongest non-profiled Differential Power Analysis (DPA) attacks [8]. We tested several versions of our shuffling schemes against a CPA attack. The CPA was conducted using the Hamming Weight (HW) leakage model on simulated power traces. We implemented the following algorithm:

$r = Sbox(k \oplus m)$

where r is the resulting 4 × 4 state, k is a 16-byte fixed key and m is a 16-byte message. The application of the Sbox function was shuffled using the different shuffling schemes. In order to simulate power traces we used the SILK [22] simulator with the following parameters: Hamming weight as the leakage function, with the noise variance set to 2. The same parameters were used for all simulations with the different shuffling schemes.
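A trace point under this setup can be reconstructed directly from the stated model; the sketch below (our reconstruction of the leakage model for illustration, not the SILK simulator itself) produces one leakage point per state byte as the Hamming weight of the S-box output plus Gaussian noise of variance 2:

```cpp
#include <bitset>
#include <cmath>
#include <cstdint>
#include <random>

// One simulated leakage point for a SubBytes output, following the model
// described above: HW(Sbox(k XOR m)) plus Gaussian noise with variance 2
// (i.e. standard deviation sqrt(2)).
double leak_point(uint8_t k, uint8_t m, const uint8_t sbox[256],
                  std::mt19937& gen) {
    std::normal_distribution<double> noise(0.0, std::sqrt(2.0));
    uint8_t v = sbox[static_cast<uint8_t>(k ^ m)];
    double hw = std::bitset<8>(v).count();  // Hamming weight leakage
    return hw + noise(gen);
}
```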
Figure 9 shows the estimated number of traces necessary to extract the key with various shuffling techniques used as countermeasures. As expected, the number of traces increases with the number of different bytes that might be handled at a fixed moment in time (due to shuffling). The success rate of this attack on each scheme is shown in Figure 12 (Appendix 6).

Figure 9 Number of traces needed to extract the key using CPA on different implementations. The same CPA was applied to all shuffling techniques.

Among the presented algorithms, several techniques give the same advantage against classic versions of DPA-like attacks, but some of them can generate more different permutations in total (see Table 5) and should make implementation-specific attacks more difficult.

Implementation-specific unprofiled attack We used the same simulation parameters and the same algorithm, and applied a CPA attack in a scenario where the attacker is aware of the shuffling scheme and knows the details of its implementation. We applied a preprocessing technique called integration to the power traces before applying the correlation power attack. In this scenario we sum all points that could be related to the attacked byte according to the shuffling scheme in use, see Figure 7 and the figures of Appendix 9. Thus, for a scheme where a byte can be handled in d different positions we integrate d points. For example, we integrated 4 points while attacking M-RSI 4 × 4 with 2 random bits; those points correspond to the positions where byte 0 can potentially be handled (i.e., the points that correspond to bytes 0, 1, 2 and 3 in a scenario without shuffling, see Figure 7(a)). To sum up, we suppose that the attacker knows exactly where a given byte can be handled in a power trace. Figure 10 shows the number of traces needed to successfully extract the secret (the lowest number of traces where the success rate of the attack equals one). More details on some attacks are given in Figure 13 (Appendix 7). We can see that this technique is more powerful than a "simple" CPA against all shuffling schemes. Nevertheless, our results show that the more bytes can be found in a particular position (the same moment in time during the execution of the algorithm), the more difficult this attack becomes.

Figure 10 Number of traces needed to extract the key using CPA with integration as a preprocessing technique.

Profiled attack We used a Gaussian Template Attack (TA) [5] in a scenario where the attacker has profiling capabilities and knowledge of the shuffling scheme (i.e., he knows all points in time when a byte can be handled, in the same way as in the unprofiled CPA with integration). However, in this scenario the attacker does not control the randomness during the profiling, which corresponds to a real-case scenario (the randomness for cryptographic operations is generated inside the device); thus the attacker does not know the full permutation (the order of bytes during a single execution)4. For each target value (attacked byte) the template consists of a mean and a covariance matrix. The complexity of the parameter estimation for each of these templates depends on the number of points that have to be considered during an attack. The number of points in each attack depends on the number of points in time when a byte might be handled, and thus on the shuffling scheme. We used 5 000 traces per target value in order to build all profiles.
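The integration preprocessing described above reduces, per trace, to one sum over the d candidate positions of the attacked byte; the minimal sketch below illustrates it (the positions would be read off heatmaps such as those of Figure 7). The same position list drives the point selection of the TA, which keeps the d points as a multivariate sample instead of summing them.

```cpp
#include <cstddef>
#include <vector>

// Integration preprocessing: for every trace, sum the d sample points at
// which the attacked byte may be handled under the shuffling scheme.
// The CPA is then run on the resulting single value per trace.
std::vector<double> integrate(const std::vector<std::vector<double>>& traces,
                              const std::vector<std::size_t>& positions) {
    std::vector<double> out;
    out.reserve(traces.size());
    for (const auto& trace : traces) {
        double s = 0.0;
        for (std::size_t p : positions)  // d candidate positions
            s += trace[p];
        out.push_back(s);
    }
    return out;
}
```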
The number of traces needed to extract the key with high probability is shown in Figure 11 (the success rate of each attack can be found in Figure 14 in Appendix 8). These results show us that the success of an attack depends on the number of possible bytes that could be handled at the same moment in time (due to shuffling), which is also the case for the two other attacks. We can also note that the TA is better than the CPA with integration when the number of points used for the TA is low (i.e., when the number of positions where a given byte can be handled by the shuffling scheme is low, or in other words when the number of possible bytes used at a fixed moment in time is low), e.g., see Figures 13 and 14 for 2 possible positions. However, the TA is less effective than the CPA with integration of points for schemes that result in permutations where a byte can be in many different positions (many points to consider in the TA); compare Figures 13 and 14 for 16 possible bytes used at a fixed moment. The advantage of the TA compared to the CPA with integration decreases when more points have to be taken into account, due to the fact that the TA suffers from estimation error in high-dimensionality contexts.

Figure 11 Number of traces needed to extract the key using TA on different implementations.

Targeting the RNG Another implementation-specific technique could also be used to attack these schemes. An attacker who knows the exact implementation of the shuffling countermeasure might try to recover the random bits used to shuffle the bytes and then extract the key using this knowledge (by finding the positions of the shuffled operations from the known random numbers). This technique was used to attack a masking scheme of a DPA Contest [10, 11]. Basically, the attacker targets the random number generator, which effectively removes the security mechanism that uses the randomness. Thus, all masking and shuffling schemes are vulnerable to attacks that can successfully target the random number generator.

Attacks on other shuffling schemes Our results with all three attacks suggest that the difficulty of attacking a given shuffling scheme mostly comes from the number of positions where a given byte can be handled during an execution of the cryptographic algorithm. To be more precise, all schemes that can put a given byte at d positions require the same number of traces to extract the key for a given attack; see Figures 9, 10 and 11, where all points of the same column overlap or lie close to each other. Moreover, we can observe that the success rates of each attack on all schemes that put a given byte into the same number of positions also closely follow each other, see Figures 12, 13 and 14. Using these observations, we conclude that a given attack on any scheme S will give the same performance as the same attack on any other scheme S′ that can shuffle a byte into the same number of positions as the scheme S. Thus, an attack on the first byte of SSS is identical to attacking an unprotected implementation, while an attack on the second byte will perform like an attack on a byte of, e.g., V-RSI with 1 random bit (see Table 5 and Figures 7(d) and 15(a)). The same reasoning can be applied to the P-SSS and MD-SSS schemes. Thus, the RP scheme is as difficult to attack as M-RSI 4 × 4 with 10 bits or V-RSI with 4 random bits (see Table 5).
Nevertheless, it is important to note that this reasoning holds if the RNG is not biased and if the implementation under attack has neither additional flaws that an attacker can exploit nor additional sources of information available to the attacker. This result can also vary if additional countermeasures are applied together with a shuffling scheme.

### 3.5 Applications & Modifications

It is easy to apply RSI, RS, SSS and their extensions to the SubBytes or AddRoundKey operations of AES-128, since each of them operates only on one byte of the state at a time and does not have any precedence requirements inside the operation. In order to apply these techniques to the ShiftRows or MixColumns operations we may simply consider a row or a column as a memory unit (instead of a byte). The same techniques might be adapted for the 192-bit and 256-bit versions, as well as for other operations of the AES cipher, by using more random bits to handle the additional rows. RSI, RS, SSS and their extensions might also be applied to other algorithms. These techniques should be applicable whenever parts of the state are used independently from each other during some computations, e.g., the application of the S-box in ciphers like Blowfish [19], DES [16] or PRESENT [2]. We can also combine RS, RSI, SSS and their extensions in order to obtain more different shuffles; e.g., RS might be used with the 10-bit version of M-RSI on AES-128 in order to get 2048 different shuffles from 11 random bits.

Finally, it is worth noting that not all techniques presented here (as well as their extensions) are equally practical or equally secure (with a given number of random bits). Nevertheless, we consider that all versions with their extensions should be presented for the sake of completeness of this work. For example, SSS is not as practical as the RSI extensions for security, optimality as well as performance-penalty reasons; however, we think that SSS is a nice case study for theory.

## 4 Discussion

All of the shuffling schemes that we propose and describe are similar, i.e., each one suggests a way of going through all indexes of the state in a particular order that can be easily implemented with a small overhead. Most of these techniques can be seen as extensions and generalizations of the random starting index shuffling scheme. Our scalable shuffling schemes offer different numbers of shuffles and different numbers of positions (moments in time) where a particular byte is handled; overall, this results in different levels of security. In order to choose which shuffling scheme to implement, we advise the designers of a cryptosystem to consider the following criteria in the given order:

• Number of available random bits
• Timing constraints
• Number of different operations that could be handled at a given moment in time
• Number of shuffles

The first criterion is related to the basic constraints of the system, so the designer should use as much randomness as he can in order to increase the security. The second criterion changes depending on the given hardware and implementation, so it has to be tested for each platform individually; however, our results show that all the shuffling schemes we present result in very similar timing overheads. The results of all our attacks suggest that a scheme generating shuffles in which a higher number of different bytes could potentially be handled at a fixed moment in time (from the beginning of the execution) increases the difficulty of an attack.
Thus a designer of a cryptosystem should choose a shuffling algorithm that maximizes this number. Finally, the number of different shuffles that a given shuffling scheme can produce does not influence the number of traces needed to mount a successful attack. However, mounting some profiled attacks becomes increasingly difficult when the number of shuffles grows, since an attacker would have to create a model per shuffle [4]. This parameter also helps in case the attacker can find out the position of one byte, as it determines how much uncertainty remains about the positions of the other bytes.

Our results, based on side-channel analysis using 3 different attackers, show that the number of different operations that might occur at a given moment in time has the biggest effect on the success rate of an attack. This result holds even for attacks that take the implemented scheme into account. From this perspective, the SSS scheme presents a disadvantage because it does not shuffle all bytes: some of the bytes always remain at a fixed position in a trace. However, SSS could still be interesting in practice, because it can be implemented using only a couple of additional instructions on many hardware platforms (without taking into account the random number generator), e.g., the conditional swap instruction (MSWAPF) available on the TMS320x2803x, or the compare-and-exchange instruction (CMPXCHG) available on many Intel and AMD processors. SSS can also easily be combined with other shuffling schemes, thus giving a boost to the security of the system.

A specific type of attack could also be mounted against all of the presented shuffling schemes. If an attacker targets the random number generator in order to find out the ordering generated by the shuffling scheme, she can effectively remove the protection given by the countermeasure. This type of attack can be mounted on any shuffling or masking countermeasure [10]. Thus, algorithmic countermeasures that rely on randomness require a secure random number generator that cannot easily be targeted through side-channel analysis.

## 5 Conclusions and Future Works

It is often important to be able to choose among several different countermeasures, since some of them might be implemented more efficiently on a given platform (e.g., the hardware might have special instructions available); that is why it is quite important to have an entire set of different countermeasures that offer similar performance. This is especially the case when the developed system has many strong constraints, as in mobile applications such as those used in the IoT.

We presented a couple of new scalable shuffling techniques (RS and SSS) and a wide variety of their extensions, as well as several different extensions of an existing shuffling scheme (RSI). The main advantage of our proposals is the fact that they allow developers to fine-tune the countermeasure to their needs. It is possible to adjust the parameters of our shuffling techniques depending on the requirements and constraints of the cryptosystem, such as time (or throughput), code (or hardware) size and the amount of available random bits per unit of time. In other words, our proposals are not as rigid as other existing shuffling schemes.
We compared the RSI, RS and SSS extensions among themselves as well as with a couple of other shuffling techniques, such as RP, from several points of view: data overhead, required amount of random bits and number of different shuffles (orderings) that can be generated, as well as their relative strength against a CPA attack on a simulator. We also implemented the AES-128 block cipher using 21 of the proposed extensions of RSI, RS and SSS on an 8-bit microcontroller in order to compare their time overhead over an unprotected implementation. In our implementations shuffling is applied to the SubBytes operations of the first and the last rounds of AES-128, but it could easily be extended to more operations, other versions of AES as well as other ciphers. The presented techniques offer different levels of security given a fixed number of random bits; however, some of them are easier to implement than others depending on the target hardware platform, so even the less secure techniques might be useful. Our results show that the main parameter influencing the success rate of an attack against a shuffling technique is the number of different bytes that might be handled at a fixed moment in time, not the number of shuffles (during an attack on one byte). However, the number of shuffles increases the difficulty of mounting a profiled attack and it also increases the difficulty of attacking the entire key (the position of one byte gives less information on the positions of the others), thus it is also an important parameter.

These techniques should not be used as a stand-alone countermeasure. They should be combined with masking techniques, e.g., as suggested for the DPA Contest v4 [1], especially since some studies show that masking techniques are much more efficient in the presence of noise [20]; additional noise could be provided by shuffling.

As future work it would be interesting to implement the same countermeasures on different hardware platforms in order to analyze whether some of them are better adapted to particular platforms. It would also be interesting to explore different combinations of RSI, RS, SSS and their extensions. Some of them might not be practical; others might be efficient and generate more permutations. Finally, it would be nice to further extend these shuffling techniques in order to allow them to shuffle several operations of the same round at once, like SchedAES [13].

## 6 Simple CPA

Figure 12 The success rate of a non-profiled correlation power analysis against different shuffling techniques. The horizontal axis is logarithmic.

## 7 CPA with Integration

Figure 13 The success rate of a non-profiled correlation power analysis with integration (preprocessing) against different shuffling techniques. The horizontal axis is logarithmic.

## 8 Template Attack

Figure 14 The success rate of a (profiled) template attack against different shuffling techniques. The horizontal axis is logarithmic.

## 9 Heatmaps of Byte Position Distributions

Figure 15 V-RSI heatmap of positions when the SubBytes operation takes place for every byte.

Figure 16 M-RSI 4 × 4 heatmaps of positions when the SubBytes operation takes place for every byte.

Figure 17 M-RSI 4 × 4 heatmaps (part 2) of positions when the SubBytes operation takes place for every byte.

Figure 18 M-RS 4 × 4 heatmaps of positions when the SubBytes operation takes place for every byte.

## Acknowledgements

The research of L. Lerman is funded by the Brussels Institute for Research and Innovation (Innoviris) for the SCAUT project. The research of S. Fernandes Medeiros is funded by the Région Wallonne.
## References

[1] Bhasin, S., Bruneau, N., Danger, J.-L., Guilley, S., and Najm, Z. (2014). "Analysis and improvements of the DPA Contest v4 implementation," in Security, Privacy, and Applied Cryptography Engineering, eds R. S. Chakraborty, V. Matyas, and P. Schaumont (Cham: Springer), 201–218.

[2] Bogdanov, A., Knudsen, L. R., Leander, G., Paar, C., Poschmann, A., Robshaw, M. J. B., et al. (2007). "PRESENT: an ultra-lightweight block cipher," in Proceedings of the 9th International Workshop on Cryptographic Hardware and Embedded Systems – CHES 2007, eds P. Paillier and I. Verbauwhede (Berlin: Springer), 450–466.

[3] Brier, E., Clavier, C., and Olivier, F. (2004). "Correlation power analysis with a leakage model," in Cryptographic Hardware and Embedded Systems – CHES 2004, eds M. Joye and J. J. Quisquater (Berlin: Springer), 16–29.

[4] Bruneau, N., Guilley, S., Heuser, A., Rioul, O., Standaert, F.-X., and Teglia, Y. (2016). "Taylor expansion of maximum likelihood attacks for masked and shuffled implementations," in Advances in Cryptology – ASIACRYPT 2016 – 22nd International Conference on the Theory and Application of Cryptology and Information Security, Hanoi, Vietnam, December 4–8, 2016, Proceedings, Part I, Lecture Notes in Computer Science, Vol. 10031, eds J. H. Cheon and T. Takagi (Berlin: Springer), 573–601.

[5] Chari, S., Rao, J. R., and Rohatgi, P. (2002). "Template attacks," in Proceedings of the 4th International Workshop on Cryptographic Hardware and Embedded Systems – CHES 2002, Redwood Shores, CA, USA, August 13–15, 2002, Lecture Notes in Computer Science, Vol. 2523, eds B. S. Kaliski Jr., Çetin Kaya Koç, and C. Paar (Berlin: Springer), 13–28.

[6] Coron, J.-S., Kocher, P., and Naccache, D. (2001). "Statistics and secret leakage," in Financial Cryptography, ed. Y. Frankel (Berlin: Springer), 157–173.

[7] Herbst, C., Oswald, E., and Mangard, S. (2006). "An AES smart card implementation resistant to power analysis attacks," in Proceedings of the 4th International Conference on Applied Cryptography and Network Security, ACNS 2006, Singapore, June 6–9, 2006, Lecture Notes in Computer Science, Vol. 3989, eds J. Zhou, M. Yung, and F. Bao (Berlin: Springer), 239–252.

[8] Kocher, P., Jaffe, J., and Jun, B. (1999). "Differential power analysis," in Advances in Cryptology – CRYPTO '99 (Berlin: Springer), 388–397.

[9] Kocher, P. C. (1996). "Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems," in Proceedings of CRYPTO, Lecture Notes in Computer Science, Vol. 1109, ed. N. Koblitz (Berlin: Springer), 104–113.

[10] Lerman, L., Bontempi, G., and Markowitch, O. (2015). A machine learning approach against a masked AES – reaching the limit of side-channel attacks with a learning model. J. Cryptogr. Eng. 5, 123–139.

[11] Lerman, L., Fernandes Medeiros, S., Bontempi, G., and Markowitch, O. (2014). "A machine learning approach against a masked AES," in Proceedings of the 12th International Conference, CARDIS 2013: Smart Card Research and Advanced Applications, Berlin, Germany, November 27–29, 2013, Revised Selected Papers, Lecture Notes in Computer Science, Vol. 8419, eds A. Francillon and P. Rohatgi (Berlin: Springer), 61–75.

[12] Mangard, S., Oswald, E., and Popp, T. (2008). Power Analysis Attacks: Revealing the Secrets of Smart Cards, Vol. 31. Berlin: Springer Science & Business Media.

[13] Fernandes Medeiros, S. (2012).
"The schedulability of AES as a countermeasure against side channel attacks," in Proceedings of SPACE, Lecture Notes in Computer Science, Vol. 7644, eds A. Bogdanov and S. K. Sanadhya (Berlin: Springer), 16–31.

[14] Medwed, M., Standaert, F.-X., Großschädl, J., and Regazzoni, F. (2010). "Fresh re-keying: security against side-channel and fault attacks for low-cost devices," in Progress in Cryptology – AFRICACRYPT 2010 (Berlin: Springer), 279–296.

[15] Moradi, A., Mischke, O., and Paar, C. (2011). "Practical evaluation of DPA countermeasures on reconfigurable hardware," in Proceedings of the 2011 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST) (Piscataway, NJ: IEEE), 154–160.

[16] NIST (1977). FIPS PUB 46-3: Data Encryption Standard. Federal Information Processing Standards. Gaithersburg, MD: National Institute of Standards and Technology.

[17] TELECOM ParisTech SEN Research Group (2013). DPA Contest. Available at: http://www.dpacontest.org

[18] Rivain, M., Prouff, E., and Doget, J. (2009). "Higher-order masking and shuffling for software implementations of block ciphers," in Proceedings of the 11th International Workshop on Cryptographic Hardware and Embedded Systems – CHES 2009, Lausanne, Switzerland, September 6–9, 2009, Lecture Notes in Computer Science, Vol. 5747, eds C. Clavier and K. Gaj (Berlin: Springer), 171–188.

[19] Schneier, B. (1994). "Description of a new variable-length key, 64-bit block cipher (Blowfish)," in Fast Software Encryption, ed. R. Anderson (Berlin: Springer), 191–204.

[20] Standaert, F.-X., Veyrat-Charvillon, N., Oswald, E., Gierlichs, B., Medwed, M., Kasper, M., et al. (2010). "The world is not enough: another look on second-order DPA," in Advances in Cryptology – ASIACRYPT 2010 (Berlin: Springer), 112–129.

[21] Tillich, S., Herbst, C., and Mangard, S. (2007). "Protecting AES software implementations on 32-bit processors against power analysis," in Proceedings of the 5th International Conference on Applied Cryptography and Network Security, ACNS 2007, Zhuhai, China, June 5–8, 2007, Lecture Notes in Computer Science, Vol. 4521, eds J. Katz and M. Yung (Berlin: Springer), 141–157.

[22] Veshchikov, N. (2014). "SILK: high level of abstraction leakage simulator for side channel analysis," in Proceedings of the 4th Program Protection and Reverse Engineering Workshop, PPREW-4 (New York, NY: ACM), 3:1–3:11.

[23] Veyrat-Charvillon, N., Medwed, M., Kerckhof, S., and Standaert, F.-X. (2012). "Shuffling against side-channel attacks: a comprehensive study with cautionary note," in Advances in Cryptology – ASIACRYPT 2012, Lecture Notes in Computer Science, Vol. 7658, eds X. Wang and K. Sako (Berlin: Springer), 740–757.

## Biographies

Nikita Veshchikov got his Bachelor in Computer Sciences in 2009 at the Université Libre de Bruxelles (ULB) in Belgium. He continued his studies in the same field and got a Master in Computer Sciences with advanced studies of embedded systems in 2011 at the same university. During his master thesis he studied reverse engineering and anti-patching techniques. Since 2011 Nikita has worked as a teaching assistant while also working on his PhD thesis in the field of side-channel attacks. He is mostly interested in simulators and automated tools for side-channel analysis and computer-assisted secure development. He is also interested in lightweight secure implementations.
Stephane Fernandes Medeiros got his Bachelor (in 2007) and his Master (in 2009) degrees in computer sciences at the Université libre de Bruxelles (ULB), Belgium. He worked on his PhD in the domain of software countermeasures against side-channel attacks while being a teaching assistant at ULB; he got his PhD in 2015. Stephane now works as a postdoctoral researcher at the Université libre de Bruxelles, where he mainly works on security protocols for small embedded devices.

Liran Lerman received his PhD from the Department of Computer Science at the Université libre de Bruxelles (Belgium) in 2015. In 2010, he received his master degree with honors (grade magna cum laude) from the same university. During his PhD thesis, he was a teaching assistant and a researcher in the Machine Learning Group (MLG) and the Cryptography and Security Service (QualSec). Currently, he is a postdoctoral researcher in QualSec. His research relates to machine learning, side-channel attacks and countermeasures.

1. 2S in line 3 means that the same bits are reused 2 times on 2 different rows and then different random bits are used on the 2 other rows.

2. It is important to note that we can think about the state as if it were a three-dimensional matrix for the purpose of shuffling, but this does not mean that the state has to be represented and manipulated as such during the entire algorithm.
https://demo.formulasearchengine.com/wiki/User:Ems57fcva/sandbox/General_Relativity
# User:Ems57fcva/sandbox/General Relativity

## Introduction

General relativity (GR) or general relativity theory (GRT) is the theory of gravitation published by Albert Einstein in 1915. GR is a geometrical theory under which spacetime is a 4-dimensional (3+1) pseudo-Riemannian manifold that is curved by the mass, energy, and momentum (or "substance") within it. In general relativity, the relationship between substance and curvature is governed by the Einstein Field Equations. The solutions to these field equations are metrics of spacetime, which describe the spacetime and from which the geodesic equations of motion are obtained.

GR was developed by Einstein starting in 1907 with the publication of the Principle of Equivalence, which describes gravitation and acceleration as different perspectives of the same thing. An important consequence of General Relativity and its Equivalence Principle is that free-fall is inertial motion, while being "at rest" on the surface of the Earth is actually an accelerated (or non-inertial) state. Although it was the start of Einstein's investigations, the Equivalence Principle is not the underlying principle of GR. Instead, it is now a consequence of GR, the theory built to explain it. The full set of underlying principles for GR is:

• The General Principle of Relativity: The laws of physics are the same in all frames of reference
• The Principle of General Covariance: The laws of physics are independent of the coordinate system in which they are expressed.
• Local Lorentz Invariance: The rules of special relativity (or SR) apply locally in all frames of reference.
• The Principle of geodesic motion: Inertial motion occurs along timelike geodesics as parameterized by proper time.
• Spacetime is a pseudo-Riemannian manifold curved by the substance within it.

## What is inertia?

To understand what GR is about, one needs to realize that one of Einstein's goals was to answer this seemingly simple question. In this section, we will review the concept of inertia and show how it is affected by the various principles of GR mentioned above.

### First try: Newton's First Law

Newton's first law of motion is

An object at rest tends to stay at rest and an object in motion tends to stay in motion with the same speed and in the same direction unless acted upon by an unbalanced force.

This may sound good, but what is the meaning of "in the same direction"? For example, let us look at an object going in a straight line, first against a Cartesian coordinate system (as shown in figure 1A), and then against a radial coordinate system (as shown in figure 1B). In the Cartesian case, the object really is going in the same direction as defined by the coordinate system, but in the radial coordinate case, the object is first coming mostly inward, then is going tangentially, and then is heading outward! Yet both trajectories for the object are the same: all that has changed is how the space through which it is traveling is mapped. What Einstein realized was that space and time simply are, and that the coordinate system is an artificial mathematical aid added in to help describe actions and interactions with the spacetime. So in trying to describe Newton's First Law, we come up against the first of the underlying concepts of General Relativity, the Principle of General Covariance: The laws of physics are the same in all coordinate systems. We now need a concept that unifies how motion is described in both figures 1A and 1B.
To do this, we need to turn to the work of Riemann and others on non-Euclidean geometries. Through their work, the concept of a geodesic path came into being. In a Euclidean (or flat) manifold, these geodesics turn out to be straight lines. With the concept of a geodesic in place, we now need to define the manifolds for space and time in Newtonian Mechanics. These are:

• Space is a three-dimensional Euclidean Riemannian manifold, and
• Time is a one-dimensional manifold which proceeds at the same rate at all positions and for all observers.

Now the first law of motion can be written as

An object will move along a geodesic path in space, as parameterized by distance travelled, at a constant rate with respect to time unless it is acted on by an unbalanced force.

So inertial motion is now defined as movement along a geodesic path. This is a first draft of another fundamental principle of General Relativity: the Principle of Geodesic Motion. We will revisit and revise this principle soon.

### Second try: Special Relativity

After the introduction of SR by Einstein in 1905, the geometric structure of its spacetime was described by Minkowski in 1908. In a landmark talk, he noted that in SR there is an "invariant distance" between events ${\displaystyle s}$ that all observers will agree on. In a Cartesian coordinate system with a time coordinate ${\displaystyle t}$ and linear orthogonal spatial coordinates ${\displaystyle x}$, ${\displaystyle y}$, and ${\displaystyle z}$, the formula for this invariant distance is

${\displaystyle s^{2}=c^{2}\,\Delta t^{2}-\Delta x^{2}-\Delta y^{2}-\Delta z^{2}}$

As it turns out, ${\displaystyle s}$ is the elapsed proper time for an observer moving linearly between the events (when this is possible). In addition, the cases of linear motion between events in SR represent the geodesics of this spacetime in a timelike direction. Another feature of this Minkowski spacetime of SR is that coordinate rest, which was permitted in Newton's physics, is missing in relativity: now, with the unification of space and time, one is always in some sort of coordinate motion since one is always moving forward through time. Through these effects, we find that a refinement is needed to our above-stated rule for inertia. Now

An object will move through spacetime along a timelike geodesic of spacetime as parameterized by proper time unless it is acted on by an unbalanced force.

This is the definition of inertial motion that is used in relativity.

### Third try: The Equivalence Principle

It is all fine and dandy to have a definition of inertial motion, but how does one know that they are in an inertial frame of reference?

## Verification of General Relativity

### The orbit of Mercury

### Binary Stars and Pulsars

+++ END OF EDITING +++

The meaning of the Principle of Equivalence has gradually broadened, in consonance with Einstein's further writings, to include the concept that no physical measurement within a given unaccelerated reference system can determine its state of motion. This implies that it is impossible to measure, and therefore virtually meaningless to discuss, changes in fundamental physical constants, such as the rest masses or electrical charges of elementary particles in different states of relative motion. Any measured change in such a constant would represent either experimental error or a demonstration that the theory of relativity was wrong or incomplete. This principle explains the experimental observation that inertial and gravitational mass are equivalent.
Moreover, the principle implies that some frames of reference must obey a non-Euclidean geometry: that spacetime is curved (by matter and energy), and gravity can be seen purely as a result of this geometry. This yields many predictions such as gravitational redshifts and light bending around stars, black holes, time slowed by gravitational fields, and slightly modified laws of gravitation even in weak gravitational fields. However, it should be noted that the equivalence principle does not uniquely determine the field equations of curved spacetime, and there is a parameter known as the cosmological constant which can be adjusted.

The modifications to Isaac Newton's law of universal gravitation produced the first great theoretical success of general relativity: the correct prediction of the precession of the perihelion of Mercury's orbit. Many other quantitative predictions of general relativity have since been confirmed by astronomical observations. However, because of the difficulty in making these observations, theories which are similar but not identical to general relativity, such as the Brans-Dicke theory and the Rosen bi-metric theory, cannot be ruled out completely, and current experimental tests can be viewed as reducing the deviation from GR which is allowable. However, the discovery in 2003 of PSR J0737-3039, a binary neutron star in which one component is a pulsar and where the perihelion precesses 16.88° per year (or about 140,000 times faster than the precession of Mercury's perihelion), enabled the most precise experimental verification yet of effects predicted by general relativity. [1] [2] There are no known experimental results that suggest that a theory of gravity radically different from general relativity is necessary. For example, the Allais effect was initially speculated to demonstrate "gravitational shielding," but was subsequently explained by conventional phenomena. Nevertheless, there are good theoretical reasons for considering general relativity to be incomplete. General relativity does not include quantum mechanics, and this causes the theory to break down at sufficiently high energies. A continuing unsolved challenge of modern physics is the question of how to correctly combine general relativity with quantum mechanics, thus applying it also to the smallest scales of time and space.

## The curvature of spacetime

Mathematicians use the term "curved" to refer to any space whose geometry is non-Euclidean. Frequently, this curvature is illustrated by an image like the one below.

Two-dimensional visualisation of space-time distortion

This image represents spacetime as a higher-dimensional flat space, with the "weight" of a massive object "stretching" the trampoline-like spacetime "fabric", which would result in trajectories around this "dent" being curved due to the "slope" and the pull of gravity in some higher dimension. This image, however, is only suggestive of the reality. It is important to remember that spacetime is curved, not merely space, and that space is three-dimensional, not two-dimensional as shown. Another approach used to understand spacetime as a curved surface in three-dimensional space is to instead begin by imagining a universe of one-dimensional beings living in one dimension of space and one dimension of time. Each bit of matter is not a point on whatever curved surface you imagine, but a line showing where that point moves as it goes from the past to the future. These lines are called world lines.
While it can be helpful for visualization to imagine a curved surface sitting in a space of a higher dimension, that model is not very useful for the real universe; although a two-dimensional surface can be embedded in three dimensions, and thus visualized well, a curved four-dimensional spacetime such as our universe cannot be embedded in a flat space of even five dimensions, but many more are required. Curvature can be measured entirely within a surface, and similarly within a higher-dimensional manifold such as space or spacetime. On Earth, if you start at the North Pole, walk south for about 10,000 km (to the Equator), turn left by 90 degrees, walk for 10,000 more km, and then do the same again (walk for 10,000 more km, turn left by 90 degrees, walk for 10,000 more km), you will be back where you started. Such a triangle with three right angles is only possible because the surface of the Earth is curved. The curvature of spacetime can be evaluated, and indeed given meaning, in a similar way. Spaces of only two dimensions, however, require only one quantity, the Gaussian or scalar curvature, to quantify their curvature. In more dimensions, curvature is quantified by the Riemann tensor. This tensor describes how a vector that is moved along a curve parallel to itself changes when a round trip is made. In flat space the vector returns to the same orientation, but in a curved space it generally does not.
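The round-trip change can be made quantitative for the triangle walk described above. On a sphere, the angle by which a parallel-transported vector is rotated after going around a geodesic triangle equals the triangle's spherical excess, i.e., the amount by which its angle sum exceeds π. For the path with three right angles this gives

${\displaystyle \Delta \theta =(\alpha +\beta +\gamma )-\pi ={\frac {\pi }{2}}+{\frac {\pi }{2}}+{\frac {\pi }{2}}-\pi ={\frac {\pi }{2}},}$

so the vector returns rotated by 90 degrees, and the enclosed area is ${\displaystyle A=R^{2}\,\Delta \theta ={\tfrac {1}{8}}(4\pi R^{2})}$, exactly one octant of the sphere. The computation uses only walks and angles made within the surface, which is the sense in which curvature can be measured intrinsically.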
What this means is that if a physicist holds up a stick, and a cartographer stands some distance away and measures its length by a triangulation technique based on Euclidean geometry, then he is not guaranteed to get the same answer as if the physicist brings the stick to him and he measures its length directly. Of course for a stick he could not in practice measure the difference between the two measurements, but there are equivalent measurements which do detect the non-Euclidean geometry of space-time directly; for example the Pound-Rebka experiment (1959) detected the change in wavelength of light from a cobalt source rising 22.5 meters against gravity in a shaft in the Jefferson Physical Laboratory at Harvard, and the rate of atomic clocks in GPS satellites orbiting the Earth has to be corrected for the effect of gravity. Newton's theory of gravity had assumed that objects did in fact have absolute velocities: that some things really were at rest while others really were in motion. He realized, and made clear, that there was no way these absolutes could be measured. All the measurements one can make provide only velocities relative to one's own velocity (positions relative to one's own position, and so forth), and all the laws of mechanics would appear to operate identically no matter how one was moving. Newton believed, however, that the theory could not be made sense of without presupposing that there are absolute values, even if they cannot be determined. In fact, Newtonian mechanics can be made to work without this assumption: the outcome is rather innocuous, and should not be confused with Einstein's relativity which further requires the constancy of the speed of light. In the nineteenth century, Maxwell formulated a set of equations—Maxwell's field equations—that demonstrated that electromagnetic fields behave as waves travelling at, amazingly, the speed of light in a vacuum; hence, the identification of light with electromagnetic fields was made. This appeared to provide a way around Newton's relativity: by comparing one's own speed with the speed of light in one's vicinity, one should be able to measure one's absolute speed--or, what is practically the same, one's speed relative to a frame of reference that would be the same for all observers. The assumption was made that whatever medium light was travelling through—whatever it was waves of—could be treated as a background against which to make other measurements. This inspired a search to determine the earth's velocity through this cosmic backdrop or "aether"—the "aether drift." The speed of light measured from the surface of the earth should appear to be greater when the earth was moving against the aether, slower when they were moving in the same direction. (Since the earth was hurtling through space and spinning, there should be at least some regularly changing measurements here.) A test made by Michelson and Morley toward the end of the century had the astonishing result that the speed of light appeared to be the same in every direction. In his 1905 paper "On the Electrodynamics of Moving Bodies", Einstein explained these results in his theory of special relativity. ## Outline of the theory The fundamental idea in relativity is that we cannot talk of the physical quantities of velocity or acceleration without first defining a reference frame, and that a reference frame is defined by choosing particular matter as the basis for its definition. Thus all motion is defined and quantified relative to other matter. 
In the special theory of relativity it is assumed that reference frames can be extended indefinitely in all directions in space and time. The theory of special relativity concerns itself with inertial (non-accelerating) frames while general relativity deals with all frames of reference. In the general theory it is recognised that we can only define local frames to given accuracy for finite time periods and finite regions of space (similarly we can draw flat maps of regions of the surface of the earth but we cannot extend them to cover the whole surface without distortion). In general relativity Newton's laws are assumed to hold in local reference frames. In particular free particles travel in straight lines in local inertial (Lorentz) frames. When these lines are extended they do not appear straight, and are known as geodesics. Thus Newton's first law is replaced by the law of geodesic motion.

We distinguish inertial reference frames, in which bodies maintain a uniform state of motion unless acted upon by another body, from non-inertial frames in which freely moving bodies have an acceleration deriving from the reference frame itself. In non-inertial frames there is a perceived force which is accounted for by the acceleration of the frame, not by the direct influence of other matter. Thus we feel acceleration when cornering on the roads when we use a car as the physical base of our reference frame. Similarly there are Coriolis and centrifugal forces when we define reference frames based on rotating matter (such as the Earth or a child's roundabout). The principle of equivalence in general relativity states that there is no local experiment to distinguish non-rotating free fall in a gravitational field from uniform motion in the absence of a gravitational field. In short there is no gravity in a reference frame in free fall. From this perspective the observed gravity at the surface of the Earth is the force observed in a reference frame defined from matter at the surface, which is not free but is acted on from below by the matter within the Earth, and is analogous to the acceleration felt in a car.

Mathematically, Einstein models space-time by a four-dimensional pseudo-Riemannian manifold, and his field equation states that the manifold's curvature at a point is directly related to the stress-energy tensor at that point; the latter tensor is a measure of the density of matter and energy. Curvature tells matter how to move, and matter tells space how to curve. The field equation is not uniquely proven (it is only an assumption of GR) and there is room for other models, provided that they do not contradict observation. General relativity is distinguished from other theories of gravity by the simplicity of the coupling between matter and curvature, although we still await the unification of general relativity and quantum mechanics and the replacement of the field equation with a deeper quantum law. Few physicists doubt that such a theory of everything will give general relativity in the appropriate limit, just as general relativity predicts Newton's law of gravity in the non-relativistic limit.

Einstein's field equation contains a parameter called the "cosmological constant" ${\displaystyle \Lambda }$ which was originally introduced by Einstein to allow for a static universe (i.e., one that is not expanding or contracting).
This effort was unsuccessful for two reasons: the static universe described by this theory was unstable, and observations by Hubble a decade later confirmed that our universe is in fact not static but expanding. So ${\displaystyle \Lambda }$ was abandoned, with Einstein calling it the "biggest blunder [I] ever made". However, quite recently, improved astronomical techniques have found that a non-zero value of ${\displaystyle \Lambda }$ is needed to explain some observations.

### Einstein field equation

The field equation reads, in components, as follows:

${\displaystyle R_{ab}-{R \over 2}g_{ab}+\Lambda g_{ab}={8\pi G \over c^{4}}T_{ab}}$

where ${\displaystyle R_{ab}}$ are the Ricci curvature tensor components, ${\displaystyle R}$ is the scalar curvature, ${\displaystyle g_{ab}}$ are the metric tensor components, ${\displaystyle \Lambda }$ is the cosmological constant, ${\displaystyle T_{ab}}$ are the stress-energy tensor components describing the non-gravitational matter, energy and forces at any given point in space-time, ${\displaystyle \pi }$ is pi, ${\displaystyle c}$ is the speed of light in a vacuum and ${\displaystyle G}$ is the gravitational constant, which also occurs in Newton's law of gravity. The Ricci tensor and scalar curvature are themselves derivable from the metric tensor, which describes the geometry of the manifold and is a symmetric 4 × 4 tensor, so it has 10 independent components. Given the freedom of choice of the four spacetime coordinates, the independent equations that make up the Einstein field equation reduce to 6. Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side, written as part of the stress-energy tensor, and then interpreted as a form of dark energy whose density is constant in space-time.

The study of the solutions of this equation is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe. Solutions of the field equations are sometimes known as "metrics" or "spacetimes". Some well-known and popular metrics include:

1. Schwarzschild metric (which describes the spacetime geometry around a spherical mass)
2. Kerr metric (which describes the geometry around a rotating spherical mass)
3. Reissner-Nordstrom metric (which describes the geometry around a charged spherical mass)
4. Kerr-Newman metric (which describes the geometry around a charged, rotating spherical mass)
5. Friedmann-Robertson-Walker (FRW) metric (which is an important model of an expanding universe)
6. pp-wave metrics (which describe various types of gravitational waves)
7. wormhole metrics (which serve as theoretical models for time travel)
8. Alcubierre metric (which serves as a theoretical model of space travel)

Solutions (1), (2), (3) and (4) also include black holes.

## The vierbein formulation of general relativity

This is an alternative, equivalent formulation of general relativity using four reference vector fields, called a vierbein or tetrad. We have four vector fields ${\displaystyle e_{a}}$, a = 0, 1, 2, 3, such that ${\displaystyle g(e_{a},e_{b})=\eta _{ab}}$ where

${\displaystyle \eta ={\begin{bmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{bmatrix}}}$.

See sign convention. One thing to note is that we can perform an independent proper, orthochronous Lorentz transformation at each point (subject to smoothness, of course) and still get a valid tetrad. So, the tetrad formulation of GR is a gauge theory, but with a noncompact gauge group SO(3,1).
It is also invariant under diffeomorphisms. See vierbein and Palatini action for more details. See Einstein-Cartan theory for an extension of general relativity to include torsion. See teleparallelism for another theory which predicts the same results as general relativity but with FLAT spacetime (no curvature).

## Quotes

The theory appeared to me then, and still does, the greatest feat of human thinking about nature, the most amazing combination of philosophical penetration, physical intuition, and mathematical skill. But its connections with experience were slender. It appealed to me like a great work of art, to be enjoyed and admired from a distance. – Max Born

## References

### Textbooks

• Carroll, Sean M., Spacetime and Geometry: An Introduction to General Relativity, Addison Wesley, San Francisco (2004). ISBN 0-8053-8732-3. A modern graduate-level textbook.
• D'Inverno, Ray, Introducing Einstein's Relativity, Oxford University Press (1993). A modern undergraduate-level text.
• Misner, Charles, Kip Thorne, and John Wheeler, Gravitation, Freeman (1973). ISBN 0716703440. A classic graduate-level textbook, which, if somewhat long-winded, pays more attention to the geometrical basis and the development of ideas in general relativity than some other approaches.

### Other

• Bondi, Hermann, Relativity and Common Sense, Heinemann (1964). A school-level introduction to the principle of relativity by a renowned scientist.
• Einstein, Albert, Relativity: The Special and General Theory. ISBN 0517884410. The special and general relativity theories in their original form.
• Epstein, Lewis Carroll, Relativity Visualized. ISBN 093521805X. Requires no mathematical background. Actually explains general relativity, rather than merely hinting at it with a few metaphors.
• Perret, W. and G. B. Jeffrey, trans.: The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity, New York: Dover (1923).
• Thorne, Kip, and Stephen Hawking, Black Holes and Time Warps, Papermac (1995). A recent popular account by leading experts.
• J. J. O'Connor and E. F. Robertson, History of General Relativity at the MacTutor History of Mathematics archive.
• The original 1915 article by David Hilbert containing the gravitational field equation.
• Malcolm MacCallum's GR News service for current research in relativity.
http://purplemath.com/learning/viewtopic.php?f=14&t=3728
## Is there another way to solve this integral????

Limits, differentiation, related rates, integration, trig integrals, etc.

davidherrera
Posts: 1
Joined: Mon Apr 21, 2014 11:28 pm
Contact:

### Is there another way to solve this integral????

Hi everyone, I'm a new guy on these forums; my name is David. I am a student of the National Autonomous University of Mexico and I live in Mexico City. Our calculus teacher Pablo gave us a special assignment: we have to ask people from other countries whether there is another way to solve some integrals. I am skeptical about internet forums, but I'm looking for answers in all media; I lose nothing by trying. I'll leave my integral with my own solution, but if you find another way to get the same result, please leave your procedure here. I'll be very grateful. IF YOU CAN TELL ME WHICH COUNTRY YOU ARE FROM, THAT WOULD BE AWESOME!!

$\int \sin^2(2x)\cos^2(2x)\, dx$

Using the following identities:

$\sin^2(u) = \frac{1}{2}-\frac{1}{2}\cos(2u)$

$\cos^2(u) = \frac{1}{2}+\frac{1}{2}\cos(2u)$

we write the new values into the original integral:

$\int \left( \frac{1}{2}-\frac{1}{2}\cos(4x)\right)\left( \frac{1}{2}+\frac{1}{2}\cos(4x)\right) \, dx$

This is a difference of squares:

$(a+b)(a-b) = a^2 - b^2$

Reducing:

$\int \frac{1}{4}- \frac{1}{4}\cos^2(4x) \, dx$

We observe that we again have a cosine raised to a power, $\cos^2(4x)$, and we cannot solve our integral with this expression inside it. So, using the previous identities, we know that

$\cos^2(4x) = \frac{1}{2}+ \frac{1}{2}\cos(8x)$

Putting this value into the integral:

$\int \frac{1}{4}- \frac{1}{4}\left( \frac{1}{2}+ \frac{1}{2}\cos(8x)\right) \, dx$

Reducing:

$\int \frac{1}{4}- \frac{1}{8} - \frac{1}{8}\cos(8x) \, dx = \int \frac{1}{8} - \frac{1}{8}\cos(8x) \, dx$

Now we can split this into two integrals:

$\frac{1}{8}\int \, dx - \frac{1}{8} \int \cos(8x) \, dx$

The first of these is very easy, but for the second one we must do a change of variable:

$u= 8x$, $du= 8\, dx$, $dx =\frac{du}{8}$

Replacing values:

$\frac{1}{8}\int \, dx - \frac{1}{8} \int \cos(u)\, \frac{du}{8}$

Taking the constant out of the second integral:

$\frac{1}{8}\int \, dx - \frac{1}{8} \cdot \frac{1}{8}\int \cos(u) \,du = \frac{1}{8}\int \, dx - \frac{1}{64}\int \cos(u) \,du$

Finally we solve these two easy integrals and replace the value of u:

$\frac{1}{8} x - \frac{1}{64} \sin(u) = \frac{1}{8} x - \frac{1}{64} \sin(8x)$

Finally, our answer is:

$\int \sin^2(2x)\cos^2(2x)\, dx = \frac{1}{8} x - \frac{1}{64} \sin(8x) + C$

buddy
Posts: 196
Joined: Sun Feb 22, 2009 10:05 pm
Contact:

### Re: Is there another way to solve this integral????
davidherrera wrote:
$\int \sin^2(2x)\cos^2(2x)\, dx$

You could also do it like this:

sin^2(2x)cos^2(2x)
= [sin(2x)cos(2x)]^2
= [(1/2)(2)sin(2x)cos(2x)]^2
= [1/2]^2 [2sin(2x)cos(2x)]^2
= (1/4)[sin(4x)]^2
= (1/4)sin^2(4x)

Since cos(2x) = 1 - 2sin^2(x), we have 2sin^2(x) = 1 - cos(2x), so sin^2(x) = (1/2)(1 - cos(2x)). Therefore

(1/4)sin^2(4x)
= (1/4)(1/2)(1 - cos(8x))
= (1/8)(1 - cos(8x))
= 1/8 - cos(8x)/8

then integrate.
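Both derivations give the same antiderivative, $\frac{1}{8}x - \frac{1}{64}\sin(8x) + C$. As a quick sanity check (a sketch assuming SymPy is available; not part of either post), one can verify it symbolically:

```python
# Check that x/8 - sin(8x)/64 is an antiderivative of sin^2(2x)cos^2(2x).
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(2*x)**2 * sp.cos(2*x)**2
candidate = x/8 - sp.sin(8*x)/64

# Differentiating the candidate should return the integrand exactly...
print(sp.simplify(sp.diff(candidate, x) - integrand))        # 0
# ...and SymPy's own antiderivative should differ by at most a constant.
print(sp.simplify(sp.integrate(integrand, x) - candidate))   # 0 (or a constant)
```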
https://www.hepdata.net/record/ins1735729
# Search for a light charged Higgs boson decaying to a W boson and a CP-odd Higgs boson in final states with e$\mu\mu$ or $\mu\mu\mu$ in proton-proton collisions at $\sqrt{s}=$ 13 TeV

The CMS collaboration. No Journal Information, 2019.

Abstract (data abstract): A search for a light charged Higgs boson ($\mathrm{H^{+}}$) decaying to a W boson and a CP-odd Higgs boson (A) in final states with $\mathrm{e}\mu\mu$ or $\mu\mu\mu$ is performed using data from pp collisions at $\sqrt{s}=13$ TeV, recorded by the CMS detector at the LHC and corresponding to an integrated luminosity of 35.9 fb$^{-1}$. In this search, it is assumed that the $\mathrm{H}^{+}$ boson is produced in decays of top quarks, and the A boson decays to two opposite-charge muons. The presence of signals for $\mathrm{H}^{+}$ boson masses between 100 and 160 GeV and A boson masses between 15 and 75 GeV is investigated. No evidence for the production of the $\mathrm{H^{+}}$ boson is found. Assuming branching fractions $\mathcal{B}(\mathrm{H^{+}}\to\mathrm{W^{+}A})=1$ and $\mathcal{B}(\mathrm{A}\to\mu^{+}\mu^{-})=3\times10^{-4}$, upper limits at 95% confidence level on the branching fraction of the top quark, $\mathcal{B}(\mathrm{t}\to\mathrm{bH^{+}})$, of 0.63 to 2.9% are obtained, depending on the masses of the $\mathrm{H^{+}}$ and A bosons. These are the first limits on $\mathcal{B}(\mathrm{t}\to\mathrm{bH^{+}})$ in the decay mode of the $\mathrm{H^{+}}$ boson: $\mathrm{H^{+}}\to\mathrm{W^{+}A}\to\mathrm{W^{+}}\mu^{+}\mu^{-}$.

#### Figure 3a

Data from Figure 3 (left), 10.17182/hepdata.89938.v1/t1: Expected and observed upper limits at 95% CL on the branching fraction of the top quark, $\mathcal{B}(\mathrm{t}\to\mathrm{b}\mathrm{H^{+}})$, for the A...

#### Figure 3b

Data from Figure 3 (right), 10.17182/hepdata.89938.v1/t2: Expected and observed upper limits at 95% CL on the branching fraction of the top quark, $\mathcal{B}(\mathrm{t}\to\mathrm{b}\mathrm{H^{+}})$, for the A...
https://www.groundai.com/project/almost-isometries-between-teichmuller-spaces/
# Almost isometries between Teichmüller spaces

Manman Jiang, Lixin Liu and Huiping Pan

Manman Jiang, Guangzhou Maritime University, 510275, Guangzhou, P. R. China
Lixin Liu, School of Mathematics and Computational Science, Sun Yat-Sen University, 510275, Guangzhou, P. R. China
Huiping Pan, School of Mathematical Science, Fudan University, 200433, Shanghai, P. R. China

July 15, 2019

###### Abstract.

We prove that the Teichmüller space of surfaces with given boundary lengths equipped with the arc metric (resp. the Teichmüller metric) is almost isometric to the Teichmüller space of punctured surfaces equipped with the Thurston metric (resp. the Teichmüller metric).

This work is partially supported by NSFC, No: 11271378.
Keywords: Teichmüller space, almost isometry, Thurston metric, Teichmüller metric, arc metric.
AMS MSC2010: 32G15, 30F60, 51F99.

## 1. Introduction

Let $S_{g,n}$ be an oriented surface of genus $g$ with $n$ boundary components, such that $n\geq 1$. The Euler characteristic of $S_{g,n}$ is $\chi(S_{g,n})=2-2g-n$. Throughout this paper we assume that $\chi(S_{g,n})<0$. Recall that a marked complex structure on $S_{g,n}$ is a pair $(X,f)$ where $X$ is a Riemann surface and $f:S_{g,n}\to X$ is an orientation preserving homeomorphism. Two marked complex structures $(X_1,f_1)$ and $(X_2,f_2)$ are called equivalent if there is a conformal map $h:X_1\to X_2$ homotopic to $f_2\circ f_1^{-1}$. Denote by $[X,f]$ the equivalence class of $(X,f)$. The set of equivalence classes of marked complex structures is the Teichmüller space, denoted by $\mathcal{T}_{g,n}$.

Let $X$ be a Riemann surface with boundary. There exist two different hyperbolic metrics on $X$. One is of infinite area, obtained from the Uniformization theorem; the other is of finite area, obtained from the restriction to $X$ of the hyperbolic metric on its (Schottky) double, such that each boundary component is a smooth simple closed geodesic (see §LABEL:ssec:double). The second one is called the intrinsic metric on $X$. In this paper, when we mention a hyperbolic metric on a surface with nonempty boundary we mean the second one. The correspondence between complex structures and hyperbolic metrics provides another approach to Teichmüller theory. Recall that a marked hyperbolic surface is a hyperbolic surface $X$ equipped with an orientation-preserving homeomorphism $f:S_{g,n}\to X$, where $f$ maps each component of the boundary of $S_{g,n}$ to a geodesic boundary of $X$. Two marked hyperbolic surfaces $(X_1,f_1)$ and $(X_2,f_2)$ are called equivalent if there is an isometry $h:X_1\to X_2$ homotopic to $f_2\circ f_1^{-1}$ relative to the boundary. The Teichmüller space $\mathcal{T}_{g,n}$ is also the set of equivalence classes of marked hyperbolic surfaces. For simplicity, we will denote a point in $\mathcal{T}_{g,n}$ by $X$, without explicit reference to the marking or to the equivalence relation.

Let $\beta_1,\ldots,\beta_n$ be the boundary components of $S_{g,n}$. For any $\Lambda=(\lambda_1,\ldots,\lambda_n)$ with $\lambda_i\geq 0$, let $\mathcal{T}_{g,n}(\Lambda)$ be the set of the equivalence classes of marked hyperbolic metrics whose boundary components have hyperbolic lengths $\Lambda$. In particular, $\mathcal{T}_{g,n}(0)$ is the Teichmüller space of surfaces with punctures. It is clear that $\mathcal{T}_{g,n}=\bigcup_{\Lambda}\mathcal{T}_{g,n}(\Lambda)$.

Let $\Gamma$ be a pants decomposition of $S_{g,n}$, i.e. the complement of $\Gamma$ on $S_{g,n}$ consists of pairs of pants. Let $\Gamma'$ be a set of disjoint simple closed curves whose restriction to any pair of pants consists of three arcs, such that any two of the arcs are not free homotopic with respect to the boundary of the pair of pants. The pair $(\Gamma,\Gamma')$ is called a marking of $S_{g,n}$. For any $X\in\mathcal{T}_{g,n}$, let $(L,T,\Lambda)$ be the corresponding Fenchel-Nielsen coordinates with respect to the marking $(\Gamma,\Gamma')$, where $L$ represents the lengths of $\Gamma$, $T$ represents the twists along $\Gamma$ and $\Lambda$ represents the lengths of the boundary components (for details about Fenchel-Nielsen coordinates we refer to [Bu]).
The Fenchel-Nielsen coordinates induce a natural homeomorphism between the Teichmüller spaces $\mathcal{T}_{g,n}(\Lambda)$ and $\mathcal{T}_{g,n}(0)$ in the following way:

$$\Phi_\Gamma:\mathcal{T}_{g,n}(\Lambda)\longrightarrow\mathcal{T}_{g,n}(0),\qquad (L,T,\Lambda)\longmapsto(L,T,0).$$

The goal of this paper is to compare various metrics on the Teichmüller spaces $\mathcal{T}_{g,n}(\Lambda)$ and $\mathcal{T}_{g,n}(0)$ via the homeomorphism $\Phi_\Gamma$.

###### Definition 1.1.

Two metric spaces $(X_1,d_1)$ and $(X_2,d_2)$ are called almost isometric if there exist a map $f:X_1\to X_2$ and two positive constants $A$ and $B$, such that both of the following two conditions hold.

1. For any $x,y\in X_1$, $|d_2(f(x),f(y))-d_1(x,y)|\leq B$.
2. For any $z\in X_2$, there exists $x\in X_1$ such that $d_2(z,f(x))\leq A$.

### 1.1. The Thurston metric and the arc metric

An essential simple closed curve on $S$ is a simple closed curve which is not homotopic to a single point or a boundary component. An essential arc is a simple arc whose endpoints are on the boundary and which is not homotopic to any subarc of the boundary. Let $\mathcal{S}(S)$ be the set of homotopy classes of essential simple closed curves on $S$, $\mathcal{A}(S)$ be the set of homotopy classes of essential arcs on $S$, and $\mathcal{B}(S)$ be the set of homotopy classes of the boundary components. For any $X_1,X_2\in\mathcal{T}_{g,n}(\Lambda)$, define

$$d_{Th}(X_1,X_2):=\log\sup_{[\alpha]\in \mathcal{S}(S)}\frac{l_{X_2}([\alpha])}{l_{X_1}([\alpha])}$$

and

$$d_{A}(X_1,X_2):=\log\sup_{[\alpha]\in \mathcal{A}(S)}\frac{l_{X_2}([\alpha])}{l_{X_1}([\alpha])}.$$

From the works [Pan] and [LPST2], both $d_{Th}$ and $d_{A}$ are asymmetric metrics on $\mathcal{T}_{g,n}(\Lambda)$, which are called the Thurston metric and the arc metric respectively. Moreover, the authors ([LPST2]) observed that

$$d_{A}(X_1,X_2)=\log\sup_{[\alpha]\in \mathcal{A}(S)\cup\mathcal{B}(S)\cup\mathcal{S}(S)}\frac{l_{X_2}([\alpha])}{l_{X_1}([\alpha])}.$$

Our first result is the following.

###### Theorem 1.2.

$(\mathcal{T}_{g,n}(\Lambda),d_{A})$ and $(\mathcal{T}_{g,n}(0),d_{Th})$ are almost isometric. More precisely, there is a constant $C_1$ depending on the surface and boundary lengths such that

$$|d_{A}(X_1,X_2)-d_{Th}(\Phi_\Gamma(X_1),\Phi_\Gamma(X_2))|\leq C_1.$$

###### Remark 1.

Papadopoulos-Su ([PS]) considered the case where $\Lambda$ is close to zero; they showed that the constant $C_1$ in Theorem 1.2 will tend to zero if $\Lambda$ tends to zero.

###### Proof of Theorem 1.2.

To prove Theorem 1.2, it suffices to verify that the two conditions in Definition 1.1 are satisfied. The first condition follows from Theorem 1.3 and Theorem LABEL:thm:thu-almost. The second condition follows from the fact that $\Phi_\Gamma$ is a homeomorphism. ∎

###### Theorem 1.3.

The arc metric and the Thurston metric are almost isometric in $\mathcal{T}_{g,n}(\Lambda)$. More precisely, there is a constant $C_2$ depending on the surface and boundary lengths such that

$$|d_{A}(X_1,X_2)-d_{Th}(X_1,X_2)|\leq C_2.$$
https://madhavamathcompetition.com/category/number-theory/
# Happy Numbers make Happy Programmers ! :-)

Here is one question which one of my students, Vedant Sahai, asked me. It appeared in the computer paper of his recent ICSE X exam (Mumbai): write a program to accept a number from the user and check whether the number is a happy number or not; the program has to display a message accordingly.

A Happy Number is defined as follows: take a positive number and replace the number by the sum of the squares of its digits. Repeat the process until the number equals 1 (one), or until it falls into an endless cycle. If the process ends in 1, then the number is called a happy number.

For example: 31

Solution: 31 is replaced by $3^{2}+1^{2}=10$, and 10 is replaced by $1^{2}+0^{2}=1$.

So, are you really happy? 🙂 🙂 🙂

Cheers, Nalin Pithwa.
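A minimal sketch of the requested program, in Python (one possible solution; not part of the original exam question):

```python
# Repeatedly replace n by the sum of the squares of its digits; a number is
# happy if this process reaches 1, unhappy if it revisits a value (a cycle).
def is_happy(n: int) -> bool:
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

num = int(input("Enter a positive number: "))
if is_happy(num):
    print(num, "is a happy number")
else:
    print(num, "is not a happy number")
```

For the worked example, `is_happy(31)` follows 31 → 10 → 1 and returns `True`.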
# Yet another special number !

The eminent British mathematician J. E. Littlewood once remarked: every integer was a personal friend of Srinivasa Ramanujan. Well, we are mere mortals, yet we can cultivate some "friendships with numbers". Let's try:

Question: Squaring 12 gives 144. By reversing the digits of 144, we notice that 441 is also a perfect square. Using C, C++, or Python, write a program to find all those integers m, with $1 \leq m \leq N$, verifying this property.

PS: in order to write a simpler version of the algorithm, start playing with small, particular values of N.

Reference: 1001 Problems in Classical Number Theory, Indian Edition, AMS (American Mathematical Society), Jean-Marie De Koninck and Armel Mercier. https://www.amazon.in/1001-Problems-Classical-Number-Theory/dp/0821868888/ref=sr_1_1?s=books&ie=UTF8&qid=1509189427&sr=1-1&keywords=1001+problems+in+classical+number+theory

Cheers, Nalin Pithwa.

# Fundamental theorem of arithmetic: RMO training

It is quite well-known that any positive integer can be factored into a product of primes in a unique way, up to order (and that 1 is neither prime nor composite); we all know this from our high-school practice of the "tree method" of prime factorization, and related material like the Sieve of Eratosthenes. But it is so obvious; why call it a theorem, that too a "fundamental" one, when it seems to require no proof? It was none other than the prince of mathematicians of yore, Carl Friedrich Gauss, who wrote down a proof of it. It DOES require a proof; in other number systems there are counter-examples. Below is one, which I culled for my students:

Question: Let $E= \{a+b\sqrt{-5}: a, b \in Z\}$.

(a) Show that the sum and product of elements of E are in E.

(b) Define the norm of an element $z \in E$ by $||z||=||a+b\sqrt{-5}||=a^{2}+5b^{2}$. We say that an element $p \in E$ is prime if it is impossible to write $p=n_{1}n_{2}$ with $n_{1}, n_{2} \in E$, and $||n_{1}||>1$, $||n_{2}||>1$; we say that it is composite if it is not prime. Show that in E, 3 is a prime number and 29 is a composite number.

(c) Show that the factorization of 9 in E is not unique.

Cheers, Nalin Pithwa.

# Another special number(s): Wilson primes and playful programming!

Problem: A prime number p is called a Wilson prime if $(p-1)! \equiv -1 \pmod {p^{2}}$. Using a computer and some programming language like C, C++, or Python, find the three smallest Wilson primes. (A brute-force sketch appears at the end of this page.)

Cheers, Nalin Pithwa.

# A Special Number

Problem: Show that for each positive integer n equal to twice a triangular number, the corresponding expression $\sqrt{n+\sqrt{n+\sqrt{n+ \sqrt{n+\ldots}}}}$ represents an integer.

Solution: Let n be such an integer; then there exists a positive integer m such that $n=(m-1)m=m^{2}-m$. We then have $n+m=m^{2}$, so that we have successively $\sqrt{n+m}=m$; $\sqrt{n + \sqrt{n+m}}=m$; $\sqrt{n+\sqrt{n+\sqrt{n+m}}}=m$; and so on. It follows that $\sqrt{n+\sqrt{n+\sqrt{n+ \sqrt{n+\ldots}}}}=m$, as required.

Comment: you have to be a bit aware of the properties of triangular numbers.

Reference: 1001 Problems in Classical Number Theory by Jean-Marie De Koninck and Armel Mercier, AMS (American Mathematical Society), Indian Edition: https://www.amazon.in/1001-Problems-Classical-Number-Theory/dp/0821868888/ref=sr_1_1?s=books&ie=UTF8&qid=1508634309&sr=1-1&keywords=1001+problems+in+classical+number+theory

Cheers, Nalin Pithwa.

# Another cute proof: square root of 2 is irrational.

Reference: Elementary Number Theory, David M. Burton, Sixth Edition, Tata McGraw-Hill.

We are all aware of the proof we learn in high school (due to Pythagoras) that $\sqrt{2}$ is irrational. But there is an interesting variation of that proof. Let $\sqrt{2}=\frac{a}{b}$ with $gcd(a,b)=1$; then there must exist integers r and s such that $ar+bs=1$. As a result, $\sqrt{2}=\sqrt{2}(ar+bs)=(\sqrt{2}a)r+(\sqrt{2}b)s=2br+as$, using $\sqrt{2}a=2b$ and $\sqrt{2}b=a$. This representation leads us to conclude that $\sqrt{2}$ is an integer, an obvious impossibility. QED.

# RMO 2017 Warm-up: Two counting conundrums

Problem 1: There are n points on a circle, all joined with line segments. Assume that no three (or more) segments intersect in the same point. How many regions inside the circle are formed in this way?

Problem 2: Do there exist 10,000 10-digit numbers divisible by 7, all of which can be obtained from one another by a re-ordering of their digits?

Solutions will be put up in a couple of days.

Nalin Pithwa.
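Here is the brute-force sketch promised in the Wilson-primes post above (illustrative Python, not from the original posts); it simply accumulates $(p-1)!$ modulo $p^{2}$:

```python
# Find the three smallest Wilson primes: primes p with (p-1)! ≡ -1 (mod p^2).
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_wilson_prime(p: int) -> bool:
    m = p * p
    f = 1
    for k in range(2, p):   # accumulate (p-1)! modulo p^2
        f = (f * k) % m
    return f == m - 1       # f ≡ -1 (mod p^2)

wilson = [p for p in range(2, 1000) if is_prime(p) and is_wilson_prime(p)]
print(wilson[:3])           # expected output: [5, 13, 563]
```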
https://tex.stackexchange.com/questions/235187/vertical-bubble-loop-diagrams-with-feynmf
# Vertical Bubble (Loop) Diagrams with FeynMF

I want to produce the following Feynman diagram with FeynMF; the problem is how to render the loops (particle-hole bubble) in the vertical direction, and furthermore, how to fill a vertical bubble with hatches or a gray shade.

• A related question is asked at tex.stackexchange.com/questions/160660/…. However, the provided answer does not solve my problem. – AlQuemist Mar 26 '15 at 9:44
• Welcome to TeX.SX. Questions about how to draw specific graphics that just post an image of the desired result are really not reasonable questions to ask on the site. Please post a minimal compilable document showing that you've tried to produce the image and then people will be happy to help you with any specific problems you may have. See minimal working example (MWE) for what needs to go into such a document. – Reinstate Monica - M. Schröder Mar 26 '15 at 9:57
• Thanks for your notice; yet actually I cannot produce any MWE for this figure with FeynMF; so I posted a picture of the diagram made by another application. @MartinSchröder – AlQuemist Mar 27 '15 at 12:55

This is quite tricky to do because feynmf does not provide an easy way to fill an arbitrary cycle of paths, nor an easy interface to the paths it has created for you. You could draw a polygon and shade that, but then you'd have trouble putting the arrows on the side of it. Faced with this level of complexity, it's usually easier to draw your diagram in raw Metapost, using the macros from feynmp as needed. Here is an attempt at your picture, using this approach.

```
prologues := 3;
outputtemplate := "%j%c.eps";
input feynmp
beginfig(1);
    w = 180; h = 200;
    z1 = (0,0);  z2 = (w,0);
    z3 = (0,h);  z4 = (w,h);
    x5 = x6 = x7 = 1/2 w;
    y5 = 0;  y6 = 1/3 h;  y7 = 2/3 h;

    draw_dashes_arrow z1 -- z5;
    draw_dashes_arrow z5 -- z2;
    draw_plain_arrow  z3 -- z7;
    draw_plain_arrow  z7 -- z4;

    path a[];
    a1 = z5 .. (z5+12up) rotatedabout( 1/2[z5,z6], 90) .. z6;
    a2 = a1 rotatedabout( 1/2[z5,z6], 180);
    a3 = a1 shifted (z6-z5);
    a4 = a2 shifted (z6-z5);  % restored: this definition was missing from the
                              % extracted text, but a4 is used below

    fill a3 .. a4 .. cycle withcolor .8 white;
    %shade a3 .. a4 .. cycle;

    draw_plain_arrow  a1;
    draw_plain_arrow  a2;
    draw_dashes_arrow a3;
    draw_dashes_arrow a4;

    label.bot(btex $\ell$ etex, z7);
    label.top(btex $\ell$ etex, z6);
    %dotlabels.top(1,2,3,4,5,6,7);
endfig;
end.
```

Compile this with mpost to produce an .eps file that you can convert to PDF or include directly. If you prefer shading with diagonal lines, then comment out the fill line and uncomment the shade line. If you want to see the numbers I've assigned to each vertex, uncomment the dotlabels line.

The macros
• shade
• draw_plain_arrow
• draw_dashes_arrow
are provided by feynmp.mp which I have included at the top.
https://blender.stackexchange.com/questions/119845/dope-sheet-key-automation-effects-preset
# Dope Sheet Key Automation/Effects/Preset

I have all these elements descending in the animation. I would like to offset all their keys one after another, so that the elements are created one after the other in a staggered sequence.

Is there any way to do this without using an add-on?

I've written this script to shift all the keyframes at once.

1. Select the objects
2. Open the script in the Text Editor
3. Set the amount of frames, e.g.: shift(10)
4. Run script (Alt + P)

```python
import bpy

def shift(step):
    interval = 0.0
    # each successive selected object gets a larger offset
    for object in bpy.context.selected_objects:
        interval += step
        if object.animation_data is not None:
            fcurves = object.animation_data.action.fcurves
            for curve in fcurves:
                keyframepoints = curve.keyframe_points
                for point in keyframepoints:
                    # move the key and both bezier handles by the offset
                    point.co.x += interval
                    point.handle_left.x += interval
                    point.handle_right.x += interval

shift(10)
```

The script will loop through the fcurves of the selected objects and will increment the x value (the frame) of each keyframe point.

## Caveats:

1. the script doesn't allow you to set the order of the objects.
2. the script will shift ALL the keyframes.

• Damn m8 wIth all respect! your devilish! – andrepazleal Oct 5 '18 at 16:48
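To address the first caveat, here is a hypothetical variation (mine, not from the original answer) that makes the offset order deterministic by sorting the selection by object name:

```python
import bpy

def shift_sorted(step):
    interval = 0.0
    # sorting by name makes the stagger order independent of selection order
    for obj in sorted(bpy.context.selected_objects, key=lambda o: o.name):
        interval += step
        if obj.animation_data is not None and obj.animation_data.action is not None:
            for curve in obj.animation_data.action.fcurves:
                for point in curve.keyframe_points:
                    point.co.x += interval
                    point.handle_left.x += interval
                    point.handle_right.x += interval

shift_sorted(10)
```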
https://www.studysmarter.us/explanations/macroeconomics/financial-sector/security-market-line/
# Security Market Line

If you were told that you could invest in an asset and earn a rate of return, you would weigh it against the risk, the asset price, and the opportunity cost. Imagine all investors on the market and their choices about the assets they want to invest in. To draw inferences at the macro level, we need to aggregate all the market participants' preferences. This would allow determining how investing in a risky asset should be compensated for each possible level of risk. Well, luckily, the Security Market Line does just that! Eager to learn how the appropriate rate of return for any asset on the market can be determined, irrespective of the corresponding risk level? Keep scrolling to find out!

## Security Market Line Definition

Before diving into the definition of the security market line, we need to understand the investors' compensation. Every investor needs to be compensated when investing in an asset. But what does this compensation consist of?

The first part is the opportunity cost of parting with their money, or the time value of money. The investor could have invested their money elsewhere and earned at least a risk-free interest rate. This risk-free rate is considered the minimum compensation any investor needs to receive when investing in a risky asset. Why so? Because investing in risk-free securities such as U.S. government bonds would provide the same return without the added risk that the risky investment brings.

This brings us to the next part that investors need to be compensated for: non-diversifiable risk. Different assets carry different risk levels; thus, the compensation, or the risk premium, that the investor should receive varies.

Bringing the two parts that the investors need to be compensated for together yields the following equation:

$$E(R)=r_f+RP$$

Where:
$$E(R)$$ - the average expected rate of return
$$r_f$$ - compensation for the time value of money, or the risk-free rate
$$RP$$ - risk premium

A risk premium is the compensation an investor receives for non-diversifiable risk.

Now, what does the risk of an investment depend on? How can we determine which investment is riskier than another? The answer is: beta. A beta of an investment is the degree to which an asset co-moves with the rest of the market. The takeaway is that the assets with large betas carry more non-diversifiable risk than those with small betas. This means that investors will need to be compensated more for assets with significant betas. In other words, such investments would require more substantial risk premia.

A beta of an investment is the degree to which an asset co-moves with the rest of the market.

## Security Market Line Formula

Understanding the basic principle of returns and learning about the definition of beta brings us to the security market line formula. The $$SML$$ equation or formula is:

$$E(R_i)=r_f+RP=r_f+\beta_i\times(RP_M)=r_f+\beta_i\times[E(R_M)-r_f]$$

Where:
$$E(R_i)$$ - expected return on an asset
$$r_f$$ - the risk-free rate
$$RP$$ - total risk premium
$$\beta_i$$ - asset's beta
$$RP_M$$ - market risk premium
$$E(R_M)$$ - expected return on a market portfolio

As we've seen above, the logic of investor compensation holds here. When securities are priced according to the rule of investor compensation, they will be located on the $$SML$$. However, if securities are over- or undervalued, they will be located above or below the $$SML$$.
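As a small numeric illustration of the formula (a Python sketch with made-up numbers, not values from the text: a 2% risk-free rate and an 8% expected market return):

```python
# Expected return implied by the SML: E(R_i) = r_f + beta_i * (E(R_M) - r_f)
def expected_return(r_f: float, beta: float, market_return: float) -> float:
    return r_f + beta * (market_return - r_f)

print(expected_return(0.02, 0.0, 0.08))  # beta 0   -> 0.02 (risk-free asset)
print(expected_return(0.02, 1.0, 0.08))  # beta 1   -> 0.08 (market portfolio)
print(expected_return(0.02, 1.5, 0.08))  # beta 1.5 -> 0.11 (riskier asset)
```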
The security market line plots the average expected rates of return on assets against their risk levels. It has a positive slope and an intercept at the risk-free rate.

## Derivation of Security Market Line

Let's go through a graphical derivation of the security market line - $$SML$$. As the average expected rate of return on an investment depends on the asset risk level, we can plot all the average returns against their risk. We can choose the average expected rate of return to be on the vertical axis and the assets' betas on the horizontal axis.

The risk-free rate is the bare minimum for which the investors need to be compensated. This means that the average expected rate of return for any asset will not be less than the risk-free rate. Let's mentally mark it at some point on the vertical axis. We assume that the risk-free rate is positive.

Now, let's think about whether there are any investments that we can already determine here. It turns out we know two already! One is a risk-free asset such as a U.S. short-term government bond. As it is assumed to be practically risk-free, its beta is $$0$$. The second one is the market portfolio.

A market portfolio is a hypothetical portfolio comprised of all the assets in the market. It follows the rule that asset weights in such a portfolio should be proportional to the relative quantity of each asset in the market.

As the market portfolio is well-diversified, its beta will be equal to $$1$$. We now have the two points; the only thing left to determine is the slope! As we know that investors need to be compensated more for higher risk levels, the slope of the $$SML$$ will need to be positive. Now we can plot the $$SML$$. Take a look at Figure 1 below.

Figure 1. Security market line, StudySmarter Originals

Figure 1 above shows the $$SML$$. On the horizontal axis, there is the risk level as measured by the assets' beta. On the vertical axis, there is the average expected rate of return. The $$SML$$ passes through the risk-free asset and the market portfolio. The risk-free portfolio marked by $$R_f$$ carries zero risk, while the market portfolio denoted by $$M$$ carries the risk measured by a beta equal to one. We have now derived the $$SML$$ graphically.

The security market line underpins the idea that any asset on the market should compensate the investors for the time value of money and the risk that those assets carry.

## The Slope and the Intercept of the Security Market Line

The slope of the security market line, $$SML$$, is solely determined by investors' expectations about the risk and the compensation that they need to receive for this risk. The $$SML$$ will be steeper the more risk-averse investors are: for any increase in the risk level, they will require greater compensation. However, if investors are less risk-averse, the $$SML$$ will be less steep: for any increase in the risk level, they will not require as much compensation as they would if they were more risk-averse.

The intercept of the $$SML$$ is the risk-free rate. It is affected by the Federal Reserve's policy, and investors' preferences play no role in determining the intercept of the $$SML$$. However, this is precisely why investors carefully observe all the moves of the Federal Reserve. Any changes in the risk-free rate affect what investors perceive the $$SML$$ to be at any given time. This, in turn, significantly affects the rates of return and the prices of securities.
## Security Market Line Example

Let's look at a security market line example where assets are either over- or undervalued. Take a look at Figure 2 below.

Figure 2. Security market line and arbitrage, StudySmarter Originals

Asset $$A$$ lies above the $$SML$$ because it is undervalued or underpriced. Think about it: for a given level of risk, the return it provides is way too high. Investors would aim to buy such an asset quickly, pushing its price up. The price is inversely related to the rate of return; thus, the rate of return should go down.

Asset $$B$$ lies below the $$SML$$ because it is overvalued or overpriced. The return it provides is too low for a given level of risk. Investors would aim to sell such an asset quickly, pushing its price down. The price is inversely related to the rate of return; thus, the rate of return should go up.

Arbitrage implies that over time both security $$A$$ and security $$B$$ would come to lie on the $$SML$$. When an asset is located precisely on the $$SML$$, it is considered to be priced correctly. Its average expected rate of return accurately compensates investors for the risk they are taking and the time value of money.

A security is underpriced if the return it provides for a given level of risk is too high.

A security is overpriced if the return it provides for a given level of risk is too low.

Arbitrage in the $$SML$$ context means that over time securities will be priced to appropriately compensate investors for the risk levels and the time value of money.

## Security Market Line vs. Capital Market Line

The security market line $$SML$$ is different from the capital market line - $$CML$$. The $$CML$$ plots the returns over and above the risk-free rate of efficient portfolios against portfolio standard deviations. The $$SML$$, however, plots the returns over and above the risk-free rate for individual assets against their risk levels. The horizontal axis for the $$CML$$ is the portfolio standard deviation, as it best represents the risk incurred by an efficient portfolio of assets. The horizontal axis for the $$SML$$ is the beta of individual assets, as it best represents the risk that a particular investment could potentially introduce into a portfolio.

An efficient portfolio is a portfolio that is comprised of the market and the risk-free asset.

Now that you've understood how the $$SML$$ is different from the $$CML$$, let's see why these are often confused. Well, for one, the names sound very similar, but that's not the end of the story. In fact, the $$SML$$ can be constructed from the $$CML$$ by plotting all the stocks within the efficient frontier onto one line! How exactly to plot the $$SML$$ and the $$CML$$ is outside the scope of this article.
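Tying the example section together, here is a tiny sketch (illustrative numbers only, not from the text) of the decision rule an investor might apply:

```python
# An asset whose observed return lies above the SML-implied return is
# underpriced; one that lies below it is overpriced.
def classify(observed: float, r_f: float, beta: float, market_return: float) -> str:
    fair = r_f + beta * (market_return - r_f)  # SML-implied expected return
    if observed > fair:
        return "underpriced (return too high for its risk)"
    if observed < fair:
        return "overpriced (return too low for its risk)"
    return "fairly priced (on the SML)"

print(classify(0.13, 0.02, 1.2, 0.08))  # fair return 0.092 -> underpriced
```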
## Security Market Line - Key takeaways

• A risk premium is the compensation an investor receives for non-diversifiable risk.
• A beta of an investment is the degree to which an asset co-moves with the rest of the market.
• The security market line plots the average expected rates of return on assets against their risk levels. It has a positive slope and an intercept at the risk-free rate, and it is calculated using the equation $$E(R)=r_f+RP$$.
• A market portfolio is a hypothetical portfolio comprised of all the assets in the market. It follows the rule that asset weights in such a portfolio should be proportional to the relative quantity of each asset in the market.
• An efficient portfolio is a portfolio that is comprised of the market and the risk-free asset.

## Final Security Market Line Quiz

Question: What is the main idea behind the security market line?
Answer: The security market line underpins the idea that any asset on the market should compensate the investors for the time value of money and the risk that those assets carry.

Question: What does the security market line plot?
Answer: The security market line plots the average expected rates of return on assets against their risk levels. It has a positive slope and an intercept at the risk-free rate.

Question: What is a risk premium?
Answer: A risk premium is the compensation an investor receives for non-diversifiable risk.

Question: Explain the idea behind an investment's beta.
Answer: A beta of an investment is the degree to which an asset co-moves with the rest of the market.

Question: What is a market portfolio?
Answer: A market portfolio is a hypothetical portfolio comprised of all the assets in the market. It follows the rule that asset weights in such a portfolio should be proportional to the relative quantity of each asset in the market.

Question: What is an efficient portfolio?
Answer: An efficient portfolio is a portfolio that is comprised of the market and the risk-free asset.

Question: What is the security market line equation?
Answer: The security market line is calculated by using the following equation: $$E(R_i)=r_f+RP=r_f+\beta_i\times(RP_M)=r_f+\beta_i\times[E(R_M)-r_f]$$

Question: What are the advantages of the security market line?
Answer: It allows for finding the compensation investors would like to receive for the risk they are exposed to when purchasing any asset.

Question: What are the characteristics of the security market line?
Answer: The security market line has a positive slope and an intercept at the risk-free rate.

Question: When is a security considered underpriced?
Answer: A security is underpriced if the return it provides for a given level of risk is too high.

Question: When is a security considered overpriced?
Answer: A security is overpriced if the return it provides for a given level of risk is too low.

Question: What is arbitrage in the context of the SML?
Answer: Arbitrage in the SML context means that over time securities will be priced to appropriately compensate investors for the risk levels and the time value of money.

Question: How is the security market line (SML) different from the capital market line (CML)?
Answer: The CML plots the risk premia of efficient portfolios against portfolio standard deviations. The SML, however, plots the risk premia for individual assets against their risk levels.
https://dmoj.ca/problem/ioi13p2io
## IOI '13 P2 - Art Class (Standard I/O)

Points: 20 (partial)
Time limit: 60.0s
Memory limit: 64M

You have an Art History exam approaching, but you have been paying more attention to informatics at school than to your art classes! You will need to write a program to take the exam for you.

The exam will consist of several paintings. Each painting is an example of one of four distinctive styles, numbered 1, 2, 3 and 4.

Style 1 contains neoplastic modern art. Style 2 contains impressionist landscapes. Style 3 contains expressionist action paintings. Style 4 contains colour field paintings.

Your task is, given a digital image of a painting, to determine which style the painting belongs to.

The IOI judges have collected many images in each style. Nine images from each style have been chosen at random and included in the task materials on your computer, so that you can examine them by hand and use them for testing. The remaining images will be given to your program during grading.

The image will be given as a grid of pixels with H rows and W columns. The rows of the image are numbered from top to bottom, and the columns are numbered from left to right. The pixels are described using two-dimensional arrays R, G and B, which give the amount of red, green and blue respectively in each pixel of the image. These amounts range from 0 (no red, green or blue) to 255 (the maximum amount of red, green or blue).

#### Input Specification

The first line of input will contain the integer T, the number of test cases to follow. For each test case:

• The first line will contain the two integers H and W, the height and width of the image.
• The next H lines will each contain W integers. Each integer will describe a pixel using the default RGB color model (see below for conversion instructions).

#### Output Specification

For every test case, output a line containing a single integer denoting the style of the image, which must be 1, 2, 3 or 4, as described above.

#### Sample Tests

Individual sample test cases for the above images can be found here for you to analyse (Right click > Save link as).

#### Interpreting Colors

The standard conversion scheme with RGB colors uses the formula RGB = (R<<16)|(G<<8)|B, where R, G and B are values from 0 to 255 inclusive, << is the bitshift-left operator, and | is the bitwise-OR operator. To obtain the RGB values from an individual encoded integer pixel, use the following functions:

##### C/C++

```
int getR(int RGB) { return (RGB >> 16) & 0xFF; }
int getG(int RGB) { return (RGB >> 8) & 0xFF; }
int getB(int RGB) { return RGB & 0xFF; }
```

##### Pascal

```
function getR(RGB: longint): integer;
begin
  getR := (RGB shr 16) and $FF;
end;

function getG(RGB: longint): integer;
begin
  getG := (RGB shr 8) and $FF;
end;

function getB(RGB: longint): integer;
begin
  getB := RGB and $FF;
end;
```

#### Scoring

Suppose you correctly classify P percent of the images (so 0 ≤ P ≤ 100):

• If then you will score points.
• If then you will score between and points, on a linear scale. Specifically, your score will be , rounded down to the nearest integer.
• If then you will score between and points, on a linear scale. Specifically, your score will be , rounded down to the nearest integer.
• If then you will score points.

#### Experimentation

The sample grader on your computer will read input from the file artclass.jpg. This file must contain an image in JPEG format. You are allowed to use any available graphics processing applications to study the images, but this is not necessary to solve the problem.
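For reference, here is a sketch (not part of the official statement) of reading the standard-I/O format above in Python and decoding each pixel with the same bit operations as the C helpers; the classifier itself is left as a placeholder:

```python
import sys

def main():
    data = sys.stdin.buffer.read().split()
    pos = 0
    T = int(data[pos]); pos += 1
    out = []
    for _ in range(T):
        H, W = int(data[pos]), int(data[pos + 1]); pos += 2
        R = [[0] * W for _ in range(H)]
        G = [[0] * W for _ in range(H)]
        B = [[0] * W for _ in range(H)]
        for i in range(H):
            for j in range(W):
                rgb = int(data[pos]); pos += 1
                R[i][j] = (rgb >> 16) & 0xFF
                G[i][j] = (rgb >> 8) & 0xFF
                B[i][j] = rgb & 0xFF
        out.append("1")  # placeholder: a real solution classifies the style here
    print("\n".join(out))

main()
```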
#### Language Notes

##### C/C++

You must #include "artclass.h".

##### Pascal

You must define the unit ArtClass. All arrays are numbered beginning at (not ). See the solution templates on your machine for examples.
https://www.nature.com/articles/s41598-019-42261-3
# eBrain: a Three Dimensional Simulation Tool to Study Drug Delivery in the Brain

## Abstract

Neurodegenerative disorders such as Alzheimer's and Parkinson's disease are severe disorders with acute symptoms that gradually progress. In the course of developing disease-modifying treatments for neurodegenerative disorders there is a need to develop novel strategies to increase efficacy of drugs and accelerate the development process. We developed a tool for simulating drug delivery in the brain by translating MRI data into an interactive 3D model. This tool, the eBrain, superimposes simulated drug diffusion and tissue uptake by inferring from the MRI data with a seamless display from any angle, magnification, or position. We discuss a representative implementation of eBrain that is inspired by clinical data in which insulin is intranasally administered to Alzheimer patients. Using extensive analysis of multiple eBrain simulations with varying parameters, we show the potential for eBrain to determine the optimal dosage to ensure drug delivery without overdosing the tissue. Specifically, we examined the efficacy of combined drug doses and potential compounds for tissue stimulation. Interestingly, our analysis uncovered that the drug efficacy is inferred from tissue intensity levels. Finally, we discuss the potential of eBrain and possible applications of eBrain to aid both inexperienced and experienced medical professionals as well as patients.

## Introduction

The neurodegenerative disorders such as Alzheimer's and Parkinson's disease are severe disorders with acute symptoms that gradually worsen over time1. Due to the complexity of brain tissue and its essential function, developing disease-modifying treatments for neurodegenerative disorders requires arduous study of dynamic progression of the disease state2,3,4,5. The bulk of collected data, however, are snapshots at specific time points or pathology of a postmortem state. Therefore, the mechanisms in a living organism underlying the degenerative process and the way treatment strategies control them, remain largely unknown. Consequently, drug development in neurodegenerative research is slow and treatment paradigms are normally developed over repetitive trial and error cycles. To advance the field, there is a need to develop novel strategies to increase efficacy of drugs and accelerate the development process.

Various approaches, technologies and systems are being developed to study the transportation of pharmaceutical compounds into the brain6,7,8. We use computational and mathematical approaches to simulate and study brain mechanisms and to test complex drug delivery systems in-silico. Specifically, we simulate an intranasal delivery system in which the drug is infused into the brain through the nasal passage9. Our work is inspired by an ongoing clinical trial for Alzheimer's patients10,11, in which insulin is delivered intranasally into the patient's brain to enhance brain function and memory recovery. Preliminary results have shown increased glucose uptake in the brain tissue parenchyma and enhanced brain neuronal activity12,13.
Developing this finding into a treatment requires determining a dosage that safely enhances cognitive recovery. Although this is clinically feasible, the necessary steps involved would require extensive testing to determine the most effective drug compound and dosing. Any trial would also need to account for avoidance of overdose, patient reaction, and potential side effects14,15. Using a 3D simulation tool, we studied how a drug is diffused and absorbed in the brain. This tool allowed us to examine the relationship between cell stimulation, drug dose, tissue density, and tissue permissibility. This strategy has the potential to accelerate drug development and reduce trial and error cycles. This work will pave the way to a professional tool to design and analyze clinical trials and patient-specific protocols.

eBrain, the tool we developed for this study, is an in-house 3D computer-based simulation built using 3D graphics technologies that are often used for products in the gaming industry. eBrain integrates physiological data, medical images for processing measurement input, and experimental knowledge into clinically relevant output16. eBrain supports in-silico analysis of the system dynamics under various conditions. We show how eBrain helps to study intranasal drug delivery and to screen optimal treatments over multiple possible scenarios. Furthermore, it will assist in efforts to create more focused clinical trials with reduced cohort sizes, thereby potentially minimizing the risk to human lives. Notably, eBrain can be personalized to each patient by processing data specific to each individual. eBrain is built on previous simulation strategies that have successfully predicted experimental results in various biological systems, some of which gained support by subsequent experimental study17,18,19,20,21,22. Several reviews have highlighted the potential of simulation in aspects related to neuroscience, e.g., migration and symptom profiles23,24.

In this manuscript, Section one details the design, the experimentally derived algorithms, and the set-up requirements of the framework of eBrain. Section two describes examples of diffusion and tissue uptake simulations for intranasal delivery with a comparison to experimental results. Section three mathematically characterizes eBrain internal functions and details various possible scenarios of tissue stimulation following different routes of administration. Section four discusses how eBrain analysis can be used to predict optimal drug delivery. Finally, Section five highlights future features and extensions for eBrain and possible applications of the software.

## Section 1: Algorithms, Design and Set Up Requirements

### Three dimensional virtual model from MRI data

To develop eBrain simulations, we used the C++ programming language aided by tools from Epic Games' Unreal Engine (www.unrealengine.com) framework. The MRI data in the simulation was obtained from the FOX Foundation database (https://www.michaeljfox.org/). Specifically, we used the MRI of subject 10874, a 73-year-old female Parkinson's disease patient, dated 9/5/2014 and downloaded from the Parkinson's Progression Markers Initiative (PPMI) database (http://www.ppmi-info.org/). The virtual MRI model overlies a 3D grid containing approximately 500,000 grid voxels (a matrix of 257 × 80 × 24). The 3D surface tightly encloses all points in 3D space that are contained within our volumetric material, then uses the volumetric material to render the MRI at all points (Fig. 1A).
For internal views of the MRI, we created a 'virtual screen', a 3D plane located in front of the virtual camera and moving simultaneously with it. The volume material is processed in real time; therefore, additional data can be layered on top of the MRI. The result is a completely unconstrained view of the MRI data, viewable from any angle, at any magnification, and from any position. Areas of interest (e.g., nose, substantia nigra) are superimposed. This design allows us to dynamically select multiple layers of 3D data for comprehensive visualization. eBrain 3D visualization permits multiple perspectives including the traditional axial, coronal, and sagittal planes25,26. The camera moves seamlessly to slice the image over unconstrained views (Fig. 1B and Supplementary Clip 1). Each plane is projected into the three traditional planes, showing the three slices that correspond to the plane that is under inspection. eBrain thus allows tracking of tissues that are not aligned with any specific plane, including branched tissues from the stem area.

### Simulating drug diffusion in the brain via the intranasal delivery system

To simulate intranasal drug delivery in eBrain, we marked the delivery-related locations within the nasal cavity on the MRI (specifically, the olfactory bulb and trigeminal pathways involving perineural and perivascular channels). The superimposed grid cells are set as the origin for calculating the diffusion of the drug quantity to be delivered. The current version simulates passive transport of the drug; however, the simulation can be extended to support active transport and efflux transporter systems. This can be achieved by labelling the nasal epithelium in the MRI and formulating a mathematical equation to mimic the dynamics of the relevant transport system.

For the diffusion simulation of intranasal drug delivery, we assumed that the diffusion of the drug through the brain is driven by tissue density. Accordingly, in regions with the highest density, where the MRI intensity is lightest (e.g., bones), we assumed diffusion is minimal. In regions with the lowest density, where the MRI intensity is darkest (e.g., liquid areas, CSF), we assumed diffusion is maximal. For grey areas, we assume that the diffusion coefficient varies with shade between the minimal and maximal values. We designed the diffusion in the simulation by approximating the diffusion coefficient as a function of MRI intensity in each grid cell according to a reverse logistic function with midpoint = 0.1 and steepness = 60 (Fig. 2A). This assumption is supported by several experimental studies testing insulin diffusion and glucose uptake in brain tissue and hydrogels27. Obviously, in real-life brain tissue, other factors are involved in the diffusion process (e.g., the blood-brain barrier and the vascular network); however, for the purpose of the intranasal delivery study, we omitted them. Future extensions of eBrain would cover these aspects, creating a more comprehensive platform.

We used a discretization of a master equation to simulate drug diffusion between the i-th voxel and the flux from the set of adjacent voxels (Eq. 1). At each time-step, the simulation sums the influx of the drug from the adjacent voxels and subtracts the flux from the i-th voxel to the adjacent voxels. The diffusion rate between two adjacent cells is determined based on their MRI intensity color.
$${D}_{i}^{n+1}={D}_{i}^{n}+{\rm{\Delta }}t\sum _{j\in N}{k}_{ji}{D}_{j}^{n}-{\rm{\Delta }}t\sum _{j\in N}{k}_{ij}{D}_{i}^{n}-{U}_{i}^{n}({D}_{i})$$ (1)

with $${D}_{i}^{n}$$ being the drug concentration of the i-th voxel at the n-th iteration step, j being the index of an adjacent voxel, N being the set of neighboring voxels, $${k}_{ij}$$ being the diffusion rate from the i-th voxel to the j-th voxel, $${\rm{\Delta }}t$$ being the time step (see illustration in Fig. 2C), and $${U}_{i}^{n}(D)$$ being the tissue uptake of the i-th voxel at the n-th iteration step (Eq. 2). We compared the parameters of Eq. (1) with diffusion coefficients measured using UV imaging in agarose hydrogels, which are used for mimicking subcutaneous tissues27,28. Experimentally measured insulin diffusion coefficients in different medium conditions range between 0.8–2.0 (m²/s). Similarly, the diffusion rates assigned to brain tissue in our simulations range between 0.4–2.1 (simulated volume/time). Specifically, an average of ~1.3 is seen at the striatum and the basal ganglia.

### Simulating tissue uptake as function of drug compounding

To calculate tissue uptake (i.e., the way the drug is compounded to increase the tissue's ability to absorb it), we define an absorption coefficient and a saturation concentration threshold for each grid voxel (indicating the permissibility of the tissue at the overlay area). We use the MRI intensity as an indication of tissue density to derive the uptake parameters for the calculation. We assume a correlation between tissue density and its absorption ability. Accordingly, we set higher uptake in grey areas (i.e., brain tissue) and lower uptake in black and white areas (liquid and bone, respectively). We approximate the absorption capacity as a specialized Gaussian function of MRI intensity where the maximum value is normalized to 1. The MRI intensity of maximum absorption is given by the Gaussian mean (μ), and the range of intensities where significant absorption occurs is described by the Gaussian standard deviation (σ). The default solution with μ = 0.15 and σ = 0.04 is given in Fig. 2B. μ was derived from the distribution of MRI colors.

$${U}_{i}^{n}(D)={a}_{i}{D}_{i}^{n}$$ (2)

where $${a}_{i}$$ is the absorption coefficient of the i-th grid voxel (see illustration in Fig. 2C). For comparison, experimental results yielded a glucose uptake value of 0.616 (g/min) in healthy brain tissue28. This result concurs with the range of 0.4–1.0 for tissue uptake in eBrain, specifically, an average simulated uptake of ~0.9 at the striatum and the basal ganglia. A more comprehensive comparison can be done against images taken using an autoradiography approach with radiolabeled substances. The simulation can be fitted to the data to extract variables and calibrate the output for more accurate results, as well as to validate simulation predictions.

### Simulating tissue stimulation in the substantia nigra

Within the area of the substantia nigra, we placed computational instances to indicate stimulation of the neurons within the virtual space. We marked the specific position of the tissue region on the MRI slice images to indicate where the neurons should be positioned. When the simulation is executed, it loads this information and identifies the grid voxels that overlap the marked region. At each run, the simulation places the instances randomly in the marked area, creating a uniform distribution of instances in a slightly different pattern.
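Before turning to the stimulation mechanism carried by these instances, the update rules of Eqs. (1) and (2) can be made concrete. The following is a minimal one-dimensional sketch, assuming the rate of flux from a voxel depends on that voxel's own intensity. The parameter values (logistic midpoint 0.1, steepness 60; Gaussian μ = 0.15, σ = 0.04; maximal rate 2.1) are taken from the text; everything else is illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Reverse logistic mapping of normalized MRI intensity to a diffusion
// rate: dark (low-intensity, fluid) voxels diffuse fast, bright (bone)
// voxels barely diffuse. kMax = 2.1 matches the upper end of the
// simulated range quoted in the text.
double diffusionRate(double intensity) {
    const double midpoint = 0.1, steepness = 60.0, kMax = 2.1;
    return kMax / (1.0 + std::exp(steepness * (intensity - midpoint)));
}

// Gaussian absorption coefficient, normalized so its maximum is 1.
double absorption(double intensity) {
    const double mu = 0.15, sigma = 0.04;
    const double d = (intensity - mu) / sigma;
    return std::exp(-0.5 * d * d);
}

// One explicit step of Eq. (1) with the uptake term of Eq. (2),
// written for a 1-D row of voxels for brevity (the real grid is 3-D;
// N is then the set of face neighbors).
void step(std::vector<double>& D, const std::vector<double>& mri, double dt) {
    std::vector<double> next(D);
    for (std::size_t i = 0; i < D.size(); ++i) {
        double flux = 0.0;
        if (i > 0)
            flux += diffusionRate(mri[i - 1]) * D[i - 1]   // flux in from the left
                  - diffusionRate(mri[i]) * D[i];          // flux out to the left
        if (i + 1 < D.size())
            flux += diffusionRate(mri[i + 1]) * D[i + 1]   // flux in from the right
                  - diffusionRate(mri[i]) * D[i];          // flux out to the right
        next[i] = D[i] + dt * flux
                - absorption(mri[i]) * D[i];               // uptake term, as in Eq. (1)
    }
    D.swap(next);
}
```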
Instances carry a synthetic molecular stimulation mechanism that triggers a response once a proper environmental signal is sensed. We assume that the stimulation is irreversible and that there is no degradation of the intrinsic activity. We tested the simulation with 1000 neurons (approximately four orders of magnitude fewer than is typical in an 80-year-old patient's substantia nigra29) that are visualized as sphere-like globules, colored green and pulsing radially as an indication of internal metabolic activity. The amplitude of the scale change is adjusted according to the cell's internal state. The cells remain green as long as they are not stimulated. Initial stimulation occurs once the neuron senses drug uptake of at least 0.33% of the maximal capacity of the voxel where it is positioned. Once a neuron triggers stimulation, it reflects the change by shifting its color to yellow, and then to red upon reaching full stimulation. A fully stimulated cell terminates its pulse. Analysis of cell population scenarios was done using MATLAB R2017 by MathWorks (www.mathworks.com).

### Hardware requirements

The simulation was executed on an Intel Core i7-6700HQ quad-core processor running at 2.6 GHz under the Windows 10 OS. Rendering was done using an Nvidia GeForce GTX 970M graphics card on a 1920 × 1080 resolution display. Under this setup we achieved an average simulation frame rate of 71 ± 16 frames per second, providing a smooth visualization of the simulated process.

## Section 2: eBrain Simulations of Drug Diffusion and Tissue Uptake

### Simulation of drug diffusion by intranasal drug delivery

We used the simulation to examine the dynamics of the intranasal delivery system through modification of the system parameters. We ran the simulation with a single theoretical drug that mimics the diffusion of various possible substrates. This drug encapsulates the key properties of a chemical substrate (e.g., deferoxamine) or a protein (e.g., insulin) that can diffuse through tissue. To mimic intranasal infusion, we assigned simulated doses of the drug to the nasal passage regions of the MRI and studied its progression in the brain tissue (Fig. 3 and Supplementary Clip 2). Over time, the drug diffuses through the entire brain image over all slices and planes.

Figure 3 displays snapshots of the diffusion at a specific brain slice over the axial plane under two doses, low (10e3 simulated Molar) and high (1000e3 simulated Molar), at three sampling points (0.15, 0.5, 1.5 simulated hours). The diffusion is not uniform across the environment: in the white areas, where the skull is positioned, diffusion is blocked; in the black areas, where fluids are present, diffusion is faster; and in the grey areas, where the brain tissue is found, diffusion is slower. When the low dose is set in the simulation, at the first sampling point (~0.15 simulated hours) the drug has not yet diffused through the axial plane to the slice indicated in Fig. 3. At the second sampling point (~0.5 simulated hours), the drug is predominantly localized in the most anterior regions of the brain but does not diffuse in the posterior direction. At the third sampling point (~1.5 hours), the drug diffuses posteriorly through half of the virtual slice, halfway through the brain. When a high dose is infused, the diffusion occurs more rapidly. At the first sampling point, the drug has already diffused through the axial plane to the indicated slice in Fig. 3, similar to the second sampling point in the low dose condition.
At the second sampling point for the high dose condition, the drug diffused posteriorly halfway through the slice, similar to the third sampling point for the low dose condition. At the third sampling point in the high dose condition, the drug is fully diffused throughout the observed slice in Fig. 3. We observed an apparent shift in the diffusion dynamics between the two doses, in which increasing the dose accelerated the diffusion between the sampling points.

To test the simulation against experimental data, we compared the diffusion in the brain tissue area with an experimental study in agarose hydrogels, which are used for mimicking subcutaneous tissue27. We plotted the simulation and examined time points that corresponded to the experimentally tested scenarios. We observed a qualitative agreement between the simulation and the experimental results. Similar to the agarose gel assay (Fig. 4A), we see that the diffusion occurs gradually into the tissue. As time progresses, the substrate concentration spans a larger space and the concentration distribution flattens. In early stages, the substrate is concentrated at the origin and a steep drop in concentration occurs as one moves toward the target tissue, whereas at the later stage (5.0 hours) the concentration is gradually diffused within the tissue (Fig. 4B). Moreover, the concentration distribution curves concur between the simulation and the gel, showing an increasing rate over time (Fig. 4C,D). Due to the different nature of the experiments, the simulation shows a shift of the diffusion origin over time, particularly at the early stages (notice the red curve offset between the dynamics of the experimental measurements and the simulation in Fig. 4C,D). The root mean square deviation (RMSD) of the results reveals an average error of 0.08 between the data and the simulation (see Supplementary Table S1 and Fig. S2).

### Simulation of tissue uptake of varying drug amounts

We further used the simulation to study the tissue uptake of the drug over time in the brain with different absorption abilities and drug doses. Figure 5A shows drug uptake by the tissue at a single sampling point (~1.0 simulated hour) under three different doses (rows; 1e3, 10e3, and 1000e3 simulated Molar) and three different tissue uptakes (columns; 2e−3, 20e−3 and 200e−3 simulated absorption units). Consistent with the diffusion results, increasing the dose enhances tissue uptake. A low dose leads to tissue absorption in the most anterior regions of the brain, a medium dose triggers absorption posteriorly halfway through the brain, and a high dose promotes tissue uptake throughout the entire slice. As expected, for low tissue uptake, we observed tissue responses scattered over the area where the drug has diffused. When we increase tissue uptake to an intermediate level, no response is observed in the fluid areas. Within the parenchyma, gradual absorption is observed from the periphery to the middle of the brain. When we set extremely high uptake values, the tissue responds at its maximal magnitude. Moreover, in these extreme cases, even brain areas that are mostly occupied by fluids respond to the presence of the drug. This is possibly due to the minority of the cell population suspended in the fluid, whose uptake is radically enhanced when absorption values reach an extreme state. Alternatively, this reflects the extremely enhanced absorption of tissues positioned outside of the observed axial slice (Fig. 5A).
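For the FDG-PET comparison described next, two simple summary statistics are computed over the simulated uptake field: the mean uptake and the fraction of voxels whose normalized value exceeds the high-signal threshold. This is a minimal sketch; the 0.7 threshold matches the analysis below, while the function itself is illustrative.

```cpp
#include <cstddef>
#include <vector>

// Mean uptake and fraction of voxels above the high-signal threshold
// (0.7 of the normalized data), as used in the FDG-PET comparison.
struct UptakeStats { double mean; double highFraction; };

UptakeStats summarize(const std::vector<double>& uptake) {
    if (uptake.empty()) return {0.0, 0.0};
    double sum = 0.0;
    std::size_t high = 0;
    for (double v : uptake) {
        sum += v;
        if (v > 0.7) ++high; // high-signal-intensity voxel
    }
    const double n = static_cast<double>(uptake.size());
    return { sum / n, high / n };
}
```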
To gain experimental support, we compared FDG-PET brain scans with slices of the MRI at approximately the same perspective plane (basal ganglia view of the axial plane). We tested two cases: an elderly adult whose glucose uptake is normal and an Alzheimer's disease patient whose glucose uptake is significantly reduced in the brain13,30. The simulation, under baseline parameters, shows characteristics similar to the intensity of a normal subject's FDG-PET scan; both show areas with higher uptake in the center of the tissue that gradually decreases at the peripheries and spans most of the brain tissue (Fig. 5B, Top). In both cases, the ventricles show low uptake activity (colored blue in Fig. 5B, Top). We observed similarity between reduced-activity simulation runs (uptake activity reduced threefold) and the intensity of an FDG-PET scan of an Alzheimer's disease subject (Fig. 5B, Bottom). In both cases, the activity amplitude is reduced, showing only isolated islets in which the absorption reaches high magnitude, with a steep decrease in their surrounding areas.

To evaluate the similarity between the simulation results and the FDG-PET scans (Fig. 5B, Bottom), we calculated the mean activity and the activity distribution. We found that the mean uptake value in the baseline simulation is 0.59 and drops to 0.4 in the reduced-activity simulation. The mean uptake value was compared with the average density in the FDG-PET scans, which was 0.52 and 0.41 for the normal subject and the Alzheimer's disease subject, respectively (Supplementary Fig. S3 and Table S4). Additionally, we measured the activity distribution and calculated the areas of highest signal intensity in the overall brain tissue (value > 0.7 of the normalized data; Supplementary Fig. S3 and Table S4). We find that the baseline simulation data shows extensive uptake regions that occupy 22.8% of the brain tissue, whereas the reduced-activity simulations occupy 3.9% of the brain tissue. Similarly, high intensity areas occupy 19.3% and 5.6% of the brain tissue in the FDG-PET scans of the normal subject and the Alzheimer's disease subject, respectively (Supplementary Fig. S5 and Table S4).

## Section 3: Study of Tissue Stimulation Over Time

### Tissue stimulation over time

We used the simulation to study how the drug stimulates the cell population in the substantia nigra over time. In general, the population is gradually stimulated once the drug diffuses to the substantia nigra location and is absorbed by the tissue. As expected, as time progresses the population gradually changes from the dominant default green color to yellow (indicating stimulation has started), until the entire population turns red (indicating that cells are fully activated). Figure 6 and Supplementary Clip 4 show the population at the pars compacta region of the substantia nigra at four sampling points (0.33, 0.5, 0.66 and 0.833 simulated hours), with a specific dose (100e3 simulated Molar) and tissue uptake scenario (40e−3 simulated absorption units). The samples are plotted in three different displays: over the original brain MRI (Fig. 6, Top), over the diffused drug (Fig. 6, Middle) and over the simulated drug uptake into the tissue (Fig. 6, Bottom). Notably, there is an apparent positive correlation between the drug diffusion and tissue uptake and the responsive state of the cells. To study the stimulation dynamics, we plotted the number of stimulated neurons over time (Fig. 6B; average of 10 independent simulation runs under a dose of ~2e3 and uptake of ~1e−3).
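The counting behind Fig. 6B can be pictured with a small sketch of the per-neuron state machine described in Section 1 (green until the 0.33% trigger, yellow while stimulating, red at full stimulation). The trigger threshold is taken from the text; the accumulation rule and the full-stimulation criterion below are illustrative assumptions.

```cpp
#include <cstddef>
#include <vector>

// Per-neuron stimulation state machine: green -> yellow -> red.
// Stimulation is irreversible and does not degrade, as stated above.
enum class State { Green, Yellow, Red };

struct Neuron {
    std::size_t voxel;          // index of the voxel the neuron sits in
    double stimulation = 0.0;   // accumulated, irreversible
    State state = State::Green;
};

void updateNeuron(Neuron& n, double voxelUptake, double voxelMaxCapacity) {
    // Trigger once uptake reaches 0.33% of the voxel's maximal capacity.
    if (n.state == State::Green && voxelUptake >= 0.0033 * voxelMaxCapacity)
        n.state = State::Yellow;
    if (n.state == State::Yellow) {
        n.stimulation += voxelUptake;           // assumed accumulation rule
        if (n.stimulation >= voxelMaxCapacity)  // assumed full-stimulation criterion
            n.state = State::Red;
    }
}

// Count of stimulated neurons, as plotted over time in Fig. 6B.
std::size_t countStimulated(const std::vector<Neuron>& pop) {
    std::size_t c = 0;
    for (const Neuron& n : pop)
        if (n.state != State::Green) ++c;
    return c;
}
```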
We observed three distinct phases in the curve: (1) an initial phase, in which the drug has not yet been delivered to the substantia nigra region and thus the total number of stimulated cells equals zero; (2) an activity phase, in which the drug taken up by the tissue crosses the required threshold and the neuronal population is gradually stimulated; and (3) a saturation phase, in which the population reaches equilibrium and the maximal number of stimulated cells.

### Tissue stimulation core features

To study the tissue stimulation process, we executed the simulation over one hundred different conditions (ranges of 0.1e3–1000e3 simulated Molar and 0.001e−3–100e−3 simulated absorption units). Each curve in Fig. 6C represents a single run of the simulation under a specific dose and tissue uptake combination. The simulation shows different curves for different combinations, indicating that the initial dose and tissue uptake dominate the activity of the tissue. Specifically, we observed changes in three key features of the dynamics of the stimulated neuron population: (1) the amplitude of the curves at the saturation phase varied between runs, indicating variation in the stimulation capacity of the run; (2) the duration of the initial phase before cell stimulation is triggered varied, indicating varying stimulation lags; and (3) the slope of the curve at the activity phase differed between runs, indicating changes in the stimulation rates at the activity phase.

These features can be defined by three core parameters: (1) stimulation capacity - the maximum count of stimulated cells (i.e., the number reaching full activity potential at the saturation phase) (Fig. 7A), (2) stimulation lag - the period until tissue stimulation has been triggered (Fig. 7D) and (3) stimulation rate - the slope of tissue stimulation (calculated as the coefficient of a first-order polynomial fitted to the curve) (Fig. 7G). The way in which these parameters behave as a function of the infused drug is not explicitly programmed and needs to be studied by an extensive analysis of the simulation under a vast range of input values. We plotted a two-dimensional map for each parameter under multiple combinations of dose and tissue uptake (Fig. 7B,7E,7H). The maps display apparent distinct variations between simulations under different initial conditions. We verified that the dose and tissue uptake combinations reach equilibrium for the functions we study (i.e., extending the analyzed range would have a minor effect on the results).

### Tissue stimulation as function of drug amounts

The map plots define a function in which each entry in the map indicates the amplitude of the response to a specific dose and tissue uptake combination (Fig. 7B,7E,7H). From this perspective, we observed that for a fixed tissue uptake (Y axis) value, the functions preserve an order as the dose concentration (X axis) increases: the stimulation capacity and rate show a non-decreasing monotonic order (Fig. 7B,7H respectively) while the stimulation lag shows a non-increasing monotonic order (Fig. 7E). In contrast, for a specific dose concentration (X axis), the parameters as a function of the tissue uptake (Y axis) did not behave uniformly. Namely, stimulation capacity (Fig. 7B) and stimulation lag (Fig.
7E) act differently in two distinct regimes: (1) a non-decreasing monotonic regime, where the function preserves an order, and (2) a skewed distribution regime, where the function increases to a maximum point and then decreases (the threshold between the regimes is ~5e3 and ~1e3 simulated Molar, respectively). Interestingly, the stimulation rate as a function of tissue uptake (Fig. 7H) shows only a skewed distribution regime, with no monotonic regime. The shape and amplitude of the skewed distribution function varied between the doses of each plot. For example, a maximal stimulation rate of ~2.0 cells/sec is observed for a dose of 1000e3 simulated Molar and a tissue uptake of 10e−3 simulated absorption units. The slope gradually decreases as tissue uptake increases (when it reaches 150e−3 simulated absorption units, the stimulation rate drops to 1.5 cells/sec). A schematic description of the different regimes of each function over the tissue uptake values is given in Fig. 7C,7F,7I, where cyan indicates a monotonic regime and orange indicates a skewed distribution regime.

While the monotonic regime is expected, the apparent skewed distribution regimes require further explanation. The skewed distribution points to tissue uptake values that drive the optimal tissue stimulation. Below this value, drug absorption is reduced, whereas above it, absorption is enhanced across the entire tissue, including areas upstream of the substantia nigra. Moreover, at intermediate doses in the skewed distribution regimes, we observed that the response area is bounded by an interdependency diagonal threshold. This analysis characterizes three activity domains: I - an activation area, where the population is stimulated; II - no activity due to insufficient tissue uptake; and III - no activity due to absorption by tissues upstream of the substantia nigra (Fig. 7C,7F,7I). The interdependency diagonal thresholds (black lines) mark where higher doses compensate for lower uptake, and vice versa, to produce similar output (Fig. 7C,7F,7I). The stimulation capacity and stimulation lag show a single interdependency diagonal (approximately at the 0.1e3–5e3 and 0.1e3–10e3 dose ranges, respectively), whereas the activation rate shows two interdependency diagonals: one at the 0.1e3–1e3 dose range for low tissue uptake (0.1e−3–1e−3 simulated absorption units) and the other at the 0.1e3–10e3 dose range for high tissue uptake (1e−3–100e−3 simulated absorption units).

## Section 4: eBrain Analysis to Predict Optimal Treatment

To predict the optimal treatment, we plotted dose and uptake combinations as a function of the stimulation speed, rate and capacity (Fig. 8A). We observed two major clusters, one with high capacity (>700 stimulated cells) and one with low capacity (<400 stimulated cells). A minority of the treatment combinations fall in the intermediate zone. These results correspond well with our previous analysis, which revealed a steep incline from the base value to the maximum. To identify the optimal treatment, we colored the points to indicate the amplitude of the dose (Fig. 8B) and the uptake (Fig. 8C). As expected, the dose is directly correlated with the stimulation speed. Interestingly, the results show that treatment effectiveness correlates with tissue uptake in such a way that combining the treatment with an agent that maximally increases uptake does not necessarily lead to maximal rate, speed, and capacity.
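A sketch of how the three core parameters can be read off a simulated stimulated-cell-count curve follows: capacity as the plateau value, lag as the time of first stimulation, and rate as the slope of a first-order polynomial fit over the activity phase. The 95% cutoff used to delimit the activity phase is an assumption; the text only specifies a first-order polynomial fit.

```cpp
#include <cstddef>
#include <vector>

// Extract stimulation capacity, lag and rate from a time series of
// stimulated-cell counts sampled every dt time units.
struct CoreParams { double capacity; double lag; double rate; };

CoreParams extract(const std::vector<double>& count, double dt) {
    CoreParams p{0.0, -1.0, 0.0};
    std::size_t start = count.size();
    for (std::size_t i = 0; i < count.size(); ++i) {
        if (count[i] > p.capacity) p.capacity = count[i]; // plateau value
        if (count[i] > 0 && i < start) start = i;         // end of initial phase
    }
    if (start == count.size()) return p; // never stimulated
    p.lag = start * dt;
    // Activity phase: from first stimulation until the curve first
    // reaches 95% of capacity (illustrative cutoff).
    std::size_t end = start;
    while (end < count.size() && count[end] < 0.95 * p.capacity) ++end;
    // Least-squares slope of count ~ a*t + b over the activity phase.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    std::size_t n = 0;
    for (std::size_t i = start; i <= end && i < count.size(); ++i, ++n) {
        const double t = i * dt;
        sx += t; sy += count[i]; sxx += t * t; sxy += t * count[i];
    }
    const double denom = n * sxx - sx * sx;
    if (n > 1 && denom != 0.0)
        p.rate = (n * sxy - sx * sy) / denom;
    return p;
}
```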
To identify optimal treatments that minimize resources while maximizing impact, we highlighted the points where both values are above a specific threshold (30% of their maximal value). These points are highlighted in the space plot in Fig. 8D (uptake = 150, 73 and dose = 1000 (black), uptake = 36 and dose = 360 (red), uptake = 36 and dose = 1000 (magenta) and uptake = 150, 73 and dose = 360 (green)). Interestingly, the maximal points over the three functions were achieved by boosting to an immense dose while ignoring combination with other agents. Nonetheless, the data points achieved by a combination of dose and additional agents are positioned in relatively close vicinity. Taken together, these results imply that the efficacy of a drug can be increased by combining it with additional agents at a lower drug dose. As these are merely a fraction of the points, this analysis may direct treatment decisions based on specific individual needs. For example, if the subject cannot tolerate high doses or if there is a shortage of drug supply, the dose can be reduced and the tissue uptake increased by choosing higher-permissibility compounds. Nonetheless, the treatment should be designed carefully to avoid over-increasing tissue uptake, which leads to consumption of the drug before it reaches the relevant area and consequently reduces the overall impact.

## Section 5: Future eBrain Extensions and Potential Applications

### Future extensions of eBrain

The eBrain prototype presented here can be considered a proof of concept of a much broader vision of using 3D simulations in medical research. This study illustrates how eBrain reveals counter-intuitive results, provides mechanistic explanations for observations, and highlights underlying principles in intranasal drug delivery. Future renditions of eBrain will incorporate more molecular, physiological, and pathological data to help better predict disease progression and responsiveness to treatments.

We will extend eBrain to cover multiple delivery systems to allow better theoretical analysis of each delivery system. Future directions include comparing the systems' effectiveness and classifying the optimal delivery for treatments31,32. Along these lines, we have already implemented an intracerebral delivery system in which chemicals and cells are injected directly into the brain. Additionally, we are in the process of modeling the way molecules cross physical barriers, such as the Blood-Brain Barrier (BBB) and the Blood-Cerebrospinal Fluid Barrier (BCSFB). Once implemented, eBrain will support intravenous, intragastric, and intrathecal delivery. Furthermore, using various mathematical and computational techniques, we plan to extend eBrain to support the physicochemical properties of potential drugs. We aim to build elements into our mathematical design that recapitulate the form and stability of chemical formulations (e.g., molecular weight, drug half-life and clearance).

We aim to incorporate more intracellular and intercellular mechanisms of the simulated neurons to support multiple molecular pathways and mechanisms. Each neuron would incorporate a program that mimics a neuron's internal mechanisms18,19. This program will define the regulation of receptors and neurotransmitters in molecular pathways (e.g., apoptosis, energy metabolism) and the function of cellular organelles (e.g., mitochondria, endoplasmic reticulum, and nucleus). To that end, we are on course to define a cellular program for dopaminergic neurons that spans from the substantia nigra to the striatum.
We can then simulate the neuronal response to oxidative stress by infusing toxins to study the progression of cell death.

### eBrain applications in patient specific medical care

One of eBrain's advantages is the ability to incorporate patient-specific MRI data from multiple time points and sources. Thus, eBrain can be used with time-course MRIs to study the progressive effect of drugs in each sample. This enables monitoring of the treatment to predict changes in individuals. In the long run, eBrain can assist clinicians in prescribing the most relevant treatment by simulating its impact in a patient-specific eBrain simulation, together with known experience and population statistics. eBrain and similar attempts will help develop more effective treatment paradigms to enhance personalized medicine beyond the 'one-size-fits-all' principle, in which one protocol is administered to a large group of patients8. Development of this strategy will help clinicians in the future to adjust treatments to target populations of patients33,34,35. The work discussed here illustrates how eBrain can contribute to the patient-specific drug administration process as an additional layer to aid in treatment16. In addition, eBrain has the potential to impact other aspects of precision medicine. Clinical trial design can be improved by utilizing eBrain, allowing trials to be executed in a more focused and refined manner. Similarly, eBrain can serve as an educational tool to train novices by illustrating brain activity. Moreover, eBrain can be used by the patients themselves to help them understand their treatment, increasing their involvement in the medical process.

## References

1. Thompson, L. M. Neurodegeneration: a question of balance. Nature 452, 707–708 (2008).
2. Nutt, J. G. & Wooten, G. F. Clinical practice. Diagnosis and initial management of Parkinson's disease. N Engl J Med 353, 1021–1027 (2005).
3. Sporns, O. Networks of the brain (MIT Press, 2011).
4. Rizek, P., Kumar, N. & Jog, M. S. An update on the diagnosis and treatment of Parkinson disease. CMAJ 188, 1157–1165 (2016).
5. Porsteinsson, A. P. & Antonsdottir, I. M. An update on the advancements in the treatment of agitation in Alzheimer's disease. Expert Opin Pharmacother 18, 611–620 (2017).
6. Kabanov, A. V. & Batrakova, E. V. New technologies for drug delivery across the blood brain barrier. Curr Pharm Des 10, 1355–1363 (2004).
7. Frackowiak, R. & Markram, H. The future of human cerebral cartography: a novel approach. Philos Trans R Soc Lond B Biol Sci 370 (2015).
8. Rantanen, J. & Khinast, J. The Future of Pharmaceutical Manufacturing Sciences. J Pharm Sci 104, 3612–3638 (2015).
9. Erdo, F., Bors, L. A., Farkas, D., Bajza, A. & Gizurarson, S. Evaluation of intranasal delivery route of drug administration for brain targeting. Brain Res Bull 143, 155–170 (2018).
10. Frey, W. H. Bypassing the blood-brain barrier to deliver therapeutic agents to the brain and spinal cord. Drug Delivery Technol 5 (2002).
11. Gupta, U. et al. Intranasal Drug Delivery: A Non-Invasive Approach for the Better Delivery of Neurotherapeutics. Pharm Nanotechnol (2017).
12. Reger, M. A. et al. Effects of intranasal insulin on cognition in memory-impaired older adults: modulation by APOE genotype. Neurobiol Aging 27 (2006).
13. Li, Y. et al. Regional analysis of FDG and PIB-PET images in normal aging, mild cognitive impairment, and Alzheimer's disease. Eur J Nucl Med Mol Imaging 35, 2169–2181 (2008).
14. Hasselbalch, S. G. et al.
No effect of insulin on glucose blood-brain barrier transport and cerebral metabolism in humans. Diabetes 48, 1915–1921 (1999).
15. Gray, S. M., Meijer, R. I. & Barrett, E. J. Insulin regulates brain function, but how does it get there? Diabetes 63, 3992–3997 (2014).
16. Soleimani, M., Shipley, R. J., Smith, N. & Mitchell, C. N. Medical imaging and physiological modelling: linking physics and biology. Biomed Eng Online 8, 1 (2009).
17. Setty, Y., Cohen, I. R., Dor, Y. & Harel, D. Four-dimensional realistic modeling of pancreatic organogenesis. Proc Natl Acad Sci USA 105, 20374–20379 (2008).
18. Setty, Y. et al. How neurons migrate: a dynamic in-silico model of neuronal migration in the developing cortex. BMC Syst Biol 5, 154 (2011).
19. Setty, Y. Multi-scale computational modeling of developmental biology. Bioinformatics 28, 2022–2028 (2012).
20. Zubler, F. et al. Simulating cortical development as a self constructing process: a novel multi-scale approach combining molecular and physical aspects. PLoS Comput Biol 9, e1003173 (2013).
21. Setty, Y. In-silico models of stem cell and developmental systems. Theor Biol Med Model 11, 1 (2014).
22. Pavlides, A., Hogan, S. J. & Bogacz, R. Computational Models Describing Possible Mechanisms for Generation of Excessive Beta Oscillations in Parkinson's Disease. PLoS Comput Biol 11, e1004609 (2015).
23. Broderick, G. & Craddock, T. J. Systems biology of complex symptom profiles: capturing interactivity across behavior, brain and immune regulation. Brain Behav Immun 29, 1–8 (2013).
24. Masuzzo, P., Van Troys, M., Ampe, C. & Martens, L. Taking Aim at Moving Targets in Computational Cell Migration. Trends Cell Biol 26, 88–110 (2016).
25. Hild, W. J., Sobotta, J., Ferner, H. & Staubesand, J. Sobotta atlas of human anatomy. (Urban & Schwarzenberg, 1983).
26. Rohen, J. W., Yokochi, C. & Hall-Craggs, E. C. B. Color atlas of anatomy: a photographic study of the human body. (Igaku-Shoin, 1983).
27. Jensen, S. S., Jensen, H., Cornett, C., Moller, E. H. & Ostergaard, J. Insulin diffusion and self-association characterized by real-time UV imaging and Taylor dispersion analysis. J Pharm Biomed Anal 92, 203–210 (2014).
28. Boersma, G. J. et al. Altered Glucose Uptake in Muscle, Visceral Adipose Tissue, and Brain Predict Whole-Body Insulin Resistance and may Contribute to the Development of Type 2 Diabetes: A Combined PET/MR Study. Horm Metab Res 50, 627–639 (2018).
29. Pakkenberg, B., Moller, A., Gundersen, H. J., Mouritzen Dam, A. & Pakkenberg, H. The absolute number of nerve cells in substantia nigra in normal subjects and in patients with Parkinson's disease estimated with an unbiased stereological method. J Neurol Neurosurg Psychiatry 54, 30–33 (1991).
30. Berti, V. et al. Early detection of Alzheimer's disease with PET imaging. Neurodegener Dis 7, 131–135 (2010).
31. Temsamani, J., Scherrmann, J. M., Rees, A. R. & Kaczorek, M. Brain drug delivery technologies: novel approaches for transporting therapeutics. Pharm Sci Technolo Today 3, 155–162 (2000).
32. Wang, X., Yu, X., Vaughan, W., Liu, M. & Guan, Y. Novel drug-delivery approaches to the blood-brain barrier. Neurosci Bull 31, 257–264 (2015).
33. Wensing, M. Evidence-based patient empowerment. Qual Health Care 9, 200–201 (2000).
34. Akay, M., Exarchos, T. P., Fotiadis, D. I. & Nikita, K. S. Emerging technologies for patient-specific healthcare. IEEE Trans Inf Technol Biomed 16, 185–189 (2012).
35. Baldock, A. L. et al.
From patient-specific mathematical neuro-oncology to precision medicine. Front Oncol 3, 62 (2013).

## Acknowledgements

Data used in the preparation of this article were obtained from the Parkinson's Progression Markers Initiative (PPMI) database (www.ppmi-info.org/data). PPMI – a public-private partnership – is funded by the Michael J. Fox Foundation for Parkinson's Research and funding partners, including Abbvie, Allergan, Avid Radiopharmaceuticals, Biogen, Biolegend, Bristol-Myers Squibb, Celgene, Denali, GE Healthcare, Genentech, GlaxoSmithKline, Lilly, Lundbeck, Merck, Meso Scale Discovery, Pfizer, Prevail Therapeutics, Piramal, Roche, Sanofi Genzyme, Servier, Takeda, Teva, and UCB. Data in Figure 4, Figure 5, Supplementary Figure S2 and Supplementary Figure S5 were reproduced with copyright permission from the relevant journals. The author wishes to thank Dr. Avi Mayo, Ben Hazan, Dan Pri-Tal and Joey Schnurr for their assistance. A special thanks to Dr. Charles Cohan for critically reviewing this manuscript.

## Author information

### Corresponding author

Correspondence to Yaki Setty.

## Ethics declarations

### Competing Interests

A United States Patent Application (#15896344) entitled VIRTUAL REALITY IMAGING OF THE BRAIN by Gateway Institute for Brain Research LLC, with the author as the inventor, has been submitted. The material discussed in this manuscript served as a basis for the application.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Setty, Y. eBrain: a Three Dimensional Simulation Tool to Study Drug Delivery in the Brain. Sci Rep 9, 6162 (2019). https://doi.org/10.1038/s41598-019-42261-3
2021-04-21 00:59:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5074083805084229, "perplexity": 3921.7003717388075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039503725.80/warc/CC-MAIN-20210421004512-20210421034512-00123.warc.gz"}
http://lemon.cs.elte.hu/trac/lemon/changeset/31a1a79019bb6e2d1eff2ea12f46be4b16babe90/lemon-tutorial/
Changeset 39:31a1a79019bb in lemon-tutorial

Timestamp: 02/21/10 15:07:59 (11 years ago)
Branch: default
Phase: public
Message: Fully rework and extend the adaptors section
Files: 2 edited

• adaptors.dox r32

[PAGE]sec_graph_adaptors[PAGE] Graph Adaptors

\todo Clarify this section.

Alteration of standard containers needs only a very limited number of operations, which together satisfy the everyday requirements. In the case of graph structures, different operations are needed that do not alter the physical graph but give another view of it. This is the case if some nodes or arcs have to be hidden or the reverse oriented graph has to be used. It may also happen that in a flow implementation the residual graph has to be accessed by another algorithm, or a node set has to be shrunk for another algorithm. LEMON also provides a variety of graphs for these requirements, called \ref graph_adaptors "graph adaptors". Adaptors cannot be used alone but only in conjunction with other graph representations.

The main parts of LEMON are the different graph structures, generic graph algorithms, graph concepts, which couple them, and graph adaptors. While the former notions are more or less clear, the latter needs further explanation. Graph adaptors are graph classes which serve for considering graph structures in different ways. A short example makes this much clearer. Suppose that we have an instance \c g of a directed graph type, say ListDigraph, and an algorithm

\code
template <typename Digraph>
int algorithm(const Digraph&);
\endcode

is needed to run on the reverse oriented graph. It may be expensive (in time or in memory usage) to copy \c g with the reversed arcs. In this case, an adaptor class is used, which (according to the LEMON \ref concepts::Digraph "digraph concept") works as a digraph. The adaptor uses the original digraph structure and digraph operations when methods of the reverse oriented graph are called. This means that the adaptor has minor memory usage and does not perform sophisticated algorithmic actions. Its purpose is to provide a tool for the cases when a graph has to be used in a specific alteration. If this alteration is obtained by a usual construction, like filtering the node or arc set or considering a new orientation, then an adaptor is worthwhile to use.

To come back to the reverse oriented graph, in this situation the

\code
template <typename Digraph>
class ReverseDigraph;
\endcode

template class can be used. The code looks as follows.

\code
ListDigraph g;
ReverseDigraph<ListDigraph> rg(g);
int result = algorithm(rg);
\endcode

While the algorithm is running, the original digraph \c g is untouched. This technique gives rise to elegant code, and, based on stable graph adaptors, complex algorithms can be implemented easily. In flow, circulation and matching problems, the residual graph is of particular importance. Combining an adaptor implementing this with shortest path algorithms or minimum mean cycle algorithms, a range of weighted and cardinality optimization algorithms can be obtained. For other examples, the interested user is referred to the detailed documentation of particular adaptors.

The behavior of graph adaptors can be very different. Some of them keep the capabilities of the original graph while in other cases this would be meaningless. This means that the concepts they meet depend on the graph adaptor and the wrapped graph.
For example, if an arc of a reversed digraph is deleted, this is carried out by deleting the corresponding arc of the original digraph; thus the adaptor modifies the original digraph. However, in the case of a residual digraph, this operation makes no sense. Let us state one more example here to simplify your work. ReverseDigraph has the constructor

\code
ReverseDigraph(Digraph& digraph);
\endcode

This means that in a situation when a const %ListDigraph& reference to a graph is given, it has to be instantiated with Digraph=const %ListDigraph.

In typical algorithms and applications related to graphs and networks, we usually encounter situations in which a specific alteration of a graph has to be considered. If some nodes or arcs have to be hidden (maybe temporarily) or the reverse oriented graph has to be used, then this is the case. However, actually modifying the physical storage of the graph or making a copy of the graph structure along with the required maps could be rather expensive (in time or in memory usage) compared to the operations that should be performed on the altered graph. In such cases, the LEMON \e graph \e adaptor \e classes could be used.

[SEC]sec_reverse_digraph[SEC] Reverse Oriented Digraph

Let us suppose that we have an instance \c g of a directed graph type, say \ref ListDigraph, and an algorithm

\code
template <typename Digraph>
int algorithm(const Digraph&);
\endcode

is needed to run on the reverse oriented digraph. In this situation, a certain adaptor class

\code
template <typename Digraph>
class ReverseDigraph;
\endcode

can be used.

The graph adaptors are special classes that serve for considering other graph structures in different ways. They can be used exactly the same as "real" graphs, i.e. they conform to the \ref graph_concepts "graph concepts", thus all generic algorithms can be performed on them. However, the adaptor classes cannot be used alone but only in conjunction with actual graph representations. They do not alter the physical graph storage; they just give another view of it. When the methods of the adaptors are called, they use the underlying graph structures and their operations, thus these classes have only negligible memory usage and do not perform sophisticated algorithmic actions. This technique yields convenient tools that help in writing compact and elegant code, and it makes it possible to easily implement complex algorithms based on well-tested standard components.

For solving the problem introduced above, we could use the following code.

\code
ListDigraph g;
ReverseDigraph<ListDigraph> rg(g);
int result = algorithm(rg);
\endcode

Note that the original digraph \c g remains untouched during the whole procedure. LEMON also provides simple "creator functions" for the adaptor classes to make their usage even simpler. For example, \ref reverseDigraph() returns an instance of \ref ReverseDigraph, thus the above code can be written like this.

\code
ListDigraph g;
int result = algorithm(reverseDigraph(g));
\endcode

Another essential feature of the adaptors is that their \c Node and \c Arc types convert to the original item types. Therefore, the maps of the original graph can be used in connection with the adaptor. In the following code, Dijkstra's algorithm is run on the reverse oriented graph but using the original node and arc maps.
\code
ListDigraph g;
ListDigraph::ArcMap<int> length(g);
ListDigraph::NodeMap<int> dist(g);
ListDigraph::Node s = g.addNode();
// add more nodes and arcs
dijkstra(reverseDigraph(g), length).distMap(dist).run(s);
\endcode

In the above examples, we used \ref ReverseDigraph in such a way that the underlying digraph was not changed. However, the adaptor class can even be used for modifying the original graph structure. It allows adding and deleting arcs or nodes, and these operations are carried out by calling suitable functions of the underlying digraph (if it supports them). For this, \ref ReverseDigraph "ReverseDigraph" has a constructor of the following form.

\code
ReverseDigraph(GR& gr);
\endcode

This means that when modification of the original graph has to be avoided (e.g. it is given as a const reference), the adaptor class has to be instantiated with \c GR set to be a \c const type (e.g. GR = const %ListDigraph), as in the following example.

\code
int algorithm1(const ListDigraph& g) {
\endcode

The LEMON graph adaptor classes serve for considering graphs in different ways. The adaptors can be used exactly the same as "real" graphs (i.e., they conform to the graph concepts), thus all generic algorithms can be performed on them. However, the adaptor classes use the underlying graph structures and operations when their methods are called, thus they have only negligible memory usage and do not perform sophisticated algorithmic actions. This technique yields convenient and elegant tools for the cases when a graph has to be used in a specific alteration, but copying it would be too expensive (in time or in memory usage) compared to the algorithm that should be executed on it.

The following example shows how the \ref ReverseDigraph adaptor can be used to run Dijkstra's algorithm on the reverse oriented graph. Note that the maps of the original graph can be used in connection with the adaptor, since the node and arc types of the adaptors convert to the original item types.

\code
dijkstra(reverseDigraph(g), length).distMap(dist).run(s);
\endcode

Using \ref ReverseDigraph could be as efficient as working with the original graph, but not all adaptors can be so fast, of course. For example, the subgraph adaptors have to access filter maps for the nodes and/or the arcs, thus their iterators are significantly slower than the original iterators. LEMON also provides some more complex adaptors, for instance, \ref SplitNodes, which can be used for splitting each node in a directed graph, and \ref ResidualDigraph for modeling the residual network for flow and matching problems. Therefore, in cases when rather complex algorithms have to be used on a subgraph (e.g. when the nodes and arcs have to be traversed several times), it could be worth copying the altered graph into an efficient structure and running the algorithm on it.

\note Modification capabilities are not supported for all adaptors. E.g. for \ref ResidualDigraph (see \ref sec_other_adaptors "later"), this makes no sense.

As a more complex example, let us see how \ref ReverseDigraph can be used together with a graph search algorithm to decide whether a directed graph is strongly connected or not. We exploit the fact that a digraph is strongly connected if and only if, for an arbitrarily selected node \c u, each other node is reachable from \c u (along a directed path) and \c u is reachable from each node. The latter condition is the same as requiring that each node is reachable from \c u in the reversed digraph.
\code
template <typename Digraph>
bool stronglyConnected(const Digraph& g) {
  typedef typename Digraph::NodeIt NodeIt;
  NodeIt u(g);
  if (u == INVALID) return true;
  // Run BFS on the original digraph
  Bfs<Digraph> bfs(g);
  bfs.run(u);
  for (NodeIt n(g); n != INVALID; ++n) {
    if (!bfs.reached(n)) return false;
  }
  // Run BFS on the reverse oriented digraph
  typedef ReverseDigraph<const Digraph> RDigraph;
  RDigraph rg(g);
  Bfs<RDigraph> rbfs(rg);
  rbfs.run(u);
  for (NodeIt n(g); n != INVALID; ++n) {
    if (!rbfs.reached(n)) return false;
  }
  return true;
}
\endcode

Note that we have to use the adaptor with the 'const Digraph' type, since \c g is a \c const reference to the original graph structure. The \ref stronglyConnected() function provided in LEMON has a quite similar implementation.

[SEC]sec_subgraphs[SEC] Subgraph Adaptors

Another typical requirement is the use of certain subgraphs of a graph, or in other words, hiding nodes and/or arcs from a graph. LEMON provides several convenient adaptors for these purposes.

\ref FilterArcs can be used when some arcs have to be hidden from a digraph. A \e filter \e map has to be given to the constructor, which assigns \c bool values to the arcs, specifying whether they have to be shown or not in the subgraph structure. Suppose we have a \ref ListDigraph structure \c g. Then we can construct a subgraph in which some arcs (\c a1, \c a2 etc.) are hidden as follows.

\code
ListDigraph::ArcMap<bool> filter(g, true);
filter[a1] = false;
filter[a2] = false;
// ...
FilterArcs<ListDigraph> subgraph(g, filter);
\endcode

The following more complex code runs Dijkstra's algorithm on a digraph that is obtained from another digraph by hiding all arcs having negative lengths.

\code
ListDigraph::ArcMap<int> length(g);
ListDigraph::NodeMap<int> dist(g);
dijkstra(filterArcs(g, lessMap(length, constMap<ListDigraph::Arc>(0))),
  length).distMap(dist).run(s);
\endcode

Note the extensive use of map adaptors and creator functions, which makes the code really compact and elegant.

\note Implicit maps and graphs (e.g. created using functions) can only be used with the function-type interfaces of the algorithms, since they store only references to the used structures.

\ref FilterEdges can be used for hiding edges from an undirected graph (like \ref FilterArcs is used for digraphs). \ref FilterNodes serves for filtering nodes along with the incident arcs or edges in a directed or undirected graph. If both arcs/edges and nodes have to be hidden, then you could use the \ref SubDigraph or \ref SubGraph adaptors.

\code
ListGraph ug;
ListGraph::NodeMap<bool> node_filter(ug);
ListGraph::EdgeMap<bool> edge_filter(ug);
SubGraph<ListGraph> sg(ug, node_filter, edge_filter);
\endcode

As you see, we needed two filter maps in this case: one for the nodes and another for the edges. If a node is hidden, then all of its incident edges are also considered to be hidden, independently of their own filter values. The subgraph adaptors also make it possible to modify the filter values even after the construction of the adaptor class, thus the corresponding graph items can be hidden or shown on the fly. The adaptors store references to the filter maps, thus the map values can be set directly, and even by using the \c enable(), \c disable() and \c status() functions.
\code
ListDigraph g;
ListDigraph::Node x = g.addNode();
ListDigraph::Node y = g.addNode();
ListDigraph::Node z = g.addNode();
ListDigraph::NodeMap<bool> filter(g, true);
FilterNodes<ListDigraph> subgraph(g, filter);
std::cout << countNodes(subgraph) << ", ";
filter[x] = false;
std::cout << countNodes(subgraph) << ", ";
subgraph.enable(x);
subgraph.disable(y);
subgraph.status(z, !subgraph.status(z));
std::cout << countNodes(subgraph) << std::endl;
\endcode

The above example prints out this line.

\code
3, 2, 1
\endcode

Similarly to \ref ReverseDigraph, the subgraph adaptors also allow the modification of the underlying graph structures unless the graph template parameter is set to be a \c const type. Moreover, the item types of the original graphs and the subgraphs are convertible to each other.

The iterators of the subgraph adaptors use the iterators of the original graph structures in such a way that each item with a \c false filter value is skipped. If both the node and arc sets are filtered, then the arc iterators check for each arc the status of its end nodes in addition to its own assigned filter value. If the arc or one of its end nodes is hidden, then the arc is left out and the next arc is considered. (It is the same for edges in undirected graphs.) Therefore, the iterators of these adaptors are significantly slower than the original iterators. When using adaptors, these efficiency aspects should be kept in mind. For example, if rather complex algorithms have to be performed on a subgraph (e.g. the nodes and arcs need to be traversed several times), then it could be worth copying the altered graph into an efficient structure (e.g. \ref StaticDigraph) and running the algorithm on it. Note that the adaptor classes can also be used for doing this easily, without having to copy the graph manually, as shown in the following snippet (the diff interleaves the old \c SmartDigraph \c temp_graph version with the new \c StaticDigraph \c tmp_graph version).

\code
{
  SmartDigraph temp_graph;
  ListDigraph::NodeMap<SmartDigraph::Node> node_ref(g);
  digraphCopy(filterNodes(g, filter_map), temp_graph)
  StaticDigraph tmp_graph;
  ListDigraph::NodeMap<StaticDigraph::Node> node_ref(g);
  digraphCopy(filterNodes(g, filter_map), tmp_graph)
    .nodeRef(node_ref).run();
  // use temp_graph
  // use tmp_graph
}
\endcode

Another interesting adaptor in LEMON is \ref SplitNodes. It can be used for splitting each node into an in-node and an out-node in a directed graph. Formally, the adaptor replaces each node u in the graph with two nodes, namely node uin and node uout. Each arc (u,v) in the original graph will correspond to an arc (uout,vin). The adaptor also adds an additional bind arc (uin,uout) for each node u of the original digraph. The aim of this class is to assign costs to the nodes when using algorithms which would otherwise consider arc costs only. For example, let us suppose that we have a directed graph with costs given for both the nodes and the arcs. Then Dijkstra's algorithm can be used in connection with \ref SplitNodes as follows.

\note Using \ref ReverseDigraph could be as efficient as working with the original graph, but most of the adaptors cannot be so fast, of course.

[SEC]sec_other_adaptors[SEC] Other Graph Adaptors

Two other practical adaptors are \ref Undirector and \ref Orienter. \ref Undirector makes an undirected graph from a digraph, disregarding the orientations of the arcs. More precisely, an arc of the original digraph is considered as an edge (and as two arcs, as well) in the adaptor. \ref Orienter can be used for the reverse alteration: it assigns a certain orientation to each edge of an undirected graph to form a directed graph.
A \c bool edge map of the underlying graph must be given to the constructor of the class, which defines the direction of the arcs in the created adaptor (with respect to the inherent orientation of the original edges).

\code
ListGraph graph;
ListGraph::EdgeMap<bool> dir_map(graph, true);
Orienter<ListGraph> directed_graph(graph, dir_map);
\endcode

LEMON also provides some more complex adaptors, for instance, \ref SplitNodes, which can be used for splitting each node of a directed graph into an in-node and an out-node. Formally, the adaptor replaces each node u in the graph with two nodes, namely uin and uout. Each arc (u,v) of the original graph will correspond to an arc (uout,vin). The adaptor also adds an additional bind arc (uin,uout) for each node u of the original digraph. The aim of this class is to assign costs or capacities to the nodes when using algorithms which would otherwise consider arc costs or capacities only. For example, let us suppose that we have a digraph \c g with costs assigned to both the nodes and the arcs. Then Dijkstra's algorithm can be used in connection with \ref SplitNodes as follows.

\code
\endcode

Note that this problem can be solved more efficiently with map adaptors. These techniques help in writing compact and elegant code and make it possible to easily implement complex algorithms based on well-tested standard components. For instance, in flow and matching problems the residual graph is of particular importance. Combining the \ref ResidualDigraph adaptor with various algorithms, a range of weighted and cardinality optimization methods can be obtained directly.

\note This problem can also be solved using map adaptors to create an implicit arc map that assigns to each arc the sum of its cost and the cost of its target node. This map can be used with the original graph more efficiently than the above solution.

Another nice application is the problem of finding disjoint paths in a digraph. The maximum number of \e edge \e disjoint paths from a source node to a sink node in a digraph can easily be computed using a maximum flow algorithm with all arc capacities set to 1. On the other hand, \e node \e disjoint paths cannot be found directly using a standard algorithm. However, the \ref SplitNodes adaptor makes it really simple. If a maximum flow computation is performed on this adaptor, then the bottleneck of the flow (i.e. the minimum cut) will be formed by bind arcs, thus the found flow will correspond to the union of some node disjoint paths in terms of the original digraph.

In flow, circulation and matching problems, the residual network is of particular importance, and it is implemented in \ref ResidualDigraph. Combining this adaptor with various algorithms, a range of weighted and cardinality optimization methods can be implemented easily. To construct a residual network, a digraph structure, a flow map and a capacity map have to be given to the constructor of the adaptor, as shown in the following code.

\code
ListDigraph g;
ListDigraph::ArcMap<int> flow(g);
ListDigraph::ArcMap<int> capacity(g);
ResidualDigraph<ListDigraph> res_graph(g, capacity, flow);
\endcode

\note In fact, this class is implemented using two other adaptors: \ref Undirector and \ref FilterArcs.

[TRAILER]

• toc.txt r37

**_algs_with_maps
* sec_graph_adaptors
** sec_reverse_digraph
** sec_subgraphs
** sec_other_adaptors
* sec_lp
* sec_lgf
2020-09-25 01:29:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8137652277946472, "perplexity": 1652.243620281959}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400221382.33/warc/CC-MAIN-20200924230319-20200925020319-00786.warc.gz"}
https://scoskey.org/m387/homework
# m387

Math 387 course resources

# Math 387 homework

Draft Overleaf template

## Week 14

• Reading for Monday: Finish section 5.2
• Group work for Monday: 230-234
• Reading for Wednesday: Section 5.3
• Group work for Wednesday: 237-240
• Homework due Tuesday 4/27:
• Problem 230
• Problem 232
• Problem 238
• Problem M: Beginning with the result from 238, use a partial fraction decomposition, together with our power series formulas, to find an explicit formula for $a_i$.

## Week 13

• Reading for Monday: Section 5.1
• Group work for Monday: 207-208, 211-213
• Reading for Wednesday: Start Section 5.2
• Group work for Wednesday: 222-228
• Homework due Tuesday 4/20:
• Problem 213
• Problem K: Suppose you can take between 0 and 3 apples, between 2 and 5 pears, and between 4 and 7 bananas. Suppose you want to take 10 fruits in total. How many ways are there to do this? Set up the problem using generating polynomials; you may use a computer to do the algebra (see the polynomial-multiplication sketch at the end of this page).
• Problem 226
• Problem L: Recall the power series for $(1-x)^{-1}$ is $\sum x^k$. (a) Since $(1-x)^{-2}$ is $(1-x)^{-1}(1-x)^{-1}$, find its power series by evaluating $(\sum x^k)(\sum x^k)$. (b) Since $(1-x)^{-2}$ is the derivative of $(1-x)^{-1}$, find its power series by evaluating the derivative of $\sum x^k$.

## Week 12

• Reading for Monday: Start The Coloring of Graphs (via Blackboard)
• Reading for Wednesday: Finish The Coloring of Graphs
• Group work for Wednesday: CoG exercise 10
• Homework due Tuesday 4/6:
• Problem G: What is the chromatic polynomial of the complete graph on $k$ vertices? Explain your answer.
• Problem H: Use the reduction algorithm to calculate the chromatic polynomial of the pentagon graph (the edges are (1,2),(2,3),(3,4),(4,5),(5,1)).
• Problem I: The degree of the chromatic polynomial is equal to the number of vertices of the graph. Use induction and the deletion/contraction recurrence to prove this is always the case.
• Problem J: The constant term of the chromatic polynomial is always equal to $0$. Explain why this is always the case. [Hint: how many ways are there to color a graph using a palette of $0$ colors? What does this say about $p(0)$?]
• Notebook 2 due Thursday 4/8: Problems 105-110, 114, 116, 118, 120, 122-123, 126, 141-146, 149-156, 158-159, 164-170, 172-178, 191, 193-194, 196-198, 200-202

## Week 11

• Reading for Monday: Section 4.5
• Group work for Monday: 191, 193-194
• Group work for Wednesday: 196-198, 200-202
• Homework due Thursday 4/1:
• Problem 194: Describe Kruskal's (or Prim's) algorithm carefully, and briefly explain why it "works"
• Problem 198
• Problem E: Draw a connected weighted graph (not a tree) with at least 7 vertices and at least 14 edges. Run Kruskal's algorithm on the graph to find the minimum spanning tree.
• Problem F: For the same graph as in Problem E, choose a special vertex $v$ and run Dijkstra's algorithm to find the tree of shortest paths from $v$. Compare with the tree from Problem E.

## Week 10

• Reading for Monday: Start section 4.3
• Group work for Monday: 164-170
• Reading for Tuesday: Finish section 4.3
• Group work for Wednesday: 172-178
• Homework due Tuesday 3/23:
• Problem A: Explain why the last element $b_{n-1}$ of a Prufer code sequence B is always $n$.
• Problem B: Find the tree for the following Prufer code: 1,2,1,3,1,4,1,5,1,6
• Problem C: Write all sixteen possible Prufer codes with $n=4$ (so the codes have length 2) and draw the corresponding trees.
• Problem D: What is the relationship between the degree of a vertex, and the number of times the vertex occurs in the Prufer code? Give a complete explanation (proof) your answer is correct. ## Week 9 • Monday: Catch-up and homework questions • Tuesday: Exam posted, homework 8 due • Wednesday: No class • Sunday 3/14: Exam due, pi day ## Week 8 • Reading for Monday: Section 4.1 • Group work for Monday: 141-146, 149-150 • Group work for Wednesday: 151-156, 158-159 • Homework due Tuesday 3/9: 141, 146, 152, 155 ## Week 7 • Reading for Monday: Section 3.3 again • Group work for Monday: 105-110, 114, 116 • Reading for Wednesday: Section 3.3.1, 3.3.2 • Group work for Wednesday: 118, 120, 122-123, 126 • Homework due Tuesday 3/2: 107, 112, 123, 126 ## Week 6 • Holiday on Monday! • Reading for Wednesday: Preview section 3.3 • Group work for Wednesday: Review, catch up, bonus problems! • Homework due Tuesday 2/23: Class survey ## Week 5 • Reading for Monday: Secton 3.1 • Group work for Monday: 89-94 • Reading for Wednesday: Section 3.2 • Group work for Wednesday: 96-99, 102 • Homework due Thursday 2/18: 76, 90, 96, 102 • Notebook 1 due Thursday 2/18: Problems 1-24, 31-36, 38-40, 50-54, 57-61, 66-67, 71-72, 74(a), 75, 77, 79, 87-94, 96-99, 102 ## Week 4 • Reading for Monday: Section 2.2 again, including 2.2.1 • Group work for Monday: 57-61, 62(read), 66-67 • Reading for Wednesday: Section 2.3 • Group work for Wednesday: 71-72, 74(a), 75, 77, 79, 87-88 • Homework due Tuesday 2/9: 63, 73, 75, 88 ## Week 3 • Reading for Monday: Section 1.4, 1.5, 1.6 • Group work for Monday: 31-36, 38-40 • Reading for Wednesday: 2.1, 2.2 • Group work for Wednesday: 50-54 • Homework due Tuesday 2/2: 35, 38, 41, 56 ## Week 2 • Holiday on Monday! • Reading for Wednesday: Section 1.3 • Group work for Wednesday: 17-24 • Homework due Tuesday 1/26: 18(a), 19(a)(b), 22, 29 ## Week 1 • Reading for Monday: Sections 1.1, 1.2 • Group work for Monday: 1-7 • Group work for Wednesday: 8-16 • Homework due Thursday 1/21 (end of day): 2, 6, 9, 15
https://math.stackexchange.com/questions/1046617/fourier-transform-of-compactly-supported-differentiable-function/1047567
# Fourier transform of compactly supported differentiable function

Let $K$ be the space of infinitely differentiable functions $\mathbb{R}\to\mathbb{C}$ with compact support. I read the unproved statement in Kolmogorov-Fomin's Элементы теории функций и функционального анализа (*Elements of the Theory of Functions and Functional Analysis*, p. 454 here) that the Fourier transform defines a bijective operator $F:K\to Z$, where $Z$ is the space of entire functions $\psi:\mathbb{C}\to\mathbb{C}$ such that $$\forall z\in\mathbb{C}\quad|z|^q|\psi(z)|\le C_q e^{a|\operatorname{Im} z|}$$

I have been able to prove to myself that $F[\varphi](z):=\int_{\mathbb{R}}\varphi(x)e^{-izx}d\mu_x$ is entire, using an argument analogous to this. I have also been able to understand why $F[\varphi]$ satisfies the above inequality $|z|^q|F[\varphi](z)|\le C_q e^{a|\operatorname{Im} z|}$, thanks to Tom and the very interesting resource he has linked.

I understand that $\frac{1}{2\pi}\int_{\mathbb{R}}\psi(\lambda)e^{-ix\lambda}d\mu_{\lambda}$ is defined for all $\psi \in Z$, but I do not see why it must be in $K$. Could anybody explain that? I $\infty$-ly thank you!

• Perhaps something like math.umn.edu/~garrett/m/fun/notes_2013-14/paley-wiener.pdf will help? – Tom Dec 1 '14 at 15:41
• @Tom I've been able to understand the proof of the inequality even with my $\varepsilon$ of knowledge of these wonderfully interesting topics. I heartily thank you! The only thing I still don't grasp is how to see $F$'s bijectivity. – Self-teaching worker Dec 1 '14 at 17:02
• @DavideZena: You missed an absolute value. You should have $e^{a|\Im z|}$ for the bound. Perhaps that is confusing you? – Disintegrating By Parts Dec 1 '14 at 19:19
• @T.A.E. Oh, thank you: edited. No, I mistyped, but I had $|\Im z|$ in mind. What I don't grasp is why $\forall\psi\in Z\quad F^{-1}[\psi]\in K$... – Self-teaching worker Dec 1 '14 at 19:50
• @DavideZena: Have you seen the Paley-Wiener theorem? This is typically cast as a problem of holomorphic functions in the right half-plane, but it applies to the upper half-plane, too. – Disintegrating By Parts Dec 1 '14 at 21:31

Suppose $\psi$ is an entire function, and that there is a positive integer $q$ and real $a > 0$ such that $$|s|^{q}|\psi(s)| \le e^{a|\Im s|},\;\;\; s \in \mathbb{C}.$$ Let $\phi(s) = e^{ixs}\psi(s)$ for any $x > a$. Then $\phi$ satisfies $$|s|^{q}|\phi(s)| \le e^{-|a-x|\Im s},\;\;\; \Im s \ge 0.$$ Then, by Cauchy's theorem, the integral of $\phi$ over $[-R,R]$ equals the negative of the integral over the semicircular contour $|s|=R$ with $\Im s \ge 0$. That is, $$\int_{-R}^{R}\phi(s)\,ds = -\int_{0}^{\pi}\phi(Re^{i\theta})Re^{i\theta}i\,d\theta.$$ The integral on the right is bounded by $$\int_{0}^{\pi}\frac{1}{R^{q}}e^{-|a-x|R\sin\theta}R\,d\theta = \frac{1}{R^{q-1}}\int_{0}^{\pi}e^{-|a-x|R\sin\theta}\,d\theta.$$ For any $q \ge 1$, the right side converges to $0$ as $R\rightarrow \infty$ by the Lebesgue bounded convergence theorem. Therefore, $$\lim_{R\uparrow\infty}\int_{-R}^{R}e^{isx}\psi(s)\,ds =0,\;\;\; x > a.$$ You can close the contour in the lower half-plane for $x < -a$ in order to conclude that the inverse Fourier transform of $\psi$ is $0$ for $|x| > a$.
• Although Kolmogorov-Fomin's text tends to be quite ambiguous and doesn't say whether $a$ and $q$ are fixed, I think, from the proof of the Paley-Wiener theorem, that the correct definition of $Z$ is the set of entire functions $\psi$ such that $\exists a>0:\forall q\in\mathbb{N}^+\quad\exists C_q\in\mathbb{R}:\forall z\in\mathbb{C}\quad|z|^q|\psi(z)|\le C_q e^{a|\operatorname{Im} z|}$ (and this $a$ is such that $\text{supp}(\varphi)\subset B(0,a)$): am I right? That allows us to verify that $R^{1-q}\int_0^\pi e^{-|a-x|R\sin\theta}d\theta\to 0$. Thank you so much again! – Self-teaching worker Dec 2 '14 at 10:22
• One thing isn't clear to me: why $F^{-1}[\psi]\in C^\infty(\mathbb{R})$. The tools used by the paper linked to by Tom are unknown to me: can it not be proved by more elementary means? I see that $\int_{-R}^R(is)^Ne^{isx}\psi(s)ds=\frac{\partial^N}{\partial x^N}\int_{-R}^Re^{isx}\psi(s)ds$ by dominated convergence, but I'm not sure that it's straightforward that $\frac{\partial^N}{\partial x^N}\text{PV}\int_{-\infty}^\infty e^{isx}\psi(s)ds=\lim_{R\to\infty}\frac{\partial^N}{\partial x^N}\int_{-R}^Re^{isx}\psi(s)ds$... $\infty$ thanks! – Self-teaching worker Dec 2 '14 at 11:08
• I missed that: $\int_{\mathbb{R}}\phi d\mu$ exists and, therefore, $\int_{\mathbb{R}}\phi d\mu=\text{PV}\int_{-\infty}^\infty\phi(t)dt$. Thank you very, very much! – Self-teaching worker Dec 2 '14 at 20:37
• @DavideZena: If $\phi \in L^{2}(\mathbb{R})$, it is also useful to know that $PV \int_{-\infty}^{\infty}e^{-ist}\phi(t)\,dt$ converges in $L^{2}(\mathbb{R})$, even though you may not be able to say much about pointwise convergence at any particular $s$. – Disintegrating By Parts Dec 2 '14 at 20:49
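One step the answer above leaves implicit is why $\frac{1}{R^{q-1}}\int_0^\pi e^{-|a-x|R\sin\theta}\,d\theta \to 0$ even when $q = 1$. A standard way to make this quantitative, added here as a sketch rather than something stated in the original thread, is Jordan's inequality $\sin\theta \ge 2\theta/\pi$ on $[0,\pi/2]$. Writing $c = |a-x| > 0$,

$$\int_{0}^{\pi}e^{-cR\sin\theta}\,d\theta = 2\int_{0}^{\pi/2}e^{-cR\sin\theta}\,d\theta \le 2\int_{0}^{\pi/2}e^{-2cR\theta/\pi}\,d\theta = \frac{\pi}{cR}\left(1-e^{-cR}\right) \le \frac{\pi}{cR},$$

so the quantity in the answer is at most $\pi/(cR^{q})$, which tends to $0$ for every $q \ge 1$ without appealing to the bounded convergence theorem.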
https://stackoverflow.com/questions/29992842/arrange-lua-lualatex-data-in-a-latex-tex-tabular
# Arrange lua/lualatex data in a Latex/tex tabular

I have data in Lua, accessible to a LaTeX/TeX document (it is an array). I am trying to show this data in a LaTeX tabular, but everything I have tried so far has failed :( . Here is an example:

```latex
\begin{tabular}{|c|c|c|}\hline
A&B&C\\ \hline
2010 & 2,78 &\\ \hline
\luaexec{for i=1,nA do; tex.print(i.."& "..data_a[1][i].." &"..data_a[2][i]..[[\\ \hline]]); end;}
\end{tabular}
```

I get this error: "Use of \@array doesn't match its definition". Basically, everything would work except when I try to put a newline `\\` in the loop. Any idea welcome!!

• It seems to me that you have to escape the `\` characters: `\` becomes `\string\\`. (So that `\\` becomes `\string\\ \string\\`; no space between tokens.) For less tedious ways to do this have a more in-depth look at the Lua(La)TeX manual. – Pier Paolo May 1 '15 at 19:26
• In fact the `[[...]]` sequence does this job of escaping. I tried quite a lot of configurations: luadirect, luaexec, luacode, luacode*. I tried Lua-escaping the `\` "one by one", such as `"\\\\ \\hline"` instead of `[[\\ \hline]]`. But to be sure I tried again... and now it works!! `\begin{tabular}{|c|c|c|}\hline A&B&C\\ \hline 2010 & 2,78 &\\ \hline \luaexec{for i=1,nA do; tex.print(i.."& "..data_a[1][i].." &"..data_a[2][i].."\\\\ \\hline"); end;} \end{tabular}` I don't understand well why, but it works :) – user1771398 May 1 '15 at 20:16

If you are running into issues with representing `\` in a string included inside luaexec, you may consider a different representation that produces the same result. For example, a backslash followed by a space is equivalent to `"\92\32"` or `string.char(92, 32)`.

```latex
\begin{tabular}{|c|c|c|c|}\hline
A&B&C&D\\ \hline
```
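As a complement to the workaround in the comments above, here is a minimal runnable sketch of the row-building logic in plain Lua. The names `data_a` and `nA` follow the question; the `rows` helper, the sample numbers, and the plain `print` fallback are illustrative assumptions of mine. Inside a LuaLaTeX document you would pass the result to `tex.print` instead.

```lua
-- Build tabular rows from a Lua array of columns. The escaping happens
-- once, in Lua source: the Lua string "\\\\ \\hline" contains the
-- characters `\\ \hline`, which is exactly what LaTeX expects after a row.
local data_a = { { "2,78", "3,10" }, { "1,50", "2,20" } }  -- sample data (assumption)
local nA = 2

local function rows(a, n)
  local out = {}
  for i = 1, n do
    out[#out + 1] = string.format("%d & %s & %s \\\\ \\hline", i, a[1][i], a[2][i])
  end
  return table.concat(out, "\n")
end

-- Inside \luaexec one would write: tex.print(rows(data_a, nA))
-- In plain Lua we can inspect the generated TeX on the console:
print(rows(data_a, nA))
```

The key point, as the comments note, is that each `\` meant to reach TeX must be doubled in an ordinary (non-long-bracket) Lua string literal.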
https://physics.stackexchange.com/questions/287319/calculating-velocity-of-a-car-based-on-engine-power
# Calculating velocity of a car based on engine power

Is it possible to calculate the velocity of a car accelerating from rest at full capacity, given the power of the engine and the mass of the car? I have a method of solving for the velocity with respect to time, but I fear it may contain fallacies since I'm somewhat of a physics novice. I will try to be as concise as possible, but let me know if I need to expand on any of my work.

Given the horsepower of the engine$^1$, P, there is a relation between power, time, and energy: $P = \frac{dE}{dt}$. Thus $E = Pt$. Next (making sure to use the kW value for P, not the hp value) substitute this value into the kinetic energy equation to get $Pt = \frac{1}{2} mv^2$. Then solve for the velocity:

$$v = \sqrt{\frac{2Pt}{m}}$$

This gives the velocity of the car void of any air resistance. To account for air resistance, first take the derivative of the velocity to get the acceleration:

$$a(t) = v'(t) = \sqrt{\frac{P}{2mt}}$$

The velocity of a free-falling object with air resistance with respect to time t can be modeled by this equation$^2$:

$$v = \sqrt{\frac{2mg}{pAC_d}} \tanh\left(t\sqrt{\frac{gpAC_d}{2m}}\right)$$

To modify this model to suit the situation, simply replace g with a(t), since the object is accelerating due to the force of the engine, not gravity:

$$v = \sqrt{\frac{2m\cdot a(t)}{pAC_d}} \tanh\left(t\sqrt{\frac{a(t)\cdot pAC_d}{2m}}\right)$$

Is this an accurate method of arriving at the velocity? If not, where did I go wrong and how can I fix it?

1. In old times, horsepower ratings used to purely measure the power of the engine alone, but over time it's been reformed to represent a closer approximation of an engine's output as actually installed in a car. So luckily, no adjustments will be necessary to compensate for energy lost as heat or by any other means.
2. Excluding g, any new variables introduced in this equation are constants that pertain to the drag force equation and are not of any concern in this situation.

• I don't think it worked; the $t\to\infty$ limit for the terminal velocity/max speed of your car looks like 0. – innisfree Oct 19 '16 at 0:41
• You can't replace $g$ with $a(t)$. $g$ appears as a constant related to the force of gravity. The actual acceleration changes. – garyp Oct 19 '16 at 2:56

Here's a much simpler way to think about this: air resistance dominates the car's velocity, so the car rapidly reaches the terminal velocity at which the applied force ($F$) and the drag force ($F_D$) balance. The "tanh(...t)" expression is irrelevant at steady state.

The drag force is $F_D = (1/2) \rho C_d A v^2$, where $\rho$ is the density of air, $C_d$ is the drag coefficient, and $A$ is the frontal area of the car. (To see where this comes from, and for a better explanation, see http://scitation.aip.org/content/aapt/journal/tpt/50/7/10.1119/1.4752039 -- I think the PDF is freely available.)

Power = Force × velocity, so our force balance becomes $P / v = (1/2) \rho C_d A v^2$, or $P = (1/2) \rho C_d A v^3$, giving a nice relationship between engine power and velocity. (Again, see the paper for more.)

• How about some $\LaTeX$? Try e.g. $$E=mc^2$$ – innisfree Oct 19 '16 at 2:57
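The question's drag-free formula and the answer's cubic force balance are easy to check numerically. Below is a minimal Python sketch (mine, not from the thread) that evaluates both; the mass, power, air density, drag coefficient, and frontal area are illustrative assumptions, not values from the post. A more careful treatment, in line with garyp's objection to substituting $a(t)$ for $g$, would instead integrate the equation of motion $m\,dv/dt = P/v - \tfrac{1}{2}\rho C_d A v^2$ numerically.

```python
import math

# Illustrative assumptions (not from the thread): roughly a 150 kW, 1500 kg car.
P   = 150e3   # engine power, W
m   = 1500.0  # mass, kg
rho = 1.225   # air density, kg/m^3
Cd  = 0.30    # drag coefficient (dimensionless)
A   = 2.2     # frontal area, m^2

def v_no_drag(t):
    """Drag-free velocity from the question: Pt = (1/2) m v^2."""
    return math.sqrt(2.0 * P * t / m)

def v_top():
    """Top speed from the answer's force balance: P = (1/2) rho Cd A v^3."""
    return (2.0 * P / (rho * Cd * A)) ** (1.0 / 3.0)

for t in (1, 5, 10, 30):
    print(f"t = {t:2d} s: v without drag = {v_no_drag(t):5.1f} m/s")
print(f"drag-limited top speed ~= {v_top():.1f} m/s")
```

With these sample numbers the drag-free curve quickly exceeds the drag-limited speed, which is the answer's point: at steady state only the cubic balance matters.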
2019-12-08 07:30:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8525572419166565, "perplexity": 342.6465169261461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540507109.28/warc/CC-MAIN-20191208072107-20191208100107-00238.warc.gz"}
https://zbmath.org/?q=an%3A0783.68076
# zbMATH — the first resource for mathematics

Model-checking in dense real-time. (English) Zbl 0783.68076

The paper extends model checking for the branching-time logic CTL (computation tree logic) to the analysis of real-time systems, whose correctness depends on the magnitudes of the timing delays. For specification, the syntax of CTL is extended to allow quantitative temporal operators. The formulas of the resulting logic, Timed CTL (TCTL), are interpreted over continuous computation trees, i.e. trees in which paths are maps from the set of nonnegative reals to system states. Timed graphs are introduced to model finite-state systems. These are state-transition graphs annotated with timing constraints. As the main result, an algorithm for model-checking, i.e. for determining the truth of a TCTL-formula with respect to a timed graph, is developed. The algorithm is exponential in the number of clocks and the length of the timing constraints, but linear in the size of the state-transition graph and the length of the formula. It is shown that the problem is PSPACE-complete. It is argued that choosing a dense domain instead of a discrete domain to model time does not significantly blow up the complexity of the model-checking problem. On the other hand, it is shown that the denseness of the underlying time domain makes the validity problem for TCTL $\Pi^1_1$-hard. The question of deciding whether there exists a timed graph satisfying a TCTL-formula is also undecidable.

##### MSC:

68Q60 Specification and verification (program logics, model checking, etc.)
68Q10 Modes of computation (nondeterministic, parallel, interactive, probabilistic, etc.)
68Q55 Semantics in the theory of computing
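For a feel of where the exponential in the number of clocks comes from: algorithms in this family work on the region graph, whose size is bounded by the classical formula $|X|!\cdot 2^{|X|}\cdot\prod_{x}(2c_x+2)$, where $c_x$ is the largest constant that clock $x$ is compared against. The Python sketch below is my illustration, not part of the review, and the example constants are made up; it simply evaluates that bound.

```python
from math import factorial

def region_bound(max_consts):
    """Upper bound on the number of clock regions (Alur-Dill style):
    |X|! * 2^|X| * prod over clocks x of (2*c_x + 2),
    where c_x is the largest constant clock x is compared against."""
    n = len(max_consts)
    prod = 1
    for c in max_consts:
        prod *= 2 * c + 2
    return factorial(n) * 2 ** n * prod

# Illustrative constants: three clocks compared against 5, 10 and 2.
print(region_bound([5, 10, 2]))      # 76032
print(region_bound([5, 10, 2, 7]))   # adding one clock blows the bound up
```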
2021-03-04 04:05:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.649110734462738, "perplexity": 1161.4140200553775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00597.warc.gz"}
https://tex.stackexchange.com/questions/369731/how-to-wrap-an-equation-while-aligning-the-equal-sign
# How to wrap an equation while aligning the equal sign?

The equation exceeds the margin, and I want to wrap the equation while aligning the equal sign. The following is my original code:

\begin{eqnarray}\label{ 7}
\begin{split}
\beta_{ij}^{js-js}\mid z_{ij} &= \left(1 -\left(\frac{n -2}{\lVert \boldsymbol{z_j}\rVert^2-\frac{2(n-2)\sum_iz_{ij}^2}{\lVert \boldsymbol{z_{i}}\rVert^2}+\frac{(n-2)^2\sum_iZ_{ij}^2}{\lVert \boldsymbol{z_i}\rVert^4}} +\frac{\text{p}-2}{\lVert \boldsymbol{z_{i}} \rVert^2} -\frac{\left(n -2\right)\left(\text{p}-2\right)}{\lVert \boldsymbol{z_{i}} \rVert^2 \left(\lVert \boldsymbol{z_j}\rVert^2-\frac{2(n-2)\sum_iz_{ij}^2}{\lVert \boldsymbol{z_{i}}\rVert^2}+\frac{(n-2)^2\sum_iZ_{ij}^2}{\lVert \boldsymbol{z_i}\rVert^4}\right)} \right)z_{ij} \\
&=\left(1 -\left(\frac{n -2}{\lVert \boldsymbol{z_j}\rVert^2-k_j} +\frac{\text{p}-2}{\lVert \boldsymbol{z_{i}} \rVert^2} - \frac{\left(n -2\right)\left(\text{p}-2\right)}{\lVert \boldsymbol{z_{i}} \rVert^2 \left(\lVert \boldsymbol{z_j}\rVert^2-k_j\right)} \right)\right) z_{ij}
\end{split}
\end{eqnarray}

The result that I want: [image]. I tried \right\\ \left before the cutting position, but I didn't get the desired result. I also tried the following solution, which I found online, but it still did not work in my case. Could anyone offer me some help? Thank you.

\begin{eqnarray*}
y & = & x \\
& & {} + x1 \\
& & {} + x2
\end{eqnarray*}

• You should never use eqnarray; it is completely obsoleted by the amsmath package, which you must also have loaded, as you have split defined. – David Carlisle May 14 '17 at 20:35

You could use equation and \begin{aligned}[b] (as @Mico kindly suggested); this is under the assumption that you want only a single number for the equation. With the [b] option the equation number is printed on the bottom line.

\documentclass{article}
\usepackage{amsmath,mathtools}
\begin{document}
\begin{equation}
\begin{aligned}[b]
\beta_{ij}^{js-js}\mid z_{ij} &= \Biggl(1 -\Biggl(\frac{n -2}{\lVert \boldsymbol{z_j} \rVert^2 - \frac{2(n-2)\sum_iz_{ij}^2}{\lVert \boldsymbol{z_{i}}\rVert^2}+\frac{(n-2)^2\sum_iZ_{ij}^2}{\lVert \boldsymbol{z_i}\rVert^4}} + \frac{\mathrm{p}-2}{\lVert \boldsymbol{z_{i}} \rVert^2} \\
&\qquad-\frac{(n -2)(\mathrm{p}-2)}{\lVert \boldsymbol{z_{i}} \rVert^2 \Bigl(\lVert \boldsymbol{z_j}\rVert^2 -\frac{2(n-2)\sum_iz_{ij}^2}{\lVert \boldsymbol{z_{i}}\rVert^2}+\frac{(n-2)^2\sum_iZ_{ij}^2}{\lVert \boldsymbol{z_i}\rVert^4}\Bigr)} \Biggr) \Biggr) z_{ij} \\
&=\biggl(1 -\biggl(\frac{n -2}{\lVert \boldsymbol{z_j}\rVert^2-k_j} +\frac{\text{p}-2}{\lVert \boldsymbol{z_{i}} \rVert^2} - \frac{(n -2)(\mathrm{p}-2)}{\lVert \boldsymbol{z_{i}} \rVert^2 (\lVert \boldsymbol{z_j}\rVert^2-k_j)} \biggr)\biggr) z_{ij}
\end{aligned}
\end{equation}
\end{document}

giving [image]

What I did, besides using the aforementioned environment:

1. Aligned the second row with a \qquad space, in order to give it an "indent", which is what I think you wanted.
2. Got rid of \left and \right, which were abused (it's best to manually scale the parentheses with commands such as \bigl, \bigr and so on, to get better spacing and scaling). You could tweak it if you want.
3. Replaced \text with \mathrm (\text gives the current font).

• It seems kind of odd to place the equation number on the middle line. Consider replacing the split environment with an aligned[b] environment? – Mico May 14 '17 at 21:13
• @Mico if I manage to correct it I'll do it now, thanks, you're right – Moriambar May 14 '17 at 21:27
• Congrats on passing the 4K rep mark!
– Mico May 14 '17 at 21:43
• there's a closing parenthesis missing in the first equation, and the initial paren in the same equation should be larger, the same size as the one preceding the fraction. – barbara beeton May 14 '17 at 22:15
• @barbarabeeton I'll try and correct this as soon as I come home from work, which will be in around 12 hours, thanks – Moriambar May 15 '17 at 6:16

Never use eqnarray. With some simplifications and changes in the input, notably:

• \norm{...} instead of \lVert...\rVert
• \bm instead of \boldsymbol; also \bm{z_j} and similar have been changed to \bm{z}_j, because the subscript should not be bold
• \text{p} became p; if you really need it upright, it should be \mathrm{p}
• the complicated denominator in the third summand in the top equation has been set as a product
• useless \left and \right removed
• some \, added for spacing around the big parentheses

I left \label{7}, although I suggest a more meaningful name. In any case \label{ 7} is dubious, as it requires \ref{ 7}, which is awkward.

\documentclass{article}
\usepackage{amsmath,mathtools,bm}
\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}
\begin{document}
\begin{equation}\label{7}
\begin{split}
\beta_{ij}^{js-js}\mid z_{ij}
&=\Biggl( 1-\Biggl(\,\frac{n-2}{ \norm{\bm{z}_j}^2- \frac{2(n-2)\sum_i z_{ij}^2}{\norm{\bm{z}_{i}}^2}+ \frac{(n-2)^2\sum_i Z_{ij}^2}{\norm{\bm{z}_i}^4} }+ \frac{p-2}{\norm{\bm{z}_{i}}^2} \\
&\hphantom{{}=\Biggl(1}- \frac{1}{\norm{\bm{z}_i}^2} \frac{(n-2)(p-2)}{ \norm{\bm{z}_j}^2-\frac{2(n-2)\sum_iz_{ij}^2}{\norm{\bm{z}_{i}}^2}+ \frac{(n-2)^2\sum_iZ_{ij}^2}{\norm{\bm{z}_i}^4} } \,\Biggr)\Biggr)z_{ij} \\[2ex]
&=\biggl( 1-\biggl(\, \frac{n-2}{\norm{\bm{z}_j}^2-k_j}+ \frac{p-2}{\norm{\bm{z}_{i}}^2}- \frac{(n-2)(p-2)}{\norm{\bm{z}_{i}}^2 (\norm{\bm{z}_j}^2-k_j)} \,\biggr) \biggr) z_{ij}
\end{split}
\end{equation}
\end{document}

I replaced eqnarray by align and adjusted some of the markup (the use of \boldsymbol still looks a bit suspect).

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}\label{zz}% don't use numbers as labels.
\beta_{ij}^{js-js}\mid z_{ij} &= \Bigl(1 - \bigl(\frac{n -2}{\lVert \boldsymbol{z_j}\rVert^2-\frac{2(n-2)\sum_iz_{ij}^2}{\lVert \boldsymbol{z_{i}}\rVert^2}+\frac{(n-2)^2\sum_iZ_{ij}^2}{\lVert \boldsymbol{z_i}\rVert^4}} +\frac{\mathrm{p}-2}{\lVert \boldsymbol{z_{i}} \rVert^2}\notag\\
-\frac{(n -2)(\mathrm{p}-2)}{\lVert \boldsymbol{z_{i}} \rVert^2 (\lVert \boldsymbol{z_j}\rVert^2-\frac{2(n-2)\sum_iz_{ij}^2}{\lVert \boldsymbol{z_{i}}\rVert^2}+\frac{(n-2)^2\sum_iZ_{ij}^2}{\lVert \boldsymbol{z_i}\rVert^4})} \bigr)z_{ij} \\
&=(1 -(\frac{n -2}{\lVert \boldsymbol{z_j}\rVert^2-k_j} +\frac{\mathrm{p}-2}{\lVert \boldsymbol{z_{i}} \rVert^2} - \frac{(n -2)(\mathrm{p}-2)}{\lVert \boldsymbol{z_{i}} \rVert^2 (\lVert \boldsymbol{z_j}\rVert^2-k_j)} ) \Bigr) z_{ij}
\end{align}
\end{document}

Whatever else you do, don't use eqnarray -- it's badly deprecated. I suggest using a single align environment, not a split environment, replacing all \boldsymbol directives with \bm (from the bm package), and defining a \norm macro to cut down on all the \lVert and \rVert directives. For extra legibility, consider using square brackets in addition to round parentheses. And, rather than \text{p}, write \mathrm{p}, and don't overuse \left and \right.
\documentclass{article}
\usepackage{mathtools,bm}
\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}
\begin{document}
\begin{align}
\beta_{ij}^{js-js} \bigm| z_{ij}
&= \left[1 -\left( \frac{n -2}{\norm{\bm{z_j}}^2 -\frac{2(n-2)\sum_iz_{ij}^2}{\norm{\bm{z_{i}}}^2} +\frac{(n-2)^2\sum_iZ_{ij}^2}{\norm{\bm{z_i}}^4}} +\frac{\mathrm{p}-2}{\norm{\bm{z_{i}}}^2} \right.\right. \notag\\
&\left.\left.\qquad -\frac{(n-2)(\mathrm{p}-2)}{\norm{\bm{z_{i}}}^2 \Bigl( \norm{\bm{z_j}}^2 -\frac{2(n-2)\sum_iz_{ij}^2}{\norm{\bm{z_{i}}}^2} +\frac{(n-2)^2\sum_iZ_{ij}^2}{\norm{\bm{z_i}}^4} \Bigr)} \right)\!\right] z_{ij}\notag\\[2ex]
&=\biggl[1 -\biggl(\frac{n -2}{\norm{\bm{z_j}}^2-k_j} +\frac{\mathrm{p}-2}{\norm{\bm{z_{i}}}^2} -\frac{(n-2)(\mathrm{p}-2)}{\norm{\bm{z_{i}}}^2 (\norm{\bm{z_j}}^2-k_j)} \biggr)\biggr] z_{ij} \label{eq:7}
\end{align}
\end{document}

With multlined from mathtools, and considering some suggestions from the above answers:

\documentclass{article}
\usepackage{geometry}
\usepackage{mathtools}
\begin{document}
\begin{align}\label{eq:num7}
\beta_{ij}^{js-js}\Big| z_{ij}
& = \begin{multlined}[t][0.7\linewidth]
\left[1 - \left(\frac{n -2}{\lVert\boldsymbol{z_j}\rVert^2 -\frac{2(n-2)\sum_iz_{ij}^2}{\lVert\boldsymbol{z_{i}}\rVert^2} +\frac{(n-2)^2\sum_iZ_{ij}^2}{\lVert\boldsymbol{z_i}\rVert^4}} +\frac{\mathrm{p}-2}{\lVert\boldsymbol{z_{i}}\rVert^2} \right.\right. \\
\left.\left. -\frac{(n-2)(\mathrm{p}-2)}{\lVert\boldsymbol{z_{i}}\rVert^2 \Bigl(\lVert\boldsymbol{z_j}\rVert^2 -\frac{2(n-2)\sum_i z_{ij}^2}{\lVert \boldsymbol{z_{i}}\rVert^2} +\frac{(n-2)^2\sum_i Z_{ij}^2}{\lVert \boldsymbol{z_i}\rVert^4} \Bigr)} \right)z_{ij} \right]
\end{multlined} \notag \\[1ex]
& = \left[1 -\left(\frac{n - 2}{\lVert\boldsymbol{z_j}\rVert^2-k_j} + \frac{\mathrm{p}-2}{\lVert \boldsymbol{z_{i}} \rVert^2} - \frac{\left(n -2\right)\left(\mathrm{p}-2\right)}{\lVert\boldsymbol{z_{i}}\rVert^2 \left(\lVert\boldsymbol{z_j}\rVert^2-k_j\right)} \right) \right] z_{ij}
\end{align}
\end{document}

• If you use \sum\limits, you should also enlarge quite a few of the parentheses. – Mico May 14 '17 at 21:15
• @Mico, you're right, I removed \limits. – Zarko May 14 '17 at 22:35
• the size of the opening bracket and parenthesis in the first line doesn't match the size of the closing parenthesis and bracket on the next line. – barbara beeton May 15 '17 at 2:05
• @barbarabeeton, yes, they are :( the parentheses in the denominator are too big; they should be \Big instead of \bigg. Corrected. – Zarko May 15 '17 at 2:33
2019-08-22 18:51:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9776523113250732, "perplexity": 8881.573228656915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317339.12/warc/CC-MAIN-20190822172901-20190822194901-00193.warc.gz"}
http://codeforces.com/problemset/problem/870/F
F. Paths

time limit per test: 4 seconds
memory limit per test: 512 megabytes
input: standard input
output: standard output

You are given a positive integer n. Let's build a graph on vertices 1, 2, ..., n in such a way that there is an edge between vertices u and v if and only if gcd(u, v) ≠ 1. Let d(u, v) be the shortest distance between u and v, or 0 if there is no path between them. Compute the sum of values d(u, v) over all 1 ≤ u < v ≤ n.

The gcd (greatest common divisor) of two positive integers is the maximum positive integer that divides both of the integers.

Input

A single integer n (1 ≤ n ≤ 10^7).

Output

Print the sum of d(u, v) over all 1 ≤ u < v ≤ n.

Examples

Input: 6
Output: 8

Input: 10
Output: 44

Note

All shortest paths in the first example: [figure]. There are no paths between other pairs of vertices. The total distance is 2 + 1 + 1 + 2 + 1 + 1 = 8.
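To make the statement concrete, here is a brute-force Python sketch (mine, not a competitive solution). The edge condition gcd(u, v) ≠ 1 was reconstructed from the note and the samples; the code treats an edge as present exactly under that condition, runs a BFS from every vertex, and counts unreachable pairs as 0, as d(u, v) is defined. It reproduces the sample outputs for n = 6 and n = 10, but at O(n^2) work and memory it is hopeless for n up to 10^7, where a number-theoretic approach is required.

```python
import math
from collections import deque

def total_distance(n):
    # Adjacency lists: edge (u, v) iff gcd(u, v) != 1.
    adj = {u: [v for v in range(1, n + 1) if v != u and math.gcd(u, v) != 1]
           for u in range(1, n + 1)}
    total = 0
    for s in range(1, n + 1):
        # BFS from s; dist holds shortest distances to reachable vertices.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        # d(u, v) = 0 for unreachable pairs, so they contribute nothing.
        total += sum(d for v, d in dist.items() if v > s)
    return total

print(total_distance(6))   # expected 8, per the first sample
print(total_distance(10))  # expected 44, per the second sample
```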
2019-03-22 13:12:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.647554337978363, "perplexity": 147.79392790048985}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202658.65/warc/CC-MAIN-20190322115048-20190322141048-00466.warc.gz"}
https://physics.stackexchange.com/questions/90646/what-is-the-relation-between-electromagnetic-wave-and-photon
# What is the relation between electromagnetic wave and photon?

At the end of this nice video (https://youtu.be/XiHVe8U5PhU?t=10m27s), she says that an electromagnetic wave is a chain reaction of electric and magnetic fields creating each other, so the chain of the wave moves forward. I wonder where the photon is in this explanation. What is the relation between an electromagnetic wave and a photon?

• Please see my answer here. You can understand Willis Lamb's frustration, and the waves and normal modes describe the electromagnetic field. Photons are then the changes of number state of each normal mode - they are like the discrete "communications" the whole EM field has with the other quantum fields of the World that make up "empty space". One can reinterpret this statement as Maxwell's equations being the propagation equation for a lone "photon", but only in terms of propagation equations for the mean of electric and magnetic field .... – WetSavannaAnimal Dec 19 '13 at 0:48
• ... observables when the EM field is in a superposition of $n=1$ Fock states (so it is "one photon propagating"). – WetSavannaAnimal Dec 19 '13 at 0:49

Both the wave theory of light and the particle theory of light are approximations to a deeper theory called Quantum Electrodynamics (QED for short). Light is neither a wave nor a particle; instead it is an excitation in a quantum field.

QED is a complicated theory, so while it is possible to do calculations directly in QED, we often find it simpler to use an approximation. The wave theory of light is often a good approximation when we are looking at how light propagates, and the particle theory of light is often a good approximation when we are looking at how light interacts, i.e. exchanges energy with, something else.

So it isn't really possible to answer the question of where the photon is in this explanation. In general, if you're looking at a system, like the one in the video, where the wave theory is a good description of light, you'll find the photon theory to be a poor description of light, and vice versa. The two ways of looking at light are complementary.

For example, if you look at the experiment described in Anna's answer (which is one of the seminal experiments in understanding diffraction!) the wave theory gives us a good description of how the light travels through the Young's slits and creates the interference pattern, but it cannot describe how the light interacts with the photomultiplier used to record the image. By contrast, the photon theory gives us a good explanation of how the light interacts with the photomultiplier but cannot describe how it travelled through the slits and formed the diffraction pattern.

• This is news, because all my QM teachers told me that photons are abstractions proposed by QED, which is more exact than the wave description. However, this should not stop us from figuring out how the two are related. Actually quanta = particles. – Val Dec 20 '13 at 18:09
• @Val The way we actually calculate things in QED is with a perturbative expansion that involves photons. The underlying exact theory is one of several completely quantum fields. – Kevin Driscoll Dec 20 '13 at 19:26
• There is a sense in which the classical description of light is retrieved as the classical limit of a coherent state of photons. I would say that this would be an appropriate answer to "where is the photon in the classical wave theory of light?" – Prahar May 4 '16 at 18:01
• @Prahar Yes, but you just said it yourself - that's not the reality.
That's just "how it fits in the models"- it doesn't help you outside of the constraints of the models, and that's exactly what the OP is asking here. In the classical wave theory of light... there's no photons. Not one per wave, not "infinite amounts" per wave, just no photons, period. – Luaan May 5 '16 at 11:32 • I think that "excitation of a field instead of waves and particles" is one interpretation, and probably not the most popular one. Many people view fields only as a handy mathematical tool. – Helen Sep 30 '18 at 7:56 In this link there exists a mathematical explanation of how an ensemble of photons of frequency $\nu$ and energy $E=h\nu$ end up building coherently the classical electromagnetic wave of frequency $\nu$. It is not simple to follow if one does not have the mathematical background. Conceptually watching the build up of interference fringes from single photons in a two slit experiment might give you an intuition of how even though light is composed of individual elementary particles, photons, the classical wave pattern emerges when the ensemble becomes large. Figure 1. Single-photon camera recording of photons from a double slit illuminated by very weak laser light. Left to right: single frame, superposition of 200, 1’000, and 500’000 frames. In 1995 Willis Lamb published a provocative article with the title "Anti-photon", Appl. Phys. B 60, 77-84 (1995). As Lamb was one of the great pioneers of 20th century physics it is not easy to dismiss him as an old crank. He writes in the introductory paragraph: The photon concepts as used by a high percentage of the laser community have no scientific justification. It is now about thirty-five years after the making of the first laser. The sooner an appropriate reformulation of our educational processes can be made, the better. There is a lot to talk about the wave-particle duality in discussion of quantum mechanics. This may be necessary for those who are unwilling or unable to acquire an understanding of the theory. However, this concept is even more pointlessly introduced in discussions of problems in the quantum theory or radiation. Here the normal mode waves of a purely classical electrodynamics appear, and for each normal mode there is an equivalent pseudosimple harmonic-oscillator particle which may then have a wave function whose argument is the corresponding normal-mode amplitude. Note that the particle is not a photon. One might rather think of a multiplicity of two distinct wave concepts and a particle concept for each normal mode of the radiation field. However, such concepts are really not useful or appropriate. The "Complementarity Principle" and the notion of wave-particle duality were introduced by N. Bohr in 1927. They reflect the fact that he mostly dealt with theoretical and philosophical concepts, and left the detailed work to postdoctoral assistants. It is very likely that Bohr never, by himself, made a significant quantum-mechanical calculation after the formulation of quantum mechanics in 1925-1926. It is high time to give up the use of the word "photon", and of a bad concept which will shortly be a century old. Radiation does not consist of particles, and the classical, i.e., non-quantum, limit of QTR is described by Maxwell's equations for the electromagnetic fields, which do not involve particles. Talking about radiation in terms of particles is like using such ubiquitous phrases as "You know" or "I mean" which are very much to be heard in some cultures. 
For a friend of Charlie Brown, it might serve as a kind of security blanket.

• Wow, Lamb is actually making me rethink my admittedly amateur perspective on the matter. This quote blew my mind: "It is very likely that Bohr never, by himself, made a significant quantum-mechanical calculation after the formulation of quantum mechanics in 1925-1926." – electronpusher Mar 23 '17 at 7:54
• This is not within the mainstream physics models at present, but a peculiar proposal not validated or supported by model calculations and predictions. – anna v Aug 3 '18 at 3:54
• @anna_v to the limited extent I understand it, I believe that if you read the whole paper and not just the snippet I quoted here you would agree that Lamb's is mainstream physics with mainstream interpretation. – hyportnex Aug 3 '18 at 19:54
• @annav, then again, the chosen answer interpreting everything as fields is not necessarily mainstream physics for many physicists (or, more importantly, not necessarily correct). I think this reference deserves a reading. – Helen Sep 30 '18 at 8:01
• @Helen In my opinion quantum field theory has very many calculational successes in describing particle physics, where it is mainstream. One could argue about its region of validity, as with many mathematical models. For example QCD has more success with lattice QCD, as the expansions of perturbative field theory do not work. I do not think that there is a problem with photons in the standard model, and photons are their own antiparticle. So I will not go to the trouble of reading the paper (no link provided, so it means a library or a paywall) where a prominent physicist discusses new theory. – anna v Sep 30 '18 at 8:49

The photon dilemma

It is postulated by Planck that energy is quantized. According to classical electromagnetic theory, light is an electromagnetic field. This field satisfies a wave equation traveling at the speed of light. Hence, light is an electromagnetic wave. Light consists of photons, and thus each photon carries a unit of energy. This behavior is demonstrated by the photoelectric and Compton effects. Since light is electromagnetic energy, photons must also carry the electromagnetic field, and a unit of it.

While photons are quantum objects, light is still governed by Maxwell's classical theory. The photon model is not fully consistent with Maxwell's equations, since the photon has a dual nature. In fact, light as a wave is well described by Maxwell. Recall that Maxwell's equations don't involve the Planck constant, and thus cannot describe the particle nature of the photon. A complete set of Maxwell's equations should involve this missing element.

In the quantum electrodynamic paradigm, the photon is brought to interact with the electrons by invoking the idea of minimal coupling, where electrons and photons exchange momentum. The photon appears as a mediator between charged particles. At the same time, while a moving charged particle has its self electric field, and a magnetic field that depends on the particle velocity, the photon, the carrier of the electromagnetic energy, is void of these self-fields because it has no charge and no mass. Thus, a chargeless photon can't have electric and magnetic fields accompanying its motion. The appropriate Maxwell's equations should then incorporate the photon's linear momentum as well as its angular momentum. In such a case the new Maxwell's equations can then describe the dual nature of the photon. Like electric charge, the angular momentum is generally a conserved quantity.
The question is how one can correct for these photon properties. One way to achieve that is to employ quaternions, which generically allow many physical properties to be joined in a single equation. This is so because the quaternion algebra is so rich, unlike the ordinary real numbers. To this end we employ the position-momentum commutator bracket, and invoke a photon wavefunction. This wavefunction is constructed from the linear complex combination of the electric and magnetic fields. The outcome of the bracket yields three equations defining the photon electric and magnetic fields in terms of its angular momentum. These equations turn out to be very similar to the fields created by a moving charge. Thus, the electric and magnetic fields of the photon don't require a charge for the photon.

It is intriguing that the photon has no charge and mass but has electric and magnetic fields as well as energy. These fields should also satisfy Maxwell's equations. Doing so yields additional electric and magnetic charge and current densities for the photon. The emergent Maxwell's equations are now appropriate to describe the photon as a quantum particle. These additional terms in Maxwell's equations are the source for describing the photon's quantum electrodynamic behavior. Some emergent phenomena associated with topological insulators, Faraday's rotation effect, the Hall effect and Kerr's effect could be examples of the contributions of these terms to Maxwell's equations.

Here are the quantized Maxwell's equations incorporating the photon linear and angular momentum. These are the electric and magnetic fields due to the photon as a particle:

$$\vec{L}\cdot\vec{E}=-\frac{3\hbar c}{2}\,\Lambda\,, \qquad\qquad \vec{L}\cdot\vec{B}=0\,,$$

and

$$\vec{B}=-\frac{2}{3\hbar c}\,(\vec{L}\times\vec{E})\,,\qquad\qquad\vec{E}=\frac{2 c}{3\hbar}(-\Lambda\,\vec{L}+\vec{L}\times\vec{B})\,.$$

And these are the new Maxwell's equations:

$$\vec{\nabla}\cdot\vec{E}=-\frac{4c}{3\hbar}\,\,(\vec{B}-\frac{1}{2}\,\mu_0\vec{r}\times\vec{J})\cdot\vec{p}+\frac{2}{3\hbar c}\,\vec{E}\cdot\vec{\tau}+\frac{\partial \Lambda}{\partial t}\,,\qquad \vec{\nabla}\cdot\vec{B}=\frac{4}{3\hbar c}\,\,\vec{E}\cdot\vec{p}+\frac{2}{3\hbar c}\,\vec{B}\cdot\vec{\tau}\,,$$

and

$$\vec{\nabla}\times\vec{B}=\frac{1}{c^2}\,\frac{\partial\vec{E}}{\partial t}+\frac{2}{3\hbar c}\left(\Lambda\vec{\tau}+\vec{B}\times\vec{\tau }-\frac{\vec{P}}{\varepsilon_0}\times\vec{p}\right)-\vec{\nabla}\Lambda\,,$$

$$\vec{\nabla}\times\vec{E}=-\frac{\partial\vec{B}}{\partial t}-\frac{2c}{3\hbar}\left(\mu_0\vec{J}\times\vec{L}+\frac{\vec{\tau}}{c^2}\times\vec{E}+2\Lambda\,\vec{p}\right)\,,$$

where

$$-\Lambda=\frac{1}{c^2}\,\frac{\partial\varphi}{\partial t}+\vec{\nabla}\cdot\vec{A}=\partial_\mu A^\mu\,.$$

In the standard electrodynamics $\Lambda=0$ represents the Lorenz gauge condition.

In order to understand the wave-particle dualism, you simply have to understand what time is: in 1905, the Newtonian unique time concept was replaced by a twofold time concept of observed coordinate time and proper time. The observed time is relative and observer-dependent, and it is derived from the intrinsic proper time of the observed particle ("the time measured by a clock following a given object"). Proper time is the more fundamental time concept.

You can understand the wave-particle dualism if you consider the simplest case of a photon, that is, a photon moving at light speed c. The spacetime interval of such photons (which corresponds to their proper time) is zero.
That means that the event of emission and the event of absorption are adjacent in spacetime: the emitting mass particle is transmitting the momentum which is called a photon directly to the absorbing mass particle, without any spacetime between them. That means that the particle characteristics are transmitted directly, without need for any intermediate massless particle. However, for observers the zero spacetime interval is not observable; e.g. between Sun and Earth eight light minutes are observed, even if the spacetime interval of the path of the photon is zero. In spite of the direct transmission of a momentum between two mass particles, observers observe an electromagnetic wave which is filling the gap of eight light minutes.

In summary, particle characteristics are transmitted directly according to the principles of spacetime intervals and proper time, whereas the wave is transmitted according to the principles of the observed spacetime manifold.

Now you will ask: what about photons which are moving slower than c (through gravity fields and through transparent media)? The answer is that here quantum effects such as nonlocality are implied. But it is important to notice that the limit case of photons in vacuum moving at c may be explained and understood classically, without need for any quantum theory.

What are photons?

Photons get emitted every time a body has a temperature higher than 0 Kelvin (the absolute zero temperature). All bodies surrounding us (except black holes) radiate at all times. They emit radiation into the surroundings, and they receive radiation from the surroundings. Max Planck was the physicist who found out that this radiation has to be emitted in small portions, later called quanta and even later called photons. Making some changes in the picture of how electrons are distributed around the nucleus, it was concluded that electrons get disturbed by incoming photons, in this way gain energy, and give back this energy by the emission of photons. And photons are not only emitted by electrons. The nucleus, if well disturbed, emits photons too. Such radiations are called X-rays and gamma rays.

EM radiation is the sum of all emitted photons from the involved electrons, protons and neutrons of a body. All bodies emit infrared radiation; beginning at approx. 500°C they emit visible light, first glowing red and then shining brighter and brighter.

There are some methods to stimulate the emission of EM radiation. It was found out that, beside the re-emission of photons, there is a second possibility to generate EM radiation. Every time an electron is accelerated, it emits photons. This explanation helps to understand what happens in the glow filament of an electric bulb. The electrons in the filament are not moving straight forward; they bump together, running zig-zag. By these accelerations they lose energy, and this energy is emitted as photons. Most of these photons are infrared photons, and some of these photons are in the range of visible light.

In a fluorescent tube the electrons get accelerated with higher energy and they emit ultraviolet photons (which get converted into visible light by the fluorescent coating of the glass). Higher-energy (higher-velocity) electrons reach the nucleus, and the nucleus emits X-rays. As long as the introduced energy is a continuous flow, no one is able to measure an oscillation of the EM radiation.

What are EM waves?

Using a wave generator it is possible to create oscillating EM radiation.
Such radiation is called radio waves. It was found out that a modified LC circuit in unit with a wave generator is able to radiate, and that it's possible to filter out such a modulated radiation (of a certain frequency) from the surrounding noisy EM radiation. So the wave generator has a double function. The generator has to accelerate the electrons inside the antenna rod forward and backward, and by this the photons of the radio wave get emitted; and the generator makes it possible to modulate this EM radiation with a carrier frequency. It has to be underlined that the frequencies of the emitted photons are in the IR range and sometimes in the X-ray range.

There is an optimal ratio between the length of the antenna rod and the frequency of the wave generator. But of course one can change the length of the rod, or one can change the frequency of the generator. This changes only the efficiency of the radiation relative to the needed energy input. To conclude from the length of the antenna rod to the wavelength of the emitted photons is nonsense.

What is the wave characteristic of the photon?

Since the electrons in an antenna rod are accelerated more or less at the same time, they emit photons simultaneously. The EM radiation of an antenna is measurable, and it was found out that the near field of an antenna has two components, an electric field component and a magnetic field component. These two components get converted into each other; they induce each other. At some moments the transmitted energy is in the electric field component, and at others the energy is in the magnetic field component. So why not conclude from the overall picture to the nature of the involved photons? They are the constituents which make up the radio wave.

• The two components do not induce each other, though it's a common misconception (that's what I've been taught in school as well :-). Because of how wide that misconception is, animations now usually show both the electric and magnetic field in phase, to prevent confusion. – Luaan Jul 21 '16 at 9:20
• The final figure here shows the $E$ and $B$ fields oscillating a quarter-turn out of phase. For waves in vacuum that's incorrect; $E$ and $B$ should be in phase. – rob Dec 15 '16 at 19:50
• @HolgerFiedler If the fields are a quarter-turn out of phase, the average value for the Poynting vector is zero and the wave is not transmitting any energy. – rob Dec 16 '16 at 6:51
• @rob Then how does the energy transfer in the near field of an antenna work? And how does a standing EM wave inside a box work? – HolgerFiedler Dec 16 '16 at 8:07
• Those would make good follow-up questions; I don't know if I can answer completely in a comment. – rob Dec 18 '16 at 5:42

You report that in the video it is stated that an electromagnetic wave is "a chain reaction of electric and magnetic fields creating each other so the chain of wave moves forward."

I disagree with this view. There is just one wave, that of the vector potential, or more generally of the four-potential. The electric and magnetic fields are just derivatives of the vector potential and do not "create each other". Rejecting this explanation, we then arrive at your deeper question: "What is the relation between electromagnetic wave and photon?"

Until a few years ago I shared the opinion of Willis Lamb, that the photon is a fictive particle. I finally changed my mind because such an explanation cannot account for low-intensity diffraction experiments. Indeed, how can a single atom or molecule absorb a wave that is much larger than it?
Note that I don't intend to fork off a discussion on this here, but want to give my interpretation. This is that the vector potential describes the probability of a photon being absorbed, just like the Schrödinger and Dirac wave functions do for an electron. Indeed, the Maxwell equations in vacuum can be written as a wave equation that closely resembles the Klein-Gordon equation. This interpretation implies that the photon indeed exists as a particle, much smaller than an atom and at least as small as a nucleon.

• "how can a single atom or molecule absorb a wave that is much larger than it?" - the same question can be asked of how an electrically small antenna ($dimension \ll \lambda$), say a Hertz dipole, can absorb an essentially infinite plane wave. It can, I have seen it; all waves all the way down, no photons needed... – hyportnex Nov 8 '18 at 15:55
• @hyportnex your argument can easily be used to support the photon concept. – my2cts Nov 8 '18 at 19:49
• I have not seen any attempt, neither do I believe, that, say, a 5 cm long ferrite-loaded loop antenna's operation at around 550 kHz can be usefully explained via photons and quantum physics, but please, go ahead. – hyportnex Nov 8 '18 at 20:53
• @hyportnex your example pertains to the limit of many photons. That is why no QM is needed. – my2cts Nov 8 '18 at 23:30
2019-10-19 06:52:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 6, "x-ck12": 0, "texerror": 0, "math_score": 0.6983542442321777, "perplexity": 418.2845870018667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986692126.27/warc/CC-MAIN-20191019063516-20191019091016-00369.warc.gz"}
https://control.com/forums/threads/industrial-pcs.1152/
# Industrial PCs

#### Crystal Majercik

We are a systems integrator specializing in Industrial Automation and Control. Presently we have spent days searching for an Industrial PC that by no means is considered standard. Most industrial PCs provide access to the drives from the top. In our application it is imperative that we have access to all drives from the front only. Our intent is to modify the PC with an additional plug-in board, while also meeting additional criteria such as Floppy Drive, Pentium III, CDROM, Full Size ISA. At one time we thought a company such as Beckoff (sounds like) branded such an Industrial PC. I would appreciate any recommendations on this critical path.

In anticipation,
Crystal D. Majercik
Marketing Communications Manager
Integrated Industrial Technologies (I²T)
221 Seventh Street, Suite 200
Pittsburgh, PA 15238
PH: 412-828-1200
FX: 412-828-0320
URL: www.i2t-inmotion.com

#### Ralphsnyder, Grayg

Why do you want to buy an 'industrial pc'? What are your specifications? What is your application? For a good many applications I suggest that you get an off-the-shelf commodity PC (I have had about ten years of good experience with Dell so far) and put it into one of the many industrial enclosures that you can either buy off the shelf or modify to suit your needs. Desktop PCs are relatively cheap nowadays. I think that (the price of the PC + the enclosure - the headache of finding exactly what you want) < (the cost of an industrial PC that needs an enclosure anyway).

Grayg Ralphsnyder

#### holtek

Funny you should ask about this at the same time someone else asked about VMEbus. If you can afford it, you can put together an industrial PC using VME components, including a Hard Drive/Floppy Drive card, in a 19" eurobus rack. I've used units from Xycom (http://www.xycom.com/products/vmebus/977.html) but there are plenty of other manufacturers. Alternatively, you should be able to find a rack-mounted PC meant for RAID applications that has one or more removable HD modules. Most industrial computer manufacturers will offer this option.

Jerry Holzer
R&D Electrical Engineer
Curt G. Joa, Inc - Boynton Beach, FL

DISCLAIMER: The contents of this message may not be consistent with the views or policies of my employer, Curt G. Joa, Inc.

#### Michael Klothe

Try Nematron in Ann Arbor, MI. Their "Flexbox" line has removable drives and may give you what you want. http://www.nematron.com/ Another option is to build your own: there are rack-mount and panel-mount drives to be had -- for example, they are widely used by CNC builders. Combine a couple of those with your own motherboard, power supply, etc.

Michael E. Klothe
ELECTRICAL DESIGN & CONTROL, INC.
Detroit, USA

#### Dale Witman

Crystal, I have not found an Industrial PC worth its money. The components in an industrial PC are the same as I use on my desktop, except for the mounting of the hard drive; maybe they use a little better shock mounting, big deal! The internal components are proprietary and costly, and they are not any better in my experience. I always recommend using a standard PC or even a clone to save money. Try using a hard drive bay that plugs into any standard PC where a floppy drive or CD can be installed. These devices can be purchased for as little as US$15.00 and they enable you to use another drive very quickly if it is needed. I use this technique to investigate different operating systems and even for taking data drives from my office to my home machines.
Dale Witman

#### Kirk S. Hegwood

Crystal, look at the Automationdirect.com D4-470 Industrial Computer. It might suit your purpose.

Kirk S. Hegwood
President
Signing for Hegwood Electric Service, Inc.

#### Jansen, Joe

The other replies I have seen seemed to miss the part about accessing the drives from the front. Do you need something washdown? (NEMA 4 / IP65.) I have just found a supplier named Advantech Direct. My first order is shipping Friday. Looking through the catalog, I see an integrated PC/screen/partial keyboard, panel mount, NEMA4/IP65, with CDROM and floppy access through the front, behind a flip-open door. Model# is AWS-843HTP. Fully configured (CPU, touchscreen, RAM, etc.) the price is ~$3800. In the pic, I see at least 1 full-size ISA/EISA slot open. If you don't need the integrated screen, look at the IPC-610 product line. That is what I ordered. They are a passive-backplane system, rack mounted. The one I am getting has 9 ISA slots, 4 PCI slots, and 1 CPU card slot. Or call 877-294-8989 and talk to Lia. She was awesome helping me price out what I needed, and the final cost was actually somewhat less than what the catalog advertised! HTH --Joe

#### rb taylor

Try Vatyx. They have some special chassis. They also do custom boxes to any spec. www.vatyx.com

#### Crystal Majercik

Nematron can definitely help, providing that they can give us a full-size box. Crystal - I2T

#### J-F Portala

Hi, several years ago I used industrial PCs. We tried several kinds of material. Here are my conclusions. Ordinary PCs are quite a bit more powerful than industrial PCs (especially PCs with a passive bus). There is a delay for industrial PCs to integrate the latest CPUs or chipsets. I have destroyed two motherboards of industrial PCs when disconnecting the monitor cable (the power was on). The use of passive buses implies exchanging the boards from one slot to another in order to make the system work. I had a number of problems with industrial PCs. For the past 4 years, I have only used ordinary PCs. I am installing industrial vision systems based on PCs in sawmills where conditions are severe (some of them work 24h per day). The only problems I have encountered are:

mechanical problems: floppies, CD-ROMs and keyboards to change because of dust. These components are cheap, and are as usual as a fuse or a lamp.

software problems (the system (NT4) does not boot anymore): in order to have a robust application, take care to have a secondary system on a second disk (with a rack), especially if you use Windows as the operating system. You can also partition your disk into several parts and install clones of the principal system on each.

Industrial PCs have evolved, and perhaps the problems I have described are isolated, but the cost difference between ordinary and industrial PCs is important, and I do not intend to use industrial PCs. If you need an industrial PC, it is perhaps because you intend to put it in an electronic enclosure. If that is the case, an ordinary PC will be protected from dust as well as the industrial PC.

Regards.
J-F Portala
SoViLor company
[email protected]

#### Joe Jansen

-> several years ago, I used industrial PC.
-> We have tried several kind of material.
-> Here are my conclusions.
-> Ordinary PC are quite more powerfull than industrial PC

True, but (honestly) do you need the power of the latest gaming systems available to run a production floor?

-> (especially PC with passive bus). There is a delay for industrial PC
-> to integrate last CPUs or chipsets.

Yes, and that is good.
Look at the troubles with Intel's 820 chipset and the MTH, causing systems to hang so much that Intel is recalling and replacing components on every motherboard. This is exactly why industrial PCs need to wait. I don't want bleeding-edge stuff running (and crashing) my systems.

-> I have destroyed two motherboards of
-> industrial PC when disconnecting the monitor
-> cable (the power was On).

Well, don't do that. That will potentially destroy your video card in a regular PC. The reason the motherboard went out (I would guess) is that the video was integrated, so you blew the video drivers on the MB. Same thing.

-> The use of passive buses implies
-> to exchange the boards from one slot to another one in order to
-> make the system working.

what?

-> I had numbers of problems with
-> industrial PCs. Since 4 years, I only use ordinary PCs. I am
-> installing industrial vision systems based on PC in sawmills where
-> conditions are severe.

I work in a dairy processing system. Plant floor in the summer is typically 110 degrees F or more, with humidity around 80 to 90 percent (we use steam in all of our processing equipment). If it isn't NEMA4 / IP65, it condenses and dies almost immediately. White box PCs can't hack it. We have 2 out there (it is a vendor's fault, and they wouldn't change them) and we have special air conditioning units and filters on sealed cabinets to try to protect them. It doesn't work real well.

-> (some of them works 24h per day)

Yes, that goes without saying; other than when Windows dies, all of them are 24x7.

-> The sole problems I have encountered are: mechanical problems
-> Floppys, CD-Rom and keyboards to change because of dust.
-> These components are cheap, and are as usual as a fuse or a lamp.
-> software problems (system (NT4) does not boot anymore)
-> In order to have a robust application, take care to have a
-> secondary system on a second disk (with a rack). Especially if
-> you use Windows as operating system.
-> You can also share your disk in several parts and install clones
-> of the principal system on each.
->
-> The industrial PCs have evoluted and perhaps the problems I have
-> described are isolated, but the cost between ordinary and industrial
-> PC is important and I do not intend to use industrial PCs.

I pay ~$3000 USD per system. This includes video, touchscreen, etc. For the waterproof enclosure and vibration-proof mounting of all internal components, and not having to baby the system, it is worth it.

-> If you need industrial PCs, it is perhaps why you intend to put it in
-> a electronic enclosure. If it is the case, an ordinary PC will be
-> protected from dust as well as the industrial PC.

Yes, but what of the screen, KB, mouse, etc.? Especially if it is (in our case) a washdown area? (Think 'operator with a hose'.)

#### Dean Colwell

Contact Anil Weerisinha @ California Designs and Systems http://www.cdspanther.com Several years ago I had him design a panel-mount "industrial" PC with front access. The company I used to work for still uses them (>30 per year) with good success. Tell him I sent you.

#### Reagan Thomas

Each industrial environment is unique. I go with the original poster to a large degree, but you must judge your own needs. Here is a breakdown of hardware pros and cons in an industrial setting, based on my experience:

Industrial PC:
Pros:
a. robust
b. well supported by vendor (usually)
Cons:
a. expensive
b. lags technologically
c. often contains non-standard hardware (passive bus systems, which also adds to expense)

Standard PC:
Pros:
a. cheap (consumer market drives prices)
b. standard hardware
c. immediate availability/easy to stock
d. if well protected, the main unit can endure nearly as well as industrial units
Cons:
a. cheap (as in cheap construction)
b. direct replacement availability subject to the whims of the consumer market

Now some more commentary. You mention that peripheral hardware, such as keyboards, may experience extreme conditions. This is true. A customer of ours builds and tests hydraulic pumps with our machines. This customer was unhappy with the life expectancy of the standard (read cheap) keyboards we used out front (they were protected only by a keyboard cover, or condom as we call them). So they purchased several expensive 'industrial' keyboards for their machines. Their life expectancy turned out to be exactly the same as that of the cheap keyboards. The problem was mechanical abuse (i.e., smashing the keyboard with a tool or even a pump, accidentally). So, in this case: cheap keyboard $10 to $30, lasts 2 to 6 months; industrial keyboard, $250 to $600, lasts 2 to 8 months. I know which one I'll choose for similar applications in the future.

It all boils down to understanding the application environment, cost, support and political factors. So make the decision based on your circumstance; there is no blanket solution.
2020-09-29 14:24:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3350970149040222, "perplexity": 3498.2818765365423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401643509.96/warc/CC-MAIN-20200929123413-20200929153413-00100.warc.gz"}
http://www.spywarewarrior.com/viewtopic.php?t=34446&view=previous
Spyware Warrior - Help with Spyware, Hijacking & Other Internet Nuisances

skp28 (Newbie, Joined: 30 Apr 2012, Last Visit: 05 May 2012, Posts: 4, Location: Overland Park, KS)

Gary R (Moderator, Joined: 03 May 2005, Last Visit: 22 May 2015, Posts: 10019, Location: Yorkshire)

Posted: Wed May 02, 2012 5:09 am - Post subject:

Looking over your log, back soon.

Gary R, Administrator at Malware Removal University
If you've been helped, please donate to help with the costs of this volunteer site .... Spyware Warrior Donations

Gary R (Moderator, Joined: 03 May 2005, Last Visit: 22 May 2015, Posts: 10019, Location: Yorkshire)

Posted: Wed May 02, 2012 5:13 am - Post subject:

Quote: Please note that all instructions given are customised for this computer only; the tools used may cause damage if used on a computer with different infections. If you think you have similar problems, please post a log in the "Help with spyware removal" forum and wait for help. Unless informed in advance, failure to post replies within 3 days will result in this thread being closed.

Hi skp28, I'm Gary R.

Before we start: please be aware that removing Malware is a potentially hazardous undertaking. I will take care not to knowingly suggest courses of action that might damage your computer. However, it is impossible for me to foresee all interactions that may happen between the software on your computer and the tools we'll use to clear you of infection, and I cannot guarantee the safety of your system. It is possible that we might encounter situations where the only recourse is to re-format and re-install your operating system, or to necessitate you taking your computer to a repair shop. Because of this, I advise you to back up any personal files and folders before you start. I'd also recommend that you create a System Restore Point that we can restore to if necessary.

• Click Start, and type Create a restore point into the Search programs and files box.
• Now click on the Create a restore point icon at the top of the find list.
• This will open a System Properties box, with the System Protection tab open ...
• Click on the Create button in the lower part of the window.
• Type Pre Malware Cleanup into the description box, then click Create.
• Windows will now create a Restore Point and notify you when finished.
• Exit any open windows.

Please observe these rules while we work:

• Perform all actions in the order given.
• If you don't know, stop and ask! Don't keep going on.
• Stick with it till you're given the all clear.
• Remember, absence of symptoms does not mean the infection is all gone.
• Don't attempt to install any new software (other than those I ask you to) until we've got your computer clean.
• Don't attempt to clean your computer with any tools other than the ones I ask you to use during the cleanup process. If your defensive programmes warn you about any of those tools, be assured that they are not infected, and are safe to use.

If you can do these things, everything should go smoothly.

• As you're using Vista or Windows 7, it will be necessary to right click all tools we use and select ----> Run as Administrator

Quote: It may be helpful to you to print out or take a copy of any instructions given, as sometimes it is necessary to go offline and you will lose access to them.

The HJT log you've posted is from a 64 bit version of Windows 7, and HJT is not compatible with that Operating System. Please run the following scans for me ....
If you already have a copy of OTL, delete it and use this version.

• Double-click OTL.exe to launch the programme.
• Check the following:
• Scan all users.
• Standard Output.
• Lop check.
• Purity check.
• Under the Extra Registry section, select Use SafeList.
• Click the Run Scan button and wait for the scan to finish (usually about 10-15 mins).
• When finished it will produce two logs:
• OTL.txt (open on your desktop).
• Extras.txt.
• Please post me both logs.

Next

• Double-click on TDSSKiller.exe to launch it.
• If using Vista or Windows 7, allow the prompt when prompted by UAC.
• Click on Change parameters.
• Check Detect TDLFS file system.
• Click OK.
• Click on Start Scan.
• The scan will run.
• When the scan has finished, if it finds anything please click on the drop-down arrow next to Cure and select Skip.
• Now click on Report to open the log file created by TDSSKiller in your root directory C:\
• DO NOT TRY TO FIX ANYTHING AT THIS POINT.

Next

• Double-click SecurityCheck.exe and follow the instructions inside the black box.
• When finished, a Notepad document checkup.txt should open.

Summary of the logs I need from you in your next post:

• OTL.txt
• Extras.txt
• TDSSKiller log
• Checkup.txt

Please post each log separately to prevent it being cut off by the forum post size limiter. Check each after you've posted it to make sure it's all present; if any log is cut off you'll have to post it in sections.

skp28
Two earlier replies (log contents not preserved in this archive).

skp28
Posted: Wed May 02, 2012 12:56 pm    Post subject: security check

Results of screen317's Security Check version 0.99.32
Windows 7 x64 (UAC is enabled)
Internet Explorer 9
Antivirus/Firewall Check:
Windows Firewall Enabled!
avast! Free Antivirus
WMI entry may not exist for antivirus; attempting automatic update.
Anti-malware/Other Utilities Check:
Adobe Reader X (10.1.3)
Mozilla Firefox (12.0.)
Process Check: objlist.exe by Laurent
AVAST Software Avast AvastSvc.exe
AVAST Software Avast AvastUI.exe
End of Log

Gary R
Posted: Wed May 02, 2012 2:02 pm
I don't see the TDSSKiller log. If you've run the scan then please post the log; if you haven't yet run it, please run the scan then post the log.

skp28
Reply (log contents not preserved in this archive).

Gary R
Posted: Wed May 02, 2012 10:01 pm
Nothing of any real concern showing in your logs; so far it's looking like malware is not the cause of your problems.

• Double-click OTL.exe to launch the programme.
• Copy/paste the contents of the code box below into the Custom Scans/Fixes box.
Code:

:OTL
FF - prefs.js..browser.search.defaultenginename: "SweetIM Search"
FF - prefs.js..keyword.URL: "http://search.sweetim.com/search.asp?src=2&crg=3.1010000.10011&q="
[2012/04/28 17:47:01 | 000,003,939 | ---- | M] () -- C:\Users\Donene\AppData\Roaming\Mozilla\Firefox\Profiles\lrjhzokt.default\searchplugins\sweetim.xml
O3:64bit: - HKLM\..\Toolbar: (no name) - Locked - No CLSID value found.
O3 - HKLM\..\Toolbar: (no name) - Locked - No CLSID value found.
:Files
C:\Users\Donene\AppData\Local\{39C444FD-1FC6-4B6A-A75A-D9A0707E5103}
C:\Users\Donene\AppData\Local\{01F93E30-2459-484A-91B7-C20899A7F4FC}
C:\Users\Donene\AppData\Local\{4940D722-29B7-4041-BC12-714FFA98F2AC}
C:\Users\Donene\AppData\Local\{3AFB6DBA-27E3-41B6-B167-FAED1DA5AB0C}
C:\Users\Donene\AppData\Local\{AD35027E-9055-4515-8E8B-A645FCA52B33}
:Commands
[CreateRestorePoint]
[EmptyFlash]
[EmptyTemp]
[ResetHosts]

• Click the Run Fix button.
• OTL will now process the instructions.
• When finished, a box will open asking you to open the fix log; click OK.
• The fix log will open.

Note: If necessary, OTL may re-boot your computer, or request that you do so; if it does, re-boot your computer. A log will be produced upon re-boot.

Next

Please run a scan with ESET Online Scanner.

Note: You can use either Internet Explorer or Mozilla Firefox for this scan. You will however need to disable your currently installed anti-virus; how to do so can be read here.

• Please go HERE, then click on the scan button (button image not preserved).
Quote: Note: If using Mozilla Firefox you will need to download esetsmartinstaller_enu.exe when prompted, then double-click on it to install. All of the below instructions are compatible with either Internet Explorer or Mozilla Firefox.
• When prompted, allow the Add-On/ActiveX to install.
• Make sure that the option Remove found threats is NOT checked, and the option Scan archives is checked.
• Now click on Advanced Settings and select the following:
• Scan for potentially unwanted applications
• Scan for potentially unsafe applications
• Enable Anti-Stealth Technology
• Now click on the start button (button image not preserved).
• The virus signature database will begin to download. Be patient; this may take some time depending on the speed of your Internet connection.
• When completed, the online scan will begin automatically.
• Do not touch either the mouse or keyboard during the scan, otherwise it may stall.
• When completed, make sure you first copy the logfile located at C:\Program Files\ESET\EsetOnlineScanner\log.txt
• Now click on the finish button (button image not preserved), selecting Uninstall application on close if you so wish.

Summary of the logs I need from you in your next post:

• OTL log
• ESET log

Please post each log separately to prevent it being cut off by the forum post size limiter. Check each after you've posted it to make sure it's all present; if any log is cut off you'll have to post it in sections.

Gary R
Posted: Sat May 05, 2012 8:50 am
Quote: Due to lack of response this topic is now closed. If you still need help you must open a new thread in the Help with Spyware Removal forum, post a new log, and wait for a new helper.
If you have been helped and wish to donate to help with the costs of this volunteer site, please read Spyware Warrior Donations.

Gary R
2015-05-25 23:35:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23907791078090668, "perplexity": 9839.788477418253}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928729.99/warc/CC-MAIN-20150521113208-00048-ip-10-180-206-219.ec2.internal.warc.gz"}
https://sackslegal.com/k887pq6r/1kex3b5.php?page=074e1c-neighbourhood-questions-and-answers
Neighbourhood planning: questions and answers

A: A neighbourhood plan is a community-led planning … A Neighbourhood Plan is a plan prepared by the community for the community. It will help to protect us from opportunistic … Not every aspect of Neighbourhood Plan making requires the hand of a professional planner. A project plan sets ambitious timescales for the production of the Neighbourhood Plan. It is not a quick process and it is anticipated, at this stage, that a draft plan could be in place by late Summer 2015. Development is restricted in a National Park, but that doesn't mean there is no development at all.

QUESTIONS AND ANSWERS: Neighbourhood Plan Referendum. On 15th February 2018 there will be a referendum on a Neighbourhood Plan for your parishes. Q1: I have just received a Poll Card for Monday 7 October. Fleet Town Neighbourhood Plan. It is intended to keep costs to the Parish Council to a minimum wherever possible.

Neighbourhood Plans are a powerful tool for shaping the development and growth of a local area. They can be used to develop a shared vision for your parishes and to decide where new homes, shops, offices and other development should be built. Parish (and Town) Councils have established experience of representing local communities and experience in considering planning issues. The Government think this is right because they are the tier of local government which is closest to the local community, and have an electoral mandate to speak on its behalf. In most cases the most important strategic policies with which a Neighbourhood Plan will have to generally conform are the assessment of what the requirement is for housing and other development across the Borough. A Neighbourhood Plan could be produced even if the Borough Council's Local Plan 2032 is not finalised, provided that it is in general conformity with the strategic policies of the current adopted Local Plan. It appears that this will very much be the case, as the indications are that there will be good support from the Borough Council. This could be a planning consultant or other planning professional, an employee of another local authority or a planning inspector. It helps explain how the planning system operates, where neighbourhood planning fits in, and the neighbourhood planning process itself. Anyone interested in the future of our villages should contact the Parish Council and make them aware.

Community Questions and Answers, January 2020 (updated July 2020): The Steering Group held community events in September 2019 to update residents of Kidmore End Parish on the progress being made towards the production of our Neighbourhood Development Plan (NDP). A large amount of information was provided to the community on the range of evidence that has been gathered and the independent advice that …

Warm-up questions: How long have you lived in your neighborhood? Have you met any of your neighbors? Is your neighborhood convenient for raising children? Does your neighbourhood have a grotty part which looks dirty and is very old? Is it easy for you to relocate to some other place?
2021-04-16 17:39:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22779154777526855, "perplexity": 2680.3182601176463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088245.37/warc/CC-MAIN-20210416161217-20210416191217-00420.warc.gz"}
http://quant.stackexchange.com/questions?page=1&sort=newest
# All Questions

### About the boundary conditions of the Black-Scholes-Merton PDE
I have a question about the solution of the Black-Scholes PDE for the European call option when I read the book Stochastic Calculus for Finance II of Steven E. Shreve. Let $c(t,x)$ be the value of the ...

### Calculating discount factors using the Nelson-Siegel-Svensson model
I am trying to understand how to calculate the discount factors $disc(TTM)$ mentioned on page 9 of this pdf. When I'm calculating the discount factors, mentioned each bond has its own cash flow and ...

### Optimization with absolute constraints
Suppose I have an optimization where I need to impose an ADV-like constraint (for a case where shorting is allowed): maximize $\mu'w - \lambda w'\Sigma w$ subject to $|w| \le V$ and $Aw = 0$, and I want to use a ...

### Correlation between 2 portfolios
I have a set of assets, n. I'm trying to find the correlation between 2 portfolios, say x and y, where x is nested in, or a sub-set of, y. That is, x is a portfolio based on a sub-set of n, while y is ...

### Newbie quant: building a price feeder to a securities master db
First of all, a warm hello to all. I am a newbie and I admit it, but with at least 15+ years of experience in C++. Working in IB - derivatives mainly, but unfortunately on the business side, not trading at all. ...

### Careers in finance for postgraduates? [on hold]
Having read through similar topics, I see these questions are often poorly received, so apologies if this is not the place to ask (would appreciate if someone could redirect me). I shall try to be as ...

### Are all stocks and stock indexes just white noise?
In the paper Super-Whiteness of Returns Spectra by Erhard Reschenhofer of the University of Vienna the following is commented: "Until the late 70's the spectral densities of stock returns and stock ...
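The Nelson-Siegel-Svensson question above comes down to evaluating the fitted zero curve at each cash-flow date and discounting. Below is a minimal sketch of that step, assuming continuously-compounded zero rates and purely illustrative parameter values (not fitted to any real curve; the pdf cited in the question may use a different compounding convention):

```python
import numpy as np

def nss_yield(t, beta0, beta1, beta2, beta3, tau1, tau2):
    """Nelson-Siegel-Svensson zero rate for maturity t (in years)."""
    t = np.asarray(t, dtype=float)
    x1 = t / tau1
    x2 = t / tau2
    # (1 - exp(-x)) / x, with the x -> 0 limit equal to 1
    f1 = np.where(x1 > 0, (1 - np.exp(-x1)) / np.where(x1 > 0, x1, 1), 1.0)
    f2 = np.where(x2 > 0, (1 - np.exp(-x2)) / np.where(x2 > 0, x2, 1), 1.0)
    return (beta0
            + beta1 * f1
            + beta2 * (f1 - np.exp(-x1))
            + beta3 * (f2 - np.exp(-x2)))

def discount_factor(t, *params):
    """disc(t) = exp(-y(t) * t) under continuous compounding."""
    return np.exp(-nss_yield(t, *params) * t)

# Illustrative parameters only (beta0, beta1, beta2, beta3, tau1, tau2)
params = (0.03, -0.02, 0.01, 0.005, 1.5, 8.0)
ttm = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
print(discount_factor(ttm, *params))
```

Each bond's theoretical price is then the sum of its cash flows weighted by these discount factors at the corresponding maturities.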
2015-07-28 15:32:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7331719994544983, "perplexity": 1419.064670592147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981969.11/warc/CC-MAIN-20150728002301-00333-ip-10-236-191-2.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/laplace-transform-heaviside-step-function.295965/
Laplace transform of a Heaviside step function

Homework Statement

What is the Laplace transform of f(t) = 0 for 0 < t < 2, f(t) = 4 - t for 2 < t < 3, f(t) = 1 for 3 < t < 4, f(t) = 5 - t for 4 < t < 5, and f(t) = 0 for t > 5?

The Attempt at a Solution

$f(t) = H(t-2)(4-t) - H(t-3)(4-t) + H(t-3) - H(t-4) + H(t-4)(5-t) - H(t-5)(5-t)$

$f(t) = H(t-2)(4-t) + H(t-3)(2t-7) - H(t-4)(t-4) + H(t-5)(t-5)$

How can I rewrite $H(t-2)(4-t) + H(t-3)(2t-7)$ in a form suitable for the Laplace transform?

Tom Mattson (Staff Emeritus, Gold Member):
For $H(t-2)(4-t)$, you'd like to see a $t-2$ in the second factor, right? So just put one there, and balance it out elsewhere in the expression. Like this: $H(t-2)(4-(t-2)-2)$. See how I added and subtracted 2? This simplifies to $H(t-2)(2-(t-2))$. Do the same for $H(t-3)(2t-7)$.

Now I am getting:

$H(t-2)(2-(t-2)) + H(t-3)(2(t-3)-1) - H(t-4)(t-4) + H(t-5)(t-5)$

And the Laplace transform:

$\left(\frac{2}{s}-\frac{1}{s^2}\right)e^{-2s} + \left(\frac{2}{s^2}-\frac{1}{s}\right)e^{-3s} - \frac{1}{s^2}e^{-4s} + \frac{1}{s^2}e^{-5s}$

Is this correct?
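A quick way to answer the "Is this correct?" at the end is to compare the proposed closed form against a direct numerical evaluation of the defining integral $\int_0^\infty e^{-st} f(t)\,dt$. A minimal sketch (scipy is assumed; the two printed columns agree at each s exactly when the closed form matches the original f):

```python
import numpy as np
from scipy.integrate import quad

def f(t):
    """f(t) exactly as stated in the problem."""
    if 2 < t < 3:
        return 4 - t
    if 3 <= t < 4:
        return 1.0
    if 4 <= t < 5:
        return 5 - t
    return 0.0

def numeric_laplace(s):
    # f vanishes beyond t = 5, so the integral has finite range;
    # the breakpoints are passed so quad handles the kinks accurately
    val, _ = quad(lambda t: np.exp(-s * t) * f(t), 0, 5, points=[2, 3, 4])
    return val

def candidate(s):
    """The transform proposed in the last post above."""
    return ((2/s - 1/s**2) * np.exp(-2*s)
            + (2/s**2 - 1/s) * np.exp(-3*s)
            - np.exp(-4*s) / s**2
            + np.exp(-5*s) / s**2)

for s in (0.5, 1.0, 2.0):
    print(s, numeric_laplace(s), candidate(s))
```

Any systematic difference between the two columns would point to an algebra slip in one of the decomposition steps rather than in the transform table itself.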
2020-08-07 10:31:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5950943827629089, "perplexity": 1687.4691542344565}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737172.50/warc/CC-MAIN-20200807083754-20200807113754-00365.warc.gz"}
https://www.nature.com/articles/s41598-020-74376-3?error=cookies_not_supported&code=129981d7-b113-4ae4-a091-6a91c095e0f3
## Introduction

Transition metal dichalcogenide (TMD) monolayers are direct-bandgap semiconductors of which the conduction and valence band extrema consist of two valleys [1,2]. The broken inversion symmetry of the lattice gives rise to optical selection rules that enable valley-selective, inter-band excitation of electrons using circularly polarized light [3,4,5]. A strong Coulomb interaction results in the subsequent formation of excitons [6], which maintain a valley polarization that is determined by the ratio between the intervalley scattering time and the exciton lifetime [3,7]. Such valley-polarized excitons have been proposed as carriers of information and play a central role in the field of valleytronics [8,9]. As such, understanding the processes that govern the exciton lifetime and associated valley polarization is important for assessing the potential applicability of valley-polarized excitons in devices.

Under optical excitation, a charge-density-controlled chemical equilibrium between neutral and charged excitons (trions) forms in a TMD monolayer [10,11,12]. The conversion into trions reduces the exciton lifetime [13] and may therefore be expected to lead to a large valley polarization of excitons that are created via valley-selective optical pumping, but demonstrating this effect has thus far remained elusive.

The charge density of TMD monolayers can be controlled via electrostatic gating or chemical doping [10,11,14,15,16,17,18,19,20]. While electrostatic gating is a flexible technique that allows a continuous change of the charge density [10,11,14], chemical doping provides a convenient alternative that requires no microfabrication and is well suited for achieving high doping levels [15,16,17,18,19,20]. Here, we study the valley polarization of excitons and trions in monolayer $\text{WS}_2$ and show that chemical doping via aromatic anisole (methoxybenzene) quenches the exciton photoluminescence and causes the spectrum to become dominated by trions with a strong valley polarization. A spatial study of the remaining exciton emission shows that the excitons also attain a strong valley polarization, which we attribute to the rapid doping-induced conversion into trions. We extend a rate equation model describing exciton-trion conversion [10] to include the two valleys and use it to explain the observed valley polarization in terms of the doping-controlled chemical equilibrium between excitons and trions.

## Results

When doping a TMD monolayer using aromatic molecules such as anisole, Hard Soft Acid Base (HSAB) theory allows predicting whether the dopant will be n- or p-type [16]. Electrons hop between the adsorbed molecules (A) and the monolayer (B) to compensate for the difference in chemical potential $\mu$ between both systems [21]. The chemical hardness $\eta$ of the materials determines how quickly an equilibrium is reached, leading to an average number of transferred electrons per molecule $\Delta N$:

$$\Delta N = \frac{\mu_A - \mu_B}{\eta_A + \eta_B} \qquad (1)$$

For both anisole and monolayer $\text{WS}_2$, the chemical potential and chemical hardness have been calculated using density functional theory [22,23]. Using these values (Supplementary Section S1) we find $\Delta N = 0.22$, such that we expect the monolayer to be n-doped upon physisorption of anisole molecules (Fig. 1a).
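Equation (1) is straightforward to evaluate once the DFT values are in hand. A minimal sketch; the numbers below are placeholders chosen only for illustration (the actual chemical potentials and hardnesses used by the paper come from the DFT calculations referenced in its Supplementary Section S1):

```python
def electrons_transferred(mu_a, mu_b, eta_a, eta_b):
    """HSAB estimate of electrons transferred per adsorbed molecule, Eq. (1).

    mu_a, mu_b   : chemical potentials (eV) of molecule (A) and monolayer (B)
    eta_a, eta_b : chemical hardnesses (eV)
    A positive result means electrons flow from the molecule to the
    monolayer, i.e. n-type doping of the monolayer.
    """
    return (mu_a - mu_b) / (eta_a + eta_b)

# Hypothetical values for illustration only; they merely land near the
# quoted Delta N of about 0.22 and are not taken from the paper.
print(electrons_transferred(mu_a=-3.4, mu_b=-4.6, eta_a=2.8, eta_b=2.7))
```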
To study the effect of chemical doping with anisole on the valley polarization properties of $\text{WS}_2$, we start by characterizing the photoluminescence of exfoliated $\text{WS}_2$ monolayers on 280 nm Si/$\text{SiO}_2$ substrates. The emission spectrum of an as-prepared monolayer shows the characteristic bright exciton resonance at 2.01 eV (Fig. 1b, black line) [24]. After chemical doping by a 2-h treatment in liquid anisole at $70\,^{\circ}\text{C}$, the bright exciton resonance is strongly quenched and only a weak emission peak that is red-shifted by $\Delta E = 23$ meV remains (Fig. 1b, red line). Because the increased binding energy of trions compared to excitons should lead to such a red shift [14] and the expected n-type doping by the anisole molecules should favour trion formation, we attribute this peak to emission associated with trions. This conclusion is further supported by spatial studies of emission spectra showing both exciton and trion components that we will describe below. As expected, the trion emission is weak due to its long radiative lifetime and strong non-radiative decay attributed to Auger recombination [10,25,26].

Doping by adsorbed carbon-hydrogen groups [27] was previously shown to result in an increase of the longitudinal acoustic LA(M) and LA(K) modes in the Raman spectrum of $\text{WS}_2$ monolayers. Our treatment causes a similar increase of the LA(M) Raman mode (Fig. 1c), which we therefore attribute to the adsorption of anisole molecules. We do not observe an associated increase of the LA(K) mode at about $190\ \text{cm}^{-1}$, which may be due to the different nature of the adsorbates resulting in different lattice deformations and/or defects in the monolayer. We note that a similar behaviour was observed in previous work on $\text{WS}_2$ monolayers [7], which showed an increasing intensity of the LA(M) Raman mode without an associated increase in the LA(K) mode as a function of the defect concentration. In addition, we find that the double-resonance 2LA(M) mode remains unaffected by the doping, indicating that our treatment does not significantly change the monolayer's electronic structure [28].

To study the valley polarization of chemically-doped $\text{WS}_2$ monolayers, we use near-resonant excitation with a 594 nm circularly polarized, continuous-wave laser that is focused to a diffraction-limited spot. The resulting photoluminescence is polarization filtered and collected using a home-built confocal microscope (see "Methods" section). Before detecting the emission with an avalanche photodiode (APD), we apply a spectral bandpass filter with a transmission window centred around the exciton and trion resonances (see the shaded area in Fig. 1b). We quantify the valley polarization $\rho$ via polarization-resolved photoluminescence measurements according to

$$\rho = \frac{I_{\sigma^+} - I_{\sigma^-}}{I_{\sigma^+} + I_{\sigma^-}} \qquad (2)$$

Here, $I_{\sigma^+}$ and $I_{\sigma^-}$ represent the intensities of the right- and left-handed emission by the sample under $\sigma^+$ excitation, and the total photoluminescence is given by $I = I_{\sigma^+} + I_{\sigma^-}$. By scanning the sample while detecting its emission using the APD, we make photoluminescence and valley-polarization maps of our flakes, before and after treating them. Before the anisole treatment, the photoluminescence is characterized by bright exciton emission (Fig. 2a, left panel) with no valley polarization (Fig. 2b, left panel).
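Per Eq. (2), a valley-polarization map is computed pixel by pixel from the two polarization-resolved intensity maps. A minimal sketch; the array names and the dark-pixel guard are our own additions, not from the paper:

```python
import numpy as np

def valley_polarization(i_plus, i_minus, eps=1e-12):
    """rho = (I_sigma+ - I_sigma-) / (I_sigma+ + I_sigma-), Eq. (2).

    i_plus, i_minus: co- and cross-polarized PL intensity maps recorded
    under sigma+ excitation (same shape). eps guards against division
    by zero on dark pixels.
    """
    i_plus = np.asarray(i_plus, dtype=float)
    i_minus = np.asarray(i_minus, dtype=float)
    total = i_plus + i_minus
    return (i_plus - i_minus) / np.maximum(total, eps)

# Toy 2x2 example: a fully polarized pixel, an unpolarized one, etc.
print(valley_polarization([[1.0, 0.5], [0.3, 0.0]],
                          [[0.0, 0.5], [0.1, 0.0]]))
```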
Strikingly, the trion emission that remains after chemical doping (Fig. 2a, right panel) has a valley polarization of about 25% (Fig. 2b, right panel). We consistently observe the emergence of strong valley polarization after anisole treatment in multiple samples (Supplementary Section S2).

Next, we demonstrate the substrate independence of the effect of our treatment by repeating the measurements on an yttrium iron garnet (YIG) substrate. YIG is a magnetic insulator that was shown to effectively negatively dope $\text{MoS}_2$ monolayers at low temperatures, possibly due to dangling oxygen bonds at the YIG surface [29]. As such, the total level of doping could be larger for monolayers on YIG due to additional doping from the substrate. We exfoliated monolayer $\text{WS}_2$ onto polydimethylsiloxane (PDMS) stamps and deposited the flakes onto the YIG substrates [30]. As before, the emission of the monolayers is strongly quenched after chemical doping and a valley polarization of about 20-40% emerges (Fig. 3, Supplementary Section S2). Compared to the monolayers on Si/$\text{SiO}_2$ substrates, we conclude that these data do not indicate significant additional doping from the YIG substrate.

To assess the spatial homogeneity of the doping, we characterize the photoluminescence and valley polarization of a relatively large-area monolayer flake on YIG (Fig. 3a,b). In most parts of the flake, we observe a valley polarization of about 40%. In addition, at multiple spots in the monolayer, we observe an enhanced photoluminescence and reduced valley polarization. A comparison with an atomic force microscope topography image (Fig. 3c) shows that these spots are associated with wrinkles in the flake. Spectrally, the spots are characterized by the simultaneous presence of an exciton resonance and a trion resonance, with the exciton resonance rapidly vanishing as we move off the spot and the trion resonance remaining approximately constant (Fig. 3d). We extract the valley polarization and brightness of the exciton and trion resonances by fitting similar emission spectra near multiple wrinkles with an exciton and a trion component (Supplementary Section S3). The extracted trion brightness and valley polarization are independent of the local exciton emission (Fig. 3e), highlighting their spatial homogeneity. In particular, the trion valley polarization of about 40% is similar to that in the flat areas of the flake (Fig. 3b,f). The stronger exciton emission at wrinkles indicates that the doping is less effective there, possibly resulting from the restricted physical access to the monolayer at wrinkles or from a decreased substrate-induced doping due to the increased substrate-monolayer distance. In addition, the exciton and trion formation could be altered at the wrinkles as a result of local strain [31].

Strikingly, the excitons at the wrinkles also attained a strong valley polarization, as can be seen from the spectra in Fig. 3d. We extend an existing rate equation model [10] to argue that this is the result of the doping-induced conversion of excitons into trions (Fig. 4a). This conversion acts as a decay channel for the excitons, enhancing their valley polarization and quenching their photoluminescence. The model predicts that the excitonic valley polarization starts to increase strongly when the conversion rate into trions $\Gamma_{\mathrm{T}\leftarrow\mathrm{X}}$ becomes comparable to the intervalley scattering rate $\Gamma_{\mathrm{iv,X}}$ (Fig. 4b, green line).
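The paper's full two-valley rate equation model is given in its Supplementary Section S4; the sketch below is a generic steady-state version written only to illustrate the predicted trend. Under cw $\sigma^+$ pumping, excitons are generated in valley K, scatter between valleys, decay, and convert into trions at a rate that grows with electron density; all rates are illustrative placeholders in arbitrary units, not the authors' fitted values:

```python
import numpy as np

def steady_state(G, g_x, g_t, g_iv_x, g_iv_t, g_conv):
    """Steady state of a generic two-valley exciton/trion rate model.

    Unknowns: [X_K, X_K', T_K, T_K'] under cw sigma+ pumping of valley K.
    g_x, g_t       : exciton / trion total decay rates
    g_iv_x, g_iv_t : exciton / trion intervalley scattering rates
    g_conv         : exciton -> trion conversion rate (grows with doping)
    """
    A = np.array([
        [-(g_x + g_conv + g_iv_x), g_iv_x, 0.0, 0.0],
        [g_iv_x, -(g_x + g_conv + g_iv_x), 0.0, 0.0],
        [g_conv, 0.0, -(g_t + g_iv_t), g_iv_t],
        [0.0, g_conv, g_iv_t, -(g_t + g_iv_t)],
    ])
    b = np.array([-G, 0.0, 0.0, 0.0])  # generation feeds X_K only
    return np.linalg.solve(A, b)

def polarization(n_k, n_kp):
    return (n_k - n_kp) / (n_k + n_kp)

# Sweep the conversion rate as a stand-in for increasing electron density
for g_conv in (0.01, 0.1, 1.0, 10.0):
    xk, xkp, tk, tkp = steady_state(
        G=1.0, g_x=0.05, g_t=0.02, g_iv_x=1.0, g_iv_t=0.01, g_conv=g_conv)
    print(f"g_conv={g_conv:5.2f}  rho_X={polarization(xk, xkp):.2f}"
          f"  rho_T={polarization(tk, tkp):.2f}")
```

With these placeholder rates, the exciton polarization rises from near zero to large values once the conversion rate exceeds the intervalley scattering rate, mirroring the trend of the green line in Fig. 4b.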
Since $\Gamma_{\mathrm{T}\leftarrow\mathrm{X}}$ is proportional to the electron density, as described by a law of mass action [11,12], an emergent exciton polarization is indeed expected when doping is strong, with strongly valley-polarized excitons in the limit of large doping (Fig. 4b). For our flakes, doping is strongest in the flat areas away from the wrinkles, as reflected by the low photoluminescence in these areas. Because we are unable to spectrally distinguish the weak exciton emission from the dominant trion emission in these areas, we analyse the valley polarization of the integrated photoluminescence spectrum using our APD. When plotting the local valley polarization against the local photoluminescence (Fig. 4c), we observe a non-monotonic behaviour with a maximum at low photoluminescence. According to our model, this maximum occurs because the exciton valley polarization (green line in Fig. 4b) increases with doping while the exciton photoluminescence vanishes. As a result, the trion contribution (red line) starts to dominate the total signal (black line). These results highlight that the exciton valley polarization becomes large because of the rapid conversion into trions.

On wrinkles, we observe that the excitons have a lower valley polarization than the trions (Fig. 3d). In contrast, our model predicts that the local valley polarization of the trions cannot exceed that of the excitons, even at low doping (Fig. 4b, Supplementary Section S4). This indicates that the observed spectra on wrinkles are a result of spatial averaging over less-doped, wrinkled areas with a strong exciton contribution and strongly-doped surrounding areas with a dominant trion emission (Supplementary Section S5). Such averaging is expected from the diffraction-limited optical spot size of our confocal microscope (diameter: $\sim 500$ nm).

In summary, we have demonstrated that chemical doping with anisole is an effective method to generate highly valley-polarized excitons and trions in monolayer $\text{WS}_2$ at room temperature. The emission spectrum of as-prepared monolayers is characterized by a bright exciton resonance that exhibits no valley polarization. After chemical doping, a trion resonance appears with a polarization of up to 40%. The doping is less efficient at wrinkled areas, which are marked by the simultaneous presence of exciton and trion resonances. The excitons have a robust valley polarization, which we attribute to the rapid conversion into trions induced by the doping. A rate equation model captures the quenching-induced valley polarization, indicating the presence of excitons with a higher polarization than trions in the limit of maximal quenching. Our results shed light on the effect of the doping-controlled conversion between excitons and trions on the valley polarization in single layers of $\text{WS}_2$, and highlight that valley polarization by itself does not necessarily reflect optovalleytronic potential, since a strongly-quenched carrier lifetime and emission may constrain its application in devices.

## Methods

### Experimental setup

A schematic overview of the setup is presented in Supplementary Section S6. Our samples are excited by a lowpass-filtered 594 nm OBIS laser (Coherent), of which we control the polarization using achromatic half- and quarter-wave plates (Thorlabs). A 50x, NA = 0.95 objective (Olympus) focuses the laser to a diffraction-limited spot and collects the emission from the sample.
The emission is separated from the excitation by a 10:90 (R:T) beam splitter (Thorlabs). The handedness of the excitation and detection is controlled by a second quarter-wave plate, which projects both circular polarizations of the photoluminescence onto two orthogonal linear polarizations, of which we select one with the polarizer. The emission is longpass filtered (2x Semrock BLP01-594R-25) to eliminate the laser reflection. We use a mirror on a computer-controlled flip mount to switch between a fiber-coupled spectrometer (Kymera 193 spectrograph with a cooled iVac 324 CCD detector) and an avalanche photodiode (APD, Laser Components) for the detection of the photoluminescence. Before the emission is detected by the APD, it is filtered with a pinhole and a bandpass filter (Semrock FF01-623/32-25). The sample is mounted on an xyz piezo stage (Mad City Labs Nano-3D200FT) to allow nanoscale positioning. An ADwin Gold II was used to control the piezo stage and read out the APD. The grating in the Raman microscope (Renishaw inVia Reflex, 514 nm laser) had 1600 lines per mm, giving a spectral resolution of $\sim 2\ \text{cm}^{-1}$ per pixel. All measurements were performed at room temperature.

### Sample fabrication

The $\text{WS}_2$ monolayers were exfoliated from commercially purchased bulk crystals (HQ Graphene) onto PDMS stamps and were transferred to Si/$\text{SiO}_2$ and YIG chips. The 245 nm thick YIG films were grown on a gadolinium gallium garnet (GGG) substrate via liquid phase epitaxy and were purchased from Matesy GmbH. YIG samples were sonicated in acetone and cleaned in IPA before stamping.
2023-03-23 07:51:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48964670300483704, "perplexity": 3364.5456937137574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00308.warc.gz"}
https://www.khanacademy.org/science/ap-chemistry-beta/x2eef969c74e0d802:chemical-reactions/x2eef969c74e0d802:introduction-to-acid-base-reactions/v/conjugate-acid-base-pairs-acids-and-bases-chemistry-khan-academy
# Conjugate acid-base pairs

AP.Chem: TRA-2 (EU), TRA-2.A (LO), TRA-2.A.1 (EK), TRA-2.B (LO), TRA-2.B.1 (EK), TRA-2.B.2 (EK), TRA-2.B.3 (EK)

## Video transcript

- [Voiceover] In this video, we're going to be talking about conjugate acid-base pairs. We're going to introduce the idea of a conjugate acid-base pair using an example reaction. The example reaction is between hydrogen fluoride, or HF, and water. So hydrogen fluoride is a weak acid, and when you put it in water, it will dissociate partially. So some of the HF will dissociate, and you'll get fluoride minus ions, and this dissociated H plus ion will get donated to our water. So water then becomes H3O plus, or hydronium. And so this process is in dynamic equilibrium, because it can go forward, and it can go backward, and eventually those two rates are equal, and they're both happening at the same time.

So in this reaction, we have a couple of things going on, and we're gonna think about it in terms of hydrogen ions being exchanged. If we just look at the hydrofluoric acid, and we look in the forward direction, our HF is becoming F minus. And it's doing that by donating, or losing (so I'll put a minus for losing), a proton. So our HF loses a proton, and that forms our F minus, or fluoride ion. And then we can look at that same process happening in the backwards reaction. So if we look at the backwards reaction, which is also happening, the fluoride ion can pick up, or accept, a proton from somewhere. So it can pick up an H plus, and when fluoride accepts a proton, we reform our HF. So we can see that HF and F minus have this special relationship where you can form one or the other by losing or gaining a proton.

And we can see a similar relationship between water and hydronium. We said water is accepting a proton from HF, so we see that water will gain a proton, and that will give us hydronium. In the reverse reaction, hydronium can lose a proton to reform water. So again we have these two species, water and hydronium, that are related to each other by having, or not having, one H plus.

In chemistry, we call species that are related in this way conjugate acid-base pairs. So the official definition, or my official definition, of a conjugate acid-base pair is two species that are related to each other by one H plus. In this case, we have HF and F minus that are related to each other by that one H plus, and so HF and F minus are a conjugate acid-base pair. We also have water and hydronium, which are also related by that one H plus, so water and H3O plus are also a conjugate acid-base pair.

You can probably tell from the name, but whenever you have a conjugate acid-base pair, one thing in the pair will be an acid, and the other thing will always be a base. The definition of which one is the acid and which one is the base comes from the Bronsted-Lowry definition of acids and bases. So the Bronsted-Lowry definition says anything that can donate an H plus, anything that will give away an H plus, is an acid. So we can see that, in this case, our hydrofluoric acid is acting as the acid in the conjugate acid-base pair.
And that means that fluoride has to be acting as the base. That makes sense, because the Bronsted-Lowry definition of a base is something that will accept an H plus, and that's exactly what it does in the reverse reaction: your F minus will pick up an H plus and go back to your acid. We can also look at water and H3O plus. Here, water is gaining a proton, or accepting it, so water is acting as a base. And in the reverse reaction, H3O plus is donating a proton, so H3O plus is acting as an acid.

The relationship between conjugate acid-base pairs can be written a little more generally. If we represent any generic acid as HA, this is our acid. We said that an acid is something that donates a proton. So it will lose the proton, and when it does that, it will form the conjugate base, which is represented by A minus. In the reverse reaction, our base, A minus, can gain a proton and remake our acid, or conjugate acid. So whenever you have two species that have basically the same formula, which we abbreviated here as A minus, except that one has an H plus and one doesn't, then you know you have a conjugate acid-base pair.

So let's look at some more examples of conjugate acid-base pairs. We saw above that for HF, or hydrofluoric acid, its conjugate base is F minus. So here HF is our acid, and when it loses that proton, we are left with F minus. We saw in the same reaction that water can act as a base. So if water is our A minus, and that water accepts a proton, it forms the conjugate acid H3O plus. The example we've gone through so far, HF, is a weak acid. But we can also talk about the conjugate base of a strong acid, like hydrochloric acid. HCl is a strong acid, so it completely dissociates. It gives away all of its protons, and when it does that, we're left with the conjugate base, chloride. So even though chloride isn't particularly basic, it's still the conjugate base of HCl.

And last but not least, we're gonna go through two examples where it looks like we might have a conjugate acid-base pair, but we actually don't. One example is the relationship between H3O plus and OH minus. If we think of our acid up here being H3O plus, and we lose one proton, we saw that its conjugate base is water. If water loses another proton, we get OH minus. So the difference between these two species is two protons instead of one. Hydronium and hydroxide are therefore not a conjugate acid-base pair, because they differ by two protons instead of one. And the last example: we said that fluoride is the conjugate base of HF. So what about the relationship between sodium fluoride and fluoride? These two are also not a conjugate acid-base pair, because if we take our fluoride ion and it accepts a proton, we don't get sodium fluoride; they are related by a sodium ion, not a proton. So by definition, these two are not a conjugate acid-base pair.

So in this video, we learned that a conjugate acid-base pair is when you have two species with the same formula, except one has an extra proton. The acid has the extra proton, which it can lose to form the base.
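To summarize the pattern from the transcript in one scheme (the labels here are added for clarity):

$$\underset{\text{acid}}{\mathrm{HA}} \;\rightleftharpoons\; \underset{\text{conjugate base}}{\mathrm{A^-}} + \mathrm{H^+}, \qquad \text{e.g.}\quad \underset{\text{acid}_1}{\mathrm{HF}} + \underset{\text{base}_2}{\mathrm{H_2O}} \;\rightleftharpoons\; \underset{\text{base}_1}{\mathrm{F^-}} + \underset{\text{acid}_2}{\mathrm{H_3O^+}}.$$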
http://math.stackexchange.com/tags/representation-theory/hot
# Tag Info

**3** The natural surjection $A\to A/\text{rad}A$ induces a surjection $A^\times\to (A/\text{rad}A)^\times$, and so $(A/\text{rad}A)^\times$ is path connected. $A/\text{rad}A$ is a finite-dimensional commutative semisimple $\mathbb{R}$-algebra, and so is a finite product of copies of $\mathbb{R}$ and $\mathbb{C}$, and since its group of units is path connected ...

**3** I believe you have the wrong copy of $\mathbb{C}$ inside $\mathbb{C}[G]$. Instead, you want to take $\mathbb{C}\cong\{c\cdot\delta_e\mid c\in\mathbb{C}\}$, where $\delta_e$ is the function $$\delta_e(g)=\begin{cases}1&\mbox{if }g=e\\0&\mbox{otherwise.}\end{cases}$$ In fact, there is an isomorphism of algebras ...

**3** To explain what I said on MO (and also what goes on in knsam's answer): the way to think about this is that since the group permutes the given basis vectors, it fixes the sum of all the given basis vectors. This gives a $1$-dimensional invariant submodule.

**2** Hint. Arguably, the simplest invariant subspace would be one of dimension $1$. What would such a thing be? Do you see any such subspace in this case?

**1** Every unitary matrix is diagonalizable, but $\rho(x)$ is not diagonalizable unless $x=0$ (its only eigenvector, up to scaling, is $(1,0)$).

**1** The integral sign comes by fiat. The fundamental theorem of calculus tells us that the integral of a continuous function is (continuously) differentiable, so introducing an integral gives us a more regular function to work with. While we only know that $\pi$ is continuous, we know that the integral of $\pi$ is differentiable, and therefore we introduce the ...

Only top voted, non community-wiki answers of a minimum length are eligible
http://anr-sequoia.gforge.inria.fr/Publications/b2hd-GIRRR-CoRR16.html
## (Universal) Unconditional Verifiability in E-Voting without Trusted Parties

Gina Gallegos-García, Vincenzo Iovino, Alfredo Rial, Peter B. Rønne, and Peter Y. A. Ryan. (Universal) Unconditional Verifiability in E-Voting without Trusted Parties. CoRR, abs/1610.06343, 2016.

### Abstract

In traditional e-voting protocols, privacy is often provided by a trusted authority that learns the votes and computes the tally. Some protocols replace the trusted authority by a set of authorities, and privacy is guaranteed if less than a threshold number of authorities are corrupt. For verifiability, stronger security guarantees are demanded. Typically, corrupt authorities that try to fake the result of the tally must always be detected. To provide verifiability, many e-voting protocols use Non-Interactive Zero-Knowledge proofs (NIZKs). Thanks to their non-interactive nature, NIZKs allow anybody, including third parties that do not participate in the protocol, to verify the correctness of the tally. Therefore, NIZKs can be used to obtain universal verifiability. Additionally, NIZKs also improve usability because they allow voters to cast a vote using a non-interactive protocol. The disadvantage of NIZKs is that their security is based on setup assumptions such as the common reference string (CRS) or the random oracle (RO) model. The former requires a trusted party for the generation of a common reference string. The latter, though a popular methodology for designing secure protocols, has been shown to be unsound. In this paper, we address the design of an e-voting protocol that provides verifiability without any trust assumptions, where verifiability here is meant without eligibility verification. We show that Non-Interactive Witness-Indistinguishable proofs (NIWI) can be used for this purpose. The e-voting scheme is private under the Decision Linear assumption, while verifiability holds unconditionally. To our knowledge, this is the first private e-voting scheme with perfect universal verifiability, i.e. one in which the probability of a fake tally not being detected is 0, and with non-interactive protocols that does not rely on trust assumptions.

### BibTeX

@article{GIRRR-CoRR16,
  abstract = {In traditional e-voting protocols, privacy is often provided by a trusted authority that learns the votes and computes the tally. Some protocols replace the trusted authority by a set of authorities, and privacy is guaranteed if less than a threshold number of authorities are corrupt. For verifiability, stronger security guarantees are demanded. Typically, corrupt authorities that try to fake the result of the tally must always be detected. To provide verifiability, many e-voting protocols use Non-Interactive Zero-Knowledge proofs (NIZKs). Thanks to their non-interactive nature, NIZKs allow anybody, including third parties that do not participate in the protocol, to verify the correctness of the tally. Therefore, NIZKs can be used to obtain universal verifiability. Additionally, NIZKs also improve usability because they allow voters to cast a vote using a non-interactive protocol. The disadvantage of NIZKs is that their security is based on setup assumptions such as the common reference string (CRS) or the random oracle (RO) model. The former requires a trusted party for the generation of a common reference string.
The latter, though a popular methodology for designing secure protocols, has been shown to be unsound.\par In this paper, we address the design of an e-voting protocol that provides verifiability without any trust assumptions, where verifiability here is meant without eligibility verification. We show that Non-Interactive Witness-Indistinguishable proofs (NIWI) can be used for this purpose. The e-voting scheme is private under the Decision Linear assumption, while verifiability holds unconditionally. To our knowledge, this is the first private e-voting scheme with perfect universal verifiability, i.e. one in which the probability of a fake tally not being detected is 0, and with {\em non-interactive} protocols that does not rely on trust assumptions.},
  author = {Gina Gallegos{-}Garc{\'{\i}}a and Vincenzo Iovino and Alfredo Rial and Peter B. R{\o}nne and Peter Y. A. Ryan},
  title = {(Universal) Unconditional Verifiability in E-Voting without Trusted Parties},
  journal = {CoRR},
  volume = {abs/1610.06343},
  year = 2016,
  url = {http://arxiv.org/abs/1610.06343},
  lsv-category = {autc},
}
https://par.nsf.gov/biblio/10368340-topsy-turvy-integrating-global-view-sequence-based-ppi-prediction
Topsy-Turvy: integrating a global view into sequence-based PPI prediction

Abstract

Summary: Computational methods to predict protein–protein interaction (PPI) typically segregate into sequence-based 'bottom-up' methods that infer properties from the characteristics of the individual protein sequences, or global 'top-down' methods that infer properties from the pattern of already known PPIs in the species of interest. However, a way to incorporate top-down insights into sequence-based bottom-up PPI prediction methods has been elusive. We thus introduce Topsy-Turvy, a method that newly synthesizes both views in a sequence-based, multi-scale, deep-learning model for PPI prediction. While Topsy-Turvy makes predictions using only sequence data, during the training phase it takes a transfer-learning approach by incorporating patterns from both global and molecular-level views of protein interaction. In a cross-species context, we show it achieves state-of-the-art performance, offering the ability to perform genome-scale, interpretable PPI prediction for non-model organisms with no existing experimental PPI data. In species with available experimental PPI data, we further present a Topsy-Turvy hybrid (TT-Hybrid) model which integrates Topsy-Turvy with a purely network-based model for link prediction that provides information about species-specific network rewiring. TT-Hybrid makes accurate predictions for both well- and sparsely-characterized proteins, outperforming both its constituent components as well as other state-of-the-art PPI prediction methods. Furthermore, running Topsy-Turvy and TT-Hybrid screens is […]

Availability and implementation: https://topsyturvy.csail.mit.edu.

Supplementary information: Supplementary data are available at Bioinformatics online.

NSF-PAR ID: 10368340. Journal: Bioinformatics, Volume 38, Issue Supplement_1, p. i264-i272. ISSN: 1367-4803. Publisher: Oxford University Press. National Science Foundation.

##### More Like this

1. Abstract Background: Protein–protein interaction (PPI) is vital for life processes, disease treatment, and drug discovery. The computational prediction of PPI is relatively inexpensive and efficient when compared to traditional wet-lab experiments. Given a new protein, one may wish to find whether the protein has any PPI relationship with other existing proteins. Current computational PPI prediction methods usually compare the new protein to existing proteins one by one in a pairwise manner. This is time consuming. Results: In this work, we propose a more efficient model, called deep hash learning protein-and-protein interaction (DHL-PPI), to predict all-against-all PPI relationships in a database of proteins. First, DHL-PPI encodes a protein sequence into a binary hash code based on deep features extracted from the protein sequences using deep learning techniques. This encoding scheme enables us to turn the PPI discrimination problem into a much simpler searching problem. The binary hash code for a protein sequence can be regarded as a number. Thus, in the pre-screening stage of DHL-PPI, the string matching problem of comparing a protein sequence against a database with $M$ proteins can be transformed into a much simpler problem: to find a number inside a sorted array of length $M$ (a minimal sketch of this pre-screening search appears after these abstracts). This pre-screening process narrows down the […] Conclusions: The experimental results confirmed that DHL-PPI is feasible and effective.
Using a dataset with strictly negative PPI examples of four species, DHL-PPI is shown to be superior or competitive when compared to the other state-of-the-art methods in terms of precision, recall or F1 score. Furthermore, in the prediction stage, the proposed DHL-PPI reduced the time complexity from $O(M^2)$ to $O(M\log M)$ for performing an all-against-all PPI prediction for a database with $M$ proteins. With the proposed approach, a protein database can be preprocessed and stored for later search using the proposed encoding scheme. This can provide a more efficient way to cope with the rapidly increasing volume of protein datasets.

2. Abstract Motivation: As an increasing amount of protein–protein interaction (PPI) data becomes available, their computational interpretation has become an important problem in bioinformatics. The alignment of PPI networks from different species provides valuable information about conserved subnetworks, evolutionary pathways and functional orthologs. Although several methods have been proposed for global network alignment, there is a pressing need for methods that produce more accurate alignments in terms of both topological and functional consistency. Results: In this work, we present a novel global network alignment algorithm, named ModuleAlign, which makes use of local topology information to define a module-based homology score. Based on a hierarchical clustering of functionally coherent proteins involved in the same module, ModuleAlign employs a novel iterative scheme to find the alignment between two networks. Evaluated on a diverse set of benchmarks, ModuleAlign outperforms state-of-the-art methods in producing functionally consistent alignments. By aligning Pathogen–Human PPI networks, ModuleAlign also detects a novel set of conserved human genes that pathogens preferentially target to cause pathogenesis. Availability: http://ttic.uchicago.edu/∼hashemifar/ModuleAlign.html Contact: canzar@ttic.edu or j3xu.ttic.edu Supplementary information: Supplementary data are available at Bioinformatics online.

3. Abstract Motivation: Computational methods for compound–protein affinity and contact (CPAC) prediction aim at facilitating rational drug discovery by simultaneous prediction of the strength and the pattern of compound–protein interactions. Although the desired outputs are highly structure-dependent, the lack of protein structures often makes structure-free methods rely on protein sequence inputs alone. The scarcity of compound–protein pairs with affinity and contact labels further limits the accuracy and the generalizability of CPAC models. Results: To overcome the aforementioned challenges of structure naivety and labeled-data scarcity, we introduce cross-modality and self-supervised learning, respectively, for structure-aware and task-relevant protein embedding. Specifically, protein data are available in both modalities of 1D amino-acid sequences and predicted 2D contact maps that are separately embedded with recurrent and graph neural networks, respectively, as well as jointly embedded with two cross-modality schemes. Furthermore, both protein modalities are pre-trained under various self-supervised learning strategies, by leveraging massive amounts of unlabeled protein data. Our results indicate that individual protein modalities differ in their strengths of predicting affinities or contacts.
Proper cross-modality protein embedding combined with self-supervised learning improves model generalizability when predicting both affinities and contacts for unseen proteins. Availability and implementation: Data and source codes are available at https://github.com/Shen-Lab/CPAC. Supplementary information: Supplementary data are […]

4. Abstract Motivation: Transferring knowledge between species is challenging: different species contain distinct proteomes and cellular architectures, which cause their proteins to carry out different functions via different interaction networks. Many approaches to protein functional annotation use sequence similarity to transfer knowledge between species. These approaches cannot produce accurate predictions for proteins without homologues of known function, as many functions require cellular context for meaningful prediction. To supply this context, network-based methods use protein-protein interaction (PPI) networks as a source of information for inferring protein function and have demonstrated promising results in function prediction. However, most of these methods are tied to a network for a single species, and many species lack biological networks. Results: In this work, we integrate sequence and network information across multiple species by computing IsoRank similarity scores to create a meta-network profile of the proteins of multiple species. We use this integrated multispecies meta-network as input to train a maxout neural network with Gene Ontology terms as target labels. Our multispecies approach takes advantage of more training examples, and consequently leads to significant improvements in function prediction performance compared to two network-based methods, a deep learning sequence-based method and the BLAST annotation method used in the […]

5. Abstract Motivation: Most proteins perform their biological functions through interactions with other proteins in cells. Amino acid mutations, especially those occurring at protein interfaces, can change the stability of protein–protein interactions (PPIs) and impact their functions, which may cause various human diseases. Quantitative estimation of the binding affinity changes (ΔΔGbind) caused by mutations can provide critical information for protein function annotation and genetic disease diagnoses. Results: We present SSIPe, which combines protein interface profiles, collected from structural and sequence homology searches, with a physics-based energy function for accurate ΔΔGbind estimation. To offset the statistical limits of the PPI structure and sequence databases, amino acid-specific pseudocounts were introduced to enhance the profile accuracy. SSIPe was evaluated on large-scale experimental data containing 2204 mutations from 177 proteins, where training and test datasets were stringently separated with the sequence identity between proteins from the two datasets below 30%. The Pearson correlation coefficient between estimated and experimental ΔΔGbind was 0.61 with a root-mean-square error of 1.93 kcal/mol, which was significantly better than the other methods. Detailed data analyses revealed that the major advantage of SSIPe over other traditional approaches lies in the novel combination of the physical energy function with the new knowledge-based interface profile.
SSIPe also considerably […] Availability and implementation: Web-server/standalone program, source code and datasets are freely available at https://zhanglab.ccmb.med.umich.edu/SSIPe and https://github.com/tommyhuangthu/SSIPe. Supplementary information: Supplementary data are available at Bioinformatics online.
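As promised above, here is a minimal sketch of the pre-screening idea from the DHL-PPI abstract (item 1): once each protein is reduced to a fixed-length binary hash code, an all-against-all comparison reduces to lookups in a sorted array. The hash values below are toy placeholders; in the actual method the codes come from the paper's deep sequence encoder.

```java
import java.util.Arrays;

public class HashPrescreen {
    // O(log M) membership test against a sorted array of M hash codes.
    static boolean prescreen(long[] sortedHashes, long queryHash) {
        return Arrays.binarySearch(sortedHashes, queryHash) >= 0;
    }

    public static void main(String[] args) {
        // Toy 4-bit hash codes standing in for deep-feature-derived codes.
        long[] db = {0b0011L, 0b1110L, 0b0101L, 0b1001L};
        Arrays.sort(db); // one-time preprocessing of the protein database
        System.out.println(prescreen(db, 0b0101L)); // true: candidate pair, examine further
        System.out.println(prescreen(db, 0b1111L)); // false: pruned without a pairwise comparison
    }
}
```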
https://www.gurobi.com/documentation/8.0/refman/java_grbmodel_getpwlobj.html
GRBModel.getPWLObj()

Retrieve the piecewise-linear objective function for a variable. The return value gives the number of points that define the function, and the x and y arguments give the x and y coordinates of the points, respectively. The x and y arrays must be large enough to hold the result. Call this method with null values for x and y if you just want the number of points. Refer to the description of setPWLObj for additional information on what the values in x and y mean.

int getPWLObj ( GRBVar var, double[] x, double[] y )

Arguments:

var: The variable whose objective function is being retrieved.

x: The x values for the points that define the piecewise-linear function. These will always be in non-decreasing order.

y: The y values for the points that define the piecewise-linear function.

Return value: The number of points that define the piecewise-linear objective function.
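A short usage sketch of the two-call pattern implied above: the first call with null arrays only returns the point count, which sizes the arrays for the second call. The model and variable are assumed to already exist with a piecewise-linear objective set via setPWLObj.

```java
import gurobi.*;

public class PWLObjDemo {
    // Prints the breakpoints of v's piecewise-linear objective.
    static void printPWLObj(GRBModel model, GRBVar v) throws GRBException {
        int npts = model.getPWLObj(v, null, null); // first call: number of points only
        double[] x = new double[npts];
        double[] y = new double[npts];
        model.getPWLObj(v, x, y);                  // second call: fill the coordinates
        for (int i = 0; i < npts; i++)
            System.out.println("(" + x[i] + ", " + y[i] + ")");
    }
}
```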
http://openstudy.com/updates/4dd03a1d9fe58b0b6b7538f7
## anonymous 5 years ago

I'm stuck on this problem; can someone tell me if this is right so far? (a) Find the discriminant. (b) Classify the root(s) as 2 real (unequal) roots, 1 (double) root, or 2 imaginary conjugate roots. (c) Find the roots of $x^2-2x=-5$. My answer: $x=1+2i,\ 1-2i$.

1. anonymous

The discriminant is $b^2-4ac$. In your case $a = 1$, $b = -2$ and $c = 5$, because first you have to write $x^2-2x+5=0$ to identify $a$, $b$ and $c$. So you get $(-2)^2-4\times 5=4-20=-16$. Since this is negative, it means you have two imaginary roots. $x^2-2x=-5$, so $(x-1)^2=-5+1=-4$, $x-1=\pm \sqrt{-1}=\pm 2i$, and $x=1\pm 2i$. You got it.

2. anonymous

Typo above. I meant $x-1=\pm \sqrt{-4}=\pm 2i$.

3. anonymous

Yes, thank you.
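The same check, sketched as a tiny program; it is nothing more than the quadratic-formula arithmetic worked above:

```java
public class DiscriminantCheck {
    public static void main(String[] args) {
        double a = 1, b = -2, c = 5;       // from x^2 - 2x + 5 = 0
        double disc = b * b - 4 * a * c;   // (a) discriminant: -16
        if (disc < 0) {                    // (b) two imaginary conjugate roots
            double re = -b / (2 * a);               // real part: 1
            double im = Math.sqrt(-disc) / (2 * a); // imaginary part: 2
            System.out.println("x = " + re + " +/- " + im + "i"); // (c) x = 1 +/- 2i
        }
    }
}
```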
https://docs.mosek.com/latest/cxxfusion/case-studies-miqcqp-sdo-relaxation.html
# 11.10 Semidefinite Relaxation of MIQCQO Problems¶

In this case study we will discuss a fairly common application for Semidefinite Optimization: to define a continuous semidefinite relaxation of a mixed-integer quadratic optimization problem. This section is based on the method by Park and Boyd [PB15].

We will focus on problems of the form:

(11.40)
$$\begin{array}{ll} \minimize & x^T P x + 2q^T x \\ \st & x\in \integral^n \end{array}$$

where $q\in \real^n$ and $P\in \PSD^{n\times n}$ is positive semidefinite. There are many important problems that can be reformulated as (11.40), for example:

• integer least squares: minimize $\|Ax -b\|^2_2$ subject to $x\in \integral^n$,
• closest vector problem: minimize $\|v - z\|_2$ subject to $z\in \{ Bx~|~x\in \integral^n\}$.

Following [PB15], we can derive a relaxed continuous model. We first relax the integrality constraint:

$$\begin{array}{lll} \minimize & x^T P x + 2q^T x & \\ \st & x_i(x_i-1) \geq 0, & i=1,\dots,n. \end{array}$$

The last constraint is still non-convex. We introduce a new variable $X\in \real^{n\times n}$, such that $X = x\cdot x^T$. This allows us to write an equivalent formulation:

$$\begin{array}{ll} \minimize & \trace(PX) + 2q^T x \\ \st & \diag(X) \geq x, \\ & X = x\cdot x^T. \end{array}$$

To get a conic problem we relax the last constraint and apply the Schur complement. The final relaxation follows:

(11.41)
$$\begin{array}{ll} \minimize & \trace(PX) + 2q^T x \\ \st & \diag(X) \geq x, \\ & \left[ \begin{array}{cc} X & x \\ x^T & 1 \end{array}\right] \in \PSD^{n+1}. \end{array}$$

Fusion Implementation

Implementing model (11.41) in Fusion is very simple. We assume the input $n$, $P$ and $q$. Then we proceed by creating the optimization model:

```cpp
Model::t M = new Model();
```

The important step is to define a single PSD variable

$$Z = \left[ \begin{array}{cc} X & x \\ x^T & 1 \end{array}\right] \in \PSD^{n+1}.$$

Our code will create $Z$ and two slices that correspond to $X$ and $x$:

```cpp
Variable::t Z = M->variable("Z", Domain::inPSDCone(n + 1));
Variable::t X = Z->slice(new_array_ptr<int, 1>({0, 0}), new_array_ptr<int, 1>({n, n}));
Variable::t x = Z->slice(new_array_ptr<int, 1>({0, n}), new_array_ptr<int, 1>({n, n + 1}));
```

Then we define the constraints:

```cpp
M->constraint( Expr::sub(X->diag(), x), Domain::greaterThan(0.) );
M->constraint( Z->index(n, n), Domain::equalsTo(1.) );
```

The objective function uses several available linear expressions:

```cpp
M->objective( ObjectiveSense::Minimize, Expr::add(
    Expr::sum( Expr::mulElm( P, X ) ),
    Expr::mul( 2.0, Expr::dot(x, q) )
) );
```

Note that the trace operator is not directly available in Fusion, but it can easily be defined from scratch.

Complete code

Listing 11.22 Fusion implementation of model (11.41).

```cpp
Model::t miqcqp_sdo_relaxation(int n, Matrix::t P, const std::shared_ptr<ndarray<double, 1>> & q) {
  Model::t M = new Model();

  Variable::t Z = M->variable("Z", Domain::inPSDCone(n + 1));
  Variable::t X = Z->slice(new_array_ptr<int, 1>({0, 0}), new_array_ptr<int, 1>({n, n}));
  Variable::t x = Z->slice(new_array_ptr<int, 1>({0, n}), new_array_ptr<int, 1>({n, n + 1}));

  M->constraint( Expr::sub(X->diag(), x), Domain::greaterThan(0.) );
  M->constraint( Z->index(n, n), Domain::equalsTo(1.) );

  M->objective( ObjectiveSense::Minimize, Expr::add(
      Expr::sum( Expr::mulElm( P, X ) ),
      Expr::mul( 2.0, Expr::dot(x, q) )
  ) );

  return M;
}
```

Numerical Examples

We present now some simple numerical experiments for the integer least squares problem:

(11.42)
$$\begin{array}{ll} \minimize & \|Ax-b\|_2^2 \\ \st & x\in \integral^n. \end{array}$$

It corresponds to the problem (11.40) with $P=A^TA$ and $q=-A^Tb$. Following [PB15] we will generate the input data by taking all entries of $A$ from the normal distribution $\mathcal{N}(0,1)$ and setting $b=Ac$ where $c$ comes from the uniform distribution on $[0,1]$.

We implement the linear algebra operations using the LinAlg module available in MOSEK.

An integer rounding xRound of the solution to (11.41) is a feasible integer solution to (11.42). We can compare it to the actual optimal integer solution xOpt, whenever the latter is available. Of course it is very simple to formulate the integer least squares problem in Fusion:

```cpp
Model::t int_least_squares(int n, Matrix::t A, const std::shared_ptr<ndarray<double, 1>> & b) {
  Model::t M = new Model();

  Variable::t x = M->variable("x", n, Domain::integral(Domain::unbounded()));
  Variable::t t = M->variable("t", 1, Domain::unbounded());

  M->constraint( Expr::vstack(t, Expr::sub(Expr::mul(A, x), b)), Domain::inQCone() );
  M->objective( ObjectiveSense::Minimize, t );

  return M;
}
```

All that remains is to compare the values of the objective function $\|Ax-b\|_2$ for the two solutions.

Listing 11.23 The comparison of two solutions.

```cpp
// problem dimensions
int n = 20;
int m = 2 * n;

auto c = new_array_ptr<double, 1>(n);
auto A = new_array_ptr<double, 1>(n * m);
auto P = new_array_ptr<double, 1>(n * n);
auto b = new_array_ptr<double, 1>(m);
auto q = new_array_ptr<double, 1>(n);

std::generate(A->begin(), A->end(), std::bind(normal_distr, generator));
std::generate(c->begin(), c->end(), std::bind(unif_distr, generator));
std::fill(b->begin(), b->end(), 0.0);
std::fill(q->begin(), q->end(), 0.0);

// P = A^T A
syrk(MSK_UPLO_LO, MSK_TRANSPOSE_YES, n, m, 1.0, A, 0., P);
for (int j = 0; j < n; j++)
  for (int i = j + 1; i < n; i++)
    (*P)[i * n + j] = (*P)[j * n + i];

// q = -P c, b = A c
gemv(MSK_TRANSPOSE_NO, n, n, -1.0, P, c, 0., q);
gemv(MSK_TRANSPOSE_NO, m, n, 1.0, A, c, 0., b);

// Solve the problems
{
  Model::t M = miqcqp_sdo_relaxation(n, Matrix::dense(n, n, P), q);
  Model::t Mint = int_least_squares(n, Matrix::dense(n, m, A)->transpose(), b);
  M->solve();
  Mint->solve();

  auto xRound = M->getVariable("Z")->
                slice(new_array_ptr<int, 1>({0, n}), new_array_ptr<int, 1>({n, n + 1}))->level();
  for (int i = 0; i < n; i++) (*xRound)[i] = round((*xRound)[i]);

  auto yRound = new_array_ptr<double, 1>(m);
  auto xOpt = Mint->getVariable("x")->level();
  auto yOpt = new_array_ptr<double, 1>(m);
  std::copy(b->begin(), b->end(), yRound->begin());
  std::copy(b->begin(), b->end(), yOpt->begin());
  gemv(MSK_TRANSPOSE_NO, m, n, 1.0, A, xRound, -1.0, yRound);   // Ax_round - b
  gemv(MSK_TRANSPOSE_NO, m, n, 1.0, A, xOpt, -1.0, yOpt);       // Ax_opt - b

  std::cout << M->getSolverDoubleInfo("optimizerTime") << " "
            << Mint->getSolverDoubleInfo("optimizerTime") << "\n";

  double valRound, valOpt;
  dot(m, yRound, yRound, valRound);
  dot(m, yOpt, yOpt, valOpt);
  std::cout << sqrt(valRound) << " " << sqrt(valOpt) << "\n";
}
```

Experimentally, the objective value for xRound approximates the optimal solution within a factor of $1.1$-$1.4$.
We refer to [PB15] for a more involved iterative rounding procedure, producing integer solutions of even better quality, and for a detailed discussion of test results.
https://www.physicsforums.com/threads/how-obtain-a-wavefunction.443548/
# How to Obtain a Wavefunction

1. Nov 1, 2010

### Waxterz

How do you obtain the wavefunction of a system?

$$\Psi = A e^{kx + wt}$$

I get that you can plug this into the Schrodinger equation, but what I don't get is how you obtain the parameters experimentally.

2. Nov 1, 2010

### Staff: Mentor

The wavenumber and angular frequency are related to the momentum and energy respectively:

$$\Psi = Ae^{i(kx-\omega t)} = Ae^{i(px-Et)/\hbar}$$

The amplitude is related to the relative probability of finding the particle at a particular location. For a pure plane wave (same amplitude everywhere), the probability is uniform. That is, the particle is just as likely to be located one place as any other place.
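In other words, the plane-wave parameters are fixed by measurable quantities through the standard de Broglie and Planck-Einstein relations:

$$k = \frac{2\pi}{\lambda} = \frac{p}{\hbar}, \qquad \omega = 2\pi f = \frac{E}{\hbar},$$

so measuring a particle's momentum and energy determines $k$ and $\omega$, while the amplitude $A$ is fixed by normalizing the probability density $|\Psi|^2$.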
https://electronics.stackexchange.com/questions/190528/simple-lm386-circuit-rc-branch-question?noredirect=1
# Simple LM386 Circuit RC Branch Question

Please take a moment to look at the LM386 circuit below. The circuit comes directly from the LM386 specifications document.

My question revolves around the output branch consisting of the 0.05 microfarad capacitor and the 10 ohm resistor. At first I thought that this branch acted as some kind of RC filter, but then I realized that on all RC filters (as far as I know) filtering is achieved by tapping into the node between the capacitor and the resistor, which is not the case on this branch. So what is this branch supposed to do? Is it for filtering? Also, what effect does the branch consisting of the 250 microfarad capacitor and the speaker have on the output signal? It looks like in the end both branches combine in parallel, and I am having trouble understanding how all these components work together. Thanks.

It's called a Zobel network. The purpose is to neutralize the effect of the inductance of the speaker. The 250 uF simply blocks DC from the output bias point of the amplifier output stage, which normally sits around half the supply voltage with no output, and swings from close to GND to close to the positive rail when it is providing full output.

• I was once asked if Zobel was spelled with one l or two. "One, of course", since I'd always seen it that way. Why the "of course"? I was asked ... without thinking I answered ... "to stop it ringing" – Brian Drummond Sep 15 '15 at 10:32

I didn't read the datasheet, but the 50 nF and 10 Ω on the output are almost certainly there because of a stability requirement of that particular amplifier. The amp must need the 10 Ω for stability, at least at the higher-frequency end of its range, when assuming the worst-case characteristics of the actual load. The rolloff frequency is about 320 kHz, so an octave or two above that and beyond, the cap and resistor combo look like just 10 Ω. The 250 µF in series with the speaker is just to block DC. That reduces the current requirement on the amp, and operates the speaker at a better point anyway.
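As a quick check on the numbers in the last answer, the corner frequency of the series RC branch follows from the standard formula:

$$f_c = \frac{1}{2\pi RC} = \frac{1}{2\pi \times 10\ \Omega \times 0.05\ \mu\mathrm{F}} \approx 318\ \mathrm{kHz},$$

above which the branch's impedance is dominated by the 10 Ω resistor, presenting the amplifier with a predictable resistive load even where the speaker's own impedance rises with frequency due to its inductance.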
https://www.yaclass.in/p/mathematics-cbse/class-6/algebra-3473/patterns-and-variables-3477/re-c973de3b-4a4e-428e-9a50-5948964a1ffa
### Theory:

Can you predict the answer for the following, as per your observation?

1. $$1, 4, 7, 10, 13, 16, 19,$$ __________.

2. _____ (a picture pattern: sets of oranges arranged in rows, each set having two more oranges than the one before).

I think you can predict the next number. Yes, the next number in the first case is $$22$$, because from the beginning we are adding $$3$$ to get the next number; thus it follows the pattern 'adding 3 to the previous number'. In the same way, the next set in the second case is $$10$$ oranges, with $$5$$ oranges in each row; thus it follows the pattern 'adding two more oranges to the previous set of oranges'.

The study of patterns helps us to see relationships, find continuing connections, and produce generalizations and predictions.

• Solving problems in mathematics will be easy if we look for patterns in numbers and objects.
• Patterns always allow us to make an intelligent guess.
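The first pattern is an arithmetic sequence with first term $$1$$ and common difference $$3$$, so the $$n$$-th term can also be found from the general formula:

$$a_n = a_1 + (n-1)d = 1 + 3(n-1) = 3n - 2,$$

which gives the next (eighth) term as $$a_8 = 3 \times 8 - 2 = 22$$, matching the prediction above.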
https://chemistry.stackexchange.com/questions/95227/what-is-the-actual-mechanism-of-fenton-reaction
What is the actual mechanism of the Fenton reaction?

The Fenton reaction was supposedly discovered by Fenton in 1894 as the oxidation of tartaric acid by hydrogen peroxide in the presence of the ferrous cation. Research paper link.

I have worked on this reaction for years, not an exaggeration. Whenever I mixed $\ce{H2O2}$ with an $\ce{Fe^2+}$ solution, or directly with solid ferrous sulfate heptahydrate, I instantaneously got a dark beer-like color and a vigorously reacting mixture, without adjusting the $\mathrm{pH}$. $\pu{10 ml}$ of 35% $\ce{H2O2}$ and $\pu{150 mg}$ of $\ce{FeSO4.7H2O}$ is mostly what I used. This much peroxide generally places the reaction in the regime defined as Catalyzed Hydrogen Peroxide Propagations by R. J. Watts' group. When mixed with 1 g of soil, the temperature escalated to $\pu{84^\circ C}$ in the first 12 seconds, then slowly calmed; overall bubbling ceased in 60 seconds, and the kind of soil did not make any difference. As for $\mathrm{pH}$ measurement, it is very problematic for many reasons, so I simply could not get any meaningful results. This just increased my curiosity.

There are experimental and computational studies on its mechanism, both of reactive oxygen species generation and of organic matter degradation. The second one is a bit easier, since catching the end products gives the idea, especially with spin-trap compounds. I have lost all of my references, so I can only give the categories I have encountered.

• The majority went for radical-species mechanisms and a chain reaction, involving hydroxyl, perhydroxyl and superoxide radicals, using as a basis Haber & Weiss: Haber, F. & Weiss, J. J. (1934) Proc. R. Soc. London Ser. A 147, 332-351. Yet, while these reactions are inherently fast, they are just proposed mechanisms.

• A considerable number of people argue for the ferryl-ion pathway, which mostly does not involve radicals but an unstable iron(IV) oxidation-state compound. Computational studies usually propose this idea, since the hydroxyl radical's lifetime is too short for it to be an efficient reactant.

The Fenton reaction I am asking about is the reaction between aqueous hydrogen peroxide and the $\ce{Fe^2+}$ ion without any ligand other than water. If it does not help the explanation, you may very well ignore the more than 30 variations, like sono-Fenton.

• The dark beer-like color is due to iron(III) being formed as peroxide oxidizes the iron(II) – Dale Apr 14 '18 at 23:05

The Fenton reaction

The oxidation of organic substrates by iron(II) and hydrogen peroxide ($\ce{H2O2}$) is called the Fenton reaction. It was first described by Henry John Horstman Fenton in 1894, who observed the oxidation of tartaric acid by $\ce{H2O2}$ in the presence of $\ce{Fe^{2+}}$ ions [1]. After more than 100 years, the mechanism of the reaction is still unsettled. The recent review by Dunford (2002) [2] stated that:

> The mechanism of reaction of hexaquo iron(II) with hydrogen peroxide has been unresolved for 70 years. Most scientists, perhaps by default, have accepted the free radical chain mechanism of Barb et al. (1957) [3]. However an earlier proposal involved formation of the ferryl ion, $\ce{FeO2+}$ [4]. Recent work has favored a mechanism involving $\ce{FeO2+}$ and $\ce{FeOFe^{5+}}$ species. Similarly there are differences of opinion on the mechanism of reaction of iron(III), both hexaquo and chelated, with hydrogen peroxide. These differences have fostered a recent burst of activity, with claims on one hand that hydroxyl radicals play a key role, and on the other, that there is a non-free radical mechanism.
> In contrast, the mechanism of reaction of the heme-containing peroxidase and catalase enzymes with hydrogen peroxide, orders of magnitude faster than reactions of iron(II)/(III), now appears to be well established.

According to the free-radical chain mechanism [3], the formation of the radicals $\ce{HO^.}$ and $\ce{HO2^.}$ has been proposed. Although this mechanism has some flaws, it has still been used to explain results in current research as recently as 2014 [5]. The proposed mechanism is depicted in Figure 1, followed by several other well-received mechanisms for the reaction discussed in the reviews cited.

Mechanism of the redox reaction involved in the Fenton's reagent (Figure 1) [5].

Basic reactions and intermediates involved in the classic Fenton and the metal-centered Fenton reactions (Figure 2) [6].

Proposed non-radical mechanism for the Fenton reaction (Figure 3) [6].

Possible reaction pathways for the Fenton reaction in absence of organic substrates (Figure 4) [6].

Requirements of the reaction:

The $\mathrm{pH}$ should be adjusted to 3-5: if the $\mathrm{pH}$ is too high, the iron precipitates as $\ce{Fe(OH)3}$ and will decompose the $\ce{H2O2}$ to oxygen. Basically, the optimal $\mathrm{pH}$ occurs between $3$ and $6$. It's really important to pay attention to the double $\mathrm{pH}$ drop due to the addition of $\ce{FeSO4}$ (the $\ce{Fe^{2+}}$ source) and $\ce{H2O2}$. Indeed, the $\ce{FeSO4}$ catalyst contains residual $\ce{H2SO4}$, and the $\ce{H2O2}$ addition is responsible for the fragmentation of organic material into organic acids. Thus, the iron catalyst should be added as a solution of $\ce{FeSO4}$ with slow addition of $\ce{H2O2}$; that way you may be able to control the sudden variation of $\mathrm{pH}$ and temperature during the reaction. It's better to complete the reaction step by step, with continuous adjustment of these factors.

References:

[1] LXXIII.—Oxidation of tartaric acid in presence of iron: H. J. H. Fenton, J. Chem. Soc., Trans., 1894, 65, 899-910 (DOI: 10.1039/CT8946500899).

[2] Oxidations of iron(II)/(III) by hydrogen peroxide: from aquo to enzyme: H. Brian Dunford, Coordination Chemistry Reviews, 2002, 233-234, 311-318 (https://doi.org/10.1016/S0010-8545(02)00024-3).

[3] Reactions of ferrous and ferric ions with hydrogen peroxide. Part I.—The ferrous ion reaction: W. G. Barb, J. H. Baxendale, P. George, and K. R. Hargrave, Trans. Faraday Soc., 1951, 47, 462-500 (http://dx.doi.org/10.1039/TF9514700462).

[4] Ferryl ion, a compound of tetravalent iron: William C. Bray and M. H. Gorin, Journal of the American Chemical Society, 1932, 54(5), 2124-2125 (DOI: 10.1021/ja01344a505).

[5] Design Polysaccharides of Marine Origin: Chemical Modifications to Reach Advanced Versatile Compounds: Nathalie Chopin, Xavier Guillory, Pierre Weiss, Jean Le Bideau, and Sylvia Colliec-Jouault, Current Organic Chemistry, 2014, 18(7), 867-895 (DOI: 10.2174/138527281807140515152334).

[6] Fenton reaction - Controversy concerning the chemistry: Krzysztof Barbusinski, Ecological Chemistry and Engineering S, 2009, 16(3), 347-358.

I think that the reaction of hydrogen peroxide with ferrous sulfate under acidic conditions (a Fenton-type reaction) is rather similar to radiation chemistry in water. In both cases the $\ce{HO}$ radical is important:

$$\ce{H2O2 + H+ + e- -> H2O + HO}$$

while when water is irradiated it forms H atoms and HO radicals. I used to react pivalic acid ($\ce{Me3CCOOH}$) with ferrous sulfate, sulfuric acid and hydrogen peroxide as the first step of the synthesis of $\ce{CyMe4BTBP}$.
This forms $\ce{HOOCCMe2CH2CH2CMe2COOH}$ by dimerisation of the $\ce{Me2C(CH2)(COOH)}$ radical, which in turn is formed by the abstraction of a hydrogen atom from the pivalic acid.

• I have also found this article regarding your method: sciencedirect.com/science/article/pii/1350417795000356. Quite reasonable in every aspect. I also have a tendency to view this as a radical reaction. But in it I saw some specific details as well, like how hydrogen peroxide displaces a water ligand, and where (from which oxygen) the electron transfer occurs, etc. – Güray Hatipoğlu Apr 14 '18 at 21:24
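For reference, the two steps most often quoted from the Barb et al. free-radical chain mechanism discussed in the answers above are commonly written as:

$$\ce{Fe^2+ + H2O2 -> Fe^3+ + OH- + HO^.}$$

$$\ce{Fe^3+ + H2O2 -> Fe^2+ + H+ + HO2^.}$$

The first step generates the hydroxyl radical; the second regenerates iron(II), which is what sustains the chain.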
https://climateaudit.org/2007/02/19/jones-russian-uhi-study/?like=1&source=post_flair&_wpnonce=78d053c4d8
Jones and the Russian UHI

A couple of years ago, before I got involved in proxy studies, I was interested in the UHI question and wrote to Phil Jones to request the data used in Jones et al 1990, his study purporting to show the unimportance of urban warming. Jones said that it was on a diskette somewhere and too hard to find, observing that the study had been superseded by other studies. ("Moved on"?) Anyway, that was before I was wise to the ways of the Team and I didn't pursue the matter. However, Jones et al 1990 continues to be relied upon; it's cited in recent literature and in AR4. So I thought that it would be interesting to re-visit the matter. I still don't know what sites were used, but something turned up anyway.

Peterson 2003 reviewed the literature on UHI and reported that, even in 2003, Jones et al 1990 was one of only two studies that used "homogeneous" data:

> Only two large-scale studies were found that used homogeneous data. These are the time series analyses of Peterson et al. (1999) and the Russian and Chinese regions of the analyses presented in Jones et al. (1990). These analyses found no indication of significant urban influence on the temperature signal.

Elsewhere in the article, Peterson 2003 stated:

> Jones et al. (1990) determined that the impact of urbanization on hemispheric temperature time series was, at most, 0.05 deg C century-1. This result was based on the work of Karl et al. (1988) for the United States and further analysis of three other regions: European parts of the Soviet Union, eastern Australia, and eastern China. In none of these three regions was there any indication of significant urban influence in either of the two gridded time series relative to the rural series (Jones et al. 1990). The homogeneity assessments varied with region. The data for one region were assessed for artifacts due to factors such as site moves or changing methods used to calculate monthly mean temperatures. Another region used data from stations with few, if any, changes in instrumentation, location or observation times. The homogeneity of the data used in the third region was not discussed. Their results showed that the urbanization influence is, at most, an order of magnitude less than the warming seen on a century scale.

IPCC AR4 cited Jones et al 1990 twice, as follows:

> Urbanization impacts on global and hemispheric temperature trends (Karl et al., 1988; Jones et al., 1990; Easterling et al., 1997; Peterson, 2003; Parker, 2004, 2006) have been found to be small.

and:

> Many local studies have demonstrated that the microclimate within cities is on average warmer, with smaller DTR, than if the city were not there. However, the key issue from a climate change standpoint is whether urban-affected temperature records have significantly biased large-scale temporal trends. Studies that have looked at hemispheric and global scales conclude that any urban-related trend is an order of magnitude smaller than decadal and longer timescale trends evident in the series (e.g., Jones et al., 1990), a result that could partly be attributed to the omission from the gridded dataset of a small number of sites (<1%) with clear urban-related warming trends.

So Jones et al 1990 is still relied on in the literature.
Jones said that they selected 38 non-urban sites in western Russia (the sites are not identified) and that in the 1930-1987 period there was a cooling of about 0.2 deg C in the rural stations (RUSSR), as compared with a lesser cooling of ~0.1 deg C in the Jones network (JUSSR):

> For the western part of the Soviet Union we selected a network of 38 stations from sites in non-urbanized areas with long records (Figure 1a). The sites include isolated meteorological stations, lighthouses, villages and other small settlements. The largest populated sites are nine towns with populations of the order of 10,000 people. All nine towns are located at least 80 km away from major cities. All the site records were assessed for artifacts due to factors such as site moves or changing methods used to calculate monthly mean temperatures. At 12 sites the observing station was moved slightly. Comparisons with neighboring sites were made before and after each change and, where necessary, corrections were made to ensure homogeneity of the rural station record. No corrections were deemed necessary for the remaining 26 stations where no station moves were reported. Over the 1930-87 period, a cooling of about ~0.2 deg C is observed in RUSSR. This cooling is about 0.1 deg C smaller in JUSSR, but there are no statistically significant differences between the two series.

Jones also says that 60 station records were used to construct the gridpoint series, of which 25 stations were in operation by 1901 and 32 were operating in 1987, and that there were 4 stations common to the gridpoint and rural time series. In the caption to Figure 2, he also says of the 38-site rural network that 20 were contributing in 1901; that 7 began recording after 1930; and that the number of missing values from 1930-1987 was 8% of the total. In addition to the ~0.09 deg C cooling trend in the Jones network from 1930 to 1987, he also reported (Table 1) a 0.31 deg C upward trend from 1901-1987.

Their Figure 2a shows a plot of the temperature anomaly from their rural network (the comparandum Jones network plot is not shown). For comparison to their Figure 2a (shown a bit lower down), I made up a network of 41 HadCRUT3 gridpoints (this is a 5×5 network, while Jones used a 5×10 network). The network is shown here and can be compared to Jones et al 1990 Figure 1a.

Figure 1. HadCRUT3 grid selected to approximate the 22-gridpoint grid in Jones et al Nature 1990 Figure 1a.

Next here are plots showing first the rural network from Jones et al 1990 and then the annual average of the 41 gridpoints identified above for 1901-1988, showing the 1930-1987 trendline. The appearance of the two plots is similar. Jones et al say that their figure is for 1901-1987, but the closing uptick occurs in 1988 in HadCRUT3 rather than 1987. Maybe Jones actually plotted to 1988 – who knows? The HadCRUT3 linear trend is 0.24 deg C over the period 1930-1987, as compared to a reported -0.09 deg C cooling in the Jones et al 1990 gridded series (and -0.21 deg C in the rural series). So there is a 0.45 deg C difference in the 1930-1987 period between HadCRUT3 and the rural network used to show that the difference is less than 0.05 deg C per century. Since the Team is involved, one also has to pay attention to little discrepancies like 1987 versus 1988 endpoints – why would they calculate the trend on 1987 rather than 1988 if 1988 is illustrated in the plot?
With the addition of 1988, the linear trend increases to 0.30 deg C over the period, increasing the discrepancy to 0.51 deg C over 1930-1988 between the rural network and HadCRUT3 – as compared to the reported 0.09 deg C in Jones et al 1990.

Figure 2. Top – clipped from Jones et al 1990 Figure 2a showing the rural network; bottom – calculated west Russian network annual average using HadCRUT3 gridcells.

Over the 1901-1987 period, the estimated linear trend in the HadCRUT3 network is 0.44 deg C (1901-1988: 0.49 deg C), as compared with a reported linear trend of 0.38 deg C. This difference between versions is much less than the 1930-1987 difference.

It's hard to figure out exactly what's going on here, as long as Jones refuses to identify the stations or release the data. Despite the many citations, it doesn't appear that anyone, including the IPCC, has ever tried to directly verify these results. Does this study still stand for the proposition that UHI effects have been shown to be inconsequential? Well, the Coordinating Lead Author of this AR4 chapter was, um, Phil Jones. No one ever said that the Team needed a big locker room.

References:
Thomas C. Peterson, 2003. Assessment of Urban Versus Rural In Situ Surface Temperatures in the Contiguous United States: No Difference Found. Journal of Climate 16(18), 2941-2959. http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf
Jones, P.D., et al., 1990: Assessment of urbanization effects in time series of surface air temperature over land. Nature, 347, 169-172. Warwick Hughes, http://www.warwickhughes.com/papers/90lettnat.htm

1. Steve McIntyre Posted Feb 19, 2007 at 4:08 PM | Permalink

Code for these graphics and statistics:

##USSR NETWORK IN JONES ET AL 1990
#for Jones et al Nature 1990 on urban warming
library(fields)
library(ncdf)

#build a 5x9 grid of gridcell centers: lat 47.5-67.5 N, lon 32.5-72.5 E
ussrtest<-cbind( c(t( array(rep( seq(47.5,67.5,5),10) ,dim=c(5,9)) )), rep( seq(32.5,72.5,5),5) )
i<- 37+(ussrtest[,2]-2.5)/5   #longitude index into the HadCRUT3 array
j<- 19+(ussrtest[,1]-2.5)/5   #latitude index into the HadCRUT3 array
temp<-rep(TRUE,nrow(ussrtest))
temp[42:45]<-FALSE            #drop 4 cells to approximate Jones' coverage

#PLOT USSR GRIDPOINTS
plot(xy.coords(ussrtest[,2],ussrtest[,1]),pch=".",type="p",xlim=c(25,82),ylim=c(40,77),ylab="")
points(xy.coords(ussrtest[temp,2],ussrtest[temp,1]),col="red",pch=19)
sum(temp)  #41

loc<-"HadCRUT3.nc"  #path to a local copy of the HadCRUT3 gridded netCDF (filename assumed; set to your copy)
v<-open.ncdf(loc)
instr <- get.var.ncdf( v, v$var[[1]])  # 1850-2006
dim(instr)  #[1] 72 36 1883
#this is organized in 72 longitudes from -177.5 to 177.5 and 36 latitudes from -87.5 to 87.5

K<-sum(temp)
test<- array(NA,dim=c(1883,K))
for (k in 1:K) { test[,k]<-instr[i[temp][k],j[temp][k],] }
dim(test)  # 1883 41
cru<-apply(test,1,mean,na.rm=TRUE)                    #monthly average over the 41 gridcells
cru<- array(c(cru,NA),dim=c(12,1884/12) )
annual<-ts(apply(cru,2,mean,na.rm=TRUE) ,start=1850)  #annual average series
mean(annual[(1951:1980)-1849])  #[1] -0.05615611
mean(annual[(1961:1990)-1849])  #[1] -0.05989007
annual<-annual-mean(annual[(1951:1980)-1849])         #re-center on 1951-1980

#PLOT REPLICATED ANNUAL SERIES
par(mar=c(3,3,1,1))
plot(1901:1988,annual[(1901:1988)-1849],type="s",xlab="",ylab="",las=1)
year<-1930:1987
fm<-lm(annual[year-1849]~year); summary(fm)
lines(year,fm$fitted.values,col="red"); abline(h=0,lty=2)

#CALCULATE LINEAR TRENDS OVER PERIODS
coef(fm)[2]*57  #0.2413054
year<-1930:1988; fm<-lm(annual[year-1849]~year); coef(fm)[2]*57  #0.30
year<-1901:1987; fm<-lm(annual[year-1849]~year); coef(fm)[2]*87  #0.44
year<-1901:1988; fm<-lm(annual[year-1849]~year); coef(fm)[2]*87  #0.49

2. Tim Ball Posted Feb 19, 2007 at 4:17 PM | Permalink
Do we need to be careful? Wasn't there also a problem with this data during the Soviet era because money from Moscow was based on cold temperatures and the numbers were inflated (deflated) to enhance the return?

3. Mark H Posted Feb 19, 2007 at 5:05 PM | Permalink

The more I read, the more it seems that the new area of dubious science is far more fundamental than proxy temperatures; it seems we are not even sure we know real, contemporary temperatures. Leaving aside what this means for calibrating the past against an unknown present, how does this affect all sorts of assumptions about the degree of, and responses to, climate warming? What we know, and how we know it, is a fundamental question – oh well, perhaps it can be "adjusted" by the Hockey Team.

4. tom Posted Feb 19, 2007 at 6:22 PM | Permalink

The meteorological station data was never meant to measure a global mean temperature, and through much manipulation it apparently has become such. Methodologies appear ad hoc and data is lost or missing or overwritten. What the heck is going on here?! The assumptions being made about past, present and future temperature trends are dependent upon this data. I've always been skeptical of these data sets, and so should all scientists be. We are taking the individual micro climate of each station and extrapolating the data to cover many, many square km of the earth's surface based on statistical manipulation. I'm a 20-year in-the-field meteorologist and just shrug my shoulders at all of the claims being made about the data, the future, etc., when in fact we cannot even rely on what we are claiming to be solid evidence in the first place. I am not denying that the trend of the ambiguous data sets is positive over the last century or so, but the magnitude is unknown… ok, how about 0.5C? Heck, I dunno… go ask Jones and Hansen… they seem to. Frustrating.

5. Posted Feb 19, 2007 at 6:26 PM | Permalink

I understand that a very significant fraction of the Soviet weather stations stopped recording data after the fall of the Soviet Union. There is potential for a bias between these two eras as well.

6. jae Posted Feb 19, 2007 at 6:33 PM | Permalink

Boy, those nearly decadal cycles in temperature sure resemble solar cycles… So all you have to do to make data homogeneous is perform some adjustments? LOL. I really have a hard time accepting that there are no significant UHI or land use change effects on temperature. It defies common sense, as well as many other studies. As I have said before, I'm amazed that so many scientists and the IPCC blindly accept Jones' analyses, especially when he refuses to provide data and methodology. They continue to make their science less and less credible.

7. Steve Reynolds Posted Feb 19, 2007 at 9:03 PM | Permalink

It would seem that the UHI effect could be studied without relying on old records. Has anyone just taken measurements from a high density grid around some cities and looked at the variation as a function of distance from the city?

8. Steve McIntyre Posted Feb 19, 2007 at 9:24 PM | Permalink

#7. There are lots of analyses claiming to show a UHI effect. Merely google urban heat island. It's just that Peterson and Jones and Easterling and Karl deny that this proves anything, and the IPCC accepts these studies. The arguments from Jones, etc. are essentially statistical arguments; that's one reason why I'd like to see the data. The amusing thing about this Russian data is that Jones' results are not replicable using HadCRUT3 data. But hey, they're the Team.
If their results can't be replicated, well, they've "moved on".

9. K Posted Feb 19, 2007 at 9:29 PM | Permalink

#2 Yes, we should be careful about Soviet data from the more remote areas. But it seems better to use it and cite reservations than to adjust and henceforth use the new 'facts'. Which brings me to #3. I pretty much agree. It looks as if the data is simply not sufficient to resolve this GW quandary. Able people on each side cite arguments – some very strong – for uncertainty or bias in any data they find disagreeable. This is why I think Steve M. pursues the best method – insist on openness so that all work can be examined and, if needed, examined again. That, at least, finds procedural weaknesses and outright mistakes. IMO, measurements and work in the next five to ten years will produce more clarity than everything done to date.

10. Terry Posted Feb 19, 2007 at 9:44 PM | Permalink

Looking at the Jones rural plot, it sure looks to me like there is an upward trend in the data — it looks to be about 0.4 degrees over the period. Something isn't right here.

11. George Taylor Posted Feb 19, 2007 at 11:22 PM | Permalink

Joe d'Aleo sent me a nice chart showing global station numbers and estimated global temperatures — I believe from GHCN. In an amazing coincidence, temperatures went up just when the station numbers dropped precipitously (again, mostly because of USSR dropouts). The axes aren't labeled, but it's easy to see which is which.

12. George Taylor Posted Feb 19, 2007 at 11:23 PM | Permalink

Here:

13. George Taylor Posted Feb 19, 2007 at 11:25 PM | Permalink

Gosh, can't get the link tool to work. Tried to paste the image into the message. I guess I'll just have to point to the link:

14. George Taylor Posted Feb 19, 2007 at 11:26 PM | Permalink

15. Ralph Becket Posted Feb 19, 2007 at 11:30 PM | Permalink

#7, 8. How hard would it be to do a rough UHI analysis from available data? Say, by calculating mean temperature within latitude bands for the US, including/excluding data from weather stations within 25 km of major population centres. I'd be very curious to see the results.

16. Louis Hissink Posted Feb 20, 2007 at 12:41 AM | Permalink

Re #7, I have recently acquired the hardware to actually measure UHI by using a vehicle and traversing some country towns in SW Western Australia. It is one of those jobs that are scheduled once time is found to do it. Right now I have the temperature logger logging static temps in one spot to get an idea of what the diurnal change is (all data is downloaded onto the PC into the logger software). I have a gut feeling that I need another temp logger as a static base station so that the diurnal drift can be independently logged over time while the survey logger goes out and does its UHI measurement (a standard geophysical technique in which the base station signal is subtracted from the roving one to remove diurnal drift – and so far the drift is anything but linear, from visual inspection). So the project is in the design stage until the second logger is acquired. At the same time a specific GPS logger will locate the position of the temp probe. I have 3 months in Perth to do this, among other things, and I'll send Steve the raw data once it's collected (as well as to Warwick and Brooks, who thought up this little project). Anyone else interested in getting the raw data should email me directly rather than here, as I don't have the time to find out where I posted something here since I forget where I might have posted it.
This is going to be a fun project I think.

17. Posted Feb 20, 2007 at 1:08 AM | Permalink

So Jones et al 1990 is still relied on in the literature.

Yes, most recently in Brohan et al. However, in that paper Jones is linked via Folland et al. 2001:

The previous analysis of urbanisation effects in the HadCRUT dataset [Folland et al., 2001] recommended a 1$\sigma$ uncertainty which increased from 0 in 1900 to 0.05 C in 1990 (linearly extrapolated after 1990) [Jones et al., 1990].

In Folland 2001 they just made Jones' 0.05 C 'more statistical' by saying that it is a one-sigma value. And claimed that this uncertainty is symmetrical. How to refute a study without doing anything (Brohan):

The studies finding a large urbanisation effect [Kalnay & Cai, 2003, Zhou et al., 2004] are based on comparison of observations with reanalyses, and assume that any difference is entirely due to biases in the observations. A comparison of HadCRUT data with the ERA-40 reanalysis [Simmons et al., 2004] demonstrated that there were sizable biases in the reanalysis, so this assumption cannot be made, and the most reliable way to investigate possible urbanisation biases is to compare rural and urban station series.

How to avoid extra work (Folland):

We have not accounted for other changes in land use as their effects have not been assessed.

And this should be easy when compared to bucket adjustments 😉

18. Jim Johnson Posted Feb 20, 2007 at 1:17 AM | Permalink

#15 The results would not be sufficient. The biggest problem with the concept of UHI is that it is called UHI. It incorrectly implies that local, non-GHG, man-made heat effects are restricted to urban areas. A half an acre of blacktop in the middle of nowhere, or six sections of irrigated nowhere, can have plenty of effect on the local microclimate around a weather station.

19. TAC Posted Feb 20, 2007 at 5:14 AM | Permalink

I have not looked at the Jones et al. study in detail, and the writing is sufficiently opaque that I'm not sure what they really did. In any case, it seems to me there is a fundamental problem with it. It seems to test the hypothesis that urban areas exhibit faster temperature growth than do rural areas. However, this is not the relevant question; in fact, it appears to be nothing more than a poorly disguised strawman. The important question, IMHO, is whether the process of urbanization induces changes that would show up as trends in temperature records. Simply stated, trend slopes should be expected to reflect the rate of landuse change — the rate of development of rural land into increasingly dense cities — not the level of urbanization that has occurred in the past. On a static planet (which may bear some resemblance to Russia during this period), there would be no reason to expect trends in either the rural or urban areas. Jones et al. seem to have used a categorical predictor variable for urban/rural, and then considered the trends for the two categories of landuse. This makes no sense at all. Based on physical arguments, an area classified as rural under this scheme would likely exhibit an upward temperature trend as a component of urbanization; a fully developed "urban" area would exhibit no trend. A much better predictor would be the continuous variable "degree of urbanization". There also seem to be other statistical problems with this study (peer review has its limits 😉 ). The study should have employed paired data to reduce the variability (which apparently is what Karl's study in the U.S. did). Similarly, AFAICT the use of grid-cell temperatures in the study does nothing more than add fog. If I am in error on any of these points, I would appreciate hearing about it.
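The point in #19 is easy to see in a toy simulation (all numbers invented purely for illustration; this is not code from any of the studies discussed): if the "rural" comparators urbanize at the same rate as the urban sites, the categorical urban-minus-rural test reports "no difference" even though both sets of records carry the same spurious trend.

#Toy simulation of the point in #19 (invented numbers)
set.seed(42)
nyr <- 58                              #a 1930-1987-length record
yr <- 1:nyr
climate <- rep(0, nyr)                 #assume zero true climate trend
uhi <- 0.004*yr                        #both site classes urbanizing at the same *rate*
urban <- climate + uhi + rnorm(nyr, sd=0.3)
rural <- climate + uhi + rnorm(nyr, sd=0.3)
coef(lm(urban - rural ~ yr))[2]*nyr    #difference of trends: ~0 ("no UHI detected")
coef(lm(urban ~ yr))[2]*nyr            #yet each record carries ~0.23 deg C of spurious warming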
20. MarkW Posted Feb 20, 2007 at 5:55 AM | Permalink

George T: I haven't seen a breakdown of the Soviet stations, but it seems likely to me that the stations most likely to be dropped by the Soviets when money was cut were the most rural stations, which also happened to be those stations furthest to the north.

21. Hans Erren Posted Feb 20, 2007 at 6:03 AM | Permalink

Considering census data as an independent measure for UHI, I found that it matters when comparing Brussels and De Bilt; however, (unreported) station moves have a far bigger impact. http://home.casema.nl/errenwijlens/co2/homogen.htm (Apologies, the url is a bit of a hodgepodge, I still need to clean it up.)

22. JPK Posted Feb 20, 2007 at 6:57 AM | Permalink

I would just be very careful using old Soviet data. While urban airports are easy to verify, rural weather stations are suspect – especially if they were located near a military installation. Jae, I was thinking the same thing. As a matter of fact, I thought the graph somewhat resembled the PDO.

23. Florens de Wit Posted Feb 20, 2007 at 7:36 AM | Permalink

Is there any basis to the notion, implicit in studies like Jones et al., that a global UHI bias can be assumed to be homogeneous in space and time? After all, in the IPCC TAR UHIs were corrected for by subtracting a single linear trend for global temperatures, and this seems based on studies like this. It seems to me that it would be far more reasonable to correct individual stations (or classes thereof) in the dataset and recalculate the global average.

24. Steve McIntyre Posted Feb 20, 2007 at 7:55 AM | Permalink

#19. I agree entirely. The original formulations of UHI (by, for example, Oke) actually proposed a log(population) heuristic to deal with landscape changes in rural areas. A log(population) formula is obviously sensitive only to changes in population rather than population itself. I'll bet that Jones' Chinese sites, for example, if and when they are ever identified, will prove to be Chinese towns and cities of different sizes, none of which are "rural" in landscape terms. Like you, I am amazed at the inability of these guys to perform elementary statistical studies without botching them – and then the perpetuation of these flawed studies in the literature. I'm not saying that there are no sensible observations in these studies. Population or log(population) is itself only a type of proxy for landscape changes. Peterson, for example, points out that urban areas can have cool "park" islands and that growth of trees in suburbs is a type of cooling effect, and acknowledges that urban-type landscape changes can take place at rural sites. If Toronto is any guide, landscape changes around the Toronto airport have been much more extreme over the last 50 years than around the University of Toronto (if that's where the old station was), so the move to the airport might actually enhance a UHI signal.
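The arithmetic behind the log(population) heuristic makes the point in #24 directly (the coefficient a below is an arbitrary placeholder, not a fitted value): the change in UHI between two dates depends only on the ratio of populations, not on their absolute size.

#log(population) UHI heuristic; coefficient 'a' is an arbitrary placeholder
uhi <- function(pop, a = 1) a*log10(pop)
uhi(2e6) - uhi(1e6)   #big city doubling:   a*log10(2) ~ 0.30*a
uhi(2e4) - uhi(1e4)   #small town doubling: the identical change
uhi(2e6) - uhi(2e4)   #a static big-vs-small gap is large, but contributes no trend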
25. Steve McIntyre Posted Feb 20, 2007 at 7:57 AM | Permalink

#23, "a global UHI bias can be assumed to be homogeneous in space and time?" I don't think that they assume that at all. Can you provide a citation?

26. Dave Dardinger Posted Feb 20, 2007 at 8:47 AM | Permalink

Something which bothers me in the whole UHI discussion is that there doesn't seem to be a clear distinction between the UHI effect on global temperature and the UHI effect on the measurement of global temperature. Cities and towns are still only a small part of the total surface area of the planet, or even the land area of the planet. Therefore, once one knows how large the UHI bias of temperatures is, it can be spread out and will probably be pretty small, as in the Jones-type claims. But this assumes we accurately know the temperature of the rest of the planet, i.e. the "rural" portions. But as we're discussing here, it's difficult to know what the true rural temperature is because there are so many ways for a station, even in Antarctica, for instance, to be contaminated with human environmental modification. Thus in that particular case, if there are diesel vehicles they might spread excess carbon particulates in the neighborhood, which would change the albedo locally and thus the amount of heat absorbed or emitted. This almost surely would increase measured temperatures, even if the station were isolated from a base and only visited once a month or less. In less isolated situations the effect will be more and more prominent. Now we can't actually measure what part of the measured rural temperature change is from GHGs and what from other human activity without a gigantic effort. But we should be able to get an idea of what the baseline temperature should be, and therefore what the UHI measurement bias is, by comparing the measured surface station "rural" increase with the satellite surface increase. I suspect we'll find it to be something like 0.2-0.5 deg per century, not 0.05 or whatever Jones thinks.

27. Don Keiller Posted Feb 20, 2007 at 9:47 AM | Permalink

Have read this site with great interest for some time now. I guess that makes me a "lurker"?! With regard to UHI, Hinkel et al (2003) (The urban heat island in winter at Barrow, Alaska. International Journal of Climatology 23: 1889-1905) produced a very informative study.

28. Ross McKitrick Posted Feb 20, 2007 at 10:35 AM | Permalink

#5 – Concerning the Soviet data see Fig 3 in my paper with Pat Michaels, available here. The data for the Figure and the other code/data/etc for the paper are here. The Figure shows the rate of missing data in those Soviet stations that continued to operate during the collapse of the former Soviet Union. The graph George posted shows the entire archive count, including duplicates and partial records. The GHCN record pares that down to about a third of the original, but the drop in stations at 1990 is still evident (see the graph, as of 1997, here). And the station loss is not spatially uniform, making it impossible to claim the sampling frame is continuous across the late-80s/early-90s. To see the effect of the 1990 event mapped out visually, go to the University of Delaware global temperature archive here, click Available Climate Data; log in; under Global Climate Data select Time Series 1950 to 1999; then select Station Locations (MPEG file for downloading). Then sit and watch the movie. The remarkable things are, first, how bad the spatial coverage is outside the US and Europe, and second, what happens at 1990. And of course it's after 1990 that all these record-breaking jumps in the global temperature took place. The AR4, like the TAR before it, simply dismissed this issue without discussion, appealing primarily to Jones et al. 1990.
29. beng Posted Feb 20, 2007 at 11:11 AM | Permalink

RE 19: TAC says:

The important question, IMHO, is whether the process of urbanization induces changes that would show up as trends in temperature records. Simply stated, trend slopes should be expected to reflect the rate of landuse change — the rate of development of rural land into increasingly dense cities — not the level of urbanization that has occurred in the past.

Exactly. Moving temp stations to more "rural" airports would in many cases increase the rate of change of detected urban development IMO, since the airports were probably fairly close to rural at the start, but are the very spots where urban-effect increases would most likely occur locally. So remaining in an established city would have incurred less relative local urban growth than a growing airport! Of course, if a proper correction weren't applied for the move (how is that determined?), even more inaccuracy would be introduced.

30. MarkW Posted Feb 20, 2007 at 11:26 AM | Permalink

#26: Dave, The problem is in two areas. There are only something like 2500 weather stations covering the entire globe. Which has, if I remember correctly, something like 200,000 sq. km. of surface area. All of these stations are in close proximity to human habitation. A few of these habitations are small towns or villages. Most are near medium to large towns. A handful are near farms. But even farms affect the local climate via foliage changes and irrigation. Secondly, we do not have any satellite measurements of ground temperature. The satellites that I have read about read air temperature at an altitude of several miles up.

31. MarkW Posted Feb 20, 2007 at 11:31 AM | Permalink

19: TAC, The only way I can think of to do a correction on a move of that sort would be to keep the first sensor in place, and establish a second sensor at the new location. Run both sensors for at least a year, preferably several years, so that you can get a track of the difference between the two locations over a wide variety of weather conditions. Then you throw away the first sensor. If this isn't how it is done, then I'm sure someone will correct me. Hopefully not violently.

32. James Erlandson Posted Feb 20, 2007 at 11:38 AM | Permalink

Re 30 MarkW: Surface area of the earth is about 200,000,000 square miles. Which makes it worse.

33. Posted Feb 20, 2007 at 11:58 AM | Permalink

RE: #18 – Yes, exactly! That is why I keep telling people to stop using the term "UHI!" My term is "Anthropogenic Energy Dissipation and Land Use Modifications."

34. Christopher Morbey Posted Feb 20, 2007 at 12:26 PM | Permalink

#32. MarkW likely meant that each of the 2500 stations represents the contribution from about 200,000 km^2? Of course, the stations aren't uniformly distributed, and few are bobbing around in the sea. I'm quite sure that a 2500-pixel visible camera image of the Earth's complete surface would be somewhat noisy. As T Ball hinted at, there are basic sampling statistics to worry about.

35. Jim Johnson Posted Feb 20, 2007 at 12:27 PM | Permalink

My term is Local Anthropogenic Temperature Tainting Effects. Hot LATTEs. 🙂

36. Posted Feb 20, 2007 at 12:43 PM | Permalink

RE: #35 – I like it …. it resonates with urban sophisticates …. great market segment targeting! 😉

37. Douglas Hoyt Posted Feb 20, 2007 at 1:00 PM | Permalink

One overlooked problem in urban-rural temperature trend comparisons is that both the urban areas and rural areas are growing in population.
If both the urban location and the rural location grow by the same percentage in population (say 25%), then both would have equal growth in their UHIs. The net effect would be that someone (say Peterson) would then erroneously claim there is no UHI effect in the temperature trends, when in fact both sites might have trends of equal magnitude. I think this is a common problem.

38. Posted Feb 20, 2007 at 1:14 PM | Permalink

In the summer of 2005, I finished a study on the temperature evolution in Europe. I focused on differences between rural and urban stations. My findings can be found on my website under The Earth: Climate Evolution in Europe (1881-2004). Hope this study may be useful!

39. K Posted Feb 20, 2007 at 1:19 PM | Permalink

#31 I agree that you can calibrate for station moves. But it is difficult to adjust for the results. As several have pointed out, stations relocated to airports might show a drop. But only for a while. Airports tend to grow and lay down massive concrete runways, build big parking structures, and attract automobile, truck, and plane traffic. Jet engines aren't neutral about heat. And at the prior station in the urban center any UHI effect probably built over decades as the city grew. Tall buildings, a decrease in sidewalk trees, air conditioning, and freeways of concrete almost define modern cities. So any UHI adjustment for more than a few years is a moving target. I don't see a solution. Comparative studies can figure out what the UHI effect was at some specific sites and some specific years. But not for every site or every year.

40. Chris H Posted Feb 20, 2007 at 2:00 PM | Permalink

#35 Surely the term should be ALW, Anthropogenic Local Warming.

41. bruce Posted Feb 20, 2007 at 2:09 PM | Permalink

Re above discussion: Doesn't Warwick Hughes have it right with his presentation of infra-red images from satellites that clearly show pronounced UHI effects?

42. Posted Feb 20, 2007 at 2:14 PM | Permalink

RE: #37 – In fact, the actual delta T in so-called "rural" areas, especially since WW2, may be greater than it has been in so-called "urban" areas.

43. Posted Feb 20, 2007 at 2:17 PM | Permalink

RE: #41 – Hughes et al (aka "The Hard Rock Renegades of Oz" – LOL!) have done some really good work on this topic.

44. Steve McIntyre Posted Feb 20, 2007 at 2:28 PM | Permalink

#38. Many CA readers will be interested in Jan Janssen's website. Jan, I couldn't locate the post you linked to.

45. Posted Feb 20, 2007 at 2:38 PM | Permalink

I happened to investigate the temperature data for the south-eastern part of this region a few years ago. I found that almost all the claimed warming in this region during the 20th century is due to the UHI. I wrote a paper about it. Naturally, I could not get it published, because it contradicts the claims of the "team" and other climate catastrophists.

46. Posted Feb 20, 2007 at 2:41 PM | Permalink

very, very interesting!

47. tom Posted Feb 20, 2007 at 3:50 PM | Permalink

#37 I would assume that if the sensor is already surrounded by urbanization, and if the immediate surroundings don't change all that much yet the city's areal coverage increases, all that will do is expand the coverage of the heat isle itself, not necessarily change much around a local sensor. But I don't know at what point this would occur during an 'urbanization' process over time; one would think there would be some sort of an equilibrium point as to how much more of a signal you would get out of a long-term temp trend around an already 'large' city.
I fully understand the issue of a more rural site which grows into a more urban site over time having a larger UHI signal than a site that had been located in a relatively large urban setting from the get-go. But once again, it comes down to the most micro of micro climates. I have done a study in my own neighborhood (urban) and can show much cooler temps (especially at night) if the sensor is placed in a large grass field or forested area compared to the local urban center in my neighborhood. It can be several degrees different on good de-coupling nights with clear skies. And these measurements are made within just a few blocks of each other or even less. Quite amazing really. Using meteorological station data as a measure of GMT is very close to pure folly, in my humble meteorological opinion. within a few blocks to a less rural site would have more signal than a urban site that has always been an urban site. It's sort of development would

48. tom Posted Feb 20, 2007 at 3:52 PM | Permalink

Sorry for the garbage text at the bottom of my last post…

49. jae Posted Feb 20, 2007 at 4:21 PM | Permalink

46: Yes, very very interesting!

50. jae Posted Feb 20, 2007 at 4:27 PM | Permalink

Jan J & Steve M: Thanks for the link. Wow, I did not know there was so much variation in predictions for Solar Cycle 24.

51. MarkW Posted Feb 20, 2007 at 5:45 PM | Permalink

47: tom, You leave out the effect of winds in your analysis. Suppose the wind is blowing at 10 mph and the sensor is in the center of the city. If the city is 40 miles across, the wind will be blowing over urban landscape for two hours. But if the city is 80 miles across, then the wind will be blowing over urban landscape for 4 hours. Twice as much time for it to pick up heat.

52. Hans Erren Posted Feb 20, 2007 at 6:02 PM | Permalink

These are UHI corrections by several authors on Uccle (Brussels): Note that Jones doesn't correct Brussels; the high-frequency residual is due to the fact that Jones uses Jan-Dec and GISS uses Dec-Nov annual averages.

53. tom Posted Feb 20, 2007 at 8:41 PM | Permalink

#51 Right you are, Mark, about the wind. But I was just wondering if at some point an equilibrium point would be met in a large city. Maybe the increase in temperature is logarithmic, meaning it rises pretty sharply (relative term here) with initial urbanization, then the increase flattens some as the city gets larger. Maybe once the urbanization is more than 80 mi across, I dunno… but at the same time some cities just get more dense and not necessarily more expansive. A lot comes into play, no doubt, but my main point is that there probably is less of a net increase from a growing but already large city as compared to a rural grassland going urban over a similar time period. The wind issue is a valid point, but if the airmass is homogeneous yet being well mixed by a steady wind, the difference between city/rural on nights like these is not that great from my experience, mainly referring to midwestern US cities.

54. Hans Erren Posted Feb 21, 2007 at 2:53 AM | Permalink

Interesting is also that the Vienna (metropolitan) data and the Hohenpeissenberg (rural hilltop) data don't show a difference since 1780, because central Vienna (where the Hohe Warte observatory is located) was already a metropolis in 1780, so no significant change in land use has occurred at either location. http://home.casema.nl/errenwijlens/co2/europe.htm
55. Posted Feb 21, 2007 at 3:16 AM | Permalink

Given that the UHI can affect the temperatures in cities by many degrees, e.g. http://news.bbc.co.uk/1/hi/in_depth/sci_tech/2002/leicester_2002/2253636.stm where 8 deg C is quoted for Manchester compared to surrounding areas, I am intrigued by the nature of the corrections that are made, given a suggested global warming to date of a few tenths of a degree. The implication to me is that these corrections are either incredibly accurate, governed by a well-defined procedure – which seems unlikely from what has been said above – or not to be trusted.

56. Florens de Wit Posted Feb 21, 2007 at 3:39 AM | Permalink

Re #25 Sorry Steve, no citation. The IPCC TAR box 2.1, however, states:

"Extensive tests have shown that the urban heat island effects are no more than about 0.05°C up to 1990 in the global temperature records used in this chapter to depict climate change. Thus we have assumed an uncertainty of zero in global land-surface air temperature in 1900 due to urbanisation, linearly increasing to 0.06°C (two standard deviations 0.12°C) in 2000." [I think page 52]

So basically I misstated my point. What I meant to say was that it seems that in the IPCC context one thinks that UHI bias, which isn't homogeneous in space or time, can be corrected for on a global scale by assuming a linear "uncertainty trend". This seems to suppose that UHI bias largely cancels out in the record, which is something I find hard to believe. I am also under the impression that something as diverse as the biases caused by changes in station location, changes in the environment of stations, changes in land use and energy consumption, etc. cannot simply be corrected for by statistical techniques. I'd say this requires detailed study of the impact on individual stations. Again, I might be projecting different issues onto Jones et al. If so, sorry for that.

57. Hans Erren Posted Feb 21, 2007 at 3:41 AM | Permalink

The take-home message is that the UHI value per se doesn't matter; it is the change in land use that matters, for which population is a well-documented metric (proxy).

58. Dave Dardinger Posted Feb 21, 2007 at 7:34 AM | Permalink

re: #56

Extensive tests have shown that the urban heat island effects are no more than about 0.05°C up to 1990 in the global temperature records

I'm going to try once more to make my earlier point, as I don't think it's necessarily being "caught" by people here. I might even agree with the IPCC quote above, but that's not the danger of UHI in measuring surface temperatures. The danger is the bias over time in measuring temperatures. The cities, towns, little burgs and even development in rural areas result in increased temperatures on a small % of the earth's surface. This creates a bias in the entire corpus of surface temperature measurements unless it's detected and removed. The people who wrote the quote seem to be of the opinion that since the overall actual increase in the world's surface temperature from UHI is small, they can therefore make a small correction and then ignore it. That is totally wrong and shows either a major lack of understanding of the issue or a willful attempt to hide the actual problem. I add this last since several of the arguments against UHI, such as the "night light" and "windy city" papers (someone who knows the papers I mean might want to provide links to them), try to argue that the actual amount of UHI in urban areas must also be small. This is silly since practically everyone who has observed actual weather knows that the areas surrounding cities are normally several degrees cooler most all the time.
Until this is admitted and dealt with seriously, the team and their allies are not going to make much progress convincing knowledgeable skeptics that UHI can be ignored or easily accounted for.

59. Michael Jankowski Posted Feb 21, 2007 at 8:05 AM | Permalink

Extensive tests have shown

Sounds more like "extensive hand-wavings have shown…"

60. Florens de Wit Posted Feb 21, 2007 at 8:51 AM | Permalink

Dave (#58) Thanks for making your point once more. Sorry to say that I think I still don't 'catch' it. You say:

The danger is the bias over time in measuring temperatures. The cities, towns, little burgs and even development in rural areas result in increased temperatures on a small % of the earth's surface. This creates a bias in the entire corpus of surface temperature measurements unless it's detected and removed.

So every single site that is close to a factor of bias – be it a city, a (set of) smaller area(s) of habitation, some development site, or an agricultural or forestry site that has been changing over time – is going to give biased records. These records, if included in larger sets, are going to bias the total resulting temperature change estimates. I can see that. However, then you write:

The people who wrote the quote seem to be of the opinion that since the overall actual increase in the world's surface temperature from UHI is small, they can therefore make a small correction and then ignore it. That is totally wrong and shows either a major lack of understanding of the issue or a willful attempt to hide the actual problem.

This confuses me, so let me try to explain why. The above seems to be inconsistent with your statement that sites are being biased by nearby urban centers and land use changes, and that this therefore biases any larger dataset that includes data from these sites. After all, if the total bias due to the biased site records is small, then you can use a small correction. I suspect, however, that you mean to say that one should not simply correct the record with a single value based on population (density) or similar proxies at a single time, but should apply a correction as a function of time, based on whatever proxy or data, also as a function of time. I totally agree. In fact, I think the best way to deal with land use change bias in records might be to model the appropriate factor – be it population or land cover – together with temperature, or whatever one wants to correct, as a multivariate function. In other words, temperature trends may have a strong covariance with e.g. population (density), and this is what's of interest, not whether the trend correlates with a change in population over an arbitrary period.

61. Steve McIntyre Posted Feb 21, 2007 at 9:03 AM | Permalink

#56. Florens, I agree with your point about the need for detailed studies. I wish that these folks took more of an engineering approach to things, rather than resting on little Nature articles that are little more than extended abstracts. If anyone is going to rely on the results, it would be helpful to have a report on each site. In an internet age, with the broad interest in these topics, if one knew what sites were actually being used in these studies, I suspect that we'd start seeing some detailed parsing of the data like Hans Erren has done with De Bilt, Vienna and Uccle, so that we could see what it really said.
62. MarkW Posted Feb 21, 2007 at 9:22 AM | Permalink

re #58: Dave, the quote that you reference does not say that urban areas are 0.05C warmer than rural areas. It is saying that at any particular sensor there has been a warming of 0.05C in the last 100 years due to development. (Of course the phrase "any particular sensor" has no meaning, since what we are talking about is a statistical averaging of all sensors over the globe. I'm just trying to keep the phrasing simple so that the point does not get lost in the verbiage.) How this amount is determined is by comparing "rural" stations to "urban" stations over time. The belief is that rural stations aren't experiencing UHI, and urban ones are. Thus the difference in the two warmings can be attributed to UHI. There are many, many problems with such an assumption. The first, and biggest, is the belief that "rural" stations aren't experiencing UHI. It's fairly easy to show that they are. The next is the rather simple-minded model, i.e. all stations near cities with less than a certain population are rural, and all stations near cities larger than the critical cut-off are urban. A better model is to plot a graph of temperature increase vs. population. On the other hand, all of this is conjecture, since the man behind the myth, Jones, refuses to tell anyone how he came up with his magic number.

63. Hans Erren Posted Feb 21, 2007 at 9:56 AM | Permalink

Here is St Petersburg, Russia: http://home.casema.nl/errenwijlens/co2/europe.htm

64. Ron Cram Posted Feb 21, 2007 at 10:32 AM | Permalink

re: 44 Steve, Speaking of Jan Janssen, I also found his comments on SPM4 interesting.

65. Posted Feb 21, 2007 at 10:36 AM | Permalink

RE: #53 – Consider anywhere populated (urban or otherwise) in 1807. No electrical grid, most power is from horses, etc. Consider anywhere populated today. Over the past 200 years, even places that have not incurred a change in population density have incurred an unbelievable change in energy density. In fact, consider even the change over the past 100 or even 70 years.

66. jae Posted Feb 21, 2007 at 11:00 AM | Permalink

Some thoughts by Roger Pielke Sr. on the effects of land use change here.

67. Dave Dardinger Posted Feb 21, 2007 at 12:46 PM | Permalink

re: #60 Let me try to explain my point statistically to better reach the audience here. Imagine a field of all possible temperature measurement stations. Make a random selection of N of these stations. Some number K of these stations will be in built-up areas and will have a UHI value of +U(k), which we'll assume you can measure accurately. The rest of the N we'll denote as J. If you now make two averages, one with the raw measurements and one with adjusted values, and then divide by N, you will end up with a value V for the average UHI over the entire globe (or whatever subset of it you're dealing with). Now, as far as I can see, the Jones strategy is to assume that V(0) = 0. Therefore they're saying that they've measured UHI by comparing the relative increase in temperatures between pairs of rural and urban stations, and they've come up with V(t) = 0.05 deg C per century. But as we've been discussing here, they're actually only measuring U(k-j). Now it may or may not be true that U(j) = 0 for t = 0, but we certainly don't know that U(j,t) is 0. We presume there's still a set of stations J' which have U(j') = 0, but it's unlikely that these are among the initially selected J. By construction they're relatively near growing cities. Therefore it's likely that they've become k's to a greater or lesser extent. This is the contamination I'm concerned with. It may be that in the larger universe of J's the unbiased ones predominate and result in a relatively small total temperature increase from UHI, but since most of the "rural" stations in any real-world selection of N aren't really that rural, either initially or over time, we're not going to get a high enough adjustment for UHI of individual stations, and this will be extrapolated into seeing a high temperature trend where it should be a low temperature trend.
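In worked numbers (values invented purely for illustration), the contamination argument in #67 runs as follows:

#Worked numbers for the argument in #67 (values invented for illustration)
U_urban <- 0.50    #true UHI trend at "urban" stations, deg C/century
U_rural <- 0.35    #UHI trend at the nominally "rural" comparators (not zero)
U_urban - U_rural  #the paired comparison reports 0.15 -- looks small
#but the bias left in the "rural" (and hence the gridded) record is the full
#0.35 deg C/century, which the urban-minus-rural test can never see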
68. EW Posted Feb 21, 2007 at 1:15 PM | Permalink

There's a long-term temperature measurement done in the very center of Prague (Czech Rep.), in Klementinum near the river Vltava. The graph shows yearly averages (blue) and a 10-year smoothing (red). Rather interesting – the recent temperatures resemble those in the 1800s, like there's no change… Graph

69. Posted Feb 21, 2007 at 2:03 PM | Permalink

Analog vs Digital Rounding of Temperatures: I haven't seen this mentioned anywhere else, but I've wondered whether anyone has looked into this. I have a hypothesis that there may be a slight tendency for humans to round downward while reading an analog thermometer. This bias would be eliminated when the weather station upgraded to digital thermometers. A bias of a few tenths of a degree C is believable and would be significant.

70. Posted Feb 21, 2007 at 2:36 PM | Permalink

RE: #69 – Especially if the observer is looking at an upward angle at an analog thermometer.

71. jae Posted Feb 21, 2007 at 3:22 PM | Permalink

68: Looks like the Little Ice Age hit Prague.

72. Posted Feb 21, 2007 at 3:30 PM | Permalink

For those interested in looking at real-time temperature for an intermediate-size town, look at: http://testbed.fmi.fi/ The site provides real-time temperature data for Helsinki (Finland). The whole capital area – Helsinki, Espoo, Vantaa and Kauniainen together – has a population of roughly 1,000,000. Some background: land area = 184 km², population = 565,000, ca. 3,000 persons/km². Energy consumption: ca. 1.5 GW (electric + district heating), equivalent to ca. 8 W/m² counting the land area. Traffic heat input is not included (needs to be estimated separately). The energy production above is for winter time, because at present it is ca. -15 C 🙂, going down to ca. -20 C during the night.

73. tom Posted Feb 21, 2007 at 3:41 PM | Permalink

#65 Point well taken… I am a big believer that urban development causes a warm bias over time, especially when going from truly rural to more urban, such as we are discussing here. I'm just trying to tease out other possible outcomes that may argue against my hypothesis – shall we say 'falsify', ya know, like good scientists used to do (ahem… wink wink). Thanks for all of your viewpoints.

74. Michael Jankowski Posted Feb 21, 2007 at 4:05 PM | Permalink

RE #64, good comment by Janssen about 0.006 deg C/decade being called "negligible" by the IPCC; yet even this value over the 20th century would be 10% of the warming. Tough to call 10% "negligible."

75. Christopher Morbey Posted Feb 21, 2007 at 4:39 PM | Permalink

I'm wondering about the immediate environment of the sensor. Obviously, the condition of the box that protects the sensor over time is of some importance. Also, any nearby structures (including tree trunks) may radiate to the sensor box. I happen to have all sorts of remote temperature sensors around my place.
They all read within a degree F when together, yet around the yard (0.5 acre) they are easily as much as 3 degrees C in disagreement. It's clear that pockets of different-temperature air are moving around continuously, wind or no wind. Temperatures near trees can depend on the micro-climate of the location — presumably caused by the tree breathing. The same sort of thing can be noted while observing through a large telescope. Pockets of air, a metre or so in size, either drift across the pupil slowly or, if it is windy, the air pockets are more disturbed. All this would seem to indicate that measuring simple temperatures is far from simple; each individual location would need long-term statistics using many sensors.

76. Posted Feb 21, 2007 at 6:06 PM | Permalink

What is needed is a classic gage R&R or MSA. Try to get the so-called "climate scientists" to take a Sigma approach, though … it's like trying to give a cat a bath.

77. Christopher Morbey Posted Feb 21, 2007 at 7:47 PM | Permalink

Re #76. Well, yes, except that we're probably looking at some level of chaotic fluctuations. I could imagine the number of DOF would be large. Is it even possible (these days) to get representative measures from chaotic phenomena, and what would they mean? Could it be that the available temperatures can't deliver meaningful "averages" no matter how much they are massaged?

78. Posted Feb 22, 2007 at 11:12 AM | Permalink

RE: #77 – The simple work of trying to design the MSA would indeed reveal a number of inconvenient truths about near-surface measurements.

79. Steve McIntyre Posted Feb 22, 2007 at 11:15 AM | Permalink

I sent the following request to Phil Jones today:

Dear Phil, a couple of years ago, I requested the identities and data for the Russian, Chinese and Australian networks studied in Jones et al Nature 1990 on urbanization. At the time, you said that it would be unduly burdensome to locate the information among your diskettes as the study was then somewhat stale. However, I notice that Jones et al 1990 has been cited in IPCC AR4 (in the section where you were a Coordinating Lead Author) and continues to be cited in the literature (e.g. Peterson 2003). Accordingly, I re-iterate my request for the identification of the stations and the data used for the following three Jones et al 1990 networks:

1. the west Russian network
2. the Chinese network
3. the Australian network

For each network, if a subset of the data was used, e.g. 80 stations selected from a larger dataset, I would appreciate all the data in the network, including the data that was not selected. In each case, please also provide the identification and data for the stations used in the gridded network which was used as a comparandum in this study.

Steve McIntyre

80. Jim O'Toole Posted Feb 22, 2007 at 11:26 AM | Permalink

Re 79, Steve, did you cc NAS, Nature, or IPCC? Please let us know what kind of response you get.

81. Craig Loehle Posted Feb 22, 2007 at 11:49 AM | Permalink

I think it is very important to note that for testing for a UHI effect the way Jones did, all you have to do to prove "no effect" is to be sloppy in separating the urban and rural sites. If a portion of your rural sites are in fact becoming more urban over time, and some of your urban sites are in fact at a stasis (equilibrium) level of urbanization, then you will not detect a difference in temperature rise between the two data sets. A fundamental aspect of statistics has always been to be able to clearly differentiate your effect from noise.
Here, noise (due to station classification error) is sufficient to "prove" that there is no UHI, which is the outcome the IPCC wants (since this means less work for data analysis).

82. Gerald Machnee Posted Feb 22, 2007 at 1:38 PM | Permalink

Re #81 – and I would expect the IPCC to cherry-pick the studies that suit their purpose.

83. Hans Erren Posted Feb 22, 2007 at 3:25 PM | Permalink

In case anybody needs population data: http://www.populstat.info/

84. Neil Fisher Posted Feb 22, 2007 at 4:01 PM | Permalink

Just a quick note to remind you all that the change from human-read thermometers to remotely read electronic thermometers could introduce a bias as well – the screened box holding the thermometer needs to be opened for a person to read it, but can stay shut for a remotely read unit. Does anyone know if this would introduce a warming or cooling trend to the data? I suspect warming. With no need to open the box on a regular basis, or even to go and have a look at it, it probably wouldn't be cleaned very often either – inside or outside. These boxes are painted white, so a neglected box that gets dirty on the outside would show an increase. Increases could also occur in the minimum temperature recorded if the inside of the box was allowed to accumulate dust and dirt, thus insulating the sensor somewhat. Given enough neglect, the air flow through the box could be seriously compromised as well – another potential warming bias. With fewer visits to the box, we could also see neglect of the surrounding environment, perhaps leading to heat traps. Excuse my laziness and ignorance, but does anyone know if this has been studied, and if so, what the results show?

85. Steve McIntyre Posted Feb 22, 2007 at 4:05 PM | Permalink

#84. Peterson etc. are aware of this sort of issue and there are discussions of it in the literature.

86. Ken Fritsch Posted Feb 22, 2007 at 4:28 PM | Permalink

Re: #28 where Ross McKitrick says: February 20th, 2007 at 10:35 am

#5 – Concerning the Soviet data see Fig 3 in my paper with Pat Michaels, available here. The data for the Figure and the other code/data/etc for the paper are here.

I read the McKitrick and Michaels paper and, for what it is worth from a statistical and modeling novice, I thought the model construction was put together with some thought and creativity. Two points from the model results were striking to me:

1. There were no significant effects of population on temperature — which would indicate agreement with Jones.
2. Out-of-sample results indicated that the model could only explain 9% of the temperature changes.

I would be interested in hearing more about these two results and any further work that you have done with modeling along these lines.

87. Evan Englund Posted Feb 22, 2007 at 7:48 PM | Permalink

Re: 83 Gridded population data are available here: http://sedac.ciesin.columbia.edu/gpw/

88. Ross McKitrick Posted Feb 22, 2007 at 7:51 PM | Permalink

#86: In that paper we are concerned with within-sample inferences, so the key tests are the P-values in Table 4. In models where you want to forecast out of sample, the out-of-sample test is crucial; that's why Steve and I emphasized the importance that the hockey stick out-of-sample r2 was insignificant. That model was not being used to say something within the sample period, but to compare 1998 to 1400, i.e. 500 years out of sample. For our model the prediction test is one of a battery of tests to try to rule out spurious or fluke results. It would have been nice if the r2 was higher, but at least it was significant. In the follow-up paper (which is under review) we use a complete global grid and the same test r2 goes up quite a bit. Population is insignificant, as you note. Either they have successfully removed the population effect or it is an inadequate way to measure the size of the contamination.
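For readers unfamiliar with the out-of-sample test described in #88, here is a generic sketch in R (toy data; this is not the McKitrick and Michaels dataset or code):

#Generic out-of-sample r2 check on toy data
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 0.5*x + rnorm(n)                 #invented relationship
train <- 1:70; holdout <- 71:100
fm <- lm(y ~ x, subset = train)       #fit on the training portion only
pred <- predict(fm, newdata = data.frame(x = x[holdout]))
cor(pred, y[holdout])^2               #squared correlation of predictions with unseen data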
89. STAFFAN LINDSTRÖM Posted Feb 23, 2007 at 10:14 AM | Permalink

So, if the IPCC, a UN organisation, would one fine day read Climate Audit, they would have rural outback temperature sites protected as part of the World Heritage… So let us all lobby for that and we'll have a more accurate surface temperature record from 2012… or so.

90. Dave B Posted Feb 23, 2007 at 2:21 PM | Permalink

#86, 88… my thought is that PEOPLE are not the primary cause of UHI. STRUCTURE, i.e. blacktop, buildings, perhaps electronic infrastructure, is what causes UHI. In the absence of significant human population, my local weather station at the airport will register a higher temperature, secondary to the many acres of blacktop and concrete in the immediate area. Population may end up being a matter of only secondary importance.

91. Posted Feb 23, 2007 at 5:07 PM | Permalink

RE: #90 – I wonder what the impact is of, for example, the Mojave Airport (and other large aircraft mothball areas)? What is the impact of turning massive areas of desert caliche into well-cultivated and well-watered topsoil with crops growing in it a goodly percentage of each year? Etc, etc, etc …

92. Dave B Posted Feb 23, 2007 at 6:05 PM | Permalink

91… or more in my neck of the woods, Plattsburgh, NY air force base (closed) in the Adirondacks. Or Niagara Falls, NY air force base (not closed, due to political wrangling, but not being used as much as its acreage would support). Lots of blacktop, very low population, still a likely high UHI.

93. Christopher Morbey Posted Feb 24, 2007 at 12:27 AM | Permalink

😉 "…The total temperature increase from 1850-1899 to 2001-2005 is 0.76 [0.57 to 0.95]°C. Urban heat island effects are real but local, and have a negligible influence (less than 0.006°C per decade over land and zero over the oceans) on these values." There you have it! Half the world's population of urban-dwellers (and their engines) provide negligible influence on global warming. Therefore, farmers and hermits are the culprits?

94. Steve McIntyre Posted Feb 24, 2007 at 11:48 AM | Permalink

I've excerpted Russian station data from the most recent GHCN listing for Warwick Hughes, who is making progress at possible identifications of the Jones stations. See his website.
For reference, here's the script for reading and excerpting the stations:

url <- "ftp://ftp.ncdc.noaa.gov/pub/data/inventories/STNLIST-SORTED.TXT"
#sample lines from the station inventory:
#[13] "NUMBER CALL NAME + COUNTRY/STATE LAT LON ELEV "
#[14] ""
#[17] "990061 DB061 ENVIRONM BUOY 52079 ** +0000 +14700 +0000"
#[18] "134900 RESERVED FOR NAVY ** -9999 -99999 -9999"
#[19] "993194 SHIP BLOCK 19 ** -9999 -99999 -9999"
#[20] "691810 KQLH TUZLA ** +4453 +01871 "
#(the step reading this fixed-width file into the data frame 'test' was not shown in the original comment)
test <- test[,c(5,1,7,8,9)]
test$start0 <- NA
test$end0 <- NA
test[,3:4] <- test[,3:4]/100
test$jones <- jones(test[,3],test[,4])   #jones(): helper from earlier CA scripts, presumably mapping lat/long to a Jones gridcell
names(test) <- names(russia[[1]])        #'russia': station list object from an earlier script
levels(test$site) <- sub(' +$', '', levels(test$site))  ## trailing spaces only

capwords <- function(s, strict = FALSE) {
  cap <- function(s) paste(toupper(substring(s,1,1)),
    {s <- substring(s,2); if(strict) tolower(s) else s},
    sep = "", collapse = " " )
  sapply(strsplit(s, split = " "), cap, USE.NAMES = !is.null(names(s)))
}
levels(test$site) <- capwords(casefold(levels(test$site)))  #for readability but optional

temp <- (test$lat>42)&(test$long<85)&(test$long>25)&!is.na(test$lat)&!is.na(test$long)
sum(temp)  #46
test <- test[temp,]
test <- test[order(test$jones),]
write.table(test,file="d:/climate/data/jones/stations/ghcn.russia.txt",sep="\t",row.names=FALSE)

This extracts 1764 stations extending a list of 1359 stations from which Warwick has matched many plausible candidates.

95. roger dueck Posted Mar 4, 2007 at 4:32 PM | Permalink

Steve M – You fellows are much more adept at handling large volumes of GISS data than I, but I have taken a look at the UHI phenomenon in several areas: San Antonio TX, Oklahoma City OK, and Edmonton AB. These cities all have a number of "rural" stations at various points surrounding them. They all show a significant and consistently increasing UHI effect. I don't think there is any doubt that the UHI effect is real and significant, as my observation is that the increase is 0.75C per century for San Antone and 1.1C per century for OK City. In the case of Edmonton, I compared the Municipal Airport, near the city centre, to the International Airport, some 30 km to the south and adjacent to Leduc and Nisku, two areas with recent rapid oilfield industrial growth. The startling thing is that you can track Edmonton population growth by the increasing temp differential until the mid-70s, when the industrial complex adjacent to the Int Airport went through a growth spurt, along with the Leduc population. I guess we could use the differential as a population proxy! I think any attempt to "correct" this data for UHI is frivolous and misleading, as the differences are greater than the apparent global warming of the past century. I don't have a website to post the graphs but could forward them to you to post. Regards RD

96. Hans Erren Posted Mar 5, 2007 at 2:52 AM | Permalink

roger, you don't need a website, you can use imageshack: http://imageshack.us/

97. Posted Mar 5, 2007 at 10:20 AM | Permalink

RE: #96 – Indeed. Let me pick a region out of the air – the I-80 corridor – aka the following California counties: Solano, Yolo, Sacramento, Placer, and Nevada. Look at what has happened in these counties since the end of WW2. Can anyone in their right mind believe that there has not been a general increase in temperature in these places, owing strictly to the increase in human-caused local energy flux combined with amazing and extensive albedo modifications in urban, suburban and rural portions of this corridor? Consider also the amount of conversion of rural land to other types during that same period.
Philip_B Posted Nov 12, 2007 at 10:56 PM | Permalink
Interesting study of UHI in Singapore. You can literally see Changi Airport and the docks (also a large expanse of concrete) in the thermal image. http://www.bca.gov.sg/ResearchInnovation/others/UHI%20_2004-001_%20rev.pdf

99. MikeN Posted Feb 20, 2014 at 4:50 PM | Permalink
In the ClimateGate e-mails, Phil Jones receives notice that the UHI around St Petersburg is 0.8–1 °C, about 0.2 °C per million.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4974522292613983, "perplexity": 2075.226053949361}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948618633.95/warc/CC-MAIN-20171218161254-20171218183254-00381.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3896377
Birefringence calculations and questions

Hi everyone! I'm trying to do the classic calculation of the coefficients of transmission and reflection at a surface, but for a uniaxial crystal. I'm doing the simplest case, in which the optical axis is normal to the reflecting surface. Here is a simple diagram of what I'm trying to do: (Sorry if it's messy or cluttered, I didn't want to leave anything out. We're taking the magnetic permeability to be 1, also. I chose the incident wave's E field to be entirely parallel to the plane of incidence because an incident wave with it at any other angle is just a superposition of this wave and one with the E field perpendicular to the plane of incidence, and that case seems easier.)

So, I have a bunch of questions that I can't seem to answer. Foremost, I know in the special case of the incident wave being normal to the reflecting surface, there is only one transmitted wave. But in all other cases, will there always be both an ordinary (O) and extraordinary (X) wave?

Second, I was reading in Landau and Lifshitz's Electrodynamics of Continuous Media, the chapter on anisotropic media, and they say that "The vectors $\vec{D}$ in the O and E waves with the same direction of $\vec{k}$ are perpendicular. Hence the polarization of the ordinary wave is such that E and D lie in a plane perpendicular to the principal section" (the principal section is the plane containing S (the Poynting vector), the optical axis, and k). But this seems to clash with the boundary conditions I know this problem needs to fulfill. We need to have D perpendicular, B perpendicular, E parallel, and H parallel continuous at the boundary. But if the ordinary wave has its E field perpendicular to the E field of the incident wave… well, that can't add up. So what am I missing? From what I can tell, that quote may be true if the wave vectors of the O and X waves are the same, but they shouldn't be, right?

This is what I'm understanding right now: An incident beam comes in. The O wave refracts completely normally, as if it were refracting in an isotropic medium of index of refraction $\sqrt{\epsilon_\perp}$. Because it is refracting normally, its polarization is in the same direction (in this case, in the plane of incidence) as the incident wave, and its wave vector and Poynting vector have the same direction. Then, the wave also refracts into the X wave, which has a different wave vector than the O wave's wave vector. However, the Poynting vectors and wave vectors of the O and X waves are still all coplanar, right? Does that seem correct? Can anyone help me?? Thanks!
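(Not from the original thread: as a numerical sanity check on the two transmitted waves, here is a small Python sketch of the wave-vector matching for an optic axis normal to the interface. The calcite-like indices n_o and n_e are assumed illustration values, not anything given in the post.)

import numpy as np

# Tangential k is conserved across the interface. With the optic axis along
# the surface normal (z), the ordinary wave obeys the isotropic relation
# kz_o^2 = eps_o*k0^2 - kx^2, while the extraordinary wave obeys
# kx^2/eps_e + kz^2/eps_o = k0^2.
n_o, n_e = 1.658, 1.486          # roughly calcite (assumed values)
eps_o, eps_e = n_o**2, n_e**2

def refracted_kz(theta_i_deg, k0=1.0):
    """kz of the ordinary and extraordinary transmitted waves."""
    kx = k0 * np.sin(np.radians(theta_i_deg))        # conserved tangential component
    kz_o = np.sqrt(eps_o * k0**2 - kx**2)            # ordinary wave
    kz_e = np.sqrt(eps_o * (k0**2 - kx**2 / eps_e))  # extraordinary wave
    return kz_o, kz_e

for th in (0.0, 30.0, 60.0):
    print(th, refracted_kz(th))
# At theta_i = 0 the two kz values coincide (both equal n_o*k0), consistent
# with the single transmitted wave at normal incidence noted above; at any
# other angle the O and X wave vectors differ.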
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5105069279670715, "perplexity": 508.0032558888488}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702749808/warc/CC-MAIN-20130516111229-00061-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.shaalaa.com/question-bank-solutions/if-the-coefficient-of-second-third-and-fourth-terms-in-the-expansion-of-1-x-2n-are-in-ap-show-that-2n2-9n-7-0-binomial-theorem-positive-integral-indices_261612
If the coefficient of second, third and fourth terms in the expansion of (1 + x)^(2n) are in A.P. Show that 2n² – 9n + 7 = 0. - Mathematics

Sum

If the coefficients of the second, third and fourth terms in the expansion of $(1 + x)^{2n}$ are in A.P., show that $2n^2 - 9n + 7 = 0$.

Solution

Given expression: $(1 + x)^{2n}$
Coefficient of the second term = ${}^{2n}C_1$
Coefficient of the third term = ${}^{2n}C_2$
Coefficient of the fourth term = ${}^{2n}C_3$

By the given condition, ${}^{2n}C_1$, ${}^{2n}C_2$ and ${}^{2n}C_3$ are in A.P., so
${}^{2n}C_2 - {}^{2n}C_1 = {}^{2n}C_3 - {}^{2n}C_2$
⇒ $2 \cdot {}^{2n}C_2 = {}^{2n}C_1 + {}^{2n}C_3$
⇒ $2 \cdot \dfrac{(2n)!}{2!\,(2n-2)!} = \dfrac{(2n)!}{1!\,(2n-1)!} + \dfrac{(2n)!}{3!\,(2n-3)!}$
⇒ $2\left[\dfrac{2n(2n-1)(2n-2)!}{2 \times 1 \times (2n-2)!}\right] = \dfrac{2n(2n-1)!}{(2n-1)!} + \dfrac{2n(2n-1)(2n-2)(2n-3)!}{3 \times 2 \times 1 \times (2n-3)!}$
⇒ $n(2n-1) = n + \dfrac{n(2n-1)(2n-2)}{6}$
⇒ $2n - 1 = 1 + \dfrac{(2n-1)(2n-2)}{6}$ (dividing through by n)
⇒ $12n - 6 = 6 + 4n^2 - 4n - 2n + 2$
⇒ $12n - 12 = 4n^2 - 6n + 2$
⇒ $4n^2 - 6n - 12n + 2 + 12 = 0$
⇒ $4n^2 - 18n + 14 = 0$
⇒ $2n^2 - 9n + 7 = 0$

Hence proved.

Concept: Binomial Theorem for Positive Integral Indices

APPEARS IN: NCERT Mathematics Exemplar Class 11, Chapter 8 Binomial Theorem, Exercise | Q 10 | Page 143
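A quick symbolic cross-check of the result (not part of the textbook solution; uses SymPy):

from sympy import symbols, binomial, expand_func, factor

n = symbols('n')
c1, c2, c3 = (binomial(2*n, k) for k in (1, 2, 3))

# A.P. condition: 2*C(2n,2) - C(2n,1) - C(2n,3), expanded to a polynomial in n
ap = expand_func(2*c2 - (c1 + c3))
print(factor(ap))                    # factors as -(2n/3)*(n - 1)*(2n - 7)

# (n - 1)*(2n - 7) is exactly the factorization of 2n^2 - 9n + 7, so the
# A.P. condition vanishes (for n != 0) precisely when 2n^2 - 9n + 7 = 0.
print(factor(2*n**2 - 9*n + 7))      # (n - 1)*(2*n - 7)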
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5670146346092224, "perplexity": 2648.683763595687}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943637.3/warc/CC-MAIN-20230321064400-20230321094400-00618.warc.gz"}
https://mersenneforum.org/showthread.php?s=64617d728f4f201836720f5474d62a7b&p=614692
mersenneforum.org: Palindromic Prime Exponents

2021-11-30, 06:07 #1
Dobri ("ม้าไฟ", May 2018, 2²×5×23 Posts)
Palindromic Prime Exponents
Out of the total of 5,953 base-10 palindromic prime exponents < 10^9, currently there are 1,889 remaining base-10 palindromic prime exponents for which the corresponding Mersenne numbers have no known factor (see the attached file). The list does not include the exponents for M2, M3, M5, and M7, which are the only four known Mersenne primes with base-10 palindromic prime exponents.
Attached Files: PalindromicExponents_UnfactoredMersenneNumbers.txt (40.6 KB, 96 views)

2021-11-30, 14:40 #2
Dobri ("ม้าไฟ", May 2018, 111001100₂ Posts)
The initial post contains a list of base-10 palindromic prime exponents for which currently the corresponding Mersenne numbers have no known factor and are also of untested or unverified LL/PRP status. In addition, this second post contains a shorter list of 292 exponents for which currently the corresponding Mersenne numbers have no known factor but are of verified C-LL/C-PRP status (see the attached file).
Attached Files: PalindromicExponents_VerifiedMersenneNumbers.txt (2.7 KB, 98 views)

2022-03-06, 21:20 #3
Dobri ("ม้าไฟ", May 2018, 460₁₀ Posts)
The following Wolfram Language code generates the remaining palindromic prime exponents < 10^9 for which the corresponding Mersenne numbers remain to be factored/verified. Currently, there are 1880 such palindromic exponents within the range [100707001, ..., 999676999]. Note that the code could be optimized for speed.
Code:
pp = PrimePi[10^9]; pn = 1; count = 0; ic = 1;
While[ic <= pp,
 pn = NextPrime[pn];
 If[(PalindromeQ[pn] == True) && (pn > 7),
  pns = ToString[pn];
  wppns = StringJoin["https://www.mersenne.org/report_exponent/?exp_lo=", pns, "&exp_hi=&text=1"];
  text = Import[wppns];
  fc = StringContainsQ[text, "Factored"];
  vc = StringContainsQ[text, "Verified"];
  If[(fc == False) && (vc == False), Print[pn]; count++;];
  ];
 ic++;];
Print[count];

2022-09-04, 04:35 #4
Dobri ("ม้าไฟ", May 2018, 2²·5·23 Posts)
The smallest unfactored Mersenne numbers with palindromic exponents currently are M15451, M16061, M17471, M31013, M35753, M37573, M38083, M74047, M77477, M78787, M96269, M97379, M98389, M1035301, ...

2022-09-08, 16:57 #5
Dobri ("ม้าไฟ", May 2018, 2²·5·23 Posts)
M100707001 (Verified). The smallest unverified Mersenne numbers with palindromic exponents currently are M100767001, M101282101, M101343101, M101424101, M101474101, M101717101, M101777101, M101838101, M101919101, M101949101, M101999101, M102070201, M102232201, M102272201, ...
The palindromic wavefront of untested Mersenne numbers with palindromic exponents is currently at M129202921, M129484921, M129737921, M131252131, M133909331, M134535431, M134616431, M134919431, M135040531, M135161531, M135626531, M135646531, M135868531, M135929531, ...

2022-10-01, 10:48 #6
Dobri ("ม้าไฟ", May 2018, 111001100₂ Posts)
Below is a list of the bases in which the exponents of the known Mersenne primes are palindromic.
Code:
Exponent, Base
2, 3
3, 2
5, 2
7, 2
13, 3
17, 2
19, 18
31, 2
61, 6
89, 8
107, 2
127, 2
521, 11
607, 606
1279, 17
2203, 25
2281, 23
3217, 27
4253, 34
4423, 18
9689, 9688
9941, 30
11213, 59
19937, 41
21701, 41
23209, 50
44497, 67
86243, 60
110503, 13
132049, 107
216091, 223
756839, 127
859433, 144
1257787, 178
1398269, 231
2976221, 211
3021377, 344
6972593, 376
13466917, 2988
20996011, 406
24036583, 626
25964951, 1056
30402457, 341
32582657, 550
37156667, 2260
42643801, 353
43112609, 4223
57885161, 400
74207281, 1397
77232917, 610
82589933, 767

Only 7 exponents of Mersenne primes are palindromic in base 2: 3₁₀ = 11₂, 5₁₀ = 101₂, 7₁₀ = 111₂, 17₁₀ = 10001₂, 31₁₀ = 11111₂, 107₁₀ = 1101011₂, and 127₁₀ = 1111111₂.

2022-10-01, 15:52 #7
Dr Sardonicus (Feb 2017, Nowhere, 7·887 Posts)
Quote: Originally Posted by Dobri
 Below is a list of the bases in which the exponents of the known Mersenne primes are palindromic.
 Code:
 Exponent, Base
 2, 3
 3, 2
 5, 2
 7, 2
 13, 3
 17, 2
 19, 18
 31, 2
 61, 6
 89, 8
 107, 2
 127, 2
 521, 11
 607, 606
Any number is palindromic to a base larger than itself, since it only has one digit to that base. Any number n > 2 is palindromic to base n−1: n = 11 in base n−1. Your list is also far from complete. Here is a listing for exponents up to 82589933.
Code:
Exponent base digits (expressed in decimal)
3 2 [1,1]
5 2 [1, 0, 1]
5 4 [1,1]
7 2 [1, 1, 1]
7 6 [1,1]
13 3 [1, 1, 1]
13 12 [1,1]
17 2 [1, 0, 0, 0, 1]
17 4 [1, 0, 1]
17 16 [1,1]
19 18 [1,1]
31 2 [1, 1, 1, 1, 1]
31 5 [1, 1, 1]
31 30 [1,1]
61 6 [1, 4, 1]
61 60 [1,1]
89 8 [1, 3, 1]
89 88 [1,1]
107 2 [1, 1, 0, 1, 0, 1, 1]
107 7 [2, 1, 2]
107 106 [1,1]
127 2 [1, 1, 1, 1, 1, 1, 1]
127 9 [1, 5, 1]
127 126 [1,1]
521 11 [4, 3, 4]
521 20 [1, 6, 1]
521 520 [1,1]
607 606 [1,1]
1279 17 [4, 7, 4]
1279 1278 [1,1]
2203 25 [3, 13, 3]
2203 31 [2, 9, 2]
2203 2202 [1,1]
2281 23 [4, 7, 4]
2281 38 [1, 22, 1]
2281 40 [1, 17, 1]
2281 2280 [1,1]
3217 27 [4, 11, 4]
3217 48 [1, 19, 1]
3217 3216 [1,1]
4253 34 [3, 23, 3]
4253 39 [2, 31, 2]
4253 4252 [1,1]
4423 18 [13, 11, 13]
4423 24 [7, 16, 7]
4423 34 [3, 28, 3]
4423 66 [1, 1, 1]
4423 4422 [1,1]
9689 9688 [1,1]
9941 30 [11, 1, 11]
9941 71 [1, 69, 1]
9941 9940 [1,1]
11213 59 [3, 13, 3]
11213 11212 [1,1]
19937 41 [11, 35, 11]
19937 47 [9, 1, 9]
19937 112 [1, 66, 1]
19937 19936 [1,1]
21701 41 [12, 37, 12]
21701 64 [5, 19, 5]
21701 124 [1, 51, 1]
21701 140 [1, 15, 1]
21701 21700 [1,1]
23209 50 [9, 14, 9]
23209 82 [3, 37, 3]
23209 23208 [1,1]
44497 67 [9, 61, 9]
44497 206 [1, 10, 1]
44497 44496 [1,1]
86243 60 [23, 57, 23]
86243 154 [3, 98, 3]
86243 160 [3, 59, 3]
86243 214 [1, 189, 1]
86243 86242 [1,1]
110503 13 [3, 11, 3, 11, 3]
110503 170 [3, 140, 3]
110503 110502 [1,1]
132049 107 [11, 57, 11]
132049 206 [3, 23, 3]
132049 262 [1, 242, 1]
132049 336 [1, 57, 1]
132049 132048 [1,1]
216091 223 [4, 77, 4]
216091 281 [2, 207, 2]
216091 343 [1, 287, 1]
216091 441 [1, 49, 1]
216091 216090 [1,1]
756839 127 [46, 117, 46]
756839 439 [3, 407, 3]
756839 756838 [1,1]
859433 144 [41, 64, 41]
859433 238 [15, 41, 15]
859433 721 [1, 471, 1]
859433 824 [1, 219, 1]
859433 859432 [1,1]
1257787 178 [39, 124, 39]
1257787 1257786 [1,1]
1398269 231 [26, 47, 26]
1398269 492 [5, 382, 5]
1398269 703 [2, 583, 2]
1398269 741 [2, 405, 2]
1398269 1398268 [1,1]
2976221 211 [66, 179, 66]
2976221 220 [61, 108, 61]
2976221 305 [31, 303, 31]
2976221 1041 [2, 777, 2]
2976221 2976220 [1,1]
3021377 344 [25, 183, 25]
3021377 738 [5, 404, 5]
3021377 1151 [2, 323, 2]
3021377 3021376 [1,1]
6972593 376 [49, 120, 49]
6972593 624 [17, 566, 17]
6972593 1903 [1, 1761, 1]
6972593 2519 [1, 249, 1]
6972593 6972592 [1,1]
13466917 2988 [1, 1519, 1]
13466917 13466916 [1,1]
20996011 406 [127, 152, 127]
20996011 602 [57, 563, 57]
20996011 3381 [1, 2829, 1]
20996011 3703 [1, 1967, 1]
20996011 3726 [1, 1909, 1]
20996011 3969 [1, 1321, 1]
20996011 4347 [1, 483, 1]
20996011 4410 [1, 351, 1]
20996011 20996010 [1,1]
24036583 626 [61, 211, 61]
24036583 1244 [15, 662, 15]
24036583 1816 [7, 524, 7]
24036583 24036582 [1,1]
25964951 1056 [23, 300, 23]
25964951 2259 [5, 199, 5]
25964951 3137 [2, 2003, 2]
25964951 4675 [1, 879, 1]
25964951 25964950 [1,1]
30402457 341 [261, 155, 261]
30402457 1950 [7, 1941, 7]
30402457 4088 [1, 3349, 1]
30402457 4891 [1, 1325, 1]
30402457 4958 [1, 1174, 1]
30402457 5402 [1, 226, 1]
30402457 30402456 [1,1]
32582657 550 [107, 391, 107]
32582657 3645 [2, 1649, 2]
32582657 3831 [2, 843, 2]
32582657 5416 [1, 600, 1]
32582657 32582656 [1,1]
37156667 2260 [7, 621, 7]
37156667 37156666 [1,1]
42643801 353 [342, 77, 342]
42643801 830 [61, 748, 61]
42643801 4770 [1, 4170, 1]
42643801 5300 [1, 2746, 1]
42643801 5364 [1, 2586, 1]
42643801 5400 [1, 2497, 1]
42643801 5724 [1, 1726, 1]
42643801 5960 [1, 1195, 1]
42643801 6360 [1, 345, 1]
42643801 42643800 [1,1]
43112609 4223 [2, 1763, 2]
43112609 43112608 [1,1]
57885161 400 [361, 312, 361]
57885161 518 [215, 377, 215]
57885161 986 [59, 533, 59]
57885161 1038 [53, 752, 53]
57885161 2679 [8, 175, 8]
57885161 5560 [1, 4851, 1]
57885161 7180 [1, 882, 1]
57885161 57885160 [1,1]
74207281 1397 [38, 33, 38]
74207281 1487 [33, 833, 33]
74207281 1534 [31, 821, 31]
74207281 1827 [22, 423, 22]
74207281 74207280 [1,1]
77232917 610 [207, 341, 207]
77232917 77232916 [1,1]
82589933 767 [140, 299, 140]
82589933 2727 [11, 289, 11]
82589933 4874 [3, 2323, 3]
82589933 82589932 [1,1]
Last fiddled with by Dr Sardonicus on 2022-10-01 at 23:07. Reason: increased exponent range

2022-10-01, 17:22 #8
Dobri ("ม้าไฟ", May 2018, 2²·5·23 Posts)
Quote: Originally Posted by Dr Sardonicus
 Any number is palindromic to a base larger than itself, since it only has one digit to that base. Any number n > 2 is palindromic to base n−1: n = 11 in base n−1. Your list is also far from complete. Here is a listing for exponents up to 21701. ...
Let me clarify that my post was concerned with the minimal bases b_min > 1 in which the exponents of Mersenne primes are palindromic. Thanks for extending the list to all possible bases b up to b = n − 1. Only 4 exponents of Mersenne primes are palindromic in a minimal base b_min = n − 1:
Code:
Exponent, Base
3, 2
19, 18
607, 606
9689, 9688
For completeness, let's note that all exponents are palindromic in base b = 1 (unary numeral system).

2022-12-29, 08:42 #9
Dobri ("ม้าไฟ", May 2018, 2²×5×23 Posts)
Palindromic Milestone Report: All 27-bit base-10 palindromic prime exponents less than 2^27 have been tested at least once. These were the last remaining 27-bit ones: M129202921 (Verified), M129484921 (Unverified), M129737921 (Unverified), M131252131 (Verified), and M133909331 (Verified). The next untested 28-bit base-10 palindromic prime exponent is M134919431.
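For readers who want to reproduce listings like the ones above offline, here is a short Python sketch (not from the thread) that returns every base 1 < b < n in which n is palindromic:

Code:
def digits(n, b):
    """Digits of n in base b, most significant digit first."""
    out = []
    while n:
        n, r = divmod(n, b)
        out.append(r)
    return out[::-1]

def palindromic_bases(n):
    """All bases 1 < b < n in which n reads the same forwards and backwards."""
    return [b for b in range(2, n) if (d := digits(n, b)) == d[::-1]]

print(palindromic_bases(107))    # [2, 7, 106], matching the listing above
print(palindromic_bases(9689))   # [9688]: only the trivial base n - 1, as in post #8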
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42848488688468933, "perplexity": 14613.078244882267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494976.72/warc/CC-MAIN-20230127101040-20230127131040-00784.warc.gz"}
https://www.biogeosciences.net/16/1921/2019/bg-16-1921-2019.html
Biogeosciences, 16, 1921–1935, 2019
https://doi.org/10.5194/bg-16-1921-2019
Research article | 13 May 2019

# Tidal and seasonal forcing of dissolved nutrient fluxes in reef communities

Renee K. Gruber1, Ryan J. Lowe2,3, and James L. Falter2,3
• 1The Australian Institute of Marine Science, Townsville, Queensland 4810, Australia
• 2The Oceans Institute, University of Western Australia, Crawley, Western Australia 6009, Australia
• 3The ARC Centre of Excellence for Coral Reef Studies, Crawley, Western Australia 6009, Australia
Correspondence: Renee K. Gruber (r.gruber@aims.gov.au)

Abstract

Benthic fluxes of dissolved nutrients in reef communities are controlled by oceanographic forcing, including local hydrodynamics and seasonal changes in oceanic nutrient supply. Up to a third of reefs worldwide can be characterized as having circulation that is predominantly tidally forced, yet almost all previous research on reef nutrient fluxes has focused on systems with wave-driven circulation. Fluxes of dissolved nitrogen and phosphorus were measured on a strongly tide-dominated reef platform with a spring tidal range exceeding 8 m. Nutrient fluxes were estimated using a one-dimensional control volume approach, combining flow measurements with modified Eulerian sampling of waters traversing the reef. Measured fluxes were compared to theoretical mass-transfer-limited uptake rates derived from flow speeds. Reef communities released 2.3 mmol m−2 d−1 of nitrate, potentially derived from the remineralization of phytoplankton and dissolved organic nitrogen. Nutrient concentrations and flow speeds varied between the major benthic communities (coral reef and seagrass), resulting in spatial variability in estimated nitrate uptake rates. Rapid changes in flow speed and water depth are key characteristics of tide-dominated reefs, which caused mass-transfer-limited nutrient uptake rates to vary by an order of magnitude on timescales of minutes to hours. Seasonal nutrient supply was also a strong control on reef mass-transfer-limited uptake rates, and increases in offshore dissolved inorganic nitrogen concentrations during the wet season caused an estimated twofold increase in uptake.

1 Introduction

Reef organisms remove nutrients from overlying waters for essential metabolic and biogeochemical processes, which enable them to accumulate biomass and ultimately support broader marine food webs (McMahon et al., 2016; Parrish, 1989). Reef waters have carbon concentrations that are orders of magnitude greater than nitrogen (N) and phosphorus (P), and thus benthic community productivity is generally limited by the rates at which organisms can acquire N and P (Atkinson and Falter, 2003; Larned, 1998; Smith, 1984). Suspended N and P can be categorized into dissolved inorganic (DIN, DIP), dissolved organic (DON, DOP), and particulate organic (PON, PP) fractions, which are generally utilized by different groups of organisms. Primary producers take up dissolved inorganic nutrients in the forms of nitrate and nitrite (NOx), ammonium ($\mathrm{NH}_4^+$), and phosphate (DIP), which are found at low concentrations in reef waters.
The majority of studies on reef nutrient dynamics have focused on the dissolved inorganic species, as these are tightly coupled to reef productivity (D'Elia and Wiebe, 1990; Szmant, 2002). Research over the last 2 decades has shown that the upper limit of DIN and DIP uptake on reefs is physically constrained by mass transfer, a term that refers to the transfer of solutes in the water column across diffusive boundary layers surrounding the tissue surface of an organism (Bilger and Atkinson, 1992; Hurd, 2000). Nutrient uptake in reef waters is typically mass transfer limited (i.e., the biological demand for nutrients is higher than the physical rate at which they can be supplied). Therefore, the uptake rate has a first-order relationship with nutrient concentration and is a function of water velocity, bottom roughness properties, and diffusion characteristics of the solute (Atkinson, 2011). Due to the dependency of mass-transfer-limited nutrient uptake on flow speed, the local hydrodynamic conditions within a reef directly affect uptake rates of DIN and DIP (Atkinson and Bilger, 1992; Baird et al., 2004; Falter et al., 2016; Reidenbach et al., 2006; Thomas and Atkinson, 1997), and these uptake rates can be predicted for a particular reef given sufficient information (Falter et al., 2004; Zhang et al., 2011). However, validating these models with observations from living systems remains a major challenge, as measurements must occur at spatial and temporal scales relevant to reef circulation, and in situ uptake is often confounded by simultaneously occurring biogeochemical processes that release DIN and DIP into the water column (Atkinson and Falter, 2003; Wyatt et al., 2012). Ocean-derived dissolved organic N and P compounds are generally thought to be refractory or too energetically intensive for organisms to utilize (Knapp et al., 2005); thus, DON tends to dominate the nitrogen pool and DOP concentrations are generally low and similar to DIP (Furnas et al., 2011). However, studies on DON uptake have provided mixed results: some have measured a net production of DON by reef communities (Cuet et al., 2011a; Tanaka et al., 2011), while others have found evidence that primary producers (Vonk et al., 2008), corals (Grover et al., 2008), and filter feeders (Rix et al., 2017) can directly utilize some DON compounds. Finally, particulate N and P pools in reef waters are generally dominated by small phytoplankton (<2µm) and bacterial cells, and are an important source of nutrients for reef suspension and filter-feeding organisms (Houlbrèque et al., 2006; Ribes et al., 2005; Wyatt et al., 2010). Accurate measurements of nutrient uptake in natural reef communities are still relatively limited and are just beginning to incorporate spatial and temporal variability in forcing conditions (Lowe and Falter, 2015), such as gradients in wave energy across a reef or seasonal changes in local oceanic nutrient concentrations (e.g., Wyatt et al., 2012). While many studies have assessed nutrient dynamics in reefs experiencing long-term nutrient enrichment (Cuet et al., 2011a; Furnas, 2003; Paytan et al., 2006; Tait et al., 2014), relatively little work has focused on systems experiencing natural pulses in nutrient delivery from processes such as coastal upwelling (Andrews and Gentien, 1982; Stuhldreier et al., 2015; Wyatt et al., 2012) or internal waves (Green et al., 2019; Leichter et al., 2003; Wang et al., 2007). 
Additionally, the majority of reef research to date has occurred on reefs whose circulation patterns and residence times are mainly driven by wave-breaking on the fore-reef (Monismith, 2007). However, the circulation of up to a third of reefs worldwide has been estimated to be tide-dominated, defined as the case where annual mean significant wave height (offshore of the reef) is less than the mean tidal range (Lowe and Falter, 2015). Reefs that are strongly tide-dominated can experience substantial variability in flow speeds and water depths over a single semidiurnal tidal cycle (Lowe et al., 2015), which suggests that mass-transfer-limited nutrient uptake rates (and other biological processes) would also vary throughout the tidal cycle. The Kimberley coastal region (located in remote northwestern Australia) has a macrotidal regime where spring tidal ranges can reach 12 m in some locations (Kowalik, 2004). The region contains thousands of islands with a total reef area estimated to be ∼2000 km² (Kordi and O'Leary, 2016), inhabited by diverse coral reef and seagrass communities (Richards et al., 2015; Wells et al., 1995). Recent work has revealed the strongly tide-dominated circulation that can occur on Kimberley reef platforms (Lowe et al., 2015). When the tidal amplitude (half the tidal range) is greater than the reef elevation relative to mean sea level, water levels drop below the reef for portions of each tidal cycle, and this "truncation" of the semidiurnal tide results in asymmetric phase durations (∼10 h ebb and ∼2 h flood) and flow speeds (Lowe et al., 2015). Extended periods of low water depth on reef platforms such as Tallon Island can cause communities to experience high irradiances that result in diel temperature changes up to 11 °C (Lowe et al., 2016) and dissolved oxygen fluctuations among the most extreme measured worldwide (Gruber et al., 2017). Recent measurements of coral calcification (Dandan et al., 2015), seagrass productivity (Pedersen et al., 2016), reef community metabolism (Gruber et al., 2017), and particulate nutrient uptake (Gruber et al., 2018) have been published from tide-dominated systems, yet little is currently known about how these large tides control fluxes of dissolved nutrients. The objectives of this study were to (1) measure fluxes of dissolved N and P on a tidally forced reef, (2) compare measured rates to maximum potential uptake predicted by mass-transfer theory, and (3) compare tidal forcing (velocity and water depth changes) and oceanic forcing (seasonal changes in nutrient concentration) of mass-transfer-limited uptake rates. This work will provide some preliminary insight into the magnitudes, variability, and temporal scales of nutrient cycling on tide-dominated reefs.

2 Methods

## 2.1 Field site

A series of field experiments were conducted in the western Kimberley region at Tallon Island, which contains a large intertidal reef platform (surface area 2.2×10⁶ m²) on its eastern side (Fig. 1). The platform is elevated slightly (25 cm) above mean sea level, and the seaward rim is 10 cm shallower than the rest of the platform; this feature, coupled with bottom friction, prevents reef benthic communities from becoming emersed during low tide (Lowe et al., 2015).
The platform is covered with a series of regular shore-parallel ridges ∼0.15–0.25 m in height and contains two benthic communities: a seagrass-dominated inner zone (from the fringing mangrove shoreline to 400 m landward of the reef crest) and a coral reef outer zone (200 m wide extending shoreward from the crest). Between these distinct communities, a 200 m zone of rubble and sand occurs where the seagrass and coral reef communities mix (Fig. 1). Enhalus acoroides is found with Thalassia hemprichii in the seagrass zone (Wells et al., 1995). The coral community contains brown foliose macroalgae (predominantly Sargassum spp.), a diverse assemblage of small hard corals (∼5 %–10 % cover), soft coral, coralline macroalgae, and crustose coralline algae.

Table 1. Summary of mean (italics indicate standard deviation) conditions in offshore waters during October and February field experiments. Nutrient species measured are nitrate and nitrite (NOx), ammonium ($\mathrm{NH}_4^+$), dissolved inorganic phosphorus (DIP), and dissolved organic nitrogen (DON). Number of duplicate nutrient samples collected is shown for offshore (Off), coral (CR), and seagrass (SG) sites. * Difference between max and min water levels.

Figure 1. Deployment locations of hydrodynamic instrumentation and water sampling locations on the Tallon reef platform and offshore. Inset shows Tallon Island location in the western Kimberley region of Australia. ADV refers to acoustic Doppler velocimeter and ADPHR refers to acoustic Doppler profiler.

The Kimberley region experiences a subtropical climate, so field experiments at Tallon reef were conducted during the dry (5–20 October 2013) and wet seasons (4–9 February 2014). Nutrient concentrations were measured from duplicate filtered water samples (Table 1) and were collected around hydrodynamic instrumentation, forming a one-dimensional control volume, as detailed below (see also Gruber et al., 2017). This approach allowed estimation of dissolved nutrient fluxes (the net uptake or release of nutrients) across the reef benthos. Estimates of uptake of DIN and DIP at the limits of mass transfer were made using hydrodynamic data over a spring–neap cycle (∼15 d) collected during the hydrodynamic study of Lowe et al. (2015) and nutrient concentrations from water sampling during the October and February field experiments. Flows on the reef platform are strongly tide-driven and can be predicted based on water depth and tidal phase (Lowe et al., 2015); given that spring and neap tidal ranges were very similar between October and April experiments, velocity measurements from April can be considered representative of velocities in October. This paper presents tidal phase-averaged data as a way to visualize hydrodynamic and biogeochemical measurements that tend to fluctuate with the phase of tide. Phase-averaged values in this study are ensemble averages of all measurements occurring at a given point in the semidiurnal (M2) tidal cycle (e.g., the average of all measurements taken during low tide).

## 2.2 Dissolved nutrient sampling

Water samples were collected during both field experiments for analysis of dissolved nutrient concentrations in offshore and reef flat waters. Eulerian sampling occurred at three stations (Fig. 1): the coral zone (CR), the seagrass zone (SG), and offshore of the reef in adjacent waters (Off). Offshore samples were collected throughout the semidiurnal tidal cycle on days of sampling (Table 1).
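As an illustration of the tidal phase-averaging convention defined at the end of Sect. 2.1, here is a minimal Python sketch (not code or data from the study; the time series is a synthetic placeholder and only the M2 period is a real constant):

import numpy as np

M2_HOURS = 12.4206                             # semidiurnal M2 tidal period, h
t = np.arange(0, 15 * 24, 0.25)                # ~15 d of 15 min samples, h
x = 1.5 + np.cos(2 * np.pi * t / M2_HOURS)     # synthetic depth-like signal, m

phase = np.mod(t, M2_HOURS)                    # position of each sample in the M2 cycle
edges = np.linspace(0, M2_HOURS, 25)           # ~30 min phase bins
which = np.digitize(phase, edges)
phase_avg = np.array([x[which == i].mean() for i in range(1, len(edges))])
# phase_avg[i] is the ensemble mean of every sample falling in phase bin i,
# e.g. "the average of all measurements taken during low tide".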
Collecting water samples on the reef platform was not feasible during periods of peak flood and ebb, which occurred 0–1 and 4–6 h after the onset of reef flooding, respectively (when offshore waters first overtopped the reef crest). Rapid changes in water depth during these tidal phases caused current speeds exceeding 0.8 m s−1 (Fig. 2), which made for unsafe conditions for sampling by foot or boat. Reef sampling was conducted during the remaining 9 h of each tidal cycle, either by foot when water depths were low (∼0.4–0.6 m) or by boat during high tide (1–4 h from the onset of reef flooding).

Figure 2. Selected time series of spring–neap transition showing (a) water depths (h) on the reef (measured in the seagrass zone) and offshore, with depth-averaged flow speed u in (b) coral- and seagrass-dominated zones.

Water samples were collected from just beneath the water surface for analysis of dissolved nutrients. A 50 mL syringe (pre-rinsed with reef water) was used to collect water, which was immediately filtered (Minisart, pore size 0.45 µm) into 30 mL pre-rinsed tubes. These samples were placed in darkness on ice and were frozen upon return to the field station (several hours); samples were transported and stored frozen until analysis at the laboratory (<4 weeks from the end of the field experiment). Analyses of nitrate and nitrite (NOx), ammonium ($\mathrm{NH}_4^+$), and inorganic phosphorus (DIP) concentrations were determined on a flow-injection autoanalyzer (Lachat QuikChem 2500) using standard methods (Strickland and Parsons, 1972). Total dissolved nitrogen was determined by persulfate oxidation of filtered samples (Valderrama, 1981), followed by analysis of nitrate as above. Dissolved organic nitrogen (DON) was estimated from the total dissolved nitrogen less NOx and $\mathrm{NH}_4^+$. All nutrient concentrations presented are the mean of duplicate samples.

## 2.3 Control volume approach

The control volume (CoVo) technique utilizes flow measurements and modified Eulerian sampling of solutes or particles to derive in situ benthic flux estimates. Tallon reef platform is well-suited to a one-dimensional CoVo approach due to long periods (approximately 10 h of each semidiurnal tidal cycle) of consistent flow direction; nutrient sampling may thus be conducted at "upstream" and "downstream" sites during these periods. A similar approach has previously been used on Tallon reef to estimate its benthic metabolism (Gruber et al., 2017) and particulate material uptake (Gruber et al., 2018) rates. A bottom-mounted acoustic Doppler current profiler (Nortek Aquadopp HR) was stationed near SG (Fig. 1) and measured current velocity and water depth (h) at 1 Hz and 0.03 m bins. Depth-averaged flow speeds (u) were averaged at 5 min intervals. During the reef's extended ∼10 h ebb tide, water drained off the platform in a consistent northeast direction (80° ± 30°, mean ± standard deviation), along which the water sampling stations were aligned. Depth-averaged current velocity was rotated in this ebb flow direction (ux) and transport qx was estimated as follows:

$q_x = u_x h, \qquad (1)$

assuming negligible horizontal dispersion.
The net flux Jnet (in mmol N or P m−2 d−1) of each nutrient species (NOx, $\mathrm{NH}_4^+$, DIP, and DON) into the benthos was estimated as follows:

$-J_{\mathrm{net}} = \overline{h}\,\frac{\mathrm{d}\overline{C}}{\mathrm{d}t} + q_x\,\frac{C_{\mathrm{CR}} - C_{\mathrm{SG}}}{\mathrm{d}x}, \qquad (2)$

where the distance between sampling stations dx was 540 m and $\overline{h}$ was the mean water depth along dx. Nutrient concentrations at CR and SG are represented by CCR and CSG, respectively; $\overline{C}$ is the mean of CCR and CSG at a given time step (Genin et al., 2002). Positive values of Jnet represent net benthic nutrient uptake and negative Jnet indicates net release of nutrients to the water column; these fluxes are the net result of all biogeochemical processes occurring between SG and CR, and thus represent fluxes from a combination of seagrass and coral reef communities. The "local" benthic flux (i.e., nutrient uptake or release occurring in the reference frame of the sampling stations) is represented by the first right-side term of Eq. (2) and was estimated at hourly intervals when water sampling occurred. The second term of Eq. (2) represents the "advective" flux (i.e., nutrient uptake or release during transit between sampling stations). Transit time between stations changed throughout the tidal cycle and could be on the order of hours during periods of slow flow (<5 cm s−1). To better represent the advective component, advective fluxes were calculated at every point where nutrient concentrations were available and were then bin-averaged over a time interval that approximated the transit time. These estimates were then linearly interpolated to times where local estimates existed.

## 2.4 Uptake rates at the limits of mass transfer

For comparison with the field observations, the theoretical uptake rates of DIN and DIP at the limits of mass transfer (JMTL) were calculated for each of the measurements of Jnet above. Assuming nutrient concentrations at the tissue surface of benthic organisms were near zero, JMTL was estimated along the study transect (from SG to CR) as follows (Falter et al., 2004):

$J_{\mathrm{MTL}} = S\,\overline{C}, \qquad (3)$

where S is the mass-transfer velocity (in m d−1). Estimates of JMTL and S were made for NOx, $\mathrm{NH}_4^+$, and DIP and were averaged over the same time intervals as Jnet. Mass-transfer velocity S was estimated as follows (Falter et al., 2004):

$S = u_x C_D^{0.5} / \left(Re_k^{0.2}\, Sc^{0.6}\right), \qquad (4)$

where CD is the drag coefficient, Rek is the roughness Reynolds number, and Sc is the Schmidt number. Mass-transfer velocity is a function of flow speed and is indirectly related to water depth through the drag coefficient; the magnitude of S depends on the diffusivity of the nutrient species of interest (through the Schmidt number) yet is unrelated to nutrient concentration (see below). The Schmidt number is defined as the kinematic viscosity $\nu$ divided by the diffusion coefficient D of the nutrient species, which were 19.05×10^−6, 19.80×10^−6, and 7.00×10^−6 cm² s−1 for NOx, $\mathrm{NH}_4^+$, and $\mathrm{PO}_4^{3-}$, respectively (Li and Gregory, 1974).
The drag coefficient CD increases dramatically as reef water depth decreases (Lentz et al., 2017) and was estimated from an empirical relationship between h and the mean height of reef ridges hr as follows (McDonald et al., 2006):

$C_D = 1.01\,(h/h_r)^{-2.77} + 0.01, \qquad (5)$

where hr was determined by measuring the mean height (vertical distance between the crest and trough of a reef ridge) of all ridges along a 50 m transect. The roughness Reynolds number Rek is defined as follows:

$Re_k = u_* k_s / \nu, \qquad (6)$

where ks, a hydraulic roughness length scale, was 0.5 m (Lowe et al., 2015) and the shear velocity u∗ is a function of bottom shear stress $\tau_b$ and seawater density $\rho$ as follows:

$u_* \equiv \sqrt{\tau_b/\rho} = u_x\sqrt{C_D/2}. \qquad (7)$

Estimates of maximum potential nutrient release (Jrelease) represent the flux of NOx, $\mathrm{NH}_4^+$, and DIP necessary to match the observed Jnet assuming uptake occurred at mass-transfer-limited rates and were estimated as follows (Wyatt et al., 2012):

$J_{\mathrm{release}} = J_{\mathrm{net}} - J_{\mathrm{MTL}}, \qquad (8)$

for each of the intervals over which Jnet was calculated. Large changes in water depth, flow speed, and nutrient concentration occurred during each tidal cycle, yet measurements of Jnet could only be made during ebb tide (generally 6–12 h after onset of reef flooding). In order to understand how the range of flow speeds experienced by this reef platform could influence maximum potential nutrient uptake rates, we calculated JMTL continuously over a full ∼15 d spring–neap cycle at individual stations SG and CR. Flow speed measurements from an April 2014 experiment were used, which included an acoustic Doppler profiler (ADP) and acoustic Doppler velocimeter (ADV) located at SG and CR, respectively; as discussed previously, flows on Tallon reef can be predicted based on water depth and tidal phase (Lowe et al., 2015), so measurements from April would be representative of flows during October and February experiments. Calculations were made as above (Eqs. 3–7) with the exception of using u instead of ux (Eqs. 4, 7), as we are now estimating fluxes over the full tidal cycle rather than only the roughly unidirectional ebb tide portion. Tidal phase-averaged concentrations of NOx, $\mathrm{NH}_4^+$, and DIP were approximated for both sites (CR and SG) and field experiments (October and February) using measured concentrations (Fig. 3), where available. As it was not possible to collect water samples during peak ebb tide (due to hazardous conditions), nutrient concentrations in offshore waters (Table 1) were assumed to be representative of concentrations on the reef platform during those times. In a strongly tide-dominated system such as Tallon reef, each tidal cycle "refills" the reef by flushing it with fresh oceanic water. In order to conceptualize the net biogeochemical fluxes that occur over this cycle, we used tidal cycle averages. Tidal cycle averages of mass-transfer velocities (Scyc) and mass-transfer-limited nutrient flux (Jcyc) were calculated as the mean of all S and JMTL, respectively, occurring within an individual semidiurnal tidal cycle beginning when water flooded the reef platform.
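To make the chain of Eqs. (2)–(7) concrete, here is a minimal Python sketch (not the authors' code); the viscosity, ridge height, and example inputs are illustrative assumptions rather than values from the experiments:

import numpy as np

NU = 1.0e-2        # kinematic viscosity of seawater, cm2 s-1 (~1e-6 m2 s-1); assumed
D_NOX = 19.05e-6   # diffusion coefficient of NOx, cm2 s-1 (Li and Gregory, 1974)
KS = 0.5           # hydraulic roughness length scale, m (Lowe et al., 2015)
HR = 0.2           # mean ridge height, m (mid-range of the 0.15-0.25 m ridges); assumed

def mass_transfer_limited_uptake(u, h, conc):
    """S (m d-1) and J_MTL (mmol m-2 d-1) from flow speed u (m s-1),
    depth h (m), and nutrient concentration conc (mmol m-3, i.e., uM)."""
    cd = 1.01 * (h / HR) ** -2.77 + 0.01            # Eq. (5): drag coefficient
    u_star = u * np.sqrt(cd / 2.0)                  # Eq. (7): shear velocity, m s-1
    re_k = u_star * KS / (NU * 1e-4)                # Eq. (6): roughness Reynolds number
    sc = NU / D_NOX                                 # Schmidt number (both in cm2 s-1)
    s = u * cd ** 0.5 / (re_k ** 0.2 * sc ** 0.6)   # Eq. (4): mass-transfer velocity, m s-1
    s_per_day = s * 86400.0                         # convert to m d-1
    j_mtl = s_per_day * conc                        # Eq. (3): mass-transfer-limited flux
    return s_per_day, j_mtl

def net_flux(h_bar, dc_dt, q_x, c_cr, c_sg, dx=540.0):
    """Eq. (2): net benthic flux J_net (positive = uptake), combining the
    local term (h_bar * dC/dt) and the advective term (q_x * dC/dx).
    Units must be consistent, e.g., m, days, and mmol m-3."""
    return -(h_bar * dc_dt + q_x * (c_cr - c_sg) / dx)

# Illustrative late-ebb conditions: u ~ 0.1 m s-1, h ~ 0.5 m, NOx ~ 0.5 uM
print(mass_transfer_limited_uptake(0.1, 0.5, 0.5))   # roughly (9 m d-1, ~5 mmol m-2 d-1)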
Figure 3. Measurements of (a, b) nitrate (NOx), (c, d) ammonium ($\mathrm{NH}_4^+$), and (e, f) dissolved inorganic phosphorus (DIP) from water samples during October (a, c, e) and February (b, d, f) field experiments. Samples were taken at two reef stations, the CR- and SG-dominated zones, and mean offshore nutrient concentrations are shown (dashed blue line). Tidal phase-averaged water depth h is also shown (black line).

Uncertainties in estimates of S, Jnet, and JMTL were estimated by propagating standard deviations using Monte Carlo simulation (10 000 iterations). Error terms for hydrodynamic variables were derived from bin-averaged data (Lehrter and Cebrian, 2010) and were 0.01 m for h, 0.03 m s−1 for u, 0.05 µM for concentrations of NOx and $\mathrm{NH}_4^+$, 0.01 µM for DIP, and 1.0 µM for DON. Tidal phase-averaged concentrations of NOx and DIP used in JMTL estimates were assigned standard deviations of 0.5 and 0.05 µM, respectively.

3 Results

## 3.1 Nutrient concentrations and measured fluxes

Characteristics of offshore water (temperature, salinity, and nutrient concentrations) showed some differences between dry and wet season field experiments. Water temperature was ∼2 °C warmer during the wet season in February, and levels of DIN were elevated, with NOx concentrations approximately double those measured during the dry season in October (Table 1). Salinity and concentrations of DIP and DON were similar between seasons. Reef platform nutrient concentrations were similar to offshore concentrations during flood tide and the start of ebb tide (∼3–6 h after reef flooding, Fig. 3); during the remaining 6 h of ebb tide, the concentrations of DIN changed dramatically depending on the reef zone (benthic community type). In the case of NOx, concentrations decreased in the seagrass zone (SG) but increased in the coral zone (CR) by up to 5 times compared to offshore levels (Fig. 3a, b). Increases in $\mathrm{NH}_4^+$ occurred at both SG and CR during ebb tide (Fig. 3c, d), while DIP was generally lower than offshore concentrations but tended to increase at CR during the final few hours of ebb tide (Fig. 3e, f).

Figure 4. Fluxes (± standard deviation) of (a) nitrate (NOx), (b) ammonium ($\mathrm{NH}_4^+$), (c) dissolved inorganic phosphorus (DIP), and (d) dissolved organic nitrogen (DON) along the study transect during both field experiments. Net benthic fluxes (Jnet) were estimated using the CoVo approach, while mass-transfer-limited uptake (JMTL) was calculated (Eq. 3) from reef platform flow and nutrient concentrations, and nutrient release (Jrelease) was estimated from net and MTL fluxes.

Fluxes of DIN and DIP estimated using the CoVo technique were generally negative, indicating a net efflux (release) of nutrients from the benthos to the water column. This was especially true for NOx, where net nutrient release (Jnet < 0) reached 5 mmol m−2 d−1 (Fig. 4a), and net uptake (Jnet > 0) was not observed during any point in either field experiment. Fluxes of $\mathrm{NH}_4^+$ and DIP varied between net uptake and release (Fig. 4b, c), and Jnet for DIP tended to transition from net uptake to net release over the duration of ebb tide. There were no substantial differences in overall mean Jnet of dissolved inorganic nutrients between October and February field experiments (Table 2). Fluxes of DON did differ between seasons; Jnet varied between net uptake and net release during October (Fig. 4d) although mean Jnet was negligible (Table 2).
During February, Jnet of DON transitioned from net uptake to net release over the ebb tide (Fig. 4d) but showed a large uptake on average (Table 2).

Table 2. Mean (italics indicate standard error) net fluxes (in mmol m−2 d−1) of nutrients determined by the CoVo approach during the October and February field experiments. Nutrient species include nitrate and nitrite (NOx), ammonium ($\mathrm{NH}_4^+$), dissolved inorganic phosphorus (DIP), and dissolved organic nitrogen (DON). Mean net (Jnet), mass-transfer-limited (JMTL), and release (Jrelease) fluxes are from samples taken during the final 6 h of ebb tide and do not represent fluxes at all phases of the semidiurnal tidal cycle.

## 3.2 Mass-transfer velocity and nutrient uptake

For simplicity, only values of S for NOx are shown, as the values of other species ($\mathrm{NH}_4^+$, DIP) differ only in magnitude by a constant factor (due to diffusivity). Although temperature influences S through viscosity, changes in temperature on the reef platform had a negligible effect on S (<0.01 %) compared to reef hydrodynamics. The tidal phase-averages of S on the reef platform (Fig. 5) demonstrate the strong influence of flow speed and water depth on S. Mass-transfer velocities rose sharply during the peak flood and ebb periods (0–1.5 and 4–6 h after reef flooding, respectively). The largest S each tidal cycle occurred at the beginning of flood tide, characterized by high flow speeds (∼0.5 m s−1) and minimum water depths (∼0.4 m) on the reef platform (Fig. 2); values of S during flood tide were ∼30 % greater at CR compared to SG, which was due to the larger flow speeds and slightly shallower (10 cm) water depths that occurred near the reef crest. The lowest S for each tidal cycle (Fig. 5) occurred at high tide when flow speeds became negligible and reef water depths were comparatively large (∼2.5 m). Values of S were relatively small (∼5 m d−1 for NOx) later in the ebb tide (8–12 h after reef flooding) and were similar between SG and CR. As S was estimated over a full spring–neap tidal cycle, the ranges of values shown (Fig. 5) are from the most (spring) and least (neap) energetic tidal cycles, which cause S to vary by a factor of <4.

Figure 5. Tidal phase-averages of flow speed u, drag coefficient CD, and mass-transfer velocity S for nitrate (NOx) in coral- and seagrass-dominated zones. Phase-averages are the mean of all measurements occurring at the same point in the tidal cycle (e.g., mean of all S at high tide), and the range represents conditions during spring and neap tidal cycles. Hydrodynamic data are from April 2014. Dashed lines in panel (e) indicate upper and lower limits of S measured from previous studies of reef communities reviewed by Atkinson and Falter (2003).

The mass-transfer-limited nutrient fluxes JMTL were a function of both S and the local nutrient concentrations (Eq. 3). Fluxes showed variability over the tidal cycle associated with S but also showed prominent differences between benthic communities and seasons related to nutrient concentrations. Elevated NOx concentrations at CR (Fig. 3a, b) resulted in rising JMTL during the final 6 h of ebb tide, while low NOx concentrations at SG resulted in low JMTL, especially during ebb tide (Fig. 6). Similar concentrations of DIP (Fig. 3e, f) between sites resulted in similar JMTL between CR and SG (Fig. 6c, d). Seasonal changes in offshore nutrient concentrations, particularly for NOx, have the potential to enhance nutrient uptake rates.
Elevated offshore NOx during February (Table 1) resulted in a doubling of estimated JMTL during flood and high tide portions of each tidal cycle, compared to October (Fig. 6a, b). Seasonal differences in JMTL were also found for DIP, where elevated fluxes occurred during October (compared to February) due to higher DIP concentration in the dry season (Table 1, Fig. 6c, d). The maximum potential release of DIN and DIP to the water column, assuming uptake was mass-transfer-limited (Jrelease, Eq. 8), was calculated for every instance of measured Jnet (Fig. 4). In the case of NOx, Jrelease was roughly double Jnet (Fig. 4a), due to the large net NOx release measured on the reef platform. Whereas for $\mathrm{NH}_4^+$ and DIP, Jrelease was on the order of JMTL, due to negligible values of Jnet (Fig. 4b, c). Overall mean rates of JMTL and Jrelease for DIN did not show seasonal differences (Table 2), which was likely a function of these estimates only occurring during a portion (ebb) of the tidal cycle.

Figure 6. Tidal phase-averaged mass-transfer-limited uptake rates of JMTL for (a, b) NOx and (c, d) DIP in both coral- and seagrass-dominated zones over a full spring–neap cycle. Phase-averages are the mean of all measurements occurring at the same point in the tidal cycle (i.e., mean of all JMTL at high tide). Shaded areas of JMTL indicate the range where maximum values approximate uptake during spring tides and minimum values during neap tides. Estimates of JMTL were calculated using tidal phase-averaged nutrient concentrations from October and February field experiments (Fig. 3) and mass-transfer velocity S (Fig. 5e, f).

When S was averaged over individual semidiurnal tidal cycles (e.g., mean of all S within a tidal cycle, beginning with reef flooding), the difference between SG and CR was only ∼1 m d−1 (Fig. 7). Mass-transfer velocities for NOx and $\mathrm{NH}_4^+$ were of similar magnitude over the tidal cycle, while those for DIP were ∼50 % lower (Fig. 7); this was a function of the diffusivity of each of these solutes (Li and Gregory, 1974). When JMTL was similarly averaged over individual tidal cycles (Fig. 8), community and seasonal differences in JMTL described previously (Fig. 6) were prominent. Uptake of NOx showed the greatest differences between seasons and sites, with uptake rates during the wet season greater than dry season rates by a factor of ∼2. Similarly, estimates of DIP uptake were slightly enhanced during the dry season compared to wet season rates, while uptake of $\mathrm{NH}_4^+$ was similar between seasons and sites (Fig. 8).

4 Discussion

## 4.1 Oceanic nutrient supply

The measurements of offshore nutrient concentrations presented in Table 1 are among the first published for the Kimberley region (Jones et al., 2014) and are the only (to our knowledge) published record that includes measurements during the wet season. Concentrations of dissolved nutrients (NOx, $\mathrm{NH}_4^+$, DIP, and DON) were at the upper end of typical values in coral reef waters worldwide, especially in the case of DON, which far exceeded the <5 µM common in reef waters (Atkinson and Falter, 2003). Measurements from the coastal Kimberley (Table 1) also exceeded long-term mean values from inshore waters of the Great Barrier Reef (GBR) during both the wet and dry seasons (Furnas et al., 2005; Schaffelke et al., 2012).
The Kimberley region shares similar rainfall patterns, tidal ranges, and low levels of catchment alteration with the northern GBR (at a similar latitude to the Kimberley), yet concentrations of DIN and DIP measured in this study were an order of magnitude greater than those from the wet tropics (Furnas et al., 2005; Schaffelke et al., 2012). These observations, coupled with elevated concentrations of chlorophyll a and particulate nutrients (Gruber et al., 2018) relative to "typical" oligotrophic reef waters, suggest that some coastal Kimberley reefs may experience naturally mesotrophic conditions.

Figure 7. Means (± standard deviation) of mass-transfer velocity S for all individual semidiurnal tidal cycles (n=23) for nitrate (NOx), ammonium ($\mathrm{NH}_4^+$), and dissolved inorganic phosphorus (DIP). Values are from SG- and CR-dominated communities.

Wet season terrestrial discharge events deliver sediment and nutrients to coastal waters of northern Australia (Brodie et al., 2010; Devlin and Schaffelke, 2009; Schroeder et al., 2012). Offshore concentrations of NOx and $\mathrm{NH}_4^+$ measured in our study approximately doubled during the February field experiment compared to October, whereas DIP and DON were similar between seasons (Table 1). Whether this increase is due to river discharge or coastal oceanographic processes is not presently clear in the Kimberley region and warrants future study. Ratios of offshore DIN : DIP were 4.3 and 10.7 in October and February, respectively (Table 1), with the value during October similar to the DIN : DIP ratio of ∼3:1 previously found in coastal Kimberley waters during the dry season (Jones et al., 2014). These values are below the Redfield ratio (16:1), suggesting that pelagic production may be N-limited. This is common for reef waters generally, although long-term averages of inshore GBR waters are generally <3:1 even during the wet season (Furnas et al., 2005; McKinnon et al., 2013; Schaffelke et al., 2012). This suggests that N-limitation may be less severe in the Kimberley than in GBR waters, particularly during the wet season.

Figure 8. Means (± standard deviation) of mass-transfer-limited uptake JMTL for all individual semidiurnal tidal cycles (n=23) for (a) nitrate (NOx), (b) ammonium ($\mathrm{NH}_4^+$), and (c) dissolved inorganic phosphorus (DIP). Values are from SG- and CR-dominated communities during October and February field experiments.

## 4.2 Rates and sources of benthic release of DIN and DIP

Benthic nutrient fluxes measured using the control volume technique (Jnet) showed net release of NOx on Tallon (Fig. 4a), while $\mathrm{NH}_4^+$ and DIP fluxes varied between uptake and release (Fig. 4b, c) but were negligible overall during the ebb tide (Table 2). Previous studies of reef nutrient fluxes in flumes or other controlled environments have generally shown uptake approaching the limits of mass transfer for $\mathrm{NH}_4^+$ (e.g., Atkinson et al., 1994; Cornelisen and Thomas, 2009; Larned and Atkinson, 1997; Thomas and Atkinson, 1997), DIP (reviewed in Cuet et al., 2011b), and, less frequently, for NOx (e.g., Baird et al., 2004); these controlled environments lack some of the confounding processes present in natural reef communities.
Yet net release of nutrients (especially NOx) clearly occurs in situ, as concentrations on many reefs exceed those offshore (e.g., Hatcher and Frith, 1985; Leichter et al., 2013; Rasheed et al., 2002), and release rates of up to 20 mmol NOx m−2 d−1, 12 mmol ${\mathrm{NH}}_{\mathrm{4}}^{+}$ m−2 d−1, and 2 mmol DIP m−2 d−1 have been measured in in situ studies (Miyajima et al., 2007a, b; Silverman et al., 2012; Wyatt et al., 2012). We have not considered nitrogen inputs from other sources such as N2 fixation (Cardini et al., 2014) or reef porewater advection during ebb tide (Santos et al., 2011), which may result in an overestimation of DIN release on Tallon. However, given that NOx concentrations generally approach detection limits in reef porewater (Sansone et al., 1990; Tribble et al., 1990) and that N2 fixation adds to the ${\mathrm{NH}}_{\mathrm{4}}^{+}$ pool, it seems unlikely that either of these processes dominates the observed nutrient fluxes. If we assume that the fluxes discussed above (Jnet) occur simultaneously with uptake of DIN and DIP near the limits of mass transfer, this gives a gross release (Jrelease) of ∼10 mmol N m−2 d−1 and ∼0.5 mmol P m−2 d−1 (Table 2). Previous work has attributed inorganic nutrient release to remineralization of particulate material by benthic filter feeders (Ribes et al., 2005; Wyatt et al., 2012) and detritivores (Silverman et al., 2012), which can graze PON at rates on the order of DIN release rates, as well as to nitrification by sponge communities (Southwell et al., 2008). In the case of Tallon reef, uptake of phytoplankton (0.95 mmol N and 0.20 mmol P m−2 d−1) (Gruber et al., 2018) is on the order of Jrelease in the case of P but is much smaller than Jrelease of N. Large particles (such as entire fronds of macroalgae) are rare but can form a major component of the particulate organic pool on some reefs (Alldredge et al., 2013); remineralization of similar material (rather than small particles like phytoplankton) may be the source of the observed DIN release on Tallon. Finally, fluxes of DON on the order of Jnet were measured on Tallon, with net uptake occurring during the February experiment (Fig. 4d). The dynamics of DON in reef systems have been addressed in a few studies (e.g., Haas and Wild, 2010; Thibodeau et al., 2013; Ziegler and Benner, 1999), and there is some evidence that reef organisms including corals (Ferrier, 1991), sponges (Rix et al., 2017), and seagrasses (Vonk et al., 2008) can directly utilize DON. In summary, gross release of DIP may be derived from phytoplankton uptake on Tallon reef, but released DIN exceeds phytoplankton inputs and is likely derived from additional sources, including remineralization of large particles and DON. ## 4.3 Tidal and seasonal forcing of mass-transfer-limited fluxes Few estimates of the mass-transfer velocity S exist for in situ reef communities; the majority of previous estimates come from controlled flume experiments and are in the range of 2–15 m d−1 (reviewed in Atkinson and Falter, 2003). Uptake rates are strongly dependent on flow and roughness characteristics (Falter et al., 2016), and in wave-dominated systems S can vary by an order of magnitude across the reef (e.g., from 25 m d−1 on the fore-reef to 5 m d−1 in the back-reef) as bottom stress from wave forcing declines (Wyatt et al., 2012; Zhang et al., 2011). In wave-dominated systems, S would be expected to be reasonably consistent while offshore wave forcing remains similar (e.g., at scales of days–weeks).
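The source attribution in Sect. 4.2 above reduces to a simple mass balance; the sketch below merely restates the quoted numbers (gross release of ∼10 mmol N and ∼0.5 mmol P m−2 d−1 against phytoplankton uptake of 0.95 mmol N and 0.20 mmol P m−2 d−1).

```python
# Mass balance behind the source attribution: phytoplankton grazing can
# plausibly supply the gross P release but only ~10 % of the gross N release.
j_release_N, j_release_P = 10.0, 0.5   # mmol m^-2 d^-1, quoted above
phyto_N, phyto_P = 0.95, 0.20          # mmol m^-2 d^-1 (Gruber et al., 2018)
print(f"P: phytoplankton covers {phyto_P / j_release_P:.0%} of J_release")
print(f"N: phytoplankton covers {phyto_N / j_release_N:.0%} of J_release")
```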
Estimates of S from Tallon reef show uptake rates varying rapidly on the scale of hours or even minutes; for instance, uptake rates for NOx decreased by an order of magnitude (from ∼30 to ∼3 m d−1) over the period of an hour during flood tide (Fig. 5e). When averaged over longer timescales (i.e., over individual semidiurnal tidal cycles), estimates of S for DIN and DIP (∼9 and ∼5 m d−1, respectively) were similar to the mean of those measured in previous studies and differed only slightly between the seagrass and coral reef zones (Fig. 7). The Tallon reef platform experiences flows and water depths particular to its geometry and position relative to mean sea level; therefore, S (and accordingly nutrient uptake) will vary in other tide-dominated reef communities as a function of these factors. Estimates of mass-transfer-limited uptake of DIN and DIP varied over a tidal cycle with S but also showed differences in uptake with reef zone and season (Fig. 6). Reef zones were similar in DIP uptake rates, but rising concentrations of NOx in the coral zone during ebb tide caused estimates of JMTL to increase compared to the seagrass zone (Fig. 6a, b). Previous work on Tallon reef has shown that the coral zone is ∼20 % more productive than the seagrass zone (Gruber et al., 2017), which may be related to this difference in potential nitrate fluxes. Concentrations of NOx and ${\mathrm{NH}}_{\mathrm{4}}^{+}$ were elevated in the wet season, while DIP declined compared to the dry season (Table 1); these seasonal differences were evident in the mass-transfer-limited nutrient fluxes even when integrated over individual semidiurnal tidal cycles (Fig. 8). Ratios of DIN : DIP mass-transfer-limited uptake during October were 8.6 and 10.8 for the seagrass and coral zones, respectively (Fig. 8). These ratios are well below the tissue N:P ratio of 30:1 typical of reef primary producers (Atkinson and Smith, 1983) and suggest that producers on Tallon reef may be strongly N-limited (at least during the dry season). This is supported by low N:P ratios (14:1) measured in Thalassia leaf tissue from Tallon reef during October (Cayabyab, unpublished data). During February, ratios of DIN : DIP mass-transfer-limited uptake were 21.5 and 21.3 for the seagrass and coral zones, respectively (Fig. 8), which suggests that N-limitation may be somewhat alleviated by increases in oceanic DIN during the wet season. ## 4.4 Comparison of wave and tidal forcing This study suggests several important differences between wave- and tide-dominated reef biogeochemistry, which are controlled by the hydrodynamic regime. Firstly, the "source" of a water parcel overlying a particular benthic community differs between wave- and tide-dominated systems. In a simplified wave-driven reef, offshore (oceanic) water moves from reef crest to back-reef roughly unidirectionally, generally exiting the reef through channels. Thus, benthic communities are subjected to the physicochemical properties of offshore waters as modified by the communities upstream of them. In a simplified tide-driven reef, flow direction changes throughout the tidal cycle; during flood tide, offshore waters enter the reef, while during ebb tide, waters from the back-reef traverse all downstream communities. These flow patterns control water residence times within the reef community.
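The semidiurnal cycle averaging applied to S (and JMTL) above is straightforward to reproduce; the sketch below applies it to a synthetic time series, with each cycle taken to begin at reef flooding, as in the text. The series itself is invented for illustration.

```python
# Tidal-cycle averaging of a synthetic S time series: all samples within
# each semidiurnal cycle (taken to begin at reef flooding) are averaged.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 3 * 12.42, 0.25)                  # hours, three cycles
S = np.clip(9 + 8 * np.sin(2 * np.pi * t / 12.42)  # m d^-1, synthetic
            + rng.normal(0, 1, t.size), 0.5, None)

cycle = (t // 12.42).astype(int)                   # cycle index
means = [S[cycle == c].mean() for c in np.unique(cycle)]
print(np.round(means, 1))                          # per-cycle mean S
```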
In wave-dominated reefs, flow speeds are driven by wave breaking on the reef, creating residence times on the scale of hours; wave energy is generally consistent in time over days to weeks (Lowe and Falter, 2015). In tide-dominated systems, reef waters exchange with offshore waters at timescales greater than or equal to a semidiurnal (or diurnal) tidal cycle; this residence time will vary depending on the reef's vertical position relative to mean sea level and its morphology. Finally, there are marked differences in nutrient uptake rates between wave- and tide-dominated reefs. The consistency of wave energy at scales of days to weeks likely drives similarly consistent mass-transfer-limited nutrient uptake rates on wave-dominated reefs. On reefs with strong tidal forcing, however, flow speeds are highly variable throughout the tidal cycle, and mass-transfer-limited uptake can vary by an order of magnitude within minutes to hours. Flow speeds also change over the spring–neap tidal cycle (∼15 d); on Tallon reef, mass-transfer-limited uptake rates were ∼2–4-fold greater during spring tides relative to neap tides. The ∼8 m tidal range of Tallon reef is typical of tidal ranges in the Kimberley region, and thus the results presented here are likely to be broadly representative of conditions experienced by many reefs (∼2000 km2 of total reef area) in this region. Most reefs globally do not experience such an "extreme" tidal regime, and therefore some aspects of this study (such as benthic fluxes varying by an order of magnitude on scales of minutes to hours) would not necessarily represent conditions on mesotidal or microtidal reefs. However, approximately 30 % of reefs worldwide have tide-dominated circulation, including iconic systems such as much of the southern Great Barrier Reef (Lowe and Falter, 2015); such reefs likely experience a similar, though more moderated, version of the physical processes that occur in macrotidal systems. Our work therefore provides some insight into how benthic fluxes may relate to tidal processes on other reef systems. Further process-based studies that incorporate tidal forcing will improve predictions of reef water temperatures (and coral bleaching), in situ calcification rates, and many other physically linked biological processes that affect the health and resilience of coral reef communities. 5 Conclusions This study was one of the first to measure rates of in situ benthic nutrient uptake and release on a tidally forced reef. We found that reef communities released a moderate amount of DIN, potentially derived from the remineralization of phytoplankton, large organic material, and DON. The strong tidal forcing of this reef drives large variability (an order of magnitude) in mass-transfer-limited nutrient uptake rates at short timescales (minutes–hours), and uptake can be enhanced in reef zones downstream of where DIN release occurs. Tallon reef displays some indications of nitrogen limitation during the dry season, which may be relieved during the wet season; seasonal increases in offshore nitrate concentrations increased mass-transfer-limited uptake rates by a factor of ∼2. This work identifies some hydrodynamic properties of tide-dominated reefs that control their biogeochemistry and help define them in comparison to wave-dominated reefs. Data availability.
The data used in this paper are publicly accessible and can be downloaded here: https://data.pawsey.org.au/public/?path=/WA Node Ocean Data Network/WAMSI2/KMRP/2.2/2.2.3 (last access: 8 May 2019). Author contributions. Field experiments were designed by RKG, RJL, and JLF. Fieldwork was conducted by RKG and RJL. RKG analyzed the results and prepared the manuscript with contributions from RJL and JLF. Competing interests. The authors declare that they have no conflict of interest. Acknowledgements. This work was conducted on Bardi Jawi sea country and we acknowledge the Traditional Owners past, present, and emerging who care for this country. We thank the Bardi Jawi Rangers and Kimberley Marine Research Station staff for providing assistance and local knowledge during field experiments. We thank Michael Cuttler, Jordan Iles, Miela Kolomaznik, and Leonardo Ruiz-Montoya for helping with fieldwork. Three anonymous reviewers gave helpful comments that improved earlier versions of this paper. Financial support. This research has been supported by the Australian Research Council (Future Fellowship grant no. FT110100201), the Australian Research Council Centre of Excellence for Coral Reef Studies (grant no. CE140100020), and the Western Australian Marine Science Institution (Kimberley Marine Research Program, Project 2.2.3). Review statement. This paper was edited by Jack Middelburg and reviewed by three anonymous referees. References Alldredge, A. L., Carlson, C. A., and Carpenter, R. C.: Sources of organic carbon to coral reef flats, Oceanography, 26, 108–113, 2013. Andrews, J. C. and Gentien, P.: Upwelling as a source of nutrients for the Great Barrier Reef ecosystems: A solution to Darwin's question?, Mar. Ecol. Prog. Ser., 8, 257–269, 1982. Atkinson, M., Kotler, E., and Newton, P.: Effects of water velocity on respiration, calcification, and ammonium uptake of a Porites compressa community, Pac. Sci., 48, 296–303, 1994. Atkinson, M. J.: Biogeochemistry of nutrients, in: Coral reefs: An ecosystem in transition, edited by: Dubinsky, Z. and Stambler, N., Springer Netherlands, 199–206, 2011. Atkinson, M. J. and Bilger, R. W.: Effects of water velocity on phosphate uptake in coral reef-flat communities, Limnol. Oceanogr., 37, 273–279, 1992. Atkinson, M. J. and Falter, J. L.: Coral Reefs, in: Biogeochemistry of Marine Systems, edited by: Black, K. and Shimmield, G., CRC Press, Boca Raton, FL, 40–64, 2003. Atkinson, M. J. and Smith, S. V.: $\mathrm{C}:\mathrm{N}:\mathrm{P}$ ratios of benthic marine plants, Limnol. Oceanogr., 28, 568–574, 1983. Baird, M. E., Roughan, M., Brander, R. W., Middleton, J. H., and Nippard, G. J.: Mass-transfer-limited nitrate uptake on a coral reef flat, Warraber Island, Torres Strait, Australia, Coral Reefs, 23, 386–396, https://doi.org/10.1007/s00338-004-0404-z, 2004. Bilger, R. and Atkinson, M.: Anomalous mass transfer of phosphate on coral reef flats, Limnol. Oceanogr., 37, 261–272, 1992. Brodie, J., Schroeder, T., Rohde, K., Faithful, J., Masters, B., Dekker, A., Brando, V., and Maughan, M.: Dispersal of suspended sediments and nutrients in the Great Barrier Reef lagoon during river-discharge events: conclusions from satellite remote sensing and concurrent flood-plume sampling, Aust. J. Mar. Freshwater Res., 61, 651–664, https://doi.org/10.1071/MF08030, 2010. Cardini, U., Bednarz, V. N., Foster, R.
A., and Wild, C.: Benthic N2 fixation in coral reefs and the potential effects of human-induced environmental change, Ecol. Evol., 4, 1706–1727, https://doi.org/10.1002/ece3.1050, 2014. Cornelisen, C. D. and Thomas, F. I. M.: Prediction and validation of flow-dependent uptake of ammonium over a seagrass-hardbottom community in Florida Bay, Mar. Ecol. Prog. Ser., 386, 71–81, 2009. Cuet, P., Atkinson, M. J., Blanchot, J., Casareto, B. E., Cordier, E., Falter, J., Frouin, P., Fujimura, H., Pierret, C., Suzuki, Y., and Tourrand, C.: CNP budgets of a coral-dominated fringing reef at La Réunion, France: coupling of oceanic phosphate and groundwater nitrate, Coral Reefs, 30, 45–55, https://doi.org/10.1007/s00338-011-0744-4, 2011a. Cuet, P., Pierret, C., Cordier, E., and Atkinson, M. J.: Water velocity dependence of phosphate uptake on a coral-dominated fringing reef flat, La Réunion Island, Indian Ocean, Coral Reefs, 30, 37–43, https://doi.org/10.1007/s00338-010-0712-4, 2011b. Dandan, S. S., Falter, J. L., Lowe, R. J., and McCulloch, M. T.: Resilience of coral calcification to extreme temperature variations in the Kimberley region, northwest Australia, Coral Reefs, 34, 1151–1163, https://doi.org/10.1007/s00338-015-1335-6, 2015. D'Elia, C. and Wiebe, W.: Biogeochemical nutrient cycles in coral reef ecosystems, in: Coral Reefs, edited by: Dubinsky, Z., Elsevier, 49–74, 1990. Devlin, M. and Schaffelke, B.: Spatial extent of riverine flood plumes and exposure of marine ecosystems in the Tully coastal region, Great Barrier Reef, Aust. J. Mar. Freshwater Res., 60, 1109–1122, https://doi.org/10.1071/MF08343, 2009. Falter, J. L., Atkinson, M. J., and Merrifield, M. A.: Mass-transfer limitation of nutrient uptake by a wave-dominated reef flat community, Limnol. Oceanogr., 49, 1820–1831, https://doi.org/10.4319/lo.2004.49.5.1820, 2004. Falter, J. L., Lowe, R. J., and Zhang, Z.: Toward a universal mass-momentum transfer relationship for predicting nutrient uptake and metabolite exchange in benthic reef communities, Geophys. Res. Lett., 43, 9764–9772, https://doi.org/10.1002/2016gl070329, 2016. Ferrier, M. D.: Net uptake of dissolved free amino acids by four scleractinian corals, Coral Reefs, 10, 183–187, https://doi.org/10.1007/bf00336772, 1991. Furnas, M., Mitchell, A., Skuza, M., and Brodie, J.: In the other 90 %: phytoplankton responses to enhanced nutrient availability in the Great Barrier Reef Lagoon, Mar. Pollut. Bull., 51, 253–265, https://doi.org/10.1016/j.marpolbul.2004.11.010, 2005. Furnas, M., Alongi, D., McKinnon, D., Trott, L., and Skuza, M.: Regional-scale nitrogen and phosphorus budgets for the northern (14° S) and central (17° S) Great Barrier Reef shelf ecosystem, Cont. Shelf Res., 31, 1967–1990, https://doi.org/10.1016/j.csr.2011.09.007, 2011. Furnas, M. M. J.: Catchments and corals: terrestrial runoff to the Great Barrier Reef, Australian Institute of Marine Science & CRC Reef Research Centre, 350 pp., 2003. Genin, A., Yahel, G., Reidenbach, M., Monismith, S., and Koseff, J.: Reefs revealed using the control volume approach, Oceanography, 15, 90–96, 2002. Green, R. H., Jones, N. L., Rayson, M. D., Lowe, R. J., Bluteau, C. E., and Ivey, G. N.: Nutrient fluxes into an isolated coral reef atoll by tidally driven internal bores, Limnol. Oceanogr., 64, 461–473, 2019. Grover, R., Maguer, J.-F., Allemand, D., and Ferrier-Pagès, C.: Uptake of dissolved free amino acids by the scleractinian coral Stylophora pistillata, J. Exp. Biol., 211, 860–865, https://doi.org/10.1242/jeb.012807, 2008. Gruber, R.
K., Lowe, R. J., and Falter, J. L.: Metabolism of a tide-dominated reef platform subject to extreme diel temperature and oxygen variations, Limnol. Oceanogr., 62, 1701–1717, https://doi.org/10.1002/lno.10527, 2017. Gruber, R. K., Lowe, R. J., and Falter, J. L.: Benthic uptake of phytoplankton and ocean-reef exchange of particulate nutrients on a tide-dominated reef, Limnol. Oceanogr., 63, 1545–1561, https://doi.org/10.1002/lno.10790, 2018. Haas, A. F. and Wild, C.: Composition analysis of organic matter released by cosmopolitan coral reef-associated green algae, Aquat. Biol., 10, 131–138, 2010. Hatcher, A. I. and Frith, C. A.: The control of nitrate and ammonium concentrations in a coral reef lagoon, Coral Reefs, 4, 101–110, 1985. Houlbrèque, F., Delesalle, B., Blanchot, J., Montel, Y., and Ferrier-Pagès, C.: Picoplankton removal by the coral reef community of La Prevoyante, Mayotte Island, Aquat. Microb. Ecol., 44, 59–70, https://doi.org/10.3354/ame044059, 2006. Hurd, C. L.: Water motion, marine macroalgal physiology, and production, J. Phycol., 36, 453–472, https://doi.org/10.1046/j.1529-8817.2000.99139.x, 2000. Jones, N. L., Patten, N. L., Krikke, D. L., Lowe, R. J., Waite, A. M., and Ivey, G. N.: Biophysical characteristics of a morphologically-complex macrotidal tropical coastal system during a dry season, Estuarine, Coast. Shelf Sci., 149, 96–108, https://doi.org/10.1016/j.ecss.2014.07.018, 2014. Knapp, A. N., Sigman, D. M., and Lipschultz, F. C. G. B.: N isotopic composition of dissolved organic nitrogen and nitrate at the Bermuda Atlantic Time-series Study site, Global Biogeochem. Cy., 19, GB1018, https://doi.org/10.1029/2004gb002320, 2005. Kordi, M. N. and O'Leary, M.: Geomorphic classification of coral reefs in the north western Australian shelf, Reg. Stud. Mar. Sci., 7, 100–110, https://doi.org/10.1016/j.rsma.2016.05.012, 2016. Kowalik, Z.: Tide distribution and tapping into tidal energy, Oceanologia, 46, 291–331, 2004. Larned, S. T.: Nitrogen- versus phosphorus-limited growth and sources of nutrients for coral reef macroalgae, Mar. Biol., 132, 409–421, https://doi.org/10.1007/s002270050407, 1998. Larned, S. T. and Atkinson, M.: Effects of water velocity on NH4 and PO4 uptake and nutrient-limited growth in the macroalga Dictyosphaeria cavernosa, Mar. Ecol. Prog. Ser., 157, 295–302, 1997. Lehrter, J. C. and Cebrian, J.: Uncertainty propagation in an ecosystem nutrient budget, Ecol. Appl., 20, 508–524, 2010. Leichter, J. J., Stewart, H. L., and Miller, S. L.: Episodic nutrient transport to Florida coral reefs, Limnol. Oceanogr., 48, 1394–1407, https://doi.org/10.4319/lo.2003.48.4.1394, 2003. Leichter, J. J., Aldredge, A. L., Bernardi, G., Brooks, A. J., Carlson, C. A., Carpenter, R. C., Edmunds, P. J., Fewings, M. R., Hanson, K. M., Hench, J. L., Holbrook, S. J., Nelson, C. E., Schmitt, R. J., Toonen, R. J., Washburn, L., and Wyatt, A. S. J.: Biological and physical interactions on a tropical island coral reef: Transport and retention processes on Moorea, French Polynesia, Oceanography, 26, 52–63, 2013. Lentz, S. J., Davis, K. A., Churchill, J. H., and DeCarlo, T. M.: Coral reef drag coefficients-water depth dependence, J. Phys. Oceanogr., 47, 1061–1075, https://doi.org/10.1175/JPO-D-16-0248.1, 2017. Li, Y.-H. and Gregory, S.: Diffusion of ions in sea water and in deep-sea sediments, Geochim. Cosmochim. Ac., 38, 703–714, 1974. Lowe, R. J. and Falter, J. L.: Oceanic forcing of coral reefs, Annu. Rev. Mar. Sci., 7, 43–66, https://doi.org/10.1146/annurev-marine-010814-015834, 2015. 
Lowe, R. J., Leon, A. S., Symonds, G., Falter, J. L., and Gruber, R.: The intertidal hydraulics of tide-dominated reef platforms, J. Geophys. Res.-Oceans, 120, 4845–4868, https://doi.org/10.1002/2015jc010701, 2015. Lowe, R. J., Pivan, X., Falter, J., Symonds, G., and Gruber, R.: Rising sea levels will reduce extreme temperature variations in tide-dominated reef habitats, Sci. Adv., 2, e1600825, https://doi.org/10.1126/sciadv.1600825 , 2016. McDonald, C., Koseff, J., and Monismith, S.: Effects of the depth to coral height ratio on drag coefficients for unidirectional flow over coral, Limnol. Oceanogr., 51, 1294–1301, 2006. McKinnon, A. D., Logan, M., Castine, S. A., and Duggan, S.: Pelagic metabolism in the waters of the Great Barrier Reef, Limnol. Oceanogr., 58, 1227–1242, 2013. McMahon, K. W., Thorrold, S. R., Houghton, L. A., and Berumen, M. L.: Tracing carbon flow through coral reef food webs using a compound-specific stable isotope approach, Oecologia, 180, 809–821, https://doi.org/10.1007/s00442-015-3475-3, 2016. Miyajima, T., Hata, H., Umezawa, Y., Kayanne, H., and Koike, I.: Distribution and partitioning of nitrogen and phosphorus in a fringing reef lagoon of Ishigaki Island, northwestern Pacific, Mar. Ecol. Prog. Ser., 341, 45–57, 2007a. Miyajima, T., Tanaka, Y., Koike, I., Yamano, H., and Kayanne, H.: Evaluation of spatial correlation between nutrient exchange rates and benthic biota in a reef-flat ecosystem by GIS-assisted flow-tracking, J. Oceanogr., 63, 643–659, https://doi.org/10.1007/s10872-007-0057-y, 2007b. Monismith, S.: Hydrodynamics of coral reefs, Annu. Rev. Fluid Mech., 39, 37–55, https://doi.org/10.1146/annurev.fluid.38.050304.092125, 2007. Parrish, J. D.: Fish communities of interacting shallow-water habitats in tropical oceanic regions, Mar. Ecol. Prog. Ser., 58, 143–160, 1989. Paytan, A., Shellenbarger, G. G., Street, J. H., Gonneea, M. E., Davis, K., Young, M. B., and Moore, W. S.: Submarine groundwater discharge: An important source of new inorganic nitrogen to coral reef ecosystems, Limnol. Oceanogr., 51, 343–348, https://doi.org/10.4319/lo.2006.51.1.0343, 2006. Pedersen, O., Colmer, T. D., Borum, J., Zavala-Perez, A., and Kendrick, G. A.: Heat stress of two tropical seagrass species during low tides–impact on underwater net photosynthesis, dark respiration and diel in situ internal aeration, New Phytol., 210, 1207–1218, 2016. Rasheed, M., Badran, M. I., Richter, C., and Huettel, M.: Effect of reef framework and bottom sediment on nutrient enrichment in a coral reef of the Gulf of Aqaba, Red Sea, Mar. Ecol. Prog. Ser., 239, 277–285, 2002. Reidenbach, M. A., Monismith, S. G., Koseff, J. R., Yahel, G., and Genin, A.: Boundary layer turbulence and flow structure over a fringing coral reef, Limnol. Oceanogr., 51, 1956–1968, https://doi.org/10.4319/lo.2006.51.5.1956, 2006. Ribes, M., Coma, R., Atkinson, M. J., and Kinzie, R. A.: Sponges and ascidians control removal of particulate organic nitrogen from coral reef water, Limnol. Oceanogr., 50, 1480–1489, 2005. Richards, Z. T., Garcia, R. A., Wallace, C. C., Rosser, N. L., and Muir, P. R.: A diverse assemblage of reef corals thriving in a dynamic intertidal reef setting (Bonaparte Archipelago, Kimberley, Australia), PLoS ONE, 10, e0117791, https://doi.org/10.1371/journal.pone.0117791, 2015. Rix, L., de Goeij, J. M., van Oevelen, D., Struck, U., Al-Horani, F. A., Wild, C., and Naumann, M. S.: Differential recycling of coral and algal dissolved organic matter via the sponge loop, Funct. 
Ecol., 31, 778–789, https://doi.org/10.1111/1365-2435.12758, 2017. Sansone, F. J., Tribble, G. W., Andrews, C. C., and Chanton, J. P.: Anaerobic diagenesis within recent, Pleistocene, and Eocene marine carbonate frameworks, Sedimentology, 37, 997–1009, 1990. Santos, I. R., Glud, R. N., Maher, D., Erler, D., and Eyre, B. D.: Diel coral reef acidification driven by porewater advection in permeable carbonate sands, Heron Island, Great Barrier Reef, Geophys. Res. Lett., 38, L03604, https://doi.org/10.1029/2010gl046053, 2011. Schaffelke, B., Carleton, J., Skuza, M., Zagorskis, I., and Furnas, M. J.: Water quality in the inshore Great Barrier Reef lagoon: Implications for long-term monitoring and management, Mar. Pollut. Bull., 65, 249–260, https://doi.org/10.1016/j.marpolbul.2011.10.031, 2012. Schroeder, T., Devlin, M. J., Brando, V. E., Dekker, A. G., Brodie, J. E., Clementson, L. A., and McKinna, L.: Inter-annual variability of wet season freshwater plume extent into the Great Barrier Reef lagoon based on satellite coastal ocean colour observations, Mar. Pollut. Bull., 65, 210–223, https://doi.org/10.1016/j.marpolbul.2012.02.022, 2012. Silverman, J., Kline, D. I., Johnson, L., Rivlin, T., Schneider, K., Erez, J., Lazar, B., and Caldeira, K.: Carbon turnover rates in the One Tree Island reef: A 40-year perspective, J. Geophys. Res.-Biogeosci., 117, G03023, https://doi.org/10.1029/2012jg001974, 2012. Smith, S. V.: Phosphorus versus nitrogen limitation in the marine environment, Limnol. Oceanogr., 29, 1149–1160, 1984. Southwell, M. W., Weisz, J. B., Martens, C. S., and Lindquist, N.: In situ fluxes of dissolved inorganic nitrogen from the sponge community on Conch Reef, Key Largo, Florida, Limnol. Oceanogr., 53, 986–996, https://doi.org/10.4319/lo.2008.53.3.0986, 2008. Strickland, J. D. H. and Parsons, T. R.: A practical handbook of seawater analysis, Fisheries Research Board of Canada, Ottawa, Ontario, 1972. Stuhldreier, I., Sánchez-Noguera, C., Rixen, T., Cortés, J., Morales, A., and Wild, C.: Effects of seasonal upwelling on inorganic and organic matter dynamics in the water column of Eastern Pacific coral reefs, PLOS ONE, 10, e0142681, https://doi.org/10.1371/journal.pone.0142681, 2015. Szmant, A. M.: Nutrient enrichment on coral reefs: Is it a major cause of coral reef decline?, Estuaries, 25, 743–766, https://doi.org/10.1007/bf02804903, 2002. Tait, D. R., Erler, D. V., Santos, I. R., Cyronak, T. J., Morgenstern, U., and Eyre, B. D.: The influence of groundwater inputs and age on nutrient dynamics in a coral reef lagoon, Mar. Chem., 166, 36–47, https://doi.org/10.1016/j.marchem.2014.08.004, 2014. Tanaka, Y., Ogawa, H., and Miyajima, T.: Production and bacterial decomposition of dissolved organic matter in a fringing coral reef, J. Oceanogr., 67, 427–437, https://doi.org/10.1007/s10872-011-0046-z, 2011. Thibodeau, B., Miyajima, T., Tayasu, I., Wyatt, A. S. J., Watanabe, A., Morimoto, N., Yoshimizu, C., and Nagata, T.: Heterogeneous dissolved organic nitrogen supply over a coral reef: First evidence from nitrogen stable isotope ratios, Coral Reefs, 32, 1103–1110, https://doi.org/10.1007/s00338-013-1070-9, 2013. Thomas, F. I. M. and Atkinson, M. J.: Ammonium uptake by coral reefs: Effects of water velocity and surface roughness on mass transfer, Limnol. Oceanogr., 42, 81–88, 1997. Tribble, G. W., Sansone, F. J., and Smith, S. V.: Stoichiometric modeling of carbon diagenesis within a coral reef framework, Geochim. Cosmochim. 
Ac., 54, 2439–2449, https://doi.org/10.1016/0016-7037(90)90231-9, 1990. Valderrama, J. C.: The simultaneous analysis of total nitrogen and total phosphorus in natural waters, Mar. Chem., 10, 109–122, 1981. Vonk, J. A., Middelburg, J. J., Stapel, J., and Bouma, T. J.: Dissolved organic nitrogen uptake by seagrasses, Limnol. Oceanogr., 53, 542–548, https://doi.org/10.4319/lo.2008.53.2.0542, 2008. Wang, Y.-H., Dai, C.-F., and Chen, Y.-Y. C. L.: Physical and ecological processes of internal waves on an isolated reef ecosystem in the South China Sea, Geophys. Res. Lett., 34, L18609, https://doi.org/10.1029/2007gl030658, 2007. Wells, F., Hanley, J. R., and Walker, D. I.: Marine biological survey of the southern Kimberley, Western Australia, Western Australian Museum, 1995. Wyatt, A. S. J., Lowe, R. J., Humphries, S., and Waite, A. M.: Particulate nutrient fluxes over a fringing coral reef: Relevant scales of phytoplankton production and mechanisms of supply, Mar. Ecol. Prog. Ser., 405, 113–130, https://doi.org/10.3354/meps08508, 2010. Wyatt, A. S. J., Falter, J. L., Lowe, R. J., Humphries, S., and Waite, A. M.: Oceanographic forcing of nutrient uptake and release over a fringing coral reef, Limnol. Oceanogr., 57, 401–419, 2012. Zhang, Z., Lowe, R., Falter, J., and Ivey, G.: A numerical model of wave- and current-driven nutrient uptake by coral reef communities, Ecol. Model., 222, 1456–1470, https://doi.org/10.1016/j.ecolmodel.2011.01.014, 2011. Ziegler, S. and Benner, R.: Dissolved organic carbon cycling in a subtropical seagrass-dominated lagoon, Mar. Ecol. Prog. Ser., 180, 149–160, 1999.
2020-01-29 21:51:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 43, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6066646575927734, "perplexity": 14542.954241999802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251802249.87/warc/CC-MAIN-20200129194333-20200129223333-00167.warc.gz"}
https://artofproblemsolving.com/wiki/index.php?title=2002_AMC_12B_Problems/Problem_7&diff=39340&oldid=39339
# Difference between revisions of "2002 AMC 12B Problems/Problem 7" ## Problem The product of three consecutive positive integers is $8$ times their sum. What is the sum of their squares? $\mathrm{(A)}\ 50 \qquad\mathrm{(B)}\ 77 \qquad\mathrm{(C)}\ 110 \qquad\mathrm{(D)}\ 149 \qquad\mathrm{(E)}\ 194$ ## Solution Let the three consecutive integers be $x-1, x, x+1$; then $$(x-1)(x)(x+1) = x(x^2 - 1) = 8(x-1 + x + x+1) = 24x$$ Since $x \neq 0$, dividing both sides by $x$ gives $x^2 - 1 = 24$, so $x^2 = 25$, with the positive solution being $x = 5$. Then $4^2 + 5^2 + 6^2 = 77\ \mathrm{(B)}$. 2002 AMC 12B (Problems • Answer Key • Resources) Preceded by Problem 6 Followed by Problem 8 All AMC 12 Problems and Solutions
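As a quick check of the solution above:

```python
# Check: (4, 5, 6) satisfies the condition, and the answer is 77.
a, b, c = 4, 5, 6
assert a * b * c == 8 * (a + b + c)   # 120 == 8 * 15
print(a**2 + b**2 + c**2)             # 77 -> (B)
```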
2021-09-25 23:38:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24207612872123718, "perplexity": 4704.696584729707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057787.63/warc/CC-MAIN-20210925232725-20210926022725-00266.warc.gz"}
https://quizizz.com/admin/quiz/5da4ce5a50c7e7001a3e1885/area-and-circumference-of-a-circle
Area and Circumference of a Circle • Question 1 180 seconds Q. Which best represents the circumference of a circle with a radius of 12 inches? 37.68 inches 75.36 inches 62.14 inches 27.14 inches • Question 2 180 seconds Q. Which best represents the area of a circle with a radius of 8 inches? 200.96 square inches 50.24 square inches 192.14 square inches 31.4 square inches • Question 3 180 seconds Q. If the circumference of a circle is $36\pi$ units, what is the radius? 19 units 13 units 36 units 18 units • Question 4 30 seconds Q. The most popular size pizza at Pizza Parlor is a large pizza with a 12-inch radius. Which of the following best represents the circumference of this pizza? 62.84 inches 60 inches 75.36 inches 452.16 inches • Question 5 180 seconds Q. If my radius is 10 ft, what is my diameter? 20 ft 5 ft 31.4 ft 62.8 ft • Question 6 180 seconds Q. A cruise ship has a large, circular window. It has a diameter of 6 meters. What is the window's area? 31.25 m² 29.55 m² 30.36 m² 28.26 m² • Question 7 180 seconds Q. Does 7 inches represent the diameter or radius of the circle? Diameter • Question 8 180 seconds Q. What is the distance around a circle? Circumference Area Diameter • Question 9 900 seconds Q. Alex has trained his puppy, Popper, to jump through a ring. According to Alex's measurements and calculations, the ring has a diameter of 3.5 feet. What is the ring's circumference? 11.32 ft 12.25 ft 10.99 ft 8.75 ft • Question 10 180 seconds Q. What is the area of this circle?
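The answer choices above are consistent with approximating π as 3.14; a quick check of the computable questions:

```python
PI = 3.14  # the quiz's answer choices round pi to 3.14

print(round(2 * PI * 12, 2))        # 75.36  -> Q1/Q4: circumference, r = 12 in
print(round(PI * 8 ** 2, 2))        # 200.96 -> Q2: area, r = 8 in
print(36 / 2)                       # 18.0   -> Q3: C = 36*pi = 2*pi*r, so r = 18
print(2 * 10)                       # 20     -> Q5: diameter = 2 * radius
print(round(PI * (6 / 2) ** 2, 2))  # 28.26  -> Q6: area, d = 6 m
print(round(PI * 3.5, 2))           # 10.99  -> Q9: circumference, d = 3.5 ft
```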
2020-01-27 02:46:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5869506001472473, "perplexity": 6002.168224028178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694176.67/warc/CC-MAIN-20200127020458-20200127050458-00148.warc.gz"}
http://dkhoa.dev/post/ubuntu_on_windows/
Why Ubuntu on Windows? For programming purposes, I prefer Linux to Windows. However, Windows is really good for entertainment and office work. Besides, Microsoft Office is the best office suite I have ever experienced, and OneNote is one of my favorite note-taking applications. Not to mention my beloved Blizzard games such as Diablo and Starcraft, and other video games. One solution for integrating Windows and Linux on one machine is to install a dual-boot setup. However, I am tired of switching between two OSs. In addition, the systems have to be configured so that files can easily be transferred between the two different partition formats. Another workaround is to run Windows or Linux in a virtual machine. Since I usually play video games, I could not use Windows as the guest. Running Linux virtually seemed promising; unfortunately, my laptop is not powerful enough to run a virtual machine smoothly. Finally, M$ came up with a new feature in Windows 10 named 'Windows Subsystem for Linux'. Surprisingly, it works flawlessly except for several minor issues. Settings 1. Turn the Windows Subsystem on: find Windows features and select Windows Subsystem for Linux, wait for the installation, and restart. 2. Open the Windows Store and install Ubuntu. I also saw OpenSUSE on the store. Hopefully, more distros will be adapted in the future, especially Arch Linux/Manjaro, which is my favorite. 3. After working with Ubuntu and Tmux in cmder, I found that wsltty is the best tool for avoiding broken fonts and arrow-key issues. Besides, it runs faster than cmder, although cmder has many useful features. There are several tips: • Change the cursor to a block. • Turn off the mouse support feature in vim (:set mouse=) in order to copy text from Windows to vim. • Copy from vim to Windows applications: 1. Install xsel/xclip on Ubuntu. 2. Install Xming. 3. Set export DISPLAY=:0 in the bashrc or zshrc. Installing ArchLinux on WSL After several months of waiting for a better solution, I finally found ArchWSL, which is easy to install and highly configurable. In addition, it turns out that wsltty is not the tool I really wanted, since there are still some issues with Unicode fonts and delay. Now I am using xfce4-terminal with the help of Xming. With a few steps we are able to run xfce4-terminal directly from Windows: 1. Set Xming to run on startup. 2. Use the following command in Run to execute xfce4-terminal: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -windowstyle hidden -Command "iex \"path\to\arch.exe run DISPLAY=:0 xfce4-terminal\"" Windows Defender It is so annoying that I have to turn it off. Somehow, the program always scans the Linux folder, and eventually everything in the Subsystem runs slowly. A better workaround is to add the Linux folder to the exclusion list of Windows Defender.
2019-05-21 00:36:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2931114733219147, "perplexity": 5084.067105456549}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256184.17/warc/CC-MAIN-20190521002106-20190521024106-00285.warc.gz"}
https://tex.stackexchange.com/questions/85734/how-to-hold-all-lines-in-a-defined-environment?noredirect=1
# How to hold all lines in a defined environment? [duplicate] \def\Grammercaption{\textbf{Grammer}} \def\grammer{\bgroup\ignorespaces\Grammercaption\fontsize{10}{13pt}\selectfont} \def\endgrammer{\egroup} I want all lines of text within this environment to stick together, so that no page break is issued inside it. ## marked as duplicate by lockstep, Mensch, clemens, Martin Schröder, Werner Feb 15 '13 at 18:47 • You might be after something like \newenvironment{grammer}{\noindent\minipage{\linewidth}\Grammercaption\fontsize{10}{13pt}\selectfont}{\endminipage} that you can then use as \begin{grammer}...\end{grammer}. – Werner Dec 6 '12 at 6:43 • I also recommend using the minipage environment. You may have to tweak the \parindent setting inside the minipage environment because minipage sets it to 0pt. – user10274 Dec 6 '12 at 6:50
2019-08-20 16:06:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7496222853660583, "perplexity": 4805.021135708955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315551.61/warc/CC-MAIN-20190820154633-20190820180633-00420.warc.gz"}
http://chalkdustmagazine.com/regulars/crossnumber/prize-crossnumber-issue-03/
# Prize crossnumber, Issue 03 Be in for a chance to win a £100 goody bag if you can solve our fiendish crossnumber Our original prize crossnumber is featured on pages 44 and 45 of Issue 03. Clarification: In 39D, the integers should be positive and distinct. Clarification: The answer to 40A is not 5362, as some websites claim. 5362 is the number of positions that could face you after 3 moves, not the number of ways to play 3 moves. ### Rules • Although many of the clues have multiple answers, there is only one solution to the completed crossnumber. As usual, no numbers begin with 0. Use of Python, OEIS, Wikipedia, etc. is advised for some of the clues. • One randomly selected correct answer will win a £100 Maths Gear goody bag. Three randomly selected runners-up will win a Chalkdust t-shirt. The prizes have been provided by Maths Gear, a website that sells nerdy things worldwide, with free UK shipping. Find out more at mathsgear.co.uk • To enter, submit the sum of the across clues via this form by 22 July 2016. Only one entry per person will be accepted. Winners will be notified by email and announced on our blog by 30 July 2016. ### Crossnumber Crossnumber #3, set by Humbug ### Clues #### Across • 1. A multiple of 999. (7) • 5. Half the difference between 45A and 1A. (7) • 7. An integer. (3) • 9. A multiple of 41A. (8) • 12. 4D multiplied by 43D. (4) • 13. 43D less than 7A. (3) • 14. 37A less than 21D. (3) • 15. A number whose name includes all five vowels exactly once. (5) • 16. The sum of the digits of 8D. (2) • 18. An anagram of 24,680. (5) • 21. A non-prime number whose highest common factor with 756 is 1. (3) • 24. The sum of the digits of 29D. (2) • 25. The product of four consecutive Fibonacci numbers. (6) • 26. The product of 14A and 21D. (6) • 29. The largest known $n$ such that all the digits of $2^n$ are nonzero. (2) • 30. The HTTP error code for "I'm a teapot". (3) • 32. A prime number that is the sum of 25 consecutive prime numbers. (5) • 35. The number of different nets of a cube (with reflections and rotations being considered as the same net). (2) • 36. A Fibonacci number. (5) • 37. When written in a base other than 10, this number is 256. (3) • 39. Why is 6 afraid of 7? (3) • 40. The number of ways to play the first 3 moves (2 white moves, 1 black move) in a game of chess. (4) • 41. A multiple of 719. (8) • 42. A prime number that is two less than another prime number. (3) • 44. Half of 29D. (7) • 45. The sum of 6D, 8D, 31D, 37D and 43D. (7) #### Down • 1. A power of 2. (7) • 2. A palindrome. (8) • 3. This number's cube root is equal to its number of factors. (5) • 4. An odd number. (2) • 5. A square number. (5) • 6. 5D multiplied by 1 less than 27D. (7) • 8. An odd number. (6) • 10. A multiple of 7. (2) • 11. The number of pairs of twin primes less than 1,000,000. (4) • 17. The number of factors of 26A. (3) • 19. Greater than 30A. (3) • 20. Each digit of this number is a prime number and larger than the digit before it. (3) • 21. 37A more than 14A. (3) • 22. A multiple of 27. (3) • 23. A number $n$ such that $(n-1)!+1$ is divisible by $n\hspace{1pt}^2$. (3) • 24. The smallest number that cannot be changed into a prime by changing one digit. (3) • 27. A three digit number. (3) • 28. A multiple of 34D. (8) • 29. A multiple of 7. (7) • 31. A prime number in which three different digits each appear twice. (6) • 33. 1,000,006 less than 6D. (7) • 34. 
The smallest number that is non-palindromic when written in binary, but whose square is palindromic when written in binary. (4) • 37. The square of this number only contains the digits 1, 2, 3 and 4. (5) • 38. A cube number. (5) • 39. The largest number that cannot be written as the sum of positive, distinct integers, the sum of whose reciprocals is 1. (2) • 43. The smallest number that is twice the sum of its digits. (2)
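As the rules note, Python helps with several clues. A minimal brute-force check of 43D (the same loop-and-test pattern generalizes to many of the other clues):

```python
# Brute-force check of 43D: the smallest n equal to twice its digit sum.
def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(n))

print(next(n for n in range(1, 1000) if n == 2 * digit_sum(n)))  # 18
```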
2018-08-21 11:59:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35138222575187683, "perplexity": 2804.501048611502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221218122.85/warc/CC-MAIN-20180821112537-20180821132537-00695.warc.gz"}
https://wildtopology.com/2014/06/08/homomorphisms-from-the-harmonic-archipelago-group-to-finite-groups/
## Homomorphisms from the harmonic archipelago group to finite groups This post is a brief application of a result discussed in the last post about the existence of odd ways to map the fundamental group of the Hawaiian earring $\mathbb{H}$ onto an arbitrary finite group $G$: Theorem 1: Let $G$ be any non-trivial finite group and $\ell_n$ be a loop going once around the n-th circle of the Hawaiian earring in the clockwise direction. There are uncountably many surjective group homomorphisms $\pi_1(\mathbb{H})\to G$ mapping $g_n=[\ell_n]$ to the identity element for every $n\geq 1$. Since the kernel of each of these homomorphisms $\pi_1(\mathbb{H})\to G$ contains the infinite free group $F_{\infty}=F(g_1,g_2,...)$ generated by the classes $[\ell_n]$, there is clearly a connection to the harmonic archipelago $\mathbb{HA}$. Harmonic Archipelago Let's fix a non-trivial finite group $G$ and show that Theorem 1 also holds for the harmonic archipelago. Recall that the canonical inclusion $\mathbb{H}\to\mathbb{HA}$ induces a surjective homomorphism $\phi:\pi_1(\mathbb{H})\to\pi_1(\mathbb{HA})$ (see Corollary 2 of this post). Moreover, $\ker\phi$ is the conjugate closure of the free group generated by the elements $g_{n}g_{n+1}^{-1}$, $n\geq 1$. Of course, we have $g_{n}g_{n+1}^{-1}\in F_{\infty}$. So for every surjective homomorphism $f:\pi_1(\mathbb{H})\to G$ satisfying $f(F_{\infty})=1$, the inclusion $\ker\phi\subseteq\ker f$ holds and we get a unique surjective homomorphism $\overline{f}:\pi_1(\mathbb{HA})\to G$ such that $f=\overline{f}\circ\phi$. Altogether, $\phi:\pi_1(\mathbb{H})\to\pi_1(\mathbb{HA})$ induces an injection $\zeta=Hom(\phi,G):Hom(\pi_1(\mathbb{HA}),G)\to Hom(\pi_1(\mathbb{H}),G)$ given by pre-composition with $\phi$, and we have $\zeta(\overline{f})=f$ when $f:\pi_1(\mathbb{H})\to G$ is one of the (uncountably many) surjective homomorphisms guaranteed to exist by Theorem 1. This is enough to serve as the proof of the main theorem of this post. Theorem 2: Let $G$ be any non-trivial finite group and $\ell_n$ be a loop going once around the n-th circle of $\mathbb{H}$ viewed as a subspace of $\mathbb{HA}$. There are uncountably many surjective group homomorphisms $\pi_1(\mathbb{HA})\to G$ mapping $[\ell_n]$ to the identity element for every $n\geq 1$. Certainly then, we have uncountably many (overall) homomorphisms from $\pi_1(\mathbb{HA})$ to $G$. Corollary 3: For any non-trivial finite group $G$, the set of group homomorphisms $Hom(\pi_1(\mathbb{HA}),G)$ is uncountable. Theorem 2 and Corollary 3 are in stark contrast to the fact that the only homomorphism $\pi_1(\mathbb{HA})\to\mathbb{Z}$ to the additive group of integers is the trivial homomorphism (see Theorem 1 of The harmonic archipelago group is not free).
2021-10-27 09:50:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626128077507019, "perplexity": 300.54241535596844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00561.warc.gz"}