https://shitpost.plover.com/2018/05/02/
# Content-Type: text/shitpost

Subject: Non-transitive relations
Date: 2018-05-02T18:12:47
Newsgroup: talk.mjd.math.non-transitive-relations
Message-ID: <64b0f1e9b2f83316@shitpost.plover.com>
Content-Type: text/shitpost

The phases of an eight-phase dual-ring intersection provide an interesting example of a non-transitive relation. Let !!a!! and !!b!! be phases, and write !!a\sim b!! when the phases are compatible, which means that traffic making those movements will not collide. This relation is reflexive and symmetric, but not transitive, because for example we have !!\Phi6\sim \Phi1!! and !!\Phi1\sim \Phi5!! but not !!\Phi6\sim\Phi5!!. A mathematician would have numbered the phases differently, but even with the standard numbering the rule is not too complicated:

$$a\sim b \text{ when any of these holds:} \begin{cases} a\in\{\Phi1, \Phi2\}\text{ and }b\in\{\Phi5, \Phi6\},\text{ or} \\ a\in\{\Phi3, \Phi4\}\text{ and }b\in\{\Phi7, \Phi8\},\text{ or} \\ a = b \end{cases}$$

Path: you!your-host!walldrug!epicac!thermostellar-bomb-20!skordokott!berserker!plovergw!shitpost!mjd
Date: 2018-05-02T16:00:30
Newsgroup: rec.food.de-lipogramization
Message-ID: <feda84ee2930125d@shitpost.plover.com>
Content-Type: text/shitpost

Soon, perhaps tomorrow, I will post about lipograms again, and this is my pre-commitment that I will not try to turn the post into a lipogram. Also I will not try to go the other direction and write it to omit all the other vowels. I was tempted to end this note with “Yow! Am I having fun?” but I am not going to do it. That last phrase was e-less but it was by accident. But now I want to go back and fix — arrgh I am doing it again! —ARRGH — thE rEst of thE SENTENCE. HA, THERE. I WILL RESIST. EEEEEEEEEEEEEEEEEEEEEEEeeeeeeeeeeeeeeeeee e e e
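The compatibility rule above is small enough to check mechanically. A quick sketch (not from the original post; the rule is symmetrized, since the relation is stated to be symmetric):

```python
# Phases are written 1..8 for Φ1..Φ8. Each pair below lists two groups
# whose members are mutually compatible, per the displayed case rule.
PAIRS = [({1, 2}, {5, 6}), ({3, 4}, {7, 8})]

def compatible(a: int, b: int) -> bool:
    """True when phases a and b are compatible (a ~ b)."""
    if a == b:  # reflexive case: a = b
        return True
    return any((a in g1 and b in g2) or (a in g2 and b in g1)
               for g1, g2 in PAIRS)

# Reflexive and symmetric, but not transitive:
assert compatible(6, 1) and compatible(1, 5) and not compatible(6, 5)
```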
http://math.stackexchange.com/questions/136422/existence-and-uniqueness-for-separable-odes-where-y-diverges
# Existence and uniqueness for separable ODEs where $y'$ diverges

In this reference the author states what he calls "the theory of local solutions" for separable ordinary differential equations of the form $\frac{dy}{dx} = \frac{f(x)}{g(y)}$. He asserts that it suffices for $f$ and $g$ to be continuous and not to vanish simultaneously in a rectangular area $R$ of the plane in order for a unique solution to exist given an initial condition $(x_0,y_0)\in R$, but I have difficulty interpreting his claim. He does not specify what the domain of the solution will be, but since he talks about "local" solutions I believe he is claiming that for each $(x_0,y_0)\in R$ there exist an open set $I$ of $\mathbb{R}$, contained in the projection of $R$ on the $x$-axis (and containing $x_0$), and a differentiable function $\phi : I \rightarrow \mathbb{R}$ such that

1. $\phi(x_0) = y_0$
2. $\phi'(x) = f(x)/g(\phi(x))$ for each $x \in I$
3. $\phi : I \rightarrow \mathbb{R}$ is unique

I do not understand his requirement that $f(x)$ and $g(y)$ should not vanish simultaneously in $R$. I think that if $g(y_0)=0$, even if $f(x_0) \neq 0$, there should be no solution passing through $(x_0,y_0)$, because the derivative of any would-be solution would be undefined there. Should the text be emended to exclude the possibility of $g(y_0)=0$?

Edit: I should add that I also don't understand well what is meant by uniqueness, given that we are talking of a local solution and the domain of the solution is somewhat arbitrary.

Since $f(x)$ depends only on $x$ and $g(y)$ only on $y$, I think that what the author wants to say is that $f(x)\ne0$ and $g(y)\ne0$ for all $(x,y)\in R$. Otherwise, as you note, the conclusions would not hold. Think of the simplest example $y'=1/(2\,y)$; $f(x)=1$ never vanishes, and $g(y)=2\,y$ vanishes only at $y=0$. The solution with $y(x_0)=y_0>0$ is $y=\sqrt{x+y_0^2-x_0}$, defined on $[x_0-y_0^2,\infty)$. The case $y_0<0$ is similar.
If $y_0=0$, then there are two solutions, $y=\pm\sqrt{x-x_0}$, defined on $[x_0,\infty)$, which are not differentiable at $x=x_0$.

So if we assumed that $f(x) \neq 0$ and $g(y) \neq 0$ for all $(x,y)\in R$, does the result (local existence and uniqueness of the solution) hold? And if we assumed just that $y(x_0)=y_0 \neq 0$? – Cauchy Apr 24 '12 at 23:10

Let $G(y)=\int_{y_0}^yg(s)\,ds$ and $F(x)=\int_{x_0}^xf(s)\,ds$. Then the solution of $y'=f(x)/g(y)$, $y(x_0)=y_0$ is, in implicit form, $G(y)=F(x)$. If $y_0\ne0$, then $G'(y_0)=g(y_0)\ne0$. The implicit function theorem implies that $G$ is invertible on an open interval around $y_0$, and we obtain the solution in the form $y(x)=G^{-1}(F(x))$. So the answer to your question is yes, it is enough to have $g(y_0)\ne0$ to guarantee the existence and uniqueness of the solution. – Julián Aguirre Apr 25 '12 at 10:39
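The worked example $y'=1/(2y)$ is easy to check numerically. A small sketch (not from the original thread) comparing a central-difference derivative of $y=\sqrt{x+y_0^2-x_0}$ against $1/(2y)$:

```python
import math

# Example from the answer: y' = 1/(2y), y(x0) = y0 > 0, whose solution
# is y = sqrt(x + y0**2 - x0), defined on [x0 - y0**2, oo).
def y(x, x0=0.0, y0=1.0):
    return math.sqrt(x + y0 ** 2 - x0)

h = 1e-6  # step for the central-difference approximation of y'
for x in (0.5, 1.0, 2.0):
    numeric = (y(x + h) - y(x - h)) / (2 * h)
    # The solution satisfies the ODE at each interior point:
    assert abs(numeric - 1.0 / (2.0 * y(x))) < 1e-6

# At y0 = 0 the implicit relation y**2 = x - x0 yields two branches
# y = +/- sqrt(x - x0), matching the non-uniqueness described above.
```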
https://www.semanticscholar.org/paper/CYCLIC-TRANSIT-PROBABILITIES-OF-LONG-PERIOD-PLANETS-Kane-Horner/abed30e8018f6c7b5428f72e42652ff9002b5573
# CYCLIC TRANSIT PROBABILITIES OF LONG-PERIOD ECCENTRIC PLANETS DUE TO PERIASTRON PRECESSION

@article{Kane2012CYCLICTP, title={CYCLIC TRANSIT PROBABILITIES OF LONG-PERIOD ECCENTRIC PLANETS DUE TO PERIASTRON PRECESSION}, author={Stephen R. Kane and Jonathan Horner and Kaspar von Braun}, journal={The Astrophysical Journal}, year={2012}, volume={757}, pages={105} }

Published 2012 · Physics · The Astrophysical Journal

The observed properties of transiting exoplanets are an exceptionally rich source of information that allows us to understand and characterize their physical properties. Unfortunately, only a relatively small fraction of the known exoplanets discovered using the radial velocity technique are known to transit their host due to the stringent orbital geometry requirements. For each target, the transit probability and predicted transit time can be calculated to great accuracy with refinement of the…
https://indico.cern.ch/event/470460/contributions/2169088/
# ATTRACT TWD Symposium: Trends, Wishes and Dreams in Detection and Imaging Technologies

30 June 2016 to 1 July 2016, Other Institutes, Europe/Zurich timezone

## The project 2-SPaCE: 2-dimensional materials for Single Photon CountErs

Not scheduled, 10m, Auditorium & Conference Room (Other Institutes)

### Speaker

Alessandra Di Gaspare (INFN - National Institute for Nuclear Physics)

### Description

The far-infrared region (wavelengths in the range 10 $\mu$m – 1 mm) is one of the richest areas of spectroscopic research, encompassing the rotational spectra of molecules and the vibrational spectra of solids, liquids and gases. Both basic research and applications in this spectral region are hampered by the absence of sensitive detectors, and for certain applications an ultimately high sensitivity, reaching the photon-counting level, is indispensable. For instance, single-photon detection in the sub-THz spectral range may allow direct detection of dark matter in a region of parameter space difficult to reach with other techniques, but particularly interesting from a theoretical point of view (axions, ALPs, dark photons, etc.). However, in the THz range photon energies are far smaller (h$\nu$ < 124 meV for $\lambda$ > 10 $\mu$m) and single-photon detection is no longer trivial, in marked contrast to the visible and near-infrared regions (wavelengths shorter than about 1.5 $\mu$m), in which single-photon counting is possible. Despite recent efforts to improve the available detector technologies, attainable sensitivities are currently far below the level of single-photon detection. In the last decade, a variety of novel detection schemes have been proposed [1-3]. Among them, semiconductor quantum devices have been used as single-photon detectors. The experimentally achieved noise equivalent power (NEP), less than 1 × 10$^{-19}$ W Hz$^{-1/2}$, is several orders of magnitude lower than that of typical state-of-the-art detectors operating in the THz range.
Such ultra-high sensitivity, reaching the single-photon detection level, as well as the ultra-broad dynamic range, is a consequence of the unconventional detection mechanism in these nanometric phototransistors. The unique physical properties of graphene, like high carrier mobility, robustness and stability, make this material of potential use for several forefront applications in detector R&D [4,5]. Beyond graphene, there are many other 2D materials that, due to the confinement of electrons and to the lack of strong interlayer interactions, usually exhibit optical and electronic properties different from their analogous 3D systems [6]. The size-dependent properties can be exemplified by molybdenum disulphide (MoS$_2$), which is semiconducting with an indirect bandgap as a bulk material and becomes a direct-gap semiconductor in the 2D form. The functional flexibility offered by 2D atomic crystals is considered to be a key property for the next device generation. The proposed project (2-SPaCE, 2-dimensional materials for Single Photon CountErs) is aimed at the development of a technological platform for advanced detectors based on 2D materials, graphene and MoS$_2$, to be employed as single-photon counters in the sub-THz and THz regions. The main goal of the project will be the investigation of novel detector schemes in which 2D materials may play a potentially revolutionary role, by designing and fabricating proof-of-concept devices. Different device architectures will be explored to achieve efficient detection in the spectral range of interest, by combining the different key concepts generally considered for a future device generation based on this class of materials:

- Many of the appealing properties of 2D materials arise from the combination of strong light-matter interaction and electronic transport dominated by hot-carrier effects and collective interactions, like the plasmonic effects that can be activated in the FET channel [7].
Nowadays, plasmonics in 2DES is one of the key concepts for the realization of novel optical devices working in different spectral ranges, from THz to visible. In particular, spectroscopic studies of the hydrodynamic response of a 2D electron plasma confined in the micrometric transistor channel have demonstrated that 2D-based FETs can be used to realize frequency-tunable THz detectors based on plasmonic micro-cavities [8].

- The remarkable electrodynamic and thermal properties of graphene, at present very well understood at room temperature [9], are much less explored at low temperature [10]. However, several reports indicate the possibility of reaching very high sensitivity as both a bolometer and a calorimeter [11].

For all the proposed 2D-based devices, one crucial point will be the integration of the 2D materials into the device technology needed for the addressed detector type. With the aim of fabricating detectors using 2D layered materials, we will propose and develop novel solutions for the microfabrication of future electronic devices, contributing to the advancement of 2D materials science and device technology, whose foreseen applications extend beyond the field of dark matter studies. The focus will be the study of the properties that could make 2D layered materials usable in detectors, but the results are expected to be relevant for the knowledge of 2D materials in general. To date, CVD has proved to be a suitable method to grow graphene of controlled quality [12] over large areas, at least at the R&D level. However, for future device applications, reliable solutions for the integration of the graphene platform at the wafer-scale level are still to be found; they are among the objectives of the "Graphene Flagship" started in 2013. Beyond graphene, controlled synthesis and fabrication solutions for other 2D-based devices are still lacking or just emerging, hence a fundamental understanding of the processes involved and intensive work at the R&D level are still required.
Among the goals of the 2-SPaCE project is the implementation of new equipment and the strengthening of pre-existing facilities for the growth and analysis of graphene and MoS$_2$. Graphene will be grown at the Laboratori Nazionali di Frascati (LNF) of the INFN (the host institution), using the expertise and facilities developed within a research program funded by the host institution in the 2014-2015 period and involving the proponents. As the 2-SPaCE project is concerned with the fabrication and study of novel devices, the final results will consist of the demonstration of the proposed detector concepts. The final deliverables could be either experimental proof-of-principle prototypes or ready-to-use demonstrators, together with material synthesis methods, fabrication process solutions and experimental results. The project aim is to demonstrate the physical principles and to indicate pathways towards optimization of the proposed concepts.

References

[1] Jian Wei et al, Nature Nanotechnology 3, 496–500 (2008)
[2] Y. Kajihara et al, J. Appl. Phys. 113, 136506 (2013)
[3] S. Komiyama et al, IEEE Journal of Selected Topics in Quantum Electronics 17(1), 54–66 (2011)
[4] T. Muller et al, Nature Photonics 4, 297–301 (2010)
[5] F. Koppens et al, Nature Nanotechnology 9, 780–793 (2014)
[6] S. Z. Butler et al, ACS Nano 7(4), 2898–2926 (2013)
[7] A. Grigorenko et al, Nature Photonics 6, 749–758 (2012)
[8] V. Giliberti et al, Phys. Rev. B 91, 165313 (2015)
[9] A. A. Balandin et al, Nature Mater. 10, 569 (2011)
[10] Y. M. Zuev et al, Phys. Rev. Lett. 102, 096807 (2009)
[11] Xu Du et al, Graphene 2D Mater. 1, 1–22 (2014), DOI 10.2478/gpe-2014-0001
[12] C. Mattevi et al, J. Mater. Chem. 21, 3324–3334 (2011)

### Primary author

Alessandra Di Gaspare (INFN - National Institute for Nuclear Physics)

### Co-authors

Claudio Gatti (Istituto Nazionale Fisica Nucleare Frascati (IT))
Gianluca Lamanna (Istituto Nazionale Fisica Nucleare Frascati (IT))
Roberto Cimino (LNF-INFN)
Dr Rosanna Larciprete (CNR-Istituto dei Sistemi Complessi (ISC), TorVergata, Rome, IT)
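The photon-energy figure quoted in the description (h$\nu$ < 124 meV for $\lambda$ > 10 $\mu$m) follows from E = hc/λ. A standalone check, not part of the original abstract:

```python
# Photon energy E = h*c/lambda, expressed in meV, using CODATA constants.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
e = 1.602176634e-19  # elementary charge, J per eV

def photon_energy_meV(wavelength_m: float) -> float:
    return h * c / wavelength_m / e * 1e3

# At 10 um the photon energy is about 124 meV; longer wavelengths
# (toward the sub-THz range) carry even less energy per photon.
assert abs(photon_energy_meV(10e-6) - 124.0) < 1.0
assert photon_energy_meV(1e-3) < photon_energy_meV(10e-6)
```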
https://electronics.stackexchange.com/questions/106870/programming-an-atmega328-with-arduino-bootloader-via-a-ftdi-usb-serial-adapter
I would like your advice on using the Arduino IDE and avrdude to program an ATmega328 which is preloaded with an Arduino bootloader. I am using a USB to TTL-serial breakout board based on an FTDI chip; I bought an "FTDI Basic Program Downloader USB to TTL FT232 for Arduino ACC" off eBay. I followed exactly this setup:

1. Connect the DTR pin to pin 1 on the ATmega through the 0.1uF capacitor.
2. Connect the RX pin to pin 3 on the ATmega (TX).
3. Connect the TX pin to pin 2 on the ATmega (RX).
4. Connect the 5V pin to the 5V rail of the board to supply the board with power from the USB interface.
5. Connect the GND pin to the GND rail of the board.

When I try to upload a sketch, I get:

avrdude: stk500_getsync(): not in sync: resp=0x00

Here's what I have tried so far:

Connecting the header pins of the Arduino "shield" to the chip on the breadboard. In doing so I am using the on-board ATmega16U2 to send the program. Result: flawless! The program boots and everyone is happy.

Back to the FTDI breakout board. I switched RX and TX (never know?); still nothing, so I switched back to the initial TX-RX configuration. The DTR pin out of the FTDI board is successfully resetting the ATmega328 for sure, as it goes through its magical blinking sequence when I try to upload something. I've also tested whether the ATmega328 can send serial info through the breakout board to my computer. It can.

I've noticed a few interesting things: both the TX and RX lines are always at 5 V. I know this because if I connect an LED in parallel with the lines, they light up. But the little tiny LEDs on the breakout board labeled TX and RX are not always on... why is that? Could that explain my problem? If you'd like any more info, let me know and I'll get it for you.

EDIT:

Hello again. OK, I've added a 100uF electrolytic capacitor along with a 0.1uF one between 5V and GND. This isn't the suggested 47uF and 0.1uF, but I guess it'll help filter nonetheless. (While I'm at it: what would that change?) I've replaced my 1k pull-up resistor with a 10k one. I still am not able to upload a sketch and get the same error. The DTR line triggers a reset and I still get serial output (I have a sketch on it that sends incrementing integers over serial every second). Also interestingly (although I was still unable to send a sketch before this happened), any LED I plug into pin 13 (aka 19) is now much dimmer... maybe that ultra-bright white LED I had earlier was pulling too much current with a 270 ohm resistor... -_-

By popular demand, here are pictures of my board and the USB to FTDI breakout board I am using.

• TX and RX are generally pulled (or driven) high. I don't think your problem is there... – bitsmack Apr 17 '14 at 5:48
• Any specific reason you went with serial programming instead of ISP? – Ignacio Vazquez-Abrams Apr 17 '14 at 5:54
• To expand on bitsmack's comment: the TX and RX lines will default to high and briefly pulse low for data. That can be seen on a scope, but not on a meter. You can see it if you connect an LED+resistor from 5V to TX or RX (anode to 5V). The breakout board's TX and RX lights are not directly wired to the actual TX/RX wires. – gwideman Apr 17 '14 at 7:34
• Ignacio: I believe the ISP capability of this board is only a stopgap -- I think using bitbang. It's not a proper SPI I/O. Also, programming via the bootloader avoids accidentally programming things that ISP can program that are usually unintended. – gwideman Apr 17 '14 at 7:36
• Anyway, add a resistor for those LEDs; they are not going to last long... – Vladimir Cravero Apr 17 '14 at 8:37

Couple of suggestions. Looking at the picture, you appear to be following here:

There is almost certainly an error which will cause Reset not to work as expected. I think the Reset circuit is supposed to mimic the one in an actual Arduino, which has DTR connected via a 100nF (0.1uF) capacitor to /RESET (pin 1).
The actual Arduino (e.g. Duemilanove) schematic shows a 10k pullup resistor from pin 1 to +5V. This will force /RESET to become deasserted some brief time after being asserted by DTR going low. However, on the breadboard in the photo, the 10k resistor is connected to the wrong end of the cap, attaching to DTR. This will have no effect, and means that the /RESET recovery will be due only to the built-in pullup, which is purportedly a higher value, like 30 to 60k, and will thus take 3 to 6 times as long to reset and be more susceptible to noise. Moving the 10k pullup resistor to pin 1 may solve the problem outright.

Another thing to check is whether you are actually getting 5V out of the 5V pin on the FTDI adapter. Many of those boards come with jumper headers or unsoldered jumper pads to allow you to wire the "Power out" pin to either 5V or the 3.3V regulator on the FTDI chip.

And add resistors (say 330 to 1k) in series with each of your LEDs to avoid causing excessive load. The photo appears to show an LED wired directly across 5V-GND, and another from Vcc to D13. The first one by all rights should burn out immediately or cause some other havoc. However, it's possible these are LEDs with built-in series resistors.

It would also be good to add parallel 0.1uF and 47uF capacitors across GND and VCC near the 328, to give it a clean supply.

• The LEDs in the higher-resolution version of that photo do seem to have resistors soldered to them under red heat-shrink tubing. – RedGrittyBrick Apr 17 '14 at 10:08
• Ah, right, the first photo with the partially-completed breadboard shows that more distinctly. Thanks for pointing that out. Highly misleading for "young players" though! – gwideman Apr 17 '14 at 10:10
• Thanks for the help. I did have resistors for the LEDs on my board. Sorry for not posting my actual board; I had no camera on me at the time I posted. I didn't even notice the capacitor being wired up inappropriately in the picture!
That said, I initially had a look at the actual Arduino schematic, so I knew not to put the cap in series with the node connecting the pull-up resistor and the cap-DTR, but I used a 1k pull-up resistor rather than a 10k. Could that be the problem? I'll test it out, and add extra caps for GND and VCC. Thanks again. – Ethienne Apr 17 '14 at 17:44

• Please post a sketch of what you actually did, because it's not clear from your verbal description that what you did is workable. – gwideman Apr 17 '14 at 23:19
• Yup, will do the moment I get a bit of free time, if the corrections don't work, that is. – Ethienne Apr 18 '14 at 0:34

I also came here because my standalone, bootloaded ATmega328 didn't want to load my program and gave that avrdude error. I solved my situation and would like to share it, so maybe it will help others. What eventually turned out to be the issue was that the ATmega328 chip was not properly 'clicked' into the breadboard. When I initially placed the ATmega chip on the breadboard directly, it made a 'click' sound. It really seemed like it was in properly: not loose, couldn't go any deeper, wouldn't fall out when you shake it upside down, etc. But when I tried loading a program, I got the avrdude message. While uploading, some pins were flashing, even the LED I hooked up to pin 19 (pin 13 on the Arduino board). I bought the breadboard in the Netherlands, so I can't say if it is Chinese or not, but apparently it also has some deep connectors and difficulty connecting with the pins of DIP chips. Luckily I had a chip socket (IC Socket 28 Pin DIP) which I placed between the ATmega chip and the breadboard. My problem was immediately solved, and the happiness returned. I used an FTDI Basic to program it, almost exactly like the picture in the link Gwideman gave above. Only the pull-up resistor of the RESET is incorrectly connected in that picture.
Like Gwideman correctly stipulates, the resistor needs to go to pin 1, and not to the DTR side of the cap. In the Arduino IDE I have 'Arduino Uno' selected. Hope it helps somebody.

• Welcome to EE.stackexchange! You wrote a nice explanatory answer, keep up the good work! – WalyKu Feb 19 '15 at 13:15

I had a very similar issue, and it was caused by lousy connections on my breadboard between the FTDI's DTR --- 0.1uF ceramic cap --- ATmega328P's RESET. The issue was solved when I inserted the male wires into the same holes as the legs of the cap, so that the connection was stronger. I was able to upload sketches, but it was still not very reliable. When I exchanged the 0.1uF cap for a 10uF ceramic cap, it seemed to fix the issue completely.

EDIT: Another issue causing similar behavior was using a "USB overcurrent protector" module between my laptop and the FTDI module. So I stopped using it, although my previous laptop was burned by one of my wirings :o( Together with a tighter breadboard, uploading sketches works like a charm.

The only thing I can think of when I see this error is that the wrong board is selected in the Arduino IDE. You probably need to select a board that uses the ATmega328P.

• I do have the UNO selected. – Ethienne Apr 17 '14 at 17:40

I came here seeking answers to a very similar (seemingly identical) problem -- a 328P in a breadboard, already with a bootloader & code on it, but not responding to programming attempts via a 'cheap' eBay FTDI breakout, though the reset appeared to be working. Many different possibilities for error. After verifying my circuit was the same and double-checking my IDE setup, the answer by matthijs triggered a thought -- I had "Arduino Uno" selected in the Board menu, so presumably the IDE (or avrdude) was attempting to reach the ATmega16U2 on the Uno (which itself is emulating an STK500, IIRC).
From my attempts (eventually successful) to bootstrap the bootloader previously via a donor Uno, I had downloaded a board config "ATmega328 on a Breadboard" -- which I tried, and it worked. Other non-Uno board configs such as "Arduino Nano w/ ATmega328" and "Duemilanove w/ ATmega328" also worked. You can now buy USBasp programmers for ATmega chips and Arduinos for $5. They connect to the 6-pin ICSP programming port on Arduinos but have a 10-pin connection for other devices, like the bare ATmega you have here. The Arduino software supports USBasp programmers, as do lots of 3rd-party tools. They are electrically very similar to the USB-TTL converter, but they are explicitly designed to program bare chips, and in-circuit, using popular software. See: http://www.ebay.co.uk/itm/131743272473 I have purchased one to re-flash an Arduino Mega 2560 clone which is known to have bad watchdog timing. Later HEX images fix that, but you must use the 6-pin CPU programming port on the Arduino. regards Paul BSc Elec Eng I have landed on this StackExchange question looking for a solution to the same problem. I learnt a lot from reading all the answers, but none of them appeared to solve my case. Eventually I fixed the problem and thought of sharing the solution. I purchased an AVR 328P + crystal + 22pF capacitors kit on Amazon as a "barebone Arduino kit" to try it out. The vendor claimed that the AVR would have an unspecified "Arduino bootloader" installed. I put the circuit together following a few examples and bits of documentation found googling around on how to program an AVR using the FTDI breakout. I had used the FTDI boards before to successfully program some Nano Pros, so I kind of knew what to expect. Once I had wired the FTDI and connected it to USB, from which the AVR was also drawing power, I added the canonical LED setup to D13 at the end and, lo and behold, it started blinking familiarly. It turned out that the vendor had pre-flashed the AVR with the Blink demo.
The chip appeared to be clocked correctly and working as expected. I fired up the Arduino IDE and loaded the Blink demo. If they can do it, so can I, right? Nope. No matter how hard I tried, I could not upload the program to the AVR with any combination of board/programmer selections. I triple-checked all connections, googled around, found this question, tried all solutions, nothing. avrdude could start, reset the chip correctly (I could see it from the blinking pattern), and attempt to send data (the FTDI TX LED blinked), but the upload never completed. At this point I began wondering whether the bootloader was out of date, so went for the nuclear option:

• loaded the ArduinoISP sample into the Arduino IDE
• connected a spare Arduino to be used as the programmer
• uncommented "#define USE_OLD_STYLE_WIRING"
• wired D11, D12, D13 to D11, D12, D13 of the bare AVR chip
• wired D10 of the programmer to the Reset pin of the AVR chip
• added a 10 kohm pull-up resistor to V+ at the Reset pin
• uploaded ArduinoISP to the "programmer"
• selected "Tools/Programmer" -> "Arduino as ISP"
• crossed my fingers

The "crossed my fingers" must have worked, because avrdude was happy and the chip was no longer blinking - meaning that at least I was able to upload something and likely there was nothing wrong with the chip and the wiring, even if it likely meant that I had bricked the chip. I only had to try again, so I unhooked the "programmer" from the AVR, went back to the "Blink" sample, uploaded after selecting the "Arduino/Genuino Uno" board, and, yay!, there was the cheerful light! So, all in all it was not that difficult. All I had to do was to cross my fingers at the right time.
There are many diagrams and articles on how to flash the bootloader to a bare AVR using "Arduino as ISP"; here's one I found useful: Running Atmega328 in a standalone mode without Arduino Shield I've found that the 328 and 328P are recognised as different chips in avrdude; edit avrdude.conf for the 328P and add this signature instead: 0x1e 0x95 0x14 You need a 10uF cap connected to the reset pin and DTR. Here is a working example of how you should be connected: The 3 other caps are 22pF • Please remove or rewrite this entirely incorrect posting, which can only lead readers to frustration. A large capacitor between reset and ground would prevent the auto-reset circuit from entering the bootloader. Nor is that what the picture you are using shows - it shows a capacitor between DTR and reset. Further, 22pF is woefully inadequate for supply bypassing. – Chris Stratton Jul 8 '16 at 7:38 • No, it is incorrect! The problem with your post is that you specify a "10uF cap connected to the reset pin and ground" THIS IS UNWORKABLE. The correct location for the capacitor is between DTR and reset, but that is not what you posted. Also, 22pF is effectively the same as having no power bypass capacitor at all - it may still work, but it is not sound design. You need to remove the matching, equally incorrect, comment on the question, too. – Chris Stratton Jul 8 '16 at 19:22 • You are still giving a faulty recommendation on the supply bypass. We deal in engineering facts here - it doesn't matter if you got that idea from elsewhere, it is still wrong. – Chris Stratton Jul 8 '16 at 19:30 • Your apparently fundamental misunderstanding of bypass capacitors is something you could correct with any reference on the subject. There is nothing to "chat" about here. – Chris Stratton Jul 10 '16 at 0:30 • If you look at the bypass caps C4, C5, C6, C7 on the reference design you link to, they are 100nF not 22pF.
– Pete Kirkham Aug 22 '16 at 12:05
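For reference, the avrdude.conf edit suggested in one of the answers above would look roughly like the fragment below. The stock ATmega328P entry ships with signature 0x1e 0x95 0x0f, while a plain (non-P) ATmega328 reports 0x1e 0x95 0x14. The surrounding fields are elided and the exact layout varies between avrdude versions, so treat this as a sketch rather than a drop-in entry:

```
part
    id        = "m328p";
    desc      = "ATmega328P";
    # changed from 0x1e 0x95 0x0f so avrdude accepts a plain ATmega328
    signature = 0x1e 0x95 0x14;
    ...
;
```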
http://www.ntg.nl/pipermail/dev-luatex/2008-June/001669.html
# [Dev-luatex] Unicode in \pdfinfo

Hans Hagen pragma at wxs.nl
Sat Jun 28 13:46:17 CEST 2008

```Khaled Hosny wrote:
> On Fri, Jun 27, 2008 at 11:49:07PM +0200, Hans Hagen wrote:
>> Khaled Hosny wrote:
>>> \pdfinfo doesn't seem to support Unicode, when I compile the attached
>>> example with luatex I get symbols like هصة instead of the proper
>>> (Arabic) Unicode strings.
>>> Shouldn't luatex default to Unicode here too?
>> this is a backend c.q. macro package issue ... (btw this can also be done
>> in pdftex, but i never came to enabling the code in context because
>> there was nothing to test)
>
> I suspected that, since hyperref with latex does this, but I found that
> xetex supports Unicode pdfinfo directly and thought luatex should do
> this too.

in luatex eventually we will have a more generic backend concept and try
to minimize the number of specific primitives

for instance, at some point we will have something like pdf.info being a lua
table (representing a dictionary) and then one sets lua strings and these
are just sequences of bytes; this is why a helper makes more sense

pdf.info.title = string.utf8valueto16be(0xFEFF) .. string.utf8toutf16be(somestring)

in the meantime such helpers could also be used in the regular \pdfinfo

as taco mentioned, xetex is a different animal .. ok, there could be a
primitive doing the conversion, but there the pdf support is driven by
the dvipdfmx backend; also, keep in mind that in practice there are many
more places where strings show up (e.g. in user annotations) and not
every string is representing text (currently in context i use hex
strings instead)

Hans

-----------------------------------------------------------------
```
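For readers outside the luatex world: the byte layout Hans sketches — a UTF-16BE byte order mark followed by the UTF-16BE encoding of the text — can be illustrated in a few lines of Python. The helper names in Hans's snippet are his own sketch; this is only a demonstration of the bytes a PDF text string expects, not luatex's actual API:

```python
def pdf_utf16_text(s: str) -> bytes:
    """Encode a string the way a PDF Unicode text string expects it:
    a UTF-16BE byte order mark (FE FF) followed by the UTF-16BE
    code units of the text."""
    return "\ufeff".encode("utf-16-be") + s.encode("utf-16-be")

# An ASCII title just gains the BOM and zero high bytes:
print(pdf_utf16_text("Title"))  # b'\xfe\xff\x00T\x00i\x00t\x00l\x00e'
```

A PDF reader that sees the FE FF prefix interprets the rest of the string as UTF-16BE, which is what makes Arabic (or any non-Latin-1) info strings display correctly.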
https://lambdaclass.com/data_etudes/dont_bet_on_an_ev/
Don't bet on an expected value To play or not to play? Imagine a game where you toss a fair coin and bet your entire wealth, starting from an initial wealth $$w_0$$. If it comes up heads your monetary wealth increases by 50%; otherwise, it is reduced by 40%. You're not only doing this once, but many times; for example, once per week for the rest of your life. Would you play this game if given the opportunity? Solution Every run of the game is independent, with success and failure equally likely. Thus $$X_i$$, a random variable returning 1 on success and 0 on failure, is nothing else than a $$Bernoulli(1/2)$$. $$X_n$$, the random variable that counts the number of successful outcomes in $$n$$ runs of the game, would be defined as: $X_n = \sum\limits_{i=1}^{n} X_i$ Then $$X_n \sim Bin(n, 1/2)$$. After $$n$$ coin tosses, our final wealth, the random variable $$W_n$$, can be modeled as: \begin{aligned} \begin{equation*} W_n = w_0 \left(1.5^{X_n}\right)\left(0.6^{n-X_n}\right) \end{equation*} \end{aligned} In order to decide whether we would accept to play this game for the rest of our lives, we have to check how $$W_n$$ behaves when $$n \rightarrow \infty$$. Let's rearrange things a bit, apply $$\log()$$ on both sides to make things algebraically simpler, and then take the limit, which we'll later use to make deductions about $$W_n$$ itself. Before stepping into the calculations, it is of great importance to be clear about the use of the $$\log()$$ function to avoid misinterpretations, especially those related to Utility Theory: it is there only for algebraic reasons. There is no human being involved here; no behaviour modelling, no subjective value, no risk aversion, no social/economic/psychological implications. It is just mathematics.
Having said that, let's proceed: \begin{aligned} \begin{equation*} W_n = w_0 \left(\left(1.5/0.6\right)^{X_n}\right) \left(0.6^{n}\right) \end{equation*}\end{aligned} \begin{aligned} \begin{equation*} \log{W_n} = \log{\left[w_0 \left(1.5/0.6\right)^{X_n} \left(0.6^{n}\right)\right]} \end{equation*} \end{aligned} $\begin{split} \lim_{n\to\infty} \log{W_n} &= \lim_{n\to\infty} \log{\left[w_0 \left(1.5/0.6\right)^{X_n} \left(0.6^{n}\right)\right]}\\ &= \log{w_0} + \lim_{n\to\infty} \log{\left(1.5/0.6\right)^{X_n}} + \log{0.6^{n}}\\ &= \log{w_0} + \lim_{n\to\infty} X_n \log{\left(1.5/0.6\right)} + n \log{0.6}\\ &= \log{w_0} + \lim_{n\to\infty} n \left(\tfrac{X_n}{n} \log{\left(1.5/0.6\right)} + \log{0.6}\right)\\ \end{split}$ Thanks to the Strong Law of Large Numbers we know that $\lim_{n\to\infty}{\frac{X_n}{n} = \mathbb{E}[X_i] = p = \frac{1}{2}}$ almost surely (i.e. with probability equal to 1). So, as a consequence, again almost surely: $\begin{split} \lim_{n\to\infty} \log{W_n} &= \log{w_0} + \lim_{n\to\infty} n \left(\tfrac{1}{2} \log{\left(1.5/0.6\right)} + \log{0.6}\right) \\ &= \log{w_0} + \lim_{n\to\infty} n \left(\log{\left(1.5/0.6\right)^{1/2}} + \log{0.6}\right) \\ &= \log{w_0} + \lim_{n\to\infty} n \log{\left(\sqrt{1.5/0.6} \cdot 0.6\right)}\\ &= \log{w_0} + \lim_{n\to\infty} n \log{\sqrt{1.5/0.6 \cdot 0.6 \cdot 0.6}}\\ &= \log{w_0} + \lim_{n\to\infty} n \log{\sqrt{1.5 \cdot 0.6}}\\ &\approx \log{w_0} + \lim_{n\to\infty} n \log{0.95}\\ &\approx \log{w_0} + \lim_{n\to\infty} n \left(-0.0229 \right) \\ &= -\infty \end{split}$ Now, since: \begin{aligned} \begin{equation*} \lim_{n\to\infty} \log{W_n} = -\infty \end{equation*} \end{aligned} we can finally conclude: \begin{aligned} \begin{equation*} \lim_{n\to\infty} {W_n} = 0 \end{equation*} \end{aligned} Our wealth will decrease to 0 when $$n\to\infty$$ regardless of our starting wealth. The answer to our initial question should be: no, I do not want to play since I'm certain to go bust. 
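The almost-sure limit is easy to check numerically: simulate one long sequence of flips and look at the per-flip growth of log-wealth, which should settle near $$\log\sqrt{1.5 \cdot 0.6}$$ (about $$-0.0527$$ in natural logs; the $$-0.0229$$ above is the same quantity in base 10). A quick sketch using only the standard library:

```python
import math
import random

def log_growth_per_flip(n_flips: int, seed: int = 0) -> float:
    """Average change in log-wealth per flip over one long trajectory,
    starting from w0 = 1 (heads: x1.5, tails: x0.6)."""
    rng = random.Random(seed)
    log_w = 0.0  # log of wealth
    for _ in range(n_flips):
        log_w += math.log(1.5) if rng.random() < 0.5 else math.log(0.6)
    return log_w / n_flips

rate = log_growth_per_flip(1_000_000)
print(rate, math.log(math.sqrt(1.5 * 0.6)))  # both ≈ -0.0527
```

The simulated rate is negative, so the simulated wealth itself heads to zero, exactly as the limit argument predicts.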
The expected value A common erroneous way of approaching the problem is to calculate the expected value of your wealth: $\begin{split} \mathbb{E}[W_n] &= \mathbb{E}[w_0 \left(1.5^{X_n}\right) \left(0.6^{n-X_n}\right)]\\ & = w_0 \left(0.6^n\right) \left(\mathbb{E}[(1.5/0.6)^{X_n}]\right)\\ & = w_0 \left(0.6^n\right) \left(\mathbb{E}[(2.5)^{X_n}]\right) \end{split}$ To calculate the expected value of $$k^{X_n}$$, we'll use the theorem known as the Law of the Unconscious Statistician for discrete random variables: $\mathbb{E}[g(X_n)] = \sum\limits_{x} g(x)\, p_X(x)$ With the binomial pmf being: $p_X(x) = {n \choose x} p^x (1-p)^{n-x}$ Then, seeing that the sum is nothing else than Newton's binomial formula for the expansion of $$(a+b)^n$$: $\begin{split} \mathbb{E}[(2.5)^{X_n}] &= \sum\limits_{x=0}^{n} 2.5^x {n \choose x} p^x (1-p)^{n-x}\\ &= \sum\limits_{x=0}^{n} {n \choose x} (2.5p)^x (1-p)^{n-x}\\ &= (2.5p + 1 - p)^n \\ &= (2.5 \cdot \dfrac{1}{2} + 1 - \dfrac{1}{2})^n\\ & = 1.75^n \end{split}$ Finally: $\begin{split} \mathbb{E}[W_n] &= w_0 \left(0.6^n\right) \left(\mathbb{E}[(2.5)^{X_n}]\right)\\ &= w_0 \left(0.6^n\right) \left(1.75^n\right)\\ &= w_0 \cdot 1.05^n \end{split}$ This might lead us to conclude that the gamble is worth taking, since we "expect" our wealth to increase indefinitely at a rate of $$1.05$$ every time we flip the coin. Actually, we've already proven that this is not true at all. The expected value won't tell us if a gamble is worth taking. It tells us what would happen on average if a group of people were to take the bet in parallel, and there are some conditions that need to be satisfied to be certain that this coincides with what will happen to one individual taking the bet repeatedly over time. The expected value, or ensemble average, tells us what would happen to an individual in multiple parallel universes, which most times is not representative of what would happen to an individual over time.
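The gap between the two viewpoints shows up immediately in simulation: the ensemble mean over many simulated players tracks $$1.05^n$$, while the typical (median) player's wealth decays like $$0.9^{n/2}$$. A minimal sketch with the standard library:

```python
import random
import statistics

def simulate_wealths(n_players: int, n_flips: int, seed: int = 0):
    """Final wealth of each of n_players after n_flips, starting from w0 = 1."""
    rng = random.Random(seed)
    wealths = []
    for _ in range(n_players):
        w = 1.0
        for _ in range(n_flips):
            w *= 1.5 if rng.random() < 0.5 else 0.6
        wealths.append(w)
    return wealths

ws = simulate_wealths(100_000, 10)
print(statistics.mean(ws))    # close to 1.05**10 ≈ 1.63
print(statistics.median(ws))  # close to 0.9**5 ≈ 0.59: the typical player loses
```

A handful of extraordinarily lucky trajectories drag the mean up; the median player, and as $$n$$ grows almost every player, still goes broke.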
Repetition matters We cannot obtain the returns of a person going to the casino one hundred times in a row by calculating the average returns of a hundred people betting one time. This mistake of treating the ensemble average as if it were the time average has been made repeatedly in the social sciences. The problem does not lie with the expected value per se but with the interpretation we assign to it. It is important to keep in mind that there is nothing special about the growth factors 0.6 and 1.5. We could use other growth factors, for example ones under which no one goes bankrupt, and the ensemble average would still differ from the time average. The important thing here is the difference between the ensemble and the time average. The consequences of omitting this in economics and psychology are enormous. Any conclusion that uses the ensemble average in place of the time average should be taken with great caution. Furthermore, we have been judging rationality in economic behavior based on optimizing the ensemble average. But the rationality of individuals can't be defined in terms of maximization of ensemble averages. Attempts to correct the error of using the ensemble average by adding arbitrary utility functions do not solve it. New behavioral experiments show that agents maximize the time-average growth of their wealth. The mistake of using ensemble averages instead of time averages has been propagated for the last two hundred years. Claude Shannon, Edward Oakley Thorp and John Larry Kelly Jr., pioneers of information theory and its application to gambling, were notable exceptions in that they did not make the same mistake. Ole Peters was the first to systematize, generalize and extend the study of economic theory without parallel universes. With Murray Gell-Mann, Nobel Prize winner and one of the founders of modern particle physics, he co-authored a paper on the time resolution of the St. Petersburg paradox.
The London Mathematical Laboratory, where Ole Peters is a fellow, has published incredibly profound lecture notes on this subject. Young economists who are eager to study economics with solid foundations should read them.
https://rpg.stackexchange.com/questions/177261/in-case-a-trigger-for-a-readied-action-is-someone-elses-reaction-what-is-resol
# In case a trigger for a readied action is someone else's reaction, what is resolved first?

I asked if you can ready an action with someone's reaction as a trigger. The answer is YES. But I'm still not sure - in such a case (i.e. the trigger for a reaction being another reaction) - which reaction is resolved first, the triggered one or the triggering one? Does the readied action interrupt someone else's reaction that triggered it? Is there a possibility of a chain of such consecutively triggering and consecutively interrupted reactions?

• From the PHB: If the reaction interrupts another creature's turn, that creature can continue its turn right after the reaction. But I see a problem - it's not "another creature's turn" here. Does it mean that there's no interruption and the triggering reaction finishes first? – z33k Nov 9, 2020 at 21:29
• @ThomasMarkov I expect Counterspell would be the best candidate of an example for a reaction interrupting the triggering action. Nov 9, 2020 at 21:39
• @RevenantBacon sure, but here we have a reaction interrupting (or not) a triggering reaction, not action. – z33k Nov 9, 2020 at 21:42
• @RevenantBacon you're right (about counterspelling a counterspell), I rest my case :). Would you care to make a proper answer out of it? I'll accept it right away. – z33k Nov 9, 2020 at 21:44
• @z33k My actual point was that you can Counterspell a Counterspell. Nov 9, 2020 at 21:45

## The trigger is resolved first

From the rule on the Ready action: … take your Reaction right after the trigger finishes This applies specifically to the Ready action; reactions that don't depend on the Ready action (e.g. opportunity attacks, reaction spells) have their own rules. Some of these, like Counterspell, specifically interrupt the trigger (so, yes, you can Counterspell a Counterspell). Others don't. And one, Shield, has a weird time-travel effect. However, for anything hanging off the Ready action, the trigger gets fully resolved first.
• I'll put my foot down and say Shield has no time-travel shenanigans. Fight me. It's the difference in a kid's shootout game between saying "I shot you" and "I shot at you". If there's room for a reaction with the "When you are hit" trigger, it obviously means right before the attack disembowels you. Occam's Razor: it blocks the strike at the last moment, instead of time-traveling. Beating the AC does not mean the strike went through. Dealing damage and then finishing the Action does. Nov 10, 2020 at 18:58
• @Mindwin you explain magic how you want. I'll stick with my explanation of magical time travel. Nov 10, 2020 at 20:49

## The Readied reaction is always after its trigger

The Ready action defines when the reaction is taken (PHB p. 193; emphasis added): When the trigger occurs, you can either take your reaction right after the trigger finishes or ignore the trigger. If your defined trigger is someone else's reaction (or whatever act that reaction includes), you can take your Readied action after that reaction has finished. Other reactions may actually interrupt, i.e. resolve before their trigger, which generally only occurs when the reaction modifies or prevents the trigger, such as counterspell and the Lore Bard's Cutting Words (which resolves before the effect resulting from the roll).

## You resolve in order of most recent reaction taken.

The full text on reactions:

Reactions Certain special abilities, spells, and situations allow you to take a special action called a reaction. A reaction is an instant response to a trigger of some kind, which can occur on your turn or on someone else's. The opportunity attack is the most common type of reaction. When you take a reaction, you can't take another one until the start of your next turn. If the reaction interrupts another creature's turn, that creature can continue its turn right after the reaction.

So we can see here that reactions are instant responses to a trigger.
Let's say that the trigger is Bob casting the spell Fireball. Alice, Bob's opponent, doesn't want to take a fireball to the face, and promptly Counterspells, so as not to be burnt to a crisp. Since Bob does want her burnt to a crisp, he decides to Counterspell Alice's Counterspell. So now we have this scenario:

1. Bob casts Fireball
2. Alice Counterspells
3. Bob Counterspells

In order for the spell Counterspell to work as intended, the most recently used reaction must resolve first. If we had more wizards available, then we could theoretically have a chain of Counterspells as long as the [number of available casters] +1 (though as a DM, I personally would restrict it to only 4 or 5 reactions triggering off each other). Since Readied Actions and Reactions use the same rules, we can safely say that a similar chain of events can be performed. Bob readies an action to attack when Alice attacks. Alice readies an action to attack when Tim attacks. Tim readies an action to attack when Jane attacks. We get to Jane's turn, and she takes a swing at Bob, triggering Tim's Readied Action, which in turn triggers Alice's Readied Action, which then triggers Bob's Readied Action. Then we resolve in reverse order: Bob hits, then Alice hits, then Tim hits, then, finally, Jane hits.

• Counterspell is not a readied action though, it's a reaction in its own right Nov 9, 2020 at 22:01
• @Someone_Evil A readied action is a subtype of reaction and uses the same rules. Nov 9, 2020 at 22:02
• This kind of confusion comes from the misplacement of reaction rules that appear partly in the PH and partly in the DMG. The latter handbook states (page 252): "Follow whatever timing is specified in the reaction's description. For example, the opportunity attack and the shield spell are clear about the fact that they can interrupt their triggers. If a reaction has no timing specified, or the timing is unclear, the reaction occurs after its trigger finishes, as in the Ready action".
Would have been nice to see this tidbit in the PH instead, since nobody uses the DMG. Nov 10, 2020 at 14:06 • @KogarashiKaito: great comment. It both settles the matter and pinpoints the source of confusion. For me RevenantBacon's thinking is perfectly fine. Actually, I too would like one blanket rule (i.e. reactions always interrupt) better than our settling here that says: the rule works this way sometimes, and that way some other times. Alas, RAW rules us all :) – z33k Nov 10, 2020 at 14:27
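Following the DMG timing rule quoted in the comments (interrupting reactions like Counterspell resolve before their trigger, last-in-first-out, while Readied actions resolve only after their trigger finishes), the two resolution orders can be sketched as a toy model. This is purely illustrative, not an official algorithm:

```python
def resolve_interrupt_chain(chain):
    """chain: the triggering action followed by each interrupting reaction,
    in declaration order. Interrupts resolve LIFO, so the most recently
    declared reaction resolves first and the original action last."""
    return list(reversed(chain))

def resolve_readied_chain(trigger, readied):
    """Readied actions resolve right after their trigger finishes, so the
    trigger goes first and each readied action follows in trigger order."""
    return [trigger] + readied

# The Counterspell chain: Bob's second reaction resolves first.
print(resolve_interrupt_chain(
    ["Bob: Fireball", "Alice: Counterspell", "Bob: Counterspell"]))
# The Ready-action chain under the DMG rule: each swing lands after its trigger.
print(resolve_readied_chain(
    "Jane attacks", ["Tim attacks", "Alice attacks", "Bob attacks"]))
```

Note that the second answer above argues for the reversed order in the readied-action case; the toy model follows the Ready action's "right after the trigger finishes" wording instead.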
https://techutils.in/blog/2019/04/05/stackbounty-distributions-p-value-goodness-of-fit-kolmogorov-smirnov-goodness-of-fit-test-on-arbitrary-parametric-distributions-w/
#StackBounty: #distributions #p-value #goodness-of-fit #kolmogorov-smirnov Goodness-of-fit test on arbitrary parametric distributions w… Bounty: 100

There have been many questions regarding this topic already addressed on CV. However, I was still unsure if this question was addressed directly.

1. Is it possible, for any arbitrary parametric distribution, to properly calculate the p-value for a Kolmogorov-Smirnov test where the parameters of the null distribution are estimated from the data?
2. Or does the choice of parametric distribution determine if this can be achieved?
3. What about the Anderson-Darling and Cramér-von Mises tests?
4. What is the general procedure for estimating the correct p-values?

My general understanding of the procedure would be the following. Assume we have data $X$ and a parametric distribution $F(x;\theta)$. Then I would:

• Estimate parameters $\hat\theta_{0}$ for $F(x;\theta)$.
• Calculate the Kolmogorov-Smirnov, Anderson-Darling and Cramér-von Mises test statistics: KS$_{0}$, AD$_{0}$ and CVM$_{0}$.
• For $i=1,2,\ldots,n$:
  1. Simulate data $y$ from $F(\,\cdot\,;\hat\theta_{0})$
  2. Estimate $\hat\theta_{i}$ for $F(y;\theta)$
  3. Calculate the KS$_{i}$, AD$_{i}$ and CVM$_{i}$ statistics for $F(y;\hat\theta_{i})$
• Calculate $p$-values as the proportion of these statistics that are more extreme than KS$_{0}$, AD$_{0}$ and CVM$_{0}$, respectively.

Is this correct?
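The procedure sketched in the question is usually called a parametric bootstrap (for the normal case it reproduces the Lilliefors test). Below is a minimal, standard-library-only sketch for the KS statistic with a fitted normal null; the same skeleton extends to AD/CVM statistics or other parametric families by swapping the statistic and the fit/simulate steps. The function names are mine, not from any library:

```python
import math
import random

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_normal(data):
    """Step 1: estimate theta_hat (here the MLE mean and sd of a normal)."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    return mu, sigma

def ks_stat(data, mu, sigma):
    """Kolmogorov-Smirnov distance between the ECDF and N(mu, sigma^2)."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = norm_cdf(x, mu, sigma)
        d = max(d, f - i / n, (i + 1) / n - f)
    return d

def bootstrap_p_value(data, n_boot=200, seed=0):
    """Steps 2-4: observed statistic, refit on each simulated sample,
    and the proportion of bootstrap statistics at least as extreme."""
    rng = random.Random(seed)
    mu0, s0 = fit_normal(data)
    d0 = ks_stat(data, mu0, s0)
    hits = 0
    for _ in range(n_boot):
        y = [rng.gauss(mu0, s0) for _ in data]  # simulate from F(.; theta_hat_0)
        mu_i, s_i = fit_normal(y)               # re-estimate on the bootstrap sample
        if ks_stat(y, mu_i, s_i) >= d0:
            hits += 1
    return (hits + 1) / (n_boot + 1)            # small-sample-safe proportion

rng = random.Random(7)
sample = [rng.gauss(10.0, 2.0) for _ in range(200)]
print(bootstrap_p_value(sample))  # a p-value in (0, 1]
```

The crucial detail — re-estimating the parameters on every simulated sample rather than reusing the original estimates — is what makes the resulting p-value account for the estimation step; skipping it reproduces the well-known conservatism of the naive KS test with estimated parameters.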
http://rspb.royalsocietypublishing.org/highwire/markup/35405/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 1. Fifteen Y-STRs with mutation rates, range of alleles and estimate of duration of linearity. All STRs investigated in this study are shown with their mutation rates (μ), estimated from Ballantyne et al. [33], and range of observed alleles, R, with 95% CI, taken from the YHRD [34]. θ(R)/2μ is an estimate of the duration of linearity of an STR (see §2).

| Y-STR     | μ        | μ(2.5)    | μ(97.5) | R  | θ(R)/2μ |
|-----------|----------|-----------|---------|----|---------|
| DYS448    | 0.000394 | 0.0000141 | 0.00211 | 11 | 25 381  |
| DYS392    | 0.00097  | 0.000143  | 0.00323 | 15 | 19 244  |
| DYS438    | 0.000956 | 0.000137  | 0.00318 | 12 | 12 465  |
| DYS390    | 0.00152  | 0.000352  | 0.00409 | 13 | 9211    |
| DYS393    | 0.00211  | 0.000621  | 0.00512 | 12 | 5648    |
| DYS439    | 0.00384  | 0.00163   | 0.00754 | 15 | 4861    |
| DYS437    | 0.00153  | 0.000354  | 0.0041  | 9  | 4357    |
| DYS635    | 0.00385  | 0.00163   | 0.00755 | 14 | 4221    |
| DYS456    | 0.00494  | 0.00235   | 0.00897 | 14 | 3289    |
| DYS389II  | 0.00383  | 0.00161   | 0.00749 | 12 | 3111    |
| DYS391    | 0.00323  | 0.00126   | 0.00665 | 10 | 2554    |
| DYS458    | 0.00836  | 0.0048    | 0.0134  | 14 | 1944    |
| DYS19     | 0.00437  | 0.00198   | 0.00823 | 10 | 1888    |
| Y-GATA-H4 | 0.00322  | 0.00128   | 0.00662 | 8  | 1630    |
| DYS389I   | 0.00551  | 0.00272   | 0.00974 | 8  | 953     |
https://www.mejtoft.se/thomas/education/academic-writing/latex-writing-tips/
LaTeX writing tips

When writing a research paper, thesis, report, or just any type of document, there are many different systems for preparing your document – WYSIWYG systems, such as Word and Pages, and typesetting systems, such as LaTeX, to mention a few. When writing reports and research papers, especially within disciplines that require equations and calculations, LaTeX is popular. The principles of writing documents in LaTeX are very similar to writing in HTML and styling using CSS. Simplified, the files used in LaTeX are template files for formatting and styling, source code files for your content, and output files. The output files are the formatted pdf-files. Another feature of LaTeX is the handling of references using e.g., BibTeX. You can either download and install LaTeX on your local computer or use an online editor, e.g., Overleaf, when writing and compiling your document. Even though these things are general, I strongly recommend using BibTeX with Overleaf when writing. Below are some Tips & Trix on using LaTeX that I have collected over the years, most often as frequent errors in students' papers.

Tips on how to organize the LaTeX project during writing

• Break up large TeX-files. If you are writing a longer document, e.g., a thesis or report, it might be of value to split the document into smaller parts, e.g., the different chapters of your writing. It becomes easier to find and edit text. The different files are included in the main document using the command \include{}, which takes the file name without the .tex extension:

\include{introduction}
\include{background}
...

• Labels and cross-references. Put labels on figures, images, and anything else that is included in your writing. Making a cross-reference to a label makes, e.g., figure numbers and page numbers update automatically.

...
\label{fig:survey-results}
...

Cross-reference: The results from the survey are presented in figure \ref{fig:survey-results} on page \pageref{fig:survey-results}.
Output: The results from the survey are presented in figure 4 on page 36.

Here are some small tips & tricks on LaTeX (a.k.a. frequent errors on student papers)

• Swedish (or other special) characters. Check for the ability to write Swedish (or other non-English) characters (most often noticed when writing Umeå University). If you are using Swedish (or other non-English) characters regularly throughout a paper, add the UTF-8 package (see below). If you are using just a few Swedish (or other special) characters, you can type special characters directly into your code (see example below).

% Adding the UTF-8 package
\usepackage[utf8]{inputenc}

% Special characters in any LaTeX code
\AA    % Å
\aa    % å
\"{A}  % Ä
\"{a}  % ä
\"{O}  % Ö
\"{o}  % ö

% Other useful special characters
-    % hyphen (-)
--   % en-dash (–) (sv: tankstreck) (used to mark ranges)
---  % em-dash (—) (used to separate extra information)

• Space before (any) reference. Put a space between the word and the reference: “… as has been shown [7].” Most often a non-breaking space might be your best option. Use “~” (non-breaking space) in the source code.

• Include the right packages. Nice packages for writing and styling: include the hyperref package for clickable links and cross-references, the graphicx package for enhanced graphics support, and the xcolor package for color support.

\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{xcolor}

• Quotation marks. Single quotation marks are produced using ` and '. Double quotation marks (for citation) are produced by typing `` and '' (two grave accents/backticks at the beginning and two straight single quotation characters at the end of the quote). Then you will get distinctly left-handed and right-handed typographic quotation marks.

Example:
LaTeX code: Previous research has defined this as ``a total waste of time'' regarding tracing.
Output: Previous research has defined this as “a total waste of time” regarding tracing.
LaTeX code: Mejtoft (2000) states that ``previous research has defined this as `a total waste of time' regarding tracing''.
Output: Mejtoft (2000) states that “previous research has defined this as ‘a total waste of time’ regarding tracing”.

If a single and a double quotation mark follow each other, the control sequence \, should be used to ensure the right spacing in-between the different quotation marks.

Example:
LaTeX code: Tracing -- according to Mejtoft (2000), ``previous research has defined this as `a total waste of time'\,''.
Output: Tracing – according to Mejtoft (2000), “previous research has defined this as ‘a total waste of time’ ”.

• Width of figures. It is possible to specify the width of figures, to set the appropriate size of the figure compared to the rest of the document. It might be necessary to work with textwidth, columnwidth, and linewidth, depending on the situation. Read more.

% Sets the width to the full width of the text
\includegraphics[width=\textwidth]{results.png}
% Sets the width to 75% of the width of the text
\includegraphics[width=.75\textwidth]{results.png}
% Sets the width to the full width of the column (valuable when having two or more columns)
\includegraphics[width=\columnwidth]{results.png}
% Sets the width to 60% of the full width of the column
\includegraphics[width=.6\columnwidth]{results.png}
% Sets the width to 50% of the length of the line in the current environment
\includegraphics[width=.5\linewidth]{results.png}

• Force line breaks in long weblinks. From time to time, long URLs do not break in the reference list (and sometimes in other parts of the paper). To solve this problem, use the package url and define characters that are used for line breaks (if necessary).

\usepackage{url}
\def\UrlBreaks{\do\-\do\.} % Defines "-" and "." as characters where a line break is allowed

• BibTeX. For most references you can export references as BibTeX from the publishers’ websites and the databases, e.g. ACM Digital Library, IEEE Xplore, ScienceDirect etc.
Use the right type in the BibTeX entries – @article, @inproceedings, @book, @inbook, @online, and @techreport being some of the most common.
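For illustration, a hand-written entry with the fields BibTeX expects might look like the following. The key, authors, and venue below are invented placeholders purely to show the field layout, not a real reference:

```bibtex
% Hypothetical @inproceedings entry, shown only to illustrate the field layout
@inproceedings{doe2020example,
  author    = {Jane Doe and John Smith},
  title     = {An Illustrative Paper Title},
  booktitle = {Proceedings of an Example Conference},
  year      = {2020},
  pages     = {1--10},
  publisher = {Example Press},
}
```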
https://crypto.stackexchange.com/questions/95556/rsa-key-generation-why-use-lcmp-1-q-1-instead-of-the-totient-%CF%95n/95557
# RSA key generation: why use lcm(p-1, q-1) instead of the totient ϕ(n)?

As far as I can see, generating a private key from two prime numbers p and q, having calculated n = pq, starts with calculating λ(n) = lcm(p-1, q-1). This is the detailed explanation given in the Wikipedia article for RSA, it's also the implementation I've found in most Python cryptography libraries, and, searching through the openssl source code, it's also how they seem to do it, so I'd say this looks like the standard. So my question is, why do some implementations appear to use ϕ(n) instead, which is simply (p-1)(q-1)? I understand that you can calculate λ(n) = ϕ(n) / gcd(p-1, q-1), so I suppose these two can be equal if p-1 and q-1 are coprime, but what's with the two different implementations? This way to generate the "private modulo" is used for example in the somewhat popular python program rsatool, it's also mentioned in this popular article detailing how RSA keys are generated, but my problem is, taking the two same prime numbers p and q, these two methods will not generate the same private key, so assuming the former is the proper, standard way, where did this other one come from?

I assume the 1st method has simply become the standard. As pointed out by a comment, $$\lambda(n)$$ will always be smaller than or equal to $$\phi(n)$$. In RSA, as pointed out by Dave Thompson, $$\lambda(n) \neq \phi(n)$$. $$\lambda(n)$$ possibly leads to faster calculations(?) but what interested me was where that 2nd version came from, and it turns out it comes from the original RSA paper.

• Yes, your second version (with $\varphi$ or $\Phi$ or $\phi$) is the chronologically first published. Notice the other (with $\lambda$) subdivides into $e\,d\equiv1\pmod{\lambda(n)}$ with $0<d<n$ (PKCS#1), or $d=e^{-1}\bmod{\lambda(n)}$ (FIPS 186-4).
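As a quick sanity check of the point above, here is a toy example (textbook primes p = 61, q = 53, not taken from any of the implementations discussed): the exponents derived from ϕ(n) and λ(n) differ as integers, yet both decrypt correctly.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 61, 53
n = p * q                # 3233
e = 17

phi = (p - 1) * (q - 1)  # Euler totient phi(n) = 3120
lam = lcm(p - 1, q - 1)  # Carmichael function lambda(n) = 780

# Modular inverse via 3-argument pow (Python 3.8+)
d_phi = pow(e, -1, phi)  # private exponent mod phi(n)
d_lam = pow(e, -1, lam)  # private exponent mod lambda(n)

# Different private exponents, but both undo encryption.
m = 42
c = pow(m, e, n)
assert d_phi != d_lam
assert pow(c, d_phi, n) == m
assert pow(c, d_lam, n) == m
```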
https://tex.stackexchange.com/questions/162137/loading-microtype-before-or-after-the-font
# Loading microtype before or after the font According to the answer given here, the microtype package should be loaded after the font, since microtype "needs to know what fonts are in use at the time that it is loaded". I am, however, not able to detect any difference between loading microtype before and after setting the font. I also notice that in the MWE below, the mt-*.cfg file for EB Garamond is loaded after fontspec, even if microtype is loaded first. Does this mean that it simply doesn't matter when microtype is loaded? \documentclass{article} \usepackage{microtype,polyglossia} \setdefaultlanguage{english} \usepackage{blindtext,fontspec} \setmainfont{EB Garamond} \begin{document} \Blinddocument \end{document} (simplified and abbreviated output below) This is XeTeX, Version 3.1415926-2.5-0.9999.3 (TeX Live 2013/W32TeX) (c:/texlive2013/texmf-dist/tex/latex/microtype/microtype.sty) (c:/texlive2013/texmf-dist/tex/latex/microtype/microtype.cfg) (c:/texlive2013/texmf-dist/tex/latex/polyglossia/polyglossia.sty) (c:/texlive2013/texmf-dist/tex/latex/fontspec/fontspec.sty) (c:/texlive2013/texmf-dist/tex/latex/fontspec/fontspec.cfg) (c:/texlive2013/texmf-dist/tex/latex/ebgaramond/mt-EBGaramond.cfg) • mt-EBGaramond.cfg is not loaded here even if microtype is loaded second. I can't detect a difference but for a different reason: mt-EBGaramond.cfg is not loaded either way. However, I had to change your code to get it to compile and use EB Garamond 12 as the font name. So maybe that makes a difference? – cfr Feb 24 '14 at 17:36 • @cfr I don't understand. mt-EBGaramond.cfg is not loaded? According to the log, it is, as seen in my MWE. I also don't understand what you mean by "maybe that makes a difference?" – Sverre Feb 24 '14 at 17:55 • What I mean is: when I compile your code, it is not loaded. Except that I had to alter your code to get it to compile. Without the 12 I just get errors. With the 12, it compiles but it never loads the cfg file regardless. 
It is in your log, but not mine! – cfr Feb 24 '14 at 22:13 • Since you’re using XeTeX with microtype you should know about the limitations this combination has. – Crissov Feb 25 '14 at 7:59 • @cfr I see. That's a problem at your end, since the EB Garamond font is supposed to load without specifying the optical size. But this has nothing to do with my microtype question, so let's leave that aside. @Crissov I do, but pdflatex is not an option for me, since I need to use larger fonts. And lualatex has other limitations, such as not being able to compile with pstricks. – Sverre Feb 25 '14 at 12:13 microtype can be loaded at any time as the actual font setup is deferred until the end of the preamble. The restriction that the package should be loaded after the fonts dates back to very old microtype versions (older than v1.9a (2005/12/05)). (I've fixed this in the answer you link to.) • when loading microtype with the babel option, load babel first • with luatex, load fontspec first (only the package, not necessarily the fonts)
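In practice, the only ordering rules left are the two listed in the answer. A minimal LuaLaTeX preamble respecting them might look like this (the font and language choices are just examples, not prescribed by the answer):

```latex
\documentclass{article}
% With LuaTeX, load the fontspec package before microtype
\usepackage{fontspec}
% When using microtype's babel option, load babel first
\usepackage[english]{babel}
\usepackage[babel]{microtype}
% The actual font can be selected at any point in the preamble,
% since microtype defers its font setup to the end of the preamble
\setmainfont{EB Garamond}
\begin{document}
Sample text with protrusion and expansion applied.
\end{document}
```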
https://www.justexam.in/subject-wise-question-bank/math-7/
## Class/Course - Class VII

### Subject - Math

#### Total Number of Question/s - 3433

Just Exam provides a question bank for the Class VII standard. Currently the number of questions is 3433. We provide this data in all formats (word, excel, pdf, sql, latex form with images) to institutes for conducting online tests/examinations. Here we are providing some demo contents. Interested persons may contact us at info@justexam.in

• 1. All Chapters - Quiz

• 2. Integers - Quiz

1. In $8^{9}$, 9 is
a) power b) base c) number d) none

2. While removing brackets, the order in which the brackets are removed is
a) [], (), {} b) {}, (), [] c) (), {}, [] d) none

• 3. Fractions and Decimals - Quiz

1. __________ + $\frac{4}{7}$ = 1
a) $\frac{4}{7}$ b) 1 c) $\frac{3}{7}$ d) $\frac{7}{4}$

2. $\frac{15}{45}$ = $\frac{?}{9}$
a) 15 b) 9 c) 5 d) 3

• 4. Data Handling - Quiz

1. The average weight of a sample of 10 apples is 52 g. Later it was found that the weighing machine had shown the weight of each apple 10 g less. The correct average weight of an apple is
a) 62 g b) 54 g c) 56 g d) 52 g

2. The mean of x, x+3, x+9 and x+12 is
a) x+6 b) x+3 c) x+9 d) x+12

• 5. Simple Equations - Quiz

1. By making l the subject in T = $2\pi \sqrt{\frac{l}{g}}$ we obtain l =
a) $\frac{gT^{2}}{4\pi^{2}}$ b) $\frac{gT^{2}}{2\pi}$ c) $\frac{4\pi}{gT^{2}}$ d) $\frac{2\pi}{gT^{2}}$

2. 4 is added to a number and the sum is multiplied by 5. If 20 is subtracted from the product and the difference is divided by 8, the result is equal to 10. Find the number.
a) 16 b) 12 c) 8 d) 20

• 6. Lines and Angles - Quiz

1. The angle between two perpendicular lines is
a) 30° b) 60° c) 90° d) 180°

2. When two lines meet at a point forming right angles they are said to be ______________ to each other.
a) parallel b) perpendicular d) none

• 7. The Triangle and its Properties - Quiz

1.
The incentre of a triangle coincides with the circumcentre, orthocentre and centroid in case of
a) an isosceles triangle b) an equilateral triangle c) a right angled triangle d) a right angled isosceles triangle

2. In the following figure, if AB = AC then find ∠x.
a) 80° b) 70° c) 60° d) 110°

• 8. Congruence of Triangles - Quiz

1. Which of the following statement(s) is/are true?
a) In an isosceles triangle, the angles opposite to equal sides are equal
b) The bisector of the vertical angle of an isosceles triangle bisects the base at right angles
c) If the hypotenuse and an acute angle of one right triangle are equal to the hypotenuse and the corresponding acute angle of another triangle then the triangles are congruent
d) All the above

2. In the following figure, if AB = AC and BD = DC then ∠ADC =
a) 60° b) 120° c) 90° d) none

• 9. Comparing Quantities - Quiz

1. A person sells 36 oranges per rupee and incurs a loss of 4%. Find how many oranges per rupee should be sold to have a gain of 8%.
a) 32 b) 30 c) 28 d) 34

2. In 2003, the Indian cricket team played 60 games and won 30% of the games played. After a phenomenal winning streak, the team raised its average to 50%. How many games did the team win in a row to attain this average?
a) 36 b) 24 c) 48 d) 12

• 10. Rational Numbers - Quiz

1. By what rational number should $\frac{-8}{39}$ be multiplied to obtain 26?
a) $\frac{507}{4}$ b) $\frac{-507}{4}$ c) $\frac{407}{4}$ d) None

2. The sum of two rational numbers is -3. If one of the numbers is $\frac{-7}{5}$, then the other number is
a) $\frac{-8}{5}$ b) $\frac{8}{5}$ c) $\frac{-6}{5}$ d) $\frac{6}{5}$

• 11. Practical Geometry - Quiz

1. In a ΔABC, if ∠B is an obtuse angle, then the longest side is
a) AB b) BC c) AC d) none

2. If a, b and c are sides of a triangle, then
a) a - b > c b) c > a + b c) c = a + b d) b < c + a

• 12. Perimeter and Area - Quiz

1.
If the radius of a circle is $\frac{7}{\sqrt{\pi}}$ cm, then the area of the circle is
a) 154 cm² b) $\frac{49}{\sqrt{\pi}}$ cm² c) 22 cm² d) 49 cm²

2. Find the area of the shaded region of the following figure. ABCD is a rectangle having length 30 cm and breadth 25 cm. P, Q, R, S are midpoints of AB, BC, CD and AD respectively.
a) 375 m² b) 375 cm² c) 475 m² d) None

• 13. Algebraic Expressions - Quiz

1. Simplify $x^{2}y^{3} - 1.5x^{2}y^{3} + 1.4x^{2}y^{3}$.
a) $0.9x^{2}y^{3}$ b) $-0.9x^{2}y^{3}$ c) 0.9 d) -0.9

2. To get the value of a, __________ is to be divided on both sides of the equation 6a = 30 (elimination method).
a) -6 b) 30 c) 6 d) 0

• 14. Exponents and Powers - Quiz

1. If $\sqrt{2}$ = 1.414 then the value of $\frac{5 + \sqrt{2}}{5 - \sqrt{2}}$ is
a) 1.787 b) 1.525 c) 1.828 d) 1.326

2. The sum of the powers of the prime factors in 108 × 192 is
a) 5 b) 7 c) 8 d) 12

• 15. Symmetry - Quiz

1. Which of the following alphabets has a vertical line of symmetry?
a) A b) B c) Q d) E

2. Which of the following alphabets has no lines of symmetry?
a) A b) B c) Q d) O

• 16. Visualising Solid Shapes - Quiz

1. How many cubes are needed to make the solid below?
a) 9 b) 14 c) 18 d) 21

2. How many corners does the shape below have?
a) 6 b) 10 c) 12 d) 13
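As a worked check of the first Perimeter and Area question above (nothing beyond the standard area formula):

```latex
A = \pi r^{2}
  = \pi \left( \frac{7}{\sqrt{\pi}} \right)^{2}
  = \pi \cdot \frac{49}{\pi}
  = 49\ \text{cm}^{2}
```

so option d) is the correct answer.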
http://www.ipam.ucla.edu/abstract/?tid=14366&pcode=ELWS4
## Competing sources of variance in parallel replica Monte Carlo, and optimization in the low temperature limit

#### Paul Dupuis, Brown University

Computational methods such as parallel tempering and replica exchange are designed to speed convergence of more slowly converging Markov processes (corresponding to lower temperatures for models from the physical sciences) by coupling them with higher temperature processes that explore the state space more quickly through a Metropolis type swap mechanism. An implementation of the infinite swapping rate limit, which by certain measures is optimal, can be realized in terms of a process which evolves using a symmetrized version of the original dynamics, and then produces approximations to the original problem by using a weighted empirical measure. The weights are needed to transform samples under the symmetrized dynamics into distributionally correct samples for the original problem. After reviewing the construction of this “infinite swapping limit,” we focus on the sources of variance reduction due to the coupling of the different Markov processes. As will be discussed, one source is the lowering of energy barriers due to the coupling of high and low temperature components and the consequent improved communication properties. A second and less obvious source of variance reduction is due to the weights used in the weighted empirical measure that appropriately transforms the samples of the symmetrized process. These weights are analogous to the likelihood ratios that appear in importance sampling, and play much the same role in reducing the overall variance. A key question in the design of the algorithms is how to choose the ratios of the higher temperatures to the lowest one.
As we will discuss, the two variance reduction mechanisms respond in opposite ways to changes in these ratios. One can characterize in precise terms the tradeoff and explicitly identify the optimal temperature selection for certain models when the lowest temperature is sent to zero, i.e., when sampling is most difficult.

Back to Workshop IV: Uncertainty Quantification for Stochastic Systems and Applications
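The Metropolis-type swap mechanism the abstract refers to can be sketched in a few lines. This is a generic two-replica parallel tempering sampler, not the infinite-swapping algorithm from the talk; the energy function, step size, and inverse temperatures are arbitrary choices for the example:

```python
import math
import random

def swap_probability(beta_1, beta_2, e_1, e_2):
    """Metropolis acceptance probability for exchanging the states of two
    replicas at inverse temperatures beta_1 and beta_2: the ratio of the
    joint density after and before the swap is exp((b1 - b2)(E1 - E2))."""
    return min(1.0, math.exp((beta_1 - beta_2) * (e_1 - e_2)))

def parallel_tempering(energy, init, betas, n_steps, step=0.5, seed=0):
    """Toy two-replica exchange sampler: one local Metropolis move per
    replica per step, followed by a state-swap attempt."""
    rng = random.Random(seed)
    xs = list(init)
    for _ in range(n_steps):
        # Local Metropolis move at each inverse temperature.
        for i, beta in enumerate(betas):
            prop = xs[i] + rng.uniform(-step, step)
            d_e = energy(prop) - energy(xs[i])
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                xs[i] = prop
        # Attempt to swap the two replicas' current states.
        if rng.random() < swap_probability(betas[0], betas[1],
                                           energy(xs[0]), energy(xs[1])):
            xs[0], xs[1] = xs[1], xs[0]
    return xs
```

The low-temperature (high beta) chain inherits states discovered by the fast-mixing high-temperature chain through the swaps, which is the "lowered energy barrier" effect described above.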
https://brokenco.de/2019/02/21/docker-in-jenkins.html
Jenkins Pipeline ♥ Docker

21 Feb 2019 · jenkins pipeline docker

As the number of different ways to configure Jenkins approaches infinity, I have come to appreciate one pattern in particular as generally good. When I first started the draft of this blog post, three years ago (!), I wanted to share that by coupling Jenkins and Docker together I finally had the CI/CD server I had been waiting for. In the past few years, Docker has increasingly become the default environment for Jenkins, and rightfully so. In my previous post I mentioned some of the security pitfalls of using Docker in an untrusted CI/CD context, like that which we have in the Jenkins project. Regardless of trusted or untrusted workloads, I still think Docker is a key piece of our CI/CD story, and here I would like to outline why we need Docker in the first place.

To understand Docker’s popularity, it’s important to consider the problems which Docker solves. Some of the things I have historically found difficult around designing and implementing CI/CD processes have been:

• Managing environmental dependencies: for various Ruby, Python and other non-JVM-based workloads there have always been system dependencies that build machines have needed. One build needs libxml2-dev while another needs zlib-dev while another requires a specific Linux and OpenSSL version combination. Some dependencies like nokogiri try to hand-wave these dependencies away by embedding the compilation of these system libraries into their installation process. For CI, this not only introduces another set of build-time variables to concern yourself with, but a myriad of security concerns. At a previous employer who will remain nameless, we ended up having to split our entire Jenkins agent pool because we had one subset of projects which depended on MySQL 5.1 and another which needed MySQL 5.5.
Since we did not have the tooling to move the dependencies closer to the applications, we had to manage these system dependencies in the release engineering team. Suffice it to say, this was an unpleasant state of affairs. A problem which is virtually nonexistent with Docker-based environments. • Managing service dependencies: many of the workloads I have worked with in the past have been web applications. These services invariably need a Redis, a Memcached or a MySQL running around in the background to execute tests of any non-trivial scope. As time wears on, services (for better or worse) end up relying on version-specific installations of these background processes. To complicate matters more, each build must have a pristine state in these background processes. In some cases this might mean no state whatsoever, in others it might mean some “fixtures” pre-loaded into the database(s). This is also one area where docker-compose is absolutely stellar, and should be used gratuitously to incorporate external service dependencies into the build process directly. • Managing performance: ideally, as an application grows, more test cases get added. More test cases mean longer build times which can lead to developer frustration, or worse, developers attempting to circumvent or reduce reliance on the CI/CD server. With automation servers like Jenkins, one might have a large pool of machines waiting idle for workloads, but the challenge is mapping a project’s test execution to that pool of machines. By utilizing Docker for the execution environment, I have found it much easier to build out a large homogenous pool of agents which can be ready for highly parallelizable workloads, without running into too much trouble from an administration standpoint. Quickly getting back to a pristine state typically involves simply stopping and restarting a container. 
With container orchestrators like Kubernetes, Docker Swarm, AWS Elastic Container Service, or Azure Container Instances, it’s possible (and really cool!) to enable huge amounts of auto-scaled compute for CI/CD workloads. Which, by their very nature, are very elastic, especially just before lunch time and the end of the day!

This tutorial has a quick Node-based Jenkins Pipeline example, but below I’ve included a very simple example of how easy incorporating Docker into a Pipeline can be:

pipeline {
    agent {
        docker {
            image 'node:9-alpine'
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}

There are a number of CI/CD workloads where Docker containers might not be helpful: mobile build/test, embedded, or non-Linux based applications. But should your workload be Docker-compatible, I cannot recommend using Docker with Jenkins Pipeline highly enough!
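To sketch the "service dependencies" point from earlier, a docker-compose file wiring version-pinned backing services into a test run might look like this. The service names, image tags, and test command are placeholders for the example, not taken from the post:

```yaml
version: "3"
services:
  redis:
    image: redis:5-alpine          # version-pinned cache dependency
  db:
    image: mysql:5.7               # version-pinned database dependency
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
      MYSQL_DATABASE: app_test
  tests:
    build: .                       # the application under test
    command: npm run test
    depends_on:
      - redis
      - db
```

Each build gets a fresh copy of the backing services, so returning to a pristine state is just a matter of tearing the containers down and bringing them back up.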
http://math.stackexchange.com/questions/372525/is-there-a-tree-like-proof-of-compactness-theorem-in-the-case-of-uncountably-m
# Is there a “tree-like” proof of the compactness theorem in the case of uncountably many variables?

I like proofs using trees and König's lemma, since they are very visual. One of the applications of König's lemma you can show to students is proving the compactness theorem for propositional calculus, which says that a set of formulas is satisfiable if and only if every finite subset is satisfiable. In this proof you have to order the statements in your theory into a sequence and order the variables accordingly. Then you construct a tree in which each vertex corresponds to truth values of the variables from the statements $T_1,\dots,T_n$, and then an infinite branch gives you an interpretation of all the variables which is consistent with the whole theory. In this way you can prove the compactness theorem under the assumption that the set of variables is countable.

Is there a proof in a similar spirit which works for arbitrary sets of variables too? (Or some technique which can be used to obtain the uncountable version if we already have the countable version?) I am interested mainly in proofs avoiding the use of the completeness theorem.

- I started to write an answer, but I seem to get off-topic. The key point from that answer is that $\omega$ has the tree property, while it is consistent that no other cardinal has it. If one likes large cardinals then one can also assume strong enough axioms (e.g. a proper class of weakly-compact cardinals) and then have "enough" cardinals with the tree property on which this process can follow through. –  Asaf Karagila Apr 25 '13 at 15:29

- @Asaf: I think your comment will lead to a compactness theorem for a language with conjunctions and disjunctions of fewer than $\kappa$ formulas at a time, where $\kappa$ is weakly compact. The theorem will say that a set of $\kappa$ such statements is satisfiable if every subset of cardinality $<\kappa$ is satisfiable. (That's why such cardinals are called weakly compact.)
But the question is about ordinary compactness for finite-sized formulas, and assuming that finite subsets of the theory are satisfiable. –  Andreas Blass Apr 25 '13 at 17:31 @Andreas: I think that you're right. That's what I get for reading the question before fully waking up from my nap! :-) –  Asaf Karagila Apr 25 '13 at 18:20 @Asaf Even if not exactly answering my question; your comment was very useful to me - I've learned about tree property of cardinals. Thanks to both of you! –  Martin Sleziak Apr 25 '13 at 18:21 I don't know a way to make the proof of propositional compactness, for uncountable sets of formulas, look like a tree argument. But if you go back to how König's Lemma is usually proved and apply that argument to the particular tree that you described, then the resulting argument for propositional compactness generalizes quite directly, to the following argument. Given a set $S$ of propositional formulas, well-order the set of propositional variables occurring in these formulas, and proceed by (transfinite) induction over this well-ordering. When your inductive process arrives at a variable $p$, you assign a truth value to $p$, so as to preserve the following inductive hypothesis: (*) For any finite subset $F$ of $S$, there exists a truth assignment that makes $F$ true and agrees with all the values assigned so far in your inductive process. The hypothesis of the compactness theorem is that (*) holds at the beginning of your process, before you've assigned values to any of the variables. If (*) holds when you're about to assign a value to $p$, then you can choose that value so as to maintain (*). The reason is that if the value "true" for $p$ fails to work because of a finite $F_1\subseteq S$ and "false" fails because of $F_2$, then (*) already failed before you gave $p$ any truth value, because of $F_1\cup F_2$. 
You also have to check that (*) continues to hold at limit stages, but that is easy because any failure at a limit stage, involving a finite $F$ and thus only finitely many variables, would have already been a failure at an earlier stage. After your transfinite induction is complete, (*) says that the specific truth values you've assigned to the variables satisfy all finite subsets of $S$ and therefore satisfy $S$. - Here is a way to prove compactness for propositional logic in terms of trees. As Andreas mentions in the comments, the tree property for $\omega$ is not enough. Instead, we can use a two-cardinal variation of the tree property. If $\kappa$ is a regular cardinal and $\lambda \ge \kappa$, a $(\kappa,\lambda)$ tree $T$ is a set of $2$-valued functions whose domains are elements of $\mathcal{P}_\kappa(\lambda)$ (that is, subsets of $\lambda$ of size $<\kappa$) such that • the restriction of every function in $T$ is in $T$, and • every element of $\mathcal{P}_\kappa(\lambda)$ is the domain of some function in $T$. A cofinal branch of $T$ is a function $f:\lambda \to 2$ such that $f \restriction s \in T$ for every $s \in \mathcal{P}_\kappa(\lambda)$. There is a principle TP$(\kappa,\lambda)$, isolated by Weiss, that says that every thin $(\kappa,\lambda)$ tree has a branch. We don't need to use the definition of "thin", because it is automatic if $\kappa = \omega$ or more generally if $\kappa$ is a strong limit cardinal. If $\kappa$ is a strongly inaccessible cardinal, then TP$(\kappa,\lambda)$ holds if and only if $\kappa$ is $\lambda$-compact, so it is a large cardinal property. However, TP$(\omega,\lambda)$ can be proved to hold in ZFC: Take an ultrafilter $U$ on $\mathcal{P}_\omega(\lambda)$ that is fine, meaning that it contains the set $\{s \in \mathcal{P}_\omega(\lambda) : \alpha \in s\}$ for every $\alpha < \lambda$. Because we are not requiring any amount of completeness, such an ultrafilter exists by Zorn's lemma. 
Given an $(\omega,\lambda)$-tree $T$, choose for each set $s \in \mathcal{P}_\omega(\lambda)$ some function $f_s \in T$ with domain $s$. Then we can define a branch $f$ of $T$ by $f(\alpha) = 1$ if and only if $f_s(\alpha) = 1$ for $U$-almost every $s \in \mathcal{P}_\omega(\lambda)$.

Now we can use this tree property to find a truth assignment for a set $S$ of propositional formulas. By enlarging $S$ we may assume that it is closed under subformulas and in particular that every propositional variable appearing in a formula in $S$ is itself in $S$. Define an $(\omega, S)$-tree $T$ consisting of all consistent truth assignments defined on finite subsets $s$ of $S$. (Strictly speaking we ought to fix a bijection between $S$ and $|S|$ here.) By consistent I mean, for example, that if the formulas $\varphi$, $\psi$, and $\varphi \wedge \psi$ are all in $s$, then $\varphi \wedge \psi$ is assigned the value 1 if and only if both $\varphi$ and $\psi$ are. Assuming TP$(\omega,|S|)$, this tree has a branch, and such a branch is a truth assignment that satisfies all the formulas in $S$.

Excellent! – Andres Caicedo Apr 26 '13 at 3:59
@Andres Thanks! Although I have thought about these properties TP$(\kappa,\lambda)$ (and SP and ITP and ISP) a fair amount, it only today occurred to me that $\kappa$ could be $\omega$. It seems like this case gets neglected sometimes. – Trevor Wilson Apr 26 '13 at 4:52
@user14111 I hadn't heard of the Rado Selection Principle, but it does seem to be very similar (although it's not clear to me that either implies the other in ZF). Thanks for pointing this out. – Trevor Wilson May 6 '13 at 0:09
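For finitely many variables, the inductive step in the first answer can be made concrete: at each variable, keep whichever truth value still leaves every (finite) subset of the formulas satisfiable, exactly as hypothesis (*) demands. A minimal Python sketch (my own illustration, assuming formulas are given as clauses, i.e. sets of signed variables, which the argument itself does not require but which keeps the satisfiability check simple):

```python
from itertools import product

# Formulas as clauses: each clause is a set of (variable, bool) literals,
# satisfied when at least one literal matches the assignment.
def satisfiable(clauses, variables, fixed):
    """Is there a total assignment extending `fixed` satisfying all clauses?"""
    free = [v for v in variables if v not in fixed]
    for values in product([False, True], repeat=len(free)):
        a = dict(fixed, **dict(zip(free, values)))
        if all(any(a[v] == b for v, b in c) for c in clauses):
            return True
    return False

def greedy_assignment(clauses, variables):
    """Mirror the transfinite induction: fix variables one at a time,
    choosing a value that keeps the whole set satisfiable (hypothesis (*)).
    For a finite clause set, "every finite subset" is just the whole set."""
    fixed = {}
    for v in variables:
        for value in (True, False):
            if satisfiable(clauses, variables, {**fixed, v: value}):
                fixed[v] = value
                break
        else:
            raise ValueError("some finite subset is unsatisfiable")
    return fixed

# (p or q), (not p), (q or not r): p is forced False, hence q forced True.
clauses = [{("p", True), ("q", True)}, {("p", False)}, {("q", True), ("r", False)}]
a = greedy_assignment(clauses, ["p", "q", "r"])
```

The point of the sketch is only the shape of the induction: if neither truth value for `v` preserves satisfiability, then (*) already failed before `v` was reached, which is exactly the $F_1 \cup F_2$ step in the proof.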
https://docs.mantidproject.org/nightly/algorithms/MuonProcess-v1.html
# MuonProcess v1

## Summary

Processes and analyses a Muon workspace.

## Properties

| Name | Direction | Type | Default | Description |
|---|---|---|---|---|
| InputWorkspace | Input | Workspace | Mandatory | |
| Mode | Input | string | Combined | Mode to run in. CorrectAndGroup applies dead time correction and grouping; Analyse changes bin offset, crops, rebins and calculates asymmetry; Combined does all of the above. Allowed values: ['Combined', 'Analyse', 'CorrectAndGroup'] |
| SummedPeriodSet | Input | int list | | Comma-separated list of periods to be summed |
| SubtractedPeriodSet | Input | int list | | Comma-separated list of periods to be subtracted from the SummedPeriodSet |
| ApplyDeadTimeCorrection | Input | boolean | False | |
| DeadTimeTable | Input | TableWorkspace | | |
| DetectorGroupingTable | Input | TableWorkspace | | Table with detector grouping information, e.g. from LoadMuonNexus. |
| TimeZero | Input | number | Optional | Value used for Time Zero correction |
| LoadedTimeZero | Input | number | Mandatory | |
| RebinParams | Input | dbl list | | Params used for rebinning. If empty, rebinning is not done. |
| Xmin | Input | number | Optional | Minimal X value to include |
| Xmax | Input | number | Optional | Maximal X value to include |
| OutputType | Input | string | PairAsymmetry | What kind of workspace is required for analysis. Allowed values: ['PairAsymmetry', 'GroupAsymmetry', 'GroupCounts'] |
| PairFirstIndex | Input | number | Optional | Workspace index of the first pair group |
| PairSecondIndex | Input | number | Optional | Workspace index of the second pair group |
| Alpha | Input | number | 1 | Alpha value of the pair |
| GroupIndex | Input | number | Optional | Workspace index of the group |
| OutputWorkspace | Output | Workspace | Mandatory | An output workspace. |
| CropWorkspace | Input | boolean | True | Determines if the input workspace should be cropped at Xmax; Xmin is still applied. |
| WorkspaceName | Input | string | | The name of the input workspace |

## Description

The algorithm replicates the sequence of actions undertaken by MuonAnalysis in order to produce a Muon workspace ready for fitting. It is a workflow algorithm used internally by this interface.
It acts on a workspace loaded from a Muon NeXus file, most commonly by LoadMuonNexus v2. Specifically:

1. Apply dead time correction
2. Group the workspaces
3. Offset, crop and rebin the workspace
4. Perform counts or asymmetry calculation to get the resulting workspace.

MuonProcess can be applied in three different modes:

* CorrectAndGroup applies the dead time correction and groups the workspaces only, returning the output of this step without performing the rest of the analysis.
* Analyse performs the second half only, i.e. offset time zero, crop, rebin and calculate asymmetry.
* Combined performs the whole sequence of actions above (CorrectAndGroup followed by Analyse).

### Analysis

The asymmetry is calculated from either one or several provided data-acquisition period workspaces (only the first one is mandatory). When more than one is supplied, the algorithm merges the counts and then calculates the asymmetry. The way in which period data is merged before the asymmetry calculation is determined by SummedPeriodSet and SubtractedPeriodSet. For example, setting SummedPeriodSet to "1,2" and SubtractedPeriodSet to "3,4" would result in the period arithmetic $$(1+2)-(3+4)$$.

The algorithm supports three output types:

* PairAsymmetry - asymmetry is calculated for a given pair of groups, using the alpha value provided. The pair to use is specified via PairFirstIndex and PairSecondIndex.
* GroupAsymmetry - asymmetry between the given group (specified via GroupIndex) and the Muon exponential decay is calculated.
* GroupCounts - no asymmetry is calculated; pure counts of the specified group (via GroupIndex) are used.

Note that the first part of the algorithm will have grouped the spectra in the input workspaces, hence the term 'group' is used here instead of 'spectrum'.

## Usage

Note: For examples of applying custom grouping, please refer to the MuonGroupDetectors v1 documentation.
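As an illustration of the period arithmetic and of a pair asymmetry calculation (a sketch only — the helper names below are mine, not part of the algorithm's API; the formula $(F - \alpha B)/(F + \alpha B)$ is the standard muon pair asymmetry):

```python
import numpy as np

def merge_periods(periods, summed, subtracted):
    """Combine per-period count arrays: sum the `summed` periods and
    subtract the sum of the `subtracted` ones, e.g. (1+2)-(3+4)."""
    merged = sum(periods[i - 1] for i in summed)  # periods are 1-indexed
    if subtracted:
        merged = merged - sum(periods[i - 1] for i in subtracted)
    return merged

def pair_asymmetry(forward, backward, alpha=1.0):
    """Standard muon pair asymmetry between two grouped detector counts."""
    return (forward - alpha * backward) / (forward + alpha * backward)

periods = [np.array([10., 20.]), np.array([30., 40.]),
           np.array([5., 5.]), np.array([5., 5.])]
# SummedPeriodSet = "1,2", SubtractedPeriodSet = "3,4":
# (10+30)-(5+5) = 30 in the first bin, (20+40)-(5+5) = 50 in the second
merged = merge_periods(periods, summed=[1, 2], subtracted=[3, 4])
```

This is only meant to make the `(1+2)-(3+4)` notation concrete; the real algorithm operates on full workspaces with errors and metadata.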
Example - Integrated pair asymmetry for MUSR run (combined mode): # Begin by loading data into a workspace to process DetectorGroupingTable = "grouping", grouping = mtd['grouping'] # detector grouping loaded from file grouping = grouping.getItem(0) # grouping here is a WorkspaceGroup - use table for first period Mode = "Combined", DetectorGroupingTable = grouping, SummedPeriodSet = "1", TimeZero = 0.55, Xmin = 0.11, Xmax = 12, OutputType = "PairAsymmetry", PairFirstIndex = 0, PairSecondIndex = 1, Alpha = 1.0, OutputWorkspace = "MuonProcess_output") processed = mtd['MuonProcess_output'] output_int = Integration(processed) print('Integrated asymmetry for the run: {0:.3f}'.format(output_int.readY(0)[0])) Output: Integrated asymmetry for the run: 1.701 Example - Pair asymmetry for a single period (analysis only): # Create example workspace y = [1,2,3] + [4,5,6] x = [1,2,3] * 2 first_period = CreateWorkspace(x, y, NSpec=2) input = GroupWorkspaces(first_period) # Grouping grouping = CreateEmptyTableWorkspace() output = MuonProcess(InputWorkspace = input, Mode = "Analyse", DetectorGroupingTable = grouping, SummedPeriodSet = "1", OutputType = 'PairAsymmetry', PairFirstIndex = 1, PairSecondIndex = 0, Alpha = 0.5) Output: Output: [ 0.77777778 0.66666667 0.6 ] Example - Group asymmetry for two periods (analysis only): # Create example workspaces y1 = [100,50,10] y2 = [150,20,1] x = [1,2,3] CreateWorkspace(x, y1,OutputWorkspace="first") first_period = mtd["first"] CreateWorkspace(x, y2,OutputWorkspace="second") second_period = mtd["second"] input = GroupWorkspaces([first_period, second_period]) # Grouping grouping = CreateEmptyTableWorkspace() output = MuonProcess(InputWorkspace = input, Mode = "Analyse", DetectorGroupingTable = grouping, SummedPeriodSet = 1, SubtractedPeriodSet = 2, OutputType = 'GroupAsymmetry', GroupIndex = 0,Xmin=0,Xmax=4) Output: Output: [-0.44618444 0.54537717 0.24908794] Categories: AlgorithmIndex | Workflow\Muon
http://mathhelpforum.com/calculus/90784-integral-values-moving-upward.html
Integral Values Moving Upward

1. A particle moves along the y-axis, so that its velocity at any time t >= 0 is given by v(t) = t·cos(t^2). At time t = 0, the position of the particle is y = 3. Find the values of t, 0 <= t <= 2, for which the particle moves upward. Can somebody show me how to do this?

2. The particle moves upward when $v(t)>0$. For $t>0$ this happens exactly when $\cos t^{2} >0$, which on the given interval means $0 < t < \sqrt{\frac{\pi}{2}}$. (For $0 \le t \le 2$ we have $t^2 \le 4 < \frac{3\pi}{2}$, so no second interval of positivity appears.) Kind regards $\chi$ $\sigma$
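A quick numerical sanity check of this answer (my own addition, not part of the original thread): sampling $v(t) = t\cos(t^2)$ on $[0,2]$ confirms that it is positive exactly on $(0, \sqrt{\pi/2})$, with $\sqrt{\pi/2} \approx 1.2533$.

```python
import math

def v(t):
    # velocity of the particle along the y-axis
    return t * math.cos(t ** 2)

boundary = math.sqrt(math.pi / 2)  # ~1.2533, where cos(t^2) changes sign

# sample strictly inside (0, boundary) and strictly inside (boundary, 2]
inside = [0.001 + i * (boundary - 0.002) / 99 for i in range(100)]
outside = [boundary + 1e-6 + i * (2 - boundary - 1e-6) / 99 for i in range(100)]
```

On `inside` every sample gives v(t) > 0, and on `outside` every sample gives v(t) < 0, matching the analytic answer.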
http://mathhelpforum.com/differential-equations/123387-pde-harmonic-functions.html
Let u be a harmonic function in $\displaystyle K(0,1) \subset \mathbb{R}^n$, $n \ge 2$, and let

$\displaystyle v(x) = \|x\|^{2n-1}\, u\!\left(\frac{x}{\|x\|^2}\right)$
https://hendrics.readthedocs.io/en/master/tutorials/pulsars.html
# Pulsation searches

Note: For a general introduction to HENDRICS's workflow, please read the Introductory concepts and example analysis tutorial.

We have a pulsar observation with, e.g., NuSTAR, and we want to find pulsations in it. The general procedure is to look for pulsations using a power density spectrum (see Introductory concepts and example analysis) or similar methods and, if we do find a promising candidate frequency, to investigate further with the Epoch Folding or the Z search.

Let's say we have found a peak in the power density spectrum at about 0.101 seconds, or 9.9 Hz, and we want to investigate more. We start from the _event_ file. If we have run HENreadevents on the original mission-specific event file, we have a HENDRICS-format event file (ending with _ev.nc or _ev.p), e.g.

    $ ls
    002A.evt  002A_ev.nc

To look for pulsations with the epoch folding around the candidate frequency 9.9 Hz, we can run HENefsearch as such:

    $ HENefsearch -f 9.85 -F 9.95 -n 64 --fit-candidates

where the options -f and -F give the minimum and maximum frequencies to search, -n the number of bins in the pulsed profile, and with --fit-candidates we specify to not limit the search to the epoch folding, but also look for peaks and fit them to find their centroids.

The output of the search is in a file ending with _EF.nc or _EF.p, while the best-fit model is recorded in pickle files:

    $ ls
    002A.evt  002A_ev.nc  002A_EF.nc  002A_EF__mod0__.p

To use the Z search, we can use the HENzsearch script with very similar options:

    $ HENzsearch -f 9.85 -F 9.95 -N 2 --fit-candidates

where the -N option specifies the number of harmonics to use for the search. The output of the search and the fit is recorded in files similar to those of the epoch folding:

    $ ls
    002A.evt  002A_ev.nc  002A_Z2n.nc  002A_Z2n__mod0__.p

We can plot the results of this search with HENplot, as such:

    $ HENplot 002A_Z2n.nc

## NEW: Fast searches

HENDRICS now implements a much faster, experimental algorithm for pulsation searches.
Select this algorithm with the --fast option on the command line of HENzsearch. Instead of calculating the phase of all photons at each trial value of frequency and derivative, we pre-bin the data in small chunks and shift the different chunks to the amount required by different trial values. Each pre-folding leads to a large number of trial values to be evaluated. This only works if we assume that the trial frequency is sufficiently close to the initial one that no signal leaks into nearby bins inside the sub-profiles. This requires that we choose a sufficiently large number of sub-profiles, and limit the total shift to reasonable values to limit this leak. Given the wanted range of frequencies to search, the program chooses automatically the number of trial frequencies and fdots to derive from each given pre-folding, and when to perform a new pre-folding. At the moment, the trial fdots are chosen automatically and cannot be defined by the user. The only actions the user can do are the selection of the mean fdot and the parameter --npfact that increases the number of trial values to obtain from a single central frequency/fdot combination (npfact=2 means that the number of trial values will be double for both the frequency and the fdot, so four times the trials in the end). The results of this Z search can be plotted with HENplot. There is at the moment no automatic fitting being performed as in the slow option. ## Searching for pulsars and measuring frequency derivatives interactively¶ HENphaseogram is an interactive phaseogram to adjust the values of the frequency and frequency derivatives of pulsars.
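The shift-and-add idea behind the fast search can be illustrated with a toy sketch (my own illustration, not HENDRICS code): pre-fold the data into sub-profiles at a starting frequency, then evaluate nearby trial frequencies by rolling each sub-profile by the phase drift it would accumulate, instead of re-folding every photon.

```python
import numpy as np

nbin, nchunks = 32, 8
template = np.exp(-0.5 * ((np.arange(nbin) - 10) / 2.0) ** 2)  # pulse shape

# Simulate sub-profiles folded at a slightly wrong frequency: each chunk's
# profile drifts by `true_shift` bins with respect to the previous one.
true_shift = 3  # bins of drift per chunk, encoding the frequency error
subprofiles = np.array([np.roll(template, j * true_shift)
                        for j in range(nchunks)])

def score(shift_per_chunk):
    """Align sub-profiles assuming a given drift and measure pulse sharpness.
    Rolling pre-binned profiles replaces re-folding all the photons."""
    aligned = sum(np.roll(p, -j * shift_per_chunk)
                  for j, p in enumerate(subprofiles))
    return aligned.max()

# Each trial drift corresponds to one trial frequency near the pre-fold value
trials = range(-5, 6)
best = max(trials, key=score)
```

The recovered `best` drift equals the simulated one, because only the correct per-chunk shift stacks all pulses into a single sharp peak. The real algorithm additionally handles fdot trials, leakage between bins, and re-folds when the shifts grow too large.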
https://zbmath.org/?q=an:0862.60018
## A central limit theorem for a one-dimensional polymer measure. (English) Zbl 0862.60018

Summary: Let $$(S_n)_{n\in\mathbb{N}_0}$$ be a random walk on the integers having bounded steps. The self-repellent (resp., self-avoiding) walk is a sequence of transformed path measures which discourage (resp., forbid) self-intersections. This is used as a model for polymers. Previously, we proved a law of large numbers [Probab. Theory Relat. Fields 96, No. 4, 521-543 (1993; Zbl 0792.60097) and ibid. 100, No. 4, 513-544 (1994; Zbl 0810.60095)]; that is, we showed the convergence of $$|S_n|/n$$ toward a positive number $$\Theta$$ under the polymer measure. The present paper proves a classical central limit theorem for the self-repellent and self-avoiding walks; that is, we prove the asymptotic normality of $$(S_n-\Theta n)/\sqrt n$$ for large $$n$$. The proof refines and continues results and techniques developed previously.

### MSC:

60F05 Central limit and other weak theorems
58E30 Variational principles in infinite-dimensional spaces
60F10 Large deviations
60G50 Sums of independent random variables; random walks

### Citations:

Zbl 0792.60097; Zbl 0810.60095

### References:

[1] Aldous, D. J. (1986). Self-intersections of 1-dimensional random walks. Probab. Theory Related Fields 72 559-587. · Zbl 0602.60055
[2] Bolthausen, E. (1990). On self-repellent one-dimensional random walks. Probab. Theory Related Fields 86 423-441. · Zbl 0691.60060
[3] Brydges, D. C. and Spencer, T. (1985). Self-avoiding walk in 5 or more dimensions. Comm. Math. Phys. 97 125-148. · Zbl 0575.60099
[4] Chung, K. L. (1967). Markov Chains with Stationary Transition Probabilities. Springer, Berlin. · Zbl 0146.38401
[5] Greven, A. and den Hollander, F. (1993). A variational characterization of the speed of a one-dimensional self-repellent random walk. Ann. Appl. Probab. 3 1067-1099. · Zbl 0784.60094
[6] Knight, F. B. (1963). Random walks and a sojourn density process of Brownian motion. Trans. Amer. Math. Soc. 109 56-86. · Zbl 0119.14604
[7] König, W. (1993). The drift of a one-dimensional self-avoiding random walk. Probab. Theory Related Fields 96 521-543. · Zbl 0792.60097
[8] König, W. (1994). The drift of a one-dimensional self-repellent random walk with bounded increments. Probab. Theory Related Fields 100 513-544. · Zbl 0810.60095
[9] Madras, N. and Slade, G. (1993). The Self-Avoiding Walk. Birkhäuser, Boston. · Zbl 0780.60103
[10] Seneta, E. (1981). Non-Negative Matrices and Markov Chains. Springer, New York. · Zbl 0471.60001
[11] Spitzer, F. (1964). Principles of Random Walk. Van Nostrand, Princeton. · Zbl 0119.34304
[12] van der Hofstad, R. and den Hollander, F. (1995). Scaling for a random polymer. Comm. Math. Phys. 169 397-440. · Zbl 0821.60078
[13] van der Hofstad, R., den Hollander, F. and König, W. (1995). Central limit theorem for the Edwards model. Ann. Probab. · Zbl 0873.60009

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
http://studycopter.com/feed/daily-current-affairs-3-key-updates-2/?shared=email&msg=fail
## Daily Current Affairs – 3 Key Updates

1. NASA launches satellite to track Earth's melting ice

In a bid to understand Earth's ice sheets, glaciers, sea ice, snow cover and permafrost, NASA successfully launched its Ice, Cloud and Land Elevation Satellite-2, or ICESat-2. ICESat-2 carries NASA's most advanced laser instrument — the Advanced Topographic Laser Altimeter System, or ATLAS. It measures height by precisely timing how long it takes individual photons of light from a laser to leave the satellite, bounce off Earth and return to the satellite. The satellite will provide critical observations of how ice sheets, glaciers and sea ice are changing, leading to insights into how these changes impact people where they live. ICESat-2's orbit will make 1,387 unique ground tracks around Earth in 91 days and then repeat the same ground pattern from the beginning.

2. India ranks third globally in terms of number of family-owned businesses

India, with a total of 111 companies and $839 billion in total market capitalisation, continues to rank third globally in terms of the number of family-owned companies, Credit Suisse said in its report. It closely follows China (159 companies) and the US (121 companies). The firm analysed over 1,000 family-owned, publicly listed companies and compared their 10-year performance to a control group of more than 7,000 non-family-owned companies globally. In the non-Japan Asian region, China, India and Hong Kong dominate: family businesses from these three territories have a combined market capitalisation of $2.85 trillion.

3. Human Development Index 2018

India ranks a low 130 out of 189 countries in the latest Human Development Index (HDI) released by the United Nations Development Programme, with the findings indicating glaring inequality in the country even though "millions have been lifted out of poverty".
The UNDP report stated that with an HDI value of 0.640, up from last year's 0.636, India is categorised as a medium human development country, and its rank rose one spot compared to the 2017 HDI. According to the 2018 findings, between 1990 and 2017 India's HDI value increased from 0.427 to 0.640, an almost 50 per cent increase, which is "an indicator that millions have been lifted out of poverty". At the same time, in what signals the glaring inequality in the country, the HDI value declines by more than a fourth when adjusted for inequality: the value of India's Inequality-adjusted HDI (IHDI) falls to 0.468, a 26.8 per cent decrease, far worse than the global average decrease of 20 per cent due to inequality. The HDI is a composite measure of every country's attainment in three basic dimensions: standard of living, measured by gross national income (GNI) per capita; health, measured by life expectancy at birth; and education, calculated from mean years of education among the adult population and the expected years of schooling for children. India ranks 127 out of 160 countries on the Gender Inequality Index, which reflects gender-based inequalities in reproductive health, empowerment (political and educational), and economic activity. Norway, at 0.95, is ranked highest on the HDI scale, while Niger is at the bottom at 0.35. The greatest increase in HDI rank over the last five years was achieved by Ireland, followed by Turkey, while the worst declines were seen in the conflict-hit countries of Syria, Libya, and Yemen.
https://socratic.org/questions/what-is-the-common-denominator-for-16-and-20
# What is the common denominator for 16 and 20?

May 3, 2018

The greatest common factor of 16 and 20 is 4. When simplified by that factor, the fraction $\frac{16}{20}$ becomes $\frac{4}{5}$.

#### Explanation:

Factor pairs of 16: 1 × 16, 2 × 8, 4 × 4, 8 × 2, 16 × 1

Factor pairs of 20: 1 × 20, 2 × 10, 4 × 5, 5 × 4, 10 × 2, 20 × 1

The largest factor appearing in both lists is 4, so divide the numerator and the denominator by 4:

$\frac{16}{20} = \frac{16 \div 4}{20 \div 4} = \frac{4}{5}$
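The factor-pair search above is exactly what a greatest-common-divisor computation automates; as a small illustration (using the Euclidean algorithm via the standard library rather than factor lists):

```python
import math

def simplify(numerator, denominator):
    """Reduce a fraction by the greatest common factor of its parts."""
    g = math.gcd(numerator, denominator)
    return numerator // g, denominator // g

g = math.gcd(16, 20)     # the largest factor shared by 16 and 20
frac = simplify(16, 20)  # the reduced fraction as a (num, den) pair
```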
https://fontsaga.com/textsc-latex/
# Textsc Latex – Design, Features, Uses & Meaning

Some days ago, I was asked to give a talk on LaTeX at the Summer School of the University of Trento. A few hours before my presentation, I realized that I hadn't worked with LaTeX in months and had to quickly learn it again. This article is a summary of what little time I spent getting back into the rhythm.

What is Textsc Latex? \textsc{...} is the standard LaTeX command for typesetting its argument in small capitals: lowercase letters are rendered as reduced-size capitals, while uppercase letters stay full size. Its declarative counterpart is \scshape, and it sits alongside the other font-changing commands such as \textbf and \textit.

## How To Change Textsc Latex Font To Small Caps?

If you want to set a span of text in small caps, a reliable way of doing it quickly is \textsc{some text}; to switch the shape for a whole group or environment, use {\scshape some text} instead. Both work in ordinary text; inside math mode you would first return to text with \text{...} or \mbox{...}.

### How To Change The Font Of A Single Text?

My current personal standard fonts are Gentium, Empire, and DreamFont. They're all slightly different from each other but can sometimes be seen together in my documents for a specific purpose (e.g., when I'm giving talks or creating scripts). Local font changes follow the same pattern as small caps: wrap the passage in a group and issue the font declaration inside it, so the change ends when the group closes. A lot has been written about elements such as numbered lines and bullets, but very little on how these elements interact with the environment where you are writing.
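For reference, the basic use of small caps is a single command; here is a minimal, compilable example (my own, not taken from any particular style guide):

```latex
\documentclass{article}
\begin{document}
% \textsc{...} switches to the small-caps shape for its argument only
Small caps via command: \textsc{NASA Headquarters}.

% The declarative form \scshape applies until the enclosing group ends
{\scshape the same text, set via a declaration.}
\end{document}
```

Note that small caps combine with other attributes (such as bold) only if the current font family actually provides the combined shape.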
In this article, we explore some of the details, from basic approaches like numbering lines to presenting ideas with lists and very complex examples involving just about every type of list-handling option you can think of… ### How Do I Convert Single Characters Into Roman Numerals? If you want to turn a single character that is not directly alphabetic-based (i.e., / – ) into an entire number sequence or even make it look “almost” correct there’s one useful thing shortcut in TeX that can save us hours of handwork: \RomanN \ to \RomanNum\ . How will I be able to indent food preparation instructions or lists of ingredients? There’s a very easy way: set-option get extra. This variable is defined by the package, so all you have to do is include it in your preamble with one line like this making sure you edit accordingly. Note that on Unix systems there are other shortcuts available but they’re not necessary for everyone and only slightly different from TeX mode. ### How Do I Convert Fixed-width Or Proportional Fonts Into Variable-width Ones? Let’s consider the function used by Tex to generate headings. It may take 6, 32 or even 90 lines: \usepackage[6pt]{fontenc} \begin{document} Some text here contents of Headline Sections #1 Section content EndHeadLinesections Subsection#1 More sectionmappings ==> B C D e f … T . Other small text #2 Here comes another sum’n up on a title ….. Other small text Here we end yet another subsection Two things are needed to turn this into a variable-width document: * A list of spaces per line. Along with the value for space that can be set at compile-time (see TeX/Options) it makes up the main parameters defining your docutils items. Setting up such a list would correspond to doing this from scratch in LaTeX, But there are preprocessor symbols associated with those values in CSS and friends so we don’t need any program for that purpose and solve all our needs anyway by including them directly instead. 
This date back to an old version of Indic support which is now mostly unused. * A list of line-feedings in two. This needs to be defined separately each time one will use a different value which would normally be passed as the parameter, but we don’t have to worry about it because they’ve handled automatically thanks to these defaults variables: \ly&% and all_LF'(6, true); We can just include them like so by adding the following lines (think m @ head=<<date>firstname lastname>>file1;#5linewidth=’0pt’;@ for that definition. ### Font Sizes, Families, And Styles The line height is controlled by the box condition. The font size can be set separately using a variable which defaults to \forcentheta as defined in CSS and friends. Here are some examples:$WWidth=2 em Other small text #1 Contains 6 spaces per line here. This little stuff that we have been writing down without paying attention at the LaTeX level has its own parameters, mainly concerning fonts sizes and style commands (which roughly corresponds to what is called kerning). Bibliography fields It’s rather tricky but it works fine with libraries or pages so you don’t need to read too much about it. You would only have to decide the list of bibliographic databases which you want apart from any LaTeX commands that may contain \bibitemstyle; these can be input into Indic and rendered directly as such in either a new section or subdocument (which is what we will do). The problem lies in how to break down someone’s sentence, highlighting all its content parts with a lot of different values each; in several cases, they are very similar but users usually style them differently anyway. 
For example, a manual entry looks like \bibitem[label]{key} followed by the formatted reference text, and it is cited with \cite{key}. The result looks very similar to an automatically generated bibliography, but you can choose another font size and possibly a different style, or even change the line height; decoration is not really important (unless you want to mix in quotations). You should decide beforehand what your bibliography entries will look like (a code snippet or some background information, say) so the right font appears at the right place. Strictly speaking, hand-written entries like these are not actual BibTeX references.

### When Using \textsc, LaTeX Issues Warning: Font Shape ‘OT1/cmr/bx/sc’ Undefined

Of course, you do want bold small caps to appear in your document, so this warning needs a little more work. It means that the Computer Modern family in the default OT1 encoding provides no bold small-caps shape, so LaTeX substitutes a different shape and the result doesn’t look quite right. The fix is to switch to a font setup that actually provides the missing shape rather than to fight the substitution.
Finally, you will want document classes and keywords that help classify your entries — distinctive metadata really matters when people search for specific material!

BibTeX references contained within text: plain BibTeX does not really support nested references — if one entry refers to another and that reference later disappears, you lose it from your bibliography. My understanding is that biblatex handles nested and related entries much better, so complex bibliographies are a good reason to switch.

Features missing and flags defined differently: some users would like better highlighting or coloring of commands, and others would like dedicated flags for tests (e.g., “@runtest” or “@unittests”) so that results can be checked on various platforms. Such custom highlighting and flags are not part of biblatex itself, so users who need them may prefer to stay with whatever scheme they already like. On the pro side, BibTeX’s basic functionality and very efficient performance keep users returning to it instead of more specialized research software.
### When There Is A Problem With Font Size In \textsc, What To Do?

Some of us have experienced the problem where \textsc seems to have no effect, or where LaTeX warns that the small-caps shape is undefined at the requested size. The cause is that a font family may provide small caps only in certain sizes and shapes. Two practical remedies: load \usepackage[T1]{fontenc} together with a family that covers the shapes you use (for example \usepackage{lmodern}), or accept LaTeX’s automatic shape substitution, which changes the appearance but not the content.

If you want a new scheme or plugin to become an official feature of an editor such as LaTeXiT, it should include at least the scheme name, its entry point in the LaTeX code, its version, and information for updates. Benchmark it yourself, run a code checker over it, and test it on a real document before proposing it. And when you announce an update, describe the background of the features you added rather than just saying that they are included; improvements added in one stable release can then reasonably be expected to carry over to later versions, given realistic time frames.
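As a concrete illustration of the small-caps warning and one commonly recommended fix (the T1 encoding plus the Latin Modern fonts) — a minimal sketch:

```latex
\documentclass{article}
% With the default OT1-encoded Computer Modern, the line in the body below
% triggers:  LaTeX Font Warning: Font shape `OT1/cmr/bx/sc' undefined
% because that setup has no bold small-caps shape.
%
% Loading T1 encoding plus Latin Modern provides the missing shape:
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\begin{document}
\textbf{\textsc{Bold Small Caps}}
\end{document}
```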
We’d better let people know that editing features and improvements are not reserved for people who already work happily with LaTeXiT. And don’t write code that breaks when run on a non-English file system! In general, sharpening your own programming skills by developing plugins or other tools (e.g., image viewers) is welcome — that would improve my mental health too.

### How Can I Change The Font Size For \textsc In LaTeX?

Font size and small caps are independent: wrap a size command around the text, as in {\large\textsc{Title}}, or set an explicit size with \fontsize{14}{17}\selectfont before calling \textsc. If a feature only exists from a given release onward, please test whether your desired result works with every version you intend to support before recommending it to others. For anyone developing plugins, also state which external libraries you depend on (jpeg, libpano13, and so on) — no Windows program installer automatically creates environments depending on these libraries, since Windows is not a UNIX platform.
A few lessons from my colleagues’ experiences with external plugin paths: libraries that an application bundles itself rarely cause trouble, while extensions that must be installed separately often do — with TeX Live, by contrast, such problems are rare (yet). One can expose custom configuration through files such as the user’s texmf tree or ~/.latexmkrc, with or without user preferences and font collections as required. A solution that has existed for a considerable time now: simply download fresh binaries each time the bundled information has become out of date, as happens after major releases.

Q: Where Do I Find Instructions For Creating Font Tables?

A: See the TeX4ht FAQ; the standard nfssfont.tex tool also prints a table for any installed font.

Q: The Text Extends Over More Than One Page, But The Characters Are Small. What Could Be Wrong?

A: For long documents, fix the text dimensions explicitly (for example with the geometry package) and use a font with predictable metrics, so that your document keeps the same number of characters per line instead of stretching across paper widths such as landscape mode on A4 printers.

Q: How Do I See How Much Space Was Inserted Or Omitted?

A: The commands \hspace and \vspace insert a stated amount of horizontal and vertical space respectively, so the space between two words, paragraphs, or columns is exactly what you pass them. A minimal document with title metadata looks like:

\documentclass{article}
\title{Title of document}
\author{Author Name}
\begin{document}
\maketitle
\end{document}

#### Conclusion

Sometimes, you may face issues with the font size in \textsc. Here are two solutions:

1. To change the size of small-caps text, wrap a standard size command around it ({\large\textsc{...}}) or use \fontsize{<size>}{<baselineskip>}\selectfont before \textsc; there is no standard \setfontsize command.
2. If LaTeX warns that the shape OT1/cmr/bx/sc is undefined, load \usepackage[T1]{fontenc} together with a family that provides the shape, such as \usepackage{lmodern}.

I hope you now know how \textsc works in LaTeX.
2023-04-01 07:22:25
https://space.stackexchange.com/questions/28340/what-esa-video-contains-this-family-portrait-of-comet-visiting-spacecraft?noredirect=1
# What ESA video contains this “family portrait” of comet-visiting spacecraft?

I don't use (or condone) Facebook, but nonetheless I stumbled upon this page from a search: https://www.facebook.com/ISEE3Reboot/photos/a.801544933207256.1073741827.800538433307906/1384594101569000/?type=3

It shows several cute satellites that are purported to have visited comets, or at least were supposed to do so. It appears to be a screenshot from an ESA video. Any idea what that video might be? A link to it or its ESA page would be greatly appreciated!

So far I've found one of these cuties in this ESA video linked in this ESA tweet, but I am still looking for this "family portrait":

## 2 Answers

The artwork strongly reminds me of the Rosetta mission. So, check out ESA's videos of/on the Rosetta mission.

update: Per @Hobbes' comment, this family portrait is from a series of cartoons ESA commissioned as part of the Rosetta mission. The family portrait appears in 'Once upon a time, Rosetta's grand finale' at 03:27.

• No, it shouldn't be left as a comment, because this is the correct answer. This family portrait is from a series of cartoons ESA commissioned as part of the Rosetta mission. The family portrait appears in 'Once upon a time, Rosetta's grand finale' at 3:27. esa.int/spaceinvideos/Videos/2016/09/… – Hobbes Jul 11 '18 at 7:46

The frame you show appears at timestamp 22:00 of the complete "The amazing adventures of Rosetta and Philae" compilation video. A similar "family portrait" also appears around timestamp 11:30 of the same video. The video can also be found on the ESA website, here: https://m.esa.int/spaceinvideos/Videos/2016/12/The_amazing_adventures_of_Rosetta_and_Philae

• Thanks for finding the complete version. With comparable answers I tend to accept the one from the newest or lower-rep user. – uhoh Jul 12 '18 at 23:50
2020-08-11 00:56:43
https://gdalle.github.io/ImplicitDifferentiation.jl/dev/api/
# API reference

## Docstrings

ImplicitDifferentiation.ImplicitFunction — Type

ImplicitFunction{F,C,L}

Differentiable wrapper for an implicit function x -> ŷ(x) whose output is defined by explicit conditions F(x,ŷ(x)) = 0. If x ∈ ℝⁿ and y ∈ ℝᵈ, then we need as many conditions as output dimensions: F(x,y) ∈ ℝᵈ. Thanks to these conditions, we can compute the Jacobian of ŷ(⋅) using the implicit function theorem:

∂₂F(x,ŷ(x)) * ∂ŷ(x) = -∂₁F(x,ŷ(x))

This requires solving a linear system A * J = B, where A ∈ ℝᵈˣᵈ, B ∈ ℝᵈˣⁿ and J ∈ ℝᵈˣⁿ.

Fields:

• forward::F: callable of the form x -> ŷ(x)
• conditions::C: callable of the form (x,y) -> F(x,y)
• linear_solver::L: callable of the form (A,b) -> u such that A * u = b

source
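To make the A * J = B system concrete, here is a scalar sketch of the same computation in Python (illustrative only — the package itself is Julia, and the names below are not part of its API):

```python
import math

# y(x) defined implicitly by F(x, y) = y^2 - x = 0, i.e. y(x) = sqrt(x).
# The implicit function theorem gives  dF/dy * y'(x) = -dF/dx,
# which is the 1x1 case of  A * J = B  with A = dF/dy and B = -dF/dx.

def forward(x):
    return math.sqrt(x)        # the "forward" solver x -> y(x)

def dF_dx(x, y):
    return -1.0                # partial of F(x, y) = y**2 - x w.r.t. x

def dF_dy(x, y):
    return 2.0 * y             # partial of F w.r.t. y

def implicit_derivative(x):
    y = forward(x)
    return -dF_dx(x, y) / dF_dy(x, y)   # solve the (scalar) linear system

# Agrees with the closed-form derivative d/dx sqrt(x) = 1/(2*sqrt(x)):
assert abs(implicit_derivative(4.0) - 0.25) < 1e-12
```

In the multivariate case A is the d×d Jacobian ∂₂F and the division becomes a linear solve — which is exactly what the `linear_solver` field supplies.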
2022-09-26 05:40:35
https://stats.stackexchange.com/questions/155696/reproducing-pankratzs-arima-models-for-u-s-savings-rate/155758
# Reproducing Pankratz's ARIMA models for U.S. savings rate

I'm wondering if anyone has been able to reproduce the U.S. savings rate models in Pankratz's "Forecasting with Dynamic Regression Models"? Googling "pankratz table-1.6", the first link for me is his book, page 264, referring to Table 1.6. This table doesn't exist in my library copy, and I haven't found anything in a Google search for an errata. So I skipped that, but on the next page, he presents an ARIMA(1,0,2) model for the data. With some help from the comments to this question, I found online data from which I extracted the 1959-1979 subset, but my coefficients differ significantly from Pankratz's. From his textbook:

                                  t
    Parameter    Value            Statistic
    -----------  ---------------  -----------
    Constant     1.6138           3.35
    AR{1}        0.7359           9.58
    MA{2}        0.3437           3.31
    Variance     0.663716^2=0.44

I am using the arima class in Matlab's Econometrics toolbox. With the coefficient for the 1st MA lag term set to 0, arima(1,0,2) estimation yields:

                                  t
    Parameter    Value            Statistic
    -----------  ---------------  -----------
    Constant     2.4916           3.774
    AR{1}        0.778541         13.8733
    MA{2}        0.268419         3.45492
    Variance     0.40444          8.23616

The AR1 term is close to Pankratz's, but the MA2 term is about 80% of Pankratz's, and the variance is a bit off. The t values are roughly the same. I noticed that even though the shape of the plot is the same, the scaling is not: Pankratz's peaks at 10% while the online data peaks at 15%. If I scale the latter to peak at 10%, the lag terms stay the same, but the variance & constant term change:

                              Standard      t
    Parameter    Value        Error         Statistic
    -----------  -----------  ------------  -----------
    Constant     1.66106      0.440134      3.774
    AR{1}        0.778541     0.0561178     13.8733
    MA{2}        0.268419     0.0776917     3.45492
    Variance     0.179751     0.0218246     8.23616
Here is the code (I've zeroed out the subtraction of the mean because it only affects the constant term):

    svRate=[ 9.3 9.5 10.0 9.9 10.6 11.1 11.3 11.5 11.1 11.6 11.4 10.8 11.3 11.2 ...
        11.6 11.5 10.7 10.7 9.6 10.1 10.3 9.7 10.2 10.0 10.9 10.9 11.7 11.6 11.5 ...
        11.3 11.0 10.6 10.7 10.6 10.3 11.0 11.0 11.8 11.3 12.0 11.1 11.1 11.9 11.3 ...
        10.9 10.9 10.9 11.6 12.3 11.8 12.2 12.4 11.8 11.9 10.5 10.6 9.8 10.1 11.6 ...
        11.5 11.8 12.5 13.1 13.1 13.2 13.6 13.4 12.9 12.2 11.4 11.7 13.2 12.2 13.0 ...
        13.0 14.1 13.6 12.5 12.2 13.3 12.4 15.0 12.5 12.3 11.7 11.3 11.1 10.4 9.4 ...
        10.0 10.5 10.8 10.9 9.9 10.1 10.1 10.6 9.8 9.4 9.5 ]';
    fprintf('max(svRate) before scaling to 10: %g\n',max(svRate));
    svRate=svRate*10/max(svRate);
    subplot(2,1,1)
    plot(svRate)
    subplot(2,1,2)
    mdl=estimate(arima(1,0,2),svRate-0*mean(svRate));
    mdl=estimate(arima('ARLags',1,'MALags',2),svRate-0*mean(svRate));

I've posted this to

• Why are you missing the first four years of data? The series in Figure 7.1 in the book seems close to this series of personal saving as a percentage of disposable personal income for the period 1955-1979. – javlacalle Jun 5 '15 at 21:11
• Thanks, javlacalle. I modified the question after using your more complete data set.
– StatSmartWannaB Jun 5 '15 at 22:33

This is what I get fitting the following model in R: $$y_t - \mu = \phi (y_{t-1} - \mu) + \epsilon_t + \theta \epsilon_{t-2} \,,$$

    x <- structure(c(9.3, 9.5, 10, 9.9, 10.6, 11.1, 11.3, 11.5, 11.1, 11.6,
        11.4, 10.8, 11.3, 11.2, 11.6, 11.5, 10.7, 10.7, 9.6, 10.1, 10.3, 9.7,
        10.2, 10, 10.9, 10.9, 11.7, 11.6, 11.5, 11.3, 11, 10.6, 10.7, 10.6,
        10.3, 11, 11, 11.8, 11.3, 12, 11.1, 11.1, 11.9, 11.3, 10.9, 10.9,
        10.9, 11.6, 12.3, 11.8, 12.2, 12.4, 11.8, 11.9, 10.5, 10.6, 9.8,
        10.1, 11.6, 11.5, 11.8, 12.5, 13.1, 13.1, 13.2, 13.6, 13.4, 12.9,
        12.2, 11.4, 11.7, 13.2, 12.2, 13, 13, 14.1, 13.6, 12.5, 12.2, 13.3,
        12.4, 15, 12.5, 12.3, 11.7, 11.3, 11.1, 10.4, 9.4, 10, 10.5, 10.8,
        10.9, 9.9, 10.1, 10.1, 10.6, 9.8, 9.4, 9.5), .Dim = c(100L, 1L),
        .Dimnames = list(NULL, "V1"), .Tsp = c(1959, 1983.75, 4), class = "ts")
    arima(x, order=c(1,0,2), include.mean=TRUE, fixed=c(NA,0,NA,NA))
    # Coefficients:
    #          ar1  ma1     ma2  intercept
    #       0.7643    0  0.2839    11.1904
    # s.e.  0.0729    0  0.1092     0.3390
    # sigma^2 estimated as 0.4087:  log likelihood = -97.83,  aic = 203.65

The term reported as intercept is actually the mean $\mu$. The constant $\alpha$ in the model below can be recovered from the relationship $E(y_t)\equiv \mu = \frac{\alpha}{1-\phi}$. $$y_t = \alpha + \phi y_{t-1} + \epsilon_t + \theta \epsilon_{t-2} \,.$$ I get $\hat{\alpha} = 11.1904 \times (1 - 0.7643) = 2.6376$. The difference with the constant reported in the book may be due to the different scale of the data. These results are close to what you get and not very far from the results in the book, except for the MA(2) coefficient. As regards the variance, be aware that the standard deviation $\hat{\sigma}_a$ is reported in the book (not the variance). I get $\sqrt{0.4087}=0.6393$, close to your result and the value reported in the book.

• You show yt=α+ϕyt−1+ϵt+θϵt−1 but refer to MA(2). – IrishStat Jun 6 '15 at 10:07
• @IrishStat I have fixed a typo in the equations, thanks!
– javlacalle Jun 6 '15 at 10:32 • Thank you, javlacalle. Looks like the only mystery is the MA2 coefficient. Your correction about my original variance actually being standard deviation has cleared up that discrepancy. Now, the constant term -- as per the question, Pankratz's seems to match the variation of mine that scales the data to match the peak of 10%, as in his book. So maybe it's just an editing error. – StatSmartWannaB Jun 7 '15 at 2:35 • I find it interesting that R's "intercept" doesn't correspond to the constant term in the regression, but as long as it's explained in the help. Isn't it odd how our close results can still be off by so much? While I haven't done much linear regression, I was under the impression that the procedure was very standard. – StatSmartWannaB Jun 7 '15 at 2:35 • Slight differences may arise across software implementations due to different choices in the initialization of the Kalman filter (if used), optimization algorithm, possible transformation of parameters to ensure the AR polynomial is stationary,... But, in general, I wouldn't expect relevant discrepancies in the results. Although the results from Matlab and R are close to each other, it seems that there are some differences in the implementations that would be interesting to explore. – javlacalle Jun 7 '15 at 13:14
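The mean-versus-constant relationship used in the answer is worth having as a two-line check — a sketch using the fitted values quoted above:

```python
# R's arima() reports the process mean mu as "intercept".  For
# y_t = alpha + phi*y_{t-1} + ... with a stationary AR part,
# E(y_t) = mu = alpha / (1 - phi), so the regression constant is:

def constant_from_mean(mu, phi):
    return mu * (1.0 - phi)

mu_hat, phi_hat = 11.1904, 0.7643   # values from the R fit above
print(round(constant_from_mean(mu_hat, phi_hat), 4))   # 2.6376, as in the answer
```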
2019-05-23 02:58:51
https://physics.stackexchange.com/questions/469425/diffusion-equation-with-time-dependent-boundary-condition
# Diffusion equation with time-dependent boundary condition

I was trying to solve this 1D diffusion problem
$$\dfrac{\partial^2 T}{\partial \xi^2} = \dfrac{1}{\kappa_S}\dfrac{\partial T}{\partial t}\, ,$$
with the boundary conditions
\begin{align} &T(\xi = 2Bt^{1/2},t) = A t^{1/2}\, ,\\ &T(\xi=\infty,t) = 0\, ,\\ &T(\xi,0) = 0\, , \end{align}
where $$A$$ is a constant. I know that the solution is $$T = A \sqrt{t}\, \mathrm{erfc}(\xi/2\sqrt{kt})/\mathrm{erfc}(B\sqrt{k})$$

I tried using the Laplace transform, but I ran into a problem since I have conditions on $$\xi = 2Bt^{1/2}$$ instead of $$\xi = 0$$. More precisely, if the Laplace transform of $$T(\xi,t)$$ is $$\Theta(\xi,s)$$, then after applying the transform together with $$T(\xi=\infty,t) = 0$$ and $$T(\xi,0) = 0$$, I got
$$\Theta(\xi,s) = C_1(s)\exp{\left(-\sqrt{\dfrac{s}{\kappa_S}}\xi\right)}\, .$$
So now, to find $$C_1(s)$$ and use the convolution property of the Laplace transform, I need a condition on $$\xi = 0$$, but I only know that $$T(\xi = 2Bt^{1/2},t) = A t^{1/2}$$. Does the Laplace transform have some other property that would allow me to solve the problem?

• Are you allowed to solve by separation of variables? That way you just get 2 ODEs that are easy to solve. – Ballanzor Mar 30 at 0:52
• I know that the solution is $T = A \sqrt{t}\, \mathrm{erfc}(\xi/2\sqrt{kt})/\mathrm{erfc}(B\sqrt{k})$, so it is clearly not separable :/ – jorafb Mar 30 at 3:08
• I have no idea why that happens... Sorry. It definitely looks separable at first glance – Ballanzor Mar 30 at 9:27

The boundary condition hints that we should try a change of variables.
Let's look for a solution of the form $$T(\xi,t) = A\sqrt{t}\ \tau(\xi/2B\sqrt{t},t)$$ Then the boundary conditions and the equation for $$\tau(x,t)$$ are respectively $$\tau(1,t) = 1,\qquad \tau(\infty,t) = 0$$ and $$t\frac{\partial\tau}{\partial t}(x,t) = \frac12\left(x\frac{\partial\tau}{\partial x}(x,t) - \tau(x,t)\right)+\frac{k}{4B^2}\frac{\partial^2\tau}{\partial x^2}(x,t)$$ This problem is solvable by the separation of variables method. If we look for a solution in the form $$\tau(x,t) = f(x)g(t)$$, then we get $$t\frac{\dot{g}(t)}{g(t)} = \frac1{2f(x)}\left(xf'(x)-f(x)+\frac{k}{2B^2}f''(x)\right) = \lambda = const$$ The boundary condition reads $$\tau(1,t) = f(1)g(t) = 1$$. It follows that $$\lambda = 0$$. Then the equation for $$f(x)$$ is $$\frac{k}{2B^2}f''(x)+xf'(x)-f(x) = 0.$$ If we choose $$g(t) = 1$$, then the boundary conditions for $$f(x)$$ are $$f(1) = 1,\qquad f(\infty) = 0.$$ It's up to you to check whether a solution to this problem reproduces the known solution.
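For completeness, here are the chain-rule steps behind the transformed equation (my own intermediate computation; it reproduces the equation stated in the answer term by term):

```latex
% With T(\xi,t) = A\sqrt{t}\,\tau(x,t) and x = \xi/(2B\sqrt{t}), we have
% \partial x/\partial t = -x/(2t) and \partial x/\partial\xi = 1/(2B\sqrt{t}), so
\frac{\partial T}{\partial t}
  = A\left(\frac{\tau}{2\sqrt{t}}
      + \sqrt{t}\,\frac{\partial\tau}{\partial t}
      - \frac{x}{2\sqrt{t}}\,\frac{\partial\tau}{\partial x}\right),
\qquad
\frac{\partial^2 T}{\partial \xi^2}
  = \frac{A}{4B^2\sqrt{t}}\,\frac{\partial^2\tau}{\partial x^2}.
% Substituting into \partial^2 T/\partial\xi^2 = (1/k)\,\partial T/\partial t
% and multiplying through by k\sqrt{t}/A gives
%   t\,\partial_t\tau = \tfrac{1}{2}\left(x\,\partial_x\tau - \tau\right)
%                       + \tfrac{k}{4B^2}\,\partial_x^2\tau .
```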
2019-10-14 16:54:45
https://zbmath.org/authors/?q=ai%3Amay.ramzi
# zbMATH — the first resource for mathematics

## May, Ramzi

Author ID: may.ramzi
Published as: May, Ramzi
External Links: MGP · Wikidata · ORCID
Documents Indexed: 11 Publications since 2003

#### Co-Authors

- 7 single-authored
- 2 Jendoubi, Mohamed Ali
- 1 Balti, Mounir
- 1 Lemarié-Rieusset, Pierre Gilles

#### Serials

- 1 Applicable Analysis
- 1 Journal of Mathematical Analysis and Applications
- 1 Archiv der Mathematik
- 1 Journal of Differential Equations
- 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods
- 1 Annales de l'Institut Henri Poincaré. Analyse Non Linéaire
- 1 Optimization
- 1 Turkish Journal of Mathematics
- 1 Bulletin des Sciences Mathématiques
- 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris
- 1 Evolution Equations and Control Theory

#### Fields

- 6 Partial differential equations (35-XX)
- 4 Ordinary differential equations (34-XX)
- 4 Fluid mechanics (76-XX)
- 1 Operator theory (47-XX)
- 1 Operations research, mathematical programming (90-XX)

#### Citations contained in zbMATH

10 Publications have been cited 52 times in 39 Documents.

- Jendoubi, Mohamed Ali; May, Ramzi: Asymptotics for a second-order differential equation with nonautonomous damping and an integrable source term. Zbl 1325.34074 (2015)
- May, Ramzi: Asymptotic for a second-order evolution equation with convex potential and vanishing damping term. Zbl 1424.34186 (2017)
- May, Ramzi: The role of the Besov space $${\mathbf B}_{\infty}^{-1,\infty}$$ in the control of the eventual explosion in finite time of the regular solutions of the Navier-Stokes equations. Zbl 1038.35058 (2003)
- May, Ramzi: Global well-posedness for a modified dissipative surface quasi-geostrophic equation in the critical Sobolev space $$H^{1}$$. Zbl 1210.35262 (2011)
- Jendoubi, Mohamed Ali; May, Ramzi: On an asymptotically autonomous system with Tikhonov type regularizing term. Zbl 1217.34085 (2010)
- Balti, Mounir; May, Ramzi: Asymptotic for the perturbed heavy ball system with vanishing damping term. Zbl 1366.34070 (2017)
- May, Ramzi: Long time behavior for a semilinear hyperbolic equation with asymptotically vanishing damping term and convex potential. Zbl 1316.35036 (2015)
- Lemarié-Rieusset, Pierre Gilles; May, Ramzi: Uniqueness for the Navier-Stokes equations and multipliers between Sobolev spaces. Zbl 1185.35170 (2007)
- May, Ramzi: Extension of a uniqueness class for the Navier-Stokes equations. Zbl 1189.35231 (2010)
- May, Ramzi: Uniqueness of solutions of Navier-Stokes equations in Morrey-Campanato spaces. Zbl 1179.35221 (2009)

#### Cited by 49 Authors

11 Attouch, Hedy; 6 May, Ramzi; 5 Chbani, Zaki; 5 Riahi, Hassan; 4 Cabot, Alexandre; 3 Ferreira, Lucas Catão de Freitas; 2 Cui, Shangbin; 2 Deng, Chao; 2 Dossal, Charles; 2 Jendoubi, Mohamed Ali; 2 Rondepierre, Aude; 2 Yamazaki, Kazuo; 2 Zhao, Jihong; 1 Adly, Samir; 1 Aujol, Jean-François; 1 Balti, Mounir; 1 Bégout, Pascal; 1 Ben Hassen, Imen; 1 Bolte, Jérôme; 1 Bradshaw, Zachary; 1 Charles, Dossal; 1 Grujić, Zoran; 1 Guo, Yana; 1 Haraux, Alain; 1 Jean-François, Aujol; 1 Kukieła, Michał Jerzy; 1 László, Csaba Szilárd; 1 Lemarié-Rieusset, Pierre Gilles; 1 Li, Hua; 1 Lima, Lidiane S. M.; 1 Liu, Qiao; 1 Luo, Jun-Ren; 1 Miao, Changxing; 1 Minian, Elias Gabriel; 1 Nguyen Cong Phuc; 1 Niche, César J.; 1 Peypouquet, Juan; 1 Phan, Tuoc Van; 1 Planas, Gabriela; 1 Sebbouh, O.; 1 Shang, Haifeng; 1 Song, Mengmeng; 1 Song, Wenjing; 1 Vassilis, Apidopoulos; 1 Xiao, Ti-Jun; 1 Xue, Liutang; 1 Yang, Ganshan; 1 Yao, Xiaohua; 1 Yuan, George Xian-Zhi

#### Cited in 24 Serials

7 SIAM Journal on Optimization; 4 Journal of Differential Equations; 3 Journal of Mathematical Analysis and Applications; 3 Evolution Equations and Control Theory; 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods; 2 Order; 1 Applicable Analysis; 1 Archive for Rational Mechanics and Analysis; 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik; 1 Advances in Mathematics; 1 Annali di Matematica Pura ed Applicata. Serie Quarta; 1 Journal of Functional Analysis; 1 Monatshefte für Mathematik; 1 Proceedings of the American Mathematical Society; 1 Annales de l'Institut Henri Poincaré. Analyse Non Linéaire; 1 Physica D; 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série; 1 Mathematical Programming. Series A. Series B; 1 Turkish Journal of Mathematics; 1 Bulletin des Sciences Mathématiques; 1 Discrete and Continuous Dynamical Systems; 1 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations; 1 Vietnam Journal of Mathematics; 1 Journal of Inequalities and Applications

#### Cited in 17 Fields

19 Partial differential equations (35-XX); 14 Fluid mechanics (76-XX); 13 Numerical analysis (65-XX); 12 Operations research, mathematical programming (90-XX); 10 Ordinary differential equations (34-XX); 8 Calculus of variations and optimal control; optimization (49-XX); 7 Dynamical systems and ergodic theory (37-XX); 6 Functional analysis (46-XX); 3 Harmonic analysis on Euclidean spaces (42-XX); 2 Order, lattices, ordered algebraic structures (06-XX); 2 Algebraic topology (55-XX); 1 Operator theory (47-XX); 1 General topology (54-XX); 1 Manifolds and cell complexes (57-XX); 1 Mechanics of particles and systems (70-XX); 1 Astronomy and astrophysics (85-XX); 1 Geophysics (86-XX)
https://en.wikipedia.org/wiki/Talk:Frequency-shift_keying
# Talk:Frequency-shift keying

WikiProject Telecommunications (Rated Start-class): This article is within the scope of WikiProject Telecommunications, a collaborative effort to improve the coverage of Telecommunications on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as Start-Class on the project's quality scale.

WikiProject Amateur radio (Rated Start-class, Mid-importance): This article is within the scope of WikiProject Amateur radio, which collaborates on articles related to amateur radio technology, organizations, and activities. If you would like to participate, you can edit the article attached to this page, or visit the project page, where you can join the project and/or contribute to the discussion. This article has been rated as Start-Class on the project's quality scale and as Mid-importance on the project's importance scale.

Can someone with knowledge of DFSK add information on that to this page? —Preceding unsigned comment added by 128.253.139.181 (talk) 01:28, 1 April 2010 (UTC)

## FFSK?

A type of FSK that I have used for signalling in the past consists of a '1' being represented by a single cycle of a frequency f, and a '0' by 1.5 cycles of frequency f + f/2, phase continuous (typical frequencies would be 1200 Hz and 1800 Hz). It's a very neat and compact scheme, very easy to synthesise, with every bit period being equal, and very easy to decode, counting zero-crossings. I always thought this was called FFSK, with the extra F standing for 'Fast', but currently there is no article about this. Is it perhaps known by another name? Graham 11:46, 8 February 2006 (UTC)

Yes. FFSK is equivalent to MSK. Oli Filth 14:57, 21 August 2006 (UTC)

No. FFSK is equivalent to what the fine article calls "Audio FSK". OK, to be fair, there is a problem here that the same term is used to mean different things.
but could we try to make this more clear in tfa? (sorry, at the moment I have no clear idea how). — Preceding unsigned comment added by 194.209.214.225 (talk) 13:56, 18 August 2011 (UTC)

## ASK gets all the love :(

I'd like to see this article brought up to the quality of Amplitude-shift_keying. I'm not an EE, but I do have Stallings open here; the amount of actual detail is pretty thin. I don't have the background to elaborate or generate the math or diagrams necessary.

## MSK

I think the article is wrong about the bit rate to frequency shift ratio. The difference between the higher and lower frequency (i.e., twice the frequency shift) is equal to half the bit rate. The relation involving the frequency shift $\Delta F$ is $\eta = 2\,\Delta F\,T$, where $1/T$ is the bit rate, $\Delta F$ is the frequency shift, and the modulation index is $\eta = 0.5$ for MSK. Hence, the difference between the higher and lower frequency is $2\,\Delta F = 0.5\cdot\frac{1}{T}$. --Drizzd 13:13, 9 July 2006 (UTC)

Indeed. I've altered this. Oli Filth 14:52, 21 August 2006 (UTC)

## Implementation Example - moved from page

Need a diagram for this to be useful in the article. I may do this myself if I have time.

The greatest single reason for the widespread use of FSK is its simple implementation. All that is required is an oscillator whose frequency can be switched between two preset frequencies, which can easily be achieved using the 555 timer IC. Because the carrier signal is shifted between two preset frequencies, this technique of signal modulation got the name FSK. The frequencies corresponding to logic 0 and logic 1 are called the 'space' and 'mark' frequencies respectively. The FSK generator is constructed using a 555 timer in astable mode, with its frequency controlled by the state of a PNP transistor. 150 Hz is the standard frequency at which data input can be transmitted.
When the input is logic 1, the transistor is off. The capacitor then charges through $R_A + R_B$ up to $\tfrac{2}{3}V_{cc}$, then discharges through $R_B$ down to $\tfrac{1}{3}V_{cc}$. This exponential charging and discharging continues as long as the input is logic 1.

Mark frequency: $f_{mark} = \dfrac{1.44}{(R_A + 2R_B)\,C} = 1070\ \text{Hz}$

When the input is logic 0, the PNP transistor is on (saturated), connecting $R_C$ across $R_A$. Hence the frequency obtained is:

Space frequency: $f_{space} = \dfrac{1.44}{((R_A \| R_C) + 2R_B)\,C} = 1270\ \text{Hz}$

Hence by proper selection of the resistors, the mark and space frequencies can be set. The difference between 1070 Hz and 1270 Hz is 200 Hz; this separation is the frequency shift. —Preceding unsigned comment added by Ktims (talk · contribs) 05:52, 24 October 2006

No problem. Btw, which program do u use for makin ccts, or are u on bitmap too? Xcentaur 07:53, 24 October 2006 (UTC)

I normally use Eagle (program) because the free version is quite good, but it doesn't produce output that's appropriate for Wiki without some editing. You could try Dia as well; I've used it a bit and it's not very useful for circuit design, but it produces nice output. And there's always [1], which is great for simple circuits. --Ktims 08:50, 24 October 2006 (UTC)

Ah. Had a lot of ccts that I wanted to add but didn't know how. This helps a lot. Thanks. Xcentaur 09:28, 24 October 2006 (UTC)

## Incremental frequency keying?

A description of "incremental frequency keying" has recently been added. However, a Google search reveals only 39 hits, so I'm not sure it's particularly notable or widely used. Any thoughts? Oli Filth (talk) 01:01, 7 December 2007 (UTC)

I see that Google gives 313 hits for "differential frequency shift keying". Is that notable enough? I agree that IFK is not notable enough to be given an article of its own. However, I've been told that "The particular topics and facts within an article are not each required to meet the standards of the notability guidelines." -- WP:NNC.
--75.19.73.101 (talk) 06:48, 13 December 2007 (UTC)

## acoustic fsk

Is not yodeling a form of fsk? SyntheticET (talk) 04:00, 14 August 2008 (UTC)

## Continuous Phase Frequency-Shift Keying

As both FSK and continuous-phase frequency-shift keying (CPFSK) are special forms of frequency modulation, shouldn't CPFSK be mentioned in this article? What about 4-level continuous FM (C4FM) - is this the same as CPFSK? --Dmitry (talk · contribs) 09:43, 26 March 2012 (UTC)

I agree, so I added a brief mention of CPFSK and C4FM to this article. --DavidCary (talk) 18:18, 28 January 2015 (UTC)

## Added References and Moved MSK

I have added some references for modem implementations, added markers for necessary citations, and moved the MSK stuff to the corresponding subpage. — Preceding unsigned comment added by 2A4Fh56OSA (talk · contribs) 19:03, 29 December 2012 (UTC)

## External links modified

Hello fellow Wikipedians,

I have just modified one external link on Frequency-shift keying. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes: When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

As of February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete the "External links modified" sections if they want, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{sourcecheck}} (last update: 15 July 2018).

• If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
• If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot 23:50, 7 October 2017 (UTC)

## Weitbrecht Modem

Three questions:

1) What kind of FSK is used in a Weitbrecht modem, as used in the telecommunication devices for deaf people known by the abbreviations TTY or TDD?
2) When was the first FSK introduced for telephonic communication?
3) Why did no telephone company develop a device similar to the Weitbrecht modem already in the late 19th or early 20th century to enable deaf people to use the telephone?

74.104.144.152 (talk) 15:45, 13 July 2019 (UTC) Hartmut Teuber, 13 July 2019, 11:40 am EST
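A quick demonstration of the FFSK scheme described at the top of this page ('1' = one cycle of 1200 Hz, '0' = 1.5 cycles of 1800 Hz, phase continuous, at 1200 bit/s). The sketch below is my own illustration, not taken from the article; the sample rate, small phase offset, and bit pattern are arbitrary choices. It modulates with a single running phase accumulator and decodes by counting zero crossings per bit period, exactly as described. Note that the 600 Hz separation is half the bit rate, so this signal also satisfies the MSK condition $\eta = 2\,\Delta F\,T = 0.5$ from the MSK section above.

```python
import math

# Frequencies from the FFSK example above; FS is my own choice.
F_MARK, F_SPACE, BIT_RATE, FS = 1200.0, 1800.0, 1200.0, 48000

def modulate(bits):
    """Phase-continuous FSK: one running phase accumulator."""
    samples, phase = [], 0.1   # small offset keeps samples off exact zeros
    per_bit = int(FS / BIT_RATE)
    for b in bits:
        f = F_MARK if b else F_SPACE
        for _ in range(per_bit):
            phase += 2.0 * math.pi * f / FS
            samples.append(math.sin(phase))
    return samples

def demodulate(samples):
    """Decode by counting zero crossings in each bit period."""
    per_bit = int(FS / BIT_RATE)
    out = []
    for i in range(0, len(samples), per_bit):
        chunk = samples[i:i + per_bit]
        crossings = sum((a < 0) != (b < 0) for a, b in zip(chunk, chunk[1:]))
        # one cycle -> 2 crossings ('1'); 1.5 cycles -> 3 crossings ('0')
        out.append(1 if crossings <= 2 else 0)
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
print(demodulate(modulate(data)) == data)  # True
```

On this clean, perfectly timed signal the decoder recovers the input exactly; a real demodulator would additionally need symbol timing recovery and noise tolerance.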
https://studybay.com/math/tags/finite-groups/
# Tag

### Weyl group

Let be a finite-dimensional split semisimple Lie algebra over a field of characteristic 0, a splitting Cartan subalgebra, and a weight of in a representation of . Then is also a weight. Furthermore, the reflections with a root generate a group of linear transformations in called the Weyl group of relative to , where is the algebraic conjugate space of and is the Q-space spanned by the roots (Jacobson 1979, pp. 112, 117, and 119). The Weyl group acts on the roots of a semisimple Lie algebra, and it is a finite group. The animations above illustrate this action for the Weyl group acting on the roots, a homotopy from one Weyl matrix to the next one (i.e., it slides the arrows from to ) in the first two figures, while the third figure shows the Weyl group acting on the roots of the Cartan matrix of the infinite family of semisimple Lie algebras (cf. Dynkin diagram), which is the special linear Lie algebra, ...

### Dimensionality theorem

For a finite group of $h$ elements with an $n_i$-dimensional $i$th irreducible representation, $$h = \sum_i n_i^2.$$

### Wallpaper groups

The wallpaper groups are the 17 possible plane symmetry groups. They are commonly represented using Hermann-Mauguin-like symbols or in orbifold notation (Zwillinger 1995, p. 260). The orbifold and Hermann-Mauguin names correspond as follows:

- o ↔ p1
- 2222 ↔ p2
- ** ↔ pm
- xx ↔ pg
- *2222 ↔ pmm
- 22* ↔ pmg
- 22x ↔ pgg
- x* ↔ cm
- 2*22 ↔ cmm
- 442 ↔ p4
- *442 ↔ p4m
- 4*2 ↔ p4g
- 333 ↔ p3
- *333 ↔ p3m1
- 3*3 ↔ p31m
- 632 ↔ p6
- *632 ↔ p6m

Patterns created with Artlandia SymmetryWorks for each of these groups are illustrated above. Beautiful patterns can be created by repeating geometric and artistic motifs according to the symmetry of the wallpaper groups, as exemplified in works by M. C. Escher and in the patterns created by I. Bakshee in the Wolfram Language using Artlandia, illustrated above. For a description of the symmetry elements present in each space group, see Coxeter (1969, p. 413).

### Octahedral group

The octahedral group $O_h$ is the point group of symmetries of the octahedron having order 48 that includes inversion. It is also the symmetry group of the cube, cuboctahedron, and truncated octahedron. It has conjugacy classes 1, , , , , , , , , and (Cotton 1990). Its multiplication table is illustrated above. The octahedral group is implemented in the Wolfram Language as FiniteGroupData["Octahedral", "PermutationGroupRepresentation"] and as a point group as FiniteGroupData["CrystallographicPointGroup", "Oh", "PermutationGroupRepresentation"]. The great rhombicuboctahedron can be generated using the matrix representation of using the basis vector . The octahedral group has a pure rotation subgroup denoted that is isomorphic to the tetrahedral group . It is of order 24 and has conjugacy classes 1, , , , and (Cotton 1990, pp. 50 and 434). Its multiplication table is illustrated above. The pure..

### Vierergruppe

The vierergruppe is the Abelian abstract group on four elements that is isomorphic to the finite group C2×C2 and the dihedral group D_2. The multiplication table of one possible representation is illustrated below. It can be generated by the permutations {1, 2, 3, 4}, {2, 1, 4, 3}, {3, 4, 1, 2}, and {4, 3, 2, 1}. It has subgroups , , , , and , all of which are normal, so it is not a simple group. Each element is in its own conjugacy class.

### O'Nan group

The O'Nan group is the sporadic group O'N of order $$|O'N| = 460815505920 = 2^{9}\cdot 3^{4}\cdot 5\cdot 7^{3}\cdot 11\cdot 19\cdot 31.$$ It is implemented in the Wolfram Language as ONanGroupON[].

### Dihedral group d_6

The dihedral group gives the group of symmetries of a regular hexagon. The group generators are given by a counterclockwise rotation through radians and reflection in a line joining the midpoints of two opposite edges. If denotes rotation and reflection, we have (1). From this, the group elements can be listed as (2). The conjugacy classes of are given by (3). The set of elements which by themselves make up conjugacy classes are in the center of , denoted , so (4). The commutator subgroup is given by (5), which can be used to find the Abelianization. The set of all left cosets of is given by (6), (7). Thus we appear to have two generators for this group, namely and . Therefore, Abelianization gives . It is also known that , where is the symmetric group. Furthermore, , where is the dihedral group with 6 elements, i.e., the group of symmetries of an equilateral triangle. There are thus two ways to produce the character table, either inducing from and using the orthogonality..

### Trivial group

The trivial group, denoted or , sometimes also called the identity group, is the unique (up to isomorphism) group containing exactly one element , the identity element. Examples include the zero group (which is the singleton set with respect to the trivial group structure defined by the addition ), the multiplicative group (where ), the point group , and the integers modulo 1 under addition. When viewed as a permutation group on letters, the trivial group consists of the single element which fixes each letter. The trivial group is (trivially) Abelian and cyclic. Its multiplication table consists of the single entry 1 · 1 = 1. The trivial group has the single conjugacy class and the single subgroup .

### Dihedral group d_5

The group is one of the two groups of order 10. Unlike the cyclic group , is non-Abelian. The molecule ruthenocene belongs to the group , where the letter indicates invariance under a reflection of the fivefold axis (Arfken 1985, p. 248). It has cycle index given by . Its multiplication table is illustrated above. The dihedral group has conjugacy classes , , , and . It has 8 subgroups: , , , , , , , and , of which , , and , , , , , , , , , are normal.

### Triangular symmetry group

Given a triangle with angles (, , ), the resulting symmetry group is called a triangle group (also known as a spherical tessellation). In three dimensions, such groups must satisfy , and so the only solutions are , , , and (Ball and Coxeter 1987). The group gives rise to the semiregular planar tessellations of types 1, 2, 5, and 7. The group gives hyperbolic tessellations.

### Monstrous moonshine

In 1979, Conway and Norton discovered an unexpected intimate connection between the monster group and the j-function. The Fourier expansion of is given by (1) (OEIS A000521), where and is the half-period ratio, and the dimensions of the first few irreducible representations of are 1, 196883, 21296876, 842609326, ... (OEIS A001379). In November 1978, J. McKay noticed that the -coefficient 196884 is exactly one more than the smallest dimension of nontrivial representations of the monster group (Conway and Norton 1979). In fact, it turns out that the Fourier coefficients of can be expressed as linear combinations of these dimensions with small coefficients as follows: (2), (3), (4), (5). Borcherds (1992) later proved this relationship, which became known as monstrous moonshine. Amazingly, there turn out to be yet more deep connections between the monster group and the j-function...

### Dihedral group d_4

The dihedral group is one of the two non-Abelian groups of the five groups total of group order 8. It is sometimes called the octic group. An example of is the symmetry group of the square. The cycle graph of is shown above, and the cycle index is given by (1). Its multiplication table is illustrated above. has representation (2)-(9). Conjugacy classes include , , , , and . There are 10 subgroups of : , , , , , , , , and , . Of these, , , , , , and are normal.

### Monster group

The monster group is the highest order sporadic group. It has group order $$|M| = 808017424794512875886459904961710757005754368000000000 \approx 8\times 10^{53},$$ where the divisors are precisely the 15 supersingular primes (Ogg 1980). The monster group is also called the friendly giant group. It was constructed in 1982 by Robert Griess as a group of rotations in 196883-dimensional space. It is implemented in the Wolfram Language as MonsterGroupM[].

### Dihedral group d_3

The dihedral group is a particular instance of one of the two distinct abstract groups of group order 6. Unlike the cyclic group (which is Abelian), is non-Abelian. In fact, is the non-Abelian group having smallest group order. Examples of include the point groups known as , , , , the symmetry group of the equilateral triangle (Arfken 1985, p. 246), and the permutation group of three objects (Arfken 1985, p. 249). The cycle graph of is shown above, and the cycle index is given by (1). Its multiplication table is illustrated above and enumerated below, where 1 denotes the identity element. Equivalent but slightly different forms are given by Arfken (1985, p. 247) and Cotton (1990, p. 12), the latter of which denotes the abstract group of by . Like all dihedral groups, a reducible two-dimensional representation using real matrices has generators given by and , where is a rotation by radians about an axis passing through the..

### Modulo multiplication group

A modulo multiplication group is a finite group of residue classes prime to under multiplication mod . is Abelian of group order , where is the totient function. A modulo multiplication group can be visualized by constructing its cycle graph. Cycle graphs are illustrated above for some low-order modulo multiplication groups. Such graphs are constructed by drawing labeled nodes, one for each element of the residue class, and connecting cycles obtained by iterating . Each edge of such a graph is bidirected, but they are commonly drawn using undirected edges with double edges used to indicate cycles of length two (Shanks 1993, pp. 85 and 87-92). The following table gives the modulo multiplication groups of small orders, together with their isomorphisms with respect to cyclic groups:

- M_2 = {1} (trivial)
- M_3 = {1, 2} ≅ C_2
- M_4 = {1, 3} ≅ C_2
- M_5 = {1, 2, 3, 4} ≅ C_4
- M_6 = {1, 5} ≅ C_2
- M_7 = {1, 2, 3, 4, 5, 6} ≅ C_6
- M_8 = {1, 3, 5, 7} ≅ C_2×C_2
- M_9 = {1, 2, 4, 5, 7, 8} ≅ C_6
- M_10 = {1, 3, 7, 9} ≅ C_4
- M_11 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} ≅ C_10
- M_12 = {1, 5, 7, 11} ≅ C_2×C_2
- M_13 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, ..

### Thompson group

The Thompson group is the sporadic group Th of order $$|Th| = 90745943887872000 = 2^{15}\cdot 3^{10}\cdot 5^{3}\cdot 7^{2}\cdot 13\cdot 19\cdot 31.$$ It is implemented in the Wolfram Language as ThompsonGroupTh[].
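Since a modulo multiplication group consists of just the residues coprime to $n$ under multiplication mod $n$, its structure is easy to explore directly. The Python sketch below is my own illustration (not part of the source text): it lists the elements of $M_n$ and computes multiplicative orders, showing that $M_{10}$ is cyclic of order 4 while every non-identity element of $M_8$ has order 2.

```python
from math import gcd

def modulo_multiplication_group(n):
    """Residue classes prime to n, the elements of M_n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def element_order(a, n):
    """Multiplicative order of a modulo n (a must be coprime to n)."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# M_10 = {1, 3, 7, 9}; 3 has order 4, so M_10 is cyclic (≅ C_4)
print(modulo_multiplication_group(10))   # [1, 3, 7, 9]
print(element_order(3, 10))              # 4

# M_8 = {1, 3, 5, 7}; every non-identity element has order 2 (≅ C_2×C_2)
print([element_order(a, 8) for a in modulo_multiplication_group(8)])  # [1, 2, 2, 2]
```

Having an element whose order equals the group order is what makes $M_{10}$ cyclic; the all-order-2 pattern in $M_8$ is the vierergruppe structure mentioned earlier.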
### McLaughlin group

The McLaughlin group is the sporadic group McL of order $$|McL| = 898128000 = 2^{7}\cdot 3^{6}\cdot 5^{3}\cdot 7\cdot 11.$$ It is implemented in the Wolfram Language as McLaughlinGroupMcL[].

### Dihedral group

The dihedral group is the symmetry group of an -sided regular polygon for . The group order of is . Dihedral groups are non-Abelian permutation groups for . The th dihedral group is represented in the Wolfram Language as DihedralGroup[n]. One group presentation for the dihedral group is . A reducible two-dimensional representation of using real matrices has generators given by and , where is a rotation by radians about an axis passing through the center of a regular -gon and one of its vertices and is a rotation by about the center of the -gon (Arfken 1985, p. 250). Dihedral groups all have the same multiplication table structure. The table for is illustrated above. The cycle index (in variables , ..., ) for the dihedral group is given by (1), where (2) is the cycle index for the cyclic group , means divides , and is the totient function (Harary 1994, p. 184). The cycle indices for the first few are (3)-(7). Renteln and Dundes (2005) give..

### Tetrahedral group

The tetrahedral group is the point group of symmetries of the tetrahedron including the inversion operation. It is one of the 12 non-Abelian groups of order 24. The tetrahedral group has conjugacy classes 1, , , , and (Cotton 1990, pp. 47 and 434). Its multiplication table is illustrated above. The tetrahedral group is implemented in the Wolfram Language as FiniteGroupData["Tetrahedral", "PermutationGroupRepresentation"] and as a point group as FiniteGroupData["CrystallographicPointGroup", "Td", "PermutationGroupRepresentation"]. It has a pure rotational subgroup of order 12 denoted (Cotton 1990, pp. 50 and 433). It is isomorphic to the alternating group and has conjugacy classes 1, , , and . It has 10 subgroups: one of length 1, three of length 2, 4 of length 3, one of length 4, and one of length 12. Of these, only the trivial subgroup, subgroup of order 4, and complete..

### Mathieu groups

The five Mathieu groups M_11, M_12, M_22, M_23, and M_24 were the first sporadic groups discovered, having been found in 1861 and 1873 by Mathieu. Frobenius showed that all the Mathieu groups are subgroups of M_24. The sporadic Mathieu groups are implemented in the Wolfram Language as MathieuGroupM11[], MathieuGroupM12[], MathieuGroupM22[], MathieuGroupM23[], and MathieuGroupM24[]. All the sporadic Mathieu groups are multiply transitive. The following table summarizes some properties of the Mathieu groups, where t indicates the transitivity and n is the length of the minimal permutation support (from which the groups derive their designations):

- M_11: t = 4, n = 11, order 7920
- M_12: t = 5, n = 12, order 95040
- M_22: t = 3, n = 22, order 443520
- M_23: t = 4, n = 23, order 10200960
- M_24: t = 5, n = 24, order 244823040

The Mathieu groups are most simply defined as automorphism groups of Steiner systems, as summarized in the following table. Mathieu group, Steiner system..

### Cyclic group c_12

The cyclic group is one of the two Abelian groups of the five groups total of group order 12 (the other order-12 Abelian group being the finite group C2×C6). Examples include the modulo multiplication groups M_13 and M_26 (which are the only modulo multiplication groups isomorphic to it). The cycle graph of is shown above. The cycle index is . Its multiplication table is illustrated above. The numbers of elements satisfying for , 2, ..., 12 are 1, 2, 3, 4, 1, 6, 1, 4, 3, 2, 1, 12. Because the group is Abelian, each element is in its own conjugacy class. There are six subgroups: , , , , , and , which, because the group is Abelian, are all normal. Since has normal subgroups other than the trivial subgroup and the entire group, it is not a simple group.

### Symmetric group

The symmetric group of degree is the group of all permutations on symbols. is therefore a permutation group of order and contains as subgroups every group of order . The th symmetric group is represented in the Wolfram Language as SymmetricGroup[n]. Its cycle index can be generated in the Wolfram Language using CycleIndexPolynomial[SymmetricGroup[n], x1, ..., xn]. The number of conjugacy classes of is given by , where is the partition function P of . The symmetric group is a transitive group (Holton and Sheehan 1993, p. 27). For any finite group , Cayley's group theorem proves that is isomorphic to a subgroup of a symmetric group. The multiplication table for is illustrated above. Let be the usual permutation cycle notation for a given permutation. Then the following table gives the multiplication table for $S_3$, which has $3! = 6$ elements:

|           | (1)(2)(3) | (1)(23)   | (3)(12)   | (123)     | (132)     | (2)(13)   |
|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| (1)(2)(3) | (1)(2)(3) | (1)(23)   | (3)(12)   | (123)     | (132)     | (2)(13)   |
| (1)(23)   | (1)(23)   | (1)(2)(3) | (132)     | (2)(13)   | (3)(12)   | (123)     |
| (3)(12)   | (3)(12)   | (123)     | (1)(2)(3) | (1)(23)   | (2)(13)   | (132)     |
| (123)     | (123)     | (3)(12)   | (2)(13)   | (132)     | (1)(2)(3) | (1)(23)   |
| (132)     | (132)     | (2)(13)   | (1)(23)   | (1)(2)(3) | (123)     | (3)(12)   |
| (2)(13)   | (2)(13)   | (132)     | (123)     | (3)(12)   | (1)(23)   | (1)(2)(3) |

This..

### Lyons group

The Lyons group is the sporadic group Ly of order $$|Ly| = 51765179004000000 = 2^{8}\cdot 3^{7}\cdot 5^{6}\cdot 7\cdot 11\cdot 31\cdot 37\cdot 67.$$ It is implemented in the Wolfram Language as LyonsGroupLy[].

### Cyclic group c_11

The cyclic group is the unique group of group order 11. An example is the integers modulo 11 under addition (). No modulo multiplication group is isomorphic to . Like all cyclic groups, is Abelian. The cycle graph of is shown above. The cycle index is . Its multiplication table is illustrated above. Because the group is Abelian, each element is in its own conjugacy class. Because it is of prime order, the only subgroups are the trivial group and the entire group. is therefore a simple group, as are all cyclic groups of prime order.

### Sylow theorems

Let be a prime number, a finite group, and the order of .

1. If divides , then has a Sylow p-subgroup.
2. In a finite group, all the Sylow p-subgroups are conjugate for some fixed .
3. The number of Sylow p-subgroups for a fixed is congruent to 1 (mod ).

### Cyclic group c_10

The cyclic group is the unique Abelian group of group order 10 (the other order-10 group being the non-Abelian dihedral group ).
Examples include the integers modulo 10 under addition () and the modulo multiplication groups and (with no others). Like all cyclic groups, is Abelian. The cycle graph of is shown above. The cycle index is . Its multiplication table is illustrated above. The numbers of elements satisfying for , 2, ..., 10 are 1, 2, 1, 2, 5, 2, 1, 2, 1, 10. Because the group is Abelian, each element is in its own conjugacy class. There are four subgroups: , , , and . Because the group is Abelian, these are all normal. Since has normal subgroups other than the trivial subgroup and the entire group, it is not a simple group.

### Cyclic group c_9

The cyclic group is one of the two Abelian groups of group order 9 (the other order-9 Abelian group being ; there are no non-Abelian groups of order 9). An example is the integers modulo 9 under addition (). No modulo multiplication group is isomorphic to . Like all cyclic groups, is Abelian. The cycle graph of is shown above. The cycle index is . Its multiplication table is illustrated above. The numbers of elements satisfying for , 2, ..., 9 are 1, 1, 3, 1, 1, 3, 1, 1, 9. Because the group is Abelian, each element is in its own conjugacy class. There are three subgroups: , , and . Because the group is Abelian, these are all normal. Since has normal subgroups other than the trivial subgroup and the entire group, it is not a simple group.

### Suzuki group

The Suzuki group is the sporadic group Suz of order $$|Suz| = 448345497600 = 2^{13}\cdot 3^{7}\cdot 5^{2}\cdot 7\cdot 11\cdot 13.$$ It is implemented in the Wolfram Language as SuzukiGroupSuz[].

### Cyclic group c_8

The cyclic group is one of the three Abelian groups of the five groups total of group order 8. Examples include the integers modulo 8 under addition () and the quadratic residues modulo 17 under multiplication modulo 17. No modulo multiplication group is isomorphic to . The cycle graph of is shown above. The cycle index is . Its multiplication table is illustrated above. The elements satisfy , four of them satisfy , and two satisfy . Because the group is Abelian, each element is in its own conjugacy class. There are four subgroups: , , , and , which, because the group is Abelian, are all normal. Since has normal subgroups other than the trivial subgroup and the entire group, it is not a simple group.

### Kronecker decomposition theorem

Every finite Abelian group can be written as a group direct product of cyclic groups of prime power group orders. In fact, the number of nonisomorphic Abelian finite groups of any given group order is obtained by writing $$n = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_r^{\alpha_r},$$ where the $p_i$ are distinct prime factors; then $$a(n) = \prod_{i=1}^{r} P(\alpha_i),$$ where $P$ is the partition function. This gives 1, 1, 1, 2, 1, 1, 1, 3, 2, ... (OEIS A000688). More generally, every finitely generated Abelian group is isomorphic to the group direct sum of a finite number of groups, each of which is either cyclic of prime power order or isomorphic to the integers Z. This extension of the Kronecker decomposition theorem is often referred to as the Kronecker basis theorem.

### Cyclic group c_7

is the cyclic group that is the unique group of group order 7. Examples include the point group and the integers modulo 7 under addition (). No modulo multiplication group is isomorphic to . Like all cyclic groups, is Abelian. The cycle graph is shown above, and the cycle index is . The elements of the group satisfy , where 1 is the identity element. Its multiplication table is illustrated above. Because it is Abelian, the group conjugacy classes are , , , , , , and . Because 7 is prime, the only subgroups are the trivial group and the entire group. is therefore a simple group, as are all cyclic groups of prime order.
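The counting rule in the Kronecker decomposition theorem above can be checked against the OEIS A000688 values quoted there. The Python sketch below (my own illustration, not from the source) computes the number of nonisomorphic Abelian groups of order $n$ as the product of partition numbers of the exponents in the prime factorization of $n$.

```python
from functools import lru_cache
from collections import Counter

@lru_cache(maxsize=None)
def partitions(n, k=1):
    """Number of partitions of n into parts >= k (so partitions(n) = P(n))."""
    if n == 0:
        return 1
    return sum(partitions(n - j, j) for j in range(k, n + 1))

def prime_exponents(n):
    """Exponents in the prime factorization of n, by trial division."""
    exps = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            exps[d] += 1
            n //= d
        d += 1
    if n > 1:
        exps[n] += 1
    return exps

def num_abelian_groups(n):
    """Number of nonisomorphic Abelian groups of order n (Kronecker)."""
    result = 1
    for a in prime_exponents(n).values():
        result *= partitions(a)
    return result

print([num_abelian_groups(n) for n in range(1, 10)])
# [1, 1, 1, 2, 1, 1, 1, 3, 2]  (matches OEIS A000688)
```

For example, the two Abelian groups of order 4 are $C_4$ and $C_2\times C_2$, corresponding to the two partitions 2 = 2 and 2 = 1 + 1 of the exponent of 2.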
The sporadic groups are the 26 finite simple groups that do not fit into any of the four infinite families of finite simple groups (i.e., the cyclic groups of prime order, alternating groups of degree at least five, Lie-type Chevalley groups, and twisted Lie-type groups). The smallest sporadic group is the Mathieu group M_11, which has order 7920, and the largest is the monster group, which has order 808017424794512875886459904961710757005754368000000000. The orders of the sporadic groups given in increasing order are 7920, 95040, 175560, 443520, 604800, 10200960, 44352000, 50232960, ... (OEIS A001228). A summary of sporadic groups, as given by Conway et al. (1985), is given below.

name: order
Mathieu group M_11: 7920
Mathieu group M_12: 95040
Janko group J_1: 175560
Mathieu group M_22: 443520
Janko group J_2: 604800
Mathieu group M_23: 10200960
Higman-Sims group HS: 44352000
Janko group J_3: 50232960
Mathieu group M_24: 244823040
McLaughlin group McL: 898128000
Held group He: 4030387200
Rudvalis group Ru: 145926144000
Suzuki group Suz: 448345497600
O'Nan..

### Kronecker basis theorem A generalization of the Kronecker decomposition theorem which states that every finitely generated Abelian group is isomorphic to the group direct sum of a finite number of groups, each of which is either cyclic of prime power order or isomorphic to the infinite cyclic group Z. This decomposition is unique, and the number of direct summands is equal to the group rank of the Abelian group. ### Cyclic group c_6 C_6 is one of the two groups of group order 6 which, unlike S_3, is Abelian. It is also cyclic. It is isomorphic to C_2×C_3. Examples include the point groups C_6 and S_6, the integers modulo 6 under addition (Z_6), and the modulo multiplication groups M_7, M_9, M_14, and M_18 (with no others). The cycle graph is shown above and has cycle index

Z(C_6) = (x_1^6 + x_2^3 + 2 x_3^2 + 2 x_6)/6.

The elements of the group satisfy x^6 = 1, where 1 is the identity element, three elements satisfy x^3 = 1, and two elements satisfy x^2 = 1. Its multiplication table is illustrated above. Since C_6 is Abelian, each element is in its own conjugacy class. 
There are four subgroups of C_6: the trivial subgroup, C_2, C_3, and C_6 itself which, because the group is Abelian, are all normal. Since C_6 has normal subgroups other than the trivial subgroup and the entire group, it is not a simple group. ### Special linear group Given a ring R with identity, the special linear group SL_n(R) is the group of n×n matrices with elements in R and determinant 1. The special linear group SL_n(q), where q is a prime power, is the set of n×n matrices with determinant +1 and entries in the finite field GF(q). SL_n(C) is the corresponding set of complex matrices having determinant +1. SL_n(q) is a subgroup of the general linear group GL_n(q) and is a Lie-type group. Both SL_n(R) (over the reals) and SL_n(C) are genuine Lie groups. ### Cyclic group c_5 C_5 is the unique group of group order 5, which is Abelian. Examples include the point group C_5 and the integers mod 5 under addition (Z_5). No modulo multiplication group is isomorphic to C_5. The cycle graph is shown above, and the cycle index is

Z(C_5) = (x_1^5 + 4 x_5)/5.

The elements satisfy x^5 = 1, where 1 is the identity element. Its multiplication table is illustrated above. Since C_5 is Abelian, each element is in its own conjugacy class. Since 5 is prime, there are no subgroups except the trivial group and the entire group. C_5 is therefore a simple group, as are all cyclic groups of prime order. ### Cyclic group c_4 C_4 is one of the two groups of group order 4. Like C_2×C_2, it is Abelian, but unlike C_2×C_2, it is cyclic. Examples include the point groups C_4 (note that the same notation is used for the abstract cyclic group and the point group isomorphic to it) and S_4, the integers modulo 4 under addition (Z_4), and the modulo multiplication groups M_5 and M_10 (which are the only two modulo multiplication groups isomorphic to it). The cycle graph of C_4 is shown above, and the cycle index is given by

Z(C_4) = (x_1^4 + x_2^2 + 2 x_4)/4.

The multiplication table for this group may be written in three equivalent ways by permuting the symbols used for the group elements (Cotton 1990, p. 11). One such table is illustrated above. Since C_4 is Abelian, each element is in its own conjugacy class. 
In addition to the trivial group and the entire group, C_4 also has C_2 as a subgroup which, because the group is Abelian, is normal. C_4 is therefore not a simple group. Elements of the group satisfy x^4 = 1, where 1 is the identity element, and two of the elements satisfy x^2 = 1. ### Simple group A simple group is a group whose only normal subgroups are the trivial subgroup of order one and the improper subgroup consisting of the entire original group. Simple groups include the infinite families of alternating groups of degree n >= 5, cyclic groups of prime order, Lie-type groups, and the 26 sporadic groups. Since all subgroups of an Abelian group are normal and all cyclic groups are Abelian, the only simple cyclic groups are those which have no subgroups other than the trivial subgroup and the improper subgroup consisting of the entire original group. And since cyclic groups of composite order can be written as a group direct product of factor groups, this means that only prime cyclic groups lack nontrivial subgroups. Therefore, the only simple cyclic groups are the prime cyclic groups. Furthermore, these are the only Abelian simple groups. In fact, the classification theorem of finite groups states that such groups can be classified completely.. ### Janko groups The Janko groups are the four sporadic groups J_1, J_2, J_3, and J_4. The Janko group J_2 is also known as the Hall-Janko group. The Janko groups are implemented in the Wolfram Language as JankoGroupJ1[], JankoGroupJ2[], JankoGroupJ3[], and JankoGroupJ4[]. The following table summarizes the group orders of the Janko groups.

group: order
J_1: 175560
J_2: 604800
J_3: 50232960
J_4: 86775571046077562880

### Cyclic group c_3 C_3 is the unique group of group order 3. It is both Abelian and cyclic. Examples include the point group C_3 and the integers under addition modulo 3 (Z_3). 
No modulo multiplication group is isomorphic to C_3. The cycle graph of C_3 is shown above, and the cycle index is

Z(C_3) = (x_1^3 + 2 x_3)/3.

The elements of the group satisfy x^3 = 1, where 1 is the identity element. Its multiplication table is illustrated above (Cotton 1990, p. 10). Since C_3 is Abelian, each element is in its own conjugacy class. The only subgroups of C_3 are the trivial group and the entire group, which are both trivially normal. C_3 is therefore a simple group, as are all cyclic groups of prime order. Its character table consists of three one-dimensional irreducible representations whose characters are powers of the primitive cube root of unity. ### Icosahedral group The icosahedral group I_h is the group of symmetries of the icosahedron and dodecahedron having order 120, equivalent to the group direct product A_5×C_2 of the alternating group A_5 and cyclic group C_2. The icosahedral group consists of the conjugacy classes 1, 12C_5, 12C_5^2, 20C_3, 15C_2, i, 12S_10, 12S_10^3, 20S_6, and 15σ (Cotton 1990, pp. 49 and 436). Its multiplication table is illustrated above. The icosahedral group is a subgroup of the orthogonal group O(3). The icosahedral group is implemented in the Wolfram Language as FiniteGroupData["Icosahedral", "PermutationGroupRepresentation"]. Icosahedral symmetry is possible as a rotational group but is not compatible with translational symmetry. As a result, there are no crystals with this symmetry and so, unlike the octahedral group O_h and tetrahedral group T_d, I_h is not one of the 32 point groups. The great rhombicosidodecahedron can be generated using the matrix representation of I_h applied to a suitable basis vector whose coordinates involve the golden ratio.. ### Cyclic group c_2 The group C_2 is the unique group of group order 2. C_2 is both Abelian and cyclic. 
Examples include the point groups C_2, C_i, and C_s, the integers modulo 2 under addition (Z_2), and the modulo multiplication groups M_3, M_4, and M_6 (which are the only modulo multiplication groups isomorphic to C_2). The group C_2 is also trivially simple, and forms the subject for the humorous a cappella song "Finite Simple Group (of Order 2)" by the Northwestern University mathematics department a cappella group "The Klein Four." The cycle graph is shown above, and the cycle index is

Z(C_2) = (x_1^2 + x_2)/2.

The elements satisfy x^2 = 1, where 1 is the identity element. Its multiplication table is illustrated above. The conjugacy classes are {1} and {x}, where x is the non-identity element. The only subgroups of C_2 are the trivial group and the entire group C_2, both of which are trivially normal. The irreducible representation for the group is ... ### Rudvalis group The Rudvalis group is the sporadic group Ru of order

|Ru| = 145926144000 = 2^14 · 3^3 · 5^3 · 7 · 13 · 29.

It is implemented in the Wolfram Language as RudvalisGroupRu[]. ### Cyclic group A cyclic group is a group that can be generated by a single element (the group generator). Cyclic groups are Abelian. A cyclic group of finite group order n is denoted C_n, Z_n, or Z/nZ (Shanks 1993, p. 75), and its generator g satisfies

g^n = 1,

where 1 is the identity element. The ring of integers Z forms an infinite cyclic group under addition, and the integers 0, 1, 2, ..., n-1 form a cyclic group of order n under addition (mod n). In both cases, 0 is the identity element. There exists a unique cyclic group of every order n, so cyclic groups of the same order are always isomorphic (Scott 1987, p. 34; Shanks 1993, p. 74). Furthermore, subgroups of cyclic groups are cyclic, and all groups of prime group order are cyclic. In fact, the only simple Abelian groups are the cyclic groups of prime order (Scott 1987, p. 35). The nth cyclic group is represented in the Wolfram Language as CyclicGroup[n]. Examples of cyclic groups include C_2, C_3, C_4, ..., and the modulo multiplication.. 
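As a quick computational sketch of the cyclic group entry above (the helper names are hypothetical, and the additive group Z_n is used as the concrete model), one can verify that g generates (Z_n, +) exactly when gcd(g, n) = 1:

```python
from math import gcd

def generated_subgroup(g, n):
    """Subgroup of (Z_n, +) generated by g: the set of multiples of g mod n."""
    seen, x = set(), 0
    while True:
        x = (x + g) % n
        if x in seen:
            break
        seen.add(x)
    return seen

def generators(n):
    """Elements of Z_n that generate the whole group."""
    return [g for g in range(1, n) if len(generated_subgroup(g, n)) == n]
```

For example, generators(10) gives [1, 3, 7, 9], precisely the residues coprime to 10; for prime n every nonzero element is a generator, reflecting the fact that prime-order cyclic groups have no nontrivial subgroups.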
### Quaternion group The quaternion group is one of the two non-Abelian groups of the five total finite groups of order 8. It is formed by the quaternions ±1, ±i, ±j, and ±k, denoted Q_8. The multiplication table for Q_8 is illustrated above. The cycle graph of the quaternion group is illustrated above. The quaternion group has conjugacy classes {1}, {-1}, {i, -i}, {j, -j}, and {k, -k}. Its subgroups are the trivial group, {1, -1}, the three cyclic subgroups of order 4 generated by i, j, and k, and the entire group, all of which are normal subgroups. ### Held group The Held group is the sporadic group He of order

|He| = 4030387200 = 2^10 · 3^3 · 5^2 · 7^3 · 17.

It is implemented in the Wolfram Language as HeldGroupHe[]. ### Cycle index Let c_k denote the number of cycles of length k for a permutation p expressed as a product of disjoint cycles. The cycle index Z(G) of a permutation group G of order m and degree n is then the polynomial in the n variables x_1, x_2, ..., x_n given by the formula

Z(G) = (1/m) * sum over p in G of x_1^{c_1(p)} x_2^{c_2(p)} ... x_n^{c_n(p)}.

The cycle index of a permutation group is implemented as CycleIndexPolynomial[perm, x1, ..., xn], which returns a polynomial in the variables x_1, ..., x_n. For any permutation p, the numbers c_k satisfy

c_1 + 2 c_2 + ... + n c_n = n,

and thus constitute a partition of the integer n. Sets of values (c_1, ..., c_n) are commonly denoted (c), where (c) ranges over all the vectors satisfying this equation. Formulas for the most important permutation groups (the symmetric group S_n, alternating group A_n, cyclic group C_n, dihedral group D_n, and trivial group E_n) are known explicitly; for example,

Z(C_n) = (1/n) * sum over d|n of phi(d) x_d^{n/d},

where d|n means d divides n and phi is the totient function (Harary 1994, p. 184)... ### Projective symplectic group The projective symplectic group PSp_n(q) is the group obtained from the symplectic group Sp_n(q) on factoring by the scalar matrices contained in that group. PSp_n(q) is simple except for

PSp_2(2) ≅ S_3
PSp_2(3) ≅ A_4
PSp_4(2) ≅ S_6

so it is given the simpler name S_n(q), with n even. ### Projective special unitary group The projective special unitary group PSU_n(q) is the group obtained from the special unitary group SU_n(q) on factoring by the scalar matrices contained in that group. PSU_n(q) is simple except for

PSU_2(2)
PSU_2(3)
PSU_3(2)

so it is given the simpler name U_n(q). 
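The cycle index of a cyclic group C_n has the well-known closed form Z(C_n) = (1/n) * sum over d|n of phi(d) x_d^{n/d}, which lends itself to a short sketch (hypothetical function names; the coefficients are returned as exact fractions keyed by cycle length d):

```python
from math import gcd
from fractions import Fraction

def totient(n):
    """Euler phi function by direct count (fine for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def cycle_index_cyclic(n):
    """Cycle index of the cyclic group C_n as a dict {d: coefficient},
    where d | n and the corresponding monomial is x_d^(n/d):
        Z(C_n) = (1/n) * sum_{d|n} phi(d) * x_d^(n/d)."""
    return {d: Fraction(totient(d), n) for d in range(1, n + 1) if n % d == 0}
```

For n = 4 this yields coefficients 1/4, 1/4, and 1/2 for x_1^4, x_2^2, and x_4, i.e., Z(C_4) = (x_1^4 + x_2^2 + 2 x_4)/4; the coefficients of any cycle index sum to 1.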
### Hajós group A Hajós group is a group for which all factorizations of the form G = A + B (say) have A or B periodic, where the period is a divisor of |G|. Hajós groups arose from the solution of Minkowski's conjecture about tiling space with nonoverlapping cuboids. The classification of the finite Abelian Hajós groups was achieved by Sands in the 1980s. For example, there is a factorization of the integers (mod 12) in which the first factor is aperiodic while the second factor has period three. Since this turns out to be the case for all such tilings of Z_12, it is a Hajós group. The smallest cyclic group for which this is false has order 72, followed by order 108. The cyclic group of order n is a Hajós group if n is of the form p^k, p^k·q, p^2·q^2, p^2·q·r, or p·q·r·s, where p, q, r, and s are distinct primes and k an arbitrary integer. Non-Hajós groups therefore have orders 72, 108, 120, 144, 168, 180, 200, 216, ... (OEIS A102562)... ### Crystallographic point groups The crystallographic point groups are the point groups in which translational periodicity is required (the so-called crystallography restriction). There are 32 such groups, summarized in the following table which organizes them by Schönflies symbol type.

type: point groups
nonaxial: C_1, C_s, C_i
cyclic: C_2, C_3, C_4, C_6
cyclic with horizontal planes: C_2h, C_3h, C_4h, C_6h
cyclic with vertical planes: C_2v, C_3v, C_4v, C_6v
dihedral: D_2, D_3, D_4, D_6
dihedral with horizontal planes: D_2h, D_3h, D_4h, D_6h
dihedral with planes between axes: D_2d, D_3d
improper rotation: S_4, S_6
cubic groups: T, T_h, T_d, O, O_h

Note that while the tetrahedral (T_d) and octahedral (O_h) point groups are also crystallographic point groups, the icosahedral group I_h is not. The orders, classes, and group operations for these groups can be concisely summarized in their character tables. ### Projective special orthogonal group The projective special orthogonal group PSO_n(q) is the group obtained from the special orthogonal group SO_n(q) on factoring by the scalar matrices contained in that group. In general, this group is not simple. ### Group order The number of elements in a group G, denoted |G|. 
If the order of a group is a finite number, the group is said to be a finite group. The order of an element g of a finite group is the smallest power n such that g^n = e, where e is the identity element. In general, finding the order of an element of a group is at least as hard as factoring (Meijer 1996). However, the problem becomes significantly easier if the group order and its factorization are known. Under these circumstances, efficient algorithms are known (Cohen 1993). The group order can be computed in the Wolfram Language using the function GroupOrder[n]. ### Conway groups The automorphism group of the Leech lattice modulo a center of order two is called "the" Conway group Co_1. There are 15 exceptional conjugacy classes of the Conway group. This group, combined with the groups Co_2 and Co_3 obtained similarly from the Leech lattice by stabilization of the one- and two-dimensional sublattices, are collectively called Conway groups. The Conway groups are sporadic groups. They are implemented in the Wolfram Language as ConwayGroupCo1[], ConwayGroupCo2[], and ConwayGroupCo3[]. The following table summarizes some properties of the Conway groups, where t indicates the transitivity and n is the length of the minimal permutation support.

group: t: n: order
Co_3: 2: 276: 495766656000
Co_2: 1: 2300: 42305421312000
Co_1: 1: 98280: 4157776806543360000

### Projective special linear group The projective special linear group PSL_n(q) is the group obtained from the special linear group SL_n(q) on factoring by the scalar matrices contained in that group. It is simple except for

PSL_2(2) ≅ S_3
PSL_2(3) ≅ A_4

and is therefore also denoted L_n(q). ### General unitary group The general unitary group GU_n(q) is the subgroup of all elements of the general linear group GL_n(q^2) that fix a given nonsingular Hermitian form. This is equivalent, in the canonical case, to the definition of GU_n as the group of unitary matrices. ### General orthogonal group The general orthogonal group GO_n(q) is the subgroup of all elements of the projective general linear group that fix a particular nonsingular quadratic form. 
The determinant of such an element is ±1. ### Chevalley groups The finite simple groups of Lie-type. They include four families of linear simple groups: PSL_n(q) (the projective special linear group), PSU_n(q) (the projective special unitary group), PSp_2n(q) (the projective symplectic group), and the projective orthogonal groups. The following table lists exceptional (untwisted) Chevalley groups. ### General linear group Given a ring R with identity, the general linear group GL_n(R) is the group of invertible n×n matrices with elements in R. The general linear group GL_n(q) is the set of n×n matrices with entries in the field GF(q) which have nonzero determinant. ### Characteristic factor A characteristic factor is a factor in a particular factorization of the totient function phi(n) such that the product of characteristic factors gives the representation of a corresponding abstract group as a group direct product. By computing the characteristic factors, any Abelian group can be expressed as a group direct product of cyclic subgroups, for example, the finite group C2×C4 or the finite group C2×C2×C2. There is a simple algorithm for determining the characteristic factors of modulo multiplication groups. ### Galois group Let L be an extension field of K, denoted L/K, and let Aut(L/K) be the set of automorphisms of L/K, that is, the set of automorphisms sigma of L such that sigma(x) = x for every x in K, so that K is fixed. Then Aut(L/K) is a group of transformations of L, called the Galois group of L/K. The Galois group of L/K is denoted Gal(L/K) or Aut(L/K). Let f(x) be a rational polynomial of degree n and let L be the splitting field of f over Q, i.e., the smallest subfield of C containing all the roots of f. Then each element of the Galois group Gal(L/Q) permutes the roots of f in a unique way. Thus Gal(L/Q) can be identified with a subgroup of the symmetric group S_n, the group of permutations of the roots of f. If f is irreducible, then Gal(L/Q) is a transitive subgroup of S_n, i.e., given two roots alpha and beta of f, there exists an element sigma of Gal(L/Q) such that sigma(alpha) = beta. The roots of f are solvable by radicals iff Gal(L/Q) is a solvable group. 
Since all subgroups of S_n with n <= 4 are solvable, the roots of all polynomials of degree up to 4 are solvable by radicals. However, polynomials of degree 5 or greater are generally not solvable by radicals since S_n (and the alternating.. ### Character table A finite group has a finite number of conjugacy classes and a finite number of distinct irreducible representations. The group character of a group representation is constant on a conjugacy class. Hence, the values of the characters can be written as an array, known as a character table. Typically, the rows are given by the irreducible representations and the columns are given by the conjugacy classes. A character table often contains enough information to identify a given abstract group and distinguish it from others. However, there exist nonisomorphic groups which nevertheless have the same character table, for example D_4 (the symmetry group of the square) and Q_8 (the quaternion group). For example, the symmetric group on three letters has three conjugacy classes, represented by the identity, a transposition such as (12), and a 3-cycle such as (123). It also has three irreducible representations; two are one-dimensional and the third is two-dimensional: 1. The trivial representation ... ### Fischer groups The Fischer groups are the three sporadic groups Fi_22, Fi_23, and Fi_24'. These groups were discovered during the investigation of 3-transposition groups. The Fischer groups are implemented in the Wolfram Language as FischerGroupFi22[], FischerGroupFi23[], and FischerGroupFi24Prime[]. The following table summarizes the orders of the Fischer groups.

group: order
Fi_22: 64561751654400
Fi_23: 4089470473293004800
Fi_24': 1255205709190661721292800

The baby monster group is sometimes called Fischer's baby monster group. ### Finite group t The finite group T is one of the three non-Abelian groups of order 12 (out of a total of five groups of order 12), the other two being the alternating group A_4 and the dihedral group D_6. 
However, it is highly unfortunate that the symbol T is used to refer to this particular group, since the symbol T is also used to denote the point group that constitutes the pure rotational subgroup of the full tetrahedral group and is isomorphic to A_4. Thus, of the three distinct non-Abelian groups of order 12, two different ones are each known as T under some circumstances. Extreme caution is therefore needed. T is the semidirect product of C_3 by C_4, with C_4 acting on C_3 by the inversion automorphism x -> x^(-1). The group can be constructed from two generators, whose products give the 12 group elements. The multiplication table is illustrated above. T has six conjugacy classes. There are 8 subgroups, and their lengths are 1, 2, 3, 4, 4, 4, 6, and 12. Of these, the following five are normal: the trivial subgroup, the center of order 2, the subgroup of order 3, the subgroup of order 6, and the entire group. ### Finite group c_2×c_6 The finite group C_2×C_6 is the finite group of order 12 that is the group direct product of the cyclic group C2 and cyclic group C6. It is one of the two Abelian groups of order 12, the other being the cyclic group C12. Examples include the modulo multiplication groups M_21, M_28, M_36, and M_42 (and no other modulo multiplication groups). The multiplication table is illustrated above. The numbers of elements for which x^k = 1 for k = 1, 2, ..., 12 are 1, 4, 3, 4, 1, 12, 1, 4, 3, 4, 1, 12. Each element of C_2×C_6 is in its own conjugacy class. There are 10 subgroups: the trivial subgroup, 3 of length 2, 1 of length 3, 1 of length 4, 3 of length 6, and the improper subgroup consisting of the entire group. Since C_2×C_6 is Abelian, all its subgroups are normal. Since it has normal subgroups other than the trivial subgroup and the improper subgroup, C_2×C_6 is not a simple group... ### Burnside problem The Burnside problem originated with Burnside (1902), who wrote, "A still undecided point in the theory of discontinuous groups is whether the group order of a group may be not finite, while the order of every operation it contains is finite." 
This question would now be phrased, "Can a finitely generated group be infinite while every element in the group has finite order?" (Vaughan-Lee 1993). This question was answered by Golod (1964) when he constructed finitely generated infinite p-groups. These groups, however, do not have a finite exponent. Let F_r be the free group of group rank r and let N be the normal subgroup generated by the set of nth powers {x^n : x in F_r}. Then N is a normal subgroup of F_r. Define B(r, n) = F_r/N to be the quotient group. We call B(r, n) the r-generator Burnside group of exponent n. It is the largest r-generator group of exponent n, in the sense that every other such group is a homomorphic image of B(r, n). The Burnside problem is usually stated as, "For which.. ### Finite group c_2×c_2×c_2 The group C_2×C_2×C_2 is one of the three Abelian groups of order 8 (the other two of the five groups of order 8 are non-Abelian). An example is the modulo multiplication group M_24 (which is the only modulo multiplication group isomorphic to C_2×C_2×C_2). The cycle graph is shown above. The elements of this group satisfy x^2 = 1, where 1 is the identity element. Its multiplication table is illustrated above. Each element is in its own conjugacy class. The 16 subgroups are the trivial subgroup, the seven subgroups of order 2, the seven subgroups of order 4, and the entire group. Since the group is Abelian, all of these are normal subgroups. ### Finite group c_2×c_2 The finite group C_2×C_2 is one of the two distinct groups of group order 4. The name of this group derives from the fact that it is a group direct product of two C_2 subgroups. Like the group C_4, C_2×C_2 is an Abelian group. Unlike C_4, however, it is not cyclic. The abstract group corresponding to C_2×C_2 is called the vierergruppe. Examples of the group include the point groups C_2h, C_2v, and D_2, and the modulo multiplication groups M_8 and M_12 (and no other modulo multiplication groups). That M_8, the residue classes prime to 8 given by {1, 3, 5, 7}, are a group of type C_2×C_2 can be shown by verifying that

1^2 = 3^2 = 5^2 = 7^2 = 1 (mod 8)

and

3·5 = 7, 5·7 = 3, 7·3 = 5 (mod 8).

M_8 is therefore a modulo multiplication group. The cycle graph is shown above. 
In addition to satisfying x^4 = 1 for each element x, it also satisfies x^2 = 1, where 1 is the identity element. Its multiplication table is illustrated above (Cotton 1990, p. 11). Since the group is Abelian, each element is in its own conjugacy class. Nontrivial proper subgroups of C_2×C_2 are the three subgroups of order 2. Now explicitly consider the elements.. ### Baby monster group The baby monster group, also known as Fischer's baby monster group, is the second-largest sporadic group. It is denoted B and has group order

|B| = 4154781481226426191177580544000000 = 2^41 · 3^13 · 5^6 · 7^2 · 11 · 13 · 17 · 19 · 23 · 31 · 47.

It is implemented in the Wolfram Language as BabyMonsterGroupB[]. ### Finite group A finite group is a group having finite group order. Examples of finite groups are the modulo multiplication groups, point groups, cyclic groups, dihedral groups, symmetric groups, alternating groups, and so on. Properties of finite groups are implemented in the Wolfram Language as FiniteGroupData[group, prop]. The classification theorem of finite groups states that the finite simple groups can be classified completely into one of five types. A convenient way to visualize groups is using so-called cycle graphs, which show the cycle structure of a given abstract group. For example, cycle graphs of the 5 nonisomorphic groups of order 8 are illustrated above (Shanks 1993, p. 85). Frucht's theorem states that every finite group is the graph automorphism group of a finite undirected graph. The finite (cyclic) group C_2 forms the subject for the humorous a cappella song "Finite Simple Group (of Order 2)" by the Northwestern University.. ### Alternating group An alternating group is a group of even permutations on a set of length n, denoted A_n or Alt(n) (Scott 1987, p. 267). Alternating groups are therefore permutation groups. The nth alternating group is represented in the Wolfram Language as AlternatingGroup[n]. An alternating group is a normal subgroup of the permutation group, and has group order n!/2, the first few values of which for n = 2, 3, ... are 1, 3, 12, 60, 360, 2520, ... (OEIS A001710). 
The alternating group A_n is (n-2)-transitive. Amazingly, the pure rotational subgroup of the icosahedral group is isomorphic to A_5. The full icosahedral group is isomorphic to the group direct product A_5×C_2, where C_2 is the cyclic group on two elements. Alternating groups with n >= 5 are simple groups (Scott 1987, p. 295), i.e., their only normal subgroups are the trivial subgroup and the entire group A_n. The number of conjugacy classes in the alternating groups A_n for n = 2, 3, ... are 1, 3, 4, 5, 7, 9, ... (OEIS A000702). A_n is the only nontrivial.. ### Abhyankar's conjecture For a finite group G, let p(G) be the subgroup generated by all the Sylow p-subgroups of G. If X is a projective curve in characteristic p > 0, and if x_0, ..., x_t are points of X (for t > 0), then a necessary and sufficient condition that G occur as the Galois group of a finite covering Y of X, branched only at the points x_0, ..., x_t, is that the quotient group G/p(G) has 2g + t generators, where g is the genus of X. Raynaud (1994) solved the Abhyankar problem in the crucial case of the affine line (i.e., the projective line with a point deleted), and Harbater (1994) proved the full Abhyankar conjecture by building upon this special solution. ### Classification theorem of finite groups The classification theorem of finite simple groups, also known as the "enormous theorem," which states that the finite simple groups can be classified completely into 1. Cyclic groups of prime group order, 2. Alternating groups of degree at least five, 3. Lie-type Chevalley groups given by PSL(n,q), PSU(n,q), PSp(2n,q), and PΩ^ε(n,q), 4. Lie-type (twisted Chevalley groups or the Tits group) ^3D_4(q), E_6(q), E_7(q), E_8(q), F_4(q), ^2F_4(2^n)', G_2(q), ^2G_2(3^n), ^2B_2(2^n), 5. Sporadic groups M_11, M_12, M_22, M_23, M_24, J_2 = HJ, Suz, HS, McL, Co_3, Co_2, Co_1, He, Fi_22, Fi_23, Fi_24', HN, Th, B, M, J_1, O'N, J_3, Ly, Ru, J_4. The "proof" of this theorem is spread throughout the mathematical literature and is estimated to be approximately 10,000 pages in length. ### Stochastic group The group of all nonsingular n×n stochastic matrices over a field F. It is denoted S(n, F). If p is prime and F is the finite field of order q = p^m, S(n, q) is written instead of S(n, F). 
Particular examples include five such identifications for small cases, expressed in terms of an Abelian group, symmetric groups on various numbers of elements, and semidirect products (Poole 1995).
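Tying together the group order and modulo multiplication group entries above, the order of an element of M_n can be computed by repeated multiplication (a naive sketch with a hypothetical function name; as the group order entry notes, faster methods exist when the group order and its factorization are known):

```python
def element_order(a, n):
    """Multiplicative order of a modulo n, i.e. the order of a in the
    modulo multiplication group M_n. Requires gcd(a, n) == 1."""
    x, k = a % n, 1
    while x != 1:
        x = (x * a) % n
        k += 1
    return k
```

For example, element_order(3, 7) returns 6, so 3 generates M_7 ≅ C_6; by Lagrange's theorem every such order divides |M_n| = phi(n).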
# What are Tips and Tricks to use MO more effectively? The purpose of this question is to collect a list of Tips and Tricks for using MO in a more efficient or otherwise better way. If you know some trick or have a tip you want to share with your MO-colleagues, just add an answer here. Notes: 2. Many will still remember the Tips and Tricks page from the old MO. This is (at least presently) gone, but even if it were to be restored, its content would need to be updated. A copy of this page was provided by Kaveh on the question of KConrad asking about the whereabouts of said page. There the idea for this question came up to organize the updating and (at least intermittent) displaying of this information. To just take an entry from the old list and to update it as needed (in some cases, perhaps not at all) and to copy it here as an answer is explicitly encouraged. • Likely it will be good to have this CW, so that editing will be simple for all answers. If I understood things correctly, I thus need to flag for moderators, which I just did. – user9072 Jul 6 '13 at 10:41 • You understand correctly. – Asaf Karagila Jul 6 '13 at 12:17 Getting the TeX source for a formula: By right-clicking on a formula (at least under Firefox) you get a menu that allows among other things to view its TeX and MathML source. (With Firefox you actually get two menus on top of each other, the second being a Firefox-specific menu, but you can let the second menu go away...) Edit histories. You can tell that a post has been edited because it says something like "edited 8 hours ago" or "edited yesterday" at the bottom in the middle. If you click on "8 hours ago" (resp. "yesterday"), you can see the full edit history of the post. [Taken with minimal change from the original list.] An additional detail. 
Even if a question was not yet edited, in the sense of the text having been changed, its edit history can still contain interesting information but there is no link to it, for example it contains the information who reopened a question (if this happened). A way to access it is to navigate manually to the URL http://mathoverflow.net/posts/xyz/revisions where xyz is the number of the question, which can be seen from its URL, the common form of the URL is http://mathoverflow.net/questions/xyz/the-title-of-the-question (note that here it says 'questions' while for the former it has to say 'posts'). [If somebody has a more convenient and still non-invasive way for the "additional detail", please mention it; by non-invasive it is meant other than by simply editing the question.] • – Kaveh Jul 6 '13 at 19:51 • Thank you for the pointer; I am as of yet completely unfamiliar with all the available scripts. This looks interesting. – user9072 Jul 6 '13 at 19:59 • You may want to have a look at useful userscripts. – Kaveh Jul 6 '13 at 23:06 Get latest posting: In the main page http://mathoverflow.net and also when viewing questions under https://mathoverflow.net/questions sorted by 'active' (yet not for the other ways of sorting there) every question has a line of the form "35m ago Unknown 1" which shows the time and the author of the latest answer or edit. If you click on the time component - here "35m ago" - you are directed to that posting. This also applies to and can be useful on this "meta". • This is a useful thing to know, in particular for questions that already have many answers. Thank you for sharing it. I only removed the mention of "comment" as it seems comments are not taken into account (also I made more explict in which tab-views one has this feature available). – user9072 Jul 7 '13 at 14:17 View Markdown source. Suppose you are really curious how somebody typeset something in a question or answer. Here are two ways to see the way it was done. 1. 
If the post has been edited, you can view the edit history (see another answer to this question) and click the "source" link above a revision. This will show you exactly what the person typed in order to get the result you see. If the post has not yet been edited, then it is still possible to proceed along these lines (see the above mentioned answer) but this is a bit tedious, so you might prefer the other way in this case. 2. Start as if you wanted to edit or to suggest an edit to the post, that is, via clicking the link "edit" or via clicking the link "improve this question" below the post (which one you will see depends on your account or on whether you are logged in, but for the purpose at hand it is all the same). Then you are shown exactly what the person typed. If you choose this option please be careful not to actually edit the post or to suggest an edit in the process: just do not click "save edits" but rather "cancel", or simply navigate to another page; this is fine (even if you changed something before). [Taken but modified from the original list, in view of changes relative to editing possibilities.] [Per quid's suggestion, I am placing here a slightly modified version of my answer to this other question explaining one nice use of the search function.] The search function (also usually available on the top right corner of the mathoverflow page) can be quite useful. For example, it can be used to show the questions on the front page while omitting all questions which were migrated (respectively, closed). In order to omit all questions which were migrated to another site, use the search function with the query: is:question migrated:no This is equivalent to visiting the url https://mathoverflow.net/search?q=is:question+migrated:no. You can also order these by "active" or "newest" by clicking on the corresponding tab, or by visiting e.g. https://mathoverflow.net/search?tab=active&q=is:question+migrated:no for the "active" tab. 
Essentially, this will list all questions on the front page while omitting the ones which were migrated. This view will also display the beginning text of each question, which is in my opinion an added bonus. If you want instead to remove all questions which were either closed (including questions on hold) or migrated, perform a search query for: is:question closed:no The search function can be quite useful in other ways: to search for tags (even allowing disjunctions, conjunctions, and exclusions of tags), questions and/or answers by a specific user, among other possibilities. For further references on search queries, see the search tips. These tips are also available when you use the search function on mathoverflow by clicking "Advanced Search Tips" on the right. • Thank you for the contribution to this thread! – user9072 Aug 6 '13 at 7:10 • @quid: My pleasure! – Ricardo Andrade Aug 6 '13 at 7:13
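As an aside, the query strings described above are ordinary URL parameters, so they can be assembled programmatically. A small illustrative sketch in Python (the helper name and interface are my own choices, not anything documented by the site):

```python
from urllib.parse import urlencode

def mo_search_url(query, tab=None):
    """Build a MathOverflow search URL for a query like 'is:question migrated:no'."""
    params = {"q": query}
    if tab is not None:
        params["tab"] = tab  # e.g. "active" or "newest"
    return "https://mathoverflow.net/search?" + urlencode(params)

# The two searches discussed above:
front_page = mo_search_url("is:question migrated:no")
active_tab = mo_search_url("is:question migrated:no", tab="active")
```

Note that `urlencode` percent-encodes the colons, which the site accepts just as well as the literal form shown in the answer.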
https://thelearningspace.sg/math/set-operations-intersection-and-difference-of-two-sets/
# Set Operations: Intersection And Difference Of Two Sets

Intersection and difference of two sets are two different set operations. In set theory, we perform different types of set operations, such as the intersection of sets, the difference of sets, the complement of a set and the union of sets. It is very easy to differentiate between the intersection and union operations. But what is the difference between the intersection and the difference of sets? Let us understand this in this article.

## What is Intersection of Sets?

The intersection of two sets A and B, which are subsets of the universal set U, is the set which consists of all those elements which are common to both A and B. It is denoted by the '∩' symbol. All those elements which belong to both A and B represent the intersection of A and B. Thus we can say that

A ∩ B = {x : x ∈ A and x ∈ B}

For n sets A1, A2, A3, ……, An, where all these sets are subsets of the universal set U, the intersection is the set of all the elements which are common to all these n sets. Depicting this pictorially, the shaded portion in the Venn diagram given below represents the intersection of the two sets A and B.

Figure 1 - Intersection of two sets
Figure 2 - Intersection of three sets

## Intersection of Two sets

If A and B are two sets, then the number of elements in their intersection is given by:

n(A ∩ B) = n(A) + n(B) − n(A ∪ B)

where n(A) is the cardinal number of set A, n(B) is the cardinal number of set B, and n(A ∪ B) is the cardinal number of the union of sets A and B. To understand this concept of intersection let us take an example.

## Example of Intersection of sets

Example: Let U be the universal set consisting of all the n-sided regular polygons where 5 ≤ n ≤ 9. If sets A, B and C are defined as:

A = {pentagon, hexagon, octagon}
B = {hexagon, nonagon, heptagon}
C = {nonagon}

find the intersection of the sets: i) A and B, ii) A and C.

Solution: U = {pentagon, hexagon, heptagon, octagon, nonagon}

i) The intersection is given by all the elements which are common to A and B.
A ∩ B = {hexagon}

ii) No element is common to A and C. Therefore A ∩ C = ∅.

Note: If we have two sets X and Y such that their intersection gives an empty set ∅, i.e. X ∩ Y = ∅, then these sets X and Y are called disjoint sets.

## Properties of Intersection of a Set

• Commutative Law: The intersection of two sets A and B follows the commutative law, i.e., A ∩ B = B ∩ A
• Associative Law: The intersection operation follows the associative law, i.e., if we have three sets A, B and C, then (A ∩ B) ∩ C = A ∩ (B ∩ C)
• Identity Law: The intersection of an empty set with any set A gives the empty set itself, i.e., A ∩ ∅ = ∅
• Idempotent Law: The intersection of any set A with itself gives the set A, i.e., A ∩ A = A
• Law of U: The intersection of a universal set U with its subset A gives the set A itself: A ∩ U = A
• Distributive Law: According to this law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)

## What is Difference of Sets?

The difference of two sets A and B is the set of elements which are present in A but not in B. It is denoted as A − B. In the following diagram, the region shaded in orange represents the difference of sets A and B, and the region shaded in violet represents the difference of B and A.

## Example of Difference of sets

Let A = {3, 4, 8, 9, 11, 12} and B = {1, 2, 3, 4, 5}. Find A − B and B − A.

Solution: We can say that A − B = {8, 9, 11, 12}, as these elements belong to A but not to B, and B − A = {1, 2, 5}, as these elements belong to B but not to A.

It is interesting, isn't it? There is yet a lot more to explore in Set Theory.
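These operations map directly onto Python's built-in set type, which can be used to check the worked difference example above and the relation between the cardinalities of A, B, A ∪ B and A ∩ B (a quick sketch, not part of the original article):

```python
# Worked example from the article, checked with Python's built-in set type
A = {3, 4, 8, 9, 11, 12}
B = {1, 2, 3, 4, 5}

inter = A & B        # intersection: elements in both A and B
diff_ab = A - B      # difference: in A but not in B
diff_ba = B - A      # difference: in B but not in A (note: not symmetric)

# Inclusion-exclusion for cardinalities relates n(A ∩ B) to n(A), n(B), n(A ∪ B)
assert len(A & B) == len(A) + len(B) - len(A | B)
```

Running this confirms that the difference operation is not symmetric: A − B and B − A contain different elements.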
https://www.rdocumentation.org/packages/spatstat/versions/1.64-1/topics/runifpointOnLines
runifpointOnLines

Generate N Uniform Random Points On Line Segments

Given a line segment pattern, generate a random point pattern consisting of n points uniformly distributed on the line segments.

Keywords: spatial, datagen

Usage

```r
runifpointOnLines(n, L, nsim=1, drop=TRUE)
```

Arguments

n: Number of points to generate.
L: Line segment pattern (object of class "psp") on which the points should lie.
nsim: Number of simulated realisations to be generated.
drop: Logical. If nsim=1 and drop=TRUE (the default), the result will be a point pattern, rather than a list containing a point pattern.

Details

This command generates a point pattern consisting of n independent random points, each point uniformly distributed on the line segment pattern. This means that, for each random point,

• the probability of falling on a particular segment is proportional to the length of the segment; and
• given that the point falls on a particular segment, it has uniform probability density along that segment.

If n is a single integer, the result is an unmarked point pattern containing n points. If n is a vector of integers, the result is a marked point pattern, with m different types of points, where m = length(n), in which there are n[j] points of type j.

Value

If nsim = 1, a point pattern (object of class "ppp") with the same window as L. If nsim > 1, a list of point patterns.

See Also

psp, ppp, pointsOnLines, runifpoint

Aliases

runifpointOnLines

Examples

```r
X <- psp(runif(10), runif(10), runif(10), runif(10), window=owin())
Y <- runifpointOnLines(20, X)
plot(X, main="")
plot(Y, add=TRUE)
Z <- runifpointOnLines(c(5,5), X)
```

Documentation reproduced from package spatstat, version 1.64-1, License: GPL (>= 2)
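The two-step recipe in the Details section (pick a segment with probability proportional to its length, then draw a uniform position along it) translates directly to other languages. Below is a small pure-Python illustration; the function name and interface are my own, not part of spatstat:

```python
import random
import math

def runif_points_on_segments(n, segments, rng=random):
    """Sample n points uniformly on a list of segments ((x0, y0), (x1, y1)).

    A segment is chosen with probability proportional to its length,
    then a position is drawn uniformly along it.
    """
    lengths = [math.dist(p, q) for p, q in segments]
    points = []
    for _ in range(n):
        (x0, y0), (x1, y1) = rng.choices(segments, weights=lengths)[0]
        t = rng.uniform(0.0, 1.0)  # uniform position along the chosen segment
        points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points
```

Every sampled point lies on one of the input segments, and longer segments receive proportionally more points, matching the two properties stated above.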
http://algebrarules.com/rule-13-convert-a-multiplication-with-an-exponent-into-the-product-of-two-factors-each-raised-to-the-exponent
Algebrarules.com: The most useful rules of basic algebra, free, simple, & intuitively organized

Howdy! Here are a few very handy rules of algebra. These basic rules are useful for everything from figuring out your gas mileage to acing your next math test — or even solving equations from the far reaches of theoretical physics. Happy calculating!

Algebra Rule 13: Convert a multiplication with an exponent into the product of two factors, each raised to the exponent

$$(ab)^n = a^nb^n$$

Description: Thanks to the commutative property of multiplication, any series of multiplications can be rearranged without changing its value. This means that we can take a multiplication raised to a power and rearrange the resulting series of multiplications into two separate powers:

$$(4*5)^3 = (4*5)(4*5)(4*5) = 4*5*4*5*4*5 = 4*4*4*5*5*5 = 4^3*5^3$$
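A quick numerical spot-check of the rule, including the example from the page (nothing here is specific to the values chosen):

```python
# Spot-check of (ab)^n = a^n * b^n, including the 4, 5, 3 example from the page
assert (4 * 5) ** 3 == 4 ** 3 * 5 ** 3   # both sides equal 8000
for a, b, n in [(2, 7, 5), (3, 3, 0), (1.5, 4, 2)]:
    assert abs((a * b) ** n - a ** n * b ** n) < 1e-9
```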
https://tutorials.one/a-gentle-introduction-to-maximum-likelihood-estimation-for-machine-learning/
# A Gentle Introduction to Maximum Likelihood Estimation for Machine Learning

Last Updated on November 5, 2019

Density estimation is the problem of estimating the probability distribution for a sample of observations from a problem domain. There are many techniques for solving density estimation, although a common framework used throughout the field of machine learning is maximum likelihood estimation. Maximum likelihood estimation involves defining a likelihood function for calculating the conditional probability of observing the data sample given a probability distribution and distribution parameters. This approach can be used to search a space of possible distributions and parameters. This flexible probabilistic framework also provides the foundation for many machine learning algorithms, including important methods such as linear regression and logistic regression for predicting numeric values and class labels respectively, but also more generally for deep learning artificial neural networks.

In this post, you will discover a gentle introduction to maximum likelihood estimation. After reading this post, you will know:

• Maximum Likelihood Estimation is a probabilistic framework for solving the problem of density estimation.
• It involves maximizing a likelihood function in order to find the probability distribution and parameters that best explain the observed data.
• It provides a framework for predictive modeling in machine learning where finding model parameters can be framed as an optimization problem.

Let's get started.

## Overview

This tutorial is divided into three parts; they are:

1.
Problem of Probability Density Estimation
2. Maximum Likelihood Estimation
3. Relationship to Machine Learning

## Problem of Probability Density Estimation

A common modeling problem involves how to estimate a joint probability distribution for a dataset. For example, given a sample of observations (X) from a domain (x1, x2, x3, …, xn), where each observation is drawn independently from the domain with the same probability distribution (so-called independent and identically distributed, i.i.d., or close to it), density estimation involves selecting a probability distribution function and the parameters of that distribution that best explain the joint probability distribution of the observed data (X).

• How do you choose the probability distribution function?
• How do you choose the parameters for the probability distribution function?

This problem is made more challenging as the sample (X) drawn from the population is small and has noise, meaning that any evaluation of an estimated probability density function and its parameters will have some error.

There are many techniques for solving this problem, although two common approaches are:

• Maximum a Posteriori (MAP), a Bayesian method.
• Maximum Likelihood Estimation (MLE), a frequentist method.

The main difference is that MLE assumes that all solutions are equally likely beforehand, whereas MAP allows prior information about the form of the solution to be harnessed. In this post, we will take a closer look at the MLE method and its relationship to applied machine learning.

## Maximum Likelihood Estimation

One solution to probability density estimation is referred to as Maximum Likelihood Estimation, or MLE for short.
Maximum Likelihood Estimation involves treating the problem as an optimization or search problem, where we seek a set of parameters that results in the best fit for the joint probability of the data sample (X).

First, it involves defining a parameter called theta that defines both the choice of the probability density function and the parameters of that distribution. It may be a vector of numerical values whose values change smoothly and map to different probability distributions and their parameters.

In Maximum Likelihood Estimation, we wish to maximize the probability of observing the data from the joint probability distribution given a specific probability distribution and its parameters, stated formally as:

• P(X ; theta)

This conditional probability is often stated using the semicolon (;) notation instead of the bar notation (|) because theta is not a random variable, but instead an unknown parameter. For example:

• P(X ; theta)

or

• P(x1, x2, x3, …, xn ; theta)

This resulting conditional probability is referred to as the likelihood of observing the data given the model parameters and written using the notation L() to denote the likelihood function. For example:

• L(X ; theta)

The objective of Maximum Likelihood Estimation is to find the set of parameters (theta) that maximize the likelihood function, e.g. result in the largest likelihood value.

We can unpack the conditional probability calculated by the likelihood function. Given that the sample is comprised of n examples, we can frame this as the joint probability of the observed data samples x1, x2, x3, …, xn in X given the probability distribution parameters (theta).

• L(x1, x2, x3, …, xn ; theta)

The joint probability distribution can be restated as the multiplication of the conditional probability for observing each example given the distribution parameters.
• product i to n P(xi ; theta)

Multiplying many small probabilities together can be numerically unstable in practice; therefore, it is common to restate this problem as the sum of the log conditional probabilities of observing each example given the model parameters.

• sum i to n log(P(xi ; theta))

where the base-e logarithm, called the natural logarithm, is commonly used.

This product over many probabilities can be inconvenient […] it is prone to numerical underflow. To obtain a more convenient but equivalent optimization problem, we observe that taking the logarithm of the likelihood does not change its arg max but does conveniently transform a product into a sum

— Page 132, Deep Learning, 2016.

Given the frequent use of log in the likelihood function, it is commonly referred to as a log-likelihood function. It is common in optimization problems to prefer to minimize the cost function rather than to maximize it. Therefore, the negative of the log-likelihood function is used, referred to generally as a Negative Log-Likelihood (NLL) function.

• minimize -sum i to n log(P(xi ; theta))

In software, we often phrase both as minimizing a cost function.

Maximum likelihood thus becomes minimization of the negative log-likelihood (NLL) …

— Page 133, Deep Learning, 2016.

## Relationship to Machine Learning

This problem of density estimation is directly related to applied machine learning. We can frame the problem of fitting a machine learning model as the problem of probability density estimation. Specifically, the choice of model and model parameters is referred to as a modeling hypothesis h, and the problem involves finding h that best explains the data X. We can, therefore, find the modeling hypothesis that maximizes the likelihood function.
Or, more fully:

• maximize sum i to n log(P(xi ; h))

This provides the basis for estimating the probability density of a dataset, typically used in unsupervised machine learning algorithms; for example:

Using the expected log joint probability as a key quantity for learning in a probability model with hidden variables is better known in the context of the celebrated "expectation maximization" or EM algorithm.

— Page 365, Data Mining: Practical Machine Learning Tools and Techniques, 4th edition, 2016.

The Maximum Likelihood Estimation framework is also a useful tool for supervised machine learning. This applies to data where we have input and output variables, where the output variable may be a numerical value or a class label in the case of regression and classification predictive modeling, respectively. We can state this as the conditional probability of the output (y) given the input (X) and the modeling hypothesis (h). Or, more fully:

• maximize sum i to n log(P(yi|xi ; h))

The maximum likelihood estimator can readily be generalized to the case where our goal is to estimate a conditional probability P(y | x ; theta) in order to predict y given x. This is actually the most common situation because it forms the basis for most supervised learning.

— Page 133, Deep Learning, 2016.

This means that the same Maximum Likelihood Estimation framework that is generally used for density estimation can be used to find a supervised learning model and parameters. This provides the basis for foundational linear modeling techniques, such as:

• Linear Regression, for predicting a numerical value.
• Logistic Regression, for binary classification.

In the case of linear regression, the model is constrained to a line and involves finding a set of coefficients for the line that best fits the observed data. Fortunately, this problem can be solved analytically (e.g. directly using linear algebra).
In the case of logistic regression, the model defines a line and involves finding a set of coefficients for the line that best separates the classes. This cannot be solved analytically and is often solved by searching the space of possible coefficient values using an efficient optimization algorithm such as the BFGS algorithm or variants. Both methods can also be solved less efficiently using a more general optimization algorithm such as stochastic gradient descent.

In fact, most machine learning models can be framed under the maximum likelihood estimation framework, providing a useful and consistent way to approach predictive modeling as an optimization problem. An important benefit of the maximum likelihood estimator in machine learning is that as the size of the dataset increases, the quality of the estimator continues to improve.

## Summary

In this post, you discovered a gentle introduction to maximum likelihood estimation. Specifically, you learned:

• Maximum Likelihood Estimation is a probabilistic framework for solving the problem of density estimation.
• It involves maximizing a likelihood function in order to find the probability distribution and parameters that best explain the observed data.
• It provides a framework for predictive modeling in machine learning where finding model parameters can be framed as an optimization problem.
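To make the negative log-likelihood machinery described earlier concrete, the following pure-Python sketch grid-searches the mean of a Gaussian with known unit variance and keeps the NLL minimizer; for this model the answer should coincide with the sample mean. This is an illustrative sketch of mine, not code from the original post:

```python
import math

def gaussian_nll(data, mu, sigma=1.0):
    """Negative log-likelihood -sum_i log P(x_i ; mu, sigma) for i.i.d. Gaussian data."""
    return sum(
        0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)
        for x in data
    )

data = [1.2, 0.8, 1.5, 0.9, 1.1]

# MLE by brute force: scan candidate means and keep the NLL minimizer
candidates = [i / 100 for i in range(301)]
mu_hat = min(candidates, key=lambda mu: gaussian_nll(data, mu))
# For a Gaussian with known sigma, the MLE of mu is the sample mean (1.1 here)
```

In practice one would use an optimizer rather than a grid, but the grid makes the "arg min of the NLL" framing explicit.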
https://timodenk.com/blog/tag/math/
## Linear Relationships in the Transformer’s Positional Encoding In June 2017, Vaswani et al. published the paper “Attention Is All You Need” describing the “Transformer” architecture, which is a purely attention based sequence to sequence model. It can be applied to many tasks, such as language translation and text summarization. Since its publication, the paper has been cited more than one thousand times and several excellent blog posts were written on the topic; I recommend this one. Vaswani et al. use positional encoding, to inject information about a token’s position within a sentence into the model. The exact definition is written down in section 3.5 of the paper (it is only a tiny aspect of the Transformer, as the red circle in the cover picture of this post indicates). After the definition, the authors state: “We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.” But why is that? In this post I prove this linear relationship between relative positions in the Transformer’s positional encoding. Continue reading Linear Relationships in the Transformer’s Positional Encoding ## Notes on “Haskell Programming – from first principles” From November, 13th 2017 to June, 9th 2018, a friend and I were working our way through the 1285 pages of “Haskell Programming – from first principles” by Christopher Allen and Julie Moronuki. That’s more than six pages per day! While reading and discussing, I took a few notes here and there, which I want to publish in this post. Some of the sentences are directly taken from the book, which I highly recommend to anyone who wants to learn Haskell, by the way. Continue reading Notes on “Haskell Programming – from first principles” ## LaTeX Plot Snippets The LaTeX package tikz contains a set of commands that can render vector based graphs and plots. 
This post contains several examples which are intended to be used as cut-and-paste boilerplate. Each sample comes with a screenshot and a snippet that contains the relevant parts of the LaTeX source code. The entire source code can be examined by clicking on the individual images. Continue reading LaTeX Plot Snippets ## Graph Theory Overview In the field of computer science, a graph is an abstract data type that is widely used to represent connections and relationships. This post gives an overview of a selection of definitions, terms, and algorithms which are related to graphs. The content was put together during preparation for a theoretical computer science test at Cooperative State University Baden-Württemberg and is mostly taken from either Wikipedia or lecture notes. Continue reading Graph Theory Overview ## Least Squares Derivation The least squares optimization problem searches for a vector that minimizes the Euclidean norm in the following statement: $$x_\text{opt}=\arg\min_x\frac{1}{2}\left\lVert Ax-y\right\rVert^2_2\,.$$ This article explains how $x_\text{opt}=(A^\top A)^{-1}A^\top y$, the solution to the problem, can be derived and how it can be used for regression problems. Continue reading Least Squares Derivation ## Cubic Spline Interpolation Cubic spline interpolation is a mathematical method commonly used to construct new points within the boundaries of a set of known points. These new points are function values of an interpolation function (referred to as a spline), which itself consists of multiple cubic piecewise polynomials. This article explains how the computation works mathematically. After an introduction, it defines the properties of a cubic spline, then it lists different boundary conditions (including visualizations), and provides a sample calculation.
Furthermore, it acts as a reference for the mathematical background of the cubic spline interpolation tool on tools.timodenk.com which is introduced at the end of the article. Continue reading Cubic Spline Interpolation ## Guess Solutions of Polynomials For a given polynomial of $n$th degree $$P_n(x)=\sum_{i=0}^n a_ix^i = a_nx^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0$$ you can guess rational solutions $x$ for the corresponding problem $P_n(x)=0$ by applying the following two rules: 1. $$x=\frac{p}{q}\text{, with } p \in \mathbb{Z} \land q \in \mathbb{N}\land p\mid a_0 \land q\mid a_n$$ 2. $$\lvert x\rvert\le2\cdot \max\left\lbrace \sqrt[k]{\frac{\lvert a_{n-k}\rvert}{\lvert a_n\rvert}}, k=1, \dots, n\right\rbrace$$ ## Trigonometric Functions Formulary This formulary has been created during the online onboarding process at Baden-Wuerttemberg Cooperative State University (DHBW). It is suitable for the related online tests and might be helpful for other people seeking formulas in this field of mathematics. ##### Basics $$\begin{array}{l} \tan x = \frac{\sin x}{\cos x}\\ \cot x = \frac{1}{\tan x} = \frac{\cos x}{\sin x} \end{array}$$ Continue reading Trigonometric Functions Formulary
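The closed-form least-squares solution quoted in the "Least Squares Derivation" entry, $x_\text{opt}=(A^\top A)^{-1}A^\top y$, can be checked numerically. Below is a minimal pure-Python sketch for fitting a line $y = mx + b$ via the 2x2 normal equations; the helper is my own illustration, not code from the blog:

```python
def fit_line(xs, ys):
    """Solve the 2x2 normal equations (A^T A) x = A^T y for A = [[x_i, 1]]."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy = sum(ys)
    # Normal equations: [[sxx, sx], [sx, n]] @ [m, b] = [sxy, sy], by Cramer's rule
    det = sxx * n - sx * sx
    m = (sxy * n - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return m, b

# Points generated exactly from y = 2x + 1 are recovered exactly
m, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```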
https://www.gradesaver.com/textbooks/science/physics/fundamentals-of-physics-extended-10th-edition/chapter-34-images-problems-page-1039/19e
## Fundamentals of Physics Extended (10th Edition)

Published by Wiley

# Chapter 34 - Images - Problems - Page 1039: 19e

#### Answer

$i=-10cm$

#### Work Step by Step

According to Table 34-4 and 19e, the image distance $i=-10cm$.
https://www.nature.com/articles/s41467-021-21252-x
# Bridging scales in disordered porous media by mapping molecular dynamics onto intermittent Brownian motion

## Abstract

Owing to their complex morphology and surface, disordered nanoporous media possess a rich diffusion landscape leading to specific transport phenomena. The unique diffusion mechanisms in such solids stem from restricted pore relocation and ill-defined surface boundaries. While diffusion fundamentals in simple geometries are well-established, fluids in complex materials challenge existing frameworks. Here, we invoke the intermittent surface/pore diffusion formalism to map molecular dynamics onto random walk in disordered media. Our hierarchical strategy allows bridging microscopic/mesoscopic dynamics with parameters obtained from simple laws. The residence and relocation times – tA, tB – are shown to derive from pore size d and temperature-rescaled surface interaction ε/kBT. tA obeys a transition state theory with a barrier ~ε/kBT and a prefactor ~10⁻¹² s corrected for pore diameter d. tB scales with d, which is rationalized through a cutoff in the relocation first-passage distribution. This approach provides a formalism to predict any fluid diffusion in complex media using parameters available to simple experiments.

## Introduction

Fluid diffusion in porous media involves complex phenomena arising from the restricted diffusivity imposed by the host porous geometry and the fluid/solid interaction1,2,3,4. While the medium morphology and topology impact the fluid dynamics at almost any pore lengthscale d, the effect of fluid/solid forces roughly scales with the porous surface-to-volume ratio S/V ~ d⁻¹ (refs. 5,6).
This leads to rich dynamics in nanoporous media (for which d is of the order of the intermolecular force range ζ) with intriguing aspects such as anomalous single-file diffusion, intermittent Brownian dynamics, stop-and-go diffusion with an underlying surface residence time, etc.7. For simple pore morphologies (e.g., planar, cylindrical), a unifying picture has emerged with well-identified dependence on temperature T, fluid density ρ, mean free path λ, pore size d, fluid molecule size σ, etc.5,8,9 Single-file diffusion is restricted to d ~ σ while the diffusion mechanism for d ≫ σ depends on ρ and λ: Knudsen diffusion for fluids with λ ≫ d and molecular diffusion for λ ≪ d. For materials with large S/V, diffusion involves intermittent dynamics with subsequent surface adsorption and in-pore relocation steps10,11. When relocation is negligible (i.e. at low T and/or ρ where the pore center is depleted in fluid), diffusion is governed by surface diffusion described using the Reed–Ehrlich model12,13. In contrast, when relocation contributes to the overall dynamics (non-negligible pore center density), the intermittent Brownian motion is a rigorous formalism to upscale the local microscopic dynamics to any upper scale14. Diffusion in disordered porous media is far more complex as coupled geometrical and surface interaction effects lead to novel phenomena15,16,17. The fluid diffusion in such heterogeneous solids involves a non-trivial diffusivity landscape as surface diffusion/in-pore relocation boundaries are ill-defined. Diffusion in such rough landscapes is even more puzzling for nanoporous media as (1) the underlying propagators – i.e., the probability that a molecule moves by a quantity r in a time t – are not necessarily Fickian18, and (2) non-vanishing surface interactions in the pore lead to a self-diffusivity Ds different from the bulk even far from the surface19.
Due to the continuum hypothesis breakdown at the nanoscale1,16, statistical mechanics is the appropriate formalism for complex diffusion in disordered media7. In particular, generalization of molecular intermittence to heterogeneous media using the Fokker-Planck or path integral formalisms allows linking microscopic to macroscopic dynamics20. However, while these approaches rely on available material parameters (e.g. porosity ϕ, S/V ratio, structure factor S(q)), fluid dynamics concepts such as surface residence, in-pore relocation, and their time constants are often used as guessed inputs (typically, relocation/surface diffusion are assumed to be Fickian with diffusivities equal to or orders of magnitude slower than the bulk14). While this qualitatively captures the complex dynamics at play, there is a strong need to establish physical laws from simple parameters such as pore size d and fluid/solid interaction strength ε. In this context, hierarchical simulations13,21,22 allow upscaling the microscopic dynamics assessed from atom-scale simulations into kinetic Monte Carlo simulations; a precalculated free energy map ΔF(r) is used in a random walk approach with corrected hopping rates $$k \sim \exp [-{{\Delta }}F({\bf{r}})/{k}_{{\rm{B}}}T]$$23,24. However, extension to disordered solids is almost intractable because of their large representative elementary volume. Moreover, despite their robustness, such extensive simulations do not provide simple laws based on d, T, ε, ϕ, etc. because they are performed for a particular system under given thermodynamic and dynamical conditions. Here, we address the problem of fluid diffusion in ultraconfining disordered nanoporous materials by reporting robust physical laws established in the framework of surface/pore diffusion intermittence.
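As a concrete illustration of the corrected hopping rates used in such kinetic Monte Carlo schemes, the sketch below computes Metropolis-style rates on a toy 1D free-energy landscape and checks detailed balance. This is a minimal sketch under stated assumptions: the landscape, the attempt frequency ν, and the Metropolis rate form are illustrative choices, not the actual scheme of refs. 23,24.

```python
import math

def hop_rate(F_from, F_to, kBT, nu=1.0):
    """Metropolis-style rate for a hop between lattice sites:
    k = nu * exp(-(F_to - F_from)/kBT) uphill, k = nu downhill."""
    dF = F_to - F_from
    return nu * math.exp(-max(dF, 0.0) / kBT)

# Toy 1D free-energy landscape (units of kBT); site 1 sits 0.5 kBT above site 0
F = [0.0, 0.5, 0.2]
kBT = 1.0

k01 = hop_rate(F[0], F[1], kBT)  # uphill hop: exp(-0.5)
k10 = hop_rate(F[1], F[0], kBT)  # downhill hop: attempt frequency nu = 1.0

# Detailed balance: k01/k10 = exp(-(F[1] - F[0])/kBT), so the random walk
# samples the Boltzmann distribution over sites in the long run
balance = k01 / k10
```

The detailed-balance ratio is what ties such hopping rates to the equilibrium free-energy map, whatever the landscape.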
By mapping molecular dynamics (MD) simulations onto mesoscopic random walk (RW) calculations accounting for surface residence, our hierarchical approach captures the fluid diffusion in disordered nanoporous media and their underlying complex diffusivity landscapes. Moreover, by varying the matrix porosity ϕ and pore size d but also the fluid/solid interaction strength ε, the proposed approach provides a means to quantitatively bridge the microscopic and mesoscopic dynamics in such complex environments using simple parameters. Both the typical surface residence and relocation times – tA, tB – are found to derive from physical laws involving the pore size d and the fluid/solid interaction strength normalized to the thermal energy ε/kBT. In more detail, tA is shown to obey a transition state theory $${t}_{A} \sim {t}_{A}^{0}\exp (-{{\Delta }}F/{k}_{{\rm{B}}}T)$$ where ΔF ~ ε is the free energy barrier that must be overcome to escape from the interaction field generated by the solid and $$1/{t}_{A}^{0}$$ is the characteristic escape attempt frequency. $${t}_{A}^{0}$$ is found to be of the order of ~10⁻¹² s (a commonly accepted value) with a correction that accounts for pore diameter d/ξ (with ξ ~ σ, i.e. the molecule size). As for the relocation time tB, it is shown to scale with d as quantitatively predicted by introducing a time cutoff tc ~ d²/D0 in the relocation first-passage probability distribution.

## Results

Our coarse-grained model is developed in the spirit of the continuous time random walk (CTRW) as first proposed by Montroll and Weiss25 and later extended by Shlesinger and Klafter26 to the Levy walk model and other variants. The intermittent dynamics proposed in our approach involves a waiting time distribution at the pore surface coupled to a bridge statistics taking into account the first-passage probability to connect one point at the interface to another through a random walk in the accessible pore network.
The latter statistics couples distance and time as in the Levy walk (coupled memory) which also pertains to the Knudsen regime27. In the present approach, we will mainly consider the time distribution of these bridge statistics which are mapped onto atom-scale dynamics simulations to establish a bridge between the microscopic and mesoscopic scales. While the mapping proposed in this paper is derived for a simple fluid confined in prototypical models of highly disordered materials, we believe it can be extended to a much broader class of fluid/solid pairs. However, such generalization must be performed with caution as there are a number of limitations which can lead to departure from the simple intermittent Brownian motion at the heart of our approach. Depending on the nature of the confined fluid and host solid, different molecular interactions are at play which are either short-ranged (e.g., dispersion, repulsion) or long-ranged (e.g., electrostatic, polarization). While such molecular interactions often lead to similar confined diffusivity mechanisms, they can induce more complex behaviors that are not entirely captured by a simple stop-and-go process. Moreover, for host solids with spatially-extended pore correlations (e.g. fractal solids), additional complexity and/or additional specific effects are expected.

### Different topological porous media

Fluid diffusion in disordered nanoporous media was investigated by considering a set of 13 heterogeneous carbon structures with different densities ρs, porosities ϕ, and pore sizes d. These structures—referred to as CSx with x the density ρs ranging from 0.5 to 1.4 g/cm3—were created using a quenching procedure (see Methods for full details).
Typically, using a cubic box of length 100 Å containing ~ 25,000 to ~ 70,000 carbon atoms depending on ρs, molecular dynamics in the NVT ensemble was used with the reactive empirical bond order potential28 in LAMMPS29 to allow for bond formation/breaking during a 5 ns quench from 3000 K to 300 K. As an example, Fig. 1a shows the sample CS0.70 filled with fluid molecules at their boiling point (as described in the Methods section, the adsorbed fluid density was estimated using standard Monte Carlo simulations in the Grand Canonical μVT ensemble). Figure 1b shows the porosity ϕ as a function of ρs where ϕ is determined using a Monte Carlo algorithm; by inserting N probe molecules at random positions inside the simulation box containing the porous structure, the porosity can be estimated as the ratio ϕ ~ Nv/N (where Nv is the number of probe molecules that do not overlap with any of the porous structure atoms). For each structure, provided a large number Nv is considered, the pore size distribution f(r) can be assessed from the diameter of the largest sphere containing each of the Nv points (Supplementary Fig. 4)30,31. As expected, both the porosity ϕ and the mean pore size d = ∫rf(r)dr decrease upon increasing ρs with ϕ ∈ [~0.1, ~0.5] and d varying from a few to ~15 Å. Only structures with connected porosity were retained to investigate multiscale diffusion as unconnected porous samples necessarily yield zero self-diffusivities in the long time limit. For each sample, as described in the Methods section, the connectivity of the porous subspace accessible to a diffusing molecule was determined using the retraction graph associated with the digitized pore network (which conserves the topology at all scales). Such digitized binary sets are used to compute the connection number32,33,34: $${c}_{t}=-({\alpha }_{0}-{\alpha }_{1})/{\alpha }_{0}$$ (1) where α0 and α1 are the number of vertexes (either isolated or connected) and links, respectively.
ct, which is a simple intensive parameter related to the number of irreducible paths per vertex, is invariant under any continuous pore network deformation. For structures with no isolated vertexes, the number of vertexes is smaller than the number of links, i.e., α0 < α1, so that ct > 0 with a value that increases with pore network connectivity—in this case, the average number of links around a connected vertex is given by $$<{N}_{c}> =2({c}_{t}+1)$$. On the other hand, for poorly connected pore networks, α0 > α1 so that ct < 0 with ct ∈ [−1, 0]—in this case, $$<{N}_{c}> \to 0$$ as ct → −1 so that the topological structure reduces to a set of isolated vertexes. In the above picture, the crossover ct = 0 is generally assumed to correspond to a percolation threshold32,33. Figure 1b shows that ct > 0 for ρs ≤ 1.0 g/cm3 as expected for a connected pore network (although ct is lower than typical values for very open networks ct ~ 0.5−0.7 (refs. 32,33)). On the other hand, ct < 0 for ρs > 1.0 g/cm3, therefore indicating a long-range network disconnection for these dense porous structures.

### Intermittent Brownian motion with underlying stop-and-go diffusion

Diffusion in the disordered media with connected porosity (ct > 0) was investigated using MD for a simple Lennard–Jones (LJ) fluid at constant temperature and for varying fluid/solid interaction strengths. For such subnanoporous materials with strongly disordered pore morphologies, provided the number of adsorbed/confined molecules is low enough, the self-diffusivity Ds is close to the collective diffusivity Dc as cross-terms between fluid molecules are negligible (because fluid-solid interactions largely prevail over fluid-fluid interactions)16. As a result, due to the formal equivalence between permeance and collective diffusivity, i.e. K = Dc/ρkBT ~ Ds/ρkBT, the self-diffusivity also provides key insights into transport mechanisms under flow conditions as induced by pressure/chemical potential gradients.
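The probe-insertion porosity estimate described earlier (ϕ ~ Nv/N) can be sketched as follows. This is a minimal illustration, not the authors' code: the single-sphere "matrix", box size, and probe count are hypothetical stand-ins for the CSx structures.

```python
import random, math

def mc_porosity(atoms, sigma, box, n_probe=20000, seed=0):
    """Estimate porosity as the fraction of random probe points lying
    farther than sigma from every matrix atom (phi ~ Nv/N)."""
    rng = random.Random(seed)
    n_void = 0
    for _ in range(n_probe):
        p = [rng.uniform(0, box) for _ in range(3)]
        if all(math.dist(p, a) > sigma for a in atoms):
            n_void += 1
    return n_void / n_probe

# Single excluded sphere of radius sigma at the box center:
# exact porosity is 1 - (4/3)*pi*sigma^3 / box^3
box, sigma = 10.0, 2.0
phi = mc_porosity([(5.0, 5.0, 5.0)], sigma, box)
expected = 1.0 - (4.0 / 3.0) * math.pi * sigma**3 / box**3
```

For this single-sphere geometry the Monte Carlo estimate can be checked against the exact excluded-volume fraction; for the real structures the same sampling works atom by atom.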
The LJ fluid parameters (σ0 = 3.81 Å, ε0/kB = 148.1 K) were chosen to match those for methane—a simple nearly spherical probe. An isotropic molecular model—known as the united atom model—was used to describe the methane molecule. Such a simplified model was selected as it simply corresponds to a Lennard–Jones potential that is representative of a broad class of atomic and molecular liquids. Despite this simple fluid hypothesis, we believe that our approach can be extended to more complex fluids such as dipolar molecules. In particular, even if complex molecular structures lead to richer surface thermodynamics behavior with strong adsorption in specific sites and/or relocation with large inherent activation energies, the present approach remains relevant as such complexity is embedded—at least in an effective fashion—into the mean relocation and residence times. For each disordered porous structure, different fluid/solid strengths were considered: ε/kBT = n with n = 0.01, 0.1, 0.2, 0.3, 0.5, 0.8, and 1. Varying ε drastically affects the porosity seen by the confined fluid since it also modifies the repulsive interaction contribution—thus inducing large changes in the effective diffusivity of the confined fluid. To probe fluid dynamics at constant porosity while scanning a broad range of ε, the fluid/surface LJ potential was modified using a smoothing procedure involving a sigmoid function to rescale the potential well-depth at nearly constant repulsive contribution (see Methods for details). The temperature was chosen equal to T = 450 K ~ 3ε0/kB to ensure that the Fickian regime is reached in all cases over the typical simulation run length (20 ns). Supplementary Fig. 6 shows the mean square displacement $$<| {\bf{r}}(t)-{\bf{r}}(0){| }^{2}>$$ as a function of time t for methane confined in the different disordered nanoporous materials (only data for ε/kBT = 0.1 are shown for clarity).
Typically, for the disordered materials considered here, the Fickian regime is reached after a few ns as each molecule diffuses over a length scale of the order of the simulation box size L ~ 10 nm. While such convergence is reached within typical timescales probed using molecular dynamics for these disordered materials with connected porosity (ct > 0), there are materials classes where the long-time limit extends to much longer timescales. This includes solids with long-range pore correlations such as in fractal media or strong persistence length such as in one-dimensional pores. As shown in the inset in Fig. 2a, for all systems, the self-diffusivity Ds – which is inferred from the Fickian regime in the long time limit $${D}_{s}={\mathrm{lim}\,}_{t\to \infty }<| {\bf{r}}(t)-{\bf{r}}(0){| }^{2}> /6t$$ – is lower than the bulk self-diffusivity $${D}_{s}^{0}$$. As a result, the tortuosity $${\tau }_{{\rm{MD}}}={D}_{s}^{0}/{D}_{s}$$ – defined as the ratio of the bulk to the confined self-diffusivities—is larger than 1 as shown in Fig. 2a. As expected, upon increasing ε/kBT, the average fluid/surface energy 〈Ufw〉 becomes more negative (attractive) so that τMD increases due to the increased tortuosity adsorption/residence contribution. Moreover, τMD increases upon increasing the solid density ρs as more severe confinement leads to smaller diffusivity (as shown in Fig. 1, the pore size $$d \sim {\rho }_{s}^{-x}$$ with x ~ 1). The underlying microscopic diffusion mechanism in such ultra-confining materials can be identified by computing the self-correlation function Gs(r, t). In particular, in an isotropic medium, 4πr2Gs(r, t)dr is the probability distribution that a molecule moves by a distance r over a time t. As shown in Fig. 
2b (see black dashed line), upon averaging over all molecules and time origins, the mean square displacement $$<| {\bf{r}}(t)-{\bf{r}}(0){| }^{2}>$$ displays a smooth behavior $$\sqrt{<{{\Delta }}r{(t)}^{2}> } \sim \sqrt{t}$$ from which a confined self-diffusivity can be derived. Yet, Fig. 2b reveals that 4πr2Gs(r, t) displays a complex behavior characteristic of stop-and-go processes where the molecules switch from one location to another through jumps (the data shown here correspond to the sample CS0.70 but analogous data can be found in Supplementary Fig. 8 for different samples and fluid/surface interaction strengths). In more detail, the probability distribution exhibits marked vertical stripes indicating that molecules tend to remain within the same spatial domain over a given time. The distance between two stripes, roughly the fluid molecule size σ0, corresponds to the jump amplitude. The typical residence time at a given position is given by the decay along the t axis. Such stop-and-go diffusion was already reported by Sahimi and coworkers35 in molecular dynamics of gas diffusion in a carbon nanotube/polymer composite and, more recently, by Kulasinski et al. for water diffusion in amorphous hydrophilic systems36. To shed more light on the complex diffusivity landscape in such disordered porous media, a single trajectory $$R(t)=\sqrt{{({\bf{r}}(t)-{\bf{r}}(0))}^{2}}$$ is provided as an example in Fig. 2b together with a visualization of the corresponding molecular trajectory in Fig. 2c (to interpret these different space-time domains, Supplementary Fig. 9 provides additional individual trajectories). Such individual trajectories are typical but not necessarily fully representative as they were chosen to identify well-defined steps. However, the mechanisms discussed below are common to all molecules and lead to the heterogeneous behavior observed in Gs(r, t).
Before going into details, we define here a cavity as a portion of the pore network of size d. The first narrow stripe corresponds to molecules located in a given site with displacements over short distances r much smaller than the molecule size σ0. Such motions are illustrated by the green portion of the individual trajectory in Fig. 2b (in this specific example, the molecule is adsorbed in the vicinity of the host surface as shown in c). This analysis is confirmed by the fact that the typical residence time associated with this dynamical sequence increases with increasing the fluid/surface interaction strength ε (Supplementary Fig. 8). The second narrow stripe centered at about r ~ 4 Å (~σ0) corresponds to molecules jumping to a neighboring site. As illustrated in the individual trajectory (blue portion), such displacements correspond to molecules relocating from one adsorbed site to another while remaining within the same cavity (r < d). The third narrow stripe centered at about r ≲ d corresponds to confined diffusion where molecules explore both the pore center and surface region but remain within the same cavity (as illustrated with the orange portion of the individual trajectory). Finally, upon further increasing the time, the displacement becomes larger than the pore size – r > d – as the molecule is transferred from one cavity to another (as illustrated in the corresponding red portion of the individual trajectory shown in c). As expected, at large times/distances, typically when r > d, the probability distribution becomes more homogeneous as the detailed structural footprint of the disordered host matrix averages out into a single effective parameter corresponding to the tortuosity. In particular, in the long time limit, the dynamics reach a Fickian regime as the molecules diffuse over distances r large enough compared to the pore size d. Despite the intrinsic complexity of diffusion in such rough energy landscapes, the local, i.e.
in pore, self-diffusivity can be derived formally using effective approaches19,37,38. By in-pore diffusivity, we refer here to the short time range where molecules remain within the same cavities while reaching a pseudo-Fickian diffusion regime (in other words, such transport coefficients at the pore scale do not include network effects such as tortuosity but contain the fingerprint of the pore geometry/morphology). In more detail, considering the mean-square displacements shown in Supplementary Fig. 6, $${D}_{s}^{p}$$ can be assessed from the linear regime observed in the short time scale where $$<| {\bf{r}}({t}_{d})-{\bf{r}}(0){| }^{2}> \le {d}^{2}$$ (where td is the time required to displace molecules over a distance equal to the pore size d). To further validate the inferred value, it was checked that it is consistent with the in-pore diffusivity estimated as ~ d2/6td. The simplest effective framework consists of writing the effective pore-scale diffusivity $${D}_{s}^{p}$$ as an average over the whole pore volume, $${D}_{s}^{p}=1/N\times \int \rho ({\bf{r}}){D}_{s}({\bf{r}}){\rm{d}}{\bf{r}}$$ where ρ(r) and Ds(r) are the local density and self-diffusivity at a position r. Within the transition state theory, the bulk self-diffusion coefficient can be written as an activated process $${D}_{s}^{0} \sim \exp [-{{\Delta }}{F}^{0}/{k}_{{\rm{B}}}T]$$ where ΔF0 is the activation free energy to set the molecules in motion. Here, we refer to the bulk phase taken at the same temperature but also the same density as the confined phase. Therefore, even if the bulk phase is a low-density gas (for which diffusion does not involve any activation energy), $${D}_{s}^{0}$$ should be understood as the liquid-like diffusivity of the bulk fluid taken at the same liquid-like density. 
For a confined fluid, the activation energy for diffusion can be assumed to correspond to the bulk activation energy augmented by the fluid/surface potential ζ(r), ΔF = ΔF0 − ζ(r) (the minus sign is due to the fact that the interaction potential is attractive and, hence, negative so that molecules are trapped in deeper energy sites with an escape time requiring a larger activation energy). With this assumption, $${D}_{{\rm{s}}}(r)={D}_{{\rm{s}}}^{0}\exp [\zeta (r)/{k}_{{\rm{B}}}T]$$19. For complex media, there is no simple expression for ζ(r) but we use here a simple form where ζ(r) is constant when the distance to the surface is smaller than σ and decays exponentially beyond. Such a generic form leads to the following local self-diffusivity: $${D}_{s}(r)={D}_{s}^{s}$$ for r > d/2 − σ while $${D}_{s}(r)={D}_{s}^{0}+\left({D}_{s}^{s}-{D}_{s}^{0}\right)\exp [-\left(d/2-\sigma -r\right)/{r}_{0}]$$ for r ≤ d/2 − σ (where $${D}_{s}^{s}$$ is the surface self-diffusivity in the vicinity of the pore surface while $${D}_{s}^{0}$$ is the bulk, i.e. unconfined, self-diffusivity). This expression simply assumes that the self-diffusivity is equal to the surface diffusivity $${D}_{s}^{s}$$ for distances within a critical size σ from the surface while it decays exponentially with a characteristic lengthscale r0 towards the bulk diffusivity $${D}_{s}^{0}$$ as the distance to the surface increases, as depicted in Supplementary Fig. 11. After a little algebra, assuming the pore density is homogeneous, i.e., ρ(r) ~ ρ, one arrives at: $${D}_{s}^{p}(d)=\left\{\begin{array}{l}{D}_{s}^{s}\quad (\,\text{for}\,\ d\;<\;2\sigma )\hfill\\ {D}_{s}^{0}+2/d\times \left({D}_{s}^{s}-{D}_{s}^{0}\right)\left(\sigma +{r}_{0}-{r}_{0}\exp [-(d-2\sigma )/2{r}_{0}]\right)\quad (\,\text{for}\,\ d\,\ge\, 2\sigma )\end{array}\right.$$ (2) As shown in the inset of Fig. 2a, the above effective expression provides an accurate description of the simulated in-pore diffusivity $${D}_{s}^{p}$$.
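Equation (2) can be transcribed directly into a short function; the numerical values below are hypothetical placeholders, not the fitted parameters reported for the CSx samples.

```python
import math

def in_pore_diffusivity(d, Ds_s, Ds_0, sigma, r0):
    """Effective in-pore self-diffusivity D_s^p(d) of Eq. (2):
    surface-dominated for d < 2*sigma, surface/bulk average otherwise."""
    if d < 2 * sigma:
        return Ds_s
    return Ds_0 + (2.0 / d) * (Ds_s - Ds_0) * (
        sigma + r0 - r0 * math.exp(-(d - 2 * sigma) / (2 * r0))
    )

# Hypothetical parameters (diffusivities in 1e-9 m^2/s, lengths in Angstrom)
Ds_s, Ds_0, sigma, r0 = 2.0, 14.0, 2.4, 1.0

small = in_pore_diffusivity(4.0, Ds_s, Ds_0, sigma, r0)       # d < 2*sigma
edge = in_pore_diffusivity(2 * sigma, Ds_s, Ds_0, sigma, r0)  # branch junction
large = in_pore_diffusivity(1e6, Ds_s, Ds_0, sigma, r0)       # d -> infinity
```

Two sanity checks follow from the expression itself: the two branches join continuously at d = 2σ (both give the surface value), and the bulk diffusivity is recovered for large pores since the 2/d correction vanishes.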
Both the variations in pore size d and fluid/surface interaction strength ε are accurately captured. The parameters $${D}_{s}^{s}$$, $${D}_{s}^{0}$$, r0 and σ extracted from the fit against Eq. (2) can be found in Supplementary Fig. 7. As expected, the bulk self-diffusivity $${D}_{s}^{0}$$ is found to be constant at a value of 14 ± 0.6 × 10−9 m2/s. While the confined fluid density is an ill-defined quantity that depends on a given pore volume definition, we note that the bulk reduced density ρ* = ρσ3 needed to match the bulk self-diffusivity $${D}_{s}^{0}=14\pm 0.6\times 1{0}^{-9}$$ m2/s inferred from this simple in-pore diffusivity model falls within the range [0.8–1] (see Supplementary Fig. 12 showing the self-diffusivity of bulk methane as a function of density at the temperature considered here). Recalling that the number of confined fluid molecules was obtained by filling each porous material at the fluid boiling point, such reduced densities further support the use of a simple effective model for the in-pore diffusivity as they correspond to typical liquid densities. Similarly, σ is independent of d and ε with a constant value of 2.4 ± 0.1 Å so that the critical distance σ for surface diffusion roughly corresponds to the fluid molecular size. Interestingly, the quality of this effective in-pore diffusivity model shows that the surface diffusivity $${D}_{s}^{s}$$ and scaling r0 can be treated as constant parameters for a given ε. On the other hand, as expected, $${D}_{s}^{s}$$ is found to decrease upon increasing ε while r0 increases upon increasing ε. Typically, upon varying ε/kBT from 0.1 to 1.0, $${D}_{s}^{s}$$ decreases from 3 to 0.5 × 10−9 m2/s while r0 increases from 0.8 to 2 Å. The fact that the scaling parameter r0 depends on ε can be rationalized as follows.
Even if the surface/fluid interaction potential decay is independent of ε, it generates a free energy landscape ζ(r) that includes many body—fluid/fluid and fluid/wall—effects which lead to an effective scaling r0 that depends on ε. While the combination rule above provides a quantitative description of the molecular dynamics data, it remains mostly effective as it relies on arbitrary choices combined with an empirical description of the diffusivity landscape explored by the fluid molecules. First, ζ(r) should be seen as an effective free energy field that modulates the bulk self-diffusivity by accounting for local intermolecular interactions but also for local density/packing effects. Therefore, even with simple pore geometries, instead of a robust free energy field rigorously derived from intermolecular interactions, ζ(r) is an effective function which is used to describe the self-diffusivity decay upon increasing the distance to the pore surface. The constant surface diffusivity at the pore surface is used to account for the fact that adsorbed molecules homogeneously explore the surface region of thickness ~2σ. Moreover, even if the conclusions above are qualitatively independent of the different assumptions involved, the decomposition into surface and bulk-like diffusions is also sensitive to the exact scaling defined in Eq. (2) and the parameter 2σ used to define the surface layer. In particular, other efficient decomposition rules have been proposed such as a simple weighted sum of surface and volume diffusivities which was found to accurately describe the dynamics of water in nanoconfinement39. Moreover, such a surface/volume partition and the resulting predictions in terms of in-pore diffusivities $${D}_{s}^{p}$$ are also dependent on the geometry choice—usually far from any realistic description—made to describe the pores in such disordered materials (planar, cylindrical or spherical).
In practice, as will be shown in the rest of this paper, to avoid relying on such effective frameworks, the intermittent Brownian formalism mapped onto molecular dynamics data provides a means to describe stop-and-go processes in such disordered and ultraconfining materials without invoking any definition for the surface layer and the self-diffusion decay as molecules get closer to the pore surface. The stop-and-go, i.e. intermittent, diffusion observed in our atom-scale dynamics simulations suggests that the corresponding data can be analyzed using the framework of intermittent Brownian dynamics. Indeed, as shown in Fig. 2b, while ensemble averaging over each molecule leads to a Fickian regime with an effective self-diffusivity, each individual trajectory involves intermittent motion with alternate series of in-pore diffusion and surface adsorption. In more detail, within this formalism, the mesoscopic, i.e., coarse-grained, dynamics beyond molecular time and length scales is governed by two parameters: the residence time tA during which a molecule remains adsorbed to the surface and the relocation time tB between two adsorption periods10,14. To probe such intermittent dynamics, the pore space Ω available for the dynamics of spherical molecules inside the carbon matrix was extracted by mapping a 3D lattice network having a voxel size Δ = 0.2 Å (as explained in the Methods section). A voxel belongs to the pore space if its distance x to any carbon center is x > σ where σ is the LJ parameter for the fluid/surface interaction. The surface boundary ∂Ω of Ω is made of surface voxels which are at the frontier between Ω and its complementary space. This allows defining a continuous space for molecular diffusion limited by the surface boundary. With the aim of simulating long-range intermittent dynamics, only the largest connected part Ωc of Ω is considered (in the present study, for all samples ct > 0, Ωc percolates through the periodic minimal image).
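The voxel classification above (a voxel belongs to the pore space when its distance to every carbon center exceeds σ) can be sketched on a tiny grid. This is an illustrative sketch only: the grid resolution, box size, and single-atom "matrix" are stand-ins for the actual Δ = 0.2 Å lattice and carbon structures.

```python
import math

def pore_voxels(carbons, sigma, box, n):
    """Label each voxel center of an n x n x n grid as pore space (True)
    when its distance to every carbon center is > sigma."""
    h = box / n  # voxel edge length
    pore = {}
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c = ((i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h)
                pore[(i, j, k)] = all(math.dist(c, a) > sigma for a in carbons)
    return pore

# One carbon atom at the box center; voxels inside its sigma-sphere are excluded
box, sigma, n = 10.0, 2.0, 20
pore = pore_voxels([(5.0, 5.0, 5.0)], sigma, box, n)
phi_voxel = sum(pore.values()) / len(pore)  # digitized porosity estimate
```

The digitized porosity converges to the probe-insertion estimate as the voxel size shrinks; the connected component Ωc would then be extracted from this binary set, e.g. by flood fill.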
Intermittent Brownian motion was then simulated using the following advanced random walk approach. An interfacial volume is defined as ∂Ωc × x0 where x0 = 0.2 pm is an infinitesimal thickness. Diffusion in the pore cavities is described using regular random walk simulations with a bulk-like self-diffusivity $${D}_{s}^{p}$$ estimated from molecular dynamics. When a molecule's center of mass reaches ∂Ωc × x0, it remains stopped for a time tS distributed according to an exponential probability density function having a first moment tA. After tS, the center of mass is placed at the distance x0 from ∂Ωc for a new relocation step. The procedure above leads to intermittent Brownian motion where the residence and relocation steps are distributed according to two underlying probability density functions ψA(t) and ψB(t) (having tA and tB as first moments). On the one hand, as illustrated in Fig. 3a, the residence times obey a statistics given by: $${\psi }_{A}(t)=1/{t}_{A}\times \exp \left(-t/{t}_{A}\right)$$ (3) where ψA(t)dt is the probability that the residence lasts a time between t and t + dt. While the exponential decay in Eq. (3) provides a generic description of the residence time distribution, power-law distributions can be observed in other specific situations such as in media with surface heterogeneity or complex surface dynamics. However, as will be illustrated below, among possible behaviors, the exponential decay is important as it corresponds to a well-defined underlying thermodynamic picture where desorption corresponds to an activated mechanism. Moreover, considering the mapping between microscopic and mesoscopic tortuosities proposed in what follows, it only relies on the mean residence time and not the exact time distribution. On the other hand, ψB(t) is the bridge statistics which describes the time distribution between a desorption event and the next first re-encounter within the proximal zone ∂Ωc × x0.
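A useful consistency check on this stop-and-go scheme is the fraction of time a walker spends mobile: with exponential adsorbed periods of mean tA alternating with mobile periods of mean tB, renewal theory gives a mobile fraction tB/(tA + tB), and since displacement only accrues while mobile this is exactly where a (1 + tA/tB) tortuosity enhancement comes from. A minimal sketch with illustrative times (not the values extracted for any sample):

```python
import random

def mobile_fraction(tA, tB, n_cycles=200000, seed=1):
    """Fraction of time spent mobile for an alternating renewal process with
    exponentially distributed adsorbed (mean tA) and mobile (mean tB) periods."""
    rng = random.Random(seed)
    # expovariate takes the rate 1/mean as argument
    t_ads = sum(rng.expovariate(1.0 / tA) for _ in range(n_cycles))
    t_mob = sum(rng.expovariate(1.0 / tB) for _ in range(n_cycles))
    return t_mob / (t_ads + t_mob)

tA, tB = 3.0, 1.0          # arbitrary residence/relocation times
frac = mobile_fraction(tA, tB)
expected = tB / (tA + tB)  # renewal-theory limit
```

With tA = 3 and tB = 1 the walker is mobile a quarter of the time, i.e. the effective diffusivity is reduced by a factor 1 + tA/tB = 4.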
Such generic bridge statistics in confinement is illustrated in Fig. 3b, which shows ψB(t) for methane confined in the disordered sample CS1.0 with different fluid/surface interactions. On the one hand, after a plateau in the very short time range (ps), ψB(t) decays as a power law ψB(t) ~ t−3/2. On the other hand, in the long time regime, $${\psi }_{B}(t)\propto \exp (-t/{t}_{c})$$ as strong confinement in the sample cavities introduces a time cutoff tc in the relocation process, since every confined molecule eventually returns to the surface within a finite time. This generic behavior for such a finite, i.e., confining, medium can be described as40: $${\psi }_{B}(t)\propto {\psi }_{B}^{\infty }(t)\exp [-t/{t}_{c}]$$ (4) where $${\psi }_{B}^{\infty }(t)$$ corresponds to the bridge statistics for a semi-infinite medium (denoted by the superscript ∞). As shown in Supplementary Notes, $${\psi }_{B}^{\infty }(t)$$ can be determined by considering the trajectory of a molecule starting at a distance x0 from the adsorbing region located at x = 0 and crossing this interface for the first time at a time t (ref. 41): $${\psi }_{B}^{\infty }(t)=\frac{{x}_{0}}{\sqrt{4\pi {D}_{s}^{p}{t}^{3}}}\exp \left(-\frac{{x}_{0}^{2}}{4{D}_{s}^{p}t}\right) \mathop{\sim}_{ t\gg {x}_{0}^{2}/{D}_{s}^{p}}\frac{{x}_{0}}{\sqrt{4\pi {D}_{s}^{p}{t}^{3}}}$$ (5) In this equation, the second relation corresponds to the asymptotic behavior in the limit $$t\gg {x}_{0}^{2}/{D}_{s}^{p}$$. Such expressions are valid for a semi-infinite medium, where the probability density of returning to the surface at time t becomes vanishingly small in the long-time limit. Figure 3c shows the tortuosity τRW as a function of the residence time tA as obtained using random walk simulations for the different CSx samples (only data for ε/kBT = 0.5 are shown here for the sake of clarity). The dashed lines in Fig. 3c are RW results obtained by varying tA in a quasi-continuous manner.
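The structure of Eqs. (4) and (5) can be checked numerically: the semi-infinite first-passage density is normalized (every walker eventually returns), its long-time tail is a pure t−3/2 power law, and its first moment diverges until the confinement cutoff tc of Eq. (4) makes the mean relocation time finite. The sketch below uses arbitrary units (x0 = D = 1 and a cutoff tc = 104 are assumptions for illustration only):

```python
import numpy as np

x0, D = 1.0, 1.0   # arbitrary units, for illustration only

def psi_inf(t):
    """Semi-infinite first-passage density of Eq. (5)."""
    return x0 / np.sqrt(4.0 * np.pi * D * t**3) * np.exp(-x0**2 / (4.0 * D * t))

t = np.logspace(-6, 8, 400_000)        # log-spaced grid resolves the heavy tail

def trapz(y):
    """Trapezoidal rule on the (non-uniform) grid t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

norm = trapz(psi_inf(t))               # ~ 1: every molecule eventually returns
tail = psi_inf(1e6) * np.sqrt(4.0 * np.pi * D * 1e6**3) / x0  # ~ 1: pure t^(-3/2) tail

tc = 1e4                               # confinement cutoff as in Eq. (4)
psi_cut = psi_inf(t) * np.exp(-t / tc)
t_mean = trapz(t * psi_cut) / trapz(psi_cut)  # finite mean relocation time

print(round(norm, 3), round(tail, 3), bool(np.isfinite(t_mean) and t_mean > 0.0))
```

Without the cutoff, the integrand t·ψB∞(t) ~ t−1/2 is not integrable, which is precisely why confinement must regularize the relocation statistics.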
As expected, the tortuosity can be rescaled as: $${\tau }_{{\rm{RW}}}={\tau }_{{\rm{RW}}}^{0}\left(1+\frac{{t}_{A}}{{t}_{B}}\right)$$ (6) where $${\tau }_{{\rm{RW}}}^{0}$$ and tB only depend on the specific CSx sample considered. While tB is the typical relocation time, $${\tau }_{{\rm{RW}}}^{0}$$ corresponds to the geometrical tortuosity obtained for a vanishing residence time (tA → 0). As shown in Fig. 3c, projecting the τMD values obtained by MD (points) onto the ones obtained by RW (lines), i.e. τMD = τRW, allows mapping the molecular and mesoscopic tortuosities. This provides a means to estimate, for each sample CSx, the residence (tA) and relocation (tB) times as a function of the fluid/surface interaction strength ε/kBT (values that cannot be assessed using MD for such complex disordered materials). In more detail, tA and tB are such that $${\tau }_{{\rm{MD}}}={\tau }_{{\rm{RW}}}^{0}(1+{t}_{A}/{t}_{B})$$. Considering that $${\tau }_{{\rm{RW}}}^{0}$$ and tB for a given sample and ε/kBT are uniquely defined from the slope and intercept in Fig. 3c, there is only one set (tA, tB) that satisfies τMD = τRW. It should be emphasized that, as shown in our previous work14, tA and tB can be directly estimated from molecular dynamics when simple pore geometries are considered. However, such calculations turn out to be extremely challenging for disordered porous media because the surface/volume decomposition is a complex, ill-defined problem. Energy-based criteria such as surface-fluid energy cutoffs, or geometrical criteria such as distances to the interface, can be used but they rely on arbitrary choices. In contrast, the approach proposed in the present work provides a means to split the complex diffusivity behavior into residence and relocation steps without having to rely on these arbitrary choices.
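Once $${\tau }_{{\rm{RW}}}^{0}$$ and tB are known for a sample, inverting Eq. (6) for tA is immediate. A minimal helper, with hypothetical numerical values (not the fitted ones from Fig. 3c), reads:

```python
def residence_time_from_mapping(tau_md, tau_rw0, t_b):
    """Invert Eq. (6), tau_MD = tau_RW^0 * (1 + t_A / t_B), for t_A."""
    if tau_md < tau_rw0:
        raise ValueError("tau_MD cannot be below the geometrical tortuosity")
    return t_b * (tau_md / tau_rw0 - 1.0)

# Hypothetical values for illustration (t_B in ps, tortuosities dimensionless):
t_a = residence_time_from_mapping(tau_md=6.0, tau_rw0=2.0, t_b=15.0)
print(t_a)  # 30.0: stronger adsorption raises tau_MD and hence t_A
```

The guard clause reflects the physical constraint that τMD cannot fall below the geometrical tortuosity reached in the tA → 0 limit.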
### Bridging molecular/mesoscopic dynamics in disordered media

The residence and relocation times are upscaled parameters which provide a means to quantitatively bridge the microscopic and mesoscopic dynamics in porous media through the intermittent Brownian motion formalism. Yet, beyond simple mapping procedures like matching molecular and coarse-grained tortuosities, there is a need to establish robust and quantitative physical behaviors for tA and tB. With this aim, the effect of the mean pore size d and fluid/surface interaction strength ε/kBT on tA and tB is shown in Fig. 4. In what follows, we first report a molecular model for the residence time tA and then discuss the behavior of the relocation time tB using the formalism of first-passage processes.

#### Residence time tA

Figure 4a suggests that tA follows an activation law for all samples: $${t}_{A}={t}_{A}^{0}\exp [-{{\Delta }}{F}^{* }/{k}_{{\rm{B}}}T]$$ with ΔF* = −αε. In this transition state theory, ΔF* corresponds to the free energy barrier that must be overcome by a fluid molecule to escape from the interaction field generated by the host surface. As for $$1/{t}_{A}^{0}$$, it corresponds to the frequency with which the molecule attempts to escape the free energy minimum where it is located. While the activated behavior observed for tA might appear surprising, it can be rationalized through simple thermodynamic arguments. Let us consider a thermodynamic model where the molecule is either adsorbed in the vicinity of the pore surface or located in the pore center. As a first-order approximation, it can be assumed that the free energy difference is ΔF ~ Nsε, where Ns is the number of surface atoms interacting with the fluid molecule. In other words, with this assumption, the free energy of an adsorbed molecule corresponds to the sum of the interaction energies with each neighboring surface atom, while the entropy and fluid-fluid interaction contributions are treated as constant.
Considering that ΔF* = δΔF with δ ≥ 1 (since the free energy barrier is necessarily larger than or equal to the free energy difference between the adsorbed/non-adsorbed physical states), the scaling in Fig. 4a indicates that α = Nsδ. As shown in Fig. 4b, for all samples (i.e. regardless of pore size d), α ~ 3.6, which leads to Ns ≲ 3.6. Such a value, which is independent of the considered structure, seems realistic as it corresponds to an underlying molecular picture where an adsorbed molecule interacts with Ns ~ 3 to 4 structure atoms. To validate this interpretation, we calculated, for all interaction strengths ε/kBT and porous materials CSx, the radial distribution function g(r) between host carbon atoms and methane molecules. The number of local carbon neighbors Nc contributing to the free energy barrier involved in the escape from surface residence was then estimated by integrating g(r) up to the location of the Lennard–Jones potential minimum $${r}_{\min }={2}^{1/6}\sigma$$, i.e. $${N}_{c}=\mathop{\int}\nolimits_{0}^{{r}_{\min }}4\pi {r}^{2}g(r)\rho {\rm{d}}r$$. Considering all structures and interaction strengths, we found $$\langle {N}_{c}\rangle =3.6\pm 1$$, which is consistent with the value obtained for α in Fig. 4b. Figure 4c shows that the prefactor $${t}_{A}^{0}$$, which corresponds to the characteristic timescale for activated molecular desorption from the surface, is of the order of ~1 ps, a classical value used in transition state theories and nucleation models in dense liquid states. More importantly, $${t}_{A}^{0} \sim {t}_{A,\infty }^{0}[1+\gamma \exp (-d/\xi )]$$ where $${t}_{A,\infty }^{0} \sim 0.07$$ ps corresponds to the value for infinitely large pores (vanishing confinement). The typical decay length ξ ~ 2.1 Å is of the order of the molecule size σ, indicating that the correction to the escape attempt time is governed by the pore size d.
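The coordination-number integral above is straightforward to evaluate numerically. The sketch below uses a toy g(r) with an excluded core and a single first-neighbor peak; the density, peak shape, and resulting value are assumptions for illustration, not the simulated data:

```python
import numpy as np

def coordination_number(r, g, rho, sigma):
    """N_c = int_0^{r_min} 4 pi r^2 g(r) rho dr with r_min = 2^(1/6) sigma
    (the Lennard-Jones minimum), evaluated by the trapezoidal rule."""
    r_min = 2.0 ** (1.0 / 6.0) * sigma
    m = r <= r_min
    y = 4.0 * np.pi * r[m] ** 2 * g[m] * rho
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r[m])))

# Toy g(r): excluded core plus one first-neighbor Gaussian peak (illustrative
# only; rho, sigma, and the peak parameters are assumptions)
r = np.linspace(0.0, 8.0, 2000)
sigma, rho = 3.605, 0.05
g = np.where(r > 0.9 * sigma, 1.0 + 2.0 * np.exp(-((r - sigma) ** 2) / 0.1), 0.0)
n_c = coordination_number(r, g, rho, sigma)
print(round(n_c, 2))
```

Applied to the simulated g(r) and carbon density of a CSx sample, this is the estimator behind the ⟨Nc⟩ value quoted above.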
The pore-size dependence of the escape attempt time can be understood by the fact that, for a given free energy barrier ΔF*, strong confinement leads to increasing residence times due to the reduced number of molecular paths leading to desorption.

#### Relocation time tB

Figure 4d shows that the relocation time tB scales as $${t}_{B} \sim d/{D}_{s}^{p}$$. This result is not completely intuitive as it departs from the straightforward estimate obtained using the pore diffusivity $${D}_{s}^{p}$$ and a diffusion domain ~ d, i.e. $${t}_{B} \sim {d}^{2}/{D}_{s}^{p}$$. Yet, as described quantitatively in what follows, the scaling tB ~ d can be rationalized by accounting for the fact that the diffusion, i.e. relocation, time within the confining cavities necessarily has an upper bound (due to the finite pore size, each molecule eventually readsorbs at the surface). This constraint, which is at the root of the scaling tB ~ d, can be quantitatively predicted by introducing a time cutoff tc in the relocation first-passage probability ψB(t). In addition to tc, we also introduce a short-time cutoff t0 as ψB(t) is necessarily equal to zero for times shorter than the time t0 needed for a molecule to travel the minimum bridge of extension $${x}_{\min }$$. As derived in the Supplementary Notes, with the lower/upper time limits t0 and tc, ψB(t) simply writes $${\psi }_{B}(t)=C\times \exp \left(-t/{t}_{c}\right)/{t}^{3/2}$$ for t ∈ [t0, tc], where $$C={\left[2\exp (-{t}_{0}/{t}_{c})/\sqrt{{t}_{0}}-2\sqrt{\pi /{t}_{c}}\text{erfc}\left(\sqrt{{t}_{0}/{t}_{c}}\right)\right]}^{-1}$$ is obtained from the normalization condition $$\mathop{\int}\nolimits_{0}^{\infty }{\psi }_{B}(t){\rm{d}}t=1$$. The first-passage distribution for relocation ψB(t) allows estimating the mean relocation time tB as: $${t}_{B}=\mathop{\int }\limits_{0}^{\infty }t{\psi }_{B}(t){\rm{d}}t$$ (7) As shown in Supplementary Notes, upon inserting $${\psi }_{B}(t) \sim \exp \left(-t/{t}_{c}\right)\times {t}^{-3/2}$$ for t > t0 (0 otherwise) into Eq.
(7), it can be shown that $${t}_{B}=C\times \sqrt{\pi {t}_{c}}\ \,\text{erfc}\,(\sqrt{{t}_{0}/{t}_{c}})$$. By writing that t0 ≪ tc (i.e. $$C \sim \sqrt{{t}_{0}}/2$$), this expression simplifies as: $${t}_{B} \sim \frac{\sqrt{\pi {t}_{c}{t}_{0}}}{2}-{t}_{0} \sim \frac{{x}_{\min }}{4{D}_{s}^{p}}\left[\sqrt{\pi }\beta d-2{x}_{\min }\right]$$ (8) Here, tc is associated with a geometrical cut-off length rc which indicates the maximal extension of a bridge. rc is of the order of the pore diameter d and can be written as rc = βd, where β ~ 1 is related to the accessible in-pore horizon. Assuming Fickian diffusion upon relocation, we can write $${t}_{0} \sim {x}_{\min }^{2}/2{D}_{s}^{p}$$ and $${t}_{c} \sim {\beta }^{2}{d}^{2}/2{D}_{s}^{p}$$. As shown in Fig. 4d, by assuming that $${x}_{\min }$$ is independent of the pore structure, Eq. (8) provides a reasonable description of the observed scaling tB ~ d with a negative intercept at d = 0. Yet, as detailed in Supplementary Notes, $${x}_{\min }$$ can be estimated from the probability density function of the bridge displacement θ(r), where r is the end-to-end Euclidean distance of a Brownian bridge42 [see Supplementary Fig. 10a]. With this refined analysis, as shown in Supplementary Fig. 10b, $${x}_{\min }$$ does depend on the pore diameter d. Taking this dependence into account, the simulated data $${t}_{B}\times {D}_{s}^{p}$$ in Fig. 4b as a function of d can be retrieved using a unique value β ~ 0.7 for all values of ε/kBT, as shown in Supplementary Fig. 10c.

## Discussion

The statistical physics approach reported in this paper provides an efficient means to upscale microscopic dynamics in complex porous media to the engineering, i.e., continuum, level. This general and versatile method consists of upscaling molecular constants (typically, the adsorption strength and self-diffusivity) as obtained using molecular dynamics through the formalism of intermittent Brownian motion.
While this robust framework is well-established for ordered materials with regular pore geometry and simple pore network topology, the present work extends its scope to ultra-confining disordered porous media with underlying complex free-energy landscapes. In particular, despite the complex interfacial dynamics in media involving ill-defined surface/volume regions, mapping molecular dynamics simulations onto an intermittent random walk provides a simple yet robust description through the mean surface residence (tA) and in-pore relocation (tB) times. More importantly, using disordered porous materials with different porosities ϕ and pore sizes d, but also different fluid/surface interaction strengths ε, tA and tB are found to derive from basic physical models with parameters accessible to simple experiments. On the one hand, the mean residence time tA is simply related to the fluid/surface interaction strength ε, as it corresponds to the characteristic molecular escape time from a low (molecule in the surface vicinity) to a higher (bulk-like molecule in the pore center) free energy state separated by a free energy barrier ΔF* ~ ε/kBT. On the other hand, tB can be simply predicted from the confined in-pore self-diffusivity $${D}_{s}^{p}$$ and the corresponding mean first-passage probability distribution, which is truncated to account for the finite relocation time in confining cavities. Considering the mesoscopic, i.e., coarse-grained, description adopted in this approach, it is remarkable that all the problem complexity is embedded into two characteristic timescales that are related by simple physical laws to intrinsic material/fluid descriptors. Such an upscaling strategy could prove useful in numerous fields involving fluid adsorption and transport in porous materials: chemistry (e.g., adsorption, catalysis), chemical engineering (e.g., separation, chromatography), geosciences (e.g., pollutant transport), etc.
In particular, among important examples relevant to such practical fields, the present approach can help describe molecular diffusion in the following applications: phase separation of gaseous or liquid effluents through porous media; filtration of small micro-pollutants such as organic molecules, biomolecules, and metallic or ionic complexes in water remediation; kinetics of products, reactants and by-products in catalytic processes; etc. From a practical viewpoint, conducting the exact upscaling strategy reported in this paper can be quite involved; it requires building realistic porous material models and conducting both atom-scale and mesoscopic random walk simulations. However, the physical behavior of tA and tB established above provides simple rules to predict the long-time fluid diffusion within a given porous material. In practice, all parameters needed to predict this macroscopic behavior are easily accessible experimentally: the pore size d, the fluid/surface energy ε, and the self-diffusivity $${D}_{s}^{p}$$. While d can be estimated using adsorption-based techniques or derived from structural data, the fluid/surface energy can be probed by calorimetry or simply estimated from data for similar fluid/solid couples. As for $${D}_{s}^{p}$$, a good approximation is to take this parameter equal to its bulk counterpart, but more accurate values can be obtained by measuring the confined diffusivity using neutron scattering or NMR relaxometry. Conversely, starting from an experimentally measured self-diffusivity in confinement, tA and tB can be extracted to shed light on the physical phenomena occurring upon fluid adsorption, catalysis, etc. in a given porous material. In this context, our strategy can be coupled with free energy landscape computations to estimate the residence and relocation times.
Such calculations are suitable for regular porous materials such as zeolites or metal-organic frameworks (for which dealing with a small porous subspace is sufficient thanks to symmetry considerations). However, such free energy approaches are nearly impossible for disordered porous materials with a large representative elementary volume, so that an effective approach based on simple physical laws is sound and robust. Beyond regular adsorption/diffusion processes, our upscaling approach can be used to predict long-time effective diffusivity in problems involving more complex phenomena, as observed in natural or anthropic disordered materials (wood, cement, etc.). This includes fluid/solid systems in which desorption is an activated process43, but also processes involving reactive transport44,45 and poromechanical effects such as adsorption-induced swelling46. Finally, the present approach can be used to obtain the elementary bricks to be implemented in mesoscopic numerical techniques such as finite element calculations, pore network models47, and Lattice Boltzmann simulations, but also in more formal statistical physics approaches20,48,49,50. As already stated, our mapping procedure is expected to apply to a broad class of fluid/solid couples, but some possible limitations must be considered as they can lead to more complex behaviors. Such limitations include the possible role of rich molecular interactions that are potentially long-ranged (e.g. electrostatic). Complex host solids with long-range pore correlations (fractal, low-dimensional) can also lead to additional complexity. In particular, in extremely narrow pores, confinement induces specific mechanisms such as molecular sieving51 or single-file diffusion18 that depart from the Fickian regime considered here.
Moreover, by considering only percolating matrices (ct > 0), the present study does not address connectivity aspects, which can lead to anomalous temperature behavior depending on the ratio of adsorption and connectivity effects51.

## Methods

### Porous material models

Different samples with densities ranging from 0.5 g/cm3 up to 1.4 g/cm3 were produced using the following method. For a given density ρs, the atoms are placed randomly in a cubic box of size 100 Å (an H/C atomic ratio ~ 0.091 was selected as it corresponds to a typical, realistic value for such disordered porous carbons52,53). Starting from a high temperature, each molecular structure was quenched using molecular dynamics performed with the large-scale atomic/molecular massively parallel simulator (LAMMPS29). Molecular interactions were described using the reactive empirical bond order (REBO) potential28 to allow for bond formation/breaking. The quenching procedure is performed in the NVT ensemble by continuously decreasing the temperature from 3000 K down to 300 K over a 5 ns simulation run. Three representative structures are presented in Supplementary Fig. 1 and all .xyz structure files are available upon request.

### Grand canonical Monte Carlo

We simulated methane adsorption isotherms at 111.7 K in the various host structures (Supplementary Fig. 2) using grand canonical Monte Carlo (GCMC) with the Lennard–Jones parameters gathered in Supplementary Table 2. The saturating vapor pressure of methane at this temperature is P0 = 101325 Pa (boiling point). In GCMC simulations, we consider a system at constant volume V (the host porous solid) in equilibrium with an infinite reservoir of molecules (methane) imposing its chemical potential μ and temperature T.
For a given set $$\left(T,\mu \right)$$, the adsorbed amount is given by the ensemble average of the number of adsorbed molecules versus the pressure P of the gas reservoir (the latter is obtained from the chemical potential using the equation of state for the bulk gas). The adsorption isotherm is simulated by increasing or decreasing the chemical potential of the reservoir. The skeleton is considered rigid and the energy Uαβ(i, j) between a site i of type α and a site j of type β is given by54: $${U}^{\alpha \beta }(i,j)=\sum _{i,j}4{\varepsilon }_{ij}^{\alpha \beta }\left[{\left(\frac{{\sigma }_{ij}^{\alpha \beta }}{{r}_{ij}^{\alpha \beta }}\right)}^{12}-{\left(\frac{{\sigma }_{ij}^{\alpha \beta }}{{r}_{ij}^{\alpha \beta }}\right)}^{6}\right]$$ (9) Equation (9) describes interactions through a 6–12 Lennard–Jones potential with parameters $${\sigma }_{ij}^{\alpha \beta }$$ (size) and $${\varepsilon }_{ij}^{\alpha \beta }$$ (energy). The Lennard–Jones parameters are reported in Supplementary Table 2 for interactions between sites of the same type, the cross interactions being computed from the Lorentz–Berthelot rules: $${\sigma }^{\alpha \beta }=\frac{1}{2}\left({\sigma }^{\alpha \alpha }+{\sigma }^{\beta \beta }\right)\qquad {\varepsilon }^{\alpha \beta }=\sqrt{{\varepsilon }^{\alpha \alpha }{\varepsilon }^{\beta \beta }}$$ (10)

### Molecular dynamics

The methane-saturated structures obtained by GCMC are then used as starting configurations for molecular dynamics (MD) simulations. All MD simulations are performed with LAMMPS29 using the lj/cut potential with the same like-site parameters as those used for the GCMC simulations. In all simulations, the porous solid is kept frozen while the probe molecules are simulated at a temperature of 450 K for an NVE production run of 20 ns after an NVT thermalization run of 500 ps. The integration time step is 1 fs and the configurations are saved every 1 ps.
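The Lennard–Jones pair energy of Eq. (9) and the Lorentz–Berthelot combination of Eq. (10) can be sketched in a few lines. The numerical like-site parameters below are placeholders, not the values of Supplementary Table 2:

```python
import math

def lorentz_berthelot(sig_a, eps_a, sig_b, eps_b):
    """Cross parameters from the like-site ones, Eq. (10)."""
    return 0.5 * (sig_a + sig_b), math.sqrt(eps_a * eps_b)

def lj_energy(r, sigma, epsilon):
    """12-6 Lennard-Jones pair energy, the summand of Eq. (9)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Placeholder like-site parameters (NOT those of Supplementary Table 2)
sig_cc, eps_cc = 3.36, 0.23
sig_mm, eps_mm = 3.73, 1.23
sig_cm, eps_cm = lorentz_berthelot(sig_cc, eps_cc, sig_mm, eps_mm)
print(round(sig_cm, 3), round(eps_cm, 3))
# Sanity check: at r = 2^(1/6) sigma, the pair energy equals the well depth -epsilon
print(abs(lj_energy(2.0 ** (1.0 / 6.0) * sig_cm, sig_cm, eps_cm) + eps_cm) < 1e-9)
```

The minimum-location identity checked in the last line is the same rmin = 21/6σ used earlier when integrating g(r).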
To assess the influence of the fluid/surface interaction on the effective diffusivity (and, hence, the tortuosity), the fluid/surface interaction strength ε was varied. In so doing, the repulsive interaction felt by the fluid molecules decreases upon decreasing ε, so that the porosity explored by the confined molecules increases (inset of Supplementary Fig. 3). Consequently, the tortuosity for a given structure strongly evolves with ε without this being per se an effect of the interaction strength. To correct for this effect, we developed a modified Lennard–Jones potential that keeps the repulsive contribution constant. This modified interaction potential uses a smooth sigmoid function U(r) defined as: $$U(r)=\frac{{L}_{-}(r){e}^{s{r}_{c}}+{L}_{+}(r){e}^{sr}}{{e}^{s{r}_{c}}+{e}^{sr}}$$ (11) where s = 50 and rc = 0.97σ are the slope and center of the sigmoid, respectively. L−(r) and L+(r) are the connected functions defined for r < rc and r > rc, respectively: $${L}_{+/-}(r)=4{\varepsilon }_{+/-}\left[{\left(\frac{\sigma }{r}\right)}^{12}-{\left(\frac{\sigma }{r}\right)}^{6}\right]$$ (12) To keep the repulsive interaction constant, L−(r) was kept fixed with ε− = kBT. As shown in Supplementary Fig. 3, upon varying ε+, a modified Lennard–Jones potential with different fluid/surface interaction strengths can be defined while keeping the repulsive part constant.

### Topology characterization and diffusion pore space

The pore space Ω available for the dynamics of the spherical methane molecule inside the carbon matrix was determined as follows. A 3D lattice network is first defined with a voxel size of 0.02 nm. A voxel belongs to Ω if its distance to any carbon center is above 3.605 Å (this value corresponds to the Lennard–Jones parameter σ for the fluid/surface interaction). A voxel belonging to Ω is set to 1 (0 otherwise).
Such a 3D lattice network allows defining the surface boundary ∂Ω of Ω, made of surface voxels at the border between Ω and its complementary space. This defines a continuous space for molecular diffusion limited by the surface boundary. An interfacial volume is defined as ∂Ωc × x0, where x0 is a thickness equal to 0.2 pm. Supplementary Fig. 5 illustrates this procedure by showing, for the sample CS1.0, the resulting digitized pore network and the corresponding retraction graph obtained with a porosity ϕ = 0.177. The molecular trajectory can be described as an alternating succession of a surface adsorption step on ∂Ωc × x0 followed by a Brownian motion in the confined bulk Ωc leading to a new relocation on the surface. The time step for the Brownian motion is set to 0.1 ps and the self-diffusion coefficient is estimated from the molecular trajectories (mean square displacements at very early times) obtained from molecular dynamics.

## Data availability

The data sets and molecular configurations generated and/or analyzed during the current study are available from the corresponding authors upon request. All MD simulations were performed using the software LAMMPS (stable release from August 31st, 2018).

## References

1. Bocquet, L. & Charlaix, E. Nanofluidics, from bulk to interfaces. Chem. Soc. Rev. 39, 1073–1095 (2010).
2. Kärger, J. & Valiullin, R. Mass transfer in mesoporous materials: the benefit of microscopic diffusion measurement. Chem. Soc. Rev. 42, 4172–4197 (2013).
3. Kärger, J., Ruthven, D. M. & Theodorou, D. N. Diffusion in Nanoporous Materials (John Wiley & Sons, 2012).
4. Sahimi, M. Flow and Transport in Porous Media and Fractured Rock: From Classical Methods to Modern Approaches (John Wiley & Sons, 2011).
5. Coasne, B. Multiscale adsorption and transport in hierarchical porous materials. N. J. Chem. 40, 4078–4094 (2016).
6. Deroche, I., Daou, T. J., Picard, C. & Coasne, B.
Reminiscent capillarity in subnanopores. Nat. Commun. 10, 4642 (2019).
7. Bouchaud, J.-P. & Georges, A. Anomalous diffusion in disordered media: statistical mechanisms, models and physical applications. Phys. Rep. 195, 127–293 (1990).
8. Kärger, J. & Ruthven, D. M. Diffusion in nanoporous materials: fundamental principles, insights and challenges. N. J. Chem. 40, 4027–4048 (2016).
9. Bhatia, S. K., Bonilla, M. R. & Nicholson, D. Molecular transport in nanopores: a theoretical perspective. Phys. Chem. Chem. Phys. 13, 15350–15383 (2011).
10. Levitz, P. Random flights in confining interfacial systems. J. Phys.: Cond. Mat. 17, S4059 (2005).
11. Coppens, M.-O. & Dammers, A. J. Effects of heterogeneity on diffusion in nanopores—from inorganic materials to protein crystals and ion channels. Fluid Phase Equilibria 241, 308–316 (2006).
12. Reed, D. A. & Ehrlich, G. Surface diffusion, atomic jump rates and thermodynamics. Surf. Sci. 102, 588–609 (1981).
13. Smit, B. & Maesen, T. Molecular simulations of zeolites: adsorption, diffusion, and shape selectivity. Chem. Rev. 108, 4125–4184 (2008).
14. Levitz, P., Bonnaud, P., Cazade, P.-A., Pellenq, R.-M. & Coasne, B. Molecular intermittent dynamics of interfacial water: probing adsorption and bulk confinement. Soft Matter 9, 8654–8663 (2013).
15. Valiullin, R. et al. Exploration of molecular dynamics during transient sorption of fluids in mesoporous materials. Nature 443, 965–968 (2006).
16. Falk, K., Coasne, B., Pellenq, R., Ulm, F.-J. & Bocquet, L. Subcontinuum mass transport of condensed hydrocarbons in nanoporous media. Nat. Commun. 6, 6949 (2015).
17. Obliger, A., Pellenq, R., Ulm, F.-J. & Coasne, B. Free volume theory of hydrocarbon mixture transport in nanoporous materials. J. Phys. Chem. Lett. 7, 3712–3717 (2016).
18. Hahn, K. & Kärger, J. Deviations from the normal time regime of single-file diffusion. J. Phys. Chem. B 102, 5766–5771 (1998).
19. Bhatia, S. K.
& Nicholson, D. Modeling mixture transport at the nanoscale: departure from existing paradigms. Phys. Rev. Lett. 100, 236103 (2008).
20. Roosen-Runge, F., Bicout, D. J. & Barrat, J.-L. Analytical correlation functions for motion through diffusivity landscapes. J. Chem. Phys. 144, 204109 (2016).
21. Maginn, E. J., Bell, A. T. & Theodorou, D. N. Dynamics of long n-alkanes in silicalite: a hierarchical simulation approach. J. Phys. Chem. 100, 7155–7173 (1996).
22. Camp, J. S. & Sholl, D. S. Transition state theory methods to measure diffusion in flexible nanoporous materials: application to a porous organic cage crystal. J. Phys. Chem. C 120, 1110–1120 (2016).
23. Abouelnasr, M. K. F. & Smit, B. Diffusion in confinement: kinetic simulations of self- and collective diffusion behavior of adsorbed gases. Phys. Chem. Chem. Phys. 14, 11600–11609 (2012).
24. Kim, J., Abouelnasr, M., Lin, L.-C. & Smit, B. Large-scale screening of zeolite structures for CO2 membrane separations. J. Am. Chem. Soc. 135, 7545–7552 (2013).
25. Montroll, E. W. & Weiss, G. H. Random walks on lattices. II. J. Math. Phys. 6, 167–181 (1965).
26. Shlesinger, M. F., Zaslavsky, G. M. & Klafter, J. Strange kinetics. Nature 363, 31–37 (1993).
27. Levitz, P. From Knudsen diffusion to Levy walks. EPL 39, 593 (1997).
28. Brenner, D. W. et al. A second-generation reactive empirical bond order (REBO) potential energy expression for hydrocarbons. J. Phys.: Cond. Mat. 14, 783 (2002).
29. Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comp. Phys. 117, 1–19 (1995).
30. Gelb, L. D. & Gubbins, K. Pore size distributions in porous glasses: a computer simulation study. Langmuir 15, 305–308 (1999).
31. Coasne, B. & Ugliengo, P. Atomistic model of micelle-templated mesoporous silicas: structural, morphological, and adsorption properties. Langmuir 28, 11131–11141 (2012).
32. Han, M., Youssef, S., Rosenberg, E., Fleury, M. & Levitz, P.
Deviation from Archie’s law in partially saturated porous media: wetting film versus disconnectedness of the conducting phase. Phys. Rev. E 79, 031127 (2009).
33. Levitz, P., Tariel, V., Stampanoni, M. & Gallucci, E. Topology of evolving pore networks. Eur. Phys. J. Appl. Phys. 60, 24202 (2012).
34. Lin, C. & Cohen, M. H. Quantitative methods for microgeometric modeling. J. Appl. Phys. 53, 4152–4165 (1982).
35. Lim, S. Y., Sahimi, M., Tsotsis, T. T. & Kim, N. Molecular dynamics simulation of diffusion of gases in a carbon-nanotube–polymer composite. Phys. Rev. E 76, 011810 (2007).
36. Kulasinski, K., Guyer, R., Derome, D. & Carmeliet, J. Water diffusion in amorphous hydrophilic systems: a stop and go process. Langmuir 31, 10843–10849 (2015).
37. Schneider, D., Mehlhorn, D., Zeigermann, P., Kärger, J. & Valiullin, R. Transport properties of hierarchical micro–mesoporous materials. Chem. Soc. Rev. 45, 3439–3467 (2016).
38. Chemmi, H. et al. Noninvasive experimental evidence of the linear pore size dependence of water diffusion in nanoconfinement. J. Phys. Chem. Lett. 7, 393–398 (2016).
39. Chiavazzo, E., Fasano, M., Asinari, P. & Decuzzi, P. Scaling behaviour for the water transport in nanoconfined geometries. Nat. Commun. 5, 3565 (2014).
40. Levitz, P. Probing interfacial dynamics of water in confined nanoporous systems by NMRD. Mol. Phys. 117, 952–959 (2019).
41. Redner, S. A Guide to First-Passage Processes (Cambridge University Press, 2001).
42. Levitz, P., Grebenkov, D. S., Zinsmeister, M., Kolwankar, K. M. & Sapoval, B. Brownian flights over a fractal nest and first-passage statistics on irregular surfaces. Phys. Rev. Lett. 96, 180601 (2006).
43. Lee, T., Bocquet, L. & Coasne, B. Activated desorption at heterogeneous interfaces and long-time kinetics of hydrocarbon recovery from nanoporous media. Nat. Commun. 7, 11890 (2016).
44. Coppens, M.-O. A nature-inspired approach to reactor and catalysis engineering. Curr.
Opin. Chem. Eng. 1, 281–289 (2012).
45. Hansen, N. & Keil, F. J. Multiscale modeling of reaction and diffusion in zeolites: from the molecular level to the reactor. Soft Mater. 10, 179–201 (2012).
46. Chen, M., Coasne, B., Guyer, R., Derome, D. & Carmeliet, J. Role of hydrogen bonding in hysteresis observed in sorption-induced swelling of soft nanoporous polymers. Nat. Commun. 9, 3507 (2018).
47. Fatt, I. The network model of porous media. Trans. AIME 207, 144–181 (1956).
48. Hlushkou, D., Bruns, S., Seidel-Morgenstern, A. & Tallarek, U. Morphology–transport relationships for silica monoliths: from physical reconstruction to pore-scale simulations. J. Sep. Sci. 34, 2026–2037 (2011).
49. Monson, P. A. Mean field kinetic theory for a lattice gas model of fluids confined in porous materials. J. Chem. Phys. 128, 084701 (2008).
50. Tallarek, U., Hlushkou, D., Rybka, J. & Höltzel, A. Multiscale simulation of diffusion in porous media: from interfacial dynamics to hierarchical porosity. J. Phys. Chem. C 123, 15099–15112 (2019).
51. Boţan, A., Vermorel, R., Ulm, F.-J. & Pellenq, R. J.-M. Molecular simulations of supercritical fluid permeation through disordered microporous carbons. Langmuir 29, 9985–9990 (2013).
52. Jain, S., Gubbins, K., Pellenq, R. J.-M. & Pikunic, J. Molecular modeling and adsorption properties of porous carbons. Carbon 44, 2445–2451 (2006).
53. Coasne, B., Jain, S. K. & Gubbins, K. E. Freezing of fluids confined in a disordered nanoporous structure. Phys. Rev. Lett. 97, 105702 (2006).
54. Billemont, P., Coasne, B. & De Weireld, G. Adsorption of carbon dioxide, methane, and their mixtures in porous carbons: effect of surface chemistry, water adsorption, and pore disorder. Langmuir 29, 3328–3338 (2013).

## Acknowledgements

This work was supported by the French Research Agency (ANR TAMTAM 15-CE08-0008 and ANR TWIST ANR-17-CE08-0003).

## Author information

### Contributions

C.B.
built and characterized the models and performed the molecular dynamics simulations. P.L. performed the morphological/topological analysis of the samples and carried out the mesoscopic simulations. All authors analyzed the data and developed the theoretical model. B.C. wrote the manuscript with inputs from all authors.

### Corresponding authors

Correspondence to Pierre Levitz or Benoit Coasne.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Pietro Asinari, Marc-Olivier Coppens, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Bousige, C., Levitz, P. & Coasne, B. Bridging scales in disordered porous media by mapping molecular dynamics onto intermittent Brownian motion. Nat. Commun. 12, 1043 (2021). https://doi.org/10.1038/s41467-021-21252-x
http://www.koreascience.or.kr/article/JAKO198303041886697.page
# Sorption Characteristics of Sugar–Salt Binary Mixtures at Various Storage Relative Humidities

• Oh, Hoon-Il (Department of Food Science & Technology, King Sejong University)
• Kim, Woo-Jung (Department of Food Science & Technology, King Sejong University)
• Park, Nae-Jung (Department of Chemical Engineering, College of Engineering, Hong-Ik University)
• Published : 1983.01.01

#### Abstract

A study was designed to investigate the sorption characteristics of binary mixtures of NaCl and sucrose or glucose stored at relative humidities ranging from 46% to 92%. At low relative humidity, below RH 65%, sorption equilibrium was easily reached, whereas at higher relative humidities above 73%, all of the mixtures tended to absorb moisture continuously as storage time increased. A linear equation of the form $\log\left(\frac{dw}{dt}\right) = a\log(t) + \log(b)$ was found to relate the sorption rate to storage time at each storage humidity. In the sucrose–NaCl mixtures, the slope showed an increasing tendency as the percentage of NaCl in the mixture increased, while that of the glucose–NaCl mixtures failed to show a definite trend. Plateaus were obtained when the amount of water absorbed was plotted on the X axis and the percent composition of the mixture on the Y axis at different storage times. The shape of the plateau varied with the kind of sugar–NaCl mixture, the composition of the mixture, and the relative humidity. A linearity was found between log(1 - Aw) and the amount of water absorbed over the Aw range 0.73–0.92, and the slope was affected by the kind and composition of the sugar–NaCl mixtures.
https://learn.careers360.com/ncert/question-for-the-telescope-described-in-figure-what-is-the-separation-between-the-objective-lens-and-the-eyepiece/
# Q 9.29 (a) For the telescope described in Exercise 9.28 (a), what is the separation between the objective lens and the eyepiece?

Answer by Pankaj Sanodiya:

Given:

focal length of the objective lens, $f_{objective}$ = 140 cm

focal length of the eyepiece lens, $f_{eyepiece}$ = 5 cm

In normal adjustment, the separation between the objective lens and the eyepiece lens is the sum of their focal lengths:

$f_{objective}+f_{eyepiece}=140+5=145\,cm$

Hence, under normal adjustment, the separation between the two lenses of the telescope is 145 cm.
https://codeforces.com/blog/entry/3905
SergeiFedorov's blog

By SergeiFedorov, 8 years ago, translation

150E - Freezing with Style

1. If there exists a path with median ≥ k for some k, then there exists a path with median ≥ q for each q ≤ k. That means we can use binary search to find the answer. So now the task is: is there any path with median greater than or equal to Mid?

2. We count an edge as +1 if its weight is ≥ Mid, and as -1 otherwise. Now we only need to check whether there exists a path of legal length whose sum is greater than or equal to zero.

3. Let's denote some node v as the root.

4. All paths can be divided into two types: those that contain v, and those that do not. We process all first-type paths and then run the algorithm recursively on the subtrees. That is the classic divide-and-conquer strategy.

5. We can show that it is always possible to choose a vertex v (a centroid) such that each of its subtrees has size less than or equal to half the size of the whole tree. That means each node will be processed in at most log N trees.

6. So, if we solve the task for one level of recursion in O(F(N)), we solve the whole problem in O(F(N) * log²(N)): one log factor from the recursion depth, one from the binary search.

7. First, let's get O(N * log(N)) per level. For each node we calculate its depth, the cost of the path to the root, and the first edge on that path (i.e., the number of the root's subtree it belongs to). It is more convenient now to use 2 and 0 as the edge costs instead of +1 and -1; then the ±1 sum of a path equals its cost minus its length. Now we process the root's subtrees one by one. For each node v we want to know whether there exists a node u in any other subtree such that L ≤ deep[v] + deep[u] ≤ R and cost[v] - deep[v] + cost[u] - deep[u] ≥ 0. To answer that, we need to know the maximum of the function (cost[u] - deep[u]) over deep values between max(0, L - deep[v]) and R - deep[v] inclusive. To achieve O(N * log(N)) you only need a segment tree.

8. To get AC, contestants had to write all of this optimally, or to think of one more idea.
It is possible to get O(N) for one level of recursion and O(N * log²(N)) in total if you sort the root's subtrees in non-decreasing order of depth and use any structure that can answer get-max queries on all segments of length (R - L + 1) and on all prefixes and suffixes. Best of luck to you in upsolving this problem!

150D - Mission Impassable

In this problem you have to use dynamic programming. For convenience we calculate three types of values:

Best[l][r] — the best result the player can achieve on the segment [l, r].

Full[l][r] — the best result the player can achieve on the segment [l, r] if he destroys it completely.

T[l][r][len] — the best result the player can achieve on the segment [l, r] while leaving behind a palindrome of length len and nothing else.

Now the solution:

1. Full[l][r]. Let's look at which move will be the last. It will be removing a palindrome of some length len with c[len] ≥ 0. The best result we can achieve this way is c[len] + T[l][r][len].

2. Best[l][r]. Either we destroy the whole substring from l to r, or there exists a letter which we never touch. In the latter case, all our moves lie fully to the left or fully to the right of that position. So Best[l][r] = Full[l][r], or Best[l][r] = Best[l][m] + Best[m + 1][r] for some m, l ≤ m < r.

3. T[l][r][len]. len = 0 and len = 1 are two special cases, which are easy to solve without any dynamic programming. Otherwise, look at the left-most position. It either lies in the result string or not. If not, find the first position which does; denote it as m (l < m ≤ r). Everything to the left must be fully deleted, so the answer is Full[l][m - 1] + T[m][r][len] (for l < m ≤ r). Similarly for the right-most letter: if it does not lie in the result string, we remove everything to the right, and the result is T[l][m][len] + Full[m + 1][r] (for l ≤ m < r). The last option: both the left-most and the right-most letters lie in the result string. That is possible only if s[l] = s[r].
So our result is T[l + 1][r - 1][len - 2] (only if s[l] == s[r]).

150C - Smart Cheater

1. First, let's use the linearity of expected value and solve the task independently for each passenger.
2. For each path segment (route between neighboring stations) we calculate the expected value of profit in case we do not sell a ticket for this segment. In case we sell it, the expectation of profit is 0.
3. Now we only need to find, for each passenger, the subsegment of the segment [a, b] with the maximal sum.
4. That's easy to do with a segment tree; we only need to maintain four values for each node:
best — the maximal sum of elements on some subsegment
max_left — the maximal sum on a prefix
max_right — the maximal sum on a suffix
sum — the sum of all elements

150B - Quantity of Strings

We can offer you two solutions:
1. You can build a graph with positions in the string as nodes, and an edge for every equality of two positions forced by some substring of length k having to be a palindrome. Let e denote the number of connected components in the graph. The answer is m^e.
2. Analyze four cases:
• k = 1 or k > n: the answer is m^n.
• k = n: the answer is m^⌊(n + 1) / 2⌋.
• k mod 2 = 1: any string like abababab... is OK, so the answer is m^2.
• k mod 2 = 0: all symbols must coincide, and the answer is m.

150A - Win or Freeze

• If Q is prime or Q = 1, it's a victory.
• We lose if Q = p * q or Q = p^2, where p and q are prime.
• It is quite obvious that in any other case it is always possible to move into a bad (losing) position. That means all other numbers grant us the victory. We only have to check whether Q has a divisor of the losing type. We can easily do it in O(sqrt(Q)) time.

151B - Phone Numbers

In this task you were to implement the described selection of the maximum elements.

151A - Soft Drinking

The soda will be enough for gas = (K * L) / (N * l) toasts. Limes will last for laim = (C * D) / N toasts. Salt is enough for sol = P / (p * N) toasts. Total result: res = min(gas, laim, sol).

• +68

» 8 years ago, -8: For 150B, I think if you gave a "k > n" case in the samples, it would reduce a lot of misunderstanding.

» 8 years ago, 0: Can someone give a more detailed explanation for 150B in the cases when k%2=0 and k%2=1? It's not clear to me.

» » 8 years ago, +9: Before we check whether k is odd or even (k%2 == 1 or k%2 == 0), we check the following two conditions:

k = 1 or k > n: the answer is m^n.
k = n: the answer is m^⌊(n + 1) / 2⌋.

So we know that the current k satisfies 0 < k < n. Let's look at the case when k%2 == 1. Let a = an arbitrary character from the m possible characters, and b = an arbitrary character other than a from the m possible characters. If k is odd, any substring of length k of the following string is always a palindrome:

String: abababababa
Substring: k=3 => aba or bab; k=5 => ababa or babab ... you see the pattern.

So we have m * (m-1) possible patterns. Now, also remember that strings consisting of a single repeated character are also allowed:

string: aaaaaaa
substring: k=3 => aaa; k=5 => aaaaa

And we have m such strings. So the answer is m*(m-1) + m = m*(m-1+1) = m*m. Let's see what happens when k is even. If k is even, the only allowed strings are the ones that consist of a single repeated character. There are m such strings. So the answer for k%2 == 0 is m.

» » » 8 years ago, 0: Thanks for the explanation :)

» 8 years ago, -9: I actually failed 150B because of the case k > n, and I still don't understand why it is regarded the same as k == 1. The case k == 1 is trivial: the answer is all possible strings. However, in the case k > n, I think the answer should be 0, because you cannot make a SUBSTRING whose length is LONGER than the ORIGINAL string. Also, by definition, a substring is a part of a string, not an extended version of it. Let's take an example. Say n == 5 and k == 7. You have strings made out of m possible characters, and the strings' lengths are always 5.
How can you possibly make a substring of length 7 from one of the strings of length 5? If I am wrong, please help me understand why all possible strings are regarded as valid when k > n.

» » 8 years ago, 0: All 0 substrings of length k of any string are palindromes. So any string is correct.

» » » 8 years ago, 0: What do you mean by "0 substrings of length k of any string"? Could you please explain with a little bit more detail?

» » » » 8 years ago, +10: There are 0 substrings of length k. All of them are palindromes, because any statement about the empty set is true.

» » » » » 8 years ago, 0: Thx. I get it now. :-)

» » 8 years ago, +3: Let's solve your example (n = 5, k = 7) step by step. Let's check, for every possible string (let m be 2), whether it satisfies the condition. Look at the string abaab. The condition is that every substring of length 7 of abaab is a palindrome. The set S of such substrings is empty, obviously. So the condition "every element x of S is a palindrome" is true. If you still don't believe me, let's look at it another way. If a condition of the type "every element a in set B satisfies condition C" is false, that means there exists some element that doesn't satisfy condition C. In our example, if the string abaab doesn't satisfy our condition, then there exists some substring x of our string that isn't a palindrome. But you can't choose such an x: there is no substring of length 7 in our string. So the proposition that abaab doesn't satisfy the condition is false. Hence the condition is true for abaab, and we have to include abaab in the answer. Such a proof works for every string of length n over an m-alphabet, so the answer is m^n.

» 8 years ago, 0: I like these problems, but that editorial for 151B is sort of ...

» » 4 years ago, 0: I hated the 151B problem, it sucks.

» 5 years ago, 0: Hi everyone. For 150C, can anyone explain the details of points 1 and 2 of this problem?
"First, let's use the linearity of expected value and solve the task independently for each passenger. For each path segment (route between neighboring stations) we calculate the expected value of profit in case we do not sell a ticket for this segment."

» 4 years ago, 0: In problem 150B, I didn't get the graph approach: "You can build a graph with positions in the string as nodes and equality in any substring of length k as edges. Let e denote the number of components in the graph. The answer is m^e." If positions in the string are nodes, then we have n nodes (right?). What does "equality in any substring of length k as edges" imply graphically?

» 2 years ago, 0: Can someone help me out with the graph approach of problem 150B? The other approach is straightforward, but I can't wrap my head around the graph one, thanks!!

» » 8 months ago, 0: Did you get the graph approach for problem 150B? I didn't quite understand it.

» » » 5 months ago, 0: Consider a string of length n; the requirement is that each substring of length k must be a palindrome. In a palindrome, observe that the first character must be equal to the last, the second must be equal to the second last, and so on. Represent the positions of characters as nodes in a graph. Now represent each equality with an edge: an edge exists between the first and last characters, the second and second-last characters, and so on. After this, for each connected component, there are m possible characters that you can substitute to make all positions in the component equal. Thus if there are e connected components, there are m*m*...*m (e times), or m^e, possibilities. Hope this is clear?

» » 6 months ago, 0: Refer to this link, it's a similar problem. https://discuss.codechef.com/t/magical-strings-editorial/11926

» » » 3 months ago, 0: Thanks sir! Your comments really help me a lot.

» » » » 3 months ago, 0: Thank you, GOD .......... your comment helped the whole coder community a lot.

» 7 months ago, 0: Can anyone explain the graph approach of 150B, i.e., solution 1 of 150B?

» 3 months ago, +8: The markdown seems to be broken.
https://www.ias.ac.in/listing/bibliography/boms/SHIVANAND_MADOLAPPA
Articles written in Bulletin of Materials Science

• Investigation on microstructure and dielectric behaviour of (Ba0.999−𝑥Gd0.001Cr𝑥)TiO3 ceramics

Ceramics of BaTiO3 co-doped with Gd and Cr at the Ba-site were synthesized via the solid-state reaction route. Surface morphology shows an increase in grain size with increasing Cr content below 3 mol%. The high value of 𝜀 in the synthesized samples is associated with space-charge polarization and an inhomogeneous dielectric structure. Gd diffuses well into most of the Ba sites and vacancies, leaving very few defects or voids for the generation of the absorption current that results in dielectric loss. Below 3 mol% Cr concentration, the dissipation factor was improved. The increase in a.c. conductivity with rising temperature is due to an increase in the thermally activated electron drift mobility of charges, according to the hopping conduction mechanism. Moreover, the samples show a positive temperature coefficient of conductivity, which is most desirable for developing highly sensitive thermal detectors and sensors. Also, the higher-frequency behaviour indicates motion of charges in the ceramic samples.

• Magnetic and ferroelectric characteristics of Gd$^{3+}$ and Ti$^{4+}$ co-doped BiFeO$_3$ ceramics

Polycrystalline BiFeO$_3$ and Bi$_{0.9}$Gd$_{0.1}$Fe$_{1−x}$Ti$_x$O$_3$ ($x = 0$, 0.01, 0.05 and 0.1) samples were synthesized by the solid-state reaction route. Structural, magnetic and ferroelectric properties of these samples were investigated. X-ray powder diffraction (XRD) results confirmed the presence of a significant amount of a Bi$_2$Fe$_4$O$_9$ impurity phase in the undoped BiFeO$_3$ sample. Mössbauer spectroscopy studies corroborated the XRD studies in confirming the presence of the impurity phase. We observed that gadolinium (Gd$^{3+}$) and titanium (Ti$^{4+}$) doping, respectively on the Bi$^{3+}$ and Fe$^{3+}$ sites, facilitated a significant reduction in the impurity phase formation in BiFeO$_3$.
Interestingly, Gd$^{3+}$ doping significantly reduced the impurity phase formation as compared to the undoped BiFeO$_3$ sample. This impurity phase formation was further suppressed by doping higher ($x \ge 0.05$) amounts of Ti in BiFeO$_3$. The crystallographic site occupancies of Gd and Ti were confirmed by Rietveld refinement of the XRD data, Mössbauer spectroscopy, and magnetization measurements. An enhancement in ferromagnetic properties, along with moderate ferroelectric properties, was observed after co-doping. There was an increasing trend in remnant polarization (Pr) with increasing Ti concentration, besides an improvement in the characteristic saturation magnetization. Our results demonstrate that Gd$^{3+}$ and Ti$^{4+}$ doping could be used to enhance the multifunctional properties of BiFeO$_3$ ceramics and enable them as potential materials for various devices.
https://www.vexorian.com/2013/08/three-colorability.html
## Saturday, August 10, 2013

### ThreeColorability (SRM 587 div1 hard) (main editorial preview post)

## Intro

Link to problem statement. In short, we have a square grid. We want to color the corners of the squares using at most 3 colors, in such a way that points connected by a line segment don't share a color. Is it possible? Of course it's possible! So, how about making it harder? You also need to add exactly one diagonal in each of the squares. Some squares already start with diagonals. Find a way to fill in the remaining diagonals.

This explanation takes the explanation for the division 2 version as a base. There we found a property that is both necessary and sufficient for a setup to be valid: each row must be equal or completely opposite to the first row. The question is how to use this property to generate the lexicographically-first way to fill the remaining cells.

## Is it possible?

Given a grid of some set cells and some unknowns, can the unknown cells be filled in a way that makes the points 3-colorable? In a way such that the number of Zs in each 2x2 sub-rectangle is even? In a way such that each row is equal or completely opposite to the first row?

Binary grids that follow those properties follow many other properties. There is a useful one that can be derived from the row property. If each row is equal to the first row or its negation, then we can "negate" the negated rows; the result will look like this (if we once again use 1 for Z and 0 for N):

0110001100101001
0110001100101001
0110001100101001
0110001100101001
0110001100101001
0110001100101001

Then we can negate all the columns that contain 1, and the result is full of zeros. We can conclude that the valid setups are exactly those obtained from the all-zeros grid by negating a set of rows and a set of columns. Let us assign values to the rows and columns: 0 for rows/columns that are not negated and 1 for the negated ones. Now take a look at the following setup:

????1??????
?0??????0??
????1??????
?0?????????
If cell (x,y) has value 0, then there are two possibilities: a) neither column x nor row y was negated; b) both column x and row y were negated. We can just say that the values for row y and column x must be equal. Likewise, if cell (x,y) has a 1, the values for the respective row and column must be different. If the cell is '?', there is no condition that connects the row and column directly.

Consider a set of variables such that each can be 0 or 1. There are some conditions of the form (v_i = v_j), and others of the form (v_i != v_j). Is there at least one way to assign valid values to the variables? If you assign 0 to a variable, the rules will force you to assign 0 to some variables and 1 to others. You can then process the rules for each of these new variables, which will force values on yet other variables. Repeat until there is a contradiction or until all variables have an assigned value. If we assigned 1 to the first variable instead, it wouldn't really change the result; all variables would just be negated. In other words, each condition connects two variables, and we just need a depth-first search (DFS) over the variables.

## Lexicographically first

Once we can check whether a setup can be filled correctly, we can use the same function to find the lexicographically-first result. Try the first of the unknown cells in row-major order: if putting an N in this cell position is possible, then the lexicographically-first setup will have an N in this location (because 'N' is smaller than 'Z'). If it is not possible to put an N, put a Z. Then try the remaining unknown cells in row-major order, always placing the smallest possible value in each of them.
## Code

    int N;
    int graph[110][110];
    int color[110];

    void dfs(int x, int c) {
        if (color[x] != -1) {
            return;
        }
        color[x] = c;
        for (int i = 0; i < N; i++) {
            if (graph[x][i] != -1) {
                dfs(i, (c ^ graph[x][i]));
            }
        }
    }

    bool check() {
        fill(color, color + N, -1);
        // Do a DFS to fill the values for the variables; start each
        // connected component with 0:
        for (int i = 0; i < N; i++) {
            if (color[i] == -1) {
                dfs(i, 0);
            }
        }
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                // Check if there is any inconsistency:
                if (graph[i][j] != -1 && ((color[i] ^ color[j]) != graph[i][j])) {
                    return false;
                }
            }
        }
        return true;
    }

    vector<string> lexSmallest(vector<string> cells) {
        int X = cells.size(), Y = cells[0].length();
        N = X + Y;
        // -1 means there is no connection between the variables.
        // (Note: the whole N x N matrix must be reset, since check()
        // reads all of it.)
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                graph[i][j] = -1;
            }
        }
        // For each cell != '?':
        for (int i = 0; i < X; i++) {
            for (int j = 0; j < Y; j++) {
                if (cells[i][j] != '?') {
                    // If the cell is 'Z', the row and column values must
                    // differ: save 1 in the graph. Else 0: they must be equal.
                    graph[i][X + j] = graph[X + j][i] = (cells[i][j] == 'Z');
                }
            }
        }
        if (!check()) {
            // The board is already invalid:
            return {};
        }
        // Lexicographically-first: for each cell in row-major order:
        for (int i = 0; i < X; i++) {
            for (int j = 0; j < Y; j++) {
                if (cells[i][j] == '?') {
                    // Try with N:
                    graph[i][X + j] = graph[X + j][i] = 0;
                    if (!check()) {
                        // Not possible, put a Z:
                        graph[i][X + j] = graph[X + j][i] = 1;
                    }
                }
            }
        }
        // Translate the result back to N/Z:
        vector<string> ans(X);
        for (int i = 0; i < X; i++) {
            for (int j = 0; j < Y; j++) {
                ans[i] += ((graph[i][X + j] == 0) ? 'N' : 'Z');
            }
        }
        return ans;
    }
https://gmatclub.com/forum/ds-divisible-by-4-jpg-71426.html
# DS-Divisible by 4.jpg

Manager (joined 21 May 2008), 10 Oct 2008, 06:46

This topic is locked. If you want to discuss this question please re-post it in the respective forum.

Attachment: DS-Divisible by 4.jpg (as quoted in the replies below: Is n^3 - n divisible by 4? (1) n = 2k + 1, where k is an integer (2) n^2 + n is divisible by 6)

Director (joined 27 Jun 2008, WE 1: Investment Banking, 6 yrs), 10 Oct 2008, 07:35

(1)
k = 2, n = 5: n^3 - n = 125 - 5 = 120 ... yes
k = 1, n = 3: n^3 - n = 27 - 3 = 24 ... yes
Suff

(2) n^2 + n = n(n+1), divisible by 6.
Let n = 2: n^3 - n = 2^3 - 2 = 6 ... not divisible by 4
n = 5: n^3 - n = 120 ... divisible by 4
Insuff

A

Senior Manager (joined 04 Aug 2008), 10 Oct 2008, 07:36

No easy way here. Started by plugging in 1, and that eliminated (1); then proceeded with (2), and with all the combinations it turned out that only numbers divisible by 6 but greater than 6 satisfy, therefore C.

_________________
The one who flies is worthy. The one who is worthy flies.
The one who doesn't fly isn't worthy.

Senior Manager
Joined: 04 Aug 2008
Posts: 372

### Show Tags 10 Oct 2008, 07:43

pawan203 wrote:
> (quoted post above)

If n is 2 then it satisfies 1) but not the question: 8 − 2 = 6, not divisible.

VP
Joined: 30 Jun 2008
Posts: 1034

### Show Tags 10 Oct 2008, 07:50

I guess there is an easy way here.

$$n^3 - n = n(n^2 - 1) = n(n + 1)(n - 1)$$

The question is: is (n − 1)·n·(n + 1) divisible by 4? (Notice n − 1, n and n + 1 are consecutive numbers.)

(1) n = 2k + 1. This essentially means n is odd, so n − 1 and n + 1 are both even. Rule: when an integer can be divided by 2 twice, it is divisible by 4 as well. Now, as we have two evens, the product is divisible by 2 twice, or in other words it is divisible by 4. So statement 1 is sufficient.

(2) $$n^2 + n$$ is divisible by 6, or n(n + 1) is divisible by 6. So either n or n + 1 is a multiple of 2 and the other is a multiple of 3. To determine whether a number is divisible by 4, we have to check if it is divisible by 2 twice. Now in statement 2 we do not know if n is even or odd, so we can't say.
_________________
"You have to find it. No one else can find it for you." - Bjorn Borg

Manager
Joined: 15 Apr 2008
Posts: 164

### Show Tags 10 Oct 2008, 13:17

I am also getting answer A. I did it by picking odd and even numbers.
SVP
Joined: 05 Jul 2006
Posts: 1750

### Show Tags 10 Oct 2008, 16:05

n(n + 1)(n − 1) are 3 consecutive integers. If n is odd then it is surely divisible by 4; if n is even then it could be divisible by either 2 or 4.

From (1), n is odd ....... sufficient.
From (2), n(n + 1) is divisible by 6; we still don't know if n is even or odd ..... insufficient.

A

Manager
Joined: 10 Aug 2008
Posts: 74

### Show Tags 11 Oct 2008, 11:29

Question is: n^3 − n is divisible by 4, or n(n − 1)(n + 1) is divisible by 4.

Now statement 1: n = 2k + 1 => (n − 1) = 2k, which means n − 1 is always divisible by 2.

Statement 2: n^2 + n is divisible by 6 => n(n + 1) is divisible by 6, so n(n − 1)(n + 1) is divisible by 12, hence divisible by 4.

Ans is C)
2017-10-21 18:04:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36139726638793945, "perplexity": 7382.904255544834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824824.76/warc/CC-MAIN-20171021171712-20171021191712-00747.warc.gz"}
http://enacademic.com/dic.nsf/enwiki/294546/Radius
The distance from the center of a sphere or ellipsoid to its surface is its radius. The equivalent "surface radius" that is described by radial distances at points along the body's surface is its radius of curvature (more formally, the radius of curvature of a curve at a point is the radius of the osculating circle at that point). With a sphere, the radius of curvature equals the radius. With an oblate ellipsoid (or, more properly, an oblate spheroid), however, not only does it differ from the radius, but it varies, depending on the direction being faced. The extremes are known as the "principal radii of curvature".

Explanation

Imagine driving a car on a curvy road on a completely flat plain (so that the geographic plain is a geometric plane). At any one point along the way, lock the steering wheel in its position, so that the car thereafter follows a perfect circle. The car will, of course, deviate from the road, unless the road is also a perfect circle. The radius of that circle the car makes is the radius of curvature of the curvy road at the point at which the steering wheel was locked. The more sharply curved the road is at the point you locked the steering wheel, the smaller the radius of curvature.

Formula

If $\gamma : \mathbb{R} \rightarrow \mathbb{R}^n$ is a parameterized curve in $\mathbb{R}^n$, then the radius of curvature at each point of the curve, $\rho : \mathbb{R} \rightarrow \mathbb{R}$, is given by

$$\rho(t) = \frac{|\gamma'(t)|^3}{\sqrt{|\gamma'(t)|^2\,|\gamma''(t)|^2 - (\gamma'(t) \cdot \gamma''(t))^2}}$$

or, omitting the parameter ($t$) for readability,

$$\rho = \frac{|\gamma'|^3}{\sqrt{|\gamma'|^2\,|\gamma''|^2 - (\gamma' \cdot \gamma'')^2}}.$$
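As a quick numerical sanity check, added here, the formula gives back the radius of a circle; the parameterization below (a circle of radius 2) is chosen purely for illustration:

```python
import math

def radius_of_curvature(g1, g2):
    """rho = |g'|^3 / sqrt(|g'|^2 |g''|^2 - (g'.g'')^2) for the first and
    second derivative vectors g1, g2 of a parameterized plane curve."""
    n1 = math.hypot(*g1)
    n2 = math.hypot(*g2)
    dot = g1[0]*g2[0] + g1[1]*g2[1]
    return n1**3 / math.sqrt(n1**2 * n2**2 - dot**2)

# gamma(t) = (2 cos t, 2 sin t): a circle of radius 2
t = 0.7
g1 = (-2*math.sin(t),  2*math.cos(t))   # gamma'(t)
g2 = (-2*math.cos(t), -2*math.sin(t))   # gamma''(t)
print(radius_of_curvature(g1, g2))      # ≈ 2.0, the circle's radius
```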
Elliptic, latitudinal components

The radius extremes of an oblate spheroid are the equatorial radius, or semi-major axis, $a$, and the polar radius, or semi-minor axis, $b$. The "ellipticalness" of any ellipsoid, like any ellipse, is measured in different ways (e.g., eccentricity and flattening), any and all of which are trigonometric functions of its angular eccentricity, $\varepsilon$; consistently with the formulas below, $\cos\varepsilon = b/a$.

The primary parameter utilized in identifying a point's vertical position is its latitude. A latitude can be expressed either directly or from the arcsine of a trigonometric product, the "arguments" (i.e., a function's "input") of the factors being the arc path (which defines, and is the azimuth at the equator of, a given great circle, or its elliptical counterpart) and the transverse colatitude, which is a corresponding, vertical latitude ring that defines a point along an arc path/great circle. The relationship can be remembered by the terms' initial letters, L-A-T.

Therefore, along a north-south arc path (which equals 0°), the primary quadrant form of latitude equals the transverse colatitude at a given point. As most introductory discussions of curvature and their radius identify position in terms of latitude, this article will too, with only the added inclusion of a "0" placeholder for more advanced discussions where the arc path is actively utilized: $F(L) \rightarrow F(0,L) = F(A,T)$.

There are two types of latitude commonly employed in these discussions, the planetographic (or planetodetic; for Earth, the customized terms are "geographic" and "geodetic") latitude, $\phi$, and the reduced latitude, $\beta$, respectively, related by $\tan\beta = \cos(\varepsilon)\tan\phi$.

The calculation of elliptic quantities usually involves different elliptic integrals, the most basic integrands being $E'(0,L)$ and its complement, $C'(0,L)$.
Curvature

A simple, if crude, definition of a circle is "a curved line bent in equal proportions, where its endpoints meet". Curvature, then, is the state and degree of deviation from a straight line, i.e., an "arced line". There are different interpretations of curvature, depending on such things as the planular angle the given arc is dividing and the direction being faced at the surface's point. What is of concern here is "normal curvature", where "normal" refers to orthogonality, or perpendicularity. There are two principal curvatures identified, a maximum, $\kappa_1$, and a minimum, $\kappa_2$.

Meridional maximum: The arc in the meridional, north-south vertical direction at the planetographic equator possesses the maximum curvature, where it "pinches", thereby being the least straight.

Perpendicular minimum: The perpendicular, horizontally directed arc contains the least curvature at the equator, as the equatorial circumference is, at least in mathematical definition, perfectly circular. The spot of least curvature on an oblate spheroid is at the poles, where the principal curvatures converge (as there is only one facing direction: towards the planetographic equator!) and the surface is most flattened.

Merged curvature: There are two universally recognized blendings of the principal curvatures: the arithmetic mean is known as the "mean curvature", $H$, while the squared geometric mean, or simply the product, is known as the "Gaussian curvature", $K$:

$$H = \frac{\kappa_1 + \kappa_2}{2}; \qquad K = \kappa_1\,\kappa_2.$$

A curvature's radius, RoC, is simply its reciprocal:

$$\mathrm{RoC} = \frac{1}{\mathrm{curvature}}; \qquad \mathrm{curvature} = \frac{1}{\mathrm{RoC}}.$$

Therefore, there are two principal radii of curvature: a vertical, corresponding to $\kappa_1$, and a horizontal, corresponding to $\kappa_2$.
Most introductions to the principal radii of curvature provide explanations independent of their curvature counterparts, focusing more on positioning and angle than on shape and contortion.

The vertical radius of curvature is parallel to the "principal vertical", which is the facing "central meridian", and is known as the meridional radius of curvature, $M$ (alternatively, $R_1$ or $p$). Crossing the planetographic equator, $M = b\cos(\varepsilon) = \frac{b^2}{a}$.

The horizontal radius of curvature is perpendicular (again, meaning "normal" or "orthogonal") to the central meridian, but parallel to a great arc (be it spherical or elliptical) as it crosses the "prime vertical", or "transverse equator" (i.e., the meridian 90° away from the facing principal meridian, the "horizontal meridian"), and is known as the transverse (equatorial), or normal, radius of curvature, $N$ (alternatively, $R_2$ or $v$). Along the planetographic equator, which is an ellipsoid's only true great circle, $N = b\sec(\varepsilon) = a$.

Polar convergence: Just as with the curvature, at the poles $M$ and $N$ converge, resulting in an equal radius of curvature:

$$M = N = a\sec(\varepsilon) = \frac{a^2}{b}.$$

There are two possible, basic "means":

* "Mean radius of curvature", which is the arithmetic mean:

$$\frac{M+N}{2} = \frac{\frac{1}{\kappa_1}+\frac{1}{\kappa_2}}{2} = \frac{M}{2}\left(1+\frac{a^4}{(bN)^2}\right) = \frac{N}{2}\left(\frac{(bN)^2}{a^4}+1\right);$$

* "Radius of mean curvature", which is the harmonic
mean:

$$\frac{2}{\frac{1}{M}+\frac{1}{N}} = \frac{2}{\kappa_1+\kappa_2} = \frac{1}{H} = \frac{2M}{1+\frac{(bN)^2}{a^4}} = \frac{2N}{\frac{a^4}{(bN)^2}+1}.$$

If these means are then arithmetically and harmonically averaged together, with the results re-averaged until the two averages converge, the result will be the arithmetic-harmonic mean, which equals the geometric mean and, in turn, equals the square root of the inverse of the Gaussian curvature:

$$\sqrt{MN} = \sqrt{\frac{1}{K}} = \sqrt{\frac{1}{\kappa_1\kappa_2}} = \frac{b}{a^2}N^2.$$

While, at first glance, the squared form may be regarded as either the "radius of Gaussian curvature", "radius of Gaussian curvature²" or "radius² of Gaussian curvature", none of these terms quite fits, as the Gaussian curvature is the product of two curvatures, rather than a singular curvature.

Applications and examples

*For the use in differential geometry, see Cesàro equation.
The radius of curvature is also used in a three-part equation for bending of beams.

See also
*Curvature
*Diameter
* [http://www.fas.org/irp/agency/nima/nug/gloss_t.html USIGS Glossary] (definitions of "transverse" terms)
* [http://www.geom.uiuc.edu/zoo/diffgeom/surfspace/concepts/curvatures/prin-curv.html The Geometry Center: Principal Curvatures]
* [http://www-math.mit.edu/18.013A/HTML/chapter15/section03.html 15.3 Curvature and Radius of Curvature]
* [http://mathworld.wolfram.com/PrincipalCurvatures.html MathWorld: Principal Curvatures]
* [http://www.brown.edu/Students/OHJC/hm4/k.htm The History of Curvature]

Wikimedia Foundation. 2010.
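The identities above can be spot-checked numerically. The sketch below uses the standard geodetic closed forms for $M$ and $N$ as functions of geodetic latitude (assumed here; they are equivalent to the angular-eccentricity expressions above) with illustrative WGS84-like semi-axes:

```python
import math

# M = a(1 - e^2) / (1 - e^2 sin^2 phi)^(3/2)   (meridional)
# N = a / (1 - e^2 sin^2 phi)^(1/2)            (transverse/normal)
a, b = 6378.137, 6356.752   # WGS84-like semi-axes, km (illustrative)
e2 = 1 - (b/a)**2           # squared first eccentricity

def M(phi):
    return a*(1 - e2) / (1 - e2*math.sin(phi)**2)**1.5

def N(phi):
    return a / math.sqrt(1 - e2*math.sin(phi)**2)

phi = math.radians(40.0)
# geometric-mean radius sqrt(MN) = sqrt(1/K) equals (b/a^2) N^2:
lhs = math.sqrt(M(phi)*N(phi))
rhs = (b/a**2) * N(phi)**2
print(abs(lhs - rhs) < 1e-6)       # True
print(abs(M(0) - b*b/a) < 1e-6)    # True: equatorial M = b^2/a
print(abs(N(0) - a) < 1e-9)        # True: equatorial N = a
```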
2019-03-24 18:12:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8120474219322205, "perplexity": 1430.2091423966615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203464.67/warc/CC-MAIN-20190324165854-20190324191854-00298.warc.gz"}
https://readme.phys.ethz.ch/osx/skim/
# Skim Skim lets you view and annotate PDF documents. ## Features ### Annotations You can highlight text and add notes. In contrast to Apple's Preview, the annotations are stored as extended attributes without altering the PDF file itself. This ensures that saving notes is quick even for large documents. The text highlights and notes can be exported to a text file for further studying or processing. The SkimNotes command-line tool lets you automate the conversion of the Skim notes and includes an agent other programs can connect to in order to access the notes. ### Full-screen Besides the presentation mode, Skim offers a full-screen mode for comfortable reading. ### Auto-reload Skim can recognize when a PDF file is updated on disk and reload it. For this to work, make sure `Check for file changes` is checked in the `Sync` tab of Skim's preferences. If you don't want to be prompted for every file, you can enter the following command to activate auto-reload globally. ``````defaults write -app Skim SKAutoReloadFileUpdate -boolean true `````` ### TeX-PDF synchronization Skim lets you Shift-Command-click on a location in the PDF and jump to the corresponding line in the TeX source, and the other way round. This may require a little bit of configuration at three locations: Skim, the TeX compiler and your text editor. #### Skim Skim has predefined settings for the most common editors in the `Sync` preferences. #### TeX To enable TeX-PDF synchronization, you can compile with `pdflatex -synctex=1` or include `\usepackage{pdfsync}` in the header of your TeX document. #### Editor In Vim you can define a keyboard shortcut to jump to the location in the PDF corresponding to the TeX line under the cursor. Putting the following in your `.vimrc` assigns this mapping to the `F3` key. 
``````au filetype tex map <F3> :w<CR>:silent !/Applications/Skim.app/Contents/SharedSupport/displayline <C-r>=line('.')<CR> %<.pdf %<CR> `````` Further instructions and configurations for other environments can be found in the TeX-PDF Synchronization Wiki.
2022-08-15 01:00:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8477887511253357, "perplexity": 3667.7100582607786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00655.warc.gz"}
http://openstudy.com/updates/4d9cd2a88f378b0b47e9e117
## anonymous 5 years ago find the surface area of the regular pyramids

1. anonymous

2. anonymous
Well, if you can find the slant height you can find the area of the triangles on the sides. Then just add those areas plus the area of the bottom; that should give you the total. $Area_{triangle} = \frac{Base \cdot Height}{2}$

3. anonymous
so the first one would be 21 cm^2
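The actual figures are in an image that is not reproduced here, so the numbers below are assumed values for a hypothetical square pyramid, just to illustrate the method described in the answer (base area plus four triangles of base·height/2):

```python
def square_pyramid_surface_area(base_edge, slant_height):
    base_area = base_edge ** 2
    # four triangular faces, each (base * height) / 2 with height = slant height
    lateral_area = 4 * (base_edge * slant_height / 2)
    return base_area + lateral_area

print(square_pyramid_surface_area(3, 5))  # 39.0  (9 + 30)
```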
2017-01-18 14:48:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5796315670013428, "perplexity": 1427.608209565039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280292.50/warc/CC-MAIN-20170116095120-00327-ip-10-171-10-70.ec2.internal.warc.gz"}
https://listserv.uni-heidelberg.de/cgi-bin/wa?A3=0208&L=LATEX-L&E=7bit&P=15534&B=--&T=text%2Fplain;%20charset=iso-8859-1&header=1
Hi, after a long time and after changing from YandY to fpTeX I made an attempt to compile xo-pfloat with the new OR (with PDFLaTeX). I got the following error message: [snip] (c:/TeX/texmf/tex/latex3/xor/xo-alloc.sty ! LaTeX Error: Missing \begin{document}. See the LaTeX manual or LaTeX Companion for explanation. Type H <return> for immediate help. ... l.29 \chardef\@kludgeins="F C\relax^^M ? [snip] and the same for \insc@unt"FC\relax After changing these hex values to decimal everything compiled perfectly well. Now I'm wondering if this is well-known, a bug, or something wrong with my setup. BTW are there new developments concerning this stuff (OR, grid design, etc.)? Regards, Ulrich Dirr
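For reference, the hex-to-decimal workaround described above can be written out explicitly; `"FC` is hexadecimal for 252, and the lines below are reconstructed from the error messages rather than copied from xo-alloc.sty:

```latex
% As reported, allocations of the form
%   \chardef\@kludgeins="FC\relax
%   \chardef\insc@unt="FC\relax
% failed when the hex constant was mangled in transit ("F C);
% replacing the hex values with their decimal equivalents compiled:
\chardef\@kludgeins=252\relax
\chardef\insc@unt=252\relax
```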
2022-12-03 09:03:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9710068702697754, "perplexity": 11237.340469563896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710926.23/warc/CC-MAIN-20221203075717-20221203105717-00359.warc.gz"}
https://electronics.stackexchange.com/questions/169268/pic18-io-polling
# PIC18 IO polling

I just recently decided to migrate an existing design, which was based on interrupt-on-change pins, to standard IO polling, due to some constraints in the part that I was using. I am trying to figure out the worst-case time required to poll a GPIO for an event, process the event and exit the loop. I have seen the word "non-blocking" thrown around in various literature, and my guess is that is what I would like to implement here. Any pointers or pseudo-code would be very helpful. The part I am using is the PIC18F85K22 and I am running the internal clock at 64 MHz (the maximum).

A non-blocking call to a portion of code usually implies the request for a resource, such as a printer, or an event in your case. If the printer is busy, or the event has not happened, there are two possibilities:

• wait for the resource to become available
• carry on and check after a while

The first option is blocking: the code execution is halted in what is called busy waiting, or spinning. The processor can't do anything else, and that is a waste of power and time. The second way is non-blocking: the code execution continues, the processor can do something else, possibly servicing other events, then check back later. The problem is that if your event expires in some way, code execution may well not come back in time.

Some pseudo-C to illustrate an example of busy waiting:

```c
while (event_1_has_happened == 0)
    ;                         /* do nothing */
int result_1 = service_event_1();

while (event_2_has_happened == 0)
    ;                         /* do nothing */
int result_2 = service_event_2();
/* ... and so on */
```

Non-blocking wait:

```c
int keep_servicing = 1;
int result_1 = 0, result_2 = 0;
while (keep_servicing == 1) {
    if (event_1_has_happened == 1) {
        result_1 = service_event_1();
    }
    if (event_2_has_happened == 1) {
        result_2 = service_event_2();
    }
    /* ... and so on */
    keep_servicing = somefunctionofresults(result_1, result_2 /* , ... */);
}
```

Please note: the above samples assume that the checks `event_n_has_happened` return immediately, whether the event has happened or not.
They might be something like checking if an input pin is high or low, or whatever. • This is relevant only if the OP is running some sort of RTOS. He says he's heard of the term "non-blocking" but that doesn't mean its applicable to him. – tcrosley May 6 '15 at 19:09 • I don't see where an OS becomes part of the game... – Vladimir Cravero May 6 '15 at 19:11 • Generally if you need to worry about blocking, that implies multi-tasking. – tcrosley May 6 '15 at 19:12 • @tcrosley it is a bare-metal application. No RTOS. Right now the main function has a bunch of "busy" loops that wait for various tasks to be completed. I am looking for ways to streamline this multitasking without a RTOS. – ultrasounder May 11 '15 at 18:39 • @ultrasounder if you make a new question explaining in detail what these tasks are you can be helped better. As things are the best advice that I can give is: make your busy waits not busy. – Vladimir Cravero May 13 '15 at 14:07
2019-12-13 21:36:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25035855174064636, "perplexity": 1864.259231629347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540569146.17/warc/CC-MAIN-20191213202639-20191213230639-00381.warc.gz"}
https://investeringarnjrt.web.app/50808/24413.html
# 5% of 15000

Z = ... P-values: use the standard normal distribution (Table A-2) and refer to figure 8-6 on page 396.

The average cost of owning and operating a vehicle is $8121 per 15,000 miles, including fixed and variable costs. Step 5: Make a conclusion. The town of...

14. log x + log 3 − 5 log z − (1/2) log m. 15.

Gen Z isn't all that different, but their definitions might be. And just like other age groups, when Gen Z customers feel appreciated, they are more likely to recommend or... Only 5% of companies do all it takes to get to payback... one of his customers cancels an order for $15,000 and returns the...

5. George makes 7% on every sale. If he sold 3 vacuums at $225 each and 4 carpet...

3) A population of beetles is growing each month at a rate of 5%.

6) You invest...

Jul 4, 2015. Deprescribing benzodiazepines and Z-drugs in community-dwelling adults. PO Box 15000, Halifax, NS, B3H 4R2, Canada. andrea.murphy@dal.ca. 58 (42%) were narrative reviews and seven (5%) were guidelines.

## 8 rows

Now we have two simple equations: 1) 15000 = 100%, 2) x = 5%, where the left sides of both of them have the same units, and both right sides have the same units, so we can do something like this: 15000/x = 100%/5%.

6. We assume that the whole amount is 15000.

### Solution

The couple invested $15,000 (the principal) for 3 years (the time)... [flattened worked table: I = Prt, i.e. 3,375 = 15,000 · r · 3 = 45,000r]

Solve each system using matrices. 5. 6. 2x + 2y + 3z...

Z-stacks were taken at an interval of 0.1 µm across cells fixed identically by centrifugation at 15,000×g for 20 min to remove aggregated protein. [flattened numeric table: 6,800; 3,900–4,700; 2,700–3,300; 22,000; 10,000–8,200; $15,000]

The two companies will make their decisions independently; either, both... If they promise 5 hours, the on-time probability is P(X < 5), which is P(Z < [5...

Let's do another example. [flattened figures: 20,000; 1,053; 392; 204; 100] All of the steps are the same, except we replace z(.05) with z(.025):

> me <- qnorm(.975)*(15000/sqrt(10))

Approaches proposed for deprescribing benzodiazepines and Z-drugs are numerous and heterogeneous. Current research in this area using methods such as randomized trials and meta-analyses may too narrowly encompass potential strategies available to target...

At 5%: Php 15,000.00; at 7%: Php 60,000.00.

LET Review for Math Majors, March 2020 (August 28, 2020): An obtuse angle is greater than a right angle but less than the ____ angle. A. Straight B. Reflex C. Acute D. Right.

A job has 10 cost code categories, half of which have a markup listed of 5%, three have a markup of 15%, and the remainder has no markup. If each category has $15,000 in cost, what is the price?

Rangers: French bank BNP Paribas buys 5% stake.

If he sold 3 vacuums at $225 each and 4 carpet... (asked Sep 5, 2019 in Accounts by PujaBharti (55.0k points)). X, Y, and Z are... The drawings of the partners were X Rs. 15,000, Y Rs. 12,600, Z Rs. 12,000.

A P2P lender is willing to lend him $16,000 for 5 years at an interest rate of 12%, along with a 5% fee up front.
### To calculate 5% of the number 15000, two steps are needed: 1) Convert the percentage to a fraction: $5\% = \frac{5\%}{100\%} = 0.05$.

For information regarding online support and services, visit... Vibration may be applied in the X, Y, or Z axis. [flattened figures: 5,000; 36,000; 40,000]

## NTPC shares jump 5% on plans to raise Rs 15,000 cr via bonds

The funds are proposed to be raised on a private placement basis in one or more tranches not exceeding 30, as per the company's notice.

North Fremont's population growth rate is 5% (15,000 in 2010); if the population growth rate remains constant, when will the population reach 30,000?

Resistor, Carbon Film, 15K Ohm, 1 Watt, 5%, available at Jameco Electronics.

Determine the unknown number. 24. A motor mower cost 6,000 CZK in February. In April its price was raised by 5%.
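The proportion method described above (15000 = 100%, x = 5%, so 15000/x = 100%/5%) works out as:

```python
# 15000 corresponds to 100%, x corresponds to 5%:
#   15000 / x = 100 / 5  ->  x = 15000 * 5 / 100
x = 15000 * 5 / 100
print(x)  # 750.0

# equivalently, convert the percentage to a fraction first: 5% = 0.05
print(0.05 * 15000)  # 750.0
```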
2023-03-21 23:09:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4998769462108612, "perplexity": 11744.82466140695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00486.warc.gz"}
https://socratic.org/questions/how-do-you-solve-the-system-4x-y-6-and-5x-y-21-by-substitution
# How do you solve the system -4x + y = 6 and -5x - y = 21 by substitution? May 21, 2015

$- 4 x + y = 6$ ----------(1)
$- 5 x - y = 21$ ----------(2)

We can transpose $- 4 x$ in the first equation to the right-hand side:

$y = 4 x + 6$ ------(3)

Substituting $y$ from the third equation into the second one gives us:

$- 5 x - \left(4 x + 6\right) = 21$
$- 5 x - 4 x - 6 = 21$
$- 9 x - 6 = 21$
$- 9 x = 21 + 6$
$- 9 x = 27$

Dividing both sides by $- 9$ gives us:

$x = \frac{27}{-9} = - 3$

Substituting $x = - 3$ in the third equation gives us:

$y = 4 \left(- 3\right) + 6 = - 12 + 6 = - 6$

The solution to both these equations: $x = -3$, $y = -6$.

Verify: substitute the values of $x$ and $y$ into both equations to see that they are satisfied.
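The substitution above can be double-checked numerically; a minimal sketch in Python (standard library only), mirroring the same steps with exact rational arithmetic:

```python
from fractions import Fraction

# Step (3): y = 4x + 6, from equation (1).
# Substituting into (2) gives -9x - 6 = 21, i.e. x = 27 / (-9).
x = Fraction(27, -9)
y = 4 * x + 6

# Verify both original equations.
assert -4 * x + y == 6
assert -5 * x - y == 21
print(x, y)  # -3 -6
```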
2020-04-01 18:46:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 19, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.358858585357666, "perplexity": 819.7320960648267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505826.39/warc/CC-MAIN-20200401161832-20200401191832-00285.warc.gz"}
https://tex.stackexchange.com/questions/353856/get-thmtools-style-with-mdframed-issues-with-the-title-and-mwe-provided
# Get thmtools style with mdframed (issues with the title and MWE provided) I am trying to achieve the following output with mdframed, but I can't. I did the proof in the attached picture below with thmtools, but I am abandoning it because some of my proofs are very long and I want them to be split between pages to avoid getting too large white spaces at the end of some pages (something that apparently thmtools cannot handle easily). What I can get is this: But the output I am trying to replicate should look as follows: As you can see, what I need is: (a) The title in the same line as the text. (b) The word "Proof" before the name of the theorem being proven. (c) A long horizontal dash between Proof and the title of the theorem being proven. (d) A colon after the title of the theorem being proven. Does it make sense? Here you can find an MWE that replicates what I get (except the font and the color of the math, because those are irrelevant to my question and would only make the code more complex without need).
\documentclass[a4paper]{report} \pagestyle{plain} \usepackage[dvipsnames]{xcolor} \usepackage{amsmath, mathtools, amsthm, mathrsfs, amssymb} \usepackage{mdframed} \let\proof\relax \let\endproof\relax \newmdenv[linecolor=Gray,frametitle=Proof]{proof} \begin{document} \begin{proof}[frametitle={\textbf{von-Neumann--Morgenstern Expected Utility Theorem I}}] Let's prove that if there exists $U: \mathcal{L}(X) \longrightarrow \mathbb{R}$, a preference $\succsim$ on $\mathcal{L}(X)$ satisfies X and Y. Assume there exists a utility function $U: \mathcal{L}(X) \longrightarrow \mathbb{R}$ representing $\succsim$ such that $U$ satisfies the expected utility property. \begin{enumerate} \item To show that $\succsim$ \textbf{is continuous}, let $x,y,z \in \mathcal{L}(X)$ such that $x \succ y \succ z$. Since $U$ represents $\succsim$, $U(x) > U(y) > U(z)$. The set of real numbers is convex; and hence there exists $p \in (0,1)$ such that $p \cdot U(x) + (1-p) \cdot U(z) = U(y)$.
By the expected utility property, $U(p \odot x \oplus (1-p) \odot z) = p \cdot U(x) + (1-p) \cdot U(z) = U(y)$. Since $U$ represents $\succsim$, $p \odot x \oplus (1-p) \odot z \sim y$. Now, let $q,r \in [0,1]$ be such that $q >p>r$. Then: \begin{flalign*} && q \cdot U(x) + (1-q) \cdot U(z) & > p \cdot U(x) + (1-p) \cdot U(z)\\ && &> r \cdot U(x) + (1-r)\cdot U(z) \end{flalign*} And by the expected utility property, and the hypothesis that $U$ represents $\succsim$, it follows that $q \odot x \oplus (1-q) \odot z \succ y \succ r \odot x \oplus (1-r) \odot z$. \item To show that $\succsim$ \textbf{satisfies independence}, let $x,y,z \in \mathcal{L}(X)$ and let $p \in [0,1]$. Since $U$ represents $\succsim$, $x \succ y \Longleftrightarrow U(x) > U(y)$. Hence, $x \succ y \Longleftrightarrow p \cdot U(x) + (1-p) \cdot U(z) > p \cdot U(y) + (1-p) \cdot U(z)$. Since $U$ satisfies the expected utility property, $x \succ y \Longleftrightarrow [p \odot x \oplus (1-p) \odot z] \succ [p \odot y \oplus (1-p) \odot z]$. Since $U$ represents $\succsim$, $x \sim y \Longleftrightarrow U(X) = U(y)$. Hence, $x \sim y \Longleftrightarrow p \cdot U(x) + (1-p) \cdot U(z) = p \cdot U(y) + (1-p) \cdot U(z)$. Since $U$ satisfies the expected utility property, $x \sim y \Longleftrightarrow [p \odot x \oplus (1-p) \odot z] \sim [p \odot y \oplus (1-p) \odot z]$. \end{enumerate} \end{proof} \end{document} Could anybody help me, please? Thank you all in advance for your time. • Can you make a compilable MWE? \begin{document} is in the wrong place and after moving it I get Environment proof undefined. \begin{proof} – user36296 Feb 14 '17 at 17:27 • I am really sorry for the mistake. It is fixed now and the MWE should compile just fine. 
– Héctor Feb 14 '17 at 17:31 With a bit of help from https://tex.stackexchange.com/a/185168/36296: \documentclass[a4paper]{report} \usepackage{amsmath, amssymb, amsthm} \usepackage{mdframed} \newtheoremstyle{mystyle}% % Name {0pt}% % Space above {}% % Space below {\itshape}% % Body font {}% % Indent amount {\bfseries}% % Theorem head font {:}% % Punctuation after theorem head { }% % Space after theorem head, ' ', or \newline {\thmname{#1} -- \thmnote{#3}}% % Theorem head spec (can be left empty, meaning normal') \theoremstyle{mystyle} \let\proof\relax \let\endproof\relax \newmdtheoremenv[innerleftmargin=0.1cm,innerrightmargin=0.1cm,innertopmargin=0.1cm,innerbottommargin=0.1cm]{proof}{Proof} \AtEndEnvironment{proof}{\hfill$\square$}% \begin{document} \begin{proof}[von-Neumann--Morgenstern Expected Utility Theorem I] Let's prove that if there exists $U: \mathcal{L}(X) \longrightarrow \mathbb{R}$, a preference $\succsim$ on $\mathcal{L}(X)$ satisfies X and Y. Assume there exists a utility function $U: \mathcal{L}(X) \longrightarrow \mathbb{R}$ representing $\succsim$ such that $U$ satisfies the expected utility property. \begin{enumerate} \item To show that $\succsim$ \textbf{is continuous}, let $x,y,z \in \mathcal{L}(X)$ such that $x \succ y \succ z$. Since $U$ represents $\succsim$, $U(x) > U(y) > U(z)$. The set of real numbers is convex; and hence there exists $p \in (0,1)$ such that $p \cdot U(x) + (1-p) \cdot U(z) = U(y)$. By the expected utility property, $U(p \odot x \oplus (1-p) \odot z) = p \cdot U(x) + (1-p) \cdot U(z) = U(y)$. Since $U$ represents $\succsim$, $p \odot x \oplus (1-p) \odot z \sim y$. Now, let $q,r \in [0,1]$ be such that $q >p>r$. 
Then: \begin{flalign*} && q \cdot U(x) + (1-q) \cdot U(z) & > p \cdot U(x) + (1-p) \cdot U(z)\\ && &> r \cdot U(x) + (1-r)\cdot U(z) \end{flalign*} And by the expected utility property, and the hypothesis that $U$ represents $\succsim$, it follows that $q \odot x \oplus (1-q) \odot z \succ y \succ r \odot x \oplus (1-r) \odot z$. \item To show that $\succsim$ \textbf{satisfies independence}, let $x,y,z \in \mathcal{L}(X)$ and let $p \in [0,1]$. Since $U$ represents $\succsim$, $x \succ y \Longleftrightarrow U(x) > U(y)$. Hence, $x \succ y \Longleftrightarrow p \cdot U(x) + (1-p) \cdot U(z) > p \cdot U(y) + (1-p) \cdot U(z)$. Since $U$ satisfies the expected utility property, $x \succ y \Longleftrightarrow [p \odot x \oplus (1-p) \odot z] \succ [p \odot y \oplus (1-p) \odot z]$. Since $U$ represents $\succsim$, $x \sim y \Longleftrightarrow U(X) = U(y)$. Hence, $x \sim y \Longleftrightarrow p \cdot U(x) + (1-p) \cdot U(z) = p \cdot U(y) + (1-p) \cdot U(z)$. Since $U$ satisfies the expected utility property, $x \sim y \Longleftrightarrow [p \odot x \oplus (1-p) \odot z] \sim [p \odot y \oplus (1-p) \odot z]$. \end{enumerate} \end{proof} \end{document} • I will mark it as accepted because your code does exactly what I asked for, but notice that your code increases the space surrounding the text inside the frame, which is something I'd like to avoid. Also, I forgot to mention that the QED symbol should be automatically added in the environment. But this is my fault because I forgot to mention it in the original question. – Héctor Feb 14 '17 at 18:20 • @Héctor Please see my edit for adjusted margins. – user36296 Feb 14 '17 at 18:26 • @Héctor and the qed – user36296 Feb 14 '17 at 18:30 • The qed symbol should not be on a separate line. – Bernard Feb 14 '17 at 21:20 I propose a solution with the simpler framed environment option of ntheorem. One advantage is the automatic (and correct) placement of the qed symbol, even when the proof ends in a displayed equation. 
It is activated by the thmmarks option. I redefined the nonumberplain predefined style under the name myproof so as to incorporate the emdash between the name of the environment and the optional argument, in the place of the pair of parentheses. Also I obtain a better layout of the enumerate environment inside the frame with enumitem. \documentclass[a4paper]{report} \pagestyle{plain} \usepackage{geometry}% \usepackage[dvipsnames]{xcolor} \usepackage{amsmath, mathtools, mathrsfs, amssymb} \usepackage{framed, enumitem} % \usepackage[framed, thref, amsmath, thmmarks]{ntheorem} % \makeatletter \newtheoremstyle{myproof}% \makeatother \theorembodyfont{\mdseries} \theoremseparator{:} \theoremsymbol{\ensuremath{\square}} \theoremstyle{myproof} \newframedtheorem{proof}{Proof} \begin{document} \begin{proof}[von\,Neumann-Morgenstern Expected Utility Theorem I] Let's prove that if there exists $U: \mathcal{L}(X) \longrightarrow \mathbb{R}$, a preference $\succsim$ on $\mathcal{L}(X)$ satisfies X and Y. Assume there exists a utility function $U: \mathcal{L}(X) \longrightarrow \mathbb{R}$ representing $\succsim$ such that $U$ satisfies the expected utility property. \begin{enumerate}[wide=0pt, leftmargin=*] \item To show that $\succsim$ \textbf{is continuous}, let $x,y,z \in \mathcal{L}(X)$ such that $x \succ y \succ z$. Since $U$ represents $\succsim$, $U(x) > U(y) > U(z)$. The set of real numbers is convex; and hence there exists $p \in (0,1)$ such that $p \cdot U(x) + (1-p) \cdot U(z) = U(y)$. By the expected utility property, $U(p \odot x \oplus (1-p) \odot z) = p \cdot U(x) + (1-p) \cdot U(z) = U(y)$. Since $U$ represents $\succsim$, $p \odot x \oplus (1-p) \odot z \sim y$. Now, let $q,r \in [0,1]$ be such that $q >p>r$. 
Then: \begin{flalign*} && q \cdot U(x) + (1-q) \cdot U(z) & > p \cdot U(x) + (1-p) \cdot U(z)\\ && &> r \cdot U(x) + (1-r)\cdot U(z) \end{flalign*} And by the expected utility property, and the hypothesis that $U$ represents $\succsim$, it follows that $q \odot x \oplus (1-q) \odot z \succ y \succ r \odot x \oplus (1-r) \odot z$. \item To show that $\succsim$ \textbf{satisfies independence}, let $x,y,z \in \mathcal{L}(X)$ and let $p \in [0,1]$. Since $U$ represents $\succsim$, $x \succ y \Longleftrightarrow U(x) > U(y)$. Hence, $x \succ y \Longleftrightarrow p \cdot U(x) + (1-p) \cdot U(z) > p \cdot U(y) + (1-p) \cdot U(z)$. Since $U$ satisfies the expected utility property, $x \succ y \Longleftrightarrow [p \odot x \oplus (1-p) \odot z] \succ [p \odot y \oplus (1-p) \odot z]$. Since $U$ represents $\succsim$, $x \sim y \Longleftrightarrow U(X) = U(y)$. Hence, $x \sim y \Longleftrightarrow p \cdot U(x) + (1-p) \cdot U(z) = p \cdot U(y) + (1-p) \cdot U(z)$. Since $U$ satisfies the expected utility property, $x \sim y \Longleftrightarrow [p \odot x \oplus (1-p) \odot z] \sim [p \odot y \oplus (1-p) \odot z]$. \end{enumerate} % \end{proof} \end{document} • I really like your answer, thank you very much. I am about to ask a related question on how to get the numbering I was getting with thmtools´ but with mdframed´, instead. That is, CHAPTER.SECTION.ALPH for Definitions and CHAPTER.SECTION.ROMAN for Propositions, while keeping PROOFS unnumbered. I haven't been able to do so yet, but maybe I succeed with your code. – Héctor Feb 14 '17 at 22:33 • I'll try to see that. Do you mean uppercase Roman and Alph? – Bernard Feb 14 '17 at 23:28 • I mean exactly that. You can indeed see the question here, in case you want: tex.stackexchange.com/questions/353923/… (it just got an answer). – Héctor Feb 14 '17 at 23:29 • @Héctor: I've posted a solution to your new question using ntheorem`. – Bernard Feb 15 '17 at 13:39
2020-01-18 04:14:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981808662414551, "perplexity": 707.018313476931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591763.20/warc/CC-MAIN-20200118023429-20200118051429-00331.warc.gz"}
https://www.unipa.it/persone/docenti/l/roberto.livrea/?pagina=pubblicazioni
# ROBERTO LIVREA

## Publications

All entries are journal articles ("Articolo in rivista") unless noted otherwise:

- 2019: Three solutions for parametric problems with nonhomogeneous (a,2)-type differential operators and reaction terms sublinear at zero
- 2018: Preface (journal abstract)
- 2018: Positive solutions of Dirichlet and homoclinic type for a class of singular equations
- 2017: Some notes on a superlinear second order Hamiltonian system
- 2017: Multiple solutions of second order Hamiltonian systems
- 2017: Existence, nonexistence and uniqueness of positive solutions for nonlinear eigenvalue problems
- 2016: Bifurcation phenomena for the positive solutions of semilinear elliptic problems with mixed boundary conditions
- 2016: Nonlinear elliptic equations with asymmetric asymptotic behavior at $\pm\infty$
- 2015: Resonant Neumann equations with indefinite linear part
- 2015: Nonlinear nonhomogeneous Neumann eigenvalue problems
- 2015: An existence result for a Neumann problem
- 2015: Three solutions for a two-point boundary value problem with the prescribed mean curvature equation
- 2015: Existence results for parametric boundary value problems involving the mean curvature operator
- 2014: A nonlinear eigenvalue problem for the periodic scalar p-Laplacian
- 2014: Critical points in open sublevels and multiple solutions for parameter-depending quasilinear elliptic equations
- 2013: Infinitely many solutions for a class of differential inclusions involving the p-biharmonic
- 2013: Variational versus pseudomonotone operator approach in parameter-dependent nonlinear elliptic problems
- 2013: Existence and multiplicity of periodic solutions for second order Hamiltonian systems depending on a parameter
- 2012: Multiple solutions for a Neumann-type differential inclusion problem involving the p(.)-Laplacian
- 2012: Multiple solutions for quasilinear elliptic problems via critical points in open sublevels and truncation principles
- 2012: Infinitely many solutions for a perturbed nonlinear Navier boundary value problem involving the p-biharmonic
- 2011: Bounded Palais-Smale sequences for non-differentiable functions
- 2010: Multiple periodic solutions for Hamiltonian systems with not coercive potential
- 2009: A min-max principle for non-differentiable functions with a weak compactness condition
- 2009: On a min-max principle for non-smooth functions and applications
- 2008: $Z_2$-symmetric critical point theorems for non-differentiable functions
- 2007: Some remarks on nonsmooth critical point theory
- 2006: Critical points for nondifferentiable functions in presence of splitting
- 2005: Periodic solutions for a class of second-order Hamiltonian systems
- 2004: Existence and classification of critical points for nondifferentiable functions
- 2003: Infinitely many periodic solutions for a second-order nonautonomous system
- 2003: Multiplicity theorems for the Dirichlet problem involving the p-Laplacian
- 2002: Existence of three solutions for a quasilinear two point boundary value problem
2020-02-26 20:29:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6295753121376038, "perplexity": 6415.1351925345925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146485.15/warc/CC-MAIN-20200226181001-20200226211001-00245.warc.gz"}
http://repository.essex.ac.uk/28831/
# Empirical Likelihood Based on Synthetic Right Censored Data

Liang, Wei and Dai, Hongsheng (2021) 'Empirical Likelihood Based on Synthetic Right Censored Data.' Statistics and Probability Letters, 169. p. 108962. ISSN 0167-7152

EL_survival_main.pdf - Accepted Version; EL_survival_supplementary.pdf - Supplemental Material

In this paper, we develop a Mean Empirical Likelihood (MeanEL) method for right censored data. This MeanEL approach is based on traditional empirical likelihood methods but uses synthetic data to construct an EL ratio statistic, which is shown to have a $\chi^2$ limiting distribution. Different simulation studies show that the MeanEL confidence intervals tend to have more accurate coverage probabilities than other existing Empirical Likelihood methods. Theoretical comparisons of different EL methods are also provided under a general framework.
2022-06-27 18:38:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5059092044830322, "perplexity": 2267.7436019015468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00277.warc.gz"}
https://byjus.com/questions/3x2y52x-3y7-solve-them-graphically/
# 3X+2Y=5, 2X-3Y=7 Solve Them Graphically

Given: 3x + 2y = 5 and 2x – 3y = 7

Here, $$a_{1}=3, b_{1}=2, c_{1}=5$$ $$a_{2}=2, b_{2}=-3, c_{2}=7$$

$$\therefore$$ $$\frac{a_{1}}{a_{2}} = \frac{3}{2}$$ $$\frac{b_{1}}{b_{2}} = \frac{2}{-3}$$ $$\frac{c_{1}}{c_{2}} = \frac{5}{7}$$

$$\therefore \frac{a_{1}}{a_{2}} \neq \frac{b_{1}}{b_{2}}$$

Therefore, the pair of linear equations is consistent: the two lines intersect at exactly one point, so the system has a unique solution.
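A quick numeric check of both the consistency condition and the intersection point; an illustrative sketch (the solution $(29/13, -11/13)$ is computed here by Cramer's rule, not given in the source):

```python
from fractions import Fraction

a1, b1, c1 = 3, 2, 5     # 3x + 2y = 5
a2, b2, c2 = 2, -3, 7    # 2x - 3y = 7

# Consistent with a unique solution because a1/a2 != b1/b2.
assert Fraction(a1, a2) != Fraction(b1, b2)

# Unique intersection point by Cramer's rule.
det = a1 * b2 - b1 * a2
x = Fraction(c1 * b2 - b1 * c2, det)
y = Fraction(a1 * c2 - c1 * a2, det)
assert a1 * x + b1 * y == c1 and a2 * x + b2 * y == c2
print(x, y)  # 29/13 -11/13
```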
2021-07-29 05:32:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6586819887161255, "perplexity": 1053.9460901509854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153816.3/warc/CC-MAIN-20210729043158-20210729073158-00491.warc.gz"}
https://cy2sec.comm.eng.osaka-u.ac.jp/miyaji-lab/introduction/abstract/2012/yang-jp.html
# Miyaji Laboratory

Elliptic Curve Cryptosystems (ECC) have gained increasing popularity in public key cryptography since they were first proposed independently by Miller and Koblitz in the mid-1980s. In comparison with other established systems such as RSA, ECC has become especially attractive for applications due to its shorter key-size requirement, which translates into lower power and storage requirements and reduced computing times. For example, 160-bit ECC and 1024-bit RSA offer the same level of security. These advantages make ECC beneficial for use in smart cards and embedded systems where storage, power, and computing resources are at a premium. In public key cryptography, each user or device taking part in the communication generally has a pair of keys, a public key and a private key, and a set of operations associated with the keys to do the cryptographic operations. The operations of ECC are defined over the elliptic curve $y^2 = x^3 + Ax + B$, where $4A^3 + 27B^2 \neq 0$. All points $(x, y)$ which satisfy the above equation, plus a point at infinity, lie on the elliptic curve. The private key is a random number $k$ and the public key is a point $Q$ on the curve, where $Q$ is obtained by multiplying the private key with the generator point $P$ on the curve. The security of ECC depends on the Elliptic Curve Discrete Logarithm Problem (ECDLP). Let $P$ and $Q$ be two points on an elliptic curve such that $kP = Q$, where $k$ is a scalar. Given $P$ and $Q$, it is computationally infeasible to obtain $k$ if $k$ is sufficiently large; $k$ is the discrete logarithm of $Q$ to the base $P$. Therefore, the dominant computational part of ECC is the scalar multiplication $kP$, i.e. multiplication of a scalar $k$ with any point $P$ on the curve. The speed of scalar multiplication plays an important role in the efficiency of ECC.
The running time of scalar multiplication is determined at two levels of complexity: elliptic curve point operations and finite field operations. Performing fast point operations on an elliptic curve is crucial for efficient scalar multiplication. The computation time of a point operation depends on the coordinate system adopted. Point operations in affine coordinates involve inversion, which is a particularly costly operation over prime fields. To avoid inversion over prime fields, various coordinate systems have been proposed, such as projective, Jacobian, modified Jacobian, and Chudnovsky Jacobian coordinates. Using these coordinates, we can remove inversions from the point operations at the cost of an increase in the other, simpler field operations. The computation cost of point operations differs between coordinate systems: some, such as modified Jacobian coordinates, have faster doubling than the others, and some, such as Chudnovsky Jacobian coordinates, have faster addition. One possible route to efficient scalar multiplication is therefore to use mixed coordinate systems for the point operations. The common method for scalar multiplication proceeds by iterated point additions and point doublings on the elliptic curve according to the bits $k_i$ of the binary representation of $k$; this is called the binary method. Various methods have been proposed for the efficient computation of $kP$ by reducing the number of point operations (additions, doublings). One family of methods takes different binary representations of the multiplier $k$, such as the non-adjacent form (NAF) and the window non-adjacent forms wNAF and Frac-wNAF. Unfortunately, the aforementioned methods are vulnerable to side-channel attacks, which measure observable parameters such as timing or power consumption during cryptographic operations to deduce all or part of the secret information of a cryptosystem.
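For concreteness, here is a minimal affine-coordinate sketch of the binary (double-and-add) method on the small textbook curve $y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$ with base point $P = (5, 1)$ of order 19. This is an illustration only: real ECC uses fields of around 256 bits, and this naive affine version pays the field inversions (`pow(x, -1, p)`, Python 3.8+) that the projective-style coordinate systems above are designed to avoid.

```python
P_MOD, A = 17, 2          # field modulus and curve coefficient a
INF = None                # the point at infinity

def add(p, q):
    """Affine point addition/doubling on y^2 = x^3 + A*x + B over F_p."""
    if p is INF:
        return q
    if q is INF:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                        # q = -p
    if p == q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)    # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)           # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Left-to-right binary (double-and-add) method: one doubling per bit,
    plus one addition per set bit of k."""
    r = INF
    for bit in bin(k)[2:]:
        r = add(r, r)
        if bit == '1':
            r = add(r, p)
    return r

P = (5, 1)                 # base point of order 19 on this curve
print(scalar_mult(2, P))   # (6, 3)
print(scalar_mult(19, P))  # None (the point at infinity)
```

The bit-dependent `add(r, p)` step is exactly the data-dependent behavior that makes the plain binary method vulnerable to the side-channel attacks described in the text.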
Side-channel attacks have been extended to elliptic curve cryptosystems; their particular target is the scalar $k$ in scalar multiplication, which is a secret positive integer. Various methods to improve security against power analysis attacks have been proposed, such as the Montgomery ladder and Joye's double-and-add algorithm. When extra memory is allowed, the window methods (wNAF, Frac-wNAF) can dramatically speed up the main computation of point multiplication at the price of pre-computation space. In this case, a table of points is built and stored in advance (pre-computation stage) for later use during the execution of the scalar multiplication itself (evaluation stage). Although these window-based methods effectively reduce the number of nonzero terms in most representations, a potential drawback is the cost of computing such a table, which grows with the window size. Thus, minimizing the cost of the pre-computation stage is an important research effort for reducing the total cost of scalar multiplication. Further, although improved elliptic curve shapes with faster explicit formulae are currently the focus of intense research, there is still a lack of analysis of pre-computation schemes that are efficient in these settings. In this direction, Patrick Longa and Catherine Gebotys propose efficient pre-computation schemes based on "conjugate" addition (CA), which requires far fewer additional field operations by computing the addition and subtraction of two distinct points together. Later, Sasahara, Miyaji and Yokogawa developed pre-computation schemes using a doubling-and-tripling formula (DT), which computes the doubling and tripling of a point in parallel. In this thesis, we improve the pre-computation schemes based on the following idea: compute all the precomputed points by CA and DT alone, without independent point additions. We refer to this strategy as making a "perfect" conjugate pair.
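As a small illustration of the recodings behind these window methods (an illustrative sketch, not one of the thesis's algorithms): the basic NAF recoding produces signed digits in $\{-1, 0, 1\}$ with no two adjacent nonzero digits, cutting the expected number of point additions; wNAF and Frac-wNAF generalize the digit set using the precomputed tables discussed above.

```python
def naf(k):
    """Non-adjacent form of k >= 1: signed digits in {-1, 0, 1},
    most significant first, with no two adjacent nonzero digits."""
    digits = []
    while k:
        if k & 1:
            d = 2 - (k % 4)   # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits[::-1]

# 63 = 2**6 - 1: one subtraction replaces five additions.
print(naf(63))  # [1, 0, 0, 0, 0, 0, -1]
```

On average about one digit in three is nonzero in NAF, versus one in two for plain binary, so point additions are saved even before any windowing.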
This strategy allows the efficient point operations CA and DT to be used wherever possible, so the precomputed tables are built very efficiently. Further, our pre-computation schemes compute the intermediate points in Chudnovsky Jacobian coordinates, which have faster addition. Since representing points in Chudnovsky Jacobian coordinates requires more memory, we propose a new mode of addition formulae that keeps the data of one input point; with the proposed addition formulae, the extra memory requirement due to the Chudnovsky Jacobian representation is reduced. This thesis compares and analyses the performance of the proposed schemes with the previous efficient methods, representing precomputed tables in Jacobian coordinates. The analysis shows that the proposed schemes offer the lowest computation costs for pre-computation tables for scalar multiplication. We then propose several ternary methods for scalar multiplication, representing the scalar $k$ in the ternary expansion $(k'_{n'-1},\ldots,k'_1,k'_0)_3$, where $k'_i\in \{0,1,2\}$ and $k'_{n'-1}\neq 0$. Since the ternary expansion is much shorter than the binary one, the ternary methods improve time efficiency, and we use the addition formulae in mixed coordinates to further reduce the computation time. Our proposed ternary signed-digit method and extended ternary signed-digit method offer the lowest computation time with one extra register.
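The plain (unsigned) ternary expansion that these methods start from can be sketched as follows; the proposed signed-digit variants refine the digit set, but the length advantage over binary is already visible here.

```python
def ternary_digits(k):
    """Base-3 digits of k >= 0, most significant first: k = sum d_i * 3^i."""
    if k == 0:
        return [0]
    digits = []
    while k:
        digits.append(k % 3)
        k //= 3
    return digits[::-1]

k = 314159
print(len(ternary_digits(k)), len(bin(k)) - 2)  # 12 ternary digits vs 19 bits
```

A ternary expansion has roughly $\log_3 2 \approx 0.63$ times as many digits as the binary one, which is why a triple-and-add loop needs fewer iterations.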
nLab: equivariant bundle

Idea

For $X$ a space and $G$ a group with an action on $X$, a $G$-equivariant bundle on $X$ is a bundle on the action groupoid $X//G$ of $X$.

Last revised on September 5, 2014 at 11:28:09.
# American Institute of Mathematical Sciences

ISSN: 1551-0018, eISSN: 1547-1063

## Mathematical Biosciences & Engineering

2015, Volume 12, Issue 6. Special issue on application of ecological and mathematical theory to cancer: New challenges.

2015, 12(6): i-iv. doi: 10.3934/mbe.2015.12.6i
Abstract: According to the World Health Organization, cancer is among the leading causes of morbidity and mortality worldwide. Despite enormous efforts of cancer researchers all around the world, the mechanisms underlying its origin, formation, progression, and therapeutic cure or control are still not fully understood. Cancer is a complex, multi-scale process, in which genetic mutations occurring at a sub-cellular level manifest themselves as functional changes at the cellular and tissue scale.

2015, 12(6): 1141-1156. doi: 10.3934/mbe.2015.12.1141
Abstract: Hybrid models of tumor growth, in which some regions are described at the cell level and others at the continuum level, provide a flexible description that allows alterations of cell-level properties and detailed descriptions of the interaction with the tumor environment, yet retain the computational advantages of continuum models where appropriate. We review aspects of the general approach and discuss applications to breast cancer and glioblastoma.

2015, 12(6): 1157-1172. doi: 10.3934/mbe.2015.12.1157
Abstract: Glioblastoma multiforme is an aggressive brain cancer that is extremely fatal. It is characterized by both proliferation and large amounts of migration, which contributes to the difficulty of treatment. Previous models of this type of cancer growth often include two separate equations to model proliferation or migration. We propose a single equation which uses density-dependent diffusion to capture the behavior of both proliferation and migration. We analyze the model to determine the existence of traveling wave solutions. To prove the viability of the density-dependent diffusion function chosen, we compare our model with well-known in vitro experimental data.

2015, 12(6): 1173-1187. doi: 10.3934/mbe.2015.12.1173
Abstract: In this paper, we reformulate the diffuse interface model of tumor growth (S.M. Wise et al., Three-dimensional multispecies nonlinear tumor growth-I: model and numerical method, J. Theor. Biol. 253 (2008) 524-543). In the newly proposed model, we use the conservative second-order Allen--Cahn equation with a space--time dependent Lagrange multiplier instead of the fourth-order Cahn--Hilliard equation in the original model. To numerically solve the new model, we apply a recently developed hybrid numerical method. We perform various numerical experiments. The computational results demonstrate that the new model is not only fast but also has a desirable feature: it distributes excess mass from the inside of the tumor to its boundary regions.

2015, 12(6): 1189-1202. doi: 10.3934/mbe.2015.12.1189
Abstract: Invasion and metastasis are the main cause of death in cancer patients. The initial step of invasion is the degradation of extracellular matrix (ECM) by primary cancer cells in a tissue. The membranous metalloproteinase MT1-MMP and the soluble metalloproteinase MMP-2 are thought to play an important role in the degradation of ECM. In a previous report, we found that the repetitive insertion of MT1-MMP into invadopodia was crucial for the effective degradation of ECM (Hoshino, D., et al., PLoS Comp. Biol., 2012, e1002479). However, the role of MMP-2 and the effect of inhibitors of these ECM-degrading proteases were still obscure. Here we investigated these two problems using the same model as in the previous report. First we tested the effect of MMP-2 and found that while MT1-MMP played a major role in the degradation of ECM, MMP-2 had only a marginal effect on it. Based on these findings, we next tested the effect of a putative inhibitor of MT1-MMP and found that such an inhibitor was ineffective in blocking ECM degradation. We then tested a combined strategy including an inhibitor of MT1-MMP, reduction of its turnover, and reduction of its content in vesicles. A synergistic effect of the combined strategy was observed as a decrease in the efficacy of ECM degradation. Our simulation study suggests the importance of combined strategies in blocking cancer invasion and metastasis.

2015, 12(6): 1203-1217. doi: 10.3934/mbe.2015.12.1203
Abstract: The cancer-immune interaction is a fast growing field of research in biology, where the goal is to harness the immune system to fight cancer more effectively. In the present paper we review recent work on the interaction between T cells and cancer. CD8$^+$ T cells are activated by the cytokine IL-27 and kill tumor cells. Regulatory T cells produce IL-35, which promotes cancer cells by enhancing angiogenesis, and inhibit CD8$^+$ T cells via TGF-$\beta$ production. Hence injections of IL-27 and anti-IL-35 are both potential anti-tumor drugs. The models presented here are based on mouse experiments, and their simulations agree with these experiments. The models are used to suggest effective schedules for drug treatment.

2015, 12(6): 1219-1235. doi: 10.3934/mbe.2015.12.1219
Abstract: Apoptosis resistance is a hallmark of human cancer, and tumor cells often become resistant due to defects in the programmed cell death machinery. Targeting key apoptosis regulators to overcome apoptotic resistance and promote rapid death of tumor cells is an exciting new strategy for cancer treatment, either alone or in combination with traditionally used anti-cancer drugs that target cell division. Here we present a multiscale modeling framework for investigating the synergism between traditional chemotherapy and targeted therapies aimed at critical regulators of apoptosis.

2015, 12(6): 1237-1256. doi: 10.3934/mbe.2015.12.1237
Abstract: Oncolytic viruses (OVs) are used to treat cancer, as they selectively replicate inside of and lyse tumor cells. The efficacy of this process is limited, and new OVs are being designed to mediate tumor cell release of cytokines and co-stimulatory molecules, which attract cytotoxic T cells to target tumor cells, thus increasing the tumor-killing effects of OVs. To further promote treatment efficacy, OVs can be combined with other treatments, such as was done by Huang et al., who showed that combining OV injections with dendritic cell (DC) injections was a more effective treatment than either treatment alone. To further investigate this combination, we built a mathematical model consisting of a system of ordinary differential equations and fit the model to the hierarchical data provided by Huang et al. We used the model to determine the effect of varying doses of OV and DC injections and to test alternative treatment strategies. We found that the DC dose given in Huang et al. was near a bifurcation point and that a slightly larger dose could cause complete eradication of the tumor. Further, the model results suggest that it is more effective to treat a tumor with immunostimulatory oncolytic viruses first and then follow up with a sequence of DCs than to alternate OV and DC injections. This protocol, which was not considered in the experiments of Huang et al., allows the infection to initially thrive before the immune response is enhanced. Taken together, our work shows how the ordering, temporal spacing, and dosage of OV and DC can be chosen to maximize efficacy and to potentially eliminate tumors altogether.

2015, 12(6): 1257-1275. doi: 10.3934/mbe.2015.12.1257
Abstract: A $3$-compartment model for metronomic chemotherapy that takes into account cancerous cells, the tumor vasculature, and tumor immune-system interactions is considered as an optimal control problem. Metronomic chemotherapy is the regular, almost continuous administration of chemotherapeutic agents at low dose, possibly with small interruptions to increase the efficacy of the drugs. There exists medical evidence that such administrations of specific cytotoxic agents (e.g., cyclophosphamide) have both antiangiogenic and immune-stimulatory effects. A mathematical model for angiogenic signaling formulated by Hahnfeldt et al. is combined with the classical equations for tumor immune-system interactions by Stepanova to form a minimally parameterized model to capture these effects of low-dose chemotherapy. The model exhibits bistable behavior with the existence of both benign and malignant locally asymptotically stable equilibrium points. In this paper, the transfer of states from the malignant into the benign regions is used as a motivation for the construction of an objective functional that induces this process, and the analysis of the corresponding optimal control problem is initiated.

2015, 12(6): 1277-1288. doi: 10.3934/mbe.2015.12.1277
Abstract: We propose the hypothesis that for a particular type of cancer there exists a key pair of oncogene (OCG) and tumor suppressor gene (TSG) that is normally involved in strong stabilizing negative feedback loops (nFBLs) of molecular interactions, and it is these interactions that are sufficiently perturbed during cancer development. These nFBLs are thought to regulate oncogenic positive feedback loops (pFBLs) that are often required for the normal cellular functions of oncogenes. Examples given in this paper are the pairs of MYC and p53, KRAS and INK4A, and E2F1 and miR-17-92. We propose dynamical models of the aforementioned OCG-TSG interactions and derive stability conditions of the steady states in terms of the strengths of cycles in the qualitative interaction network. Although these conditions are restricted to predictions of local stability, their simple linear expressions in terms of competing nFBLs and pFBLs make them intuitive and practical guides for experimentalists aiming to discover drug targets and stabilize cancer networks.

2015, 12(6): 1289-1302. doi: 10.3934/mbe.2015.12.1289
Abstract: Protein-protein interaction networks associated with diseases have gained prominence as an area of research. We investigate algebraic and topological indices for protein-protein interaction networks of 11 human cancers derived from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. We find a strong correlation between relative automorphism group sizes and topological network complexities on the one hand and five-year survival probabilities on the other hand. Moreover, we identify several protein families (e.g. the PIK, ITG, and AKT families) that are repeated motifs in many of the cancer pathways. Interestingly, these sources of symmetry are often central rather than peripheral. Our results can aid in the identification of promising targets for anti-cancer drugs. Beyond that, we provide a unifying framework to study protein-protein interaction networks of families of related diseases (e.g. neurodegenerative diseases, viral diseases, substance abuse disorders).

2015, 12(6): 1303-1320. doi: 10.3934/mbe.2015.12.1303
Abstract: Swimming by shape changes at low Reynolds number is widely used in biology, and understanding how the performance of movement depends on the geometric pattern of shape changes is important to understand swimming of microorganisms and in designing low Reynolds number swimming models. The simplest models of shape changes are those that comprise a series of linked spheres that can change their separation and/or their size. Herein we compare the performance of three models in which these modes are used in different ways.

2015, 12(6): 1321-1340. doi: 10.3934/mbe.2015.12.1321
Abstract: The majority of solid tumours arise in epithelia, and therefore much research effort has gone into investigating the growth, renewal and regulation of these tissues. Here we review different mathematical and computational approaches that have been used to model epithelia. We compare different models and describe future challenges that need to be overcome in order to fully exploit new data which present, for the first time, the real possibility for detailed model validation and comparison.

2017 Impact Factor: 1.23
# zbMATH — the first resource for mathematics

Rolle’s theorem fails in $$\ell_2$$. (English) Zbl 0888.46017

M. Furi and M. Martelli suggested multidimensional analogs of Rolle's theorem [Am. Math. Monthly 102, No. 3, 243-249 (1995; Zbl 0856.26009)]. The author shows by example that these analogs fail in Hilbert space.

##### MSC:
46G05 Derivatives of functions in infinite-dimensional spaces
46B45 Banach sequence spaces
26B05 Continuity and differentiation questions

Keywords: Rolle’s theorem
# Token Frequency Distribution¶

One method for visualizing the frequency of tokens within and across corpora is a frequency distribution. A frequency distribution tells us the frequency of each vocabulary item in the text. In general, it could count any kind of observable event. It is a distribution because it tells us how the total number of word tokens in the text are distributed across the vocabulary items.

```python
from yellowbrick.text.freqdist import FreqDistVisualizer
from sklearn.feature_extraction.text import CountVectorizer
```

Note that the FreqDistVisualizer does not perform any normalization or vectorization; it expects text that has already been count vectorized. We first instantiate a FreqDistVisualizer object, and then call fit() on that object with the count-vectorized documents and the features (i.e. the words from the corpus), which computes the frequency distribution. The visualizer then plots a bar chart of the top 50 most frequent terms in the corpus, with the terms listed along the x-axis and frequency counts on the y-axis. As with other Yellowbrick visualizers, when the user invokes poof(), the finalized visualization is shown.

```python
vectorizer = CountVectorizer()
docs = vectorizer.fit_transform(corpus.data)
features = vectorizer.get_feature_names()

visualizer = FreqDistVisualizer(features=features)
visualizer.fit(docs)
visualizer.poof()
```

It is interesting to compare the results of the FreqDistVisualizer before and after stopwords have been removed from the corpus:

```python
vectorizer = CountVectorizer(stop_words='english')
docs = vectorizer.fit_transform(corpus.data)
features = vectorizer.get_feature_names()

visualizer = FreqDistVisualizer(features=features)
visualizer.fit(docs)
visualizer.poof()
```

It is also interesting to explore the differences in tokens across a corpus. The hobbies corpus that comes with Yellowbrick has already been categorized (try corpus['categories']), so let’s visually compare the differences in the frequency distributions for two of the categories: “cooking” and “gaming”.

```python
from collections import defaultdict

hobbies = defaultdict(list)
for text, label in zip(corpus.data, corpus.label):
    hobbies[label].append(text)

vectorizer = CountVectorizer(stop_words='english')
docs = vectorizer.fit_transform(text for text in hobbies['cooking'])
features = vectorizer.get_feature_names()

visualizer = FreqDistVisualizer(features=features)
visualizer.fit(docs)
visualizer.poof()

vectorizer = CountVectorizer(stop_words='english')
docs = vectorizer.fit_transform(text for text in hobbies['gaming'])
features = vectorizer.get_feature_names()

visualizer = FreqDistVisualizer(features=features)
visualizer.fit(docs)
visualizer.poof()
```

## API Reference¶

Implementations of frequency distributions for text visualization.

class yellowbrick.text.freqdist.FrequencyVisualizer(features, ax=None, n=50, orient='h', color=None, **kwargs)[source]

Bases: yellowbrick.text.base.TextVisualizer

A frequency distribution tells us the frequency of each vocabulary item in the text. In general, it could count any kind of observable event. It is a distribution because it tells us how the total number of word tokens in the text are distributed across the vocabulary items.

Parameters:
- features : list, default: None — The list of feature names from the vectorizer, ordered by index. E.g. a lexicon that specifies the unique vocabulary of the corpus. This can typically be fetched using the get_feature_names() method of the transformer in Scikit-Learn.
- ax : matplotlib axes, default: None — The axes to plot the figure on.
- n : integer, default: 50 — Top N tokens to be plotted.
- orient : ‘h’ or ‘v’, default: ‘h’ — Specifies a horizontal or vertical bar chart.
- color : list or tuple of colors — Specify the color for the bars.
- kwargs : dict — Pass any additional keyword arguments to the super class. These parameters can be influenced later on in the visualization process, but can and should be set as early as possible.

count(X)[source]
Called from the fit method, this method gets all the words from the corpus and their corresponding frequency counts.
Parameters: X : ndarray or masked ndarray — the matrix of vectorized documents; can be masked in order to sum the word frequencies for only a subset of documents.
Returns: counts : array — a vector containing the counts of all words in X (columns).

draw(**kwargs)[source]
Called from the fit method, this method creates the canvas and draws the distribution plot on it.
Parameters: kwargs — generic keyword arguments.

finalize(**kwargs)[source]
The finalize method executes any subclass-specific axes finalization steps. The user calls poof, and poof calls finalize.
Parameters: kwargs — generic keyword arguments.

fit(X, y=None)[source]
The fit method is the primary drawing input for the frequency distribution visualization. It requires vectorized lists of documents and a list of features, which are the actual words from the original corpus (needed to label the x-axis ticks).
Parameters:
- X : ndarray or DataFrame of shape n x m — a matrix of n instances with m features representing the corpus of frequency-vectorized documents.
- y : ndarray or DataFrame of shape n — labels for the documents for conditional frequency distribution.
Note: Text documents must be vectorized before calling fit().
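The counting step behind a frequency distribution can be sketched in plain Python (an illustrative sketch only, not the library's implementation: FreqDistVisualizer itself works on an already count-vectorized matrix and draws the result with matplotlib):

```python
# Sum each token's occurrences across a small corpus and rank the totals,
# mirroring what a frequency distribution reports for its top-n terms.
from collections import Counter

def frequency_distribution(corpus, n=50):
    """Return the n most frequent tokens across all documents."""
    counts = Counter()
    for document in corpus:
        counts.update(document.lower().split())
    return counts.most_common(n)

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]
top = frequency_distribution(corpus, n=3)  # "the" is the most frequent token
```

The same aggregation over a count-vectorized matrix is just a column sum, which is why the visualizer can accept the vectorizer's output directly.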
[–] 10 points
You still haven't told us what book it is.

[–] 5 points
It would be nice to know what it is and why you think it's garbage. Assuming this is undergrad PDE, I'd recommend:
1. Partial Differential Equations for Scientists and Engineers by Farlow
2. Notes here by Olver
3. Applied Partial Differential Equations by Haberman
1 is very easy to read. 2 is a nice mix of theory and application, and it's freely available. 3 is pretty comprehensive, but mostly harder to read than 1 and 2.

[–] 2 points
Looking for some intuition about the heat equation? Well, I've got just the thing for you: http://www.youtube.com/watch?v=b-LKPtGMdss

[–] 1 point
Here's a few ideas.
• Generate a picture using (complex) Fourier series/epicycles
• Numerical solution of coupled ODE describing two metronomes on a moving platform; simulate synchronization
• Analysis of a basic nonlinear PDE like u_t + u u_x = 0 (explain shocks and rarefaction)
• Run the heat equation on an image to blur it
• Make a mathematical model using ODE or PDE. Examples could be a lumpy string, coupled pendula, a hanging chain, or lots of other things.
There's also a website with a lot of turnkey projects for an ODE class (with some PDE projects, but not many): http://www.codee.org/ On the website is a cool book of projects from a JMM 2013 minicourse on teaching ODE/PDE: http://www.codee.org/jmm-2013-minicourse/jmm-2013-project-book/

[–] 1 point
Yes, he/she means T2, though I don't know why they don't just say Hausdorff.
And kernel(f) = {points (x,y) in E x E with f(x) = f(y)}. I'm sure OP enjoyed it, but this looks like a terrible way to do a first course in topology. Edit: fixed an F that should be an E

[–] 0 points
I'd recommend this brand new one by Peter Olver in the UTM series: http://www.springer.com/mathematics/dynamical+systems/book/978-3-319-02098-3 The notes that comprise this book have been around for a while, and I have personally taught from them and like them a lot! Another option that is a little less popular (but still good in my opinion) is Cooper's Intro to PDE with Matlab: http://amzn.com/B000VXM41Y

[–] 0 points
No, something like this does not exist in general, because in general the process of finding explicit solutions to differential equations is a crap shoot. There is a lot of general theory to deal with linear equations, and basically no general theory to deal with nonlinear ones. Edit: what I said is not specific to second order; it's specific to linear, homogeneous, constant-coefficient ODE of any order. A similar theory exists for non-constant-coefficient ODE, and one can deal with the nonhomogeneous case by applying operators to the equation. Although, if there's something specific (like from a textbook) that you're referring to, then the answer might be yes.

[–] 0 points
Because I don't agree with the pedagogical style, and in my experience many students can't see the forest for the trees when this kind of "formal" approach is taken. You might spend most of your time trying to parse definitions and notation, and the expository style is "Theorem, proof, repeat". He also uses a lot of category-heavy language and spends time on stuff I think should be left out of an intro course (like filters, for instance).
Keep in mind, everything I said applies only to the idea of an intro course. This would be a fine advanced exposition to prepare to study something like functional analysis (in fact the topics seem a bit geared toward that).

[–] 0 points
Here's how the theory would solve that. First, you would need to acknowledge that we can solve something like y' = ky, just a simple ODE. Now, the theory allows us to take a (linear, constant-coefficient) second-order ODE, convert it to a first-order system X' = AX where A is a 2x2 matrix, and "decouple it" by diagonalizing A, i.e. P^(-1) A P = D; then write Y = P^(-1) X, and X' = AX becomes Y' = DY. This is really just two separate ODE, y1' = d1 y1 and y2' = d2 y2, which we assumed you could solve in the first place. Then go back to your original solution by applying P. Edit: fixed an inverse

[–] 0 points
Sorry, I went to bed after that post. No, I'm not thinking pointwise, as HilbertSeries said. These two points are absolute max and min points, and serve to "squeeze" the whole function down with their values. It's easy to figure out what they are (I'll just say it since OP hasn't come back anyway). Set the derivative equal to zero: [; \frac{d}{dx} \frac{x}{1+n x^2} = \frac{1-nx^2}{(1+nx^2)^2} = 0 \implies x = \pm \sqrt{\frac{1}{n}} ;], plug that back into the original guy and get max and min values of [; \pm \frac{1}{2}\sqrt{\frac{1}{n}} ;]. Spend a moment justifying that these are absolute max and min by noticing that the function is zero at zero and goes to zero at the limits. In my opinion this is the simplest argument, because it doesn't use anything but ordinary calculus and an intuitive understanding of uniform convergence. If OP's teacher wants epsilons and N's, you can certainly insert them in the right places in this argument.
Edit: latex typo

[–] 0 points
Here's an idea: look at the graphs for some big values of n. You'll see some "peaks" on either side of zero. These are the points of the function that are farthest from zero, and if you can show that these go to zero as n goes to infinity, you're done. Now, how would you figure out where those peaks are....?

[–] 1 point
Thanks for fixing up the reddit! One last request though: can you fix the latex code on the sidebar, just for the OCD folks.

[–] 6 points
I'd probably refer to those as initial conditions rather than boundary conditions.

[–] 0 points
> does this mean the kernel has a direct correlation with orthogonality? or is this only in the case of a projection?

There's other types of projections that aren't orthogonal, which basically amount to different choices for the kernel.

> do you mean this in the sense that there could be multiple bases for a given space?

Yes.

[–] 1 point
> a basis of a space is atleast one "group" of unique and linearly independent vectors that can produce all possible vectors in that given space via linear combination.

Yes, except for the unique part. Nothing unique about a basis.

> an orthogonal projection from vector u onto vector v is a linear transformation that finds a new vector, b, such that b is orthogonal to v. b is also related to u somehow but i am unsure.

I don't really think this is the right way to think about an orthogonal projection. I'd think about it as projecting vectors onto a subspace, not a particular vector.
The space you project onto is (generally) the image, and the space you project "along" is exactly the space orthogonal to the image. In the case of your problem, you are projecting to the x-axis "along" the y-z plane, which is the space orthogonal to the x-axis. This means that the x-axis will be your image and the y-z plane will be your kernel.

[–] 0 points
Yes, these look like the right formulas, but this problem is not one that needs a formula like that. It is more a matter of knowing what orthogonality and projection mean geometrically, and what it means to find a basis. Did what I said in the last comment make sense?

[–] 0 points
Okay, thanks. Fixed the latex. Hopefully I'll have resolved your issue.

[–] 0 points
There's something strange about your integral. Let's forget about the simplification you did, and just use the definition on the interval 0 to 1/F. Then we have [; a_k = 2F \int_0^{1/F} |\cos(2 \pi F t) | \cos(2 \pi k F t) dt ;] Let k=1 in this integral, rather than trying to evaluate it in general. Then do a substitution of s = F t and we have [; a_1 = 2 \int_0^1 | \cos(2 \pi s) | \cos(2 \pi s) ds ;] This integral is zero. Double-check all the simplification that you've done, because your mistake lies somewhere in there. Edit: fixed latex

[–] 0 points
By writing down a vector that forms a basis for the space corresponding to the x-axis. The x-axis is the collection of all vectors of the form [k,0,0], which is the same as all multiples of (for instance) [1,0,0]. So [1,0,0] is a basis for the x-axis, and "represents" it in some sense. Does this sound familiar?
If not, you need to do some reviewing before taking on the idea of an orthogonal projection. I can recommend a few Strang videos to watch....

Okay, before OP goes chugging away in Mathematica I should point out a couple of things. First, a quick glance tells me the integrals should be independent of F (do a u substitution, then deal with those integrals). Second, you'll need some spaces in between variables in the syntax. Monkey around with something like this first:

    F = 1; k = 4;
    2 F Integrate[Abs[Cos[2 Pi F t]] Cos[2 Pi k F t], {t, -1/(2 F), 1/(2 F)}]

(Change the F and the k values around until you've convinced yourself that k=odd is zero and k=even is not, and that the value is independent of F. Then do a u sub and evaluate the integral!)
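The same cross-check can be done numerically. The sketch below (Python/NumPy, added here for illustration — not part of the original thread) approximates the coefficients with a midpoint rule over one period and shows the odd harmonics vanish while the even ones do not:

```python
import math
import numpy as np

# a_k = 2 * integral_0^1 |cos(2*pi*s)| * cos(2*pi*k*s) ds,
# approximated with a midpoint rule on a fine grid.
N = 200_000
s = (np.arange(N) + 0.5) / N
f = np.abs(np.cos(2 * np.pi * s))

def a(k):
    # mean over [0, 1] of the integrand is the integral itself
    return 2.0 * np.mean(f * np.cos(2 * np.pi * k * s))

# Odd k vanish; the k = 2 coefficient is 4/(3*pi) from the standard
# Fourier series of |cos|.
print(round(a(1), 8), round(a(3), 8), round(a(2), 5))
```

This agrees with the closed-form Fourier expansion of the rectified cosine, where only even harmonics survive.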
https://math.answers.com/questions/What_irrational_number_is_between_5_and_7
# What irrational number is between 5 and 7?

An irrational number between 5 and 7 is the square root of 35 (≈ 5.9160797831...). An irrational number is any real number that cannot be expressed as a simple fraction; its decimal expansion neither terminates nor repeats, so it goes on forever without a pattern.

There are infinitely many irrational numbers between any two distinct numbers, so it is impossible to list all of them. Another example is the square root of 29, which is also irrational and lies between 5 and 7: sqrt(29) = 5.38516480713...

For further reading on the square root of 29: http://www.wolframalpha.com/input/?i=sqrt%2829%29

Yet another example: 2π ≈ 6.283, which is irrational and lies between 5 and 7.
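The bounds in these examples are easy to sanity-check. A small Python sketch (added for illustration) confirms that each candidate lies strictly between 5 and 7:

```python
import math

candidates = {
    "sqrt(35)": math.sqrt(35),  # ≈ 5.9161
    "sqrt(29)": math.sqrt(29),  # ≈ 5.3852
    "2*pi": 2 * math.pi,        # ≈ 6.2832
}

for name, value in candidates.items():
    assert 5 < value < 7, name
    print(f"{name} = {value:.10f}")
```

Note this only confirms the bounds; irrationality itself is a separate argument (e.g. sqrt(35) is irrational because 35 is not a perfect square).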
http://www.mpwmd.net/asd/board/boardpacket/2006/20060622/07/item7.htm
ITEM: CONSENT CALENDAR

7. AUTHORIZE EXPENDITURE OF BUDGETED FUNDS TO AMEND CONTRACT WITH JONES & STOKES ASSOCIATES FOR PREPARATION OF EIR/EA ON PHASE 1 AQUIFER STORAGE AND RECOVERY PROJECT

Meeting Date: June 22, 2006
Budgeted: $75,000 (FY 06-07)
From: David A. Berger, General Manager
Program/Line Item No.: 1-2-1-A-3-b, Acct. No. 4-7860.04
Staff Contact: Henrietta Stern
Cost Estimate: $3,500
General Counsel Approval: N/A
Committee Recommendation: The Administrative Committee reviewed this item on June 12, 2006 and recommended approval.
CEQA Compliance: N/A

SUMMARY: The Board will consider authorizing staff to amend the existing contract with Jones & Stokes Associates (JSA) to add $3,500 to the current not-to-exceed limit of $146,720 to complete the combined Environmental Impact Report (EIR) and Environmental Assessment (EA) for the District's proposed Phase 1 Aquifer Storage and Recovery (ASR) Project. As shown in Exhibit 7-A, this relatively small cost overrun is due primarily to extra unanticipated work requested by MPWMD early in the EIR process and unexpected requests by the U.S. Army that were beyond the consultant's control.

RECOMMENDATION: Staff recommends that the Board authorize the General Manager to amend the current contract with JSA to add $3,500 to the current contract limit. The current contract was signed in May 2005, with a best estimate of total EIR costs based on information known at that time. Unexpected changes occurred during the course of the contract as described to the Board in the General Manager's weekly letters as well as monthly updates at Board meetings. The Administrative Committee reviewed this item at its June 12, 2006 meeting and voted 3-0 to recommend approval. If this item is adopted along with the Consent Calendar, staff will execute a contract amendment for a new not-to-exceed amount of $150,220.

IMPACT TO DISTRICT RESOURCES: The current fiscal year budget did not anticipate a $3,500 cost overrun for the EIR/EA.
However, due to a variety of factors, certification of the Final EIR/EA is scheduled for the July 22, 2006 meeting, in the new fiscal year. It is anticipated that the $3,500 will likely be expended in early July 2006. Line item 1-2-1-A-3-b ("ASR Environmental Review") in the proposed 2006-2007 budget allocates $75,000 for continued ASR investigations. Staff recommends that the $3,500 be included in the $75,000 amount, which is a rough estimate of environmental review needs in FY 2006-2007, especially if the District moves forward on a Phase 1 ASR Project. No other specific contract is presently contemplated to use those funds. The proposed budget will be considered by the Board on June 22, 2006 (see Item #19).

EXHIBITS
7-A  Jones & Stokes Associates memorandum dated June 5, 2006
http://www.lecaldare.com/ban-logic-macros-for-latex/
# BAN Logic macros for LaTeX

I uploaded a list of macros for adding BAN Logic symbols to LaTeX documents, because I hadn't found any premade list on the internet. Commands are named after the symbols (like \sees, \believes, and so on). The full list is on GitHub Gist.
https://electronics.stackexchange.com/questions/182196/connecting-speaker-and-mic-to-gsm-sim-900
# Connecting speaker and mic to GSM SIM 900

I have a GSM module but unfortunately it doesn't have an output for mic and speakers, and I need to integrate a mic and speaker. On my GSM module, there are male pins available labeled MIC_P and MIC_N for the microphone, and SPK_P and SPK_N for the speaker, but I don't know how to connect these to a speaker and microphone. Does anyone have any idea about this? Please share. I am attaching a pic of my GSM module:

• Quite often you can just connect MIC_P to the positive side of an electret microphone and MIC_N to the negative. Likewise with a small speaker or set of headphones. The module won't drive a large speaker. Check if there is a voltage across MIC_P and MIC_N. – pjc50 Jul 28 '15 at 17:44
• @pjc50 electret microphone... is it a normal microphone or something else? – anna carolina Jul 28 '15 at 17:46
• It's .. an electret microphone. It should be in the description in a parts catalog. – pjc50 Jul 28 '15 at 17:49

You can use an electret microphone. Connect its positive to MIC_P and its negative to MIC_N. Alternatively, for testing purposes you can also use your headphones: cut the headphone wire and you will find wires to connect to SPK_P, SPK_N, MIC_P and MIC_N.
https://lexique.netmath.ca/en/function/
# Function

Relation under which each value or element in a set of departure (or domain) is associated with one and only one value or element in a set of arrival (or image), according to a rule of correspondence that describes this association.

A function can be defined in extension or in intension. The pairs belonging to a given function can be represented in different ways, such as by an arrow graph or by a graph in a Cartesian plane.

• Example of extensional definition: f = {(a, 1), (b, 2), (c, 1), (d, 3), (e, 10)}.
• Example of intensional definition: f = {(x, y) ∈ $$\mathbb{R}$$ × $$\mathbb{R}$$ | y = 2x + 5}.

### Examples

Consider the function f : X → Y : x ↦ 2x, where X = {0, 1, 2, 3}:

• dom(f) = {0, 1, 2, 3}
• ima(f) = {0, 2, 4, 6}

Consider the function f : $$\mathbb{R}$$ → $$\mathbb{R}$$ : x ↦ 2x + 1:

• dom(f) = $$\mathbb{R}$$
• ima(f) = $$\mathbb{R}$$

### Notation

The function f from A to B under which every element x in A is made to correspond to y in B so that y = f(x) is noted as:

$$f : A → B : x ↦ y = f(x)$$

### Educational Note

It is important to distinguish between the different elements that characterize a function:

• The rule that defines it: a literal description or an equation;
• Its graph: an arrow graph or a Cartesian graph, for example;
• Its pairs, in the case of a binary relation.

That's why we don't say "consider the function $$y = 2x$$", but rather "consider the function defined by the rule (or the equation) $$y = 2x$$".
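The defining property above ("each domain element is associated with one and only one image element") can be checked mechanically for an extensional definition. A small Python sketch (added for illustration, with the letters written as strings):

```python
# The extensional example f = {(a,1), (b,2), (c,1), (d,3), (e,10)}.
f = {("a", 1), ("b", 2), ("c", 1), ("d", 3), ("e", 10)}

domain = {x for x, _ in f}
image = {y for _, y in f}

# A relation is a function iff no domain element occurs in two pairs,
# i.e. there are exactly as many pairs as distinct first components.
is_function = len(f) == len(domain)

print(sorted(domain))  # ['a', 'b', 'c', 'd', 'e']
print(sorted(image))   # [1, 2, 3, 10]
print(is_function)     # True
```

Note that two domain elements sharing the same image (here a and c both map to 1) does not violate the definition; only a domain element with two images would.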
https://yzhu.io/publication-type/1/
## [ECCV20] LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities

Understanding and interpreting human actions is a long-standing challenge and a critical indicator of perception in artificial intelligence. However, a few imperative components of daily human activities are largely missed in prior literature, …

## [IROS20] Human-Robot Interaction in a Shared Augmented Reality Workspace

We design and develop a new shared Augmented Reality (AR) workspace for Human-Robot Interaction (HRI), which establishes a bi-directional communication between human agents and robots. In a prototype system, the shared AR workspace enables a shared …

## [IROS20] Graph-based Hierarchical Knowledge Representation for Robot Task Transfer from Virtual to Physical World

[Best Paper Finalist] We study the hierarchical knowledge transfer problem using a cloth-folding task, wherein the agent is first given a set of human demonstrations in the virtual world using an Oculus Headset, and later transferred and validated …

## [ICRA20] Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs

Aiming to understand how human (false-)belief---a core socio-cognitive ability---would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human …

## [ICRA20] Congestion-aware Evacuation Routing using Augmented Reality Devices

We present a congestion-aware routing solution for indoor evacuation, which produces real-time individual-customized evacuation routes among multiple destinations while keeping tracks of all evacuees' locations. A population density map, obtained …

## [AAAI20] Theory-based Causal Transfer: Integrating Instance-level Induction and Abstract-level Structure Learning

Learning transferable knowledge across similar but different settings is a fundamental component of generalized intelligence.
In this paper, we approach the transfer learning challenge from a causal theory perspective. Our agent is endowed with two …

## [AAAI20] Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning

As a comprehensive indicator of mathematical thinking and intelligence, the number sense (Dehaene 2011) bridges the induction of symbolic concepts and the competence of problem-solving. To endow such a crucial cognitive ability to machine …

## [NeurIPS19] Learning Perceptual Inference by Contrasting

'Thinking in pictures,' [1] i.e., spatial-temporal reasoning, effortless and instantaneous for humans, is believed to be a significant ability to perform logical induction and a crucial factor in the intellectual history of technology development. …

## [NeurIPS19] PerspectiveNet: 3D Object Detection from a Single RGB Image via Perspective Points

Detecting 3D objects from a single RGB image is intrinsically ambiguous, thus requiring appropriate prior knowledge and intermediate representations as constraints to reduce the uncertainties and improve the consistencies between the 2D image plane …

## [ICCV19] Holistic++ Scene Understanding: Single-view 3D Holistic Scene Parsing and Human Pose Estimation with Human-Object Interaction and Physical Commonsense

We propose a new 3D holistic++ scene understanding problem, which jointly tackles two tasks from a single-view image: (i) holistic scene parsing and reconstruction---3D estimations of object bounding boxes, camera pose, and room layout, and (ii) 3D …
https://www.projecteuclid.org/euclid.rmjm/1411945674
## Rocky Mountain Journal of Mathematics

### Qualitative properties and standard estimates of solutions for some fourth order elliptic equations

#### Abstract

In this paper, we first make estimates for a class of fourth order elliptic equations in different domains and with different boundary conditions. Consequently, we study the qualitative properties of solutions with prescribed $Q$-curvature. Finally, we also obtain some radially symmetric results by using moving plane methods.

#### Article information

Source: Rocky Mountain J. Math., Volume 44, Number 3 (2014), 975-986.

Dates: First available in Project Euclid: 28 September 2014

https://projecteuclid.org/euclid.rmjm/1411945674

Digital Object Identifier: doi:10.1216/RMJ-2014-44-3-975

Mathematical Reviews number (MathSciNet): MR3264492

Zentralblatt MATH identifier: 1305.35044

#### Citation

Liu, Kaisheng; Pei, Ruichang. Qualitative properties and standard estimates of solutions for some fourth order elliptic equations. Rocky Mountain J. Math. 44 (2014), no. 3, 975--986. doi:10.1216/RMJ-2014-44-3-975. https://projecteuclid.org/euclid.rmjm/1411945674
https://dsp.stackexchange.com/questions/52726/correlation-of-a-signal
# Correlation of a signal

I have one sample of a signal. This sample is a vector of length 384. I need to calculate the correlation matrix for this signal, so I need many samples of the same signal. How can I generate these samples from the given sample using Matlab?

• can you be specific about what you mean by correlation matrix. – Stanley Pawlukiewicz Oct 20 '18 at 14:13
• I mean the autocorrelation matrix of the signal – Mohamed Aly Oct 20 '18 at 15:45
• hi: you just calculate the autocorrelations using the one sample, so no need for more than one. this is okay because of the ergodicity assumption that is usually made about time-series. also, note that the matrix is symmetric. – mark leeds Oct 20 '18 at 16:04

The Toeplitz matrix is used to compute correlation and convolution using matrix multiplication. Below is a graphic showing how to use a Toeplitz matrix specifically to perform convolution using matrix multiplication. A cross-correlation can also be done following the same process by simply reversing the order of the coefficients in the multiplied vector (in this case we would multiply by [h3 h2 h1] to perform correlation). This is clearer by observing the similarity and differences in the formulas for discrete time convolution and cross-correlation:

CONVOLUTION: $$r[n]*h[n] = \sum_{m=-\infty}^\infty r[m]h[n-m]$$

CROSS CORRELATION: $$\rho_{rh}[n] = \sum_{m=-\infty}^\infty r[m]h[m+n]$$

The code in the blue cell in the graphic is the implementation in either Matlab or Octave. This applies to auto-correlation or cross-correlation as an alternative to using the xcorr() function directly. The utility of doing this is clearer in the following post, where I used such an operation to help demonstrate the underlying operation of the LMS equalizer: Compensating Loudspeaker frequency response in an audio signal
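The Toeplitz construction described above is easy to verify numerically. The following sketch (Python/NumPy here, rather than the Matlab/Octave of the original post) builds the banded convolution matrix column by column and checks the matrix product against a direct convolution, then repeats the trick with reversed taps for correlation:

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0])        # filter taps h1, h2, h3
r = np.array([4.0, 5.0, 6.0, 7.0])   # input signal

# "Full" convolution matrix: column j holds h shifted down by j,
# so (H @ r)[n] = sum_j h[n - j] * r[j].
n_out = len(h) + len(r) - 1
H = np.zeros((n_out, len(r)))
for j in range(len(r)):
    H[j:j + len(h), j] = h

assert np.allclose(H @ r, np.convolve(h, r))

# Cross-correlation: same construction with the taps reversed.
Hc = np.zeros((n_out, len(r)))
for j in range(len(r)):
    Hc[j:j + len(h), j] = h[::-1]

assert np.allclose(Hc @ r, np.correlate(r, h, mode="full"))
print("Toeplitz matrix products match convolve/correlate")
```

The correlation check relies on the standard identity that correlating with h equals convolving with h reversed, which is exactly the [h3 h2 h1] remark in the answer.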
https://stats.stackexchange.com/questions/217710/how-to-optimise-an-automatic-arima-model-selection/217721
# How to optimise an automatic ARIMA-model selection?

I've been using statsmodels.tsa.arima_model to fit the residual component of some data. I've written an algorithm to automatically select the ARIMA model. Results are not quite as good as I had hoped, so I am looking for suggestions on how I could improve things. Please find below a description of what I've tried thus far.

I am dividing the data into a training and a forecasting/testing set. Up to this point, I've based my choice of the ARIMA model only on the training set. I allow the ARIMA parameters p and q to run from 0 to 7. This choice of range is arbitrary. The d parameter is allowed to be either 0 or 1. This function tries each of them and stores the results:

    def iterative_ARIMA_fit(series):
        """ Iterates within the allowed values of the p and q parameters
        Returns a dictionary with the successful fits.
        Keys correspond to models.
        """
        ARIMA_fit_results = {}
        for AR in ARrange:
            for MA in MArange:
                for Diff in Diffrange:
                    model = ARIMA(series, order=(AR, Diff, MA))
                    try:
                        results_ARIMA = model.fit(disp=-1, method='css')
                        RSS = sum((results_ARIMA.fittedvalues - series)**2)
                        if RSS > 0:
                            ARIMA_fit_results['%d-%d-%d' % (AR, Diff, MA)] = [RSS, results_ARIMA]
                    except:
                        continue
        return ARIMA_fit_results

Next, I look for the model that minimises RSS (total squared residual) using:

    def get_best_ARIMA_model_fit(series):
        """ Returns a list with the best ARIMA model
        The first element on the list contains the squared residual
        The second element on the list contains the fit results
        """
        if t.isstationary(series)[0]:
            ARIMA_fit_results = iterative_ARIMA_fit(series)
            best_ARIMA = min(ARIMA_fit_results, key=ARIMA_fit_results.get)
            return ARIMA_fit_results[best_ARIMA]

I had initially tried to use as much data as possible for training. To my surprise, despite using up to 6x more data, the fitting quality worsens.
I made a scan of the training set length, plotting for each the total squared residual and the total squared residual per unit of training set. This is shown in [1]. Here, clearly, the fit quickly deteriorates the more data I use, up to some point where the ARIMA-fit residual stabilizes. Also here, the best ARIMA model is consistently the highest-order available MA model, 0-0-7.

Summarising, my optimisation strategy yields an ARIMA fit that:

• (Fixed) Consistently (stubbornly) selects the MA model of the highest available order
• Does not improve with the size of the dataset

Do any of you know whether this is somehow expected? Has any of you tried something different?

Update

The rejection of all the AR models was due to an error in the first function. These models return statsmodels.tsa.arima_model.ARMAResults.fittedvalues with fewer entries than the original series and, hence, the RSS returns nan. This new function does the trick:

    def iterative_ARIMA_fit(series):
        """ Iterates within the allowed values of the p and q parameters
        Returns a dictionary with the successful fits.
        Keys correspond to models.
        """
        ARIMA_fit_results = {}
        for AR in ARrange:
            for MA in MArange:
                for Diff in Diffrange:
                    model = ARIMA(series, order=(AR, Diff, MA))
                    fit_is_available = False
                    results_ARIMA = None
                    try:
                        results_ARIMA = model.fit(disp=-1, method='css')
                        fit_is_available = True
                    except:
                        continue
                    if fit_is_available:
                        safe_RSS = get_safe_RSS(series, results_ARIMA.fittedvalues)
                        ARIMA_fit_results['%d-%d-%d' % (AR, Diff, MA)] = [safe_RSS, results_ARIMA]
        return ARIMA_fit_results

Plus this extra one:

    def get_safe_RSS(series, fitted_values):
        """ Checks for missing indices in the fitted values before calculating RSS
        Missing indices are assigned as np.nan and then filled using neighboring points
        """
        fitted_values_copy = fitted_values  # original fit is left untouched
        missing_index = list(set(series.index).difference(set(fitted_values_copy.index)))
        if missing_index:
            nan_series = pd.Series(index=pd.to_datetime(missing_index))
            fitted_values_copy = fitted_values_copy.append(nan_series)
            fitted_values_copy.sort_index(inplace=True)
            fitted_values_copy.fillna(method='bfill', inplace=True)  # fill holes
            fitted_values_copy.fillna(method='ffill', inplace=True)
        return sum((fitted_values_copy - series)**2)

The results are much better, albeit with some overfitting. Here [2] is the updated plot showing the residuals and model choices.

## 1 Answer

You are overfitting. If you have the choice between an MA($q$) and an MA($q+1$) model, the larger model with more degrees of freedom will almost always fit the data better and yield smaller residual sums of squares. (I would have expected the same to happen for the AR orders, but that this does not happen may be due to the fact that you are modeling residuals.)

ARIMA models are typically selected based on information criteria, like AIC, AICc, or BIC, after deciding on whether to difference or not based on a statistical test. The documentation for the auto.arima() function in the forecast package for R may give you some inspiration as to what to look at.
Edit: Cagdas Ozgenc correctly notes that increasing the MA order will not necessarily always reduce the residual sums of squares, because the conditional sum of squares estimation is not convex. To illustrate this effect, I simulated 10,000 white noise time series of 100 realizations each, fitted MA($q$) models for $q=0, \dots, 7$ and noted the RSS. Below are boxplots of $$\Delta(q) := \text{RSS}_{\text{MA}(q)}-\text{RSS}_{\text{MA}(q-1)}$$ against $q$. Out of the $10,000\times 7=70,000$ possible differences, 69,851 (99.8%) were negative, i.e., a larger model yielded smaller RSS, although there were zero moving average dynamics in the simulated series.

R code:

    rm(list=ls())
    library(forecast)

    n.series <- 1e4
    nn <- 100
    ma.max <- 7

    rss <- matrix(NA, nrow=n.series, ncol=ma.max+1)

    pb <- winProgressBar(max=n.series)
    for ( ii in 1:n.series ) {
        setWinProgressBar(pb, ii, paste(ii, "of", n.series))
        set.seed(ii)
        xx <- ts(rnorm(nn))
        for ( kk in 0:ma.max ) {
            model <- Arima(xx, order=c(0,0,kk), method="CSS")
            rss[ii,kk+1] <- sum(model$residuals^2)
        }
    }
    close(pb)

    differences <- apply(rss, 1, diff)
    boxplot(t(differences), main="RSS differences between MA(q) and MA(q-1) models", xlab="q")
    abline(h=0)
    sum(differences<0)/prod(dim(differences))

• Thanks @StephanKolassa. Found the reason for any AR order being rejected. For future reference, I've added a second version of the example code and figure. – Nicolas Gutierrez Jun 7 '16 at 15:00
• Why do you think a higher order model will "always" fit the data better? Always is a very strong word. The fitting procedure is not convex; only for that reason a complex model may end up fitting worse. If it were convex, then I am more inclined towards your argument, but it would be nice if you could provide a few pointers to such a proof. – Cagdas Ozgenc Jun 7 '16 at 15:30
• @CagdasOzgenc: thank you for your patience. I have edited the answer to include your point.
– Stephan Kolassa Jun 11 '16 at 9:30 • Even in the convex case your argument is probably true only for a set of hierarchical linear models (which is the case here with ARMA without gaps in lags). If the more complex model is not subsuming the less complex model, it will not work. Also for non-linear models even if the fitting is convex I am not sure what will happen. Do you know any proofs or results? – Cagdas Ozgenc Jun 13 '16 at 10:57 • That may well be, but I'm only discussing nested models here, which is the situation the OP is in. No, I don't have any general proofs or results about time series overfitting "in general". – Stephan Kolassa Jun 14 '16 at 6:33
http://codeforces.com/problemset/problem/922/D
D. Robot Vacuum Cleaner

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

Pushok the dog has been chasing Imp for a few hours already. Fortunately, Imp knows that Pushok is afraid of a robot vacuum cleaner. While moving, the robot generates a string t consisting of letters 's' and 'h', that produces a lot of noise. We define noise of string t as the number of occurrences of string "sh" as a subsequence in it, in other words, the number of such pairs (i, j), that i < j, t_i = 's' and t_j = 'h'.

The robot is off at the moment. Imp knows that it has a sequence of strings t_i in its memory, and he can arbitrarily change their order. When the robot is started, it generates the string t as a concatenation of these strings in the given order. The noise of the resulting string equals the noise of this concatenation.

Help Imp to find the maximum noise he can achieve by changing the order of the strings.

Input

The first line contains a single integer n (1 ≤ n ≤ 10^5) — the number of strings in robot's memory. Next n lines contain the strings t_1, t_2, ..., t_n, one per line. It is guaranteed that the strings are non-empty, contain only English letters 's' and 'h' and their total length does not exceed 10^5.

Output

Print a single integer — the maximum possible noise Imp can achieve by changing the order of the strings.

Examples

Input
4
ssh
hs
s
hhhs

Output
18

Input
2
h
s

Output
1

Note

The optimal concatenation in the first sample is ssshhshhhs.
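A sketch of one standard approach to this problem (my illustration, not part of the official statement): count the noise with a single left-to-right pass, and order the strings with an exchange-argument comparator — string a goes before string b when s_a·h_b > s_b·h_a, where s_x and h_x count the 's' and 'h' letters in x.

```python
from functools import cmp_to_key

def noise(t):
    # count pairs (i, j) with i < j, t[i] == 's', t[j] == 'h'
    s_seen, total = 0, 0
    for c in t:
        if c == 's':
            s_seen += 1
        else:  # c == 'h'
            total += s_seen
    return total

def max_noise(strings):
    # exchange argument: a before b is at least as good when sa*hb >= sb*ha
    def cmp(a, b):
        sa, ha = a.count('s'), a.count('h')
        sb, hb = b.count('s'), b.count('h')
        return sb * ha - sa * hb  # negative -> a comes first
    return noise(''.join(sorted(strings, key=cmp_to_key(cmp))))

print(max_noise(['ssh', 'hs', 's', 'hhhs']))  # 18, via "ssshhshhhs"
print(max_noise(['h', 's']))                  # 1
```

The comparator reproduces the sample answers: the first sample sorts to s, ssh, hs, hhhs, giving the optimal concatenation ssshhshhhs from the note.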
http://www.j.sinap.ac.cn/hjs/EN/10.11889/j.0253-3219.2014.hjs.37.100513
Nuclear Techniques ›› 2014, Vol. 37 ›› Issue (10): 100513-100513.

• NUCLEAR PHYSICS, INTERDISCIPLINARY RESEARCH •

Spin-orbit coupling in intermediate-energy heavy-ion collisions

XU Jun 1, XIA Yin 1, LI Baoan 2, SHEN Wenqing 1
1. Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Jiading Campus, Shanghai 201800, China
2. Texas A&M University-Commerce, Commerce 75429-3011, USA

• Received: 2014-05-16; Revised: 2014-07-21; Online: 2014-10-10; Published: 2014-10-16

Abstract: Background: Nuclear spin-orbit interaction is important in understanding the magic numbers and shell structure of finite nuclei. Although it has been extensively studied in nuclear structure, its effect in nuclear reactions was long overlooked. Purpose: To be consistent, the same nuclear force should be used in studies of both nuclear structure and nuclear reactions. Heavy-ion collisions provide more freedom to study the detailed properties of the in-medium nuclear spin-orbit interaction. Methods: In this proceeding, we summarize our recent studies on introducing the nucleon spin degree of freedom and spin-related mean-field potentials from the nuclear spin-orbit interaction into the IBUU (Isospin-dependent Boltzmann-Uehling-Uhlenbeck) transport model. Results: The spin differential transverse flow is sensitive to the strength of the spin-orbit coupling and serves as a useful probe of the in-medium spin-orbit interaction. The difference between the spin differential transverse flows of neutrons and protons can be used to study the isospin dependence of the spin-orbit coupling, while the spin differential flow of nucleons with high transverse momentum at different beam energies can be used to extract information about the density dependence of the nuclear spin-orbit interaction. Conclusion: With more spin-related probes proposed in the near future, intermediate-energy heavy-ion collisions may become a useful method for studying the in-medium nuclear spin-orbit interaction.
https://demo.formulasearchengine.com/wiki/Gas_laws
# Gas laws

The early gas laws were developed at the end of the 18th century, when scientists began to realize that relationships between the pressure, volume and temperature of a sample of gas could be obtained which would hold for all gases. Gases behave in a similar way over a wide variety of conditions because to a good approximation they all have molecules which are widely spaced, and nowadays the equation of state for an ideal gas is derived from kinetic theory. The earlier gas laws are now considered as special cases of the ideal gas equation, with one or more of the variables held constant.

## Boyle's law

Boyle's law shows that, at constant temperature, the product of the pressure and volume of a given mass of an ideal gas, assuming a closed system, is always constant. It was published in 1662. It can be determined experimentally using a pressure gauge and a variable volume container. It can also be derived from the kinetic theory of gases: if a container, with a fixed number of molecules inside, is reduced in volume, more molecules will hit a given area of the sides of the container per unit time, causing a greater pressure. As a mathematical equation, Boyle's law is written as either:

${\displaystyle P\propto {\frac {1}{V}}}$

${\displaystyle PV=k_{1}}$

${\displaystyle P_{1}V_{1}=P_{2}V_{2}\,}$

where P is the pressure (Pa), V the volume (m3) of a gas, and k1 (measured in joules) is the constant from this equation—it is not the same as the constants from the other equations below.

## Charles' law

Charles' law, or the law of volumes, was found in 1787 by Jacques Charles. It says that, for a given mass of an ideal gas at constant pressure, the volume is directly proportional to its absolute temperature, assuming a closed system.
As a mathematical equation, Charles' law is written as either:

${\displaystyle V\propto T\,}$

${\displaystyle V/T=k_{2}}$

${\displaystyle V_{1}/T_{1}=V_{2}/T_{2}}$

where V is the volume (m3) of a gas, T is the temperature (measured in Kelvin) and k2 is the constant from this equation—it is not the same as the constants from the other equations below.

## Gay-Lussac's law

Gay-Lussac's law, or the pressure law, was found by Joseph Louis Gay-Lussac in 1809. It states that, for a given mass and constant volume of an ideal gas, the pressure exerted on the sides of its container is proportional to its temperature. As a mathematical equation, Gay-Lussac's law is written as either:

${\displaystyle P\propto T\,}$

${\displaystyle P/T=k_{3}}$

${\displaystyle P_{1}/T_{1}=P_{2}/T_{2}}$

where P is the pressure (Pa), T is the temperature (measured in Kelvin), and k3 is the constant from this equation—it is not the same as the constants from the other equations above.

## Avogadro's law

Avogadro's law states that the volume occupied by an ideal gas is proportional to the number of moles present in the container. This gives rise to the molar volume of a gas, which at STP is 22.4 dm3 (or litres). The relation is given by

${\displaystyle {\frac {V_{1}}{n_{1}}}={\frac {V_{2}}{n_{2}}}\,}$

where n is equal to the number of moles of gas (the number of molecules divided by Avogadro's number).
## Combined and ideal gas laws

The combined gas law or general gas equation is formed by the combination of the three laws, and shows the relationship between the pressure, volume, and temperature for a fixed mass of gas:

${\displaystyle pV=k_{5}T\,}$

This can also be written as:

${\displaystyle \qquad {\frac {p_{1}V_{1}}{T_{1}}}={\frac {p_{2}V_{2}}{T_{2}}}}$

With the addition of Avogadro's law, the combined gas law develops into the ideal gas law:

${\displaystyle pV=nRT\,}$

where p is pressure, V is volume, n is the number of moles, R is the universal gas constant and T is temperature (K); the constant, now named R, is the gas constant with a value of 0.08206 (atm∙L)/(mol∙K). An equivalent formulation of this law is:

${\displaystyle pV=kNT\,}$

where p is the absolute pressure, V is the volume, N is the number of gas molecules, k is the Boltzmann constant (1.381×10−23 J·K−1 in SI units) and T is the temperature (K).

These equations are exact only for an ideal gas, which neglects various intermolecular effects (see real gas). However, the ideal gas law is a good approximation for most gases under moderate pressure and temperature.

This law has the following important consequences:

1. If temperature and pressure are kept constant, then the volume of the gas is directly proportional to the number of molecules of gas.
2. If the temperature and volume remain constant, then the pressure of the gas changes in direct proportion to the number of molecules of gas present.
3. If the number of gas molecules and the temperature remain constant, then the pressure is inversely proportional to the volume.
4. If the temperature changes and the number of gas molecules is kept constant, then either pressure or volume (or both) will change in direct proportion to the temperature.

## Other gas laws

• Graham's law states that the rate at which gas molecules diffuse is inversely proportional to the square root of their density. Combined with Avogadro's law (i.e.
since equal volumes have an equal number of molecules) this is the same as being inversely proportional to the root of the molecular weight.

• Dalton's law of partial pressures states that the pressure of a mixture of gases is the sum of the partial pressures of the component gases:

${\displaystyle P_{total}=P_{1}+P_{2}+P_{3}+...+P_{n}\equiv \sum _{i=1}^{n}P_{i}\,}$

OR

${\displaystyle P_{\mathrm {total} }=P_{\mathrm {gas} }+P_{\mathrm {H_{2}O} }\,}$

where PTotal is the total pressure of the atmosphere, PGas is the pressure of the gas mixture in the atmosphere, and PH2O is the water pressure at that temperature.

• Henry's law states that, at constant temperature, the amount of a given gas dissolved in a given type and volume of liquid is directly proportional to the partial pressure of that gas in equilibrium with that liquid:

${\displaystyle p=k_{\rm {H}}\,c}$
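As a numerical sanity check of the ideal gas law and the molar volume quoted earlier — a minimal sketch in SI units (the function name is my own):

```python
# Ideal gas law pV = nRT, solved for volume, in SI units.
R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_volume(n, T, p):
    """Volume (m^3) of n moles of an ideal gas at temperature T (K) and pressure p (Pa)."""
    return n * R * T / p

# One mole at STP (273.15 K, 101325 Pa) occupies roughly 22.4 L = 0.0224 m^3,
# matching the molar volume quoted in the Avogadro's law section:
V = ideal_gas_volume(1.0, 273.15, 101325)
print(round(V * 1000, 1))  # litres -> 22.4
```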
https://www.nature.com/articles/s41467-018-03133-y
# Identifying noncoding risk variants using disease-relevant gene regulatory networks

## Abstract

Identifying noncoding risk variants remains a challenging task. Because noncoding variants exert their effects in the context of a gene regulatory network (GRN), we hypothesize that explicit use of disease-relevant GRNs can significantly improve the inference accuracy of noncoding risk variants. We describe Annotation of Regulatory Variants using Integrated Networks (ARVIN), a general computational framework for predicting causal noncoding variants. It employs a set of novel regulatory network-based features, combined with sequence-based features, to infer noncoding risk variants. Using known causal variants in gene promoters and enhancers in a number of diseases, we show ARVIN outperforms state-of-the-art methods that use sequence-based features alone. Additional experimental validation using reporter assays further demonstrates the accuracy of ARVIN. Application of ARVIN to seven autoimmune diseases provides a holistic view of the gene subnetwork perturbed by the combinatorial action of the entire set of risk noncoding mutations.

## Introduction

Genome-wide association studies (GWASs) and whole-genome sequencing have revealed thousands of sequence variants associated with different human diseases/traits1,2,3. The vast majority of identified variants are located outside of coding sequences, making direct interpretation of their functional effects challenging.
For the small number of cases where the causal variants have been experimentally validated, they have been shown to perturb binding sites of transcription factors, local chromatin structure or co-factor recruitment, ultimately resulting in changes of transcriptional output of the target gene(s)4,5,6. Among the different classes of noncoding regulatory sequences, transcriptional enhancers represent the primary basis for differential gene expression, with many human diseases resulting from altered enhancer action5,7,8. Numerous recent studies have uncovered a large number of putative enhancers in a diverse array of human cells and tissues9,10,11. Overlapping the catalog of genetic variants with known enhancers has revealed an enrichment of disease-associated variants in tissue-specific enhancers12,13, emphasizing the importance of knowledge about tissue-specific cis-regulatory sequences for identifying causal variants. In the following, we term single nucleotide polymorphisms (SNPs) located in enhancers eSNPs. A number of computational methods have been developed to predict causal noncoding variants14,15,16,17,18,19,20. Conceptually, these methods operate by annotating genetic variants using a catalog of cis-regulatory sequences (based on chromatin accessibility, transcription factor binding, epigenetic modification signatures). Although biologically intuitive, such an approach does not take into account the complex interactions of the underlying gene regulatory network (GRN) in which a causal noncoding variant exerts its effect, namely, interactions among transcription factors and their target genes as well as interactions among target genes in the same pathway. Molecular networks have been explicitly used to improve the inference accuracy of causal coding variants21,22,23,24. This potential has not been examined for noncoding variants. 
To address these shortcomings, we postulate that (1) the impact of causal eSNPs on gene expression is transmitted through the GRNs in the cell/tissue types that are relevant to the studied trait; and (2) the genes affected by the full set of causal eSNPs for a trait are organized in a limited number of pathways. We test this hypothesis by developing a general computational framework for identifying causal noncoding variants that affect a specific disease/trait. Linkage disequilibrium (LD) presents another challenge for finding causal noncoding variants. By casting the causal inference problem into a subnetwork identification problem, our method evaluates both GWAS lead SNPs and linked SNPs simultaneously, thus increasing the power of the inference. Further, our network-based approach naturally provides a pathway context for understanding the predicted causal eSNPs. We characterize the performance of our method using known risk mutations in gene promoters in 20 diseases and gene enhancers in 10 diseases. We further validate randomly selected predictions using a luciferase reporter assay. By applying our method to seven autoimmune diseases, we obtain a systems view of the entire set of risk eSNPs in a given disease and, equally important, the subnetwork that is perturbed by the set of risk eSNPs.

## Results

### Construction of disease-relevant gene regulatory network

A number of previous studies have reported enrichment of GWAS SNPs in regulatory DNA sequences specific to disease-relevant tissues or cell types12,13, emphasizing the importance of knowledge about tissue-specific regulatory sequences for identifying risk variants. Additionally, gene−gene and protein−protein interaction networks have been used to identify causal coding variants21,25,26. Because the effects of non-coding variants are transcriptionally integrated, a network-based approach should be an effective strategy to identify causal noncoding variants.
To date, tissue-relevant GRN has not been used explicitly to prioritize noncoding variants. As a first step towards this goal, we sought to construct an integrative GRN for each disease-relevant cell/tissue type. We integrated epigenomic, transcriptomic and functional gene−gene interactions to construct the network. Our integrative network has two parts. The first part involves interactions between enhancers and target genes (EP edges), whose prediction is a major challenge in constructing GRNs in general. By using our recently developed algorithm, IM-PET (Fig. 1a)27, we constructed 23 cell/tissue-specific enhancer−promoter (EP) networks that are relevant to the set of 16 diseases in this study (Supplementary Table 1). We evaluated the accuracy of IM-PET using a compendium of Hi-C and ChIA-PET chromatin interaction data from nine cell types (GM12878, K562, IMR90, HMEC, NHEK, HUVEC, Hela, CD34+ cells, and CD4+ T cells, Supplementary Table 2). The overall area under the precision-recall curve (auPRC) values were 0.89 and 0.84 using Hi-C and ChIA-PET interactions as the gold standard, respectively (Fig. 1b), suggesting high quality of the EP predictions by IM-PET. The second part of the integrative network consists of functional interactions between target genes. For this, we used a probabilistic functional gene interaction network inferred by integrating multiple lines of evidence (i.e. HumanNet, see Methods)21. Interactions in the backbone HumanNet are not disease-specific; to add disease-specific information for the functional gene interaction network, we add differential gene expression information from case vs. control comparisons in disease-relevant cells/tissues. The resulting integrative GRN contains two types of edges, EP edges representing enhancer−promoter interactions and FI edges representing functional gene−gene interactions (Fig. 1c). The final product is an edge- and node-weighted, disease-relevant GRN, which is used for predicting risk noncoding variants.
See Methods for additional details about the network construction.

### ARVIN combines sequence-based and network-based features

We hypothesized that disease-relevant GRN could improve the inference accuracy of noncoding risk variants. To this end, we examined a number of network-based features to see if they can discriminate true risk SNPs from negative control SNPs. We obtained 233 gold-standard risk SNPs located in gene promoters from the Human Gene Mutation Database (HGMD)28. This set of SNPs is associated with 20 different diseases (Supplementary Data 1). We assigned a WEP value of 1 to edges between an SNP and the genes whose promoter harbors the SNP, since the gene promoters are annotated with very high confidence in the Ensembl database. We used gene expression data of case and control samples (Supplementary Table 3) to compute the gene weight, WDE. Next, we used the constructed disease-relevant GRNs to compute the following network-based features: module score, weighted node degree, betweenness centrality, closeness centrality, and PageRank centrality (see Methods for details). These features are designed to evaluate the topological importance of the direct target gene of a promoter or enhancer SNP as well as the local network neighborhood of the target gene. Our hypothesis is that target genes with large topological importance in the GRN might be rate-limiting genes for disease pathogenesis. We found that the set of network features can indeed distinguish true risk SNPs from control SNPs (Fig. 2a). Next, we compared the discriminative power of disease-specific and non-disease-specific networks. We found that values of network features are less separated between risk and control SNPs when using non-disease-specific networks (Supplementary Fig. 1), further supporting the utility of disease-specific networks for identifying risk SNPs.
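Most of the network-based features named above are standard graph centralities and can be illustrated on a toy graph (my sketch with networkx; the edge weights are invented and the gene names, borrowed from examples later in the paper, are purely illustrative — this is not the actual GRN, and the method-specific module score is omitted):

```python
import networkx as nx

# Toy stand-in for a disease-relevant GRN (hypothetical weights)
G = nx.Graph()
G.add_weighted_edges_from([
    ("IRF1", "E2F1", 0.9),
    ("IRF1", "PFKFB3", 0.4),
    ("E2F1", "HNF4A", 0.7),
    ("PFKFB3", "HNF4A", 0.2),
])

weighted_degree = dict(G.degree(weight="weight"))  # weighted node degree
betweenness = nx.betweenness_centrality(G)         # betweenness centrality
closeness = nx.closeness_centrality(G)             # closeness centrality
pagerank = nx.pagerank(G)                          # PageRank centrality
```

In ARVIN these quantities are computed for the direct target gene of each candidate SNP, so a SNP whose target sits at a topologically important position in the GRN receives higher feature values.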
To further test the discriminative power of the network-based features, we built a random forest (RF) classifier using these features and sequence-based features used by two state-of-the-art methods, genome-wide annotation of variants (GWAVA)16 and FunSeq220. We evaluated the relative importance of all features (six from this study and 182 from GWAVA and FunSeq2 combined) by using a recursive feature elimination (RFE) approach. Applying the RFE procedure yielded a set of 35 most discriminative features based on classification error (Supplementary Figs 2 and 3). Strikingly, all network-based features were ranked in the top ten (Supplementary Data 2), suggesting that network-based features are independently discriminative from the sequence-based features. On the other hand, the fact that 35 features were selected suggests that network-based features and sequence-based features are complementary to each other. We examined potential interactions among selected features and found significant association between network-based features and sequence-based features, further supporting the notion that these two types of features are complementary (Supplementary Fig. 4). Based on this finding, we developed the Annotation of Regulatory Variants using Integrated Networks (ARVIN) algorithm by combining network features with sequence features (Fig. 2b). We evaluated the classification accuracy using fivefold cross-validation and the set of 233 gold-standard risk SNPs in gene promoters. ARVIN achieved an area under the ROC curve (auROC) of 0.96, significantly larger than those of GWAVA (auROC = 0.85, P = 1.7×10−12) and FunSeq2 (auROC = 0.82, P = 4.2×10−15) (Fig. 2c). Many genes are regulated by distal enhancers. Compared to promoter variants, risk variants located in distal enhancers are more challenging to study due to the difficulty of assigning enhancer targets and the existence of multiple enhancers targeting the same gene.
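The classifier-plus-RFE workflow described above can be sketched schematically (my illustration with scikit-learn on synthetic data — the feature matrix, sample counts, and parameters are placeholders, not the paper's 233 gold-standard SNPs or 188 features):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: rows = candidate SNPs, columns = features
X, y = make_classification(n_samples=400, n_features=40, n_informative=8,
                           random_state=0)

# Recursive feature elimination driven by random forest feature importances
rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFE(rf, n_features_to_select=10).fit(X, y)
X_sel = X[:, selector.support_]

# Fivefold cross-validated ROC AUC on the selected feature subset
auroc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                        X_sel, y, cv=5, scoring="roc_auc").mean()
```

RFE repeatedly fits the forest and drops the least important feature, which mirrors the paper's selection of the 35 most discriminative features before the final cross-validated evaluation.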
We further tested the performance of ARVIN using risk SNPs located in enhancers. We curated a set of 15 experimentally validated risk enhancer SNPs implicated in ten complex diseases, including autoimmune, heart, lung, psychiatric diseases, obesity, and cancer (Supplementary Table 4). Compared to promoter variants, the set of gold-standard enhancer variants is too small for ROC curve analysis to be meaningful. Therefore, for each risk SNP, we asked how it is ranked by a method among all enhancer SNPs in the same LD block as the risk SNP. The number of linked eSNPs ranges from 1 to 168 with an average of 28 (Supplementary Table 6), highlighting the difficulty of identifying true risk SNPs. Overall, both ARVIN and ARVIN with network features alone (ARVIN-N) outperformed GWAVA and FunSeq2. The median percentile rankings of the set of known risk eSNPs were 1, 5, 47, and 45% for ARVIN-N, ARVIN, GWAVA, and FunSeq2, respectively (vertical lines, Fig. 3). In summary, using gold-standard risk SNPs in both promoters and enhancers, we demonstrate that incorporation of network features can significantly improve the accuracy of finding risk enhancer SNPs.

### Application of ARVIN to autoimmune diseases

We applied ARVIN to identify risk eSNPs associated with seven autoimmune diseases (Crohn's disease, multiple sclerosis, psoriasis, rheumatoid arthritis, systemic lupus erythematosus, type 1 diabetes, and ulcerative colitis). We first obtained lead SNPs associated with those diseases from the National Human Genome Research Institute (NHGRI) GWAS Catalog29. On average, there are 123 GWAS lead SNPs per disease (Supplementary Table 5). As candidate SNPs, we considered both lead SNPs and SNPs that are in the same LD block with the lead SNPs. By overlapping SNPs with enhancers from disease-relevant cell/tissue types, we obtained the list of eSNPs as the final input to ARVIN. On average, there are 66 eSNPs for each disease-associated locus tagged by a lead GWAS SNP.
Using an ARVIN cutoff that yields an optimal set of predictions (Supplementary Methods, Supplementary Fig. 5), on average, we predicted 160 risk eSNPs for each autoimmune disease (Fig. 4a). We evaluated the predictions using eQTLs identified in disease-relevant tissues by the GTEx consortium and by Westra et al.30,31 (Supplementary Table 6). For six out of seven autoimmune diseases, the set of risk eSNPs predicted by ARVIN has significant overlap with eQTLs identified in relevant tissues. In contrast, only predictions by FunSeq2 in one disease (rheumatoid arthritis) have significant overlap with eQTL data (Fig. 4a). To experimentally test the predicted risk eSNPs, we randomly selected four predicted risk eSNPs with ARVIN scores in the top, middle, and bottom thirds of the score distribution, respectively. As a comparison, we also randomly chose four eSNPs that are negative predictions by ARVIN (Supplementary Table 7). We first used a dual luciferase reporter assay to test the activity of the enhancers in CD4+ T cells. All 16 enhancers (12 containing predicted risk eSNPs and 4 containing negative predictions) significantly enhance luciferase activity in comparison to the two negative control sequences (Fig. 4b). Next, we compared the enhancer constructs that contain alternative alleles of the predicted eSNPs (Supplementary Table 8). Among the 12 predicted risk eSNPs, 11 show differential enhancer activities (P < 0.05) with different alleles of the SNPs. In contrast, none of the negative predictions show significant activity difference between the two alleles of the SNP (Fig. 4c).

### Many genes are targeted by multiple risk noncoding variants

Increasing evidence suggests that many genes are regulated by multiple enhancers during normal and disease development27,32,33,34,35. This phenomenon suggests that mutations in multiple enhancers of the same gene could collectively contribute to the deregulation of the gene during pathogenesis.
Consistent with this hypothesis, among the seven autoimmune diseases, we found that 32% of genes are affected by multiple predicted eSNPs that are located in multiple enhancers targeting these genes (Fig. 5a). We tested whether two risk eSNPs that target the same gene increase disease risk compared to each eSNP alone. We used GWAS data generated by the Wellcome Trust Case Control Consortium36,37 for six autoimmune diseases, including Crohn's disease, multiple sclerosis, psoriasis, rheumatoid arthritis, type 1 diabetes, and ulcerative colitis. For all risk eSNP pairs targeting the same gene, we assessed their combined effect on disease risk using a permutation-based procedure38 (see Methods). At P < 0.05, we found that the percentage of eSNP pairs with increased risk ranges from 19% for type 1 diabetes to 57% for multiple sclerosis, with an overall percentage of 44% across the six diseases (Fig. 5b). Besides risk eSNPs, we further investigated the genes targeted by multiple risk eSNPs. We found several unique features of these genes. First, they tend to have higher network centrality measures (Fig. 5c). Second, their expression levels are more perturbed in disease samples compared to control samples (Fig. 5d). Third, a higher percentage of the regulating risk eSNPs overlap with eQTLs (Fig. 5e). Finally, they are enriched for more Gene Ontology (GO) terms for direct immune responses (Fig. 5f). Taken together, these unique properties of multi-targeted genes suggest they might be rate-limiting genes in disease pathogenesis. Figure 6a, b shows two example genes that are targeted by multiple risk eSNPs. IRF1 plays a critical role in regulatory T-cell function and autoimmunity39. It is targeted by two enhancers based on both IM-PET prediction and experimental Capture-Hi-C data in CD4+ T cells40. The two eSNPs (rs4143335 and rs2706356) significantly disrupt the binding of HNF4A and E2F1, respectively.
Both E2F141 and POU2F142 have been shown to be important transcriptional regulators of CD4+ T-cell function. When we determined the clinical risk (odds ratio) for Crohn’s disease based on the genotypes of both variants, we found that the odds ratio increases to 1.22 for individuals homozygous for the risk allele (T) of rs2706356 and homozygous for the C allele of rs4143335 (Fig. 5g, Supplementary Fig. 6). The other example involves the gene PFKFB3, which encodes a rate-limiting glycolytic enzyme. Deficiency of PFKFB3 has been linked to reprogrammed metabolism in T cells from rheumatoid arthritis patients43,44. The two risk eSNPs (rs77950884 and rs17153333) significantly disrupt the binding of HNF4A and E2F1, respectively. Interestingly, in both examples, the lead GWAS SNPs are not predicted to be the risk SNPs, emphasizing the challenge of finding risk SNPs in the presence of genetic linkage.

### Most perturbed subnetwork by all risk eSNPs in a disease

It has been suggested that the effects of multiple low-penetrance enhancer variants can be amplified through coordinated dysregulation of the entire GRN of a key disease gene, as illustrated in an elegant study by Chatterjee and colleagues35. To obtain a systems-level view of the pathways collectively perturbed by all risk eSNPs in a disease, we used the Prize Collecting Steiner Tree (PCST) algorithm to identify a connected subnetwork composed of all risk eSNPs and the genes bridging the risk eSNPs in the network. By algorithmic design, the resulting subnetwork is maximized for nodes and edges with large weights; in other words, it comprises downstream genes with high levels of differential expression and strong functional interactions. Therefore, the effects of the risk eSNPs are most likely propagated via such a subnetwork. For each disease, we compared the subnetworks downstream of risk eSNPs predicted by ARVIN, GWAVA, and FunSeq2.
We found that subnetworks downstream of ARVIN-predicted eSNPs have more enriched GO terms related to immune cell functions (Fig. 7a), further suggesting that the predicted upstream eSNPs are more likely to be causal. Figure 7b shows an example subnetwork for rheumatoid arthritis. Such a network view reveals two interesting features of the perturbations caused by risk eSNPs. First, multiple members of a pathway can be targeted by different risk eSNPs. For instance, the subnetwork contains ten genes that are involved in RhoA-mediated small GTPase signaling (highlighted in a square). Six of the ten genes are individually targeted by different risk eSNPs. Rho kinase signaling has been shown to play a critical role in the synovial inflammation of rheumatoid arthritis45,46. Second, many genes targeted by risk eSNPs are not located in disease-associated loci. This is consistent with the notion of long-range interaction between enhancers and their target genes. The most perturbed subnetworks for the other diseases in this study are shown in Supplementary Fig. 7.

## Discussion

A number of methods have been developed for inferring noncoding risk variants. Although they differ in the computational methodology used, conceptually, all existing methods use sequence and chromatin features around a candidate variant to make a prediction. Transcriptional regulation occurs in a complex network of regulatory interactions between transcription factors and target genes. To better understand noncoding risk mutations, one should therefore examine them in the context of the regulatory network of disease-relevant cell type(s). To our knowledge, ARVIN is the first method that explicitly uses a disease-relevant GRN for finding noncoding risk variants.
Disease-specific transcriptomic and epigenomic data are integrated with a probabilistic functional gene interaction network to generate a weighted GRN, which serves to provide disease-specific information and reduce noise at the same time. Using gold-standard noncoding variants, we demonstrate that genes targeted by causal SNPs exhibit characteristic network features compared to genes targeted by non-causal SNPs. The network-based features are complementary to sequence-based features, and combining both types of features achieves the highest accuracy in predicting causal noncoding mutations. In support of the utility of disease-specific networks for finding noncoding risk variants, we found that both the separation of feature values and the classification accuracy decrease when non-disease-specific networks are used in ARVIN (Supplementary Fig. 1). Although we focused on common germline variants in this study, ARVIN is also applicable to somatic and rare variants because the same mechanisms of transcriptional regulation are affected by the different types of mutations. A recent study demonstrated that multiple low-penetrance enhancer variants can cause significant dysregulation of an entire GRN by targeting a key disease gene35. Along this line, our systematic analysis of seven autoimmune diseases revealed the abundance of combinatorial risk variants that affect the same gene. This result is supported by the observation that the promoters of many genes are physically contacted by multiple enhancers33,34,47. Our results suggest that genes affected by combinatorial risk variants tend to be more centrally located in the GRN, show larger expression changes in disease, and directly mediate immune responses. Taken together, these unique features strongly suggest that genes affected by multiple risk eSNPs may play a rate-limiting role in disease pathogenesis.
Beyond studying individual risk eSNPs, it would be tremendously useful to have a holistic view of the subnetwork jointly perturbed by all risk eSNPs in a disease. To this end, we used the PCST algorithm to identify the core subnetwork that is most perturbed by all risk eSNPs in a disease. Knowledge about the perturbed subnetwork can be used to prioritize genes and variants for follow-up studies. Furthermore, comparative analysis of the perturbed subnetworks in different diseases may lead to novel insights into disease pathogenesis and suggest novel therapeutic strategies. ARVIN can be improved in a few ways. First, the performance of ARVIN can be affected by the quality of the GRN. In this study, we addressed this issue by weighting the edges and nodes in the network. To further examine the robustness of our method, we substituted HumanNet with the functional gene interaction network annotated in the STRING database48. Using the same set of gold-standard promoter and enhancer SNPs, we found that ARVIN achieves a similar performance gain compared to GWAVA and FunSeq2 (Supplementary Fig. 8). To further evaluate the general applicability of ARVIN on enhancer−promoter networks, we compared the performance of the three methods using alternative tissue-specific networks constructed from enhancer−promoter interactions generated by the FANTOM5 consortium49,50. Again, we found that ARVIN achieves the best performance (Supplementary Fig. 9). As more experimental data on molecular interactions become available, they can be used to construct more accurate GRNs. In addition, since ARVIN is a supervised method, its accuracy depends on the training set. The training set we used (HGMD28) is the most comprehensive manually curated disease mutation database. It only includes causal disease variants, excluding those that are associated with the disease merely due to linkage with another known risk variant51.
However, it is possible that some false-positive variants are included due to linkage with yet-to-be-discovered causal SNPs. As the annotation of causal variants continues to improve, it can be used to train a more robust classifier.

## Methods

### ARVIN framework

Key components of the computational framework are described in the following sections: construction of the disease-relevant GRN, computation of network-based features associated with candidate eSNPs, and a classifier for risk eSNPs using genomic, epigenomic, and network-based features.

### Construction of disease-relevant gene regulatory network

Network construction starts with identifying eSNPs. For each lead GWAS SNP, we identify the LD block to which it belongs. We then intersect the set of SNPs in the LD block with the set of enhancers from cell/tissue types relevant to the disease. This gives us a set of enhancer SNPs (eSNPs) in a given LD block identified by the lead GWAS SNP. The GRN consists of two types of nodes, representing eSNPs and genes, and two types of edges, those between eSNPs and genes (denoted as EP edges) and those between genes (denoted as FI edges) (Fig. 1c). EP edges represent the regulatory relationship between an enhancer and its target(s). FI edges represent functional interactions between genes. EP edges are based on enhancer−promoter interactions predicted by the IM-PET algorithm27 (Fig. 1a). Note that the enhancer−promoter interactions are also predicted using ChIP-Seq and gene expression data from cell/tissue types relevant to the disease. FI edges are taken from HumanNet, a probabilistic functional gene network of 16,222 protein-encoding genes in humans21. Each interaction in HumanNet has an associated probability representing a true functional linkage between two genes.
It is constructed by a Bayesian integration of 21 types of “omics” data including physical interactions, genetic interactions, gene co-expression, literature evidence, homologous interactions in other species, etc. HumanNet has been successfully used for improving the inference accuracy of coding variants. Interactions in HumanNet are not disease-specific. To add disease-specific information to the functional gene interaction network, we incorporate differential gene expression information from case-versus-control comparisons in disease-relevant cells/tissues. Nodes and edges in the network were weighted to (1) take into account the noise in the data and (2) represent the relative importance of different genes and interactions. Weights for eSNPs, WeSNP, are based on the scores for disruption of putative transcription factor binding sites by the SNPs. Weights for genes, WDE, are based on the values of differential gene expression between case and control samples. Weights for EP edges, WEP, are based on the probability for enhancer−promoter interaction output by the IM-PET algorithm. Weights for FI edges, WFI, are taken from HumanNet. To make the values of the different weight types comparable, we performed min-max normalization within each type.

### Network-based features associated with candidate eSNPs

We compute five network-based features. The first is the module score, which is based on the gene modules downstream of an eSNP. Our overall hypothesis is that a causal eSNP contributes to disease risk by directly causing expression changes in genes of disease-relevant pathways. Thus, in addition to the direct target gene of the eSNP, other genes in the same pathway can also provide discriminative information. With the weighted GRN, our goal is to identify “heavy” gene modules in the network that connect a given eSNP to a set of genes (encircled modules in Fig. 1c), hereafter termed eSNP modules.
On the other hand, non-causal eSNPs are expected to be associated with “light” modules, i.e. having marginal impact on pathway gene expression (e.g. eSNP3 in Fig. 1c). To score a candidate module, we use the following additive scoring scheme, summing all node and edge weights and dividing by the number of nodes (N) in the candidate module:

$$S\, =\, \left(\sum W^{\rm{eSNP}}\, +\, \sum W^{\rm{DE}}\, +\, \sum W^{\rm{EP}}\, +\, \sum W^{\rm{FI}}\right)/N.$$

We conduct the module search from all eSNPs in the weighted network. Obtaining a globally optimal solution consisting of all heavy subnetworks is an NP-hard problem, so we use a greedy search strategy. Starting with each eSNP, our algorithm considers all genes connected to the current eSNP module and adds the node whose addition leads to the maximal increase of the scoring function. This procedure repeats until there is no node whose addition can improve the module score. Several recent studies have reported that multiple enhancer elements could be present at a single GWAS locus52,53. Our network-based framework naturally handles such cases because we consider all eSNPs simultaneously during the module search. We assessed the statistical significance of candidate modules using randomized networks. Specifically, edges were randomized by edge-preserving shuffling, and node values were randomly shuffled within each type (i.e. among genes or among eSNPs). Empirical P values are computed based on the null score distribution from the randomized networks. The second network-based feature is the weighted degree of a node v directly downstream of an eSNP, defined as $\sum_{(u,v) \in E} W(u,v)$, where W(u,v) is the weight of the edge connecting nodes u and v. The third network-based feature is the betweenness centrality of a node v directly downstream of an eSNP. The betweenness centrality of a node corresponds to the proportion of shortest paths in the network going through that node.
The raw betweenness centrality is defined as

$$C_{\rm{B}}(v)\, =\, \sum_{t \ne v \ne u} \frac{\sigma_{tu}(v)}{\sigma_{tu}},$$

where σtu is the total number of shortest paths between node t and node u, and σtu(v) is the subset of σtu that go through v. The normalized betweenness centrality is defined as $$C{\prime}_{\rm{B}}(v)\, =\, C_{\rm{B}}(v) \times N,$$ where N is the total number of nodes in the network. The fourth network-based feature is the closeness centrality of a node v directly downstream of an eSNP. Closeness centrality is the inverse of the sum of shortest-path lengths between a node and all other nodes in the network. It reflects how quickly information spreads from the node of interest to all other nodes in the network. The raw closeness centrality is $$C_{\rm{C}}(v)\, =\, 1/\mathop{\sum}\nolimits_{u \ne v} d(u,v),$$ where d(u,v) denotes the length of the shortest path between u and v. The normalized closeness centrality is defined as $$C{\prime}_{\rm{C}}(v)\, =\, C_{\rm{C}}(v) \times N,$$ where N is the total number of nodes in the network. The fifth network-based feature is the page rank centrality of a node v directly downstream of an eSNP. Page rank centrality is a network measure based on the idea that the importance of a given node is determined by its own and its neighbors’ importance. The page rank centrality of a node v is defined as

$$C_{\rm{P}}(v)\, =\, \frac{1-d}{N}\, +\, d\mathop{\sum}\limits_{u \in V(v)} \frac{C_{\rm{P}}(u)}{L(u)},$$

where V(v) is the set of first neighbors of node v and L(u) is the number of edges incident on node u. d denotes a damping factor adjusting the derived value downward and N is the total number of nodes in the network. The normalized page rank centrality is defined as $$C{\prime}_{\rm{P}}(v)\, =\, C_{\rm{P}}(v) \times N.$$
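To make the module score and greedy search concrete, here is a minimal Python sketch; the graph, weights, and node names are invented toy values, not data from the study:

```python
# Toy weighted GRN. Node weights stand in for min-max-normalized
# W^eSNP / W^DE values; edge weights for W^EP / W^FI. All values invented.
node_w = {"eSNP1": 0.9, "g1": 0.8, "g2": 0.7, "g3": 0.1}
edge_w = {("eSNP1", "g1"): 0.9,   # EP edge (enhancer-promoter)
          ("g1", "g2"): 0.8,      # FI edges (functional interactions)
          ("g2", "g3"): 0.05}

def module_score(nodes):
    """S = (sum of node weights + sum of within-module edge weights) / N."""
    nw = sum(node_w[v] for v in nodes)
    ew = sum(w for (u, v), w in edge_w.items() if u in nodes and v in nodes)
    return (nw + ew) / len(nodes)

def neighbors(v):
    return {u for e in edge_w for u in e if v in e} - {v}

def greedy_module(esnp):
    """Grow a module from an eSNP, at each step adding the frontier node
    that maximally increases the score; stop when no addition helps."""
    module, score = {esnp}, module_score({esnp})
    while True:
        frontier = set().union(*(neighbors(v) for v in module)) - module
        scored = sorted(((module_score(module | {u}), u) for u in frontier),
                        reverse=True)
        if not scored or scored[0][0] <= score:
            return module, score
        score, best = scored[0]
        module.add(best)

module, score = greedy_module("eSNP1")
# The low-weight gene g3 is left out because adding it lowers the score.

# Weighted degree of the gene directly downstream of the eSNP:
wdeg_g1 = sum(w for e, w in edge_w.items() if "g1" in e)
```

The remaining centrality features (betweenness, closeness, PageRank) could be computed on the same graph with a library such as NetworkX (`betweenness_centrality`, `closeness_centrality`, `pagerank`).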
### Predicting risk variants

To classify risk eSNPs, we trained a random forest (RF) classifier using the combined feature set that consists of 5 network-based features, 6 binary features from FunSeq, and 175 features from GWAVA. The classifier contained 500 decision trees. Each decision tree was built using ~20% of randomly selected training data (100 out of 464) and $$\sqrt{187}\, \approx\, 14$$ randomly selected features. Classification error was measured with data not used for training (i.e. out-of-bag data). To compute feature importance, for each decision tree the classification error was computed using permuted and non-permuted feature values. The difference between the two classification errors was then averaged over all trees and used as the feature importance. To select the most predictive features, we used a recursive feature elimination (RFE) strategy54. At each iteration of the feature selection, the top S most important features were selected, the RF model was refit, and the corresponding performance was evaluated. To assess the variance in performance at each iteration of feature selection, we performed fivefold cross-validation. After all iterations, the optimal set of features was determined as the subset with the best average performance across the fivefold cross-validation. A receiver operating characteristic (ROC) curve is used to evaluate prediction performance. The difference in auROC between two ROC curves is computed using a bootstrap-based method55. Based on the optimal set of features, we build an RF classifier. Given a genetic variant along with its feature values, the classifier outputs a prediction probability indicating how likely this genetic variant is to be a risk variant in a given disease.

### Predictions of enhancers and enhancer−promoter interactions

Enhancers were predicted using the Chromatin Signature Inference by Artificial Neural Network (CSI-ANN) algorithm10. The input to the algorithm is the normalized ChIP-Seq signals of three histone marks (H3K4me1, H3K4me3, and H3K27ac).
The algorithm combines the signals of all histone marks and uses an artificial neural network-based classifier to predict active enhancers with the histone modification signature “H3K4me1hi + H3K4me3neg/lo + H3K27achi”. The training set for the classifier was prepared using ENCODE data from the mouse ES-Bruce4, MEL, and CH12 cell lines. To create the training set for active enhancers, we first selected a set of promoter-distal p300 binding sites (>2.5 kb from RefSeq TSSs), and overlapped them with the histone modification peaks. The top 300 distal p300 sites that overlapped with H3K4me1 and H3K27ac peaks, but not H3K4me3 peaks, were selected as the positive set. One thousand randomly selected genomic regions and 500 active promoter regions were used as the negative set. Enhancers were predicted using a false discovery rate (FDR) cutoff of 0.05. Predicted enhancers that overlapped by at least 500 bp were merged by selecting the enhancer with the highest CSI-ANN score. We obtained histone modification ChIP-Seq data from the NCBI Epigenome Atlas, Roadmap Epigenomics Project, Encyclopedia of DNA Elements (ENCODE), International Human Epigenome Consortium, and the GEO database (Supplementary Table 1). Target promoter(s) of an enhancer were predicted using the IM-PET algorithm27. It predicts enhancer−promoter interactions by integrating four features derived from transcriptome, epigenome, and genome sequence data: (1) enhancer−promoter activity correlation, (2) transcription factor-promoter co-expression, (3) enhancer−promoter co-evolution, and (4) enhancer−promoter distance. Here, we used tissue/cell type-specific histone modification ChIP-Seq and RNA-Seq data (Supplementary Table 1) to compute the values of features 1, 3, and 4 for the given tissue/cell type.
Values of feature 3 were based on sequence conservation across 15 mammalian species (human, chimp, gorilla, orangutan, gibbon, rhesus, baboon, marmoset, tarsier, mouse lemur, tree shrew, mouse, rat, rabbit, and guinea pig). We used an FDR cutoff of 0.05 as the threshold for making predictions.

### Evaluation of enhancer−promoter predictions

We searched for large-scale chromatin interaction data measured using either the Hi-C or the ChIA-PET protocol (Supplementary Table 2). We used the EP interactions reported in these studies as the gold standard to assess the quality of our predicted enhancer−promoter pairs. We first identified EP pairs in which the enhancers overlap with the interacting fragments reported by the Hi-C or ChIA-PET studies. These EP pairs are regarded as eligible for comparison with the Hi-C or ChIA-PET data. We then computed ROC curves using the EP interactions reported in either Hi-C or ChIA-PET studies as the gold standard.

### Gold-standard risk variants located in gene promoters

The Human Gene Mutation Database (HGMD, version 2014 r1)28 was used to select regulatory variants located in promoter regions, defined as 2 kb upstream and 0.5 kb downstream of the TSS. Transcript annotation was based on Gencode v19 (GRCh37). Only transcripts with high confidence were used (level <3). We selected all diseases and their associated SNPs in HGMD that satisfied the following three criteria: (1) SNPs have the annotation of “DP” (disease-associated polymorphism), “FP” (polymorphism exerts a direct functional effect), “DFP” (disease-associated polymorphism with additional supporting functional evidence), or “DM” (disease-causing mutation) in HGMD; (2) case and control gene expression data were available for the disease; (3) the genes of the reported promoters were present in the HumanNet connected network. For negative control SNPs, we used common (minor allele frequency ≥ 1%) SNPs from the 1000 Genomes Project.
Seventy-five percent of the HGMD variants lie within a 2 kb window flanking the transcription start site16. Therefore, to control for this bias in the positive set, we selected negative control SNPs such that their distance distribution to the nearest TSS matches that of the positive training set. The lists of positive and negative control variants are provided in Supplementary Data 1.

### Processing of gene expression profiling data

All gene expression microarray data were analyzed using the limma package56. Raw microarray data were background corrected and quantile normalized. A linear model was fit to the data using the lmFit function of limma. Differential expression was assessed at the probe level using the empirical Bayes (eBayes) method. To summarize differential expression at the gene level, we selected the minimum P value across the probes that map to a gene. The list of gene expression data sets used in this study to assess differential expression is provided in Supplementary Table 3.

### Gold-standard risk variants located in enhancers

We curated a set of experimentally validated eSNPs from multiple resources, including HGMD28, ClinVar57, the Open Regulatory Annotation Database (ORegAnno)58, and a manual search of the PubMed literature. We accepted an eSNP as validated only if it satisfies the following criteria: (1) significant association of the eSNP with the disease; (2) direct experimental evidence that the SNP causes differential TF binding and gene expression change; and (3) the enhancer is located more than 5 kb away from the affected gene promoter. The list of experimentally validated eSNPs is provided in Supplementary Table 4.

### Identification of linkage disequilibrium blocks

We used data from the 1000 Genomes Project (phase 3 release) to identify SNPs in LD with experimentally validated enhancer SNPs and GWAS Catalog lead SNPs.
PLINK59 was used to identify linked SNPs with D′ > 0.9 and within 1 Mb of either validated enhancer SNPs or GWAS lead SNPs. SNPs with D′ > 0.9 relative to the index SNP are considered to be in the same LD block as the index SNP.

### FunSeq2 and GWAVA features

FunSeq220 employs seven binary and four continuous features to determine if a variant is deleterious: (1) overlap with ENCODE annotation of cis-regulatory elements such as enhancers, promoters, or DHSs; (2) overlap with sensitive regions (i.e. regions under a high level of negative selection); (3) overlap with ultrasensitive regions; (4) overlap with ultra-conserved elements; (5) overlap with HOT (highly occupied by transcription factors) regions; (6) overlap with regulatory elements associated with genes; (7) recurrence in multiple samples; (8) motif-breaking score; (9) motif-gaining score; (10) network centrality score; and (11) GERP score. Feature values for candidate SNPs were obtained by submitting SNP coordinates to the FunSeq2 web portal. GWAVA16 uses 175 genomic and epigenomic features, including overlap with histone modification and transcription factor ChIP-Seq peaks. We obtained GWAVA feature values for candidate SNPs using the various annotation data sources and the Python script (gwava_annotate.py) provided in the GWAVA supplementary portal.

### Identifying the subnetwork affected by a set of risk eSNPs

To identify the subnetwork collectively affected by a set of risk eSNPs in a disease, we use the PCST algorithm. Given an undirected graph G = (V, E, c, p), where vertices V are associated with non-negative profits p and edges E are associated with non-negative costs c, the PCST algorithm finds a connected subgraph G′ = (V′, E′) of G that maximizes the net profit, defined as the sum of all node-associated profits minus all edge-associated costs60. The algorithm takes as input the disease-relevant regulatory network and all risk eSNPs implicated in a given disease.
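To make the objective concrete, here is a minimal sketch of the net-profit computation; the node profits, edge scores, and gene names are invented, and a real PCST solver optimizes this objective over all connected subgraphs rather than evaluating candidates directly:

```python
def net_profit(sub_nodes, sub_edges, profit, cost):
    """PCST objective for a candidate subgraph G' = (V', E'):
    sum of node profits minus sum of edge costs."""
    return sum(profit[v] for v in sub_nodes) - sum(cost[e] for e in sub_edges)

# Toy instance: node profits, plus edge scores S(i,j) converted to
# costs as 1 - S(i,j). All numbers invented.
profit = {"eSNP1": 0.9, "gA": 0.7, "gB": 0.2}
score = {("eSNP1", "gA"): 0.8, ("gA", "gB"): 0.1}
cost = {e: 1 - s for e, s in score.items()}

# Attaching the weakly linked, low-profit gene gB lowers the net profit,
# so an optimal subtree would exclude it:
with_gB = net_profit({"eSNP1", "gA", "gB"}, list(cost), profit, cost)
without_gB = net_profit({"eSNP1", "gA"}, [("eSNP1", "gA")], profit, cost)
```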
Every input eSNP is considered as a possible root node of the Steiner tree, and the one resulting in a Steiner tree with the largest net profit is chosen as the final root node. To identify the optimal solution, the algorithm links every input eSNP to the selected root node while maximizing the net profit, which can be solved using a message-passing technique61. We convert our edge scores into edge costs as 1 − S(i,j), where S(i,j) is the edge score. The final output of the algorithm is a tree composed of all risk eSNPs and the genes that are targeted by them. The eSNPs are connected via interactions among the target genes.

### Generation of non-disease-specific networks

For studying risk variants in promoters, we used the following procedure to construct non-specific networks: (1) using only the backbone HumanNet without adding disease-specific differential gene expression information (resulting network termed the “No-DE” network); (2) using the backbone HumanNet and adding differential expression information averaged over all diseases in this study (resulting network termed the “AVG-DE” network); (3) using the backbone HumanNet and adding differential expression information from mismatched cell/tissue types, e.g. when studying heart disease variants, using intestine gene expression data (resulting network termed the “Mismatch-DE” network). For studying risk variants in enhancers, we used the same procedure to create non-specific gene functional interaction networks (i.e. FI edges). In addition, for the EP interactions (EP edges), we similarly removed, averaged, or shuffled EP interaction scores while keeping the same topology, to make the EP interactions non-disease-specific.

### P value for eSNPs that disrupt transcription factor binding sites

For each eSNP, we first scan sequences containing the eSNP using TF binding motifs from the Cis-BP database62 and calculate the log-odds ratio score for the SNP-containing sequence.
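A minimal sketch of this per-allele log-odds scoring, with an invented position weight matrix and sequences (a real analysis would scan Cis-BP motifs and apply TFM-Pvalue thresholds):

```python
import math

# Hypothetical 4-bp motif as a position weight matrix of base
# probabilities; the background model is uniform (0.25 per base).
pwm = [{"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
       {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
       {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
       {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1}]
BG = 0.25

def log_odds(seq):
    """Best log-odds motif score over all windows of seq."""
    k = len(pwm)
    return max(sum(math.log2(pwm[i][seq[j + i]] / BG) for i in range(k))
               for j in range(len(seq) - k + 1))

# One SNP-containing sequence per allele (invented example; the G>C
# change disrupts position 2 of the motif):
ref_allele = "TTAGTACC"
alt_allele = "TTACTACC"
score_diff = log_odds(ref_allele) - log_odds(alt_allele)  # > 0: disruption
```

The allele score difference would then be compared to a null distribution of differences from random SNPs, as described next.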
If at least one allele of the SNP has a score greater than the threshold that corresponds to a P value of 4×10−7, computed using the TFM-Pvalue method63 for each motif separately, the sequence is considered a TF binding site. Next, the difference in the motif score between the two alleles is computed and compared to a null distribution of motif score differences based on one million randomly selected SNPs reported by the 1000 Genomes Project. Raw P values are corrected for multiple testing using the Benjamini−Hochberg method. The motif disruption score for a given eSNP is the negative logarithm of the most significant motif disruption P value among all TF motifs having a binding site overlapping with the eSNP.

### SNPs associated with autoimmune diseases

We obtained SNPs associated with seven autoimmune diseases from the GWAS Catalog29. All SNPs have a genome-wide association P value of 5×10−8 or less. We identified SNPs in LD with the GWAS Catalog SNPs. A summary of GWAS Catalog SNPs and linked eSNPs is provided in Supplementary Table 5.

### Identification of optimal set of risk eSNPs in a disease

ARVIN computes a probability score for each candidate eSNP. To choose a cutoff for the final predictions, we developed the following procedure based on the assumption that a true risk eSNP should either be a lead GWAS SNP or be linked to one. We first rank all eSNPs in descending order of their ARVIN scores.
Next, we compute a cumulative enrichment score as follows:

$$S\,=\,\sum_{i=1}^{n} \begin{cases} d \times p_i \\ d \times (1 - p_i) \end{cases}$$

where pi is the ARVIN score for eSNP i and d is an indicator whose value depends on whether the SNP is located in a disease-associated region, defined as an LD block anchored by a GWAS or ImmunoChIP64 lead SNP with an association P value < 5×10−8. d takes the value of 1 if eSNP i is in a disease-associated region; otherwise, its value is −1. Under this scoring scheme, eSNPs located outside of disease-associated regions contribute a negative value to the enrichment score (Supplementary Fig. 2). When S reaches its maximum value, we use the index i as the optimal number of eSNPs for the given disease.

### Evaluation of disease risk of predicted eSNPs with GWAS data

GWAS data for case and control samples were obtained from the WTCCC (Wellcome Trust Case Control Consortium). Samples with reported poor quality were excluded from the analysis. We used the WTCCC137 data sets for Crohn’s disease (1738 cases), rheumatoid arthritis (1860 cases), type 1 diabetes (1963 cases), and shared control samples from National Blood Service (NBS) individuals (1456 controls). We used the WTCCC236 data sets for multiple sclerosis (9770 cases), psoriasis (2178 cases), ulcerative colitis (2361 cases), and shared control samples from NBS phase-2 individuals (2679 controls). Following the best-practice guidelines of IMPUTE265, we imputed 1000 Genomes Phase 1 variants into each GWAS sample. We made hard genotype calls by applying a threshold of 0.9 to the maximum posterior probability of the three possible imputed genotypes. We assessed the combined effect of predicted risk eSNP pairs targeting the same gene on disease risk using a permutation-based procedure38. First, for each eSNP pair, we calculated odds ratios for each genotype involving a single SNP.
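The single-SNP odds ratio in this first step can be sketched as follows; the genotype counts are invented, while the cohort sizes echo the WTCCC1 Crohn's disease (1738 cases) and NBS (1456 controls) samples:

```python
def odds_ratio(case_with, case_without, ctrl_with, ctrl_without):
    """Odds ratio for carrying a genotype:
    (case_with / case_without) / (ctrl_with / ctrl_without)."""
    return (case_with / case_without) / (ctrl_with / ctrl_without)

# Invented count of carriers of genotype TT at one eSNP:
cases_tt, n_cases = 300, 1738
ctrls_tt, n_ctrls = 180, 1456
or_tt = odds_ratio(cases_tt, n_cases - cases_tt,
                   ctrls_tt, n_ctrls - ctrls_tt)  # > 1: enriched in cases
```

The nine two-SNP genotype combinations described next are handled the same way with joint counts, and the permutation P value compares the observed odds ratio to those from genotype-shuffled data.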
We then calculated odds ratios for the nine genotype combinations involving both eSNPs. Next, for individuals of each genotype of the first eSNP in the pair, we randomly assigned a genotype for the second eSNP while maintaining the minor allele frequency of the second eSNP. We generated 1000 permutations and calculated odds ratios for the nine genotype combinations. Finally, to assess the significance of the risk alteration, we calculated empirical P values by comparing the odds ratios for the real genotype pairs to the distribution of odds ratios from the randomized genotypes.

### Luciferase reporter assay

Jurkat cells were purchased from ATCC (TIB-152). The cell line was tested for mycoplasma contamination using the ABI MycoSEQ mycoplasma detection assay (Applied Biosystems). Enhancer sequences containing predicted risk eSNPs were cloned using the In-Fusion HD Cloning Kit (Clontech, Cat # 639648) into a luciferase reporter construct, pGL3-HS, in which expression of the luciferase gene is driven by a minimal heat-shock promoter. Sanger sequencing was used to determine the alleles of the risk eSNPs. Two control regions of ~2 kb without either H3K4me1 or H3K27ac signals were cloned into the same plasmid as negative controls. Reporter constructs were transfected into Jurkat cells using the TransIT-Jurkat Reagent (Mirus Bio, MIR 2120). As an internal control, a plasmid containing Renilla luciferase (pRL-TK from Promega) was co-transfected at a molar ratio of 1:10 for Renilla vs firefly luciferase. Cells were collected 48 h post transfection, and luciferase reporter levels were measured and compared to Renilla luciferase reporter activity using the Dual-Luciferase Reporter Assay kit (Promega, cat # E1910). Primer sequences for cloning enhancers and mutagenesis are listed in Supplementary Tables 7 and 8.

### Site-directed mutagenesis of enhancer SNPs

To mutate a SNP within the tested enhancers, the Q5 site-directed mutagenesis kit (NEB, cat # E0554S) was used according to the vendor’s manual.
Briefly, primer pairs containing the desired mutations were used to generate plasmids with mutations using the original plasmids as the templates. Sanger sequencing was performed to confirm the mutations.

### Data availability

We have deposited the ARVIN code, accessory scripts, data, and documentation at GitHub: https://github.com/gaolong/arvin.

## References

1. Hindorff, L. A. et al. Potential etiologic and functional implications of genome-wide association loci for human diseases and traits. Proc. Natl. Acad. Sci. USA 106, 9362–9367 (2009). 2. UK10K Consortium et al. The UK10K project identifies rare variants in health and disease. Nature 526, 82–90 (2015). 3. Kandoth, C. et al. Mutational landscape and significance across 12 major cancer types. Nature 502, 333–339 (2013). 4. Chorley, B. N. et al. Discovery and verification of functional single nucleotide polymorphisms in regulatory genomic regions: current and developing technologies. Mutat. Res. 659, 147–157 (2008). 5. Noonan, J. P. & McCallion, A. S. Genomics of long-range regulatory elements. Annu. Rev. Genom. Hum. Genet. 11, 1–23 (2010). 6. Freedman, M. L. et al. Principles for the post-GWAS functional characterization of cancer risk loci. Nat. Genet. 43, 513–518 (2011). 7. Epstein, D. J. Cis-regulatory mutations in human disease. Brief. Funct. Genomic. Proteomic. 8, 310–316 (2009). 8. Visel, A., Rubin, E. M. & Pennacchio, L. A. Genomic views of distant-acting enhancers. Nature 461, 199–205 (2009). 9. ENCODE Project Consortium. The ENCODE (ENCyclopedia Of DNA Elements) Project. Science 306, 636–640 (2004). 10. Firpi, H. A., Ucar, D. & Tan, K. Discover regulatory DNA elements using chromatin signatures and artificial neural network. Bioinformatics 26, 1579–1586 (2010). 11. Andersson, R. et al. An atlas of active enhancers across human cell types and tissues. Nature 507, 455–461 (2014). 12. Farh, K. K. et al. Genetic and epigenetic fine mapping of causal autoimmune disease variants.
## Acknowledgements

We thank the Research Information Services at the Children’s Hospital of Philadelphia for providing computing support. This work was supported by United States National Institutes of Health grants GM104369, GM108716, HG006130, and HD089245 (to K.T.), AA022994 (to S.H.) and AA024486 (to S.H. and K.T.).

## Author information

### Contributions

L.G., Y.U. and K.T. conceived and designed the study; L.G. and Y.U. designed and implemented the ARVIN algorithm. B.H., X.M., J.W. and S.H. provided additional analytical tools. L.G. and Y.U. performed data analysis. P.G. performed experimental validation. K.T. supervised the overall study. L.G., Y.U. and K.T. wrote the paper.

### Corresponding author

Correspondence to Kai Tan.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

## Rights and permissions

Gao, L., Uzun, Y., Gao, P. et al. Identifying noncoding risk variants using disease-relevant gene regulatory networks. Nat Commun 9, 702 (2018).
DOI: https://doi.org/10.1038/s41467-018-03133-y
https://www.physicsforums.com/threads/times-circling-the-earth.817334/
# Times circling the Earth

1. Jun 4, 2015

### Nick666

So after watching this, circling around the Earth 7 times per second, close to the speed of light, one week of travel results in 100 years on Earth. Simplifying, 1 second on the machine versus 5200 seconds on Earth. So I have to ask, how many times did the machine travel around the Earth? 5200*7?

Last edited: Jun 4, 2015

2. Jun 4, 2015

3. Jun 4, 2015

### Nick666

It just seems to me the obvious answer would be 5200*7. But then I try to imagine the perspective of the train rider, and the circling of the Earth 36400 times versus its measly one measured second, and something doesn't seem right.

4. Jun 4, 2015

### Simon Bridge

It's a tad tricky because the train rider is not in an inertial frame. But basically, everyone will see the same number of circuits... you can imagine the train going back and forth instead of in a circle and use the usual twin's paradox analysis.

5. Jun 4, 2015

### Nick666

I understand the usual twin paradox... but there's something that's bugging me. Say the train sends a photon to a mirror in space that's ~150000 km away. Now shouldn't the train receive back the photon after 5200*7 rotations from its perspective, yet we on Earth see the train receiving it after 7 rotations?

6. Jun 4, 2015

### A.T.

150000 km as measured in which frame?

7. Jun 4, 2015

### Nick666

Ermm... we on Earth measure it. Half the distance to the moon or something like that.

8. Jun 4, 2015

### A.T.

The train will measure a different distance to the mirror due to length contraction, which will vary depending on which direction the train is currently moving. Also, in the non-inertial train frame light doesn't travel along straight lines at constant speed c. So there is no reason to assume that the train will "receive back the photon after 5200*7 rotations from its perspective". Just like with the twin paradox, the fallacy is assuming rules of inertial frames for a non-inertial frame.

9. Jun 4, 2015

### Nick666

But I can imagine that mirror being a sphere-mirror of ~150000 km in circumference, and sending the photon in whatever direction. Wouldn't that mean that the length contraction happens in all directions?

10. Jun 4, 2015

### Staff: Mentor

No. You're trying to say that all points on the mirror are a distance of $75000/\pi$ kilometers from the center of the earth... but an observer on the orbiter will find that the distance from the center of the earth to the mirror along a line parallel to the motion of the orbiter will be contracted, while the distance along a line perpendicular to the motion of the orbiter will not. Thus, the sphericalness of the mirror is itself a frame-dependent thing. (This shouldn't be surprising. Even in ordinary straight-line motion, if I use light and mirrors, or radar, to determine the shape of an object moving relative to me, if it is spherical in one frame it won't be spherical in others.)

11. Jun 4, 2015

### Nick666

I understand the sphericalness of the sphere being frame-dependent. But if the distance along a line perpendicular to the motion will not contract, doesn't that mean that it will take 5200*7 rotations for the train to receive the photon while we see the train as receiving the photon after 7 rotations? Cause that's what I understood from A.T.: "it shrinks so it will receive the photon way before 5200*7 rotations". But you say it doesn't shrink in that specific direction.

12. Jun 4, 2015

### A.T.

You are ignoring the rest of my post:
- The length contraction will vary depending on which direction the train is currently moving.
- In the non-inertial train frame light doesn't travel along straight lines at constant speed c.

13. Jun 4, 2015

### Staff: Mentor

Remember, the train is constantly changing direction, so the direction of shrinkage is also constantly changing, and the light is not travelling in a straight line relative to the train observer. You've constructed a fairly complicated setup here, so it's hard to see the answer clearly, but it really all comes down to what A.T. said above - you're applying inertial-frame simplifications to non-inertial movement, and that never works.

14. Jun 4, 2015

### Nick666

So what you're saying is that the speed of light is less or higher than c from some points of view of the non-inertial train?

15. Jun 4, 2015

### A.T.

In non-inertial frames the time for a light round trip of length d can be different from d/c.

16. Jun 4, 2015

### Nick666

Yes, but can it be higher or lower than d/c?

17. Jun 4, 2015

### Staff: Mentor

Either. In this context, $d$ is a coordinate distance, not a proper distance, and it doesn't have much physical significance. If you want to work this problem out properly, you will have to stop trying to understand it in terms of time dilation and length contraction; these are simplifications of the more general machinery of the Lorentz transformations between momentarily comoving inertial frames, and they won't work here. Instead, pick a coordinate system, any coordinate system that works for you; describe the worldlines of the orbiter, the earth-bound observer, and some interesting light rays in that coordinate system; and calculate the proper time on the orbiter's and earthbound observer's worldlines between the points of intersection of these worldlines and the paths of the light rays.

18. Jun 4, 2015

### pervect

Staff Emeritus

The coordinate speed of light can be different from c in non-inertial frames - this depends on your coordinate choices. Coordinate speeds are not usually regarded as "physical", however. I should explain more, probably, but I'm running out of time, so I'll leave it at that for now.

[add] I guess I should say that the issues involve the fact that differences between coordinates do not always reflect either proper distances or proper times. But this then leaves wanting a clear explanation of what is meant by proper distance and proper time.

19. Jun 4, 2015

### Simon Bridge

The light fired from a train going in a circle is a common misdirect on "einstein was wrong" websites. It's also a common exercise in introductory GR courses. To understand the issues... it helps to be more careful. Let's get rid of the distracting parts of the description, like the exact numbers and specific locations.

An observer O sees another observer T moving at a constant relativistic speed v on a stationary circular track of radius r. At t=0 on O's and T's clocks, O and T are right next to each other. At that time O notices a pulse of light travel radially outward to a mirror at distance d from the track (which is d+r from the center of the track)... where it is reflected directly back again. Meantime the train completes an integer number n of circuits of the track, arriving in time to receive the returning pulse. (A circuit is completed each time T passes by O.)

O figures that $2d/c=2n\pi r/v$, since the time to go n times around the track is also the time for light to get to the mirror and back. Is this the sort of setup you are thinking of?

To work out what T figures, you have to change to the situation where T is stationary. This means the track and O are following an odd path around T, and so is the mirror. Thus a circuit is completed each time O passes by T. The track is no longer a circle, and keeps changing shape while it moves. The path the light takes is no longer a straight line. Any calculation anyone wants to do that ignores any of that is wrong.

The question is... how many circuits does T count compared with O? That sound good to you?

20. Jun 5, 2015

### Nick666

Alright, I get it now. I never thought of imagining the T(rain) stopped and the rest of the setup "circling around". Though I have to say, this example I cooked up in my head all by myself, no anti-relativity sites, don't care about those. Thank you all for the answers.
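For reference, the arithmetic behind the thread's numbers can be checked with a short script. The 7 orbits per second and "1 week on the train versus 100 years on Earth" figures are just the video's round numbers as quoted in post #1, taken at face value; the point, as the consensus above says, is that both observers count the same total number of circuits, and the rider simply packs them into far less proper time.

```python
# Round figures quoted in post #1 (from the video, taken at face value).
week = 7 * 24 * 3600                  # train's proper travel time, in seconds
century = 100 * 365.25 * 24 * 3600    # elapsed time on Earth, in seconds

gamma = century / week                # implied time-dilation factor, ~5218
orbits_per_earth_second = 7
total_orbits = orbits_per_earth_second * century  # counted in the Earth frame

# Each completed circuit is an unambiguous event (the train passing the
# Earth-bound observer), so every observer agrees on the total count.
orbits_per_train_second = total_orbits / week

print(f"gamma ~ {gamma:.0f}")
print(f"total orbits ~ {total_orbits:.3e}")
print(f"orbits per second of train proper time ~ {orbits_per_train_second:.0f}")
```

This reproduces the numbers in posts #1 and #3: roughly 5200 seconds of Earth time per train second, and about 36400 circuits per second of the rider's proper time (7 × gamma), with no disagreement about the total circuit count.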
https://cs.stackexchange.com/questions/62448/when-is-%CE%B5-removal-from-a-cfg-idempotent
# When is ε-removal from a CFG idempotent?

For which context-free grammars is it idempotent to remove $\varepsilon$-productions? Given that there are multiple rewriting algorithms which preserve the language and leave the grammar without $\varepsilon$-productions (apart from $S \to \varepsilon$ iff $\varepsilon \in L(G)$), is this sensitive to the choice of algorithm?

As far as I can tell:

• If $L(G) \not\ni \varepsilon$ then rewriting is always idempotent: the second pass finds no $\varepsilon$-productions to operate on.

• If the start symbol doesn't occur on any right-hand side, it is always idempotent: after the first rewrite this property is maintained and there are no $\varepsilon$-productions except maybe $S \to \varepsilon$. The second pass may find an $\varepsilon$-production, but since $S$ doesn't occur on any right-hand side there are no rules to transform.

Is there some variant of $\varepsilon$-removal which is always idempotent? Will treating the start symbol as never nullable (even if it is) produce an idempotent $\varepsilon$-removal algorithm? My tests suggest so, but I can't think of a proof at the moment.

Pretending the start symbol is not nullable leads to an $\varepsilon$-removal algorithm which is idempotent on all CFGs. I'll now formalize and prove this.

I define a non-terminal $A$ to be S-nullable iff

• it is not $S$ (the start symbol); and

• it either has a rule $A \to \varepsilon$ or it has a rule $A \to X_1 \ldots X_n$ where each $X_i$ is an S-nullable non-terminal.

The $\varepsilon$-removal algorithm then becomes:

Input: a context-free grammar $G = (N, \Sigma, S, P)$.

Output: a context-free grammar $G_\varepsilon$ with no S-nullable non-terminals and $L(G_\varepsilon) = L(G)$.

Procedure: let $G' = (N, \Sigma, S, P \setminus \{(A \to \varepsilon) \in P \mid A \not= S\})$.
For each non-terminal $A$ in $N$ which is S-nullable in $G$, for each rule $B \to \alpha A \beta$ in $G'$, add the rule $B \to \alpha \beta$ to $G'$ if $\alpha \beta \not= \varepsilon$ or $B = S$ (here $\alpha$ and $\beta$ are strings in $(N \cup \Sigma)^*$). Let $G_\varepsilon$ be $G'$ after all such rules have been added.

Note that one-by-one addition of rules means that if we have $S \to ABCd$, where $ABC$ is nullable, and we nullify $A$, $B$, $C$ in that order, we will first add $S \to BCd$, then add $S \to ACd$ and $S \to Cd$, then add $S \to ABd \mid Bd \mid Ad \mid d$. In other words, it should be equivalent to doing the combinatorial expansion of rules one rule at a time.

Clearly $G_\varepsilon$ has no S-nullable non-terminals: all $\varepsilon$-rules were removed (except $S \to \varepsilon$ if it was there), and no $\varepsilon$-rules were added (except maybe $S \to \varepsilon$). Hence no non-terminal is S-nullable directly (since $S$ is never S-nullable), and the inductive case has no basis to apply to. Hence the algorithm is idempotent: it only adds or removes rules if there is an S-nullable non-terminal in $G$.

The algorithm also preserves the language, i.e. $L(G_\varepsilon) = L(G)$, by translation of derivations. When we derive $A \Rightarrow \varepsilon$, we can instead avoid introducing $A$ in the first place, unless $A = S$, in which case $S \to \varepsilon$ is a rule. In the other direction, when we derive $\alpha \beta$ we can sprinkle it with (S-)nullable non-terminals and derive $\varepsilon$ from those.

Once more, with formality: in one direction, when $\alpha \beta$ is derived from $A$ in $G_\varepsilon$ and $A \to \alpha \beta$ doesn't occur in $G$, this is because we added this rule based on finding $A \to \alpha B \beta$ with $B$ S-nullable.
Either this rule occurs in $G$, in which case derive $\varepsilon$ from $B$, or this rule was itself added; let $\alpha' \beta' = \alpha B \beta$ such that we added $A \to \alpha' \beta'$ based on finding $A \to \alpha' C \beta'$ where $C$ is S-nullable. Every inductive step searches for a right-hand side that's longer by 1 and bounded by the length of the longest rule in $G$, hence this process terminates. Derive $\varepsilon$ from all the back-filled non-terminals (they're all S-nullable). This will derive $\alpha \beta$ from $A$ in multiple steps, hence $\Rightarrow^*$ is preserved (in one direction). In the other direction, if $\varepsilon$ is derived from $A$ (in a single step) where $A \to \varepsilon$ doesn't occur in $G_\varepsilon$, we know that $A \not= S$ (because we never remove $S \to \varepsilon$) and $A$ is S-nullable; hence we must have previously derived some sentential form containing $A$ in at least one derivation step. Let $B \to \alpha A \beta$ be the rule which introduced the $A$ in question. If $\alpha \beta \not= \varepsilon$, we added the rule $B \to \alpha \beta$. Use this rule instead. (Any derivations performed between introducing and eliminating $A$ can still be performed, and have identical results.) If $\alpha \beta = \varepsilon$, then either $B = S$ or $B$ is S-nullable. If $B = S$ we added the rule $S \to \varepsilon$; use this to replace the derivation $\gamma \underline{S} \delta \Rightarrow \gamma A \delta$ with $\gamma \underline{S} \delta \Rightarrow \gamma \delta$. If $B \not= S$ it is S-nullable (as $B \Rightarrow A \Rightarrow \varepsilon$) and some other rule introduced $B$ to our sentential form. Apply the same replacement; the number of recursive steps is bounded above by the number of derivation steps done so far. Unrelated observation: if in $G$ we have rules $S \to \varepsilon$ and $A \to \alpha S \beta$ then $G_\varepsilon$ will still have those rules, i.e. it will not be essentially non-contracting. 
However, if $S$ never occurs on the right-hand side of any rule in $G$, then $G_\varepsilon$ will be essentially non-contracting (and $S$ will also not occur on any right-hand side in $G_\varepsilon$).
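The algorithm above is easy to sketch in Python (my own representation, not from the answer: a grammar is a dict mapping each non-terminal to a set of right-hand sides, each a tuple of symbols, with $\varepsilon$ as the empty tuple). The sketch performs the full combinatorial expansion of S-nullable occurrences in one pass, which — as noted above — yields the same rule set as adding the shortened rules one at a time.

```python
from itertools import combinations

def s_nullable(grammar, start):
    """Least fixed point of the S-nullable definition: not the start symbol,
    and having a rule whose symbols are all S-nullable (epsilon rules included,
    since all() over an empty right-hand side is True)."""
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, rhss in grammar.items():
            if lhs == start or lhs in nullable:
                continue
            if any(all(x in nullable for x in rhs) for rhs in rhss):
                nullable.add(lhs)
                changed = True
    return nullable

def remove_epsilon(grammar, start):
    """Epsilon removal that treats the start symbol as never nullable:
    drop A -> eps for A != S, and expand every rule over all subsets of its
    S-nullable occurrences, keeping eps right-hand sides only for S."""
    nullable = s_nullable(grammar, start)
    new = {}
    for lhs, rhss in grammar.items():
        out = set()
        for rhs in rhss:
            if rhs == () and lhs != start:
                continue  # drop A -> eps for non-start A
            positions = [i for i, x in enumerate(rhs) if x in nullable]
            for k in range(len(positions) + 1):
                for drop in combinations(positions, k):
                    cand = tuple(x for i, x in enumerate(rhs) if i not in drop)
                    if cand != () or lhs == start:
                        out.add(cand)
        new[lhs] = out
    return new
```

Running `remove_epsilon` a second time returns the same grammar, as the proof predicts: the output contains no S-nullable non-terminals, so the second pass neither drops nor adds rules.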
http://xrpp.iucr.org/Ba/ch2o2v0001/sec2o2o3/
International Tables for Crystallography, Volume B: Reciprocal space. Edited by U. Shmueli.

International Tables for Crystallography (2006). Vol. B, ch. 2.2, pp. 210–215.

## Section 2.2.3. Origin specification

C. Giacovazzo (Dipartimento Geomineralogico, Campus Universitario, I-70125 Bari, Italy; correspondence e-mail: c.giacovazzo@area.ba.cnr.it)

### 2.2.3. Origin specification

• (a) Once the origin has been chosen, the symmetry operators $\mathbf{C}_s = (\mathbf{R}_s, \mathbf{T}_s)$ and, through them, the algebraic form of the s.f. remain fixed. A shift of the origin through a vector with coordinates $\mathbf{X}_0$ transforms $\varphi_{\mathbf{h}}$ into $$\varphi_{\mathbf{h}}' = \varphi_{\mathbf{h}} - 2\pi \mathbf{h} \cdot \mathbf{X}_0 \tag{2.2.3.1}$$ and the symmetry operators $\mathbf{C}_s$ into $\mathbf{C}_s' = (\mathbf{R}_s', \mathbf{T}_s')$, where $$\mathbf{R}_s' = \mathbf{R}_s, \qquad \mathbf{T}_s' = \mathbf{T}_s + (\mathbf{R}_s - \mathbf{I})\mathbf{X}_0, \qquad s = 1, 2, \ldots, m. \tag{2.2.3.2}$$

• (b) Allowed or permissible origins (Hauptman & Karle, 1953, 1959) for a given algebraic form of the s.f. are all those points in direct space which, when taken as origin, maintain the same symmetry operators $\mathbf{C}_s$. The allowed origins will therefore correspond to those points having the same symmetry environment, in the sense that they are related to the symmetry elements in the same way. For instance, for the centrosymmetric functional form of the s.f., the allowed origins in Pmmm are the eight inversion centres. To each functional form of the s.f. a set of permissible origins will correspond.

• (c) A translation between permissible origins will be called a permissible or allowed translation. Trivial allowed translations correspond to the lattice periods or to their multiples. A change of origin by an allowed translation does not change the algebraic form of the s.f. Thus, according to (2.2.3.2), all origins allowed by a fixed functional form of the s.f. will be connected by translational vectors $\mathbf{X}_0$ such that $$(\mathbf{R}_s - \mathbf{I})\mathbf{X}_0 = \mathbf{V}, \qquad s = 1, 2, \ldots, m, \tag{2.2.3.3}$$ where $\mathbf{V}$ is a vector with zero or integer components. In centred space groups, an origin translation corresponding to a centring vector $\mathbf{B}_v$ does not change the functional form of the s.f. Therefore all vectors $\mathbf{B}_v$ represent permissible translations.
will then be an allowed translation (Giacovazzo, 1974) not only when, as imposed by (2.2.3.3), the difference is equal to one or more lattice units, but also when, for any s, the condition is satisfied. We will call any set of cs. or ncs. space groups having the same allowed origin translations a Hauptman–Karle group (H–K group). The 94 ncs. primitive space groups, the 62 primitive cs. groups, the 44 ncs. centred space groups and the 30 cs. centred space groups can be collected into 13, 4, 14 and 5 H–K groups, respectively (Hauptman & Karle, 1953, 1956; Karle & Hauptman, 1961; Lessinger & Wondratschek, 1975). In Tables 2.2.3.1 –2.2.3.4 the H–K groups are given together with the allowed origin translations. Table 2.2.3.1| top | pdf | Allowed origin translations, seminvariant moduli and phases for centrosymmetric primitive space groups H–K group Space group Pmna Pcca Pbam Pccn Pbcm Pmmm Pnnm Pnnn Pmmn Pccm Pbcn Pban Pbca Pmma Pnma Pnna Allowed origin translations (0, 0, 0); (0, 0, 0) (0, 0, 0) (0, 0, 0) ; ; ; Vector seminvariantly associated with (l) Seminvariant modulus (2, 2, 2) (2, 2) (2) (2) Seminvariant phases Number of semindependent phases to be specified 3 2 1 1 Table 2.2.3.2| top | pdf | Allowed origin translations, seminvariant moduli and phases for noncentrosymmetric primitive space groups H–K group Space group P1 P2 Pm P222 Pmm2 P4 P3 P312 P31m P321 R3 R32 Pc P422 P31c R3m P23 Pcc2 P6 R3c Pma2 P3m1 P6 P622   P432 P4mm P3c1 Pnc2 P4bm Pba2 P4cc     P6mm Pnn2 P4nc     P6cc Allowed origin translations (x, y, z) (0, y, 0) (x, 0, z) (0, 0, 0) (0, 0, z) (0, 0, z) (0, 0, 0) (0, 0, z) (0, 0, 0) (0, 0, z) (0, 0, 0) (x, x, x) (0, 0, 0) Vector seminvariantly associated with (h, k, l) (h, k, l) (h, k, l) (h, k, l) (h, k, l) (l) (l) Seminvariant modulus (0, 0, 0) (2, 0, 2) (0, 2, 0) (2, 2, 2) (2, 2, 0) (2, 0) (2, 2) (3, 0) (6) (0) (2) (0) (2) Seminvariant phases if if ; (mod 3) (mod 6)       ; Allowed variations for the semindependent phases , if , if , if , if , if if 
(mod 3) if (mod 2) Number of semindependent phases to be specified 3 3 3 3 3 2 2 2 1 1 1 1 1 Table 2.2.3.3| top | pdf | Allowed origin translations, seminvariant moduli and phases for centrosymmetric non-primitive space groups H–K group Space groups Immm Fmmm Ibam Fddd Cmcm Ibca Cmca Imma Cmmm Cccm Cmma Ccca Allowed origin translations (0, 0, 0) (0, 0, 0) (0, 0, 0) (0, 0, 0) (0, 0, 0) Vector seminvariantly associated with (l) Seminvariant modulus (2, 2) (2, 2) (2) (2) (1, 1, 1) Seminvariant phases ; ; All Number of semindependent phases to be specified 2 2 1 1 0 Table 2.2.3.4| top | pdf | Allowed origin translations, seminvariant moduli and phases for noncentrosymmetric non-primitive space groups H–K group Space group C2 Cm Cmm2 C222 Amm2 Imm2 I222 F432 F222 I4 I422 Fmm2 I23 Cc Abm2 Iba2 F23 Fdd2 Ccc2   Ama2 Ima2     I4mm   I432 Aba2       I4cm Allowed origin translations (0, y, 0) (x, 0, z) (0, 0, z) (0, 0, 0) (0, 0, z) (0, 0, z) (0, 0, 0) (0, 0, 0) (0, 0, 0) (0, 0, z) (0, 0, 0) (0, 0, 0) (0, 0, z) (0, 0, 0) Vector seminvariantly associated with (k, l) (h, l) (h, l) (h, l) (h, l) (h, l) (h, l) (l) (l) (l) Seminvariant modulus (0, 2) (0, 0) (2, 0) (2, 2) (2, 0) (2, 0) (2, 2) (2) (4) (0) (2) (4) (0) (1, 1, 1) Seminvariant phases with (mod 4) with (mod 4) All Allowed variations for the semindependent phases if (mod 2) if (mod 2) if (mod 2) if 1 (mod 2) All Number of semindependent phases to be specified 2 2 2 2 2 2 2 1 1 1 1 1 1 0 • (d) Let us consider a product of structure factors being integer numbers. The factor is the phase of the product (2.2.3.5). A structure invariant (s.i.) is a product (2.2.3.5) such that Since are usually known from experiment, it is often said that s.i.'s are combinations of phases for which (2.2.3.6) holds. , , , , are examples of s.i.'s for . The value of any s.i. does not change with an arbitrary shift of the space-group origin and thus it will depend on the crystal structure only. • (e) A structure seminvariant (s.s.) 
is a product of structure factors [or a combination of phases (2.2.3.7)] whose value is unchanged when the origin is moved by an allowed translation. Let $\mathbf{X}_s$ be the permissible origin translations of the space group. Then the product (2.2.3.5) [or the sum (2.2.3.7)] is an s.s. if, in accordance with (2.2.3.1), $(m_1\mathbf{h}_1+m_2\mathbf{h}_2+\cdots+m_n\mathbf{h}_n)\cdot\mathbf{X}_s = r$ (2.2.3.8), where $r$ is an integer (positive, null or negative). Conditions (2.2.3.8) can be written in the following more useful form (Hauptman & Karle, 1953): $(m_1\mathbf{h}_1+m_2\mathbf{h}_2+\cdots+m_n\mathbf{h}_n)_s \equiv 0 \pmod{\boldsymbol{\omega}_s}$ (2.2.3.9), where $\mathbf{h}_s$ is the vector seminvariantly associated with the vector $\mathbf{h}$ and $\boldsymbol{\omega}_s$ is the seminvariant modulus. In Tables 2.2.3.1–2.2.3.4, the reflection $\mathbf{h}_s$ seminvariantly associated with $\mathbf{h} = (h, k, l)$, the seminvariant modulus $\boldsymbol{\omega}_s$ and the seminvariant phases are given for every H–K group. The symbol of any group (cf. Giacovazzo, 1974) has the structure $\mathbf{h}_s L \boldsymbol{\omega}_s$, where L stands for the lattice symbol. This symbol is underlined if the space group is cs. By definition, if the class of permissible origins has been chosen, that is to say, if the algebraic form of the symmetry operators has been fixed, then the value of an s.s. does not depend on the origin but on the crystal structure only.
• (f) Suppose that we have chosen the symmetry operators and thus fixed the functional form of the s.f.'s and the set of allowed origins. In order to describe the structure in direct space a unique reference origin must be fixed. Thus the phase-determining process must also lead to a unique permissible origin consistent with the values assigned to the phases. More specifically, at the beginning of the structure-determining process by direct methods we shall assign as many phases as necessary to define a unique origin among those allowed (and, as we shall see, possibly to fix the enantiomorph). From the theory developed so far it is obvious that arbitrary phases can be assigned to one or more s.f.'s if there is at least one allowed origin which, fixed as the origin of the unit cell, will give those phase values to the chosen reflections.
The concept of linear dependence will help us to fix the origin.
• (g) $n$ phases $\varphi_{\mathbf{h}_1}, \ldots, \varphi_{\mathbf{h}_n}$ are linearly semidependent (Hauptman & Karle, 1956) when the $n$ vectors $\mathbf{h}_{js}$ seminvariantly associated with the $\mathbf{h}_j$ are linearly dependent modulo $\boldsymbol{\omega}_s$, $\boldsymbol{\omega}_s$ being the seminvariant modulus of the space group. In other words, when $a_1\mathbf{h}_{1s}+a_2\mathbf{h}_{2s}+\cdots+a_n\mathbf{h}_{ns} \equiv 0 \pmod{\boldsymbol{\omega}_s}$, with the integers $a_j$ not all congruent to zero (2.2.3.10), is satisfied. The second condition means that at least one $a_j$ exists that is not congruent to zero modulo each of the components of $\boldsymbol{\omega}_s$. If (2.2.3.10) is not satisfied for any $n$-set of integers $a_j$, the phases $\varphi_{\mathbf{h}_j}$ are linearly semindependent. If (2.2.3.10) is valid for $n = 1$ and $a_1 = 1$, then $\mathbf{h}_1$ is said to be linearly semidependent and $\varphi_{\mathbf{h}_1}$ is an s.s. It may be concluded that a seminvariant phase is linearly semidependent, and, vice versa, that a phase linearly semidependent is an s.s. In Tables 2.2.3.1–2.2.3.4 the allowed variations (which are those due to the allowed origin translations) for the semindependent phases are given for every H–K group. If $\varphi_{\mathbf{h}_1}$ is linearly semindependent its value can be fixed arbitrarily because at least one origin compatible with the given value exists. Once $\varphi_{\mathbf{h}_1}$ is assigned, the necessary condition to be able to fix a second phase $\varphi_{\mathbf{h}_2}$ is that it should be linearly semindependent of $\varphi_{\mathbf{h}_1}$. Similarly, the necessary condition to be able arbitrarily to assign a third phase $\varphi_{\mathbf{h}_3}$ is that it should be linearly semindependent from $\varphi_{\mathbf{h}_1}$ and $\varphi_{\mathbf{h}_2}$. In general, the number of linearly semindependent phases is equal to the dimension of the seminvariant vector (see Tables 2.2.3.1–2.2.3.4). The reader will easily verify in the H–K group (h, k, l) P (2, 2, 2) that the three phases $\varphi_{oee}$, $\varphi_{eoe}$, $\varphi_{eeo}$ define the origin (o indicates odd, e even).
• (h) From the theory summarized so far it is clear that a number of semindependent phases, equal to the dimension of the seminvariant vector, may be arbitrarily assigned in order to fix the origin. However, it is not always true that only one allowed origin compatible with the given phases exists.
An additional condition is required such that only one permissible origin should lie at the intersection of the lattice planes corresponding to the origin-fixing reflections (or on the lattice plane $\mathbf{h}$ if one reflection is sufficient to define the origin). It may be shown that the condition is verified if the determinant formed with the vectors seminvariantly associated with the origin reflections, reduced modulo $\boldsymbol{\omega}_s$, has the value ±1. In other words, such a determinant should be primitive modulo $\boldsymbol{\omega}_s$.
• (i) If an s.s. or an s.i. has a general value $\varphi$ for a given structure, it will have a value $-\varphi$ for the enantiomorph structure. If $\varphi = 0, \pi$ the s.s. has the same value for both enantiomorphs. Once the origin has been assigned, in ncs. space groups the sign of a given s.s. with $\varphi \ne 0, \pi$ can be assigned to fix the enantiomorph. In practice it is often advisable to use an s.s. or an s.i. whose value is as near as possible to $\pm\pi/2$.

### References

Giacovazzo, C. (1974). A new scheme for seminvariant tables in all space groups. Acta Cryst. A30, 390–395.
Hauptman, H. & Karle, J. (1953). Solution of the phase problem. I. The centrosymmetric crystal. Am. Crystallogr. Assoc. Monograph No. 3. Dayton, Ohio: Polycrystal Book Service.
Hauptman, H. & Karle, J. (1956). Structure invariants and seminvariants for non-centrosymmetric space groups. Acta Cryst. 9, 45–55.
Hauptman, H. & Karle, J. (1959). Table 2. Equivalence classes, seminvariant vectors and seminvariant moduli for the centered centrosymmetric space groups, referred to a primitive unit cell. Acta Cryst. 12, 93–97.
Karle, J. & Hauptman, H. (1961). Seminvariants for non-centrosymmetric space groups with conventional centered cells. Acta Cryst. 14, 217–223.
Lessinger, L. & Wondratschek, H. (1975). Seminvariants for space groups. Acta Cryst. A31, 521.
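The determinant criterion in (h) can be sketched numerically. The check below assumes the H–K group (h, k, l) P (2, 2, 2), where only the parities of h, k, l matter; the reflection indices are illustrative choices, not values taken from the tables above:

```python
def det3(m):
    # determinant of a 3x3 integer matrix by cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fixes_origin(refs):
    """True if the parity determinant of three reflections is +/-1 (mod 2),
    i.e. the triple defines a unique origin for seminvariant modulus (2, 2, 2)."""
    parities = [[h % 2 for h in ref] for ref in refs]
    return det3(parities) % 2 == 1

# parities (oee), (eoe), (eeo): determinant 1 (mod 2), so the origin is fixed
assert fixes_origin([(3, 0, 2), (0, 1, 4), (2, 2, 5)])

# parity vectors that are linearly dependent mod 2 fail the criterion
assert not fixes_origin([(1, 1, 0), (0, 1, 1), (1, 0, 1)])
```

The second triple fails because the sum of its parity vectors is congruent to zero modulo 2, matching the linear-semidependence condition (2.2.3.10).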
On the set of complex points of a 2-sphere

Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 5, Volume 8 (2009) no. 1, pp. 73-87.

Let $G$ be a strictly pseudoconvex domain in $\mathbb{C}^2$ with $C^\infty$-smooth boundary $\partial G$. Let $S$ be a 2-dimensional sphere embedded into $\partial G$. Denote by $\mathcal{E}$ the set of all complex points on $S$. We study how the structure of the set $\mathcal{E}$ depends on the smoothness of $S$.

Classification: 32T15, 32V40, 53D10

Shcherbina, Nikolay (Department of Mathematics, University of Wuppertal, 42119 Wuppertal, Germany)
# How do you write a quadratic function in intercept form whose graph has x intercepts -7, -2 and passes through (-5, -6)?

Oct 9, 2017

$y = (x+7)(x+2)$

#### Explanation:

Given the x-intercepts (zeros) of a quadratic function, say $x=a$ and $x=b$, the factors are $(x-a)$ and $(x-b)$, and the quadratic function can be expressed as a product of the factors:

$y = k(x-a)(x-b)$

where $k$ is a multiplier that can be found if we are given a point on the parabola.

Here $x=-7$ and $x=-2$, so $(x+7)$ and $(x+2)$ are the factors:

$y = k(x+7)(x+2)$

To find $k$, substitute $(-5,-6)$ into the equation:

$-6 = k(2)(-3) = -6k \Rightarrow k = 1$

$\Rightarrow y = (x+7)(x+2) \leftarrow$ in intercept form
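A quick numerical check of this result (a sketch, not part of the original answer):

```python
def f(x):
    # intercept form y = k(x - a)(x - b) with a = -7, b = -2, k = 1
    return (x + 7) * (x + 2)

# the x-intercepts are roots of f
assert f(-7) == 0 and f(-2) == 0

# the parabola passes through (-5, -6)
assert f(-5) == -6

# solving -6 = k * (-5 + 7) * (-5 + 2) for k gives 1
k = -6 / ((-5 + 7) * (-5 + 2))
assert k == 1.0
```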
### Deemo's blog

By Deemo, history, 6 years ago,

Can anyone tell me how to solve this problem? http://codeforces.com/contest/177/problem/G2

» 6 years ago, # | ← Rev. 5 →   +13

Let's calculate for each prefix s[1..i] the minimum Fibonacci index min_ind[i] such that s[1..i] is a suffix of fib[min_ind[i]] and s[i + 1..n] is a prefix of fib[min_ind[i] + 1]. Because the relation

fib[i] = fib[i - 1] fib[i - 2]

can be written equivalently as

fib[i] = fib[i - 2] fib[i - 3] fib[i - 2]

it follows that min_ind[i] is either ∞ or less than n (because fib[i] has all the prefixes of fib[i - 2]). After that, it is essentially a linear recurrence of the type

DP[i] = DP[i - 1] + DP[i - 2] + occurrences_less_than_inf

for all i ≥ n (or n + 2 or smth.). You basically have to compute DP[n] and DP[n + 1] and occurrences_less_than_inf, and then do matrix exponentiation. I think that should work :).
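The matrix-exponentiation step mentioned at the end can be sketched as follows; the concrete values of DP[n], DP[n+1] and the additive constant are placeholders for whatever the problem-specific precomputation yields:

```python
def mat_mul(A, B, mod):
    # multiply two matrices modulo mod
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) % mod
             for j in range(p)] for i in range(n)]

def mat_pow(A, e, mod):
    # fast exponentiation of a square matrix by repeated squaring
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A, mod)
        A = mat_mul(A, A, mod)
        e >>= 1
    return R

def solve(dp_n, dp_n1, c, steps, mod=10**9 + 7):
    """Advance DP[i] = DP[i-1] + DP[i-2] + c by `steps` steps,
    starting from DP[n] = dp_n and DP[n+1] = dp_n1; returns DP[n + steps]."""
    # state vector (DP[i+1], DP[i], 1); the last row keeps the constant c alive
    M = [[1, 1, c],
         [1, 0, 0],
         [0, 0, 1]]
    P = mat_pow(M, steps, mod)
    v = (dp_n1, dp_n, 1)
    return sum(P[1][j] * v[j] for j in range(3)) % mod

# sanity checks: with c = 0 and DP[0] = 0, DP[1] = 1 this is Fibonacci
assert solve(0, 1, 0, 10) == 55
# with c = 1 and DP[0] = DP[1] = 0: 0, 0, 1, 2, 4, 7, 12, 20, ...
assert solve(0, 0, 1, 7) == 20
```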
# How to calculate gravity path integrals about an AdS background?

+ 5 like - 0 dislike
591 views

Suppose I have some Lagrangian of some higher-derivative gravity, possibly coupled to matter fields. Now I want to fluctuate it to quadratic order about an AdS background and calculate the 1-loop partition function. Can someone point to a reference where such a thing might be done? This sounds like something standard which someone would have done... I can't see how one can even write down the fluctuated Lagrangian to quadratic order... And even if I wrote that down, how does one impose that the background is AdS? Then the issues of gauge fixing and ghosts are only added complications. I can imagine calculating the first-order variation in the Christoffel symbols and the Riemann and Ricci tensors, but that goes only so far.

This post imported from StackExchange Physics at 2014-06-08 08:18 (UCT), posted by SE-user user6818
asked Jun 6, 2014

What is the problem to take the Lagrangian and expand it to any order? As an example of how people expand you may have a look at arxiv.org/abs/hep-th/9901121 or other classical papers like Freedman, D'Hoker, ... There are not so many results about one-loop diagrams with legs in AdS. Fast googling gives arxiv.org/abs/hep-th/0506185 and arxiv.org/abs/1007.2653 This post imported from StackExchange Physics at 2014-06-08 08:18 (UCT), posted by SE-user John

@John Thanks! Let me see the references. What I am confused about is how the gauge is fixed and ghosts introduced for doing this path integral over metric fluctuations... also has anyone calculated 4-point stress-tensor correlations from the gravity side? This post imported from StackExchange Physics at 2014-06-08 08:18 (UCT), posted by SE-user user6818

Before Faddeev and Popov discovered their ghosts people had already computed a lot in various gauges. It seems that AdS/CFT computations are still difficult enough for people, so that they don't want to be so general as to introduce ghosts.
At least you cannot see ghosts in any of the classical papers. The best you can find is four-point functions of scalars from exchange of something in the bulk, arxiv.org/pdf/1404.5625.pdf and refs therein. This post imported from StackExchange Physics at 2014-06-08 08:18 (UCT), posted by SE-user John
## Practical Evolutionary Algorithms

A practical book on Evolutionary Algorithms that teaches you the concepts and how they’re implemented in practice.

Get the book

## Preamble

```python
import numpy as np  # for multi-dimensional containers
import pandas as pd  # for DataFrames
import plotly.graph_objects as go  # for data visualisation
```

## Introduction

Objective functions are perhaps the most important part of any Evolutionary Algorithm, whilst simultaneously being the least important part too. They are important because they encapsulate the problem the Evolutionary Algorithm is trying to solve, and they are unimportant because they have no algorithmic part in the operation of the Evolutionary Algorithm itself.

Put simply, objective functions expect some kind of solution input, i.e. the problem variables, and they use this input to calculate some output, i.e. the objective values. These objective values can be considered to be how the problem variables of a solution scored with respect to the current problem. For example, the input could be variables that define the components of a vehicle, the objective function could be a simulation which tests the vehicle in some environment, and the objective values could be the average speed and ride comfort of the vehicle.

In the figure below, we have highlighted the stage at which the objective function is typically invoked - the evaluation stage. It is after this that we find out whether a potential solution to the problem performs well or not, and have some idea about trade-offs between multiple solutions using the objective values. The stage that typically follows this is the termination stage, where we can use this information to determine whether we stop the optimisation process or continue.

## Objective Functions in General

Let's have a quick look at what we mean by an objective function. We can express an objective function mathematically.
$$f(x) = (f_1(x), f_2(x), \dots, f_\mathrm{M}(x)) \tag{1}$$

Before we can talk about this, we need to explain what $x$ is. In this case, $x$ is a solution to the problem, and it's defined as a vector of $\mathrm{D}$ decision variables.

$$x = \langle x_1, x_2, \dots, x_\mathrm{D} \rangle \tag{2}$$

Let's assume that the number of decision variables for a problem is 8 so, in this case, $\mathrm{D}=8$. We can create such a solution using Python and initialise it with random numbers.

```python
D = 8
x = np.random.rand(D)
print(x)
```

```
[0.7409217  0.64042696 0.37010243 0.46896916 0.56932005 0.65240768
 0.19321112 0.27596008]
```

Now we have a single solution consisting of randomly initialised values for $x_1$ through to $x_8$. It should be noted that this is a real-encoded solution, which is a distinction we make now as we will discuss solution encoding later in this book.

Note: When running this notebook for yourself, you should expect the numbers to be different because we are generating random numbers.

Let's have a look at $f(x)$ in Equation 1. This is a function which takes the solution $x$ as input and then uses it for some calculations before giving us some output. The subscript $\mathrm{M}$ indicates the number of objectives we can expect, so for a two-objective problem, we can say $\mathrm{M}=2$. For the sake of example, let's say that $f_1(x)$ will calculate the sum of all elements in $x$, and $f_2(x)$ will calculate the product of all elements in $x$.

$$f_1(x) = \sum_{k=1}^{n} x_k \tag{3.1}$$

$$f_2(x) = \prod_{k=1}^{n} x_k \tag{3.2}$$

We can implement such an objective function in Python quite easily.
```python
def f(x):
    f1 = np.sum(x)   # Equation (3.1)
    f2 = np.prod(x)  # Equation (3.2)
    return np.array([f1, f2])
```

Now let's invoke this function and pass in the solution $x$ that we made earlier. We'll store the results in a variable named $y$, in line with Equation 4.

$$y_\mathrm{M} = f(x) \tag{4}$$

Which translated to Python will look something like the following.

```python
y = f(x)
print(y)
```

```
[3.91131918e+00 1.63103045e-03]
```

This has returned our two objective values which quantify the performance of the corresponding solution's problem variables. There is much more to an objective function than what we've covered here, and the objectives we have defined here are entirely arbitrary. Nonetheless, we have implemented a two-objective (or bi-objective) function which we may wish to minimise or maximise.

Let's use Python to generate 50 more solutions $x$ with $\mathrm{D}=8$ variables and calculate their objective values according to Equations 3.1 and 3.2.

```python
objective_values = np.empty((0, 2))

for i in range(50):
    x = np.random.rand(8)
    y = f(x)
    objective_values = np.vstack([objective_values, y])

# convert to DataFrame
objective_values = pd.DataFrame(objective_values, columns=["f1", "f2"])
```

We won't output these 50 solutions in the interest of saving space, but let's instead visualise all 50 of them using a scatter plot.

```python
fig = go.Figure(
    go.Scatter(x=objective_values.f1, y=objective_values.f2, mode="markers")
)
fig.show()
```

## Conclusion

In this section, we covered the very basics in what we mean by an objective function. We expressed the concept mathematically and then made a direct implementation using Python. We then generated a set of 50 solutions, calculated the objective values for each one, and plotted the objective space using a scatterplot. In the next section, we will look at a popular and synthetic objective function named ZDT1, following a similar approach where we implement a Python function from its mathematical form.
## ISBN

978-1-915907-00-4

## Cite

Rostami, S. (2020). Practical Evolutionary Algorithms. Polyra Publishing.
### Home > APCALC > Chapter 4 > Lesson 4.1.3 > Problem 4-37

4-37. Sketch a graph of $f(x) = x^{3} - 2x^{2}$. At what point(s) will the line tangent to $f$ be parallel to the secant line through $(0, f(0))$ and $(2, f(2))$?

Calculate the slope of the secant between $(0, f(0))$ and $(2, f(2))$:

$$\text{slope of secant} = \frac{f(2)-f(0)}{2-0} = \frac{0-0}{2} = 0$$

We want to know where the slope of the tangent is the same as the slope of the secant. Recall that the slope of the tangent is also known as $f^\prime(x)$, so find where $f^\prime(x) = 0$.

The slope of the tangent $=$ the slope of the secant at coordinate points ( ________, _________ ) and ( ________, _________ ). You must analytically compute the exact coordinates, but note that the slope of tangent lines is $0$ at the local maximum and local minimum.
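As a check on the analytic computation (a sketch, not part of the original lesson): with $f(x) = x^3 - 2x^2$ we get $f'(x) = 3x^2 - 4x = x(3x - 4)$, which vanishes at $x = 0$ and $x = 4/3$, matching the secant slope of $0$:

```python
def f(x):
    return x**3 - 2 * x**2

def f_prime(x):
    return 3 * x**2 - 4 * x  # derivative of x^3 - 2x^2

# slope of the secant through (0, f(0)) and (2, f(2))
secant_slope = (f(2) - f(0)) / (2 - 0)
assert secant_slope == 0.0

# f'(x) = x(3x - 4) = 0  =>  x = 0 or x = 4/3
for x in (0.0, 4.0 / 3.0):
    assert abs(f_prime(x) - secant_slope) < 1e-12
```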
The point (0), however, is not closed; in fact, (0) = spec(Z). o��\$Ɵ���a8��weSӄ����j}��-�ۢ=�X7�M^r�ND'�����`�'�p*i��m�]�[+&�OgG��|]�%��4ˬ��]R�)������R3�L�P���Y���@�7P�ʖ���d�]�Uh�S�+Q���C�׸mF�dqu?�Wo�-���A���F�iK� �%�.�P��-��D���@�� ��K���D�B� k�9@�9('�O5-y:Va�sQ��*;�f't/��. The interior of a set, [math]S[/math], in a topological space is the set of points that are contained in an open set wholly contained in [math]S[/math]. For (i), note that fnpg= N n[p 1 i=1 fi+ npg. Fully ex­pressed, for X a met­ric space with met­ric d, x is a point of clo­sure of S if for every r > 0, there is a y i… T is closed under arbitrary unions and nite intersections. Each time, the collection of points was either finite or countable and the most important property of a point, in a sense, was its location in some coordinate or number system. Solution: The solution is analogous to that for exercise 30.5(b). uncountable number of limit points. 3 0 obj << First the trivial case: If Xis nite then the topology is the discrete topology, so everything is open and closed and boundaries are empty. Hint. Solution: Part (a) This is an interesting problem with an analog to the density of rational numbers in R under the standard topology. If X is the Euclidean space R, then the closure of the set Q of rational numbers is the whole space R. We say that Q is dense in R. If X is the complex plane C = R 2, then cl({z in C: |z| > 1}) = {z in C: |z| ≥ 1}. :A subset V of Xis said to be closed if XnV belongs to : Exercise 4.11 : ([1, H. Fu rstenberg]) Consider N with the arithmetic pro-gression topology. So, for each prime number p, the point (p) 2 spec(Z) is closed since (p) = V(p). > Why is the closure of the interior of the rational numbers empty? The empty set ;and the whole space R are closed. 
To see this, consider a closed set. For example, if X is the set of rational numbers, with the usual relative topology induced by the Euclidean space R, and if S = {q in Q : q² > 2, q > 0}, then S is closed in Q, and the closure of S in Q is S; however, the closure of S in the Euclidean space R is the set of all real numbers … closure of a rational language in the profinite topology. The algebraic closure ... x - y|}; the completion is the field of real numbers. Closure is a property that is defined for a set of numbers and an operation. Then N(x; ε) ⊆ U_i for every i, 1 ≤ i ≤ m. Hence N(x; ε) ⊆ U, and U is open. (2) There are infinitely many prime numbers. The closure of a set also depends upon in which space we are taking the closure. Definition 5.14. For Q in R, Q is not closed. Open bases are more often considered than closed ones, hence if one speaks simply of a base of a topological space, an open base is meant. 2) The union of a finite number of closed sets is closed. To describe the topology on spec(Z) note that the closure of any point is the set of prime ideals containing that point. When regarding a base of an open, or closed, topology, it is common to refer to it as an open or closed base of the given topological space. Convergence Definition Example: Consider the set of rational numbers $$\mathbb{Q} \subseteq \mathbb{R}$$ (with usual topology); then the only closed set containing $$\mathbb{Q}$$ in $$\mathbb{R}$$ is $$\mathbb{R}$$ itself. Basic Point-Set Topology: this means that f(x) is not in O. On the other hand, x0 was in f^{-1}(O), so f(x0) is in O. Since O was assumed to be open, there is an interval (c, d) about f(x0) that is contained in O. The points f(x) that are not in O are therefore not in (c, d), so they remain at least a fixed positive distance from f(x0). To summarize: there are points 1 Open and closed sets. First, some commonly used notation.
It is known that the profinite closure of a rational language is rational too [16, 8].
2021-04-20 03:32:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8742678761482239, "perplexity": 590.498707320256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039375537.73/warc/CC-MAIN-20210420025739-20210420055739-00091.warc.gz"}
https://eepower.com/market-insights/using-evs-as-mobile-battery-storage-could-boost-decarbonization/
Market Insights

# Using EVs as Mobile Battery Storage Could Boost Decarbonization

December 12, 2022 by Claire Turvill

## MIT researchers have published a paper on vehicle-to-grid (V2G) technology, which allows electric vehicles to return energy to the power grid and provide an eventually renewable power alternative.

Electric and hybrid vehicles accounted for 11 percent of market sales in 2021. Of those sales, 4.8 percent were battery electric vehicles (BEVs) and plug-in hybrid electric vehicles (PHEVs). The drivers of these cars are familiar with using charging stations while at home or work to give their batteries a full charge.

##### A battery energy storage system in a garage. Image used courtesy of Adobe Stock

A Massachusetts Institute of Technology (MIT) team has published a paper in the Energy Advances journal that considers the possibility of reversing the charge flow so that when plugged in, cars with full batteries could give back to the power grid. The hope is that as the number of electric vehicles (EVs) on the road continues to increase rapidly, the vehicle-to-grid (V2G) technology could become a cost-effective and mobile energy storage option for a smoother transition into renewable energy.

Jim Owens, the lead author on the MIT paper and a doctoral student at MIT in Chemical Engineering, believes V2G offers the possibility of boosting renewable energy growth and decreasing dependency on stationary storage and always-on generators.

### Research Team Calculates Energy Savings from V2G

A challenge in increasing the use of renewable energy sources is the capacity and the limited number of existing energy storage batteries. Solar and wind provide irregular energy production, and the batteries necessary to hold the energy for later use can be large and incredibly expensive. Buildings and homes that use solar panels tend to rely on the power grid for backup.

##### Generation and demand profiles over a week with 50 percent V2G participation.
Image used courtesy of Energy Advances

Models that consider the impact of tight carbon constraints are the most representative of how V2G technology could be beneficial in supporting a decarbonized future. While the percentage of renewable energy sources powering the U.S. power grid increases every year, nuclear, coal, and natural gas still account for 79 percent of electricity generation as of 2021. This means that unless an EV is plugged into a charging station that is directly powered by a solar panel, the electricity used to charge EVs is coming from fossil fuels.

V2G supports a smoother transition to an all-electric power grid as an economical and long-term energy storage option. Adapting to an all-electric power grid requires ample battery storage to hold onto electricity when not actively produced by solar and wind. Ideally, EVs will be charged using renewable sources and then be able to give back to the power grid in times of low production. However, until EVs are charged entirely from renewable sources, using their battery power to give back to the grid will not be an inherently net-zero process.

### V2G Prospects Point to Promising Future

EV owners are not expected to jump at the opportunity to give their car’s battery power to a utility or power systems operator. Any future software used to facilitate the battery dispatch could be tailored to individual needs to best suit each car owner. Similar to the adoption of residential solar panels, car owners could be paid for their contribution back to the power grid.

Outside of personal vehicles, Owens is also considering the impact of heavy-duty EVs, such as delivery trucks from Amazon and FedEx, that are likely to be early adopters of EVs. These trucks have a regular schedule during the day and are mostly idle overnight, making them appealing to V2G services.
Even if fleet and personal vehicles choose not to participate round-the-clock, expanding access to V2G technology could be valuable during energy blackouts, hot-day congestion, and periods of peak demand that stress transmission lines.
2023-02-03 01:11:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3031461834907532, "perplexity": 1933.7609869982196}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.2/warc/CC-MAIN-20230202232251-20230203022251-00031.warc.gz"}
https://ccssmathanswers.com/eureka-math-geometry-module-2-lesson-1/
# Eureka Math Geometry Module 2 Lesson 1 Answer Key

## Engage NY Eureka Math Geometry Module 2 Lesson 1 Answer Key

### Eureka Math Geometry Module 2 Lesson 1 Example Answer Key

Example 1. Use construction tools to create a scale drawing of ∆ ABC with a scale factor of r = 2.

Solution 1: Draw $$\overline{A B}$$. To determine B’, adjust the compass to the length of AB. Then reposition the compass so that the point is at B, and mark off the length of $$\overline{A B}$$; label the intersection as B’. C’ is determined in a similar manner. Join B’ to C’.

Solution 2: Draw a segment that will be longer than double the length of $$\overline{A B}$$. Label one end as A’. Adjust the compass to the length of $$\overline{A B}$$, and mark off two consecutive such lengths along the segment, and label the endpoint as B’. Copy ∠A. Determine C’ along $$\overline{A B}$$ in the same way as B’. Join B’ to C’.

Example 2. Use construction tools to create a scale drawing of ∆ XYZ with a scale factor of r = $$\frac{1}{2}$$.

Which construction technique have we learned that can be used in this question that was not used in the previous two problems? We can use the construction to determine the perpendicular bisector to locate the midpoint of two sides of ∆ XYZ. As the solutions to Exercise 1 showed, the constructions can be done on other sides of the triangle (i.e., the perpendicular bisectors of $$\overline{Y Z}$$ and $$\overline{X Z}$$ are acceptable places to start.)

### Eureka Math Geometry Module 2 Lesson 1 Opening Exercise Answer Key

Above is a picture of a bicycle. Which of the images below appears to be a well-scaled image of the original? Why? Only the third image appears to be a well-scaled image since the image is in proportion to the original.

### Eureka Math Geometry Module 2 Lesson 1 Exercise Answer Key

Exercise 1. Use construction tools to create a scale drawing of ∆ DEF with a scale factor of r = 3.
What properties does your scale drawing share with the original figure? Explain how you know. By measurement, I can see that each side is three times the length of the corresponding side of the original figure and that all three angles are equal in measurement to the three corresponding angles in the original figure.

Exercise 2. Use construction tools to create a scale drawing of ∆ PQR with a scale factor of r = $$\frac{1}{4}$$. What properties do the scale drawing and the original figure share? Explain how you know. By measurement, I can see that all three sides are each one-quarter the lengths of the corresponding sides of the original figure, and all three angles are equal in measurement to the three corresponding angles in the original figure.

Exercise 3. Triangle EFG is provided below, and one angle of scale drawing ∆ E’F’G’ is also provided. Use construction tools to complete the scale drawing so that the scale factor is r = 3. What properties do the scale drawing and the original figure share? Explain how you know. Extend either ray from G’. Use the compass to mark off a length equal to 3EG on one ray and a length equal to 3FG on the other. Label the ends of the two lengths E’ and F’, respectively. Join E’ to F’. By measurement, I can see that each side is three times the length of the corresponding side of the original figure and that all three angles are equal in measurement to the three corresponding angles in the original figure.

Exercise 4. Triangle ABC is provided below, and one side of scale drawing ∆ A’B’C’ is also provided. Use construction tools to complete the scale drawing and determine the scale factor. One possible solution: We can copy ∠A and ∠C at points A’ and C’ so that the new rays intersect as shown and call the intersection point B’. By measuring, we can see that A’C’ = 2AC, A’B’ = 2AB, and B’C’ = 2BC. We already know that m∠A’ = m∠A and m∠C’ = m∠C. By the triangle sum theorem, m∠B’ = m∠B.
### Eureka Math Geometry Module 2 Lesson 1 Problem Set Answer Key

Question 1. Use construction tools to create a scale drawing of ∆ ABC with a scale factor of r = 3.

Question 2. Use construction tools to create a scale drawing of ∆ ABC with a scale factor of r = $$\frac{1}{2}$$.

Question 3. Triangle EFG is provided below, and one angle of scale drawing ∆ E’F’G’ is also provided. Use construction tools to complete a scale drawing so that the scale factor is r = 2.

Question 4. Triangle MTC is provided below, and one angle of scale drawing ∆ M’T’C’ is also provided. Use construction tools to complete a scale drawing so that the scale factor is r = $$\frac{1}{4}$$.

Question 5. Triangle ABC is provided below, and one side of scale drawing ∆ A’B’C’ is also provided. Use construction tools to complete the scale drawing and determine the scale factor. The ratio of B’C’ : BC is 5 : 1, so the scale factor is 5.

Question 6. Triangle XYZ is provided below, and one side of scale drawing ∆ X’Y’Z’ is also provided. Use construction tools to complete the scale drawing and determine the scale factor. The ratio of X’Z’ : XZ is 1 : 2, so the scale factor is $$\frac{1}{2}$$.

Question 7. Quadrilateral GHIJ is a scale drawing of quadrilateral ABCD with scale factor r. Describe each of the following statements as always true, sometimes true, or never true, and justify your answer.

a. AB = GH Sometimes true, but only if r = 1.

b. m∠ABC = m∠GHI Always true because ∠GHI corresponds to ∠ABC in the original drawing, and angle measures are preserved in scale drawings.

c. $$\frac{A B}{G H}=\frac{B C}{H I}$$ Always true because distances in a scale drawing are equal to their corresponding distances in the original drawing times the scale factor r, so $$\frac{A B}{G H}=\frac{A B}{r(A B)}=\frac{1}{r}$$ and $$\frac{B C}{H I}=\frac{B C}{r(B C)}=\frac{1}{r}$$.

d.
Perimeter(GHIJ) = r ∙ Perimeter(ABCD) Always true because the distances in a scale drawing are equal to their corresponding distances in the original drawing times the scale factor r, so

Perimeter(GHIJ) = GH + HI + IJ + JG
Perimeter(GHIJ) = r(AB) + r(BC) + r(CD) + r(DA)
Perimeter(GHIJ) = r(AB + BC + CD + DA)
Perimeter(GHIJ) = r ∙ Perimeter(ABCD).

e. Area(GHIJ) = r ∙ Area(ABCD) where r ≠ 1 Never true because the area of a scale drawing is related to the area of the original drawing by the factor r². The scale factor r > 0 and r ≠ 1, so r ≠ r².

f. r < 0

One possible solution: Since the scale drawing will clearly be a reduction, use the compass to mark the number of lengths equal to the length of $$\overline{A^{\prime} B^{\prime}}$$ along $$\overline{A B}$$. Once the length of $$\overline{A^{\prime} C^{\prime}}$$ is determined to be $$\frac{1}{2}$$ the length of $$\overline{A B}$$, use the compass to find a length that is half the length of $$\overline{A B}$$ and half the length of $$\overline{B C}$$. Construct circles with radii of lengths $$\frac{1}{2}$$ AC and $$\frac{1}{2}$$ BC. By measurement, I can see that each side is $$\frac{1}{2}$$ the length of the corresponding side of the original figure and that all three angles are equal in measurement to the three corresponding angles in the original figure.
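These answers all report the same invariants: every side of the scale drawing is r times the corresponding side, angles are unchanged, the perimeter scales by r, and the area by r². A quick numeric spot-check (the triangle coordinates below are invented for illustration, not taken from the lesson's figures):

```python
import math

def dist(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dilate(points, r):
    """Dilation about the origin by scale factor r."""
    return [(r * x, r * y) for x, y in points]

def area(p, q, s):
    """Triangle area via the shoelace formula."""
    return abs((q[0] - p[0]) * (s[1] - p[1]) - (s[0] - p[0]) * (q[1] - p[1])) / 2

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)   # an arbitrary example triangle
r = 3
A2, B2, C2 = dilate([A, B, C], r)

perimeter = dist(A, B) + dist(B, C) + dist(C, A)
perimeter2 = dist(A2, B2) + dist(B2, C2) + dist(C2, A2)

# Each side (and hence the perimeter) scales by r; the area scales by r**2.
print(f"perimeter ratio {perimeter2 / perimeter:.3f}, "
      f"area ratio {area(A2, B2, C2) / area(A, B, C):.3f}")
```

Changing r or the vertices leaves the two ratios at r and r², which is exactly the justification given for Question 7 parts d and e.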
2022-05-28 07:08:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5410422086715698, "perplexity": 962.5064330019921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663013003.96/warc/CC-MAIN-20220528062047-20220528092047-00002.warc.gz"}
https://princeton.learningu.org/teach/teachers/duohead/bio.html
Welcome to Princeton Splash, a student-run organization at Princeton University

# ESP Biography

## CHRISTOPHER ZHANG, ESP Teacher

Major: Mathematics
College/Employer: Princeton
Year of Graduation: 2018

Not Available.

## Past Classes

(Look at the class archive for more.)

Law of Large Numbers and the Central Limit Theorem in Splash Spring 2017
We cover basic probability and then the two most important topics in probability and statistics.

A Handwaving Introduction to Algebraic Topology in Splash Spring 2017
Rigorous math is good, but it takes a lot of time. Non-rigorous math is also good. Without rigor, we can learn a lot of cool math without being bogged down by the technicalities. We'll talk about stuff like coffee cups and donuts, Klein bottles, fundamental groups, and projective space.

Basics of Probability in Splash Spring 16
Topics in probability to be decided. Topics may include: Kolmogorov's axioms, central limit theorem, Markov chains.

Quotient Maps i.e. Mathematical Glue in Splash Spring 15
The torus is constructed by taking a rectangle in the plane and "gluing" two parallel sides and then the other two parallel sides. Quotient maps are the rigorous way to do this. Other things that can be constructed by quotient maps include projective spaces, the Klein bottle, and other fun things.

Infinite Sums in Splash Spring 15
Proves some of the properties of infinite sums and Taylor series in calculus.
2019-03-27 01:23:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8987154960632324, "perplexity": 1898.7934239032445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912207146.96/warc/CC-MAIN-20190327000624-20190327022624-00256.warc.gz"}
http://mathoverflow.net/feeds/question/77072
Definable measure preserving isomorphisms of $p$-adic semialgebraic sets - MathOverflow
most recent 30 from http://mathoverflow.net 2013-06-18T20:44:39Z
http://mathoverflow.net/feeds/question/77072
http://www.creativecommons.org/licenses/by-nc/2.5/rdf
http://mathoverflow.net/questions/77072/definable-measure-preserving-isomorphisms-of-p-adic-semialgebraic-sets

Definable measure preserving isomorphisms of $p$-adic semialgebraic sets
Math-player 2011-10-03T20:01:58Z 2012-11-15T20:21:59Z

Hi,

Consider a $p$-adic field $K$ (finite extension $\DeclareMathOperator{\bQ}{\mathbb{Q}}$ of $\bQ_p$) in Macintyre language $\DeclareMathOperator{\cL}{\mathcal{L}}$ $\cL_{\rm Mac}$. Let $Z$ be a definable (i.e. semialgebraic) analytic subset of $K^n$ of $p$-adic dimension $d$ and let $C$ be a definable subset of $Z$ of strictly lower dimension.

My question is:

Is there an analytic definable isomorphism $f: Z \rightarrow Z \setminus C$ such that $\mid {\rm Jac}\, f \mid = 1$?

I know the answer for $n=1$ is no, but I need to know whether it is still no for $n>1$.

Translation with no model theory: by a semialgebraic set we mean a union of sets of the form $$\lbrace x \in {\bQ}_p^m \mid f(x) =0, g_1(x) \in P_{n_1}, \dots, g_k(x) \in P_{n_k} \rbrace,$$ where $P_n$ is the set $\lbrace y \in {\bQ}_p^\times \mid \exists x \in {\bQ}_p^\times, y=x^n \rbrace$. A semialgebraic function is a function whose graph is a semialgebraic set.

It was shown by Scowcroft & van den Dries and by Cluckers (see R. Cluckers: Classification of semi-algebraic p-adic sets up to semi-algebraic bijection, Journal für die reine und angewandte Mathematik, 540, 105-114 (2001), math.LO/0311434) that dimension is a semialgebraic invariant, which means that two semialgebraic sets have a semialgebraic bijection between them if and only if they have the same dimension.
(Semialgebraic sets of dimension $n$ have a semialgebraic bijection with an open subset of ${\bQ}_p^n$.) But they did not constrain the semialgebraic bijection to have $p$-adic Jacobian 1, which I do now.

Thank you

http://mathoverflow.net/questions/77072/definable-measure-preserving-isomorphisms-of-p-adic-semialgebraic-sets/78222#78222
Answer by Math-player for Definable measure preserving isomorphisms of $p$-adic semialgebraic sets
Math-player 2011-10-15T19:25:19Z 2011-10-23T10:04:54Z

Observe that $f$ is an analytic isomorphism to conclude that $Z$ and $Z\setminus C$ are both closed in the relative topology (i.e. the topology induced on $Z$ by the topology of the ambient space). But $Z \setminus C$ cannot be closed in the relative topology since $C$ is not open in the ambient topology. Hence there is no such isomorphism.
2013-06-18 20:44:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9774865508079529, "perplexity": 717.0959289234021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707186142/warc/CC-MAIN-20130516122626-00057-ip-10-60-113-184.ec2.internal.warc.gz"}
https://stackoverflow.com/questions/17988756/how-to-select-lines-between-two-marker-patterns-which-may-occur-multiple-times-w
# How to select lines between two marker patterns which may occur multiple times with awk/sed

Using awk or sed how can I select lines which occur between two different marker patterns? There may be multiple sections marked with these patterns.

For example, suppose the file contains:

abc
def1
ghi1
jkl1
mno
abc
def2
ghi2
jkl2
mno
pqr
stu

The starting pattern is abc and the ending pattern is mno, so I need the output as:

def1
ghi1
jkl1
def2
ghi2
jkl2

I am using sed to match the pattern once:

sed -e '1,/abc/d' -e '/mno/,$d' <FILE>

Is there any way in sed or awk to do it repeatedly until the end of file?

## 10 Answers

Use awk with a flag to trigger the print when necessary:

$ awk '/abc/{flag=1;next}/mno/{flag=0}flag' file
def1
ghi1
jkl1
def2
ghi2
jkl2

How does this work?

• /abc/ matches lines having this text, as /mno/ does.
• /abc/{flag=1;next} sets the flag when the text abc is found. Then, it skips the line.
• /mno/{flag=0} unsets the flag when the text mno is found.
• The final flag is a pattern with the default action, which is to print $0: if flag is equal to 1, the line is printed.

For a more detailed description and examples, together with cases when the patterns are either shown or not, see How to select lines between two patterns?.

• If you want to print everything between and including the pattern then you can use awk '/abc/{a=1}/mno/{print;a=0}a' file. – scai Nov 7, 2013 at 8:08
• Yes, @scai! Or even awk '/abc/{a=1} a; /mno/{a=0}' file - with this, putting a condition before the /mno/ we make it evaluate the line as true (and print it) before setting a=0. This way we can avoid writing print. Nov 7, 2013 at 9:43
• @scai @fedorqui For including pattern output, you can do awk '/abc/,/mno/' file Dec 4, 2013 at 6:44
• @EirNym that is a weird scenario that can be handled in very different ways: which lines would you like to print? Probably awk 'flag; /PAT1/{flag=1; next} /PAT2/{flag=0}' file would make it.
Apr 24, 2017 at 8:28
• For newbies like me, there is a doc. 1. An awk "rule" contains a "pattern" and an "action", either of which (but not both) may be omitted. So [pattern] { action } or pattern [{ action }]. 2. An action consists of one or more awk statements, enclosed in braces (‘{…}’). So the ending flag is an abbreviation of flag {print $0} Jan 7, 2021 at 8:40

Using sed:

sed -n -e '/^abc$/,/^mno$/{ /^abc$/d; /^mno$/d; p; }'

The -n option means do not print by default. The pattern looks for lines containing just abc to just mno, and then executes the actions in the { ... }. The first action deletes the abc line; the second the mno line; and the p prints the remaining lines. You can relax the regexes as required. Any lines outside the range of abc..mno are simply not printed.

• @JonathanLeffler can I know what is the purpose of using -e Dec 6, 2016 at 4:33
• @KasunSiyambalapitiya: Mostly it means I like to use it. Formally, it specifies that the next argument is (part of) the script that sed should execute. If you want or need to use several arguments to include the entire script, then you must use -e before each such argument; otherwise, it's optional (but explicit). Dec 6, 2016 at 4:41
• Nice! (I prefer sed over awk.) When using complex regular expressions, it would be nice not to have to repeat them. Isn't it possible to delete the first / last line of the "selected" range? Or to first apply the d to all lines up to the first match, and then another d to all lines starting with the second match? Dec 8, 2016 at 10:12
• (Replying to my own comment.) If there's only one section to be cut, I could tentatively solve this e.g. for LaTeX using sed -n '1,/\\begin{document}/d;/\\end{document}/d;p'. (This is cheating a little bit, since the second part does not delete up to the document end, and I would not know how to cut multiple parts as the OP asked for.)
Dec 8, 2016 at 10:50
• @JonathanLeffler what is the reason for inserting the $ mark, as in /^abc$ and others Jan 25, 2017 at 4:58

This might work for you (GNU sed):

sed '/^abc$/,/^mno$/{//!b};d' file

Delete all lines except for those between lines starting abc and mno.

• !d;//d golfs 2 characters better :-) stackoverflow.com/a/31380266/895245 Jul 13, 2015 at 9:54
• This is awesome. The {//!b} prevents the abc and mno from being included in the output, but I can't figure out how. Could you explain? Feb 16, 2017 at 17:44
• @Brendan the instruction //!b reads if the current line is neither one of the lines that match the range, break and therefore print those lines otherwise all other lines are deleted. Feb 17, 2017 at 1:14

sed '/^abc$/,/^mno$/!d;//d' file

golfs two characters better than potong's {//!b};d

The empty forward slashes // mean: "reuse the last regular expression used", and the command does the same as the more understandable:

sed '/^abc$/,/^mno$/!d;/^abc$/d;/^mno$/d' file

This seems to be POSIX: If an RE is empty (that is, no pattern is specified) sed shall behave as if the last RE used in the last command applied (either as an address or as part of a substitute command) was specified.

• I think the second solution will end up with nothing as the second command is also a range. However kudos for the first. Jul 13, 2015 at 14:20
• @potong true! I have to study more why the first one works. Thanks! Jul 13, 2015 at 14:22

From the previous response's links, the one that did it for me, running ksh on Solaris, was this:

sed '1,/firstmatch/d;/secondmatch/,$d'

• 1,/firstmatch/d: from line 1 until the first time you find firstmatch, delete.
• /secondmatch/,$d: from the first occurrence of secondmatch until the end of file, delete.
• Semicolon separates the two commands, which are executed in sequence.

• Just curious, why does the range limiter (1,) come before /firstmatch/? I'm guessing this could also be phrased '/firstmatch/1,d;/secondmatch,$d'?
Jun 25, 2018 at 0:40
• With "1,/firstmatch/d" you are saying "from line 1 until the first time you find 'firstmatch', delete". Whereas, with "/secondmatch/,$d" you say "from the first occurrence of 'secondmatch' until the end of file, delete". The semicolon separates the two commands, which are executed in sequence. Dec 20, 2018 at 17:18

something like this works for me:

file.awk:
BEGIN { record=0 }
/^abc$/ { record=1 }
/^mno$/ { record=0; print "s="s; s="" }
!/^abc|mno$/ { if (record==1) { s = s"\n"$0 } }

using: awk -f file.awk data...

edit: O_o fedorqui's solution is way better/prettier than mine.

• In GNU awk if (record=1) should be if (record==1), i.e. double = - see gawk comparison operators May 26, 2014 at 8:53

Don_crissti's answer from Show only text between 2 matching pattern?

firstmatch="abc"
secondmatch="cdf"
sed "/$firstmatch/,/$secondmatch/!d;//d" infile

which is much more efficient than AWK's application, see here.

• I don't think linking the time comparisons makes much sense here, since the requirements of the questions are quite different, hence the solutions. Sep 11, 2015 at 15:11
• I disagree because we should have some criteria to compare answers. Only a few have sed applications. Sep 11, 2015 at 16:10

perl -lne 'print if((/abc/../mno/) && !(/abc/||/mno/))' your_file

• Good to know the perl equivalent as it is a pretty good alternative to both awk and sed. Mar 8, 2017 at 23:46

I tried to use awk to print lines between two patterns while pattern2 also matches pattern1. And the pattern1 line should also be printed. e.g. source

package AAA
aaa
bbb
ccc
package BBB
ddd
eee
package CCC
fff
ggg
hhh
iii
package DDD
jjj

should have an output of

package BBB
ddd
eee

Where pattern1 is package BBB, pattern2 is package \w*. Note that CCC isn't a known value so can't be literally matched. In this case, neither @scai's awk '/abc/{a=1}/mno/{print;a=0}a' file nor @fedorqui's awk '/abc/{a=1} a; /mno/{a=0}' file works for me.
Finally, I managed to solve it by awk '/package BBB/{flag=1;print;next}/package \w*/{flag=0}flag' file, haha. A little more effort results in awk '/package BBB/{flag=1;print;next}flag;/package \w*/{flag=0}' file, to print the pattern2 line also, that is,

package BBB
ddd
eee
package CCC

This can also be done with logical operations and increment/decrement operations on a flag:

awk '/mno/&&--f||f||/abc/&&f++' file

• I'm absolutely certain that I've used awk in the past for this problem, and it was nothing like this complex. – Owl Mar 28 at 10:45
• Obviously the accepted answer in awk that predates my answer by more than 7 years is much more readable, and I saw that answer before I posted mine. I'm just throwing this one here because it is one byte shorter than the accepted answer even after renaming its variable flag to f, in the spirit of some good ol' code golf fun. :-) Mar 30 at 7:14
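For readers who want to sanity-check the accepted answer's flag technique outside of awk, here is a line-for-line Python re-implementation (not from the thread; abc/mno are the question's sample markers):

```python
# Python re-implementation of the awk flag technique: print lines strictly
# between each 'abc' ... 'mno' pair, for every such section in the input.
def between(lines, start="abc", end="mno"):
    flag = False
    out = []
    for line in lines:
        if line == start:      # awk: /abc/{flag=1;next}
            flag = True
            continue
        if line == end:        # awk: /mno/{flag=0}
            flag = False
        if flag:               # awk: flag  (default action is to print)
            out.append(line)
    return out

data = "abc def1 ghi1 jkl1 mno abc def2 ghi2 jkl2 mno pqr stu".split()
print(between(data))   # ['def1', 'ghi1', 'jkl1', 'def2', 'ghi2', 'jkl2']
```

The order of the three checks mirrors the order of the awk rules, which is what keeps the marker lines themselves out of the output.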
2022-05-24 07:10:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5277848839759827, "perplexity": 3599.8140317172533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662564830.55/warc/CC-MAIN-20220524045003-20220524075003-00217.warc.gz"}
https://www.tutorke.com/lesson/11098-the-positions-of-airport-p-and-q-are-60%EF%BF%BD%EF%BF%BDn-45%EF%BF%BD%EF%BF%BDw-and-60%EF%BF%BD%EF%BF%BDn-k%EF%BF%BD%EF%BF%BDe-respectively.aspx
# Form 4 Mathematics Paper 2 Section 2 Exam Questions and Answers

The positions of airports P and Q are (60°N, 45°W) and (60°N, K°E) respectively. It takes a plane 5 hours to travel due east from P to Q at an average speed of 600 knots. Taking R = 6370 km and pi = 22/7:

a) Calculate the value of K.
b) The local time at P is 10.45 am. What is the local time at Q when the plane reaches there? (Give the time in 12-hour clock.)
c) Find the distance PQ measured along the circle of latitude, to the nearest km.
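The page shows only the question, not the solution; the following worked sketch (not from the page, variable names mine) applies the standard approach, with the stated R = 6370 km and pi = 22/7:

```python
# Worked sketch of the standard approach; cos 60 deg = 1/2 is used exactly.
R = 6370            # km, as given
pi = 22 / 7         # as given
cos_lat = 0.5       # cos 60 degrees

# (a) Distance flown = 600 knots x 5 h = 3000 nm. Along latitude 60N,
# one degree of longitude spans 60*cos(60) = 30 nm.
nm_flown = 600 * 5
dlong = nm_flown / (60 * cos_lat)   # 100 degrees of longitude
K = dlong - 45                      # P is at 45W; flying east crosses 0 deg
print(K)                            # 55.0, so Q is (60N, 55E)

# (b) Flying east gains 4 minutes of local time per degree of longitude:
gain_min = dlong * 4                # 400 min = 6 h 40 min
# Local arrival time: 10.45 am + 5 h flight + 6 h 40 min = 10.25 pm at Q.

# (c) Distance along the circle of latitude:
PQ = (dlong / 360) * 2 * pi * R * cos_lat
print(round(PQ))                    # 5561 km to the nearest km
```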
https://gmatclub.com/forum/6-persons-are-going-to-theater-and-will-sit-next-to-each-oth-95974.html
6 persons are going to theater and will sit next to each other

Intern, Joined: 16 Jun 2010, Posts: 2 | Show Tags: 16 Jun 2010, 19:08

6 persons are going to the theater and will sit next to each other in 6 adjacent seats, but Martia and Jan cannot sit next to each other. In how many arrangements can this be done?

I understood that the restriction must be dealt with first, by finding the number of ways the restriction happens and removing that from the total number of ways to arrange the n!. It is 2! for the arrangement and 4!
for the remaining 4 people, but what I don't understand is why it is multiplied by 5, as the OA gives. I saw some other problems of that type. For instance, digits 1, 2, 3, 4, 5: if each digit is used only once, in how many ways can the digits be arranged such that 2 and 4 are not adjacent? In this case the restriction is 2!x4!, not multiplied by anything else. Can anyone explain why? Regards

SVP, Joined: 09 Jun 2010 | Show Tags: 16 Jun 2010, 19:45

Total number of arrangements = 6!

Assuming the two sit next to each other, we have 5!x2 arrangements. (This is because when they are sitting next to each other, we can consider both of them as one "unit" and hence there are 5 units, i.e. them and the other 4 people. This leads to an arrangement of 5!, and then between themselves they can be seated in two ways, so it's 2x5!.)

So answer = Total - Arrangements with them sitting next to each other = 6! - 2x5! = 5! x 4 = 480

I think you assumed they were sitting next to each other and did only the 2! and 4!. What are the numbers given in the official answer? I believe you only mentioned word choices.

Math Expert, Joined: 02 Sep 2009 | Show Tags: 16 Jun 2010, 19:54

Maude wrote: [the original question, quoted]

Hi, and welcome to GMAT Club! Below is the solution for your problem. You are right that probably the best way to deal with questions like this is to count the total # of arrangements and then subtract the # of arrangements in which the opposite of the restriction occurs. But the way you are calculating the latter is not correct.

Total # of arrangements of 6 people (let's say A, B, C, D, E, F) is $$6!$$. The # of arrangements in which 2 particular persons (let's say A and B) are adjacent can be calculated as follows: consider these two persons as one unit, {AB}. We would have 5 units in total: {AB}{C}{D}{E}{F}. The # of arrangements of these units is 5!, and the # of arrangements of A and B within their unit is 2!, hence the total # of arrangements with A and B adjacent is $$5!*2!$$. The # of arrangements with A and B not adjacent is $$6!-5!*2!$$.

Total # of arrangements of 5 distinct digits is $$5!$$. The # of arrangements in which the digits 2 and 4 are adjacent: consider these two digits as one unit, {24}. We would have 4 units in total: {24}{1}{3}{5}. The # of arrangements of these units is 4!, and the # of arrangements of 2 and 4 within their unit is 2!, hence the total # of arrangements with 2 and 4 adjacent is $$4!*2!$$. The # of arrangements with 2 and 4 not adjacent is $$5!-4!*2!$$. Hope it helps.

CEO, GMATINSIGHT Tutor, Joined: 08 Jul 2010 | Show Tags: 22 Dec 2015, 01:12

Maude wrote: [the original question, quoted]

6 persons can sit on 6 seats in 6*5*4*3*2*1 = 6! = 720 ways. 6 persons can sit on 6 seats with Martia and Jan next to each other in (5*4*3*2*1)*(2!) = 5!*2!
= 240 ways.

Favorable cases = 720 - 240 = 480

Senior Manager, Joined: 15 Oct 2015 | Show Tags: 22 Dec 2015, 02:40

The quick formula for this kind of question is: (arrangements ignoring the constraint) minus (arrangements satisfying the opposite of the constraint). Hence 6! - 5!. The opposite of the constraint is that the two guys must sit next to each other; that is, they count as one, hence you have 5. It is how many ways you can arrange 6 unique letters, minus how many ways you can arrange 5 unique letters.

Math Expert, Joined: 02 Aug 2009 | Show Tags: 22 Dec 2015, 03:06

Nez wrote: [the reply above]

Hi, you have found the correct method but missed the final step. Both are taken as one, and we get the arrangement as 5!, but within the pair the two can be arranged in 2! ways, so the count is 6! - 5!*2!.

Director, Joined: 27 May 2012 | Show Tags: 07 Sep 2018, 10:01

Maude wrote: [the original question, quoted]

Dear Moderator, how can the answer be A when no choices are given? Hope you will look into this. Thank you.

Manager, Joined: 18 Jun 2018 | Show Tags: 07 Sep 2018, 10:07

Maude wrote: [the original question, quoted]

Total number of ways of arranging 6 persons $$=6!$$
Total number of ways of arranging 6 persons such that Martia and Jan are together $$=2*5!$$
Number of ways of arranging 6 persons such that Martia and Jan are not together $$= 6!-2*5!$$
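Not part of the thread, but the counting argument above (total 6! minus the 2*5! adjacent seatings) is easy to confirm by brute force; the names for the four other people are placeholders:

```python
from itertools import permutations

# Two named people plus four placeholders for the rest of the party.
people = ["Martia", "Jan", "P3", "P4", "P5", "P6"]

def separated(seating):
    """True when Martia and Jan do not occupy adjacent seats."""
    return abs(seating.index("Martia") - seating.index("Jan")) != 1

count = sum(separated(s) for s in permutations(people))
print(count)        # 480, matching 6! - 2*5!

# The digits variant from the question: 1..5 with 2 and 4 not adjacent.
count2 = sum(abs(s.index(2) - s.index(4)) != 1
             for s in permutations([1, 2, 3, 4, 5]))
print(count2)       # 72, matching 5! - 2*4!
```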
http://www.scholarpedia.org/article/User:RobertsEugene
# Dr. Eugene Roberts

## Curator and author

Featured Author: Eugene Roberts

Eugene Roberts (b. in Krasnodar, Russia, 1920) received his Ph.D. in biochemistry from the University of Michigan in 1943. Upon completing his thesis, he was selected to be Assistant Head of the Manhattan Project's inhalation section at the University of Rochester (Rochester, New York) to work on the toxicology of uranium dusts. In 1946, Roberts joined the Division of Cancer Research at Washington University in St. Louis, MO where he developed a comprehensive program of study of nitrogen metabolism to characterize differences between normal and neoplastic tissues. In 1954 he joined the staff of the City of Hope Medical Center, Duarte CA, as Chairman of Biochemistry and Associate Director of Research. From 1983 to 1999 Roberts was Director of the Department of Neurobiochemistry at the Beckman Research Institute, City of Hope, and he now holds the position of Director Emeritus.

Eugene Roberts reported the discovery of γ-aminobutyric acid (GABA) in the brain in 1950, and then pioneered the immunohistochemical localization of neurotransmitter-specific neural systems at the light and electron microscopic levels using antisera to the synthetic enzyme glutamic acid decarboxylase. His work was instrumental in establishing GABA as the major inhibitory neurotransmitter in the vertebrate central nervous system. His current work includes studies of the effects of steroids and of amyloid on nervous system function. For more information, visit http://www.cityofhope.org/Researchers/RobertsEugene/.

Scholarpedia articles: Gamma-aminobutyric acid. Scholarpedia, 2(10):3356 (2007). (Author profile by Abdellatif Nemri)

## Biography

Eugene Roberts (b. in Krasnodar, Russia, 1920) was brought by his parents in 1922 to Detroit, Michigan, U.S.A.
He received his early education in public schools near his home, winning a scholarship to Wayne State University in Detroit, from which he received a B.S. degree in chemistry, magna cum laude, in 1940. At the University of Michigan in Ann Arbor, a scholarship from the McGregor Foundation and a University Fellowship aided Roberts in earning a M.S. degree in 1941 and a Ph.D. in 1943 in biochemistry. His doctoral thesis "Factors influencing the deposition of “fat” in the liver" was supervised by Prof. H.C. Eckstein. Upon completing his thesis, Roberts was selected to be Assistant Head of the inhalation section at the Manhattan Project at the University of Rochester (Rochester, New York) to work on the toxicology of uranium dusts. As a result of this work, safe limits for human exposure were established and assiduously enforced so that patient health records in plants involved in this work were reported to be superior to those in many diverse industries. In 1946, Roberts joined the Division of Cancer Research at Washington University in St. Louis, MO where he developed a comprehensive program of study of nitrogen metabolism to characterize differences between normal and neoplastic tissues. The structure and function of DNA were unknown at this time and its genetic role unsuspected. However, a number of leads and questions raised by that work still are under active investigation in many laboratories. In 1954, Roberts joined the staff of the City of Hope Medical Center, Duarte CA, as Chairman of Biochemistry and Associate Director of Research. He recruited staff, organized electronic and machine shops, a stock room, and a medical and scientific library. Roberts set the pattern for the research effort, which consisted of hiring talented and committed young scientists and leaving them alone to seek their own unique destinies while furnishing them moral and material support. 
After having convinced administrators and the board of directors of the validity of this approach to management of basic science research activities, Roberts resigned as Associate Director of Research. In 1949 Roberts had discovered and in 1950 had reported the presence of uniquely large quantities of GABA in the brain. Its great importance in brain function became apparent in 1957. In 1968 he organized an interdisciplinary Division of Neurosciences, the first of its kind. It had a staff of 13 senior independent scientists, all in different areas of endeavor. In 1983 Roberts requested to be relieved of all administrative duties and devoted himself entirely to his own research. He was made Distinguished Scientist and Director of Neurobiochemistry at the City of Hope Medical Center. Simultaneously he resigned from committees, editorial boards, and as advisor to foundations. His attention now is focused largely on identifying major inhibitory command-control mechanisms at levels of membrane, metabolism, genome, brain, and society. His current work includes studies of the effects of steroids and of amyloid on nervous system function. His research on memory, attenuation of progression of degeneration after spinal cord injury, aging, and Alzheimer disease has aroused considerable interest. Awaiting future development is his recent suggestion that GABAergic malfunction in the limbic system resulting from a genetic defect in voltage-gated Na+-channel SCN5A may give rise to susceptibility to schizophrenia. He would be pleased with the following summation as an epitaph: Eugene Roberts reported the discovery of γ-aminobutyric acid (GABA) in the brain in 1950, and then pioneered the immunohistochemical localization of neurotransmitter-specific neural systems at the light and electron microscopic levels using antisera to the synthetic enzyme glutamic acid decarboxylase. 
His work was instrumental in establishing GABA as the major inhibitory neurotransmitter in the vertebrate central nervous system.

The discovery of GABA, by Eugene Roberts, May 2008

## Introduction

My scientific work since 1946 can be epitomized as a search for patterns. What usually began as a single-minded devotion to in-depth analysis of one or a small number of variables always led to questions of how the results might relate to the whole living unit, whether it be cell, tissue, or organism. In studies of nitrogen metabolism of normal and neoplastic tissues (Roberts and Simonsen, 1960) it appeared desirable to determine the composition of pools of non-protein amino acids and related substances. It was anticipated that patterns of steady-state concentrations of these constituents would reflect characteristics of the tissues in a way that might reveal key metabolic differences. Progress was slow and curiosity limited until methods became available that enabled a large number of determinations to be made in a reasonably short period of time. The development of two-dimensional paper chromatographic procedures allowed the detection of microgram quantities of substances for which other adequate microanalytical procedures were not available and made it feasible to survey rapidly the distribution of free or loosely bound amino acids and other ninhydrin-reactive substances. I hastened to apply these techniques for the first time to animal tissues soon after they were introduced in the United States by C.E. Dent. The paper chromatographic procedures furnished tools that were ideally suited for giving simultaneous information rapidly about the maximal number of ninhydrin-reactive constituents and, although often employed in a semi-quantitative fashion, could give valuable hints about the presence of new materials and indicate which substances should be studied further in particular biological situations.
Column chromatographic methods already were being applied extensively, but the procedures, although quantitative, were more time-consuming and allowed far fewer samples to be examined. In addition, unknown substances are much easier to detect by the paper chromatographic procedures. My earliest observation with the two-dimensional paper chromatographic method showed that, in a given species at a particular stage of development, each normal tissue has a distribution of easily extractable ninhydrin-reactive constituents that is characteristic for that tissue, whereas quite similar patterns were observed in transplanted and spontaneous tumors. The latter findings agreed with Greenstein's generalization based on enzyme assays: "no matter how or from which tissues tumors arise, they more nearly resemble each other chemically than they do normal tissues or than normal tissues resemble each other."

## Discovery of GABA in brain

Working during the summer of 1949 at the Roscoe B. Jackson Memorial Laboratory in Bar Harbor, Maine, where an unusually good selection of transplantable mouse tumors carried in a number of inbred mouse strains was available for study, I analyzed the free amino acid content of the C1300 transplantable neuroblastoma, then available only in solid form. Several mouse brain extracts were chromatographed for comparison with the neuroblastoma. An unidentified and previously unobserved ninhydrin-reactive material was seen on the chromatograms of the brain extracts. At most, only traces of this material had appeared in a large number of extracts of many other normal and neoplastic tissues previously examined, or in samples of urine and blood. Upon returning to my laboratories at Washington University in St. Louis, the unknown material was isolated from suitably prepared paper chromatograms. A study of the properties of the substance revealed it to be γ-aminobutyric acid (GABA).
Initial identification was based on the comigration of the unknown with GABA on paper chromatography in three different solvent systems. I gave some of the material to Sidney Udenfriend, at that time also on the staff of Washington University, who made an absolute identification of the GABA in our extracts by the isotope derivative method. Abstracts were submitted to the Federation meetings that year (March 1950; see Figures 1 and 2). An abstract reporting the presence of an "unidentified amino acid in brain only" appeared from the laboratory of Jorge Awapara in the proceedings of the same meeting (Awapara, 1950; see Figure 3). Ambiguity has arisen in some derivative accounts in the assignment of priority to the first report of the presence of GABA in brain. To clarify the matter, the relevant three abstracts appear here to attest to Roberts' primacy of the discovery (scans of the original abstracts in Figures 1-3, and transcripts below). We were helped initially in the identification of GABA in brain extracts by the fact that in 1949 GABA had been found to be prominent among the soluble nitrogenous components detectable by two-dimensional paper chromatography in the potato tuber (Steward, 1949). This caused me to write semi-facetiously in my notebook, "This proves that the brain is like a potato!" Unfortunately, little has happened to the state of the world since that time to change my mind. GABA had been found in nature before. In 1910 Prof. Dr. Dankwart Ackermann found it to be produced in putrefying mixtures by the action of bacteria (Ackermann, 1910; Ackermann and Kutscher, 1910), and, subsequently, many reports had been made about the occurrence of GABA and/or its formation in bacteria, fungi, and plants. I was thrilled to receive a letter from Ackermann with congratulations upon publishing the first report of the presence of GABA in brain.
The only authentic sample of GABA that could be located at that time was in the chemical stockroom of the Department of Chemistry at the University of Illinois. Subsequently, we were able to make our own GABA by simple hydrolysis of a free and generous supply of 2-pyrrolidinone obtained from Cliff's Dow Chemical Company in Marquette, Michigan. In order to demonstrate conclusively that the precursor for GABA was glutamic acid in crude brain preparations, it was necessary to employ 14C-labeled glutamic acid (3H-labeled substances not yet being available). No commercial sources were available. A sample of uniformly labeled L-glutamic acid isolated from a hydrolysate of algae grown in 14CO2 was kindly furnished by Konrad Bloch, then on the staff of the University of Chicago. Since L-glutamic acid decarboxylase (GAD) from other sources was known to require pyridoxal phosphate as a coenzyme, it was necessary to test this substance for its effect on the decarboxylation of glutamic acid in brain preparations. The only source of pyridoxal phosphate available to us, after much searching, was in the possession of Wayne W. Umbreit, then at the Merck Institute for Therapeutic Research. He gave us a generous supply of this cofactor. At the time of the discovery of GABA in brain, I was immediately faced with a serious conflict. I was working in the Wernse Laboratories of Cancer Research in Washington University School of Medicine under E.V. Cowdry, a great scientist and a fine human being. I desperately wanted to work on the metabolism and function of GABA, but my obligations lay in the field of cancer research. Nonetheless, for almost three years thereafter, most of my research efforts were devoted to the study of GABA in brain. During that period I received much encouragement from Cowdry. Never once was I criticized or reprimanded for diverting my efforts from the main thrust of his program.
I am most grateful to this gentleman for his support and forbearance during the period I remained in his laboratory and for his friendship in the subsequent years before his death.

## Beyond the discovery

For several years, the unique presence of relatively large amounts of GABA in the tissue of the central nervous system (CNS) of various species remained a puzzle. The great neurochemist Heinrich Waelsch once discouragingly remarked that GABA was probably a metabolic wastebasket. My continuing efforts to convince some of the eminent neurophysiologists working at Washington University at that time to test GABA in various nerve preparations at the end of their planned experiments met with no cooperation, even though I brought GABA solutions personally to their laboratories in the hope of persuading them to test it. In the first review on the subject in 1956 (Roberts, 1956), written after I had moved to my present position at City of Hope Medical Center, I concluded in desperation, "Perhaps the most difficult question to answer would be whether the presence in the gray matter of the CNS of uniquely high concentrations of γ-aminobutyric acid and the enzyme which forms it from glutamic acid has a direct or indirect connection to conduction of the nerve impulse in this tissue." However, later that year, the first suggestion that GABA might have an inhibitory function in the vertebrate nervous system came from studies in which it was found that topically applied solutions of GABA exerted inhibitory effects on electrical activity in the brain (Hayashi and Nagai, 1956; Hayashi and Suhara, 1956). In 1957, evidence for an inhibitory function for GABA came from studies that established GABA as the major factor in brain extracts responsible for the inhibitory action of these extracts on the crayfish stretch receptor system (Bazemore et al., 1957).
Within a brief period, the activity in this field increased greatly, so that the research being carried out ranged from the study of the effects of GABA on ionic movements in single neurons to clinical evaluation of the role of the GABA system in, for example, epilepsy, schizophrenia, and various types of mental retardation. This warranted the convocation of a memorable interdisciplinary conference in 1959 at the City of Hope Research Institute, attended by most of the individuals who had a role in opening up this exciting field and who presented summaries of their work (Roberts et al., 1960). This first GABA conference was the greatest learning experience of my life. Having spent most of my scientific career in the rather narrow confines of classical biochemistry and the bare beginnings of molecular biology, I was thrust into the world of EEG, membranes, electrodes, voltage clamps, neuroanatomy, clinical seizures, neuroembryology and animal behavior. I had the privilege of meeting a number of the world's leading neuroscientists among the participants, some of whom became close friends. What a mind-boggling intellectual feast! The meeting itself was overwhelming to a number of us. The sense of excitement was pervasive because we all sensed that a new era was beginning. The subject of neural inhibition finally had returned to front stage and center after many years of languishing in the wings. It was obvious that much of the future progress in the field would depend on interdisciplinary efforts and that we all would have to begin to learn each other's language and ways of thinking. At times the proceedings resembled what one imagines might have taken place at the Tower of Babel. However, we all shared the optimistic feeling that we could help each other learn enough so that effective communication soon would take place.
For some of us this turned out to be true, and many students in the laboratories of the participants reaped the benefit of the "new enlightenment." It was a particularly heartening social occasion because individuals from Australia, Canada, England, France, Hungary, Japan, the United States, and the Soviet Union met in enthusiastic amity and forged long-lasting scientific and personal links.

## Abstracts reporting the discovery of GABA

Figure 1: Conference abstract reporting the discovery of GABA by Eugene Roberts in 1950.
Figure 2: Conference abstract by Sidney Udenfriend reporting a technique that was used as a confirmation of Roberts' GABA identification in 1950.
Figure 3: Conference abstract reporting the detection of an unidentified amino acid by Jorge Awapara in 1950.

Three abstracts related to GABA were submitted to the conference of the American Society of Biological Chemists, and published in the proceedings.

$$\gamma$$-Aminobutyric acid in brain. EUGENE ROBERTS and SAM FRANKEL (introduced by C. CARRUTHERS). Division of Cancer Research, Washington Univ., St. Louis, Mo. Relatively large quantities of an unidentified ninhydrin-reactive material were found in numerous two-dimensional paper chromatograms of protein-free extracts of fresh mouse, rabbit, and frog brains. At most, only traces of this material were found in a large number of many other normal and neoplastic tissues and in urine and blood. The eluate from suitably chosen strips of one-dimensional phenol chromatograms of mouse brain extract contained the unknown substance and only traces of valine and an unidentified peptide material. A comparison of the properties of the unknown substance in the eluate with those of known compounds by chromatography in different solvent systems showed it to be identical with $$\gamma$$-aminobutyric acid. This conclusion was independently confirmed by the isotopic derivative method using the I-131- and S-35 labeled p-iodo-phenyl sulfonyl derivatives (S. Udenfriend).
Experiments with brain homogenates showed a formation of $$\gamma$$-aminobutyric acid which appeared to be accelerated when glutamic acid was added. $$\gamma$$-Aminobutyric acid was also formed in homogenates of liver and muscle. Experiments are under way to characterize the precursors of $$\gamma$$-aminobutyric acid and the enzymes involved in its formation. A micro technique for identification of organic compounds using isotopic indicators and paper chromatography. SIDNEY UDENFRIEND (introduced by MILDRED COHN). Dept. of Biological Chemistry, Washington Univ. Med. School, St. Louis, Mo. Isotopic indicators and paper chromatography, as used in the isotopic derivative method of amino acid analysis, can be employed for rigorous identification of microgram quantities of compounds. A reagent is synthesized in two isotopic forms, with isotopes that can be determined accurately, one in the presence of the other. The unknown substance and the compound being used for comparison are converted to derivatives of the reagents, each with a different isotope. The two derivatives are then mixed and subjected to chromatography on paper. If the two derivatives are identical, then one band results in which the proportions of the two isotopes remain constant throughout. This is ascertained by cutting the band into consecutive transverse strips and measuring the isotope proportions in each. If the substances are not identical, then the isotope ratio will vary from one end of the band to the other. The technique has been applied to the identification of a substance isolated from paper chromatograms of tissue extracts, having an Rf similar to $$\gamma$$-aminobutyric acid. An authentic sample of S35-labelled pipsyl $$\gamma$$-aminobutyric acid was mixed with the I131-labelled pipsyl derivative of the unknown. One band resulted, with ratios of I131 to S35 of 0.433, 0.449, 0.439, 0.433 in consecutive strips.
A mixture of S35-pipsyl leucine with I131-pipsyl isoleucine yielded one band with ratios of 3.35, 2.23, 1.90, 1.63, 1.38, in consecutive strips. Similarly, S35-pipsyl valine and I131-pipsyl norvaline yielded one band with consecutive ratios of 0.078, 0.136, 0.253, 0.498, 1.07, 1.26, 3.17. Detection and identification of metabolites in tissues by means of paper chromatography. JORGE AWAPARA (introduced by HARRY J. DEUEL, JR.). Univ. of Texas, Anderson Hospital for Cancer Research, Houston. Paper chromatography has been used as a quantitative method for some amino acids which are completely resolved by this procedure. The appearance of several unidentified components in chromatograms from tissue extracts has led to a system of isolating those components by means of paper chromatography, in sufficient quantity to allow identification. Thus far, ethanolaminophosphoric ester has been identified in nearly all organs of the rat and some human tumors. Presently, a peptide has been detected in many organs and in blood, and an unidentified amino acid in brain only. The identification of the latter is under way. ## References • Ackermann, D. Über ein neues, auf bakteriellem Wege gewinnbares Aporrhegma. Hoppe-Seyler's Zeitschrift für physiologische Chemie 69:273-281, 1910. • Ackermann, D. and Kutscher, Fr. Über die Aporrhegmen. Hoppe-Seyler's Zeitschrift für physiologische Chemie 69:265-272, 1910. • Awapara, J. Detection and identification of metabolites in tissues by means of paper chromatography. Fed. Proc. 9:148, 1950. • Bazemore, A.W., Elliott, K.A.C., and Florey, E. Isolation of Factor I. J. Neurochem. 1:334-339, 1957. • Hayashi, T. and Nagai, K. Action of ω-amino acids on the motor cortex of higher animals, especially γ-amino-β-oxybutyric acid as the real inhibitory principle in brain. In: Abstracts of Reviews: Abstracts of Communications. Brussels: 20th International Physiological Congress, p. 410, 1956. • Hayashi, T. and Suhara, R.
Substances which produce epileptic seizure when applied on the motor cortex of dogs and substances which inhibit the seizure directly. In: Abstracts of Reviews: Abstracts of Communications. Brussels: 20th International Physiological Congress, p. 410, 1956. • Roberts, E., Baxter, C.F., Van Harreveld, A., Wiersma, C.A.G., Adey, W.R., and Killam, K.F., eds. Inhibition in the Nervous System and Gamma-aminobutyric Acid. Oxford: Pergamon Press, 1960. • Roberts, E. and Simonsen, D.G. Free amino acids and related substances in normal and neoplastic tissues. In: Amino Acids, Proteins and Cancer Biochemistry, J.T. Edsall, editor. New York: Academic Press, pp. 121-145, 1960. • Steward, F.C., Thompson, J.F., and Dent, C.E. γ-Aminobutyric Acid: A Constituent of the Potato Tuber? Science 110:439-440, 1949.
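As a side note (mine, not part of the original abstracts): Udenfriend's identity criterion is easy to state numerically. The derivatives are judged identical when the I-131/S-35 ratio is essentially constant from strip to strip, and non-identical when it drifts across the band. A small sketch using the ratios quoted in the abstracts above:

```python
# Udenfriend's criterion: identical compounds give a (nearly) constant
# isotope ratio across consecutive chromatogram strips; different
# compounds give a ratio that drifts from one end of the band to the other.
# The ratio lists below are the ones quoted in the abstracts.

def relative_spread(ratios):
    """Max-to-min spread of the ratios, as a fraction of their mean."""
    mean = sum(ratios) / len(ratios)
    return (max(ratios) - min(ratios)) / mean

gaba = [0.433, 0.449, 0.439, 0.433]        # unknown vs. authentic GABA
leu_ile = [3.35, 2.23, 1.90, 1.63, 1.38]   # leucine vs. isoleucine
val_nva = [0.078, 0.136, 0.253, 0.498, 1.07, 1.26, 3.17]  # valine vs. norvaline

for name, r in [("GABA", gaba), ("Leu/Ile", leu_ile), ("Val/Nva", val_nva)]:
    print(name, round(relative_spread(r), 2))
```

The GABA band's spread is only a few percent of the mean, while the deliberately mismatched pairs vary several-fold, which is exactly the contrast the abstracts report.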
http://mathhelpforum.com/calculus/142545-related-rates.html
# Math Help - Related Rates

1. ## Related Rates

The altitude of a right-angled triangle is 6 cm and the base is increasing at a constant rate of 2 cm/s. At what rate is the hypotenuse increasing when its length is 10 cm?

No idea how to do this question, but I did approach it. I tried using similar triangles, but it didn't work out. Why? Let the length be y and the base be x. At y = 10: 6/y = x/h, so 6/10 = x/h and x = (3/5)h, giving dx/dt = (3/5) dh/dt and thus dh/dt = 10/3 cm/s. The actual answer is 1.6.

2. Since it's a right triangle you need to use the Pythagorean theorem... $x^2 + y^2 = z^2$ And since we are dealing with rates you need to differentiate the equation: $2x\frac{dx}{dt} + 2y\frac{dy}{dt} = 2z\frac{dz}{dt}$ The 2's cancel, so we're left with $x\frac{dx}{dt} + y\frac{dy}{dt} = z\frac{dz}{dt}$ Now what we're looking for is how fast the hypotenuse is changing, $\frac{dz}{dt}$, and what we are given is $x = 6$, $\frac{dx}{dt} = 0$, $y = 8$ (use the Pythagorean theorem here to get $y$), $\frac{dy}{dt} = 2$ and $z = 10$. Then we just solve $x\frac{dx}{dt} + y\frac{dy}{dt} = z\frac{dz}{dt}$: $0 + 8\cdot 2 = 10\frac{dz}{dt}$, so $\frac{dz}{dt} = \frac{16}{10} = 1.6$. Hope this helps! It's my first time posting a reply, so sorry if it's unclear; let me know if you have any questions.

3. Originally Posted by Zion
Since it's a right triangle you need to use the Pythagorean theorem... $x^2 + y^2 = z^2$ And since we are dealing with rates you need to differentiate the equation: $2x\frac{dx}{dt} + 2y\frac{dy}{dt} = 2z\frac{dz}{dt}$ The 2's cancel, so we're left with $x\frac{dx}{dt} + y\frac{dy}{dt} = z\frac{dz}{dt}$ Now what we're looking for is how fast the hypotenuse is changing, $\frac{dz}{dt}$, and what we are given is $x = 6$, $\frac{dx}{dt} = 0$, $y = 8$ (use the Pythagorean theorem here to get $y$), $\frac{dy}{dt} = 2$ and $z = 10$. Then we just solve $x\frac{dx}{dt} + y\frac{dy}{dt} = z\frac{dz}{dt}$: $0 + 8\cdot 2 = 10\frac{dz}{dt}$, so $\frac{dz}{dt} = \frac{16}{10} = 1.6$. Hope this helps!
it's my first time posting a reply, so sorry if it's unclear; let me know if you have any questions

Wow, that's great. But I don't really understand this. The altitude, the line from the vertex of the right angle to the perpendicular opposite side, is 6. And it is given that dx/dt = 2 cm/s. Please do help me again!

4. Originally Posted by Lukybear
Wow, that's great. But I don't really understand this. The altitude, the line from the vertex of the right angle to the perpendicular opposite side, is 6. And it is given that dx/dt = 2 cm/s. Please do help me again!
Here's an easier way, I think. First of all, redraw your diagram. Draw it (I don't know how to here, so I'm giving you a verbal description; sorry) with the right angle in the bottom left corner. The height (vertical line on the left) is 6 cm; call the base x and the hypotenuse h. Your initial diagram was incorrect. Tell me when you are done and I'll help with the next step.

5. You still there? Happy to walk you through the steps of related rates problems if interested.

6. Originally Posted by Debsta
Here's an easier way, I think. First of all, redraw your diagram. Draw it (I don't know how to here, so I'm giving you a verbal description; sorry) with the right angle in the bottom left corner. The height (vertical line on the left) is 6 cm; call the base x and the hypotenuse h. Your initial diagram was incorrect. Tell me when you are done and I'll help with the next step.
But going just on my diagram, how would you do this problem? As I am pretty sure that the altitude is the line joining the vertex to the perpendicular opposite side.

7. Originally Posted by Lukybear
But going just on my diagram, how would you do this problem? As I am pretty sure that the altitude is the line joining the vertex to the perpendicular opposite side.
A triangle actually has 3 altitudes, depending on which side you define to be the base; the altitude and "base" are perpendicular. Your diagram is incorrect for this problem.

8.
Originally Posted by Debsta
A triangle actually has 3 altitudes, depending on which side you define to be the base; the altitude and "base" are perpendicular. Your diagram is incorrect for this problem.
So you're saying that the hypotenuse is between the base and the altitude?

9. Originally Posted by Lukybear
So you're saying that the hypotenuse is between the base and the altitude?
Yes.
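For anyone who wants to double-check Zion's numbers, here's a quick script (mine, not from the thread) that reruns the computation with the corrected diagram: altitude fixed at 6 cm, base growing at 2 cm/s, hypotenuse 10 cm.

```python
import math

# Differentiating x^2 + y^2 = z^2 with respect to t gives
# x dx/dt + y dy/dt = z dz/dt.
x, dxdt = 6, 0                 # the altitude is constant
z = 10                         # hypotenuse at the instant of interest
y = math.sqrt(z**2 - x**2)     # Pythagoras: the base is 8 cm
dydt = 2                       # cm/s

dzdt = (x * dxdt + y * dydt) / z
print(dzdt)  # 1.6
```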
https://projecteuclid.org/euclid.aos/1449755958
## The Annals of Statistics ### Inference using noisy degrees: Differentially private $\beta$-model and synthetic graphs #### Abstract The $\beta$-model of random graphs is an exponential family model with the degree sequence as a sufficient statistic. In this paper, we contribute three key results. First, we characterize conditions that lead to a quadratic time algorithm to check for the existence of MLE of the $\beta$-model, and show that the MLE never exists for the degree partition $\beta$-model. Second, motivated by privacy problems with network data, we derive a differentially private estimator of the parameters of $\beta$-model, and show it is consistent and asymptotically normally distributed—it achieves the same rate of convergence as the nonprivate estimator. We present an efficient algorithm for the private estimator that can be used to release synthetic graphs. Our techniques can also be used to release degree distributions and degree partitions accurately and privately, and to perform inference from noisy degrees arising from contexts other than privacy. We evaluate the proposed estimator on real graphs and compare it with a current algorithm for releasing degree distributions and find that it does significantly better. Finally, our paper addresses shortcomings of current approaches to a fundamental problem of how to perform valid statistical inference from data released by privacy mechanisms, and lays a foundational groundwork on how to achieve optimal and private statistical inference in a principled manner by modeling the privacy mechanism; these principles should be applicable to a class of models beyond the $\beta$-model. #### Article information Source Ann. Statist., Volume 44, Number 1 (2016), 87-112. 
Dates: Revised June 2015. First available in Project Euclid: 10 December 2015. https://projecteuclid.org/euclid.aos/1449755958

Digital Object Identifier: doi:10.1214/15-AOS1358

Mathematical Reviews number (MathSciNet): MR3449763

Zentralblatt MATH identifier: 1331.62114

Subjects: Primary: 62F12 (asymptotic properties of estimators), 91D30 (social networks). Secondary: 62F30 (inference under constraints).

#### Citation

Karwa, Vishesh; Slavković, Aleksandra. Inference using noisy degrees: Differentially private $\beta$-model and synthetic graphs. Ann. Statist. 44 (2016), no. 1, 87-112. doi:10.1214/15-AOS1358. https://projecteuclid.org/euclid.aos/1449755958

#### Supplemental materials

• Supplement to "Inference using noisy degrees: Differentially private $\beta$-model and synthetic graphs" (DOI:10.1214/15-AOS1358SUPP). This supplementary material contains the proofs of the key Theorems 2, 3 and 4 from the paper.
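To make the abstract's "noisy degrees" concrete, here is a rough, hypothetical sketch of the kind of release mechanism involved: coordinatewise two-sided geometric (discrete Laplace) noise calibrated to the L1 sensitivity of the degree sequence, which is 2 under edge-level adjacency. This illustrates the general mechanism only; it is not the authors' actual estimator or code.

```python
import math
import random

def two_sided_geometric(alpha, rng):
    """Sample X = G1 - G2 with Gi ~ Geometric(1 - alpha), so P(X = k) is
    proportional to alpha^|k| (the discrete Laplace distribution)."""
    def geom():
        u = 1.0 - rng.random()                  # u in (0, 1]
        return int(math.log(u) / math.log(alpha))
    return geom() - geom()

def noisy_degrees(degrees, eps, seed=0):
    """eps-differentially private degree sequence: adding or removing one
    edge changes two degrees by 1, so the L1 sensitivity is 2 and the
    noise scale is alpha = exp(-eps / 2)."""
    rng = random.Random(seed)
    alpha = math.exp(-eps / 2)
    return [d + two_sided_geometric(alpha, rng) for d in degrees]

true_degrees = [3, 3, 2, 2, 1, 1]   # a toy graph's degree sequence
released = noisy_degrees(true_degrees, eps=2.0)
print(released)                      # integer-valued, centered on the truth
```

The paper's estimator then post-processes such noisy degrees before fitting the $\beta$-model; that denoising step is what yields a consistent, asymptotically normal private estimator.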
https://stacks.math.columbia.edu/tag/07NK
## 15.39 Some results on power series rings Questions on formally smooth maps between Noetherian local rings can often be reduced to questions on maps between power series rings. In this section we prove some helper lemmas to facilitate this kind of argument. Lemma 15.39.1. Let $K$ be a field of characteristic $0$ and $A = K[[x_1, \ldots , x_ n]]$. Let $L$ be a field of characteristic $p > 0$ and $B = L[[x_1, \ldots , x_ n]]$. Let $\Lambda$ be a Cohen ring. Let $C = \Lambda [[x_1, \ldots , x_ n]]$. 1. $\mathbf{Q} \to A$ is formally smooth in the $\mathfrak m$-adic topology. 2. $\mathbf{F}_ p \to B$ is formally smooth in the $\mathfrak m$-adic topology. 3. $\mathbf{Z} \to C$ is formally smooth in the $\mathfrak m$-adic topology. Proof. By the universal property of power series rings it suffices to prove: 1. $\mathbf{Q} \to K$ is formally smooth. 2. $\mathbf{F}_ p \to L$ is formally smooth. 3. $\mathbf{Z} \to \Lambda$ is formally smooth in the $\mathfrak m$-adic topology. The first two are Algebra, Proposition 10.158.9. The third follows from Algebra, Lemma 10.160.7 since for any test diagram as in Definition 15.37.1 some power of $p$ will be zero in $A/J$ and hence some power of $p$ will be zero in $A$. $\square$ Lemma 15.39.2. Let $K$ be a field and $A = K[[x_1, \ldots , x_ n]]$. Let $\Lambda$ be a Cohen ring and let $B = \Lambda [[x_1, \ldots , x_ n]]$. 1. If $y_1, \ldots , y_ n \in A$ is a regular system of parameters then $K[[y_1, \ldots , y_ n]] \to A$ is an isomorphism. 2. If $z_1, \ldots , z_ r \in A$ form part of a regular system of parameters for $A$, then $r \leq n$ and $A/(z_1, \ldots , z_ r) \cong K[[y_1, \ldots , y_{n - r}]]$. 3. If $p, y_1, \ldots , y_ n \in B$ is a regular system of parameters then $\Lambda [[y_1, \ldots , y_ n]] \to B$ is an isomorphism. 4. If $p, z_1, \ldots , z_ r \in B$ form part of a regular system of parameters for $B$, then $r \leq n$ and $B/(z_1, \ldots , z_ r) \cong \Lambda [[y_1, \ldots , y_{n - r}]]$. Proof. Proof of (1). 
Set $A' = K[[y_1, \ldots , y_ n]]$. It is clear that the map $A' \to A$ induces an isomorphism $A'/\mathfrak m_{A'}^ n \to A/\mathfrak m_ A^ n$ for all $n \geq 1$. Since $A$ and $A'$ are both complete we deduce that $A' \to A$ is an isomorphism. Proof of (2). Extend $z_1, \ldots , z_ r$ to a regular system of parameters $z_1, \ldots , z_ r, y_1, \ldots , y_{n - r}$ of $A$. Consider the map $A' = K[[z_1, \ldots , z_ r, y_1, \ldots , y_{n - r}]] \to A$. This is an isomorphism by (1). Hence (2) follows as it is clear that $A'/(z_1, \ldots , z_ r) \cong K[[y_1, \ldots , y_{n - r}]]$. The proofs of (3) and (4) are exactly the same as the proofs of (1) and (2). $\square$ Lemma 15.39.3. Let $A \to B$ be a local homomorphism of Noetherian complete local rings. Then there exists a commutative diagram $\xymatrix{ S \ar[r] & B \\ R \ar[u] \ar[r] & A \ar[u] }$ with the following properties: 1. the horizontal arrows are surjective, 2. if the characteristic of $A/\mathfrak m_ A$ is zero, then $S$ and $R$ are power series rings over fields, 3. if the characteristic of $A/\mathfrak m_ A$ is $p > 0$, then $S$ and $R$ are power series rings over Cohen rings, and 4. $R \to S$ maps a regular system of parameters of $R$ to part of a regular system of parameters of $S$. In particular $R \to S$ is flat (see Algebra, Lemma 10.128.2) with regular fibre $S/\mathfrak m_ R S$ (see Algebra, Lemma 10.106.3). Proof. Use the Cohen structure theorem (Algebra, Theorem 10.160.8) to choose a surjection $S \to B$ as in the statement of the lemma where we choose $S$ to be a power series over a Cohen ring if the residue characteristic is $p > 0$ and a power series over a field else. Let $J \subset S$ be the kernel of $S \to B$. Next, choose a surjection $R = \Lambda [[x_1, \ldots , x_ n]] \to A$ where we choose $\Lambda$ to be a Cohen ring if the residue characteristic of $A$ is $p > 0$ and $\Lambda$ equal to the residue field of $A$ otherwise. 
We lift the composition $\Lambda [[x_1, \ldots , x_n]] \to A \to B$ to a map $\varphi : R \to S$. This is possible because $\Lambda [[x_1, \ldots , x_n]]$ is formally smooth over $\mathbf{Z}$ in the $\mathfrak m$-adic topology (see Lemma 15.39.1) by an application of Lemma 15.37.5. Finally, we replace $\varphi$ by the map $\varphi ' : R = \Lambda [[x_1, \ldots , x_n]] \to S' = S[[y_1, \ldots , y_n]]$ with $\varphi '|_\Lambda = \varphi |_\Lambda$ and $\varphi '(x_i) = \varphi (x_i) + y_i$. We also replace $S \to B$ by the map $S' \to B$ which maps $y_i$ to zero. After this replacement it is clear that a regular system of parameters of $R$ maps to part of a regular sequence in $S'$ and we win. $\square$ There should be an elementary proof of the following lemma. Lemma 15.39.4. Let $S \to R$ and $S' \to R$ be surjective maps of complete Noetherian local rings. Then $S \times _ R S'$ is a complete Noetherian local ring. Proof. Let $k$ be the residue field of $R$. If the characteristic of $k$ is $p > 0$, then we let $\Lambda$ denote a Cohen ring (Algebra, Definition 10.160.5) with residue field $k$ (Algebra, Lemma 10.160.6). If the characteristic of $k$ is $0$ we set $\Lambda = k$. Choose a surjection $\Lambda [[x_1, \ldots , x_n]] \to R$ (as in the Cohen structure theorem, see Algebra, Theorem 10.160.8) and lift this to maps $\varphi : \Lambda [[x_1, \ldots , x_n]] \to S$ and $\varphi ' : \Lambda [[x_1, \ldots , x_n]] \to S'$ using Lemmas 15.39.1 and 15.37.5. Next, choose $f_1, \ldots , f_m \in S$ generating the kernel of $S \to R$ and $f'_1, \ldots , f'_{m'} \in S'$ generating the kernel of $S' \to R$. Then the map $\Lambda [[x_1, \ldots , x_n, y_1, \ldots , y_m, z_1, \ldots , z_{m'}]] \longrightarrow S \times _ R S',$ which sends $x_i$ to $(\varphi (x_i), \varphi '(x_i))$ and $y_j$ to $(f_j, 0)$ and $z_{j'}$ to $(0, f'_{j'})$ is surjective.
Thus $S \times _ R S'$ is a quotient of a complete local ring, whence complete. $\square$
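Here is a concrete one-variable instance of Lemma 15.39.2 (1), checked numerically (my own sanity check, not part of the Stacks project text): with $K = \mathbf{Q}$ and the regular parameter $y = x + x^2$, the map $K[[y]] \to K[[x]]$ is an isomorphism, and the inverse series $x = g(y)$ can be found by iterating the contraction $x = y - x^2$, which gains at least one order of accuracy per pass in the $\mathfrak m$-adic topology.

```python
N = 6  # work in Q[[y]] modulo y^N; a series is a coefficient list of length N

def mul(a, b):
    """Multiply two truncated power series, discarding degrees >= N."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j < N:
                c[i + j] += ai * bj
    return c

# y = x + x^2 generates the maximal ideal of Q[[x]], so by the lemma
# Q[[y]] -> Q[[x]] is an isomorphism.  Solve x = y - x^2 by iteration.
Y = [0, 1, 0, 0, 0, 0]
g = Y[:]
for _ in range(N):
    g = [yi - gi for yi, gi in zip(Y, mul(g, g))]

# Composing the two maps is the identity modulo y^N: g + g^2 = y.
assert [gi + si for gi, si in zip(g, mul(g, g))] == Y
print(g)  # [0, 1, -1, 2, -5, 14]: signed Catalan numbers
```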
http://focusproductionsinc.com/lib/category/topology/
## Simplicial Objects in Algebraic Topology

It is closely related to differential geometry, and together they make up the geometric theory of differentiable manifolds. As such, the goal of the course is to study compact topological spaces and metric spaces and continuous maps between these spaces. The construction of spaces, manifolds, etc. is a more advanced topic. Trace each traversable network on a separate piece of paper. Yotov, Fundamental groups of complements of plane curves and symplectic invariants.

## Local Homotopy Theory (Springer Monographs in Mathematics)

Fiber bundles aren't just an exercise in abstraction for its own sake. If you tie a slipknot around a soccer ball, you can easily pull the slipknot closed by sliding it along the surface of the ball. A cross cap is basically just a Möbius band, and since it has a boundary that is just a circle, it can be "glued" into a circular hole cut in a sphere. This creates a file with the extension .dmp (for example, city_data.dmp). Sketch some more networks and fill in the table. As a consequence we obtain that, for n large enough with respect to i, the i-th Betti number of M_{g,n} is a polynomial in n of degree at most 2i.

## The Structure of Compact Groups (De Gruyter Studies in

The intrinsic point of view is more flexible. Examples include studies of conformational change between states of the same protein (including multiple NMR structure solutions; these problems will be returned to in Section 8). Discover new applications of symplectic and contact geometry in mathematics and physics; V.
Topology is a branch of mathematics that studies the classification of sets and their equivalence classes up to homeomorphism (continuous bijections with continuous inverses).

## Mathematics: The Man-Made Universe (Dover Books on

The default cluster tolerance is 0.001 meters in real-world units. Plus, storage costs were enormous and each byte of storage came at a premium. For example, if one of the functions in $C(X)$ is called "temperature," there is a corresponding semidecidable property "the temperature of the system is between $0$ and $100$ degrees inclusive," which you can decide by computing the temperature to finite precision. (What if $X$ is not compact?)

## Basic Topological Structures of Ordinary Differential

Example 3: Let A be any subset of a discrete topological space X; show that the derived set A' = $\phi$. Experiment with different numbers of areas (islands) and bridges in Konigsberg Plus (requires Macromedia Flash Player). In algebraic geometry, you deal with a manifold that is described by algebraic equations. You cannot separate the strands unless you cut one of them (that is, you have to break a covalent bond). Single-cell and deep sequencing of tumors have revealed considerable heterogeneity in both solid and blood tumors.
This regular genus 2 configuration can be modified so that instead of two octagons, one embedded in the other, there are four octagons: two interior and two exterior.

## Quantum Field Theory and Topology (Grundlehren der

Trisections are to 4-manifolds as Heegaard splittings are to 3-manifolds. O fits inside P, and the tail of the P can be squished to the "hole" part. Chapter Three examines fuzzy nets, fuzzy upper and lower limits, and fuzzy convergence, and is followed by a study of fuzzy metric spaces. By convention, Open CASCADE requires that the following condition is respected: face tolerance <= edge tolerance <= vertex tolerance, where an edge lies on a face and a vertex on an edge.

## Fixed Point Theory (Springer Monographs in Mathematics)

This leads to a variational Polyakov formula when the variation is taken in the direction of a conformal factor with a logarithmic singularity. The concept of torque goes to the heart of an explanation of why the Earth and the Moon rotate in empty three-dimensional space, and more importantly, why the Moon's rotation is synchronous with its orbit around the Earth.

## The Topology of Chaos: Alice in Stretch and Squeezeland

We also mention other examples with infinite free homotopy classes. The CERN twitter site says that all four experiments saw collision-like events. The question then arises as to how to identify the alignment with the most meaningful compromise between the two factors (May.
Positive space can be thought of as the forward-facing tetrahedron, and negative space as the rearward-facing tetrahedron (obscured in the projection by the forward faces). The cohomological McKay correspondence says that the cohomology of Y has a basis given by the irreducible representations of G (or the conjugacy classes of G).

## Differentiable Manifolds: A First Course
https://zxi.mytechroad.com/blog/category/difficulty/hard/page/2/
# Posts published in "Hard"

## Maximum Gap

https://leetcode.com/problems/maximum-gap/description/

Problem: Given an unsorted array, find the maximum difference between successive elements in its sorted form. Try to solve it in linear time/space. Return 0 if the array contains fewer than 2 elements. You may assume all elements in the array are non-negative integers and fit in the 32-bit signed integer range.

Example:

Input: [5, 0, 4, 2, 12, 10]
Output: 5
Explanation: sorted: [0, 2, 4, 5, 10, 12]; the max gap is 10 - 5 = 5.

Idea: Bucket sort. Use n buckets to store all the numbers. For each bucket, track only the min / max value. The max gap must come from two "adjacent" non-empty buckets b[i] and b[j], j > i, where b[i+1] ... b[j-1] are all empty:

max gap = b[j].min - b[i].max

Time complexity: O(n)
Space complexity: O(n)

Solution: C++

## My Calendar III

Problem: Implement a MyCalendarThree class to store your events. A new event can always be added. Your class will have one method, book(int start, int end). Formally, this represents a booking on the half-open interval [start, end), the range of real numbers x such that start <= x < end. A K-booking happens when K events have some non-empty intersection (i.e., there is some time that is common to all K events). For each call to the method book, return an integer K representing the largest integer such that there exists a K-booking in the calendar.

Your class will be called like this: MyCalendarThree cal = new MyCalendarThree(); cal.book(start, end);

Example 1:

MyCalendarThree();
book(10, 20); // returns 1
book(50, 60); // returns 1
book(10, 40); // returns 2
book(5, 15);  // returns 3
book(5, 10);  // returns 3
book(25, 55); // returns 3

Explanation: The first two events can be booked and are disjoint, so the maximum K-booking is a 1-booking.
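Returning to Maximum Gap: a minimal Python sketch of the bucket idea described above (the post's own solution is in C++; the function name and bucket layout here are my own):

```python
def maximum_gap(nums):
    # Bucket idea from the post: with n values spanning [lo, hi], some
    # adjacent pair in sorted order differs by at least (hi - lo)/(n - 1),
    # which is at least the bucket width -- so the answer never lies
    # inside one bucket, only between consecutive non-empty buckets.
    n = len(nums)
    if n < 2:
        return 0
    lo, hi = min(nums), max(nums)
    if lo == hi:
        return 0
    width = max(1, (hi - lo) // (n - 1))
    count = (hi - lo) // width + 1
    mins = [None] * count  # per-bucket minimum
    maxs = [None] * count  # per-bucket maximum
    for x in nums:
        i = (x - lo) // width
        mins[i] = x if mins[i] is None else min(mins[i], x)
        maxs[i] = x if maxs[i] is None else max(maxs[i], x)
    best, prev_max = 0, lo
    for i in range(count):
        if mins[i] is None:  # empty bucket: keep scanning
            continue
        best = max(best, mins[i] - prev_max)  # b[j].min - b[i].max
        prev_max = maxs[i]
    return best
```

For the post's example, `maximum_gap([5, 0, 4, 2, 12, 10])` returns 5.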
The third event [10, 40) intersects the first event, and the maximum K-booking is a 2-booking. The remaining events cause the maximum K-booking to be a 3-booking. Note that the last event locally causes a 2-booking, but the answer is still 3 because, e.g., [10, 20), [10, 40), and [5, 15) are still triple-booked.

Note:

• The number of calls to MyCalendarThree.book per test case will be at most 400.
• In calls to MyCalendarThree.book(start, end), start and end are integers in the range [0, 10^9].

Idea: Similar to LeetCode 731 My Calendar II. Use an ordered / tree map to track the number of events at each point in time. For a new book event, increase the count at start and decrease the count at end. Scan the timeline to find the maximum number of overlapping events.

# Solution 1: Count Boundaries

Time complexity: O(n^2)
Space complexity: O(n)

# Solution 3: Segment Tree

## Python3

Related Problems:

## Burst Balloons

Problem: Given n balloons, indexed from 0 to n-1. Each balloon is painted with a number on it represented by the array nums. You are asked to burst all the balloons. If you burst balloon i you will get nums[left] * nums[i] * nums[right] coins, where left and right are the adjacent indices of i. After the burst, left and right then become adjacent. Find the maximum coins you can collect by bursting the balloons wisely.

Note:
(1) You may imagine nums[-1] = nums[n] = 1. They are not real, therefore you cannot burst them.
(2) 0 ≤ n ≤ 500, 0 ≤ nums[i] ≤ 100

Example:

Given [3, 1, 5, 8]
Return 167

Idea: DP

Solution 1: C++ / Recursion with memoization
Solution 2: C++ / DP
Java / DP

## Decode Ways II

Problem: A message containing letters from A-Z is being encoded to numbers using the following mapping. Beyond that, now the encoded string can also contain the character '*', which can be treated as one of the numbers from 1 to 9. Given the encoded message containing digits and the character '*', return the total number of ways to decode it.
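The "count boundaries" solution for My Calendar III above can be sketched in Python (the post's solutions are C++/Python3; this compact reimplementation uses a plain dict sorted per query rather than a tree map):

```python
from collections import defaultdict

class MyCalendarThree:
    # +1 at every event start, -1 at every event end; sweeping the sorted
    # time points gives the number of concurrent events at each moment,
    # and the running maximum is the largest K-booking.
    def __init__(self):
        self.delta = defaultdict(int)

    def book(self, start, end):
        self.delta[start] += 1
        self.delta[end] -= 1
        active = best = 0
        for t in sorted(self.delta):  # O(n log n) per call
            active += self.delta[t]
            best = max(best, active)
        return best
```

Replaying the example sequence, the six `book` calls return 1, 1, 2, 3, 3, 3, matching the expected outputs; the segment-tree solution mentioned in the post improves on this per-call cost.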
Also, since the answer may be very large, you should return the output mod 10^9 + 7.

Example 1:
Example 2:

Note:

1. The length of the input string will fit in the range [1, 10^5].
2. The input string will only contain the character '*' and digits '0'-'9'.

Idea: DP
Time complexity: O(n)
Space complexity: O(1)

Solution: C++

Related Problems:

## Number of Atoms

Problem: Given a chemical formula (given as a string), return the count of each atom. An atomic element always starts with an uppercase character, followed by zero or more lowercase letters, representing the name. One or more digits representing the count of that element may follow if the count is greater than 1. If the count is 1, no digits follow. For example, H2O and H2O2 are possible, but H1O2 is impossible.

Two formulas concatenated together produce another formula; for example, H2O2He3Mg4 is also a formula. A formula placed in parentheses, with a count optionally appended, is also a formula; for example, (H2O2) and (H2O2)3 are formulas.

Given a formula, output the count of all elements as a string in the following form: the first name (in sorted order), followed by its count (if that count is more than 1), followed by the second name (in sorted order), followed by its count (if that count is more than 1), and so on.

Example 1:
Example 2:
Example 3:

Note:

• All atom names consist of lowercase letters, except for the first character, which is uppercase.
• The length of formula will be in the range [1, 1000].
• formula will consist only of letters, digits, and round parentheses, and is a valid formula as defined in the problem.

Idea: Recursion
Time complexity: O(n)
Space complexity: O(n)

Solution: C++
Java
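The recursion idea for Number of Atoms can be sketched in Python as a small recursive-descent parser (the post's solutions are C++/Java; the helper names here are my own):

```python
import re
from collections import Counter

def count_of_atoms(formula):
    # Parse atoms, digits, and parenthesised groups, multiplying the
    # counts of a group by the digits that follow its closing paren.
    i = 0

    def read_number():
        # Reads an optional run of digits at position i; defaults to 1.
        nonlocal i
        j = i
        while j < len(formula) and formula[j].isdigit():
            j += 1
        num = int(formula[i:j]) if j > i else 1
        i = j
        return num

    def parse():
        # Parses until an unmatched ')' or end of string.
        nonlocal i
        counts = Counter()
        while i < len(formula) and formula[i] != ')':
            if formula[i] == '(':
                i += 1
                inner = parse()
                i += 1  # skip ')'
                mult = read_number()
                for name, c in inner.items():
                    counts[name] += c * mult
            else:
                # Atom name: one uppercase letter, then lowercase letters.
                name = re.match(r'[A-Z][a-z]*', formula[i:]).group()
                i += len(name)
                counts[name] += read_number()
        return counts

    return ''.join(name + (str(c) if c > 1 else '')
                   for name, c in sorted(parse().items()))
```

For instance, `count_of_atoms("K4(ON(SO3)2)2")` produces "K4N2O14S4", with names emitted in sorted order and counts of 1 left implicit, as the problem requires.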
https://carmalou.com/how-to/2017/08/06/how-to-change-hostname-for-raspberry-pi.html
If you are using ssh shortcuts for your Raspberry Pi, it can be a good idea to give each Pi you set up a unique hostname. If you are using the Raspbian distribution from raspberrypi.org, raspberrypi.local is the default hostname. Since IP addresses can change, it's not a good idea to use an IP address in your ssh config file. So instead, I'll show you how to make the hostname unique on each different Pi.

First, while ssh'd into the Pi, run the hostname command. This will show you the current hostname of the Pi. Next, use sudo nano /etc/hosts and change the hostname in that file. (It's listed next to the IP address.) Then use sudo nano /etc/hostname and change the hostname in this file. Lastly, use reboot to restart the Pi, then ssh back in using the new hostname, and voila!

Questions? Tweet me!
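The same edits can be done non-interactively with sed instead of nano. This is a sketch only: kitchen-pi is a placeholder name, and it works on copies of the files so you can try it anywhere — on a real Pi you would target /etc/hosts and /etc/hostname with sudo, then reboot:

```shell
OLD_NAME=raspberrypi
NEW_NAME=kitchen-pi   # pick any unique name

# Stand-ins for /etc/hosts and /etc/hostname
printf '127.0.1.1\t%s\n' "$OLD_NAME" > hosts.copy
printf '%s\n' "$OLD_NAME" > hostname.copy

# Swap the old hostname for the new one in the hosts file,
# and overwrite the hostname file with the new name
sed -i "s/$OLD_NAME/$NEW_NAME/g" hosts.copy
printf '%s\n' "$NEW_NAME" > hostname.copy

cat hosts.copy hostname.copy
```

On the Pi itself the sed line becomes `sudo sed -i "s/$OLD_NAME/$NEW_NAME/g" /etc/hosts`, and `echo "$NEW_NAME" | sudo tee /etc/hostname` replaces the second printf.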