url: stringlengths 14–2.42k
text: stringlengths 100–1.02M
date: stringlengths 19–19
metadata: stringlengths 1.06k–1.1k
https://fstaals.net/
# Frank Staals Hey everyone. Welcome to my small corner of the internet. I am Frank Staals and I am currently working as a PostDoc at MADALGO in Aarhus. Before that, I was a PhD student at the Department of Information and Computing Sciences of Utrecht University, under the supervision of Marc van Kreveld and Maarten Löffler. I did my Bachelor's and Master's in Computer Science at the Eindhoven University of Technology. ## Research Interests My main research interest lies in Computational Geometry. I mostly work on algorithms for trajectory analysis, that is, the analysis of moving objects. An example of such a problem would be: “Given the trajectories of a set of animals, for example birds, can we compute when the animals travel in groups, and how do these groups change over time?” However, I’m also interested in other Computational Geometry problems, especially if they have some “real-world” application somewhere down the line. Problems like this usually show up in areas like geographic information science (GIS) or visualisation. If you are interested, see my list of publications for more examples. ## Personal I am an avid mountain biker. I race in the “Amateurs” category, so on weekends I can usually be found at cross-country races or marathons. Other than that, I like functional and dependently typed programming: how cool is it that you can enforce and prove properties of your program! Consider for example these type-safe Red-Black trees. ## Contact Information Email: f.<lastName>@cs.au.dk or <firstName>@fstaals.net
2016-05-04 07:52:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4685528576374054, "perplexity": 1396.7705550390556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122533.7/warc/CC-MAIN-20160428161522-00044-ip-10-239-7-51.ec2.internal.warc.gz"}
http://exxamm.com/QuestionSolution6/Aptitude/Samir+s+age+is+one+fourth+of+his+father+s+age+and+two+third+of+his+sister+Reema+s+age+What+is+the+ratio+of+the+ages+of/2272545436
Samir's age is one-fourth of his father's age and two-thirds of his sister Reema's age. What is the ratio of the ages of Samir, Reema and their father? ### Question Asked by a Student from EXXAMM.com Team Q 2272545436. Samir's age is one-fourth of his father's age and two-thirds of his sister Reema's age. What is the ratio of the ages of Samir, Reema and their father, respectively? IBPS-CLERK 2017 Mock Prelims A 3 : 2 : 8 B 3 : 4 : 8 C 2 : 3 : 8 D 4 : 3 : 8 E None of these #### HINT (Provided By a Student and Checked/Corrected by EXXAMM.com Team)
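The hint text itself is not present in the extract; one way to work the problem, writing $S$, $R$, $F$ for the ages of Samir, Reema and their father:

$$S = \tfrac{1}{4}F \;\Rightarrow\; F = 4S, \qquad S = \tfrac{2}{3}R \;\Rightarrow\; R = \tfrac{3}{2}S,$$

$$S : R : F \;=\; S : \tfrac{3}{2}S : 4S \;=\; 2 : 3 : 8,$$

which matches option C.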
2019-03-23 14:48:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1935352236032486, "perplexity": 13361.187527980339}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202872.8/warc/CC-MAIN-20190323141433-20190323163433-00131.warc.gz"}
https://root.cern.ch/?from=hackcv&hmsr=hackcv.com&utm_medium=hackcv.com&utm_source=hackcv.com
## √-1 ROOT enables statistically sound scientific analyses and visualization of large amounts of data: today, more than 1 exabyte (1,000,000,000 gigabytes) of data is stored in ROOT files. The Higgs was found with ROOT! As high-performance software, ROOT is written mainly in C++. You can use it on Linux, macOS, or Windows; it works out of the box. ROOT is open source: use it freely, modify it, contribute to it! ## \$ _ ROOT comes with an incredible C++ interpreter, ideal for fast prototyping. Don’t like C++? ROOT integrates super-smoothly with Python thanks to its unique dynamic and powerful Python ⇄ C++ binding. Or what about using ROOT in a Jupyter notebook? Issues? GitHub! (09 Sep 2020) First of all: a big THANK YOU for submitting bugs for ROOT. We appreciate the effort: finding the issue-submission web page, explaining what the problem is, and sometimes even providing a reproducer. Living at ROOT's bleeding edge, with Conda (19 Aug 2020) Sometimes ROOT users get bitten by nasty bugs, and we ROOT developers try our best to squash them as quickly as possible. New features and improvements are also constantly merged into ROOT’s main development branch, either coming from ROOT developers or as contributions from the community. Be it bug fixes or new features, however, users typically have to wait for a ROOT release that includes them to try them out. Or, you know, compile the very latest ROOT from source code – but nobody has time for that! Building ROOT fast - distributed compilation with distcc (12 Aug 2020) Hello there, here is Stefan with a short guide on how you can distribute ROOT compilation with distcc! ROOT Docker images reloaded (29 Jul 2020) Hello, Enrico here with an update on the ROOT Docker images! New ROOT Web Site! (22 Jun 2020) If you are reading this blog post it means you are surfing on the new ROOT web site! More ...
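To make the Python ⇄ C++ binding concrete, here is a minimal PyROOT sketch (assuming a ROOT installation with the Python bindings enabled); it books a histogram, fills it from ROOT's built-in Gaussian, fits it back, and saves a plot:

```python
import ROOT

# Book a 1-D histogram: name, "title;x-axis;y-axis", bins, range.
h = ROOT.TH1F("h_demo", "Gaussian toy data;x;entries", 100, -4.0, 4.0)

# Fill with 10,000 samples drawn from ROOT's built-in "gaus" function,
# then fit the same shape back.
h.FillRandom("gaus", 10000)
h.Fit("gaus")

# Draw on a canvas and write the plot to disk.
c = ROOT.TCanvas("c", "demo")
h.Draw()
c.SaveAs("h_demo.png")
```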
2020-09-28 19:48:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23731961846351624, "perplexity": 6392.331758181569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401604940.65/warc/CC-MAIN-20200928171446-20200928201446-00386.warc.gz"}
https://www.nature.com/articles/s41467-020-14889-7
# Nanobody-enabled monitoring of kappa opioid receptor states ## Abstract Recent studies show that GPCRs rapidly interconvert between multiple states, although our ability to interrogate, monitor and visualize them is limited by a relative lack of suitable tools. We previously reported two nanobodies (Nb39 and Nb6) that stabilize distinct ligand- and efficacy-delimited conformations of the kappa opioid receptor. Here, we demonstrate via X-ray crystallography a nanobody-targeted allosteric binding site by which Nb6 stabilizes a ligand-dependent inactive state. As Nb39 stabilizes an active-like state, we show how these two state-dependent nanobodies can provide real-time reporting of ligand-stabilized states in cells in situ. Significantly, we demonstrate that chimeric GPCRs can be created with engineered nanobody binding sites to report ligand-stabilized states. Our results provide both insights regarding potential mechanisms for allosterically modulating KOR with nanobodies and a tool for reporting the real-time, in situ dynamic range of GPCR activity. ## Introduction G protein-coupled receptors (GPCRs) represent the single most abundant and targeted class of druggable proteins in the human genome1,2. GPCRs sense an immense array of intra- and extracellular stimuli including neurotransmitters, metabolites, odorants, and drugs2, and allosterically modulate a variety of transducers including as many as 16 Gα proteins3 and 4 arrestins to control downstream cellular signaling4. It is now recognized that different GPCR agonists and antagonists can stabilize distinct receptor conformations with varying potencies and efficacies, thereby differentially activating downstream signaling pathways. The process by which ligands differentially modulate cellular signaling is known as functional selectivity5, and ligands that display this property are referred to as biased. Recent advances in spectroscopic and crystallographic studies have facilitated the observation of these ligand-specific states, supporting the existence of an array of GPCR conformational states6,7,8. The molecular mechanisms by which ligands stabilize these GPCR conformational changes are, however, incompletely understood. A key to understanding these transition mechanisms is to obtain structures of each state, not merely generic inactive and active states. Nanobodies (Nbs)9 potentially provide a powerful platform to stabilize specific receptor states. As single-domain antibodies developed from camelids, nanobodies display unique properties such as small size and high affinity. Nanobodies have been widely used to stabilize the active states of many GPCRs, including the β2-adrenergic (β2AR), M2-muscarinic (M2R), µ-opioid (MOR), κ-opioid (KOR), and angiotensin II type 1 (AT1R) receptors10,11,12,13,14,15. Apart from the recognition and stabilization of specific conformations, nanobodies also act as allosteric effectors and can affect both ligand binding affinity and functional activity8. For example, stabilization of an active and inactive conformation and dramatic alterations in ligand affinity at the β2AR by allosteric nanobodies were recently reported8.
These state-specific nanobodies have also been widely applied as conformational biosensors to monitor GPCR dynamics both in vitro and in vivo16,17,18,19. The available structures of receptor–nanobody complexes reveal that the specific interfaces frequently overlap, at least in part, with G protein interaction surfaces20. We previously reported two allosteric nanobodies against KOR—a positive allosteric nanobody (Nb39), and a negative allosteric nanobody (Nb6)14. The KOR has emerged as an important therapeutic target because G protein-biased KOR agonists are reported to possess analgesic activity without the liabilities of addiction and respiratory depression associated with conventional opioid analgesics, which target the μ receptor21,22,23,24. As well, KOR antagonists and KOR negative allosteric modulators have been proposed both as potential antidepressants and as treatments for addiction25,26,27. To date, no bona fide allosteric binding sites have been identified in KOR, nor are allosteric modulators available for KOR. To systematically probe the activities of these nanobodies and determine how they reciprocally regulate ligand binding, we investigate the allosteric range of KOR in the absence and presence of Nb6 and Nb39. While Nb39 stabilizes KOR in a fully active state similar to MOR stabilized by the Gi heterotrimer14,28, we show here that Nb6 stabilizes an inactive state distinct from that stabilized by a neutral antagonist. To better understand where Nb6 binds to the receptor and how it allosterically modulates ligand binding, we solve the complex structure of KOR bound to the inverse agonist JDTic in the presence of Nb6. Analysis of the KOR–Nb6 complex reveals a unique nanobody binding interface by which Nb6 negatively allosterically modulates KOR. Remarkably, we find that this site can be transferred to several other GPCRs to provide an engineered and functional allosteric site. Finally, given that Nb6 and Nb39 interact with distinct states of KOR, we report that they can be used as biosensors to detect ligand-stabilized states in real time with subcellular resolution in living cells. ## Results ### Molecular pharmacology of allosteric nanobodies Previously we reported via a Bioluminescence Resonance Energy Transfer (BRET) approach that KOR agonists promote the association of Nb39 with, and the dissociation of Nb6 from, KOR14. A simplified model depicting how agonists affect receptor–nanobody interaction is shown in Fig. 1a. The initial net BRET ratio between KOR and Nb6 in the absence of agonist is high (Fig. 1b), implying that Nb6 preferentially binds to the inactive state of KOR. After exposure to high concentrations of the KOR-selective agonists Salvinorin A (Sal A)29 or Dynorphin A (1–17), conformational changes are stabilized that are no longer favorable for Nb6 binding. Consequently, Nb6 apparently dissociates from the receptor. As also shown in Fig. 1b, the KOR-selective antagonist JDTic30 modestly potentiates Nb6 binding, as might be predicted from the fact that Nb6 already has stabilized the inactive state. In Fig. 1c, the agonist-induced active state of KOR is recognized by Nb39, as evidenced by enhanced nanobody binding as monitored by BRET. We note that Sal A and Dynorphin A display different efficacies for Nb39, with Sal A being considerably more efficacious than Dynorphin A.
Two factors may account for this observed differential efficacy for Nb39 recruitment between Sal A and Dynorphin A: the first is that the receptor state or conformation is ligand-specific8 (Dynorphin A may induce an active conformation which is not favorable for Nb39 recognition); the second is that cellular accessibility to the two agonists is different. Pertinent to this second possibility, we note that Sal A, as a diterpene, could quickly diffuse across the membrane and activate intracellular receptors (as supported by confocal imaging in this manuscript), while Dynorphin A, as a 17-amino-acid peptide, would inefficiently penetrate the cell membrane, leading to lesser interactions with intracellular receptors. To further quantify the apparent allosteric effects of Nb6 and Nb39, multiple assays were conducted to examine how these nanobodies differentially regulate KOR ligand binding and functional activity. First, saturation binding was conducted using the KOR-selective agonist radioligand 3H-U69,593 in the presence of Nb6 or Nb39. Based on the extended ternary complex model of GPCR activation31, the coupling of G protein or its mimetic (e.g., a nanobody) increases the number of high-affinity agonist binding sites8,13,14. Here we demonstrated that Nb6 reduces 3H-U69,593 binding by stabilizing an inactive state of KOR, which is unfavorable for agonist binding, while Nb39 increases radioligand binding by stabilizing an active-like state (Fig. 1d). Secondly, a series of KOR ligands with distinct scaffolds were tested in competition binding assays in the presence of each nanobody to investigate the allosteric range of the two state-specific nanobodies. For the competition binding assays, the radioligand 3H-U69,593 is not a suitable radiotracer because, as mentioned earlier, its binding kinetics are also affected by the nanobodies. Another available antagonist radioligand, 3H-JDTic, was also tested, and no significant changes were detected for its binding affinity (Ki) or equilibrium constant (Kd) with KOR in the presence of Nb6 or Nb39 (Fig. 1e, Supplementary Figs. 1a and 2); 3H-JDTic was thus suitable for the studies reported here. For the endogenous ligand Dynorphin A (1–17), a 200-fold higher affinity for KOR in the presence of Nb39 compared to the affinity in the presence of Nb6 was observed (Fig. 1f). Unexpectedly, the binding affinity of a KOR neutral (or silent) antagonist, LY245998932, was increased ~6-fold in the presence of Nb6 (Fig. 1g). This phenomenon is reminiscent of a prior observation at the β2AR where Nb60 and Nb80 promoted a full allosteric range for agonists from very-low-affinity to high-affinity states, respectively. Apo (unliganded) receptors exist in multiple inactive states that exchange quickly and thus display conformational heterogeneity8. Similarly, for a receptor bound only to an agonist, multiple states such as inactive, intermediate, and active states have been observed, and more complete stabilization of the active state (high-affinity) requires the engagement of transducers (e.g., G proteins or their mimetics) and agonists6,7,33. The increased affinity of LY2459989 supports the observation that the addition of Nb6 to KOR helps stabilize the receptor in a state which is favorable for LY2459989's binding. We also determined the binding of agonist ligands of diverse scaffolds, including the peptide Dynorphin A (1–13), the diterpene Sal A, the epoxy-morphinan MP1104, and the arylacetamide GR89696. As shown in Supplementary Fig.
1b, all displayed the expected effects induced by the positive (Nb39) and negative (Nb6) allosteric nanobodies: their affinities were attenuated by the negative allosteric nanobody Nb6 and enhanced by the positive allosteric nanobody Nb39. One exception was that the agonist MP1104's binding affinity was only slightly affected by Nb6 compared to other agonists, which emphasizes that the allosteric effect is also probe-dependent34. MP1104 displays a very slow off-rate14 and promotes a specific receptor conformation which is no longer favorable for Nb6 binding. Finally, the functional activity of several KOR agonists or partial agonists was evaluated in the presence of Nb6; we previously reported that Nb39 sterically inhibits G protein binding and enhances β-arrestin recruitment14. Here we find that Nb6, consistent with its ability to stabilize an inactive state, decreases the potency and/or efficacy of selected KOR agonists (Supplementary Fig. 3). Together, these data reveal that Nb6 and Nb39 are potent allosteric nanobody modulators that can be used to stabilize inactive and active KOR states. Furthermore, these states can be distinguished from the apo state by a variety of criteria. ### Structure of the KOR–JDTic–Nb6 complex As Nb6 promotes a unique receptor state distinct from either the neutral-antagonist-bound or Nb39-stabilized states, we asked whether the Nb6-stabilized structure differs from previously reported inactive- and active-like KOR structures14,35. To crystallize the KOR–Nb6 complex, the same N-terminal BRIL fusion construct as the one previously engineered to obtain the KOR–Nb39 active-state structure14 was used, to avoid any modification of the intracellular regions of the receptor. Multiple KOR antagonists were then screened in thermostability assays for their ability to stabilize the complex. Of those tested, the antagonist JDTic36 afforded maximal stabilization of the KOR–Nb6 complex (Supplementary Fig. 4a), as monitored by size-exclusion chromatography. We found JDTic to be a potent inverse agonist at KOR (Supplementary Fig. 4b) via a generic assay which measures the Gi-mediated inhibition of cAMP production. With more specific BRET-based assays, JDTic appears to differentially decrease the coupling between constitutively active KOR and specific Gi/o family members, with greatest efficacy for Gz, GαoA, and GαoB proteins (Supplementary Fig. 4c). As JDTic promotes a stable KOR–Nb6 complex, the X-ray structure of KOR–JDTic–Nb6 was determined to 3.3 Å resolution in the space group P21221 with two monomers per asymmetric unit (Fig. 2a, Supplementary Fig. 4d and Supplementary Table 1). The observed anti-parallel crystal packing is likely attributable to the unique crystallization conditions and is unlikely to represent dimerization under physiological conditions. We observed strong densities for KOR, Nb6, and JDTic (Supplementary Fig. 4e). No electron density was observed for the presumably disordered N-terminal BRIL fusion, as was the case for the active-state KOR structure14. Furthermore, because the two monomers (chains A and B) are nearly identical, we focused on the chain A complex for subsequent analysis. Together with the previously reported KOR–JDTic (nanobody-free)35 and KOR–Nb3914 structures, we next analyzed the overall helical movements by comparison with the KOR–Nb6 structure reported here. The overall structure of KOR–Nb6 is reminiscent of the inactive-state KOR–JDTic structure (Fig.
2b, c), although substantial differences were observed at both extracellular [transmembrane helix (TM) 1 and 5, extracellular loop (ECL) 2 and 3] and intracellular regions [TM5, TM6 and intracellular loop (ICL) 2]. Significantly, ECL2 moves ~6 Å to form an apparent lid on top of JDTic, a feature recently observed in other GPCR structures, including the LSD-bound 5-HT2B serotonin receptor37 and the risperidone-bound D2 dopamine receptor38. The formation of this lid may explain why the binding kinetics (e.g., kon and koff) in KOR preincubated with Nb6 were slightly changed (t1/2 increases ~2-fold) compared to those in untreated KOR (Supplementary Fig. 1a). It is also noteworthy that interactions between Nb6 and TM5/TM6 facilitate a mild inward movement of TM6 (~2 Å), as measured by the distance between the Cα atoms of D2666.27 (superscript denotes Ballesteros–Weinstein numbering). The overall structural similarity between the JDTic-only and Nb6-bound forms is likely because the inverse agonist activity of JDTic stabilizes the receptor in a low-affinity state for agonists, as the binding affinity of JDTic is nearly identical with or without Nb6 (Fig. 1d). The structural alignment between KOR–Nb6 and active-state KOR–Nb39 revealed canonical active-state-like conformational changes and side-chain displacements in the binding pocket and various conserved motifs (Supplementary Fig. 5). Specifically, an inward movement of extracellular TM1, TM5, TM6, and TM7 (Fig. 2d) leads to the contraction of the binding pocket, a common event upon GPCR activation, as also observed in several other Class A GPCR crystal structures10,11,13,14. Compared with the active-state KOR, the conformation of ICL3 undergoes significant displacement due to the large outward movement of TM6 (Fig. 2e). Similarly, displacement of intracellular TM5 and TM7 in the active-state KOR is likely a direct effect of TM614, and the role of TM6 as a driving force for the expansion of the intracellular part indicates that its conformational differences may play an essential role in determining receptor coupling specificity for G proteins39. ### Unique interactions revealed between Nb6 and KOR As shown in Fig. 3a, the complementarity-determining region 3 (CDR3) loop of Nb6 inserts into a crevice between TM5 and TM6, where it forms major interactions with KOR. This interface is distinct from previously reported nanobody–GPCR interfaces (e.g., β2AR-Nb60, β2AR-Nb80, M2R-Nb9-8, MOR-Nb39) and the active-state-like KOR–Nb39 interface (Supplementary Fig. 6a), all of which bind via the intracellular core region, which is also critical for G protein11 and/or β-arrestin40 interactions. From a nanobody perspective, even though the CDR3 adopts a flexible loop conformation, the electron density in this region was sufficient to identify the orientation of the residues' side chains (Supplementary Fig. 6b). For instance, W106 in Nb6 extends into the crevice and forms hydrophobic interactions with L2535.65 and the backbone of R2746.35, likely anchoring TM5 and TM6 and preventing the latter from moving outward. R102 in Nb6 forms ionic interactions with two charged residues (D2666.27 and R2706.31) in the receptor, providing an H-bond network linking TM5 and TM6. All of these interactions appear to contribute to the high affinity of the KOR–Nb6 interaction, because mutations of these residues completely abolish the binding of Nb6 to KOR in BRET assays (Fig. 3b, Supplementary Fig. 6c).
Interestingly, Y34 and I105 in Nb6 do not directly interact with the receptor, but mutations of these two residues also attenuate the interaction, likely because Y34 and I105 interact with R102 and W106, respectively, assisting the latter two residues to adopt the right orientation (Fig. 3b, c). Although mutations of several other residues (L2535.65 and R2746.35) do not appear to affect Nb6 binding to the receptor, they do change the stability of the KOR/Nb6 complex, as the decreased net BRET ratio and increased potency of the agonist Sal A indicate that a weaker interaction between Nb6 and KOR has been formed (Supplementary Fig. 6c). Although the overall differences between KOR–Nb6 and KOR–JDTic are subtle, there are some changes specific to the Nb6-stabilized state. Apart from the large movement of ECL2, a twist of the intracellular region of TM5 and TM6 has been observed in the KOR–Nb6 structure (Supplementary Fig. 7a). This twist causes large conformational displacements of the side chains of residues in the interface engaged by the CDR3 loop of Nb6 (Supplementary Fig. 7b). The relatively different position of TM5 and TM6 also supports that the KOR–Nb6 structure may represent a state distinct from the previous KOR–JDTic structure, because the dynamics of TM5 and TM6 have been seen as a major event during receptor transition from inactive to active states11,28,40. Because of the stabilization afforded by Nb6, we were able for the first time to visualize an intact ICL3 (i.e., ICL3 is not visualized in the previous KOR–JDTic or KOR–Nb39 structures). A comparison of ICL3 between the KOR–Nb6 and MOR-Gi structures reveals a significant translocation, probably necessitated to accommodate Gi protein coupling (Supplementary Fig. 7c). Since Nb6 is a negative allosteric modulator and its binding pocket represents a potentially allosteric site, we next investigated how the residues located in the binding pocket contribute to the KOR signaling profile. While alanine mutagenesis shows that most of the residues have minor effects on KOR's G protein activation and β-arrestin translocation, L2535.65A and R2716.32A appear to significantly reduce receptor activity (Supplementary Fig. 7d). In particular, R2716.32A affects G protein signaling but not arrestin recruitment, indicating that this residue may play a unique role in the KOR-G protein interaction. This is supported by the observation in the closely related MOR-Gi structure that R2776.32 in MOR forms an H-bond interaction with the backbone of L353G.H5.25 in the Gαi1 protein28. The R/K6.32 is conserved between different GPCR families: in β2AR, K6.32 forms important interactions with Gαs39, while in class F GPCRs, R6.32 acts as a conserved molecular switch to regulate receptor activation and pathway selection41. The L2535.65A mutation affects both G protein and arrestin signaling simultaneously, indicating that it may play a global role in stabilizing the TM5-TM6 interaction in the active state. Overall, the KOR–Nb6 interface represents an allosteric site distinct from either the G protein or β-arrestin binding site, which may have potential for the design of KOR allosteric modulators. Regarding the negative allosteric mechanism of Nb6, we note that in the previously described inactive-state β2AR-Nb60 structure8, residue T103CDR3 in Nb60 acts as a bridge forming an ionic lock between D3.49 in TM3 and E6.30 in TM6 of β2AR (Fig. 3c, left).
This ionic lock is conserved in several aminergic GPCRs in their inactive states; its breaking appears to be one of the hallmark events of receptor activation8,42. However, there is no such ionic bridge in opioid receptors, as there is a Leu6.30 in all opioid receptors, indicating that a different mechanism may be involved in the Nb6-mediated KOR inactivation. Here, R3.50 in KOR forms two H-bonds with T2.39 in TM2 and T6.34 in TM6, respectively (Fig. 3c, middle), thereby stabilizing the inactive conformation. Further, R102 in Nb6 forms a hydrogen bond with D2666.27, resulting in a new H-bond network formed by D2666.27, R2706.31, and L2535.65 between TM5 and TM6 (Fig. 3c, right). These interactions jointly stabilize an inactive state of KOR by potentiating the interaction between TM5 and TM6 and, importantly, were not observed in the KOR–JDTic or KOR–Nb39 structures. Taken together, these observations suggest that Nb6 stabilizes a different inactive state through the combined effects of steric hindrance by W106 and ionic locking by R102, as mutations of these residues abolish Nb6 binding. Mutations of the critical residues at the interface (R102 or W106 in Nb6) also abolish its inhibitory action on Sal A-mediated cAMP inhibition (Supplementary Fig. 8a). ### Nb6/39 as bidirectional biosensors for opioid receptors As shown earlier, Nb6 and Nb39 interact with distinct, non-overlapping receptor conformations. These discrete interaction profiles are potentially ideal for the development of biosensors to probe ligand-stabilized receptor conformations in real time. To test the applicability of using Nb6 and Nb39 as KOR conformational biosensors, we first used the BRET assay to determine if the Nb6 or Nb39 interactions with KOR were reversible. For these studies, KOR-Rluc (Renilla luciferase) and Nb6-mVenus or Nb39-mVenus were co-expressed in HEK 293T cells. We first tested the ability of Nb6, the inactive-state-associated nanobody, to report changes in receptor activity in vitro. The agonist Sal A caused a reasonably rapid (t1/2 = 50 ± 10 s) decrease in Nb6-KOR BRET (Fig. 4a), which plateaued after ~10 min. The subsequent addition of the antagonist JDTic rapidly (t1/2 = 75 ± 12 s) increased the BRET ratio to a level equivalent to that without agonist treatment. JDTic has a much higher binding affinity than Sal A at wild-type KOR (Kd, 0.065 nM for JDTic vs 10.9 nM for Sal A) (Fig. 1d, Supplementary Fig. 1b). JDTic also has a slower off-rate in the presence of Nb6. Conversely, Nb6 coupling destabilizes Sal A binding and thus accelerates the dissociation of the agonist Sal A, as supported by the weaker Sal A affinity measured compared to the KOR/buffer group (Supplementary Fig. 8b). We next evaluated the ability of Nb39, the active-state-associated nanobody, to report real-time changes in ligand-stabilized states in live cells in vitro. Here the agonist Sal A immediately enhanced the BRET signal between Nb39 and KOR (Fig. 4b). JDTic rapidly (t1/2 = 107 ± 15 s) reversed the KOR–Nb39 association, presumably by stabilizing the inactive state (Fig. 4b). The relatively slower t1/2 for JDTic in KOR/Nb39 (~107 s) compared to KOR/Nb6 (~75 s) is likely due to cooperativity from Nb39, because Nb39 has been shown to significantly increase the binding affinity of Sal A (Supplementary Fig. 8b). This reversible interaction between receptor and nanobody could provide a potential conformation-specific biosensor approach for investigating these transitions for a variety of applications.
We next investigated the association of Nb6 and Nb39 with KOR using confocal microscopy to determine if these ligand-stabilized transitions can be visualized in live cells in real time with subcellular resolution. Here the distribution in single cells was monitored using KOR-mScarlet and Nb6- or Nb39-mVenus. In the first experiment, KOR-mScarlet and Nb6-mVenus are initially co-localized, as predicted from our previous observations that Nb6 binds ligand-free KOR (Fig. 4c). After treatment with Sal A, Nb6 dissociated from the plasma membrane (where it had been co-localized with KOR) and displayed increased cytosolic localization (Fig. 4c, Supplementary Fig. 9a, b). Subsequent treatment with the KOR-selective antagonist JDTic reversed this phenomenon. Next, when KOR-mScarlet and Nb39-mVenus were visualized, the Nb39-mVenus fluorescence was mainly found in the cytoplasm before drug treatment. This indicates that Nb39 recruitment to plasma membrane receptors does not occur in the absence of agonist (Fig. 4d, left panel) and is consistent with our BRET studies, which show minimal basal BRET. Upon Sal A stimulation, Nb39 fluorescence rapidly accumulated at the plasma membrane, co-localizing with KOR-mScarlet (Fig. 4d, middle panel). In addition to the Nb39 localization to the plasma membrane, we also observed high co-localization of Nb39 and KOR in discrete subcellular compartments in the cytoplasm (Fig. 4d, Supplementary Fig. 9c, d), reminiscent of the Golgi apparatus19. Indeed, in cells expressing the Golgi marker GalT-RFP (Supplementary Fig. 9e, f), Nb39-mVenus rapidly localized to this GalT-RFP cellular compartment following Sal A treatment. The apparent translocation occurs within a few seconds of Sal A addition, consistent with KOR being resident in the Golgi apparatus under basal conditions (Supplementary Fig. 9f). Subsequent exposure to the antagonist JDTic partially reversed this, apparently inducing dissociation of Nb39 from KOR by stabilizing the inactive conformation (Fig. 4d, right panel; Supplementary Fig. 9e). These results indicate that these conformation-specific nanobodies can be used to report both cell-surface and intracellular ligand-stabilized receptor conformations in a time-dependent and reversible fashion. A potential advantage of using Nb6 to probe the inactive state is that, as Nb6 pre-binds to ligand-free receptors, it allows us to establish a baseline signal followed by a ligand response signal, which is necessary for accurately measuring the amplitude of change. Finally, we note that due to the highly conserved sequence and structure of opioid receptors (Supplementary Fig. 10a), Nb6 also binds to the μ, δ and nociceptin opioid receptors with high affinity (Supplementary Fig. 10b). Nb6 thus can also report the activation and inactivation of the nociceptin receptor (Supplementary Fig. 10c), while Nb39 has been shown to be unable to bind this receptor14. Thus, together with Nb39, this nanobody pair provides a toolset for studying the dynamics of conformational transitions in situ for other members of the opioid receptor family. ### Nb6 as a potentially useful GPCR conformational sensor Inspired by Nb6's capability as a biosensor for opioid receptors and by the unique interactions between KOR and Nb6, we predicted that Nb6 might be made to bind other GPCRs, and thereby report their conformational transitions, by making chimeric GPCRs. Our hypothesis (Fig. 5a) was that swapping out a portion of ICL3 from an arbitrary GPCR with ICL3 from KOR would provide a site for Nb6 binding.
Considering that the interaction between Nb6 and KOR requires an intact ICL3, which has been implicated as playing an essential role in G protein/β-arrestin recognition11,40, we screened for optimal sites for KOR ICL3 insertion which would have a minimal effect on the activity of a generic Class A GPCR. We subsequently tested seven Class A GPCRs which interact with different G protein subfamilies: Gαs (DRD1, 5HT4R and HRH2), Gαi (NTS1R and 5HT5AR), and Gαq (5HT2AR and ETA). The full sequences for every chimeric receptor, after optimization, are provided in the Supplementary document. In Fig. 5b, we present data obtained with the Gαq-coupled human 5-HT2A serotonin (5-HT2AR) and ETA endothelin (ETA) receptors, both of which are important drug targets found in a subfamily of GPCRs quite distinct from opioid receptors. Neither wild-type 5-HT2AR nor ETA displayed a measurable interaction with Nb6 in the BRET assay (Fig. 5b). When their ICL3 regions were replaced by the KOR ICL3 (see details in the Supplementary document), Nb6 was able to bind 5-HT2AR or ETA as monitored by BRET. Remarkably, their endogenous agonists (5-HT or endothelin-1) caused the dissociation of Nb6 from the chimeric 5-HT2AR or ETA receptors, respectively (Fig. 5b). Importantly, the chimeric receptors remain functional, as quantified by G protein activation (Fig. 5c). Other chimeric receptors (D1-dopamine, 5-HT4-serotonin, H2-histamine, NTSR1-neurotensin and 5-HT5A-serotonin) were investigated in a similar manner, and they all responded robustly to their respective endogenous ligands without significantly affecting G protein coupling (Supplementary Figs. 11 and 12). Given that other biosensors such as the miniG proteins have been frequently applied to monitor receptor activation43, we also wondered if Nb6 could report receptor activation as robustly as miniG proteins. Nb6 and miniG were tested in parallel to monitor receptor activation and inactivation in the presence of agonists and antagonists, respectively. As shown in Fig. 5d, Nb6 and miniGq perform comparably at detecting the active and inactive states of the 5-HT2AR or ETA receptors. Interestingly, the miniGi occasionally failed to monitor receptor activation (e.g., 5HT5AR) by the endogenous ligand 5-HT, and the miniGs proteins were unable to report receptor inactivation by antagonists (Supplementary Fig. 13). Encouraged by this success, we created additional chimeric receptors with several understudied “orphan” GPCRs (e.g., GPR111, GPR149, and GPR151). Nb6 displayed robust interaction with these chimeric receptors in their basal state, while the wild-type receptors do not associate with Nb6 (Fig. 5e). These examples and observations with seven different GPCRs imply that this chimeric approach could be used to faithfully interrogate ligand-stabilized states for the majority of Family A GPCRs for which such nanobodies are unavailable. ## Discussion Here we show how two KOR state-specific nanobodies can be used as real-time reporters for monitoring both ligand-dependent and ligand-independent conformational states in vitro and in situ. To illuminate the mechanism by which one of these (Nb6) stabilizes a unique inactive state, we obtained the crystal structure of KOR bound to this inactive-state-stabilizing nanobody. Together with the previously determined KOR structures with the antagonist JDTic and the Nb39-stabilized active state with MP1104, a comprehensive structural picture of the conformational changes related to KOR activation and inactivation emerges.
Finally, we provide a potential platform that affords real-time in vitro and in situ monitoring of ligand-stabilized GPCR conformations using Nb6 and chimeric GPCRs. With regard to the conformational transitions of KOR, our data imply that the so-called unliganded or “apo” state as it exists in transfected cells likely consists of a mixture of active and inactive states, each of which can be differentially stabilized by conformation-selective nanobodies. Intriguingly, we find that Nb6 stabilizes a very low-agonist-affinity state reminiscent of that stabilized by the β2AR nanobody Nb608. Significantly, although they both stabilize a very low-affinity inactive state, they do so by interacting with entirely different receptor domains. Thus, for instance, an ionic lock formed in both the inactive β2AR and adenosine A2A receptors was reported to determine the conversion between the S1 and S2 inactive states33,42. In the β2AR stabilized by Nb60, an S2 inactive state has been observed with an intact ionic lock mediated by Nb608. For opioid receptors, there is no such ionic lock, and as a consequence Nb6 utilizes an entirely different mode of inactive-state stabilization. This unique Nb6-KOR binding surface conceivably could provide a platform for the discovery of allosteric modulators by, for instance, automated docking approaches44. On the other hand, the active-state-stabilizing nanobodies Nb39 and Nb80 have overlapping binding sites with G proteins and interact with the core of the opioid and β2-adrenergic receptors. These findings imply that active-state-stabilizing nanobodies might share a conserved mode of action. The overall structural differences in KOR are subtle in the absence or presence of Nb6. This is likely due to the potent inverse agonist activity of JDTic, which can stabilize the receptor in a very low-affinity state, as supported by its nearly identical affinity in the presence and absence of Nb6. However, for the neutral antagonist LY2459989, Nb6 stabilizes a state which is more favored by LY2459989 compared with the ligand-only state. This finding provides further support for the notion that even for the inactive KOR conformation, at least two states likely exist. How these two low-affinity agonist states might interconvert is currently unclear. Our observation that certain nanobody-stabilized states are favored by specific ligands (e.g., probe dependence) provides a potential opportunity for the identification of new-scaffold agonists and antagonists, as was recently proposed for other receptor–nanobody complexes45. Our finding that merely swapping the cognate regions of several other GPCRs with those of KOR provides an approach for reporting ligand-induced conformations deserves further comment. Producing nanobodies typically requires purified GPCRs and suitable hosts (llamas or sharks) and then several months of additional work to identify and validate their activity9. Typically a mixture of nanobodies is recovered, which must be fully characterized as to specificity for the GPCR target and state. Alternatively, a more recent approach uses a yeast-based display of nanobody libraries, although here too the effort involved is considerable—especially for identifying those which are state-specific15,46. In this paper, we show that for at least several unrelated GPCRs, merely swapping a few residues is sufficient to afford robust interactions in a ligand-modulated fashion.
Further, that this simple approach affords the ability to monitor conformational transitions in real time in vitro and in situ raises the possibility that it could be a relatively general approach. As we have only tested this on Family A GPCRs, we do not know if it can ultimately be ported to other GPCR families, although that is currently being investigated. The technology—although useful—is not without potential confounds. We initially were concerned that this type of swap would alter the signaling ability of the chimeric receptors. In at least the seven cases investigated here, we have not been able to measure any significant degree of attenuation or potentiation of G protein interactions. Certainly, however, future users of the approach will need to verify that signaling, ligand recognition and so on are unaltered before proceeding further with the technology. We also note that the kinetics of ligand-induced alterations in nanobody interactions are rapid and that, given the limitations of our current BRET-based readers, we were not able to calculate accurate kinetic constants for some of these interactions. Finally, we note that KOR has emerged as a primary target for the development of medications devoid of the side effects of conventional opioids; such drugs include nalfurafine and buprenorphine. How these drugs activate opioid receptors and produce differential cellular effects is still understudied due to a lack of specific tools. Recently Stoeber et al. used a nanobody-based biosensor with a different nanobody (Nb33) to investigate the actions of different opioid drugs in neurons with subcellular resolution19. Ligand-specific activation patterns were revealed at a high level of spatiotemporal resolution, specifically between endogenous peptides and non-peptide ligands. The availability of both inactive- and active-state nanobodies for KOR (Nb6 and Nb39, respectively) now enables us to look at the full dynamic range of opioid receptors in a variety of cellular environments. Additionally, our chimeric receptor approach potentially provides a robust and straightforward platform for investigating these phenomena with a large number of unrelated GPCRs. ## Methods ### Crystallization of KOR–JDTic–Nb6 complex The nanobodies used in this work were generated in llama, and the detailed method is described in Che et al.9,14. Nb6 and Nb39 plasmids were transformed into E. coli WK6 cells (provided by the Steyaert lab), expressed in the periplasm, and purified following steps 70–73 of the published protocol9. Nanobodies were concentrated and desalted into buffer (10 mM HEPES, 100 mM NaCl, and 10% glycerol) and stored at −80 °C for future use. For crystallography trials, we used a fusion protein of human KOR with an amino-terminal apocytochrome b562RIL (BRIL)14. The BRIL-KOR fusion protein was expressed and purified from Sf9 insect cells (Expression Systems, #94-001S). The purified receptor was incubated with 10 µM JDTic (final concentration) throughout purification. The N-terminal 10× His-tag was removed by the addition of His-tagged TEV protease (homemade) and incubation overnight at 4 °C. Protease, cleaved His-tag and uncleaved protein were removed by passing the suspension through equilibrated TALON IMAC resin (Clontech) and collecting the flow-through. Excess Nb6 (KOR/Nb6 molar ratio 1:2) was then added to the protein sample and incubated for 3 h.
KOR–JDTic–Nb6 complexes were then concentrated to ~30 mg/ml with a 100 kDa molecular-weight cut-off Vivaspin 500 centrifugal concentrator (Sartorius Stedim). KOR–JDTic–Nb6 complexes were reconstituted into the lipidic cubic phase (LCP) by mixing the protein solution and a monoolein/cholesterol (10:1 w/w) mixture in a ratio of 2:3 v/v (protein solution/lipid) using the twin-syringe method47. Crystallization was set up in 96-well glass sandwich plates (Marienfeld GmbH) using 50 nl LCP drops dispensed from a 10 µl gas-tight syringe (Hamilton) using a handheld dispenser (Art Robbins Instruments) and overlaid with 1 µl of precipitant solution. Upon optimization, KOR–JDTic–Nb6 crystals were obtained in 100 mM Tris pH 7.0, 360–400 mM ammonium citrate dibasic, 28–32% PEG 400. Crystals grew to a maximum size of 50 µm × 40 µm × 30 µm within three days and were harvested directly from the LCP matrix using MiTeGen micromounts before flash-freezing and storage in liquid nitrogen. ### Data collection and structure solution and refinement X-ray data were collected at the 23ID-B and 23ID-D beamlines (GM/CA CAT) at the Advanced Photon Source, Argonne, IL, using a 10 µm minibeam at a wavelength of 1.0330 Å and a Dectris Eiger-16M detector and a Dectris Pilatus3-6M detector, respectively. Diffraction data were collected by exposing the crystals for 0.5 s to an unattenuated beam using 0.5° oscillation per frame. The resulting diffraction data from 31 crystals were indexed, integrated, scaled, and merged using HKL200048. Initial phases were obtained by molecular replacement in PHASER49 using three independent models: a truncated 7TM portion of KOR (PDB ID: 4DJH), the nanobody from the KOR–MP1104–Nb39 complex (PDB ID: 6B73), and the thermostabilized apocytochrome b562RIL protein (PDB ID: 1M6T)50. Two copies each of the 7TM portion of the receptor and of the nanobody Nb6, but no BRIL, were found in the asymmetric unit. Refinement was performed with PHENIX51 and REFMAC552, followed by manual examination and rebuilding of the refined coordinates in the program COOT53 using 2mFo - DFc and mFo - DFc maps. The data collection and refinement statistics are shown in Supplementary Table 1. The structural alignment and other analyses were performed in PyMOL. ### Bioluminescence Resonance Energy Transfer (BRET) assay To measure KOR-nanobody recruitment, HEK 293T cells (ATCC, #CRL-11268) were co-transfected in a 1:3 ratio with human KOR containing a C-terminal Renilla luciferase (RLuc8) and nanobody containing a C-terminal mVenus. After at least 16 h, transfected cells were plated in poly-lysine-coated 96-well white clear-bottom cell culture plates in plating media (DMEM + 1% dialyzed FBS) at a density of 25,000–50,000 cells in 200 µl per well and incubated overnight. The next day, media was decanted and cells were washed twice with 60 µL of drug buffer (20 mM HEPES, 1X HBSS, pH 7.4), then 60 µL of the RLuc substrate coelenterazine h (Promega, 5 µM final concentration in drug buffer) was added per well and incubated an additional 5 min to allow for substrate diffusion. Afterward, 30 µL of drug (3×) in drug buffer (20 mM HEPES, 1X HBSS, 0.1% BSA, pH 7.4) was added per well and incubated for another 5 min. Plates were immediately read for both luminescence at 485 nm and fluorescent mVenus emission at 530 nm for 1 s per well using a Mithras LB940 multimode microplate reader.
The mVenus/RLuc ratio was calculated per well, and the net BRET ratio was calculated by subtracting the mVenus/RLuc ratio in wells without Nb-mVenus present from the mVenus/RLuc ratio of each well. The net BRET ratio was plotted as a function of drug concentration using the log(agonist) vs. response model in GraphPad Prism 5 (GraphPad Software Inc., San Diego, CA):

$$Y = \mathrm{Bottom} + \frac{\mathrm{Top} - \mathrm{Bottom}}{1 + 10^{(\mathrm{LogEC}_{50} - X)}},$$ (1)

where X is the log of dose or concentration; Y is the response, increasing as X increases; Top and Bottom are the plateaus in the same units as Y; and LogEC50 is in the same log units as X. To measure Gα protein (Gαs, Gαq, Gαi/o, Gαz)-mediated activation, HEK 293T cells were co-transfected with the receptor, Gα-Rluc, Gβ3 and Gγ9-GFP in a 5:1:5:5 ratio; to measure mVenus-arrestin-1 or -2 translocation, HEK 293T cells were co-transfected with receptor-Rluc and mVenus-arrestin in a 1:5 ratio. 24 h after transfection, cells were plated in 96-well plates, and the following steps are the same as in the KOR-nanobody recruitment assay. To quantify the t1/2 in Fig. 4 and Supplementary Fig. 9, data from agonist or antagonist treatment were replotted individually with a non-linear regression model using the binding kinetics "Dissociation: one-phase exponential decay" equation in GraphPad Prism 5:

$$Y = (Y_0 - \mathrm{NS}) \cdot e^{-K X},$$ (2)

where X is time; Y is binding, usually total (constrain NS to 0.0 if specific); Y0 is Y at time 0, in units of Y; NS is the nonspecific binding, in units of Y; and K is the rate constant in inverse units of X. The half-life is 0.69/K. ### Generation of chimeric receptors To make chimeric constructs, the KOR ICL3 DNA sequence was cloned into the receptor-Rluc constructs (human serotonin 5HT2AR, 5HT4R, 5HT5AR; dopamine D1 (DRD1); endothelin A (ETA); neurotensin 1 (NTS1R); histamine 2 (HRH2)). The detailed DNA and protein sequence information has been provided in the Supplementary document. For different receptors, the insertion sites may vary to obtain a maximum response. These chimeric constructs with a C-terminal Rluc tag were used in BRET assays to test Nb6 recruitment following the BRET protocol; the same chimeric constructs without the C-terminal Rluc tag were used in BRET assays to measure Gα protein activation following the protocols shown above. To compare the kinetics between chimeric receptor/Nb6 and wild-type receptor/miniG proteins, HEK 293T cells were co-transfected with chimeric receptors carrying a C-terminal Rluc and Nb6-mVenus (GPCR-KOR-Rluc + Nb6-mVenus) or with wild-type receptor and mVenus-miniG (GPCR-Rluc + mVenus-miniG) at a 1:5 ratio, respectively. The transfected cells were treated following the BRET protocol. During the test, luminescence was recorded in the order: background for 30 s, +10 µM agonist for 30 s, +10 µM antagonist for 30 s. Cells transfected with GPCR-KOR-Rluc or GPCR-Rluc only serve as the background, and the net BRET ratio is the recorded value in the experimental groups minus the background. Data were then plotted in Prism 5.
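As a concrete illustration of the fit in Eq. (1), the same sigmoid can be fit outside Prism; a minimal Python sketch with made-up example data (not values from this article):

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(logx, bottom, top, logec50):
    # Eq. (1): log(agonist) vs. response sigmoid with unit Hill slope.
    return bottom + (top - bottom) / (1.0 + 10.0 ** (logec50 - logx))

# Hypothetical net BRET ratios at log10(drug concentration, M).
logx = np.array([-10.0, -9.0, -8.0, -7.0, -6.0, -5.0])
y = np.array([0.02, 0.05, 0.18, 0.45, 0.58, 0.60])

(bottom, top, logec50), _ = curve_fit(dose_response, logx, y,
                                      p0=[y.min(), y.max(), -7.0])
print(f"EC50 ~ {10 ** logec50:.2e} M")
```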
### Radioligand binding assays Binding assays were performed using Sf9 membrane fractions expressing the crystallization construct BRIL-KOR or HEK 293T membrane preparations transiently expressing wild-type or mutant KOR. Binding assays were set up in 96-well plates in the standard binding buffer (50 mM Tris, 0.1 mM EDTA, 10 mM MgCl2, 0.1% BSA, pH 7.40). Saturation binding assays with 0.1–20 nM [3H]-U69,593 in standard binding buffer were performed to determine the equilibrium dissociation constant (Kd) and Bmax, with 10 µM JDTic (final concentration) used to define nonspecific binding. For competition binding, 50 μL each of 3H-JDTic (final 0.8 nM), drug solution (3×) and homogeneous membrane solution were incubated in a 96-well plate in the standard binding buffer in the absence or presence of purified nanobodies (final concentration 5 µM). Reactions (either saturation or competition binding) were incubated for 2 h at room temperature in the dark, terminated by rapid vacuum filtration onto chilled 0.3% PEI-soaked GF/A filters, followed by three quick washes with cold washing buffer (50 mM Tris HCl, pH 7.40), and read. Results (with or without normalization) were analyzed in GraphPad Prism 5.0 using the "one site - fit Ki" model (Eqs. (3) and (4)):

$$\mathrm{LogEC}_{50} = \log\left(10^{\mathrm{LogK}_i}\left(1 + \frac{[\mathrm{Hot}]}{K_d}\right)\right),$$ (3)

$$Y = \mathrm{Bottom} + \frac{\mathrm{Top} - \mathrm{Bottom}}{1 + 10^{(X - \mathrm{LogEC}_{50})}},$$ (4)

where X is the log molar concentration of unlabeled ligand; Y is the binding in any convenient units; Top and Bottom are the plateaus in units of the Y-axis; LogKi is the log of the equilibrium dissociation constant of the tested ligand; [Hot] is the concentration of radioligand; and Kd is the equilibrium dissociation constant of the radioligand. ### Ligand association and dissociation assays Binding assays were performed at room temperature using membrane preparations of HEK 293T cells transiently expressing human wild-type KOR. The same aliquots of membranes were preincubated with buffer, 5 µM Nb6 or Nb39 proteins for 30 min. Radioligand dissociation and association assays were performed in parallel using the same concentrations of radioligand, membrane preparations and binding buffer (50 mM Tris, 10 mM MgCl2, 0.1 mM EDTA, 0.1% BSA, pH 7.4). All assays used at least two concentrations of radioligand (0.6–1.5 nM [3H]-JDTic). For dissociation assays, membranes were incubated with radioligand for at least 2 h at room temperature before the addition of 10 μl of 10 μM excess cold JDTic to the 200 μl membrane suspension at designated time points. For association experiments, 100 μl of radioligand was added to 100 μl membrane suspensions at designated time points. Time points spanned 1 min to 6 h, depending on experimental conditions and radioligand. The association (kon) and dissociation (koff) values were calculated using the Prism models "Association kinetics" to obtain kon,

$$Y = Y_{\max} \cdot (1 - e^{-k_{ob} X}),$$ (5)

and "Dissociation: one-phase exponential decay" to obtain koff,

$$Y = (Y_0 - \mathrm{NS}) \cdot e^{-K X},$$ (6)

where X is time; Y is binding, usually total (constrain NS to 0.0 if specific); Y0 is Y at time 0, in units of Y; Ymax is the maximum binding in Y units; NS is the binding at very long times, in units of Y; and K is the rate constant in inverse units of X. The half-life is 0.69/K.
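Equations (3) and (4) amount to fitting an IC50 and converting it to Ki via the Cheng-Prusoff relation; a minimal sketch (the radioligand concentration and Kd below are the values quoted in this article, while the IC50 is made up):

```python
def ki_from_ic50(ic50_nm, hot_nm, kd_nm):
    # Cheng-Prusoff correction embedded in Eq. (3):
    # Ki = IC50 / (1 + [Hot]/Kd).
    return ic50_nm / (1.0 + hot_nm / kd_nm)

# 0.8 nM 3H-JDTic (tracer concentration above), Kd = 0.065 nM
# (JDTic's Kd quoted earlier); the 2.0 nM IC50 is hypothetical.
print(ki_from_ic50(ic50_nm=2.0, hot_nm=0.8, kd_nm=0.065))  # Ki in nM
```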
An alternative assay was used to measure JDTic kinetics in the presence of Nb6 or Nb39, as shown in Supplementary Fig. 2: the kon was determined by incubating KOR cell membranes with the indicated concentrations of [3H]-JDTic (200, 400, 800 pM) for various time periods. Assays were set up the same as the association assay above. Data were fitted using a one-phase exponential association function to yield an observed on-rate (kob) (Eq. (5)). kob was plotted against the [3H]-JDTic concentration and used for the indirect determination of koff (the y-intercept, at [Hot] = 0) (Eqs. (7) and (8)):

$$k_{ob} = k_{on} \cdot [\mathrm{Hot}] + k_{off},$$ (7)

$$K_d = \frac{k_{off}}{k_{on}}.$$ (8)

For the determination of kon and koff for unlabeled Sal A, membranes containing wild-type KOR were incubated with [3H]-JDTic and several concentrations of Sal A (1, 10, 100, 1000, 10000 nM). Nonspecific binding was determined by the addition of 10 μM JDTic. Immediately (at time = 0 min), plates were harvested by vacuum filtration onto 0.3% polyethyleneimine pre-soaked 96-well filter mats (Perkin Elmer) using a 96-well Filtermate harvester, followed by three washes with cold wash buffer (50 mM Tris pH 7.4). Scintillation cocktail (Meltilex, Perkin Elmer) was melted onto dried filters and radioactivity was counted using a Wallac Trilux MicroBeta counter (PerkinElmer). Data were analyzed using "dissociation: one-phase exponential decay" or "association kinetics: two or more concentrations of hot radioligand" in GraphPad Prism 5.0. The previously determined [3H]-JDTic kon and koff rates of KOR (+ buffer, Nb6 or Nb39) were used to estimate the kon and koff rates of Sal A using the "kinetics of competitive binding" equation in GraphPad Prism 5.0:

$$Y = Q \cdot \left( \frac{K_4 \cdot \mathrm{DIFF}}{K_F \cdot K_S} + \frac{K_4 - K_F}{K_F} \cdot e^{-K_F X} - \frac{K_4 - K_S}{K_S} \cdot e^{-K_S X} \right).$$ (9)

Specifically:

$$K_A = K_1 [L] \cdot 10^{-9} + K_2$$
$$K_B = K_3 [I] \cdot 10^{-9} + K_4$$
$$S = \sqrt{(K_A - K_B)^2 + 4 K_1 K_3 [L][I] \cdot 10^{-18}}$$
$$K_F = 0.5 (K_A + K_B + S), \qquad K_S = 0.5 (K_A + K_B - S)$$
$$\mathrm{DIFF} = K_F - K_S, \qquad Q = \frac{B_{\max} K_1 [L] \cdot 10^{-9}}{\mathrm{DIFF}}$$

X is time in seconds or minutes; Y is the specific binding in cpm or some other unit; K1 is the association rate constant of the radioligand (inverse molar times inverse time); K2 is the dissociation rate constant of the radioligand (inverse time); [L] is the radioligand concentration in nM; Bmax is the maximum binding, in units of the Y-axis; [I] is the cold drug concentration in nM; K3 is the on-rate of the tested drug (inverse molar times inverse time); K4 is the off-rate of the tested drug (inverse time).
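The indirect koff determination in Eqs. (7) and (8) is just a straight-line fit of kob against radioligand concentration; a minimal sketch with made-up kob values at the three [3H]-JDTic concentrations used above:

```python
import numpy as np

# Hypothetical observed on-rates (min^-1) at 200, 400, 800 pM
# [3H]-JDTic; these numbers are for illustration only.
hot_nm = np.array([0.2, 0.4, 0.8])
kob = np.array([0.035, 0.055, 0.095])

# Eq. (7): kob = kon*[Hot] + koff, so a linear fit yields
# kon as the slope and koff as the y-intercept.
kon, koff = np.polyfit(hot_nm, kob, 1)
kd = koff / kon  # Eq. (8), in nM
print(f"kon = {kon:.3f} /nM/min, koff = {koff:.3f} /min, Kd = {kd:.3f} nM")
```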
### Confocal Imaging

To prepare for confocal imaging, HEK 293T cells were first plated in MatTek dishes pre-coated with poly-l-lysine at a density of 75,000 cells/plate in 10% FBS DMEM. The next day, KOR-mScarlet and Nb6-mVenus or Nb39-mVenus were co-transfected with TransIT-2020 transfection reagent. These reporters were co-expressed in HEK 293T cells at a 4:1 ratio to ensure an excess of receptor and minimize free nanobody. For Golgi labeling, KOR, Nb39-mVenus and GalT-RFP were co-transfected. After at least 16 h, the medium was changed to 1% dFBS DMEM and the cells were incubated at 37 °C for another 24 h. On the experiment day, the cell medium was removed and changed to drug buffer (20 mM HEPES, 1X HBSS, pH 7.4, supplemented with 0.1% BSA, 0.01% L-ascorbic acid and 4.5 g/L glucose). Measurements were carried out on an inverted Olympus FV3000RS confocal microscope equipped with a Tokai Hit WSKMX live-cell chamber (set to maintain the sample at room temperature), a LUMPL FL 60× SuperCorr Plan ApoN 1.40 NA oil immersion objective, and four cooled GaAsP photomultiplier detectors. One-way/round-trip galvanometer/resonant scanning was used for acquisition with a 1× optical zoom. The 488 nm diode laser was used for mVenus excitation and the 561 nm diode laser was used to excite mScarlet or RFP, with the laser power set between 0.1 and 5% according to the power slider in the FV31S-SW microscope software, for monitoring fluorescence intensities. The variable slit had a nominal bandwidth of 500–545 nm for the green emission channel and 590–650 nm for the red emission channel. All image processing was done using the open-source Fiji package54.

To quantify KOR–Nb6 translocation, binary cell masks (with values of 255 or 0) were created by thresholding the Nb6-mVenus intensity in single-cell images at each time point using Triangle segmentation, and all holes were filled using the binary masking function. A set of cell ring masks was created by successively eroding the mask images by 15 pixels and subtracting the inner region masks. One-bit masks (with values of 1 or 0) were multiplied with the original cell images to create 16-bit images for the outer and inner rings. The ratio of the inner median pixel (cytosol) value to the outer ring (membrane) pixel value was measured and plotted as a function of time. To quantify KOR–Nb39 translocation, colocalization with the Golgi was measured by masking images for bright signals in the green (Nb39-mVenus) and red (GalT-RFP) channels. Golgi masks were created by combining these red and green binary masks with the Boolean AND function. The Golgi masks were multiplied with the original Nb39-mVenus cell images for each time point, and the resulting signals were measured and normalized to the peak values.
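The ring-mask quantification described above is straightforward to express in code. Below is a minimal, library-free C++ sketch of the same idea: a threshold-derived binary mask, erosion, ring extraction, and a cytosol-to-membrane median ratio. The function names, the one-pixel 4-neighbourhood erosion, and the use of the median for both regions are simplifications of ours; the actual analysis used Fiji's implementations.

```cpp
#include <algorithm>
#include <vector>

using Image = std::vector<std::vector<double>>;
using Mask  = std::vector<std::vector<int>>;  // 1 = inside cell, 0 = outside

// Erode a binary mask by n pixels (4-neighbourhood, one pixel per pass).
Mask erode(const Mask& m, int n) {
    Mask cur = m;
    const int h = static_cast<int>(m.size());
    const int w = static_cast<int>(m[0].size());
    for (int pass = 0; pass < n; ++pass) {
        Mask next(h, std::vector<int>(w, 0));
        for (int y = 1; y + 1 < h; ++y)
            for (int x = 1; x + 1 < w; ++x)
                next[y][x] = cur[y][x] & cur[y - 1][x] & cur[y + 1][x]
                                       & cur[y][x - 1] & cur[y][x + 1];
        cur = next;
    }
    return cur;
}

// Median of the pixel values selected by a binary mask.
double maskedMedian(const Image& img, const Mask& m) {
    std::vector<double> vals;
    for (std::size_t y = 0; y < img.size(); ++y)
        for (std::size_t x = 0; x < img[y].size(); ++x)
            if (m[y][x]) vals.push_back(img[y][x]);
    if (vals.empty()) return 0.0;
    std::sort(vals.begin(), vals.end());
    return vals[vals.size() / 2];
}

// Ratio of inner (cytosol) median to ring (membrane) median. The ring is
// the cell mask minus the mask eroded by `width` pixels (15 in the paper).
double innerToRingRatio(const Image& img, const Mask& cell, int width) {
    const Mask inner = erode(cell, width);
    Mask ring = cell;
    for (std::size_t y = 0; y < ring.size(); ++y)
        for (std::size_t x = 0; x < ring[y].size(); ++x)
            ring[y][x] = cell[y][x] && !inner[y][x];
    const double membrane = maskedMedian(img, ring);
    return membrane > 0.0 ? maskedMedian(img, inner) / membrane : 0.0;
}
```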
### cAMP inhibition assay

To measure KOR/Gαi-mediated cAMP inhibition, HEK 293T (ATCC CRL-11268) cells were co-transfected with human KOR along with a luciferase-based cAMP biosensor (GloSensor; Promega), with or without varying concentrations of Nb6 DNA, and assays were performed similarly to those previously described55. After 16 h, transfected cells were plated into poly-l-lysine-coated 384-well white clear-bottom cell culture plates with DMEM + 1% dialyzed FBS at a density of 15,000–20,000 cells per 40 μl per well and incubated at 37 °C with 5% CO2 overnight. The next day, drug solutions were prepared in fresh drug buffer [20 mM HEPES, 1X HBSS, 0.3% bovine serum albumin (BSA), pH 7.4] at 3X drug concentration. Plates were decanted and received 20 μl per well of drug buffer (20 mM HEPES, 1X HBSS) followed by the addition of 10 μl of drug solution (3 wells per condition) for 15 min in the dark at room temperature. To stimulate endogenous cAMP via β-adrenergic–Gs activation, 10 μl luciferin (4 mM final concentration) supplemented with isoproterenol (400 nM final concentration) was added per well. Cells were again incubated in the dark at room temperature for 15 min, and luminescence intensity was quantified using a Wallac TriLux MicroBeta (Perkin Elmer) luminescence counter. Results (relative luminescence units) were plotted as a function of drug concentration, normalized to % Sal A stimulation, and analyzed using log(agonist) vs. response in GraphPad Prism 5.0 (Eq. 1).

### Tango arrestin recruitment assay

The KOR Tango constructs were designed and assays were performed as previously described56,57. HTLA cells expressing TEV-fused β-arrestin2 (kindly provided by Dr. Richard Axel, Columbia Univ.) were transfected with the KOR Tango construct with or without varying concentrations of Nb6 DNA. The next day, cells were plated in DMEM supplemented with 1% dialyzed FBS in poly-l-lysine-coated 384-well white clear-bottom cell culture plates at a density of 10,000–15,000 cells/well in a total of 40 µl. The cells were incubated for at least 6 h before receiving drug stimulation. Drug solutions were prepared in drug buffer (20 mM HEPES, 1X HBSS, 0.3% BSA, pH 7.4) at 3X and added to cells (20 µl per well) for overnight incubation. Drug solutions used for the Tango assay were exactly the same as those used for the cAMP assay. The next day, media and drug solutions were removed and 20 µl per well of BrightGlo reagent (Promega; diluted 1:20) was added. The plate was incubated for 20 min at room temperature in the dark before being counted using a luminescence counter. Results (relative luminescence units) were plotted as a function of drug concentration, normalized to % Sal A stimulation, and analyzed using log(agonist) vs. response in GraphPad Prism 5.0 (Eq. 1).

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

Data supporting the findings of this manuscript are available from the corresponding authors upon reasonable request. A reporting summary for this Article is available as a Supplementary Information file. The accession number for the coordinates and structure factors of the KOR–JDTic–Nb6 complex reported in this paper is PDB: 6VI4. The Source Data underlying Figs. 1b–g, 3b, 4a, and 5b–e and Supplementary Figs. 1a, b, 2, 3a, 4b, c, 6c, 7d, 8a, b, 10b, c, 11–13 are provided as a Source Data file.

## References

1. Wacker, D., Stevens, R. C. & Roth, B. L. How ligands illuminate GPCR molecular pharmacology. Cell 170, 414–427 (2017). 2. Roth, B. L., Irwin, J. J. & Shoichet, B. K. Discovery of new GPCR ligands to illuminate new biology. Nat. Chem. Biol. 13, 1143–1151 (2017). 3. De Lean, A., Stadel, J. M. & Lefkowitz, R. J. A ternary complex model explains the agonist-specific binding properties of the adenylate cyclase-coupled beta-adrenergic receptor. J. Biol. Chem. 255, 7108–7117 (1980). 4. Lefkowitz, R. J. & Shenoy, S. K. Transduction of receptor signals by beta-arrestins. Science 308, 512–517 (2005). 5. Urban, J. D. et al. Functional selectivity and classical concepts of quantitative pharmacology. J. Pharm. Exp. Ther. 320, 1–13 (2007). 6. Nygaard, R. et al. The dynamic process of beta(2)-adrenergic receptor activation. Cell 152, 532–542 (2013). 7. Manglik, A. et al. Structural insights into the dynamic process of beta2-adrenergic receptor signaling. Cell 161, 1101–1111 (2015). 8. Staus, D. P. et al. Allosteric nanobodies reveal the dynamic range and diverse mechanisms of G-protein-coupled receptor activation. Nature 535, 448–452 (2016). 9. Pardon, E. et al.
A general protocol for the generation of Nanobodies for structural biology. Nat. Protoc. 9, 674–693 (2014). 10. Rasmussen, S. G. et al. Structure of a nanobody-stabilized active state of the beta(2) adrenoceptor. Nature 469, 175–180 (2011). 11. Rasmussen, S. G. et al. Crystal structure of the beta2 adrenergic receptor-Gs protein complex. Nature 477, 549–555 (2011). 12. Kruse, A. C. et al. Activation and allosteric modulation of a muscarinic acetylcholine receptor. Nature 504, 101–106 (2013). 13. Huang, W. et al. Structural insights into micro-opioid receptor activation. Nature 524, 315–321 (2015). 14. Che, T. et al. Structure of the nanobody-stabilized active state of the kappa opioid receptor. Cell 172, 55–67 e15 (2018). 15. Wingler, L. M., McMahon, C., Staus, D. P., Lefkowitz, R. J. & Kruse, A. C. Distinctive activation mechanism for angiotensin receptor revealed by a synthetic nanobody. Cell 176, 479–490 e412 (2019). 16. Irannejad, R. et al. Conformational biosensors reveal GPCR signalling from endosomes. Nature 495, 534–538 (2013). 17. Staus, D. P. et al. Regulation of beta2-adrenergic receptor function by conformationally selective single-domain intrabodies. Mol. Pharm. 85, 472–481 (2014). 18. Livingston, K. E., Mahoney, J. P., Manglik, A., Sunahara, R. K. & Traynor, J. R. Measuring ligand efficacy at the mu-opioid receptor using a conformational biosensor. Elife 7, https://doi.org/10.7554/eLife.32499 (2018). 19. Stoeber, M. et al. A genetically encoded biosensor reveals location bias of opioid drug action. Neuron 98, 963–976 e965 (2018). 20. DeVree, B. T. et al. Allosteric coupling from G protein to the agonist-binding pocket in GPCRs. Nature 535, 182–186 (2016). 21. White, K. L. et al. Identification of novel functionally selective kappa-opioid receptor scaffolds. Mol. Pharm. 85, 83–90 (2014). 22. White, K. L. et al. The G protein-biased kappa-opioid receptor agonist RB-64 is analgesic with a unique spectrum of activities in vivo. J. Pharm. Exp. Ther. 352, 98–109 (2015). 23. Brust, T. F. et al. Biased agonists of the kappa opioid receptor suppress pain and itch without causing sedation or dysphoria. Sci. Signal 9, ra117 (2016). 24. Ho, J. H. et al. G protein signaling-biased agonism at the kappa-opioid receptor is maintained in striatal neurons. Sci. Signal 11, https://doi.org/10.1126/scisignal.aar4309 (2018). 25. Beardsley, P. M., Howard, J. L., Shelton, K. L. & Carroll, F. I. Differential effects of the novel kappa opioid receptor antagonist, JDTic, on reinstatement of cocaine-seeking induced by footshock stressors vs cocaine primes and its antidepressant-like effects in rats. Psychopharmacology (Berlin) 183, 118–126 (2005). 26. Reindl, J. D. et al. Antidepressant-like effects of the novel kappa opioid antagonist MCL-144B in the forced-swim test. Pharmacology 81, 229–235 (2008). 27. Rasmussen, K., White, D. A. & Acri, J. B. NIDA’s medication development priorities in response to the Opioid Crisis: ten most wanted. Neuropsychopharmacology 44, 657–659 (2019). 28. Koehl, A. et al. Structure of the micro-opioid receptor-Gi protein complex. Nature 558, 547–552 (2018). 29. Roth, B. L. et al. Salvinorin A: a potent naturally occurring nonnitrogenous kappa opioid selective agonist. Proc. Natl Acad. Sci. USA 99, 11934–11939 (2002). 30. Thomas, J. B. et al. Identification of the first trans-(3R,4R)- dimethyl-4-(3-hydroxyphenyl)piperidine derivative to possess highly potent and selective opioid kappa receptor antagonist activity. J. Med Chem. 44, 2687–2690 (2001). 31. 
Samama, P., Cotecchia, S., Costa, T. & Lefkowitz, R. J. A mutation-induced activated state of the beta 2-adrenergic receptor. Extending ternary complex model. J. Biol. Chem. 268, 4625–4636 (1993). 32. Mitch, C. H. et al. Discovery of aminobenzyloxyarylamides as kappa opioid receptor selective antagonists: application to preclinical development of a kappa opioid receptor antagonist receptor occupancy tracer. J. Med. Chem. 54, 8000–8012 (2011). 33. Dror, R. O. et al. Identification of two distinct inactive conformations of the beta2-adrenergic receptor reconciles structural and biochemical observations. Proc. Natl Acad. Sci. USA 106, 4689–4694 (2009). 34. Valant, C., Felder, C. C., Sexton, P. M. & Christopoulos, A. Probe dependence in the allosteric modulation of a G protein-coupled receptor: implications for detection and validation of allosteric ligand effects. Mol. Pharm. 81, 41–52 (2012). 35. Wu, H. et al. Structure of the human kappa-opioid receptor in complex with JDTic. Nature 485, 327–332 (2012). 36. Thomas, J. B. et al. Identification of (3R)-7-hydroxy-N-((1S)-1-[[(3R,4R)-4-(3-hydroxyphenyl)- 3,4-dimethyl-1-piperidinyl]methyl]-2-methylpropyl)-1,2,3,4-tetrahydro- 3-isoquinolinecarboxamide as a novel potent and selective opioid kappa receptor antagonist. J. Med. Chem. 46, 3127–3137 (2003). 37. Wacker, D. et al. Crystal structure of an LSD-bound human serotonin receptor. Cell 168, 377–389 (2017). 38. Wang, S. et al. Structure of the D2 dopamine receptor bound to the atypical antipsychotic drug risperidone. Nature 555, 269–273 (2018). 39. Rose, A. S. et al. Position of transmembrane helix 6 determines receptor G protein coupling specificity. J. Am. Chem. Soc. 136, 11244–11247 (2014). 40. Kang, Y. et al. Crystal structure of rhodopsin bound to arrestin by femtosecond X-ray laser. Nature 523, 561–567 (2015). 41. Wright, S. C. et al. A conserved molecular switch in Class F receptors regulates receptor activation and pathway selection. Nat. Commun. 10, 667 (2019). 42. Hino, T. et al. G-protein-coupled receptor inactivation by an allosteric inverse-agonist antibody. Nature 482, 237–240 (2012). 43. Wan, Q. et al. Mini G protein probes for active G protein-coupled receptors (GPCRs) in live cells. J. Biol. Chem. 293, 7466–7473 (2018). 44. Korczynska, M. et al. Structure-based discovery of selective positive allosteric modulators of antagonists for the M2 muscarinic acetylcholine receptor. Proc. Natl Acad. Sci. USA 115, E2419–E2428 (2018). 45. Pardon, E. et al. Nanobody-enabled reverse pharmacology on G-protein-coupled receptors. Angew. Chem. Int Ed. Engl. 57, 5292–5295 (2018). 46. McMahon, C. et al. Yeast surface display platform for rapid discovery of conformationally selective nanobodies. Nat. Struct. Mol. Biol. 25, 289–296 (2018). 47. Caffrey, M. & Cherezov, V. Crystallizing membrane proteins using lipidic mesophases. Nat. Protoc. 4, 706–731 (2009). 48. Minor, W., Cymborowski, M., Otwinowski, Z. & Chruszcz, M. HKL-3000: the integration of data reduction and structure solution–from diffraction images to an initial model in minutes. Acta Crystallogr. D: Biol. Crystallogr. 62, 859–866 (2006). 49. McCoy, A. J. et al. Phaser crystallographic software. J. Appl. Crystallogr. 40, 658–674 (2007). 50. Chu, R. et al. Redesign of a four-helix bundle protein by phage display coupled with proteolysis and structural characterization by NMR and X-ray crystallography. J. Mol. Biol. 323, 253–262 (2002). 51. Adams, P. D. et al. PHENIX: a comprehensive Python-based system for macromolecular structure solution. 
Acta Crystallogr. D: Biol. Crystallogr. 66, 213–221 (2010). 52. Murshudov, G. N., Vagin, A. A. & Dodson, E. J. Refinement of macromolecular structures by the maximum-likelihood method. Acta Crystallogr. D. 53, 240–255 (1997). 53. Emsley, P., Lohkamp, B., Scott, W. G. & Cowtan, K. Features and development of Coot. Acta Crystallogr. D: Biol. Crystallogr. 66, 486–501 (2010). 54. Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012). 55. Fenalti, G. et al. Molecular control of delta-opioid receptor signalling. Nature 506, 191–196 (2014). 56. Liu, W. et al. Serial femtosecond crystallography of G protein-coupled receptors. Science 342, 1521–1524 (2013). 57. Kroeze, W. K. et al. PRESTO-Tango as an open-source resource for interrogation of the druggable human GPCRome. Nat. Struct. Mol. Biol. 22, 362–369 (2015).

## Acknowledgements

This work was supported by NIH grants (PO1DA0357634; R37DA045657, and RO1MH112205), the NIMH Psychoactive Drug Screening Program Contract, and the Michael Hooker Distinguished Chair of Pharmacology (to B.L.R.). We thank INSTRUCT, part of the European Strategy Forum on Research Infrastructures (ESFRI), and the Research Foundation—Flanders (FWO) for their support to the nanobody discovery and thank Nele Buys and Katleen Willibal for the technical assistance. R.H.J. Olsen was also supported by grant F31-NS093917. We thank Michael S. Placzek for providing LY2459989. We also thank Roshanak Irannejad for providing the GalT-RFP construct. We gratefully acknowledge M.J. Miley and the UNC Macromolecular Crystallization Core for advice and use of their equipment for crystal harvesting and transport, which is supported by the National Cancer Institute (award number P30CA016086). We thank J. Smith and R. Fischetti and the staff of GM/CA@APS, which has been funded with Federal funds from the National Cancer Institute (ACB-12002) and the National Institute of General Medical Sciences (AGM-12006). This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility, operated for the DOE Office of Science by Argonne National Laboratory (contract no. DE-AC02-06CH11357).

## Author information

### Contributions

T.C. designed experiments, expressed and screened nanobodies, receptor constructs, and receptor ligands, crystallized nanobody–receptor complex, collected diffraction data, refined the structure, performed ligand binding and functional assays, analyzed all data, and prepared the manuscript. J.G.E. helped with confocal live-cell imaging and analyzed the data and prepared the manuscript. B.E.K. designed experiments and helped with diffraction data collection and processed diffraction data. K.K. helped with structure refinement. R.H.J.O. helped with transducerome screening. S.W. helped with crystal optimization. N.S. helped with live-cell imaging analysis and quantifications. J.S. and E.P. immunized llamas, generated nanobody library and selected and identified all nanobodies. S.C.Z. helped characterize the chimeric receptors. I.F.C. provided the JDTic compound. D.W. supervised the structure determination strategy, helped with diffraction data collection, processed the diffraction data, solved and refined the structure. B.L.R. designed the experiments, was responsible for the overall project strategy and management and prepared the manuscript.

### Corresponding authors

Correspondence to Tao Che or Bryan L. Roth.
## Ethics declarations

### Competing interests

The authors declare no competing interests.

## Additional information

Peer review information Nature Communications thanks Dmitry Veprintsev, Jean-Philippe Pin and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Che, T., English, J., Krumm, B.E. et al. Nanobody-enabled monitoring of kappa opioid receptor states. Nat Commun 11, 1145 (2020). https://doi.org/10.1038/s41467-020-14889-7
2022-05-18 20:12:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5411155223846436, "perplexity": 13154.407254254107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522309.14/warc/CC-MAIN-20220518183254-20220518213254-00669.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3614399
## Class example: limit of a function using definition

I am having trouble understanding how to find the limit of a function (using the definition of a limit). I have a class example, and was wondering if anyone could walk me through the steps.

1. The problem statement, all variables and given/known data

Using the definition of the limit, show that lim_(x→2) x² = 4.

f(x) = x², c = 2, L = 4

Given an arbitrary ε>0, take δ=min{1,ε/5}
If x≠2 and |x-2|<δ then |x-2|<1 and |x-2|<ε/5
|f(x)-L| = |x²-4| = |(x-2)(x+2)| = |x-2||x+2|
|x-2|<1 => 1<x<3 => 3<x+2<5 => |x+2|<5
|x-2||x+2| < (ε/5)(5) = ε
so |f(x)-L|<ε

2. Relevant equations

We say that lim_(x→c) f(x) = L if: $\forall$ε>0 $\exists$δ>0 $\forall$x$\in$dom f, if x≠c and |x-c|<δ then |f(x)-ε|<L

3. The attempt at a solution

The biggest thing I am confused about is how the professor got δ. Did he have to do the later work first and then go back and plug in the answer he got? Also, the definition says that |f(x)-ε|<L, but we ended up getting |f(x)-L|<ε. Why is this? I understand that we can rearrange the equation, but then doesn't this mess up the absolute value signs?

Mentor

Quote by PirateFan308: [the problem statement and worked proof quoted above]

"The biggest thing I am confused about is how the professor got δ? Did he have to do the later work first and then went back and plugged in the answer he got?"

Your professor likely did some scratch work, starting with |x²-4|<ε, and then getting his result for δ.

"in the definition, it says that then |f(x)-ε|<L but we ended up getting |f(x)-L|<ε"

It should be |f(x)-L|<ε in the definition.

Recognitions: Homework Help Science Advisor

|f(x)-ε|<L should read |f(x)-L|<ε; the ε and the L have been swapped in your copy of the definition.

Another question: is there more than one δ that will prove this? Say,

Given an arbitrary ε>0, take δ=min{2,ε/6}
If x≠2 and |x-2|<δ, then |x-2|<2 and |x-2|<ε/6
|f(x)-L| = |x²-4| = |(x+2)(x-2)| = |x+2||x-2|
|x-2|<2 => -2<x-2<2 => 0<x<4 => 2<x+2<6 => |x+2|<6
|x-2||x+2| < (6)(ε/6) = ε
so |f(x)-L|<ε

Recognitions: Homework Help

Quote by PirateFan308: [the alternative choice δ=min{2,ε/6} above]

Sure, that choice works just as well.

Mentor

Yes, there are many ways to come up with δ.

Why is it that I must say δ=min{1,ε/5}? Would it also work if I just said δ=ε/5, without the min? I'm a bit fuzzy on how the "min" makes this true, or how the absence of "min" makes it false.

Maybe this will help? It has the definition you're looking for and many examples with solutions. Look at problems 4 and 5.
http://www.math.ucdavis.edu/~kouba/C...ciseLimit.html
Mentor

Quote by PirateFan308: Why is it that I must say δ=min{1,ε/5}? Would it also work if I just said δ=ε/5, without the min? I'm a bit fuzzy on how the "min" makes this true, or how the absence of "min" makes it false.

If ε > 5, then taking δ = ε/5 without the min won't work. Let's say ε = 10. Then the claim would be that δ = 2 satisfies the definition. But if x = 3.9, then |x-2| = 1.9 < δ, yet f(3.9) = 15.21, so |f(3.9)-4| = 11.21 > 10 = ε.

Quote by SammyS: If ε > 5, then taking δ = ε/5 won't work.

So is it standard procedure to always take δ = min if there is more than one condition? Will it ever be wrong for me to make δ = min?
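One way to make the "scratch work" behind δ = min{1, ε/5} explicit, following the steps already sketched in the thread:

$$|f(x)-L| = |x^2-4| = |x-2|\,|x+2|.$$

First demand $|x-2|<1$, which forces $1<x<3$ and hence $|x+2|<5$. Then

$$|x-2|\,|x+2| < 5\,|x-2| < 5\cdot\frac{\varepsilon}{5} = \varepsilon$$

provided also $|x-2|<\varepsilon/5$. Taking $\delta=\min\{1,\varepsilon/5\}$ guarantees both requirements at once; dropping the min lets a large ε push x far enough from 2 that the bound $|x+2|<5$ fails, which is exactly the failure SammyS's numerical example exhibits.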
2013-05-25 17:54:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6228242516517639, "perplexity": 1955.2542508581544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706009988/warc/CC-MAIN-20130516120649-00073-ip-10-60-113-184.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/75777/elementary-proof-of-the-equidistribution-theorem?answertab=votes
Elementary proof of the equidistribution theorem

I'm looking for references to (as many as possible) elementary proofs of Weyl's equidistribution theorem, i.e., the statement that the sequence $\alpha, 2\alpha, 3\alpha, \ldots \mod 1$ is uniformly distributed on the unit interval. By "elementary" I mean that it does not make use of complex analysis, in particular Weyl's criterion. Thank you very much.

- Weyl's criterion doesn't use complex analysis, just basic properties of complex numbers, and it can be stated without complex numbers at all (for example using real trigonometric functions). – Pablo Shmerkin Sep 18 '11 at 18:52
- What is your definition of equidistributed? – Igor Rivin Sep 18 '11 at 19:00
- Pablo: Since Carlo refers to Weyl's criterion under the label "complex analysis", probably he means "complex numbers". Personally I think restating Weyl's criterion using trig functions in place of e^{ix} would make the situation look overly complicated. – KConrad Sep 18 '11 at 20:12
- Presumably, any basis of $L^2$ would work (orthogonal polynomials of some sort), but I don't really see the point, since Weyl's proof is half a page. – Igor Rivin Sep 18 '11 at 20:15
- Have you checked the Kuipers-Niederreiter book to see if there's a non-Weyl proof there? – Gerry Myerson Sep 18 '11 at 20:34

There is a really easy proof that just uses the pigeonhole principle. Let $\alpha$ be irrational.

Lemma: For any $\delta > 0$, there is an $n>0$ such that $(n \alpha) \in (- \delta, \delta) \setminus \{ 0 \}$.

Proof: Choose $N$ large enough that $1/N < \delta$. Divide $[0,1]$ up into $N$ segments of length $1/N$. By pigeonhole, there are distinct $j$ and $k$ such that $(j \alpha)$ and $(k \alpha)$ land in the same segment. So $((j-k) \alpha) \in (-\delta, \delta)$. Since $\alpha$ is irrational, $((j-k) \alpha) \neq 0$. $\square$

Now, fix $0 \leq p < q \leq 1$ and $\epsilon>0$. Our goal is to show that
$$q-p-\epsilon \leq \liminf_{N \to \infty} \frac{\#\{ k \leq N : (k \alpha) \in (p,q) \}}{N} \leq \limsup_{N \to \infty} \frac{\#\{ k \leq N : (k \alpha) \in (p,q) \}}{N} \leq q-p+\epsilon.$$

Choose $\delta >0$ such that
$$\frac{(q-p)/|\rho| -1}{1/|\rho|+1} \geq q-p-\epsilon \quad \mbox{and} \quad \frac{(q-p)/|\rho| +1}{1/|\rho|-1} \leq q-p+\epsilon$$
for all $\rho$ with $0 < |\rho| < \delta$.

Choose $n$ such that $(n \alpha) \in (-\delta, \delta) \setminus \{ 0 \}$, and write $\rho = (n \alpha)$. Break the set of values $(k \alpha)$ up into arithmetic progressions based on $k$ modulo $n$, so each progression consists of the values $(\beta + j \rho)$. It is enough to prove that the lim inf and lim sup contributed by each progression lie between $q-p-\epsilon$ and $q-p+\epsilon$, as the total contribution is a weighted average of the contributions from the progressions.

Break each progression up into segments according to $\lfloor \beta + j \rho \rfloor$. The initial segment and the final segment each contain at most $1/ |\rho|$ terms. The segments in the middle contain between $1/|\rho| - 1$ and $1/|\rho|+1$ terms, of which between $(q-p)/|\rho| -1$ and $(q-p)/|\rho|+1$ lie between $p$ and $q$. Since $((q-p)/|\rho|-1)/(1/|\rho|+1)$ and $((q-p)/|\rho|+1)/(1/|\rho|-1)$ were chosen to lie in $(q-p-\epsilon, q-p+\epsilon)$, the average over the middle segments lies in the required interval. The terms from the initial and final segments can only drag the average off by at most $(2/|\rho|)/(N/n)$, which goes to $0$. QED.
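A quick numerical illustration of the lemma (the choices α = √2 and δ = 0.01 below are arbitrary; any irrational α works):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Find the first n for which the fractional part of n*alpha lies within
// delta of an integer, illustrating the pigeonhole lemma above.
int main() {
    const double alpha = std::sqrt(2.0);
    const double delta = 0.01;
    for (int n = 1; n <= 1000; ++n) {
        const double frac = n * alpha - std::floor(n * alpha);
        const double dist = std::min(frac, 1.0 - frac);  // distance to 0 mod 1
        if (dist < delta) {
            std::printf("n = %d, (n*alpha) = %.6f\n", n, frac);
            return 0;
        }
    }
    return 0;
}
```

With these choices the program prints n = 70, since 70·√2 ≈ 98.9949 is within 0.006 of an integer.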
- Hardy and Wright give a proof based on continued fractions in Section 23.10 of "An introduction to the theory of numbers" (reference valid at least in the 4th edition). Interestingly, their endnotes to Chapter 23 claim that the "equidistribution" form of the theorem is not due to Kronecker (the statement they call "Kronecker's theorem" is the density of $\{n\alpha\}$; they give a number of elementary proofs of that weaker result), but "independently to Bohl, Sierpinski and Weyl". - Not sure if this qualifies, but there is a short proof using Fourier Analysis. No hardcore stuff, just Fejér's Theorem. See Chapter 3 of Körner's Fourier Analysis book (there are 110 chapters in the book, so this really is one of the first things that is covered). - This would basically be Weyl's proof, I think - the key idea there is to smooth the characteristic function of the interval [a,b] on which one wishes to count points n\alpha. The Fejer kernel (or de la Vallee Poussin kernel) is one way to do that. (I'd have to admit that I haven't bothered actually going to look at the book before making this comment.) –  Ben Green Sep 19 '11 at 9:11 I have to admit that I hadn't bothered actually going to look at Weyl's proof before posting this answer. –  Tony Huynh Sep 20 '11 at 9:33
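Since several of the answers and comments appeal to it, it may help to record the standard statement of Weyl's criterion: a sequence $(x_n)$ is equidistributed modulo 1 if and only if

$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N} e^{2\pi i k x_n} = 0 \quad\text{for every integer } k\neq 0.$$

As the first comment notes, the "complex analysis" here is only arithmetic with complex exponentials, and the criterion can be rephrased with sines and cosines.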
2015-05-25 10:10:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886509358882904, "perplexity": 274.06416578217693}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928479.19/warc/CC-MAIN-20150521113208-00078-ip-10-180-206-219.ec2.internal.warc.gz"}
https://socratic.org/questions/549984b8581e2a1c23f5a0a8
Question #5a0a8

Dec 23, 2014

The electron configuration for $C$'s ground state is $1s^2 2s^2 2p_x^1 2p_y^1 2p_z^0$.

In order for C to be able to form 4 bonds, one electron from the $2s$ orbital is promoted to the empty $2p_z$ orbital - this is referred to as the promoted state; this leads to the formation of hybrid orbitals - one s orbital mixes with the three p orbitals to form four $sp^3$ hybrid orbitals. This is what happens to the $C$ atom in methane. Once the hybrid orbitals are formed, you can no longer refer to any of the three p orbitals individually.

[Diagram in the original answer: formation of the $sp^3$ hybrid orbitals.]

The four $sp^3$ orbitals now form sigma bonds with four $H$ atoms to produce the tetrahedral-shaped methane molecule.

So, to answer your question, the $2p_z$ orbital is not drawn because the $C$ atom only has $sp^3$ orbitals in the $CH_4$ molecule.

A good explanation with nice figures can be found here: http://chemwiki.ucdavis.edu/Organic_Chemistry/Fundamentals/Hybrid_Orbitals
2022-01-25 13:12:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6402263045310974, "perplexity": 680.6418572949559}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304835.96/warc/CC-MAIN-20220125130117-20220125160117-00530.warc.gz"}
https://cameramath.com/expert-q&a/Arithmetic/4-Compare-the-fractions-using-4-Compare-the-fractions-using-lt-gt
4. Compare the fractions using $$< , >$$ , or $$=$$ .

$$\frac{4}{6} > \frac{5}{10}$$
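To verify the comparison, put both fractions over a common denominator of 30:

$$\frac{4}{6}=\frac{20}{30}, \qquad \frac{5}{10}=\frac{15}{30}, \qquad 20>15 \;\Rightarrow\; \frac{4}{6}>\frac{5}{10}.$$

Equivalently, $4/6 \approx 0.67$ while $5/10 = 0.5$.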
2022-08-10 01:13:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1929337978363037, "perplexity": 3231.886106921972}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00592.warc.gz"}
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=4934884
### #ActualZael Posted 25 April 2012 - 03:19 PM

I will see if I can take a moment to look at it more in-depth over the next few evenings, but right off the bat, you can make it a lot cleaner if you look at using functions. Functions help you abstract different procedures and not only easily re-use code, but also make the code itself more readable. In case you haven't encountered them before I have included a small example.

```cpp
#include <iostream>

using namespace std;

int sumOfRange(int a, int b);

int main()
{
    int a, b;
    cout << "Provide first number: ";
    cin >> a;
    cout << "Provide second number: ";
    cin >> b;
    cout << "The sum of all numbers between " << a << " and " << b
         << " is: " << sumOfRange(a, b) << endl;

    int c, d;
    cout << "Provide first number: ";
    cin >> c;
    cout << "Provide second number: ";
    cin >> d;
    cout << "The sum of all numbers between " << c << " and " << d
         << " is: " << sumOfRange(c, d) << endl;
}

// Adds up every integer from a to b inclusive.
int sumOfRange(int a, int b)
{
    int total = 0;
    for (int i = a; i <= b; i++)
    {
        total += i;
    }
    return total;
}
```

The first thing I do is declare the function on line 5. All functions must be declared before use. A function is really comprised of three parts. The first is the return type; in this case the function returns an int. The second is the name of the function; this is how the function is called. The last thing is a list of parameters. I believe there is an upper limit on the number of parameters a function can have, but I have never hit that limit (it is pretty high). Each parameter is essentially a variable that is declared just like you would inside a function, except that it is a copy of whatever value was passed to the function.

The next thing I do is define and declare my main function. Inside my main function I use the function I declared on line 5. Finally, after my main function, I define my function. The function definition is essentially the code that is executed when the function is used.

A minor, but essential thing to understand is that because parameters are copies, if you change the value of a parameter in a function, the variable used when calling the function is unchanged.

```cpp
#include <iostream>

using namespace std;

// The assignment below only changes the local copy of 'a'.
void functionThatDoesNothing(int a)
{
    a = 10;
}

int main()
{
    int a = 5;
    cout << "The variable 'a' is: " << a << endl;
    functionThatDoesNothing(a);
    cout << "The variable 'a' is still " << a << endl;
    return 0;
}
```

You can see from this example that the function does not change a. I have also shown that you can define the function at the same time you declare it. It is usually best to define and declare the function separately if you have more than one file (declarations go in the .h or .hpp file and definitions go in the .cpp file).

If you want the parameter to directly affect the variable that was used when calling the function, you can use what is called "pass by reference". Whether a function parameter is "pass by value" or "pass by reference" depends on the way it is declared in the function declaration. To make a variable "pass by reference" you simply place an '&' character before the variable name. Below I will demonstrate the same code as above, except with "pass by reference".
```cpp
#include <iostream>

using namespace std;

// The '&' makes 'a' a reference to the caller's variable,
// so the assignment changes the original.
void functionThatDoesSomething(int & a)
{
    a = 10;
}

int main()
{
    int a = 5;
    cout << "The variable 'a' is: " << a << endl;
    functionThatDoesSomething(a);
    cout << "The variable 'a' is now " << a << endl;
    return 0;
}
```

"Pass by reference" can get a little bit dangerous (your code will crash) if you use it in certain ways (like for the return type of a function), but for the way I have demonstrated above it should be safe in most if not all situations.

P.S. Classes and Structs can also make code a lot cleaner and more readable, but I think learning to write your own functions first is a nice step in the right direction.
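To illustrate the danger the post alludes to when a reference is used as a function's return type, here is a deliberately broken example (this snippet is an added illustration, not part of the original post):

```cpp
#include <iostream>

using namespace std;

// BROKEN ON PURPOSE: 'a' is a local variable that is destroyed when the
// function returns, so the returned reference dangles. Using it afterwards
// is undefined behavior (often a crash or a garbage value).
int& badFunction()
{
    int a = 10;
    return a;  // most compilers warn about returning a reference to a local
}

int main()
{
    int& r = badFunction();
    cout << r << endl;  // undefined behavior - do not do this
    return 0;
}
```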
2014-03-07 15:19:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5545268058776855, "perplexity": 986.0559608292525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999645330/warc/CC-MAIN-20140305060725-00052-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.oknoname.com/EPFL/CS-433/topic/1602/#c2
### alphas in SVMs

Hi, I was reading over the conditions for alpha in SVMs. I understand that if a point is correctly classified its alpha is 0, if it is on the margin its alpha is in [0,1], and if it is misclassified its alpha is 1. I was thus wondering: if alpha is 1, how do we know whether the vector is a misclassified vector or a support vector? I also don't quite understand the subtlety between a bound and an essential support vector.

TAs please correct me if I'm wrong. The way I understand it is: when alpha is 1, three things are possible:

1. The point is on the correct side but just on the margin.
2. The point is on the correct side but inside the margin.
3. The point is on the wrong side.

For these three cases, it does not matter whether the point is correctly classified or not. What the alpha is indicating is that all of those points could be better classified and thus should be taken into account to move the hyperplane that is separating both groups. Recall that there are many hyperplanes that can separate groups of points (for example, shift the hyperplane by 1 degree). The hyperplane we are interested in is the hyperplane that maximizes the distance between the points. The idea behind this is to find a robust classifier: the more spacing to the left and right, the more separated the classes are, and the more likely the classifier is to classify new points correctly. The reason why alpha is one for the three cases is that we do not want to have points on or inside the margin, and these points can help find the hyperplane with maximal spacing.

Difference between a bound and an essential support vector:

1. [essential] The point is on the correct side but just on the margin.
2. [bound] The point is on the correct side but inside the margin.
3. [bound] The point is on the wrong side.

I hope this helps...
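A complementary way to see the distinction (stated here for the general soft-margin SVM with box constraint C, which the course appears to normalize to 1) is through the KKT conditions, where $y_i$ is the label and $f(x_i)$ the decision value:

$$\alpha_i = 0 \;\Rightarrow\; y_i f(x_i) \ge 1, \qquad 0 < \alpha_i < C \;\Rightarrow\; y_i f(x_i) = 1, \qquad \alpha_i = C \;\Rightarrow\; y_i f(x_i) \le 1.$$

So an alpha at the upper bound by itself does not reveal whether the point is merely inside the margin or actually misclassified; for that, one has to check the sign of $y_i f(x_i)$.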
2022-12-07 22:33:22
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8180832266807556, "perplexity": 395.2775639766559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00545.warc.gz"}
https://www.jobilize.com/online/course/29-1-coulomb-s-law-w10-electric-forces-by-openstax?qcr=www.quizover.com
29.1 Coulomb’s law

• State Coulomb’s law in terms of how the electrostatic force changes with the distance between two objects.
• Calculate the electrostatic force between two point charges, such as electrons or protons.
• Compare the electrostatic force to the gravitational attraction for a proton and an electron; for a human and the Earth.

Through the work of scientists in the late 18th century, the main features of the electrostatic force—the existence of two types of charge, the observation that like charges repel, unlike charges attract, and the decrease of force with distance—were eventually refined, and expressed as a mathematical formula. The mathematical formula for the electrostatic force is called Coulomb’s law after the French physicist Charles Coulomb (1736–1806), who performed experiments and first proposed a formula to calculate it.

Coulomb’s law

$$F = k\frac{|q_1 q_2|}{r^2}.$$

Coulomb’s law calculates the magnitude of the force $F$ between two point charges, $q_1$ and $q_2$, separated by a distance $r$. In SI units, the constant $k$ is equal to

$$k = 8.988\times 10^9\ \frac{\mathrm{N}\cdot\mathrm{m}^2}{\mathrm{C}^2} \approx 8.99\times 10^9\ \frac{\mathrm{N}\cdot\mathrm{m}^2}{\mathrm{C}^2}.$$

The electrostatic force is a vector quantity and is expressed in units of newtons. The force is understood to be along the line joining the two charges. (See [link].)

Although the formula for Coulomb’s law is simple, it was no mean task to prove it. The experiments Coulomb did, with the primitive equipment then available, were difficult. Modern experiments have verified Coulomb’s law to great precision. For example, it has been shown that the force is inversely proportional to the square of the distance between two objects $(F \propto 1/r^2)$ to an accuracy of 1 part in $10^{16}$. No exceptions have ever been found, even at the small distances within the atom.

How strong is the Coulomb force relative to the gravitational force? Compare the electrostatic force between an electron and proton separated by $0.530\times 10^{-10}\ \mathrm{m}$ with the gravitational force between them. This distance is their average separation in a hydrogen atom.

Strategy

To compare the two forces, we first compute the electrostatic force using Coulomb’s law, $F = k\frac{|q_1 q_2|}{r^2}$. We then calculate the gravitational force using Newton’s universal law of gravitation. Finally, we take a ratio to see how the forces compare in magnitude.

Solution

Entering the given and known information about the charges and separation of the electron and proton into the expression of Coulomb’s law yields

$$F = k\frac{|q_1 q_2|}{r^2} = \left(8.99\times 10^9\ \mathrm{N}\cdot\mathrm{m}^2/\mathrm{C}^2\right)\times\frac{\left(1.60\times 10^{-19}\ \mathrm{C}\right)\left(1.60\times 10^{-19}\ \mathrm{C}\right)}{\left(0.530\times 10^{-10}\ \mathrm{m}\right)^2}.$$

Thus the Coulomb force is

$$F = 8.19\times 10^{-8}\ \mathrm{N}.$$

The charges are opposite in sign, so this is an attractive force.
This is a very large force for an electron—it would cause an acceleration of $8.99\times 10^{22}\ \mathrm{m/s^2}$ (verification is left as an end-of-section problem). The gravitational force is given by Newton’s law of gravitation as

$$F_G = G\frac{mM}{r^2},$$

where $G = 6.67\times 10^{-11}\ \mathrm{N}\cdot\mathrm{m}^2/\mathrm{kg}^2$. Here $m$ and $M$ represent the electron and proton masses, which can be found in the appendices. Entering values for the knowns yields

$$F_G = \left(6.67\times 10^{-11}\ \mathrm{N}\cdot\mathrm{m}^2/\mathrm{kg}^2\right)\times\frac{\left(9.11\times 10^{-31}\ \mathrm{kg}\right)\left(1.67\times 10^{-27}\ \mathrm{kg}\right)}{\left(0.530\times 10^{-10}\ \mathrm{m}\right)^2} = 3.61\times 10^{-47}\ \mathrm{N}.$$

This is also an attractive force, although it is traditionally shown as positive since gravitational force is always attractive. The ratio of the magnitude of the electrostatic force to the gravitational force in this case is, thus,

$$\frac{F}{F_G} = 2.27\times 10^{39}.$$

Discussion

This is a remarkably large ratio! Note that this will be the ratio of electrostatic force to gravitational force for an electron and a proton at any distance (taking the ratio before entering numerical values shows that the distance cancels). This ratio gives some indication of just how much larger the Coulomb force is than the gravitational force between two of the most common particles in nature.
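The arithmetic in this example is easy to check with a few lines of code; the following C++ snippet (an added illustration using the same constants as the text) reproduces all three numbers:

```cpp
#include <cstdio>

int main() {
    // Constants and values used in the worked example above.
    const double k  = 8.99e9;     // N m^2 / C^2
    const double G  = 6.67e-11;   // N m^2 / kg^2
    const double e  = 1.60e-19;   // C, elementary charge
    const double me = 9.11e-31;   // kg, electron mass
    const double mp = 1.67e-27;   // kg, proton mass
    const double r  = 0.530e-10;  // m, average separation in hydrogen

    const double F  = k * e * e / (r * r);    // Coulomb force
    const double Fg = G * me * mp / (r * r);  // gravitational force

    std::printf("F    = %.3e N\n", F);        // ~8.19e-8 N
    std::printf("Fg   = %.3e N\n", Fg);       // ~3.61e-47 N
    std::printf("F/Fg = %.3e\n", F / Fg);     // ~2.27e39
    return 0;
}
```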
2019-07-23 19:37:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 27, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6179328560829163, "perplexity": 991.0737696548318}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529664.96/warc/CC-MAIN-20190723193455-20190723215455-00269.warc.gz"}
https://labs.tib.eu/arxiv/?author=Walter%20Van%20Herck
• ### Exact indices for D-particles, flow trees and the derived category(1010.0686)

Oct. 4, 2010 hep-th

In arXiv:0909.0508 D-particle degeneracies were calculated using novel methods. By analysing this refined computation of the index in the setting of the topological B model, a correspondence is found between flow trees in the supergravity description and B-branes in the topological theory. This enables us to use the same refinement also for single flows. A concrete example will be given, providing strong support for this method, while at the same time confirming the prediction of a new elliptic genus in arXiv:0909.0508.

• ### Black Hole Meiosis(0909.0508)

March 19, 2010 hep-th, math.AG

The enumeration of BPS bound states in string theory needs refinement. Studying partition functions of particles made from D-branes wrapped on algebraic Calabi-Yau 3-folds, and classifying states using split attractor flow trees, we extend the method for computing a refined BPS index, arXiv:0810.4301. For certain D-particles, a finite number of microstates, namely polar states, exclusively realized as bound states, determine an entire partition function (elliptic genus). This underlines their crucial importance: one might call them the 'chromosomes' of a D-particle or a black hole. As polar states also can be affected by our refinement, previous predictions on elliptic genera are modified. This can be metaphorically interpreted as 'crossing-over in the meiosis of a D-particle'. Our results improve on hep-th/0702012, provide non-trivial evidence for a strong split attractor flow tree conjecture, and thus suggest that we indeed exhaust the BPS spectrum. In the D-brane description of a bound state, the necessity for refinement results from the fact that tachyonic strings split up constituent states into 'generic' and 'special' states. These are enumerated separately by topological invariants, which turn out to be partitions of Donaldson-Thomas invariants. As modular predictions provide a check on many of our results, we have compelling evidence that our computations are correct.

• ### Gödel space from wrapped M2-branes(0909.4081)

Sept. 23, 2009 hep-th

We show that M-theory admits a supersymmetric compactification to the Gödel universe of the form Gödel3 x S2 x CY3. We interpret this geometry as coming from the backreaction of M2-branes wrapping the S2 in an AdS3 x S2 x CY3 flux compactification. In the black hole deconstruction proposal similar states give rise to the entropy of a D4-D0 black hole. The system is effectively described by a three-dimensional theory consisting of an axion-dilaton coupled to gravity with a negative cosmological constant. Other embeddings of the three-dimensional theory imply similar supersymmetric Gödel compactifications of type IIA/IIB string theory and F-theory.

• ### 5D fuzzball geometries and 4D polar states(0805.3506)

Nov. 3, 2008 hep-th

We analyze the map between a class of 'fuzzball' solutions in five dimensions and four-dimensional multicentered solutions under the 4D-5D connection, and interpret the resulting configurations in the framework of Denef and Moore. In five dimensions, we consider Kaluza-Klein monopole supertubes with circular profile which represent microstates of a small black ring. The resulting four-dimensional configurations are, in a suitable duality frame, polar states consisting of stacks of D6 and anti-D6 branes with flux.
We argue that these four-dimensional configurations represent zero-entropy constituents of a 2-centered configuration where one of the centers is a small black hole. We also discuss how spectral flow transformations in five dimensions, leading to configurations with momentum, give rise to four-dimensional D6 anti-D6 polar configurations with different flux distributions at the centers.
2019-11-18 02:54:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5237693190574646, "perplexity": 1877.6954206237829}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669422.96/warc/CC-MAIN-20191118002911-20191118030911-00461.warc.gz"}
https://bookstore.ams.org/view?ProductCode=GSM/102
Introduction to Fourier Analysis and Wavelets

Mark A. Pinsky
Northwestern University, Evanston, IL

Available Formats:
Hardcover: ISBN 978-0-8218-4797-8; Product Code: GSM/102. List Price: $78.00; MAA Member Price: $70.20; AMS Member Price: $62.40.
Electronic: ISBN 978-1-4704-1166-4; Product Code: GSM/102.E. List Price: $73.00; MAA Member Price: $65.70; AMS Member Price: $58.40.
Bundle (Print and Electronic Formats): List Price: $117.00; MAA Member Price: $105.30; AMS Member Price: $93.60. Purchasing as a bundle enables you to save on the electronic version.

• Book Details

Volume: 102; 2002; 376 pp
MSC: Primary 42

This book provides a concrete introduction to a number of topics in harmonic analysis, accessible at the early graduate level or, in some cases, at an upper undergraduate level. Necessary prerequisites to using the text are rudiments of the Lebesgue measure and integration on the real line. It begins with a thorough treatment of Fourier series on the circle and their applications to approximation theory, probability, and plane geometry (the isoperimetric theorem). Frequently, more than one proof is offered for a given theorem to illustrate the multiplicity of approaches. The second chapter treats the Fourier transform on Euclidean spaces, especially the author's results in the three-dimensional piecewise smooth case, which is distinct from the classical Gibbs–Wilbraham phenomenon of one-dimensional Fourier analysis. The Poisson summation formula treated in Chapter 4 provides an elegant connection between Fourier series on the circle and Fourier transforms on the real line, culminating in Landau's asymptotic formulas for lattice points on a large sphere. Much of modern harmonic analysis is concerned with the behavior of various linear operators on the Lebesgue spaces $L^p(\mathbb{R}^n)$. Chapter 3 gives a gentle introduction to these results, using the Riesz–Thorin theorem and the Marcinkiewicz interpolation formula. One of the long-time users of Fourier analysis is probability theory. In Chapter 5 the central limit theorem, iterated log theorem, and Berry–Esseen theorems are developed using the suitable Fourier-analytic tools. The final chapter furnishes a gentle introduction to wavelet theory, depending only on the $L^2$ theory of the Fourier transform (the Plancherel theorem). The basic notions of scale and location parameters demonstrate the flexibility of the wavelet approach to harmonic analysis. The text contains numerous examples and more than 200 exercises, each located in close proximity to the related theoretical material.
• Chapters
• Chapter 1. Fourier series on the circle
• Chapter 2. Fourier transforms on the line and space
• Chapter 3. Fourier analysis in $L^p$ spaces
• Chapter 4. Poisson summation formula and multiple Fourier series
• Chapter 5. Applications to probability theory
• Chapter 6. Introduction to wavelets
2023-02-02 11:13:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26292359828948975, "perplexity": 743.2586825249741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00578.warc.gz"}
https://gateoverflow.in/8356/gate2015-1-50
A variable $x$ is said to be live at a statement $S_{i}$ in a program if the following three conditions hold simultaneously:

1. There exists a statement $S_{j}$ that uses $x$
2. There is a path from $S_{i}$ to $S_{j}$ in the flow graph corresponding to the program
3. The path has no intervening assignment to $x$, including at $S_{i}$ and $S_{j}$

The variables which are live both at the statement in basic block $2$ and at the statement in basic block $3$ of the above control flow graph are

1. $\text{p, s, u}$
2. $\text{r, s, u}$
3. $\text{r, u}$
4. $\text{q, v}$

edited

0 @Arjun sir, how is u not live in blocks 2 and 3? According to the rule (a variable is live after statement s if a successor block uses its value), variable u should not be dead in blocks 2 and 3. Sir, kindly explain this, please.

+5 'u' is live in 2 and 3 but not in 1... but the question is asking only about 2 and 3, and in 2 and 3 the statements $S_i$ and $S_j$ are the same. See the question: "The variables which are live both at the statement in basic block 2 and at the statement in basic block 3 of the above control flow graph are". That's why we need to consider both r and u.

0 r is surely live by definition, and I think u is live if we interchange statements 2 and 3 in basic block 1.

+1 ADMIN: Please add the tag out-of-syllabus, as live variables belong to code optimization, which was removed from the syllabus in 2016.

r, u.

p and s are assigned to in $1$ and there is no intermediate use of them before that. Hence p and s are not live at both $2$ and $3$. q is assigned to in $4$ and hence is not live at both $2$ and $3$. v is live at $3$ but not at $2$. u is live at $3$ and also at $2$ if we consider a path of length $0$ from $2$ to $2$. So, r, u is the answer.

by Veteran (422k points) edited by

0 @Arjun Sir, the reason should be like this: i) There exists a statement Sj that uses x // here p, q, r, s, u, v are all used in some statement. ii) There is a path from Si to Sj in the flow graph corresponding to the program // meaning there needs to be a path of at least length 1, and it should be read-only. iii) The path has no intervening assignment to x, including at Si and Sj // meaning there should not be any write after reading the value // only r is never written after being read. Where am I going wrong?

0 v is live in 3 but not in 2 // correct. u is live in 3 and 2, but it is not live in 1, and in the loop it is going back; that is a writing statement. So how is u live? "The path has no intervening assignment to x including at Si and Sj" // that means no write operation.

0 @srestha The best answer is correct: it is r, u. You can see that r, u is also given as the answer in the official answer key. As for your question: yes, u is live in 2 and 3 but not in 1... but the question is asking only about 2 and 3; in 2 and 3 the statements $S_i$ and $S_j$ are the same. See the question "The variables which are live both at the statement in basic block 2 and at the statement in basic block 3 of the above control flow graph are"; that's why we need to consider both r and u.

0 It is clearly mentioned here that if there is any write operation then there should not be any liveness. So it eliminates u, right?

+1 @srestha

0 Where is u assigned in the intervening path?? In the path of length zero from block 2 to block 2, where is u assigned? So it should be u and r, right?

0 Where can I study this topic from? I am not able to understand :(

0 Try here. Try here also. Hope you got the liveness concept.

0 Thanks a lot. Gonna see it.
0 @Bikram sir, liveness for u at statement 2: 2->4->1 (violation of condition 3, because u is assigned a value in statement 1). Liveness for u at statement 3: 3->4->1 (violation of condition 3, because u is assigned a value in statement 1). So u is not live at statements 2 and 3; how does the answer contain u??

+3 That path length is 0, from 2 to 2 and from 3 to 3, and statements 2 and 3 are the uses.

0 @Arjun sir, thanks. In these types of questions, do we have to consider a 0-length path for a statement every time?

+8 If we consider zero-length paths, then variable 's' is live at the statement in block 3. Right?

+2 Yes @gari, see the above explanation: v is live at statement 3; r is live at both statements 2 and 3; u is live at statements 2 and 3. So the answer must contain r and u.

0 @Bikram sir, live variables are under code optimization, so is this in the syllabus or not?

0 Why is r a live variable? The 1st condition says "1. There exists a statement Sj that uses x", but r is not used in the 3rd block?

+1 @Anurag Pandey But u is being assigned a value at 1.3. So how is it live?

0 @rahul sharma 5, r is a live variable. See the question: x is live at Si and used in Sj. r is used in B1: the path is from 4 to 1; here Si = 4 and Sj = 1, so it is live at 4. r is used in B2: the path is from 1 to 2; here Si = 1 and Sj = 2, so it is live at 1. r is used in B4: the path is from 3 to 4; here Si = 3 and Sj = 4, so it is live at 3. And the path from 2 to 4; here Si = 2 and Sj = 4, so it is live at 2. Hope it helps :)

0 @Arjun sir, here we are considering a $0$ path length for liveness. But in the register allocation scheme, while using an interference graph, we don't consider a $0$ path length; otherwise we may compute more than the required number of registers.

0 @reena Is it (control flow graphs) still in the syllabus?

Variables {q, r} are live at the entry point of block 1. p is dead because p is overwritten at the first statement, and statement 2 of block 1 uses this new value; from line 2 of block 1 onward, p is never used. So p is dead. Similar is the case for variable s, which is overwritten at line 2 and read at line 3 in block 1. So s is dead at the entry point of block 1. u is never read in block 1, so u is dead before block 1. Before block 2, variables r and u are being read, so they are live at the entry point as well as at the assignment line of block 2. Similarly, in block 3, variables s and u are live at the entry point of block 3. Now, if we talk about block 4, at the entry point of block 4, variables v and r are live. Talking about block 3, at the exit point variables v and r are live. The rule of liveness states that if a block B does not assign to a variable v which is live after its exit point, then this variable v is also live before the entry point of block B. Since variables v and r are live after block B3, they must be live before block B3 (v is not shown in the figure). So, at the entry points of the blocks below: B2: r and u are live; B3: r, s, u and v are live. What is common between B2 and B3: r and u. (Ans.)

by Boss (27.2k points)

0 Then, according to this explanation, s is live in 2 too, right?

0 No, s can't be live in 2, since after getting the value of s we are not using or reading it elsewhere in block 2. s is live in block 3, as we are reading the value of s.
0 @Ayush Upadhyaya, why is v not live before the start of block 1? According to me, variables {q, r, v} are read, and they are live before the start of block 1.

+2 @Ayush this is a nice explanation, thanks a lot.

+1 Are control flow graphs in the syllabus for GATE 2019?

0 "v" is live at the entry of Block 1 and also at the entry of Block 3. Correct me if I'm wrong. :)
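To make the fixed-point computation concrete, here is a small sketch in Python of the standard backward live-variable iteration. The block use/def sets and the edges are hypothetical placeholders (the question's figure is not reproduced on this page); the point is only the update rule LiveIn = use ∪ (LiveOut − def):

```python
# Iterative backward live-variable analysis over a small CFG.
blocks = {  # block id -> (use set, def set); placeholder contents
    1: ({"q", "r", "v"}, {"p", "s", "u"}),
    2: ({"r", "u"}, {"v"}),
    3: ({"s", "u"}, {"q"}),
    4: ({"v", "r"}, {"q"}),
}
succ = {1: [2, 3], 2: [4], 3: [4], 4: [1]}  # placeholder edges

live_in = {b: set() for b in blocks}
live_out = {b: set() for b in blocks}

changed = True
while changed:  # iterate until a fixed point is reached
    changed = False
    for b in blocks:
        out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
        use, defs = blocks[b]
        inn = use | (out - defs)  # LiveIn = use ∪ (LiveOut − def)
        if inn != live_in[b] or out != live_out[b]:
            live_in[b], live_out[b] = inn, out
            changed = True

for b in sorted(blocks):
    print(b, sorted(live_in[b]), sorted(live_out[b]))
```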
2019-11-17 05:10:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3427700400352478, "perplexity": 1683.8254732237795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00157.warc.gz"}
https://firatkulevinc.com/adventist-youth-honors-answer-book-vocational-shoe-repair/
Shoe Repair
Vocational
General Conference
Skill Level 3
Year of Introduction: 1928

## 1. Name at least five essential parts of a shoe and explain their uses.

### Sole

The bottom of a shoe, which supports the wearer's foot, is called the sole. It usually consists of several layers, and is usually a separate piece from the upper (though not always, as in the case of wooden shoes).

#### Insole

The insole is the interior bottom of the shoe, which sits directly beneath the foot. Many shoes have removable and replaceable insoles, and extra insoles are often added for comfort or health reasons (to adjust the fit, moisture, or odor of the shoe, for example).

#### Mid-sole

The mid-sole is a layer located between the insole and the outsole, found in some shoes. Typically it provides cushioning and shape to the insole.

Shoes made from real crocodile skin, in a conservation exhibit at Bristol Zoo, England

#### Outsole

The outsole is the layer in direct contact with the ground. The material of the outsole varies depending on the function, dressiness, and quality of the shoe, but is generally a very durable material, since it experiences the most stress. Dressier shoes have leather outsoles; casual or work-oriented shoes have outsoles made of natural rubber or a synthetic imitation. The outsole may be a single piece, or may comprise separate pieces of different materials. Often the heel of the sole is rubber for durability and traction, while the front is leather for style. Specialized shoes often have distinct modifications to this design: athletic cleats have spikes embedded in the outsole to grip the ground.

### Heel

Some shoes have a distinct raised heel; often this is a separate piece from the outsole and can be replaced. Leather was used in the past, and is still used in some performance tennis shoes, but harder materials are usually more desirable. Plastic, rubber, or polymer materials are now also used. Reinforced cork is often used in heels or soles so that the shoes are lightweight.

### Vamp, or upper

Every shoe also has an upper part that separates the foot from the air and helps hold the shoe onto the foot. In the simplest cases, such as sandals or flip-flops, this may be nothing more than a few straps for holding the sole in place beneath the foot. Closed shoes, such as boots, sneakers and most men's shoes, will usually have a more complex upper.

### Laces

A shoelace is a long piece of string or cord that holds a shoe closed. Usually it is attached by being threaded through a series of small holes in the shoe, called eyelets. To aid this, many shoelaces are tipped with small hard plastic bits called aglets.

### Tongue

In some shoes and boots the tongue is a flap-like, usually flexible, part that goes under the laces.

## 2. What is the difference between hand-turned soles, Goodyear welt soles, and McKay sewed soles? Which is the easiest to repair?

### Hand-turned soles

This type of shoe is constructed inside-out using a form called a last. A last is a foot-shaped tool, traditionally made of wood. After the uppers are made and stitched together, they are removed from the last, turned right-side-out, and the sole is attached.
The sole is of one piece: the top side of the sole serves as the insole, and the bottom side serves as the outsole.

### Goodyear welt soles

In a shoe of this type, the uppers, insole, and midsole are stitched to the outsole. This is the most secure way to attach the sole. The insoles are built into the shoe rather than being inserted after the rest of the shoe has been made. The upper is drawn over the sole and held temporarily while a strip of leather (the welt) is stitched to the upper and inner sole. The sole is hand-stitched through the welt to complete the process. This construction allows multiple sole replacements, extending the life of the shoes. Because the stitching of the outsole is done entirely on the outside of the shoe, this type of sole is the easiest to repair.

### McKay sewed soles

Ten years after the invention of the sewing machine, Gordon McKay introduced a device for machine-stitching a shoe's upper to its sole. In this type of sole, the insole is attached to the upper while it is still on the last, using tacks which are clinched over. The outsole is then sewn to the assembly by McKay's machine. The stitching appears on the inside of the shoe, as do the tacks. This type of construction was common for cheap, lightweight shoes.

#### Curved Needle Machine

The outsole is attached to the upper with a welt. The curved needle machine allows the soles to be stitched through the Goodyear welt without going through the upper, so the shoes can be resoled many times without stitching through the uppers.

## 3. Properly wax the thread, and with two needles or waxed ends, mend a pair of shoes which have rips in them. Use the double-stitch method of sewing.

Shoe repair is best taught by an experienced cobbler. It is not really possible to learn this trade by reading a book; it has to be practiced, and becoming proficient takes years. The best approach for meeting this requirement (as well as requirements 4 and 5) is to visit a shoe repair shop as described in requirement 8. Bring a pair of shoes that have rips in them so the cobbler can let you practice on your own shoes rather than on his customers' shoes. Ripped shoes do not necessarily have tears in the canvas or leather, but may have a burst seam. Almost any pair of "throw-away" shoes you still have in your closet will work for this. Depending on the skill you can bring to bear on your first attempt, you may end up throwing them away anyhow. Remember that this is a learning process, and your goal is not necessarily to get a wearable pair of shoes out of your effort, but rather a working knowledge of shoe repair techniques. If you fail on the left shoe, you may succeed on the right. Or you might go through more than one pair of shoes. It may be possible to build your Pathfinder group's interest in this honor by showing them some shoe repair videos.

## 4. Satisfactorily half-sole a pair of shoes by nailing the soles on. Make sure that the nails are of the proper length.

See requirements 3 and 8 for advice.

## 5. Fit and attach a pair of rubber heels.

See requirements 3 and 8 for advice. Here is another YouTube video, from Upside Down Shoe Repair:

## 6. What kind of leather should be used in repairing the soles of shoes?
How is such leather usually tanned?

Leather used in shoemaking is divided into two categories: upper leather and sole leather. Upper leather is used for making the uppers, and is thinner, softer, and more pliable than sole leather. Sole leather is generally made from the hide of a fully-grown cow, taken from the thicker parts of the animal's skin. Upper leather is taken from calves, goats, and other smaller animals with thinner skin.

Back when this honor was originally written, leather was usually tanned by soaking it in a bath with tree bark (usually hemlock or oak). An astringent acid would leach out of the bark, and this acid is what would convert an animal skin into leather, in a process known as tanning. Sole leather was most often tanned using oak bark. Oak-tanned leather has a light, creamy tan color. It is both firm and flexible.

## 7. Specify at least three factors that should be taken into consideration in the selection of shoes.

There are many factors to consider when selecting a pair of shoes, including fit, comfort, function, quality, price, and even moral considerations. When weighing any of these factors, it is incumbent upon the Christian to practice good stewardship.

### Fit and Comfort

Probably the most important factor to consider when selecting a shoe is that it fits properly. Improperly fitted shoes can cause discomfort, soreness, and even injury. Shoes should be comfortable. It is irresponsible to sacrifice comfort for the sake of style, though there are exceptions to this (i.e., the shoe will not be worn for very long and the wearer has no other option, such as serving as a member of a wedding party). You should have your feet measured every time you buy shoes if they are still growing. Even after they stop growing, it is still a good idea to have them measured occasionally, as their shape may change. There are two parameters that specify shoe size: length and width.

#### Length

Shoe size in the US and Canada is based on the length of the last, measured in inches, multiplied by 3 and minus a constant. This constant differs for shoes intended for men, women and children.

$\text{male shoe size} = 3 \times \text{last length} - 22$

Women's sizes are almost always determined with the "common" scale, in which women's sizes are equal to men's sizes plus 1.5 (for example, a men's 10.5 is a women's 12). In the less popular scale, known as the "standard" or "FIA" (Footwear Industries of America) scale, women's sizes are men's sizes plus 1 (so a men's 10.5 is a women's 11.5).

$\text{female shoe size (common)} = 3 \times \text{last length} - 20.5$

$\text{female shoe size (FIA)} = 3 \times \text{last length} - 21$

Children's sizes are equal to men's sizes plus 12.33. Thus girls' and boys' sizes do not differ, although men's and women's do.

$\text{child shoe size} = 3 \times \text{last length} - 9.67$

For the international market, the ISO 9407 standard is used for specifying shoe length.
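If it helps to see the arithmetic, the four formulas above can be collected into one small function; this is just a sketch, with the last length given in inches:

```python
# Sketch of the US/Canada shoe-size formulas above (last length in inches).
def shoe_sizes(last_length):
    return {
        "male": 3 * last_length - 22,
        "female (common)": 3 * last_length - 20.5,
        "female (FIA)": 3 * last_length - 21,
        "child": 3 * last_length - 9.67,
    }

print(shoe_sizes(11))  # an 11-inch last: men's 11, women's (common) 12.5
```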
#### Width

Some manufacturers offer shoes of different widths for the same foot length. Such shoes are then also labelled according to the width or girth of the widest part of the foot (typically measured directly behind the toes, with the subject standing on both feet and wearing socks or hose). In the Mondopoint system, the shoe size label can state, in addition to the length, the width of the mean foot for which the shoe is suitable, both measured in millimetres. A number of other ad-hoc notations for width or girth are also used. Examples include (each starting with the narrowest width):

• AAAA, AAA, AA, A, B, C, D, E, EE, EEE, EEEE
• 4A, 3A, 2A, A, B, C, D, E, 2E, 3E, 4E
• N, R, W

None of these designations is formally standardized. The actual foot width for which these sizes are suitable can vary considerably between manufacturers. The A-E width indicators used by some US and UK shoe manufacturers are typically based on the width of the foot, and common step sizes are 1/4 inch (6 mm) or 3/16 inch (5 mm).

### Function

The next consideration when selecting a shoe is the activity to be performed while wearing the shoe. You wouldn't wear high heels on a backpacking trip, nor would you wear flip-flops to a wedding. Different shoes have different purposes, though fortunately some are multi-purpose. Here is a list of some of the types of shoes available:

• Dress shoes
• Casual shoes
• Athletic shoes
• Running shoes
• Hiking boots
• Climbing shoes
• Work boots
• Steel-toed boots
• Snow boots
• Golf shoes
• Bowling shoes
• Slippers

### Quality

If your feet have stopped growing, it makes sense to buy the highest quality shoes that you can afford. High quality shoes will not only last longer than cheap ones, they are cheaper in the long run because they will not have to be replaced as often. A low-quality shoe can also make your feet sore or even cause injury by not providing proper arch or ankle support. If you are buying shoes for a person who is growing quickly, you should still seek a quality shoe, but at this point the goal is to match the expected life of the shoe to the amount of time it will fit.

### Price

Price is also an important consideration. Some shoes are very expensive, not because of their high quality, but because of the name attached to them. Though high quality shoes are usually more expensive than low-quality shoes, this does not mean that all high-priced shoes are also high-quality.

### Moral considerations

In the late 1990s several shoe manufacturers found themselves embroiled in controversy after it was revealed that they had been exploiting sweatshop labor in third-world countries. There are many people who boycott various companies because of such practices, and there are many people who patronize companies that commit themselves to respecting human rights.

## 8. Spend a minimum of five hours observing shoe repair specialists at their workshop.

You should be able to find a shoe repair shop in the Yellow Pages (or another business directory) under "Shoe Repair." You can also try the website http://www.vibram.us/repairconsumer/ by Vibram, a shoe repair supplier. Shoe repair shops are often family-owned businesses, so the chances of speaking to the owner when you enter the shop (or call on the phone) are quite good.
It may be better to approach the owner in person rather than calling, as that gives you a better opportunity to present yourself. Wear a class A uniform when you go there to ask permission to observe, and be ready to explain how many Pathfinders will be with you. Let the shop owner pick a time and date for the five-hour visit. The shop owner may be willing to instruct this honor, so it would be a good idea to make contact before you get started. Your visit to the workshop may be a good opportunity to meet requirements 3, 4, and 5, but you should probably bring some shoes that need these repairs with you. The owner will certainly not want you experimenting on his customers' shoes! Offer to pay for any supplies you use when repairing your shoes.

## 9. Write about or tell the examiners the proper methods of cleaning and caring for shoes.

Keep your shoes looking their best by using a cedar shoe tree. These are foot-shaped blocks of cedar wood that fit inside the shoe. Cedar absorbs moisture and salt, which can damage your shoes. Clean shoes immediately after they become dirty. There are four basic steps in caring for smooth leather shoes: clean, condition, polish, and weatherproof.

Clean: First you should lay some newspapers out in your work area and remove the laces from the shoes. Brush the dirt off with a soft brush, and then apply saddle soap or leather cleaner to the shoe with a soft cloth (you can use a shoe shine cloth or an old T-shirt). Work it in, and then begin to shine the shoe.

Condition: Once the shoe has been cleaned, you can apply a conditioner. You can buy cleaners with conditioners in them, but you will get better results using a separate conditioner. The conditioner is worked into the leather just as the cleaner was, and it helps to moisturize the leather to prevent cracking. In dry climates, you may need to condition your shoes more often.

Polish: Most of us are familiar with the liquid shoe polish products that come in a bottle with a brush built into the lid. These polishes work well enough, but the polish only barely penetrates the surface of the leather and has to be re-applied frequently. It is better to use a paste, but best to use a shoe cream. Whichever you choose, you will have to match the color of the product to the color of the shoe. Test in an inconspicuous area to make sure that you have a color match. Apply shoe cream with a soft cloth and rub it in well. Then let it dry thoroughly. Once dry, you can go over it again with a soft cloth to bring out the shine.

Weatherproof: If you are more interested in protecting the shoes than in making them look good, you can use beeswax to weatherproof them. Beeswax will change the look of the shoe, but it is excellent for keeping the water out. This can be a good choice for work or hiking boots. For dress shoes, you will want to use a spray-on product which goes on invisibly. It will protect your shoes from incidental contact with water, but you shouldn't rely on these products for wading through puddles.
2021-03-02 10:37:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4780106842517853, "perplexity": 5753.525073931853}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363809.24/warc/CC-MAIN-20210302095427-20210302125427-00594.warc.gz"}
http://mathhelpforum.com/algebra/95750-minimum-value.html
# Math Help - Minimum value

1. ## Minimum value

I'm amazed I'm stuck on this question. I need to find the value of $t$ at which it attains its minimum value.

$|\vec b|^2-2t\,\vec a\cdot\vec b+t^2|\vec a|^2$

I thought you complete the square to find the minimum value. I can't get what they're getting by completing the square (although I get it when I use calculus)

$|\vec a|^2\left(t-\frac {\vec a\cdot\vec b} {|\vec a|^2}\right)^2+|\vec b|^2-\frac {(\vec a \cdot \vec b)^2}{|\vec a|^2}$

How do you get that?

2. Originally Posted by chengbin
I'm amazed I'm stuck on this question. I need to find the value of $t$ at which it attains its minimum value.

$|\vec b|^2-2t\,\vec a\cdot\vec b+t^2|\vec a|^2$

I thought you complete the square to find the minimum value. I can't get what they're getting by completing the square (although I get it when I use calculus)

$|\vec a|^2\left(t-\frac {\vec a\cdot\vec b} {|\vec a|^2}\right)^2+|\vec b|^2-\frac {(\vec a \cdot \vec b)^2}{|\vec a|^2}$

How do you get that?
You have: $a^2 t^2 - 2c t + b^2$ where $c = \vec{a}\cdot \vec{b}$

$= a^2 \left(t^2 - \frac{2c}{a^2} t + \frac{b^2}{a^2}\right)$

$= a^2 \left( \left [t - \frac{c}{a^2}\right]^2 - \frac{c^2}{a^4} + \frac{b^2}{a^2}\right)$

$= a^2 \left [t - \frac{c}{a^2}\right]^2 - \frac{c^2}{a^2} + b^2$,

which is minimized when the squared term vanishes, i.e. at $t = \frac{c}{a^2} = \frac{\vec a \cdot \vec b}{|\vec a|^2}$.
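A quick numerical check of the completed square (the vectors here are arbitrary made-up values):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

f = lambda t: b @ b - 2 * t * (a @ b) + t**2 * (a @ a)
t_star = (a @ b) / (a @ a)          # claimed minimizer

ts = np.linspace(t_star - 1, t_star + 1, 2001)
assert abs(ts[np.argmin(f(ts))] - t_star) < 1e-3
print(t_star, f(t_star))            # min value = |b|^2 - (a·b)^2 / |a|^2
```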
2016-06-01 03:51:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8401728868484497, "perplexity": 431.614199049429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464054526288.74/warc/CC-MAIN-20160524014846-00065-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.authorea.com/users/68214/articles/80945/_show_article
# Introduction

## Brief literature review

• Most genetic association studies focus on common variants.
• But rare genetic variants can play major roles in influencing complex traits. (Pritchard 2001, Schork 2009)
• The rare susceptibility variants identified through sequencing have the potential to explain some of the 'missing heritability' of complex traits. (Eichler 2010)
• However, standard methods to test for association with single genetic variants are underpowered for rare variants unless sample sizes are very large. (Lee 2014)
• The lack of power of single-variant approaches holds in fine-mapping as well as genome-wide association studies.
• In this report, we are concerned with fine-mapping a genomic region that has been sequenced in cases and controls to identify disease-risk loci.
• A number of methods have been developed to evaluate the disease association for both single variants and multiple variants in a genomic region.
• Besides single-variant methods, we consider three broad classes of methods for analysing sequence data: pooled-variant, joint-modelling and tree-based methods.
• Overview of the 3 types of analysis methods (besides the single-variant approach):
• Pooled-variant methods evaluate the cumulative effects of multiple genetic variants in a genomic region. The score statistics from marginal models of the trait association with individual variants are collapsed into a single test statistic, either by combining the information for multiple variants into a single genetic score or by evaluating the distribution of the pooled score statistics of individual variants. (Lee 2014)
• Joint-modelling methods identify the joint effect of multiple genetic variants simultaneously. These methods can assess whether a variant carries any further information about the trait beyond what is explained by the other variants. When trait-influencing variants are in low linkage disequilibrium, this approach may be more powerful than pooling test statistics for marginal associations across variants (Cho 2010).
• Tree-based methods.
• A local genealogical tree represents the ancestry of the sample of haplotypes at each locus in the genomic region being fine-mapped.
• Haplotypes carrying the same disease-risk alleles are expected to be related and to cluster on the genealogical tree at a disease-risk locus.
• Tree-based methods assess whether trait values co-cluster with the ancestral tree for the haplotypes (e.g., Bardel et al. 2005).
• Mailund et al. (2006) developed a method to reconstruct and score genealogies according to the case-control clusters.
• Review the Burkett et al. study briefly(!) and what it found.
• In practice, true trees are unknown. However, cluster statistics based on true trees represent a best case for detecting association, as tree uncertainty is eliminated.
• Burkett et al. use known trees to assess the effectiveness of such a tree-based approach for detecting rare disease-risk variants in a candidate genomic region, under various models of disease risk in a haploid population.
• They found that Mantel statistics computed on the known trees outperform popular methods for detecting rare variants associated with disease.
• Following Burkett et al., we use clustering tests based on true trees as benchmarks against which to compare the popular association methods.
• However, unlike Burkett et al., who focus on detection of disease-risk variants, we here focus on localization of the association signal in the candidate genomic region.
Moreover, we use a diploid disease model instead of a haploid disease model.

## Purpose of the study

• To compare the performance of selected rare-variant association methods for fine-mapping a disease locus. In our investigation, we focus on the localization of the association signal to the region between 950 kbp and 1050 kbp within a 2-Mbp candidate genomic region.
• We use variant data simulated from coalescent trees. Our work on localization of the association signal extends that of Burkett et al., which investigated the ability to detect association signal in the candidate region, without regard to localization.
• To illustrate ideas, we start by working through a particular example dataset as a case study for insight.
• Next, we perform a simulation study involving 200 sequencing datasets and score which association method localizes best, overall.

# Methods

## Data simulation

1. Simulating the population
• fastsimcoal2 (Excoffier 2013)
• Simulate 3000 haplotypes of 4000 equispaced SNVs in a 2-Mbp region.
• Recombination rate \(= 1 \times 10^{-8}\) per bp per generation.
• Population effective size, \(N_{e} = 6200\) (Tenesa 2007)
• Randomly pair the 3000 haplotypes into 1500 diploid individuals.
• Logistic regression model of disease status.
• Once haplotypes were paired into 1500 diploid individuals, disease status was assigned to the individuals based on randomly sampled risk SNVs from the middle part of the genomic region (950-1050 kbp).
• For risk SNVs, the number of copies of the derived allele increases disease risk according to a logistic regression model, $\text{logit}\{P(D=1 \mid G)\} = \text{logit}(0.2) + \sum_{j=1}^{m} 2 \times G_j,\;\;\text{where}$
• \(D\) is disease status (\(D = 1\), case; \(D=0\), control).
• \(G=(G_1, G_2, \ldots , G_{m})\) is an individual's multi-locus genotype at \(m\) risk SNVs, with \(G_j\) being the number of copies of the derived allele at the \(j^{th}\) risk SNV.
• We select the intercept term to ensure that the probability of sporadic disease (i.e. \(P(D=1 \mid G=\mathbf{0})\)) is approximately \(20\%\).
• We randomly sampled SNVs from the middle region one at a time, until the disease prevalence was between \(9.5\%\) and \(10.5\%\) in the \(1500\) individuals.
• After assigning disease status to the 1500 individuals, we sampled 50 case (i.e. diseased) and 50 control (i.e. non-diseased) individuals from all affected and unaffected individuals.
• We then extracted the data for the variable SNVs in the case-control sample to examine the patterns of disease association in subsequent analyses.
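As a concrete illustration of the disease-status assignment, here is a minimal sketch in Python/NumPy. The genotype matrix and derived-allele frequency are hypothetical placeholders, and the resampling of risk SNVs to hit the prevalence target is not shown:

```python
import numpy as np

rng = np.random.default_rng(2016)

n, m = 1500, 5                           # individuals, risk SNVs
G = rng.binomial(2, 0.01, size=(n, m))   # copies of the derived allele

# logit{P(D=1|G)} = logit(0.2) + sum_j 2 * G_j
logit = np.log(0.2 / 0.8) + 2 * G.sum(axis=1)
p = 1.0 / (1.0 + np.exp(-logit))

D = rng.random(n) < p                    # disease status (True = case)
print("simulated prevalence:", D.mean())
```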
2017-01-24 15:18:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5507199168205261, "perplexity": 3657.551060073255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284429.99/warc/CC-MAIN-20170116095124-00187-ip-10-171-10-70.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1717043/solve-differential-equation-using-variation-of-parameters
# Solve differential equation using 'variation of parameters'

Given the differential equation $L \, u = f$, with $$L \, u = a_2(t) \frac{d^2u}{dt^2}+a_1(t)\frac{du}{dt}+a_0(t)u$$ with $a_i(t)$ sufficiently smooth and $a_2(t) \neq 0$ for every $t$. Suppose that $u_0$ is a solution to the homogeneous equation $L \, u=0$, with $u_0(t)\neq0$ for every $t$. How can I then use the variation of parameters method by Lagrange to find the solution $u =u_0v$? And how can I find the general solution if the equation isn't homogeneous? Update: I've substituted $u=u_0v$ into the equation and found a new $2^{nd}$-order differential equation of the form $$f = a_2(t)u_0\,\frac{d^2v}{dt^2}+\bigl(2a_2(t)u_0'+a_1(t)u_0\bigr)\frac{dv}{dt}+\bigl(a_2(t)u_0''+a_1(t)u_0'+a_0(t)u_0\bigr)v.$$ Since $u_0$ solves the homogeneous equation, the coefficient of $v$ equals $L\,u_0 = 0$, which I'm supposed to use to solve for $w$ with $w=v'$. But I don't seem to be able to proceed, as I'm not sure how to solve the system $$\begin{split} w =&\ v'\\ a_2(t)u_0\,w'+\bigl(2a_2(t)u_0'+a_1(t)u_0\bigr)w =&\ f \end{split}$$
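For reference, a sketch of how the reduced equation can be handled (standard reduction of order, assuming $a_2(t) \neq 0$ and $u_0(t) \neq 0$ throughout): dividing by $a_2(t)u_0$ gives the first-order linear equation $$w' + \left(\frac{2u_0'}{u_0} + \frac{a_1}{a_2}\right)w = \frac{f}{a_2 u_0},$$ with integrating factor $$\mu(t) = u_0^2 \exp\!\left(\int \frac{a_1}{a_2}\,dt\right),$$ so that $$w = \frac{1}{\mu}\left(\int \frac{\mu f}{a_2 u_0}\,dt + C_1\right), \qquad v = \int w\,dt + C_2, \qquad u = u_0 v.$$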
2019-08-22 04:48:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9934865832328796, "perplexity": 73.39348789656582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316783.70/warc/CC-MAIN-20190822042502-20190822064502-00427.warc.gz"}
http://mechanics.wikidot.com/excited
Excited Particles Bead panel from a baby carrier, Kayan or Kenyah people. Borneo 20th century, 27 x 27 cm. From the Teo Family collection, Kuching. Photograph by D Dunlop. Let particle P be formed from a ground-state particle combined with a photon. Definition: P is called an excited particle, or a particle in an excited state. Characterize P by some repetitive chain of events $\Psi ^{\sf{P}} = \left( \sf{\Omega}_{\sf{1}}, \sf{\Omega}_{\sf{2}}, \sf{\Omega}_{\sf{3}} \ \ldots \ \right)$ where each orbital cycle is a bundle of quarks $\sf{\Omega}^{\sf{P}} = \left\{ \sf{q}_{\sf{1}}, \sf{q}_{\sf{2}}, \sf{q}_{\sf{3}}, \ \ldots \ \right\}$ Sort the quarks in $\sf{\Omega}$ by their phase into a pair of compound quarks $\left\{ \mathcal{S}_{\LARGE{\circ}}, \mathcal{A}_{\LARGE{\circ}} \right\}$ and $\left\{ \mathcal{S}_{\LARGE{\bullet}}, \mathcal{A}_{\LARGE{\bullet}} \right\}$ called phase components, so that $\sf{\Omega}^{\sf{P}} = \left\{ \left\{ \mathcal{S}_{\LARGE{\circ}}, \mathcal{A}_{\LARGE{\circ}} \right\}, \ \left\{ \mathcal{S}_{\LARGE{\bullet}}, \mathcal{A}_{\LARGE{\bullet}} \right\} \vphantom{{\Sigma^{2}}^{2}} \right\}$ Since an excited particle is the union of a ground-state particle with a photon, there will be a symmetric part $\mathcal{S}_{\LARGE{\circ}} = \mathcal{S}_{\LARGE{\bullet}}$ and an anti-symmetric part $\mathcal{A}_{\LARGE{\circ}} = \overline{\mathcal{A} _{\LARGE{\bullet}} }$ in this partition. And we can assess the wavelength $\lambda$ of P by simply extending the analysis used for photons, to obtain \begin{align} \lambda^{\sf{P}} = \frac { hc }{ 2 k_{\sf{F}} \rho^{\mathcal{A}} } \end{align} where $\rho^{\mathcal{A}}$ is the radius of an anti-symmetric phase component. We can also establish the work $W$ that is required to build P by continuing the same analysis used for ground-state particles, to get $W ^{\sf{P}} = 2 k_{\sf{F}} \, \rho^{\mathcal{S}}$ where $\rho^{\mathcal{S}}$ is the radius of a symmetric phase component. Note that these quantities may vary independently of each other, because the quarks in $\mathcal{S}$ and $\mathcal{A}$ do not have any special relationship to each other. But overall, excited particles are characterized by a mean radius defined by the scalar sum \begin{align} \tilde{\rho} ^{\, \sf{P}} \equiv \frac{ \; \rho ^{\mathcal{S}} + \rho ^{\mathcal{A}} }{2} \end{align} Next step: nuclear events.
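For concreteness, the three relations can be sketched in code; this is only an illustration, since the page gives no numerical values for $k_{\sf{F}}$ or the radii, so all inputs here are hypothetical:

```python
# Sketch of the three relations above; k_F and the radii are
# hypothetical inputs, not values taken from the source.
h = 6.626e-34   # Planck constant, J·s
c = 2.998e8     # speed of light, m/s

def excited_particle(k_F, rho_S, rho_A):
    wavelength = h * c / (2 * k_F * rho_A)  # λ = hc / (2 k_F ρ^A)
    work = 2 * k_F * rho_S                  # W = 2 k_F ρ^S
    mean_radius = (rho_S + rho_A) / 2       # mean radius (scalar sum / 2)
    return wavelength, work, mean_radius
```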
2018-03-18 07:25:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.848908007144928, "perplexity": 1177.819960637066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645550.13/warc/CC-MAIN-20180318071715-20180318091715-00487.warc.gz"}
https://otter-grader.readthedocs.io/en/latest/workflow/executing_submissions/otter_grade.html
The command line interface allows instructors to grade notebooks locally by launching Docker containers on the instructor's machine that grade notebooks and return a CSV of grades and (optionally) PDF versions of student submissions for manually graded questions.

Configuration Files¶

Otter grades student submissions in individual Docker containers that are based on a Docker image generated through the use of a configuration zip file. Before grading assignments locally, an instructor should create such a zip file by using a tool such as Otter Assign or Otter Generate. This file will be used in the construction of a Docker image tagged otter-grader:{zip file hash}. This Docker image will then have containers spawned from it for each submission that is graded. Otter's Docker images can be pruned with otter grade --prune.

Using the CLI¶

Before using the command line utility, you should have

• written tests for the assignment, and
• generated a configuration zip file from those tests.

The grading interface, encapsulated in the otter grade command, runs the local grading process and defines the options that instructors can set when grading. A comprehensive list of flags is provided in the CLI Reference.

Basic Usage¶

The simplest usage of Otter Grade is when we have a directory structure as below (and we have changed directories into grading on the command line) and we don't require PDFs or additional requirements.

grading
├── nb0.ipynb
├── nb1.ipynb
├── nb2.ipynb  # etc.
└── tests
    ├── q1.py
    ├── q2.py
    └── q3.py  # etc.

In the case above, our otter command would be, very simply,

otter grade

Because the submissions are in the current working directory (grading), our configuration file is at ./autograder.zip, and we don't mind output to ./, we can use the default values of the -a and -o flags. After grading, our directory will look like this:

grading
├── final_grades.csv
├── nb0.ipynb
├── nb1.ipynb
├── nb2.ipynb  # etc.
└── tests
    ├── q1.py
    ├── q2.py
    └── q3.py  # etc.

and the grades for each submission will be in final_grades.csv.

If we wanted to generate PDFs for manual grading, we would specify this when making the configuration file and add the --pdfs flag to tell Otter to copy the PDFs out of the containers:

otter grade --pdfs

and at the end of grading we would have

grading
├── nb0.ipynb
├── nb1.ipynb
├── nb2.ipynb  # etc.
├── submission_pdfs
│   ├── nb0.pdf
│   ├── nb1.pdf
│   └── nb2.pdf  # etc.
└── tests
    ├── q1.py
    ├── q2.py
    └── q3.py  # etc.

To grade a single notebook (as opposed to all the notebooks in a directory), you can pass the path to the notebook file using the -p flag. For example, using the directory structure above, the call looks like this:

otter grade -p "./nb0.ipynb"

The usual information for this graded notebook is written to final_grades.csv, but the percent correct is also returned to the command line by the call.

To grade submissions that aren't notebook files, use the --ext flag, which accepts the file extension to search for submissions with. For example, if we had the same example as above but with Rmd files:

otter grade --ext Rmd

If you're grading submission export zip files (those generated by otter.Notebook.export or ottr::export), your otter_config.json should have its zips key set to true, and you should pass the -z flag to Otter Grade. You also won't need to use the --ext flag regardless of the type of file being graded inside the zip file.
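Putting several of these options together, a call that spells out the default values of the -a and -o flags explicitly while also requesting PDFs might look like this (paths are those of the example layout above):

otter grade -a ./autograder.zip -o ./ --pdfs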
If you require any packages not listed there, or among the dependencies of any packages above, you should create a requirements.txt file containing only those packages and use it when running your configuration generator. Support Files¶ Some notebooks require support files to run (e.g. data files). If your notebooks require any such files, you should generate your configuration zip file with those files. Intercell Seeding¶ Otter Grade also supports intercell seeding. This behavior should be configured as a part of your configuration zip file.
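As a concrete illustration of the Requirements section above, a requirements.txt for extra packages might look like the following. The package names are hypothetical examples of things an assignment's notebooks could import; they are not packages Otter itself needs:

# requirements.txt -- list only the extra packages your notebooks import
# (the names below are illustrative)
nltk
beautifulsoup4

This file is then passed to your configuration generator (Otter Assign or Otter Generate) when building the configuration zip file, so that the resulting Docker image includes those packages.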
2023-01-29 22:26:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21229423582553864, "perplexity": 7017.895766461475}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499768.15/warc/CC-MAIN-20230129211612-20230130001612-00105.warc.gz"}
https://calculator.academy/actual-cash-value-calculator/
Enter the purchase price of an item, its current age, and the expected lifetime of the item into the calculator. The calculator will evaluate the current actual cash value of the item.

## Actual Cash Value Formula

The following equation is used to calculate the actual cash value of an item.

ACV = P × (E - C) / E

• Where ACV is the actual cash value ($)
• P is the purchase price of the item or current cost for replacement
• E is the expected life of the item in years
• C is the current life in years

To calculate actual cash value, multiply the purchase price by the difference between the expected life and current life (years), then divide by the expected life.

## Actual Cash Value Definition

An actual cash value is an estimated value of an item in its current state, based on its expected lifetime and replacement cost. For example, if a car that cost $20,000 to buy (or replace) is expected to last 20 years and is currently 5 years old, its actual cash value is the purchase price scaled by the fraction of its life remaining: $20,000 × (20 − 5) / 20 = $15,000.

## Actual Cash Value Example

How to calculate an actual cash value?

1. First, determine the cost of a replacement. Determine the total cost of replacing the item with an equivalent quality item.

2. Next, determine the expected life span. Measure or determine the expected life span of the item.

3. Next, determine the current life. Measure or calculate the current number of years the item has been in use.

4. Finally, calculate the actual cash value. Using the formula above, calculate the actual cash value.

## FAQ

What is an actual cash value?

An actual cash value is a measure of the value of an item after taking into account depreciation and the cost of a replacement.
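As a quick sketch of the formula in code (the car figures are the hypothetical example from the definition above):

def actual_cash_value(purchase_price, expected_life, current_life):
    """ACV = P * (E - C) / E, per the formula above."""
    if expected_life <= 0 or not 0 <= current_life <= expected_life:
        raise ValueError("ages must satisfy 0 <= C <= E and E > 0")
    return purchase_price * (expected_life - current_life) / expected_life

# Example: a $20,000 car with a 20-year expected life, now 5 years old
print(actual_cash_value(20_000, 20, 5))  # 15000.0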
2023-03-31 09:23:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.282349556684494, "perplexity": 1158.9439041873154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00599.warc.gz"}
http://mathhelpforum.com/trigonometry/58841-sinusoidal-functions-word-problem.html
Thread: Sinusoidal Functions - word problem

1. Sinusoidal Functions - word problem

I really need help on this problem: A rodeo performer spins a lasso in a circle perpendicular to the ground. The height of the knot from the ground is modeled by h = -3 cos(5π/3 t) + 3.5, where t is the time measured in seconds.

a. What's the highest point reached by the knot?
b. What's the lowest point reached by the knot?
c. What's the period of the model?
d. According to the model, find the height of the knot after 25 seconds.

2. $\displaystyle h = -3\cos\left(\tfrac{5\pi}{3}t\right) + 3.5$

a) The highest point reached by the knot is the maximum of this function:

$\displaystyle \cos\left(\tfrac{5\pi}{3}t\right) \in [-1, 1]$

$\displaystyle -3\cos\left(\tfrac{5\pi}{3}t\right) \in [-3, 3]$

$\displaystyle -3\cos\left(\tfrac{5\pi}{3}t\right) + 3.5 \in [0.5, 6.5]$

so the highest value we can get is $\displaystyle h = 6.5$,

b) and the lowest is $\displaystyle h = 0.5$.

c) The cosine function $\displaystyle \cos t$ is periodic with period $\displaystyle 2\pi$, so $\displaystyle \cos\left(\tfrac{5\pi}{3}t\right)$ is periodic with period $\displaystyle \frac{2\pi}{5\pi/3} = \frac{6}{5}$.

d) For $\displaystyle t = 25$:

$\displaystyle h = -3\cos\left(\tfrac{5\pi}{3}\cdot 25\right) + 3.5 = -3\cos\left(\tfrac{125\pi}{3}\right) + 3.5 = -3\cos\left(\tfrac{5\pi}{3}\right) + 3.5$

(since $\tfrac{125\pi}{3} = 40\pi + \tfrac{5\pi}{3}$ and cosine has period $2\pi$). Converting into degrees, $\tfrac{5 \cdot 180^\circ}{3} = 300^\circ$, so

$\displaystyle h = -3\cos(270^\circ + 30^\circ) + 3.5 = -3 \cdot \tfrac{1}{2} + 3.5 = 2$

3. Thanks
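As a quick numerical cross-check of the worked solution (a sketch, using the model exactly as given):

import math

def h(t):
    """Height of the knot at time t (seconds), per the model above."""
    return -3 * math.cos(5 * math.pi / 3 * t) + 3.5

print(h(25))  # ~2.0, matching part (d)

# Sample one full period (6/5 s) to confirm the extremes from (a) and (b)
samples = [h(1.2 * k / 1000) for k in range(1001)]
print(min(samples), max(samples))  # ~0.5 and ~6.5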
2018-05-27 14:20:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8512066006660461, "perplexity": 1383.302943857442}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868316.29/warc/CC-MAIN-20180527131037-20180527151037-00355.warc.gz"}
https://math.eretrandre.org/tetrationforum/archive/index.php?thread-102-5.html
Tetration Forum
Full Version: Infinite tetration and superroot of infinitesimal

$(p \vee \neg p)$ is a short-hand version of the law of the excluded middle (P or not P), which does not apply in SIA according to the reference you gave.

Andrew Robbins

Thanks. This I have to digest slowly. Where can I read about existing geometrical interpretations of tetration, both finite and infinite, and of its opposite log(log(log(log(log(.....), both finite and infinite? If you involve all infinite values of log, this construction expands pretty fast into infinity^infinity trees of values. You mentioned rotation via new dimensions - how does it happen?

You can't, because I don't think anyone is seriously researching this aspect. The only two places that come close are a link that Gottfried posted, and Tony Smith's "work" (author of Banned by Cornell) which gives an interesting discussion of dimensions.

Ivars Wrote: both finite and infinite, and its opposite log(log(log(log(log(.....), both finite and infinite?

What?

Ivars Wrote: If you involve all infinite values of log, this construction expands pretty fast into infinity^infinity trees of values.

What?

Ivars Wrote: You mentioned rotation via new dimensions - how does it happen?

Well, mathematically we are not limited to 3 dimensions. There are about as many dimensions as you have "commas", meaning, if you can write a "comma", then you can make a new dimension... So for example, (1, 2) is a point 1 unit along the first dimension, then two units along the second dimension. There are many ways of looking at lists of numbers; most are part of what most mathematicians call "vector spaces". All vector spaces have a dimensionality to them, so for example, although (1, 2) and (1, 2, 0) can represent the same polynomial/series (an infinite-dimensional space), they are two completely different vectors because they come from different vector spaces.

When you are interpreting a vector as a hypercube as opposed to a point, then you can consider the hyper-volume (length, area, volume, etc...) of a vector (also called a norm or a measure) as the volume enclosed by the vector's scaled basis vectors in each dimension. This is the sense in which I was talking about "rotating through dimensions": a rotation in this sense is a map from (..., 1, 0, ...) to (..., 0, 1, ...), all else being equal.

The hypervolume of two different lengths in two different dimensions (a, b) is a*b, or in other words: Multiplication. The hypervolume of one length rotated through n dimensions (a, a, a, ..., a) with n a's is a^n, or in other words: Exponentiation. So if this is the pattern for exponentiation, and tetration's definition in terms of exponentiation is

${}^{n}{x} = x^{\left({}^{(n-1)}x\right)}$

then all we have to do is substitute this formula into our interpretation of exponentiation. This lexical substitution gives: ${}^{n}x$ is the hypervolume of a length x rotated through ${}^{(n-1)}x$ dimensions. Although this is not directly interpretable, it is recursive. This is as far as I've gotten in a semi-physical interpretation of tetration. I think trying to understand that statement intuitively is doomed, as recursive interpretation may lead to madness...

Andrew Robbins

andydude Wrote:
Ivars Wrote: both finite and infinite, and its opposite log(log(log(log(log(.....), both finite and infinite?
What?

Oh, I get it now.
You are mistaken: $\log^n(x) = {}^{\text{slog}(x)-n}e$, which is not the inverse of tetration... it is tetration. Tetration has two inverses, super-roots and super-logarithms. You might be able to consider super-logarithms as iterated logarithms, but instead of giving the nth iterate of a logarithm, the super-logarithm gives you the n required to produce the given iterate. Please see Wikipedia's super-logarithm and iterated logarithm for more.

Andrew Robbins

But is not applying infinite log(log(log(.........h(z)) from the right leading back to z?

andydude Wrote: I think trying to understand that statement intuitively is doomed, as recursive interpretation may lead to madness...

Maybe not, if you try to understand at least just one: h(e^(pi/2)) = ∓i. This means e^(-i*(pi/2)) = e^(h(e^(pi/2))*(pi/2)). So infinitely working beginning from the end (from the right), in an infinite number of dimensions, on e^(pi/2) is equal to the negative imaginary unit. If we look at e^(-i*pi/2) as a turn of 90 degrees clockwise in the complex plane, this turn involves going through infinite tetration in other dimensions. What could that be?

So log(log(log(......) is tetration? If we take e^(pi/2) = i^-i, and log with base i, we get (from the right) (only the main value):

log_i(.....(log_i(i^-i))) = log_i(.....(log_i(-i))) = log_i(.....(log_i(i^-1))) = log_i(....(log_i(log_i(-1)))) = log_i(....(log_i(log_i(i^2)))) = log_i(......(log_i(2))) = log_i(......(log_i(log_i(i) + log_i(i)))) = log_i(......(log_i(log_i(i^2)))) = log_i(......(log_i(2))) = ............ and the same steps repeat.

So, if we ever get to the end, we might have either log_i(i^2) = 2 or log_i(2) = 2 ln 2/(i*pi). Is this the same as tetration of e^(pi/2)? How do we calculate it in this case?

Ivars Wrote: But is not applying infinite log(log(log(.........h(z)) from the right leading back to z?

No, no, no! What h represents is a fixed point of an exponential function. So if $a = h(z)$ then $a = z^a$, and if you take the base-z logarithm of both sides then you can see that $\log_z(a) = a$, which means that a is not only a fixed point of the base-z exponential function, but it is also a fixed point of the base-z logarithm function as well. And if it is a fixed point of the logarithm function, then there is no amount of iteration that will give any other output, so $\log_z^{\infty}(a) = a$; so infinitely iterated logarithms are not the inverse of infinitely iterated exponentials, only $z = a^{1/a}$ is the inverse of infinitely iterated exponentials.

Andrew Robbins

Ivars Wrote:
andydude Wrote: I think trying to understand that statement intuitively is doomed, as recursive interpretation may lead to madness...
So infinitely working beginning from the end (from the right), in an infinite number of dimensions, on e^(pi/2) is equal to the negative imaginary unit.

I think you mean: rotating the length $e^{\pi/2}$ through an infinite number of dimensions gives a hyper-volume equal to the negative imaginary unit. I only knew what you were talking about from context. If I were not in this forum and I saw your words, they would not describe what you are trying to describe. Try to be more careful about how you say things.

Ivars Wrote: So log(log(log(......) is tetration?
if we take e^(pi/2) = i^-i, and log with base i, we get (from the right) (only the main value):

$\begin{tabular}{rl}\log_i^{\infty} (e^{\pi/2}) & = \log_i^{\infty} (i^{-i}) \\ & = \log_i^{\infty} (-i) \\ & = \log_i^{\infty} (i^{-1}) \\ & = \log_i^{\infty} (\log_i (-1)) \\ & = \log_i^{\infty} (\log_i i^2) \\ & = \log_i^{\infty} (2) \\ & = \log_i^{\infty} (2 \log_i (i)) \\ & = \log_i^{\infty} (\log_i i^2) \\ & = \log_i^{\infty} (2) \\ & = \log_i^{\infty} (2 \log_i (i)) \\ & = \log_i^{\infty} (\log_i i^2) \\ & = \log_i^{\infty} (2) \\ & = ...\end{tabular}$

So, if we ever get to the end, we might have either $\log_i(i^2) = 2$ or $\log_i(2) = \frac{2 \ln 2}{i \pi}$.

True and true. I think your issue here is that you are calculating with base i, and as such, the only way to calculate the infinitely iterated exponential with base i is to use exponentials, not logarithms. The base-b logarithms can be infinitely iterated for all b such that $|\log(a)| > 1$, where a is a fixed point $a = b^a = {}^{\infty}b$. Exponentials, on the other hand, converge everywhere logs don't. So the base-b exponentials can be infinitely iterated for all b such that $|\log(a)| < 1$. Now the fixed point of base i is ${}^{\infty}i = 0.438283 + 0.360592 i$, which means $|\log({}^{\infty}i)| = 0.891514 < 1$. Since this value (sometimes called the Lyapunov number) is less than 1, exponentials will converge, but logarithms will not. So what you have found is that when you try to use logarithms for base i, the iteration doesn't converge; it oscillates between values, as expected.

Ivars Wrote: Is this the same as tetration of (e^(pi/2))? How do we calculate it in this case?

You would do it just as you would with base i, only we are looking at base $e^{\pi/2}$ this time. According to the infinitely iterated exponential (with the usual branch structure) we get ${}^{\infty}(e^{\pi/2}) = -i$. This means that $-i$ is the fixed point of the base-$e^{\pi/2}$ exponential function. To determine whether we should be using exponentials or logarithms to find this fixed point, we do the test: $|\log(-i)| = 1.5708 > 1$, so we should be using logarithms instead of exponentials, because this number (the Lyapunov number) is greater than 1. So with this, we can say that $\log_{e^{\pi/2}}^n(-i) = -i$ for all n. Since this is true for all n, the version when $n = \infty$ is true.

Andrew Robbins

Thank you Andrew, this is all very clearly explained and very interesting. I will study it for a while, but thanks again. I am happy to have not made mistakes in the log base i calculation of e^(pi/2), and in the conclusion of oscillation between 2 interesting values. I am somewhat attached to this value i^-i, more than to general conclusions - maybe it is not an exception, but rather a first glimpse of some uncovered relations.

Steven in the Physorg forum came up with an interesting limit, which I extrapolated a little, and will write here just to check if it's right, as I used some web complex number calculator to check the limits. I wonder if i/x, as x -> infinity, can be considered an imaginary infinitesimal in the scale where x is.

lim (1 ± i/x)^(pi*(x^2)) = e^(pi/2) as x -> infinity

If you substitute x = sqrt(y) then it is valid as well, true for both roots of y. So

lim (1 ± i/x)^(n*pi*(x^2)) as x -> infinity = e^(n*(pi/2)) for n >= 1.

Now what happens if n is real? For n = 1/2:

lim (1 ± i/x)^((pi/2)*(x^2)) as x -> infinity = e^(pi/4),

and the formula works also for all 1/n:

lim (1 ± i/x)^(a*pi*(x^2)) = e^(a*(pi/2)), a real, including 0.
But we know that e^(pi/2) = i^(1/i), or the i-th root of i. So

lim (1 ± i/x)^(pi*(x^2)) as x -> infinity = i^(1/i),

and generally,

lim (1 ± i/x)^(a*pi*(x^2)) as x -> infinity = (i^(1/i))^a = (i^-i)^a.

This is probably well known, but it still looks nice. I do not know if it really holds for all a I have indicated. Now if we drop the x -> infinity in a way that includes x in the numbers themselves, so that i gets "smaller" by infinity and correspondingly pi "longer" by infinity^2, the result would still hold; in each scale of x (since we can substitute x with x^2, x^n, x^infinity) the identity will hold, but i and pi will be different, as will e.

Ivars
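To make the convergence test above concrete, here is a small numerical sketch (not from the original thread; the starting values are arbitrary). It iterates exponentials for base i, where the Lyapunov number is below 1, and iterates logarithms for base e^(pi/2), where it is above 1:

import cmath

# Base i: |log(fixed point)| ~ 0.89 < 1, so iterating exponentials
# z -> i**z converges to the fixed point quoted above.
z = 1 + 0j
for _ in range(200):
    z = 1j ** z
print(z)  # ~ 0.438283 + 0.360592i

# Base e^(pi/2): |log(-i)| = pi/2 > 1, so iterate base-b logarithms
# w -> log(w)/log(b) instead; the iteration settles on -i.
b = cmath.exp(cmath.pi / 2)
w = -0.5 - 0.5j  # arbitrary starting point in the lower half-plane
for _ in range(200):
    w = cmath.log(w) / cmath.log(b)
print(w)  # ~ -1j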
2019-10-17 21:45:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 25, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8168636560440063, "perplexity": 1049.528048435974}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986676227.57/warc/CC-MAIN-20191017200101-20191017223601-00476.warc.gz"}
https://bugs.python.org/issue20907
This issue tracker has been migrated to GitHub, and is currently read-only. For more information, see the GitHub FAQs in the Python Developer's Guide.

Classification
Title: behavioral differences between shutil.unpack_archive and ZipFile.extractall
Type: behavior
Stage: patch review
Components: Documentation, Library (Lib)
Versions: Python 3.11

Process
Status: open
Resolution: (none)
Assignee: docs@python
Nosy list: MarSoft, andrei.avk, docs@python, eric.araujo, huantian, peter@psantoro.net, pitrou, r.david.murray
Priority: normal
Keywords: patch

Created on 2014-03-13 09:46 by peter@psantoro.net, last changed 2022-04-11 14:57 by admin.

Files
- shutil.diff, peter@psantoro.net, 2014-03-13 09:46, patch
- test_unpack.zip, peter@psantoro.net, 2014-03-13 23:12, test script and test zips

Pull Requests
- PR 29910, open, andrei.avk, 2021-12-04 00:12

Messages (10)

msg213374 - Author: Peter Santoro (peter@psantoro.net), Date: 2014-03-13 09:46

Since Python 3.3.1, ZipFile.extractall was enhanced to better handle absolute paths and illegal characters. The associated logic within shutil._unpack_zipfile essentially skips zip members with these issues. If a zip file contains all absolute paths, ZipFile.extractall works as expected (i.e. the zip file is unpacked), but shutil._unpack_zipfile (normally called indirectly via shutil.unpack_archive) appears to do nothing (i.e. it silently fails to unpack the zip file). The attached patch attempts to unify the behavior of extracting zip files between shutil.unpack_archive and ZipFile.extractall.

msg213494 - Author: Peter Santoro (peter@psantoro.net), Date: 2014-03-13 23:12

I've attached a zip file which contains a test script and test zip files for the previously submitted Python 3.3.5 patch. See the included README.txt for more information. To view the contents of the included bad.zip file, use the following command:

> unzip -l bad.zip

msg214082 - Author: Antoine Pitrou (pitrou), Date: 2014-03-19 11:40

Well, you have to assume the code you're removing is there for a reason (e.g. perhaps this is meant to protect from attacks when opening a zip file uploaded by a user). I'd like to hear from Éric on this.

msg214093 - Author: R. David Murray (r.david.murray), Date: 2014-03-19 13:19

First step would be to get rid of the warning in the zipfile docs and replace it with the info that the absolute path '/' and any relative path are stripped silently before the file is extracted. It would also be worth adding an enhancement to zipfile to optionally not do it silently. I hope the same considerations apply to tarfile, but I haven't checked. In other words, I do think that code is a holdover from when zipfile *wasn't* safe, but since I didn't write it I don't know for sure.

msg214127 - Author: Éric Araujo (eric.araujo), Date: 2014-03-19 19:24

shutil.unpack_archive was extracted from distutils by Tarek. I can do some Mercurial archaeology to find more about the behaviour.

msg214196 - Author: Peter Santoro (peter@psantoro.net), Date: 2014-03-20 09:56

It seems clear to me that the logic in shutil._unpack_zipfile that silently skips paths that start with '/' (indicating an absolute path) or that contain references to the parent directory ('..') was added to prevent malicious zip files from making potentially malicious/unwanted modifications to the filesystem (perhaps at a time when zipfile did not itself contain such logic). This conservative approach works, but it can have unexpected results. For example, if all entries in a zip file contain these invalid characters, then shutil._unpack_zipfile appears to do nothing (i.e.
the zip file is not unpacked). This is good (except for the silent part), if the zip file is truly malicious. However, I recently had to deal with thousands of zip files created by well known software vendors where hundreds of the zip files were created incorrectly and contained these invalid characters. These files were not malicious, but they were created improperly. Note that shutil._unpack_zipfile silently failed to unzip these files, but by using ZipFile.extractall I could unzip them.

It appears that most unzipping software today either ignores (sometimes silently) potentially malicious zip entries (e.g. Windows 7 Explorer displays an invalid zip file error) or attempts to filter out/replace known bad characters so that the zip entries can be extracted (e.g. WinZip, gnu unzip). I created this issue because the Python library uses both approaches, which may need rethinking.

The newer logic in ZipFile._extract_member, which is used by ZipFile.extractall, takes a different approach. Instead of silently ignoring potentially malicious zip entries, it attempts to filter out or replace known invalid characters before extracting the zip entries. From the Python zipfile docs:

---
If a member filename is an absolute path, a drive/UNC sharepoint and leading (back)slashes will be stripped, e.g.: ///foo/bar becomes foo/bar on Unix, and C:\foo\bar becomes foo\bar on Windows. And all ".." components in a member filename will be removed, e.g.: ../../foo../../ba..r becomes foo../ba..r. On Windows illegal characters (:, <, >, |, ", ?, and *) are replaced by underscore (_).
---

As ZipFile._extract_member filters out/replaces more invalid characters than shutil._unpack_zipfile handles, one could argue that the (apparently older) approach used by shutil._unpack_zipfile is less safe. The approaches used by shutil._unpack_zipfile and ZipFile.extractall to deal with potentially malicious zip file entries are different. This issue could be closed if not deemed important by the Python core developers, or it could be handled by documentation and/or coding changes.

msg214214 - Author: R. David Murray (r.david.murray), Date: 2014-03-20 12:45

Note that unix unzip does exactly the same thing as zipfile extractall (except that it does issue warnings), and I believe this is considered "best practice" these days for extraction tools: strip out absolute/relative path components and extract to the destination directory.

msg397062 - Author: huantian (huantian), Date: 2021-07-07 03:38

This issue is somewhat old, but I think either applying the patch attached earlier or noting this difference in behavior in the documentation would be ideal, as it can be quite difficult to debug shutil.unpack_archive silently skipping files. If the files you expect to unpack do contain '..' for non-malicious purposes, this makes shutil.unpack_archive unsuitable for the situation.

msg407620 - Author: Andrei Kulakov (andrei.avk), Date: 2021-12-04 00:29

I think it may be good enough to add a warning on skipped files in _unpack_zipfile():

- this way we keep backwards compatibility (especially since behavior in both modules differed for such a long time.)
- it's not clear that ZipFile behavior is superior -- for example, what if a file with stripped path components overwrites existing files?
- if requested in the future, a parameter can be added to enable ZipFile-like behavior
- it can be very confusing if files are silently skipped, especially if an archive has thousands of files.
I've added a PR; note that the test in the PR also tests that files with '..' are indeed skipped. We don't have a test for that now, so that's an added benefit.

msg407622 - Author: Andrei Kulakov (andrei.avk), Date: 2021-12-04 01:34

I forgot to add this:

- we may not want to follow the behavior of command line unzip - it's interactive, so the considerations are somewhat different. For example, it will warn if a file is being overwritten and show a prompt on whether to skip, overwrite, abort, etc.

History

- 2022-04-07 21:02:08, MarSoft: set nosy +MarSoft
- 2021-12-12 18:58:58, iritkatriel: set versions +Python 3.11, -Python 3.3, Python 3.4
- 2021-12-04 01:34:53, andrei.avk: set messages +msg407622
- 2021-12-04 00:29:55, andrei.avk: set messages +msg407620
- 2021-12-04 00:12:50, andrei.avk: set nosy +andrei.avk; pull_requests +pull_request28135; stage: needs patch -> patch review
- 2021-07-07 03:38:31, huantian: set nosy +huantian; messages +msg397062
- 2014-03-20 12:45:25, r.david.murray: set messages +msg214214
- 2014-03-20 09:56:39, peter@psantoro.net: set messages +msg214196
- 2014-03-19 19:24:30, eric.araujo: set messages +msg214127
- 2014-03-19 13:19:05, r.david.murray: set nosy +r.david.murray, docs@python; messages +msg214093; assignee: docs@python; components +Documentation; stage: test needed -> needs patch
- 2014-03-19 11:40:54, pitrou: set nosy +eric.araujo, pitrou; messages +msg214082
- 2014-03-13 23:12:18, peter@psantoro.net: set files +test_unpack.zip; messages +msg213494
- 2014-03-13 09:51:20, eric.araujo: set stage: test needed
- 2014-03-13 09:46:05, peter@psantoro.net: create
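For readers who want to see the difference first-hand, here is a short repro script (a sketch, not part of the original report; the file names are illustrative). It reflects the stdlib behavior the messages describe for Python versions without the warning proposed in PR 29910:

import os, shutil, tempfile, zipfile

tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "bad.zip")

# Build a zip whose only member has an absolute name, as in the report.
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("/absolute.txt", "payload")

# shutil.unpack_archive silently skips the member.
out1 = os.path.join(tmp, "via_shutil")
os.makedirs(out1)
shutil.unpack_archive(archive, out1, "zip")
print(os.listdir(out1))   # [] -- nothing extracted, no warning

# ZipFile.extractall strips the leading '/' and extracts the file.
out2 = os.path.join(tmp, "via_zipfile")
with zipfile.ZipFile(archive) as zf:
    zf.extractall(out2)
print(os.listdir(out2))   # ['absolute.txt']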
2023-02-02 19:13:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5093650221824646, "perplexity": 7560.717856011512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00551.warc.gz"}
http://mathoverflow.net/questions/84121/inverse-laplace-transform-with-fractional-powers
## Inverse Laplace transform with fractional powers

Can anything be said about how to calculate the inverse Laplace transform of

$\frac{\lambda+a}{(\lambda+b)(\lambda^c+d)-e},$

where $a,b,c,d,e$ are real numbers?
2013-05-26 07:52:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8205112814903259, "perplexity": 1017.7148809636997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706637439/warc/CC-MAIN-20130516121717-00038-ip-10-60-113-184.ec2.internal.warc.gz"}
http://isabelle.in.tum.de/repos/isabelle/file/395c4fb15ea2/src/Doc/JEdit/JEdit.thy
src/Doc/JEdit/JEdit.thy
author: wenzelm, Sat Nov 24 18:56:44 2018 +0100, changeset 69343 (395c4fb15ea2): use "Isabelle DejaVu" fonts uniformly: Text Area, GUI elements, HTML output etc.

(*:maxLineLen=78:*)

theory JEdit
  imports Base
begin

chapter \<open>Introduction\<close>

section \<open>Concepts and terminology\<close>

text \<open>
  Isabelle/jEdit is a Prover IDE that integrates \<^emph>\<open>parallel proof checking\<close>
  @{cite "Wenzel:2009" and "Wenzel:2013:ITP"} with \<^emph>\<open>asynchronous user
  interaction\<close> @{cite "Wenzel:2010" and "Wenzel:2012:UITP-EPTCS" and
  "Wenzel:2014:ITP-PIDE" and "Wenzel:2014:UITP"}, based on a document-oriented
  approach to \<^emph>\<open>continuous proof processing\<close> @{cite "Wenzel:2011:CICM" and
  "Wenzel:2012"}. Many concepts and system components are fit together in
  order to make this work. The main building blocks are as follows.

  \<^descr>[Isabelle/ML] is the implementation and extension language of Isabelle,
  see also @{cite "isabelle-implementation"}. It is integrated into the
  logical context of Isabelle/Isar and allows to manipulate logical entities
  directly. Arbitrary add-on tools may be implemented for object-logics such
  as Isabelle/HOL.

  \<^descr>[Isabelle/Scala] is the system programming language of Isabelle. It
  extends the pure logical environment of Isabelle/ML towards the outer
  world of graphical user interfaces, text editors, IDE frameworks, web
  services, SSH servers, SQL databases etc. Special infrastructure allows to
  transfer algebraic datatypes and formatted text easily between ML and
  Scala, using asynchronous protocol commands.

  \<^descr>[PIDE] is a general framework for Prover IDEs based on Isabelle/Scala. It
  is built around a concept of parallel and asynchronous document
  processing, which is supported natively by the parallel proof engine that
  is implemented in Isabelle/ML. The traditional prover command loop is
  given up; instead there is direct support for editing of source text, with
  rich formal markup for GUI rendering.

  \<^descr>[jEdit] is a sophisticated text editor\<^footnote>\<open>\<^url>\<open>http://www.jedit.org\<close>\<close>
  implemented in Java\<^footnote>\<open>\<^url>\<open>https://www.java.com\<close>\<close>. It is easily extensible by
  plugins written in any language that works on the JVM. In the context of
  Isabelle this is always Scala\<^footnote>\<open>\<^url>\<open>https://www.scala-lang.org\<close>\<close>.

  \<^descr>[Isabelle/jEdit] is the main application of the PIDE framework and the
  default user-interface for Isabelle. It targets both beginners and
  experts. Technically, Isabelle/jEdit consists of the original jEdit code
  base with minimal patches and a special plugin for Isabelle. This is
  integrated as a desktop application for the main operating system
  families: Linux, Windows, Mac OS X.

  End-users of Isabelle download and run a standalone application that exposes
  jEdit as a text editor on the surface. Thus there is occasionally a tendency
  to apply the name ``jEdit'' to any of the Isabelle Prover IDE aspects,
  without proper differentiation.
  When discussing these PIDE building blocks in public forums, mailing
  lists, or even scientific publications, it is particularly important to
  distinguish Isabelle/ML versus Standard ML, Isabelle/Scala versus Scala,
  Isabelle/jEdit versus jEdit.
\<close>


section \<open>The Isabelle/jEdit Prover IDE\<close>

text \<open>
  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{isabelle-jedit}
  \end{center}
  \caption{The Isabelle/jEdit Prover IDE}
  \label{fig:isabelle-jedit}
  \end{figure}

  Isabelle/jEdit (\figref{fig:isabelle-jedit}) consists of some plugins for
  the jEdit text editor, while preserving its general look-and-feel as far as
  possible. The main plugin is called ``Isabelle'' and has its own menu
  \<^emph>\<open>Plugins~/ Isabelle\<close> with access to several actions and add-on panels (see
  also \secref{sec:dockables}), as well as \<^emph>\<open>Plugins~/ Plugin Options~/
  Isabelle\<close> (see also \secref{sec:options}).

  The options allow to specify a logic session name, but the same selector is
  also accessible in the \<^emph>\<open>Theories\<close> panel (\secref{sec:theories}). After
  startup of the Isabelle plugin, the selected logic session image is provided
  automatically by the Isabelle build tool @{cite "isabelle-system"}: if it is
  absent or outdated wrt.\ its sources, the build process updates it within
  the running text editor. Prover IDE functionality is fully activated after
  successful termination of the build process. A failure may require changing
  some options and restart of the Isabelle plugin or application. Changing the
  logic session, or the underlying ML system platform (32\,bit versus 64\,bit),
  requires a restart of the whole application to take effect.

  \<^medskip>
  The main job of the Prover IDE is to manage sources and their changes,
  taking the logical structure as a formal document into account (see also
  \secref{sec:document-model}). The editor and the prover are connected
  asynchronously in a lock-free manner. The prover is free to organize the
  checking of the formal text in parallel on multiple cores, and provides
  feedback via markup, which is rendered in the editor via colors, boxes,
  squiggly underlines, hyperlinks, popup windows, icons, clickable output etc.

  Using the mouse together with the modifier key \<^verbatim>\<open>CONTROL\<close> (Linux, Windows)
  or \<^verbatim>\<open>COMMAND\<close> (Mac OS X) exposes formal content via tooltips and/or
  hyperlinks (see also \secref{sec:tooltips-hyperlinks}). Output (in popups
  etc.) may be explored recursively, using the same techniques as in the
  editor source buffer.

  Thus the Prover IDE gives an impression of direct access to formal content
  of the prover within the editor, but in reality only certain aspects are
  exposed, according to the possibilities of the prover and its add-on tools.
\<close>


subsection \<open>Documentation\<close>

text \<open>
  The \<^emph>\<open>Documentation\<close> panel of Isabelle/jEdit provides access to some example
  theory files and the standard Isabelle documentation. PDF files are opened
  by regular desktop operations of the underlying platform. The section
  ``Original jEdit Documentation'' contains the original \<^emph>\<open>User's Guide\<close> of
  this sophisticated text editor.
  The same is accessible via the \<^verbatim>\<open>Help\<close> menu or \<^verbatim>\<open>F1\<close> keyboard shortcut,
  using the built-in HTML viewer of Java/Swing. The latter also includes
  \<^emph>\<open>Frequently Asked Questions\<close> and documentation of individual plugins.

  Most of the information about jEdit is relevant for Isabelle/jEdit as well,
  but users need to keep in mind that defaults sometimes differ, and the
  official jEdit documentation does not know about the Isabelle plugin with
  its support for continuous checking of formal source text: jEdit is a plain
  text editor, but Isabelle/jEdit is a Prover IDE.
\<close>


subsection \<open>Plugins\<close>

text \<open>
  The \<^emph>\<open>Plugin Manager\<close> of jEdit allows to augment editor functionality by JVM
  modules (jars) that are provided by the central plugin repository, which is
  accessible via various mirror sites.

  Connecting to the plugin server-infrastructure of the jEdit project allows
  to update bundled plugins or to add further functionality. This needs to be
  done with the usual care for such an open bazaar of contributions. Arbitrary
  combinations of add-on features are apt to cause problems. It is advisable
  to start with the default configuration of Isabelle/jEdit and develop some
  sense how it is meant to work, before loading too many other plugins.

  \<^medskip>
  The \<^emph>\<open>Isabelle\<close> plugin is responsible for the main Prover IDE functionality
  of Isabelle/jEdit: it manages the prover session in the background. A few
  additional plugins are bundled with Isabelle/jEdit for convenience or out of
  necessity, notably \<^emph>\<open>Console\<close> with its \<^emph>\<open>Scala\<close> sub-plugin
  (\secref{sec:scala-console}) and \<^emph>\<open>SideKick\<close> with some Isabelle-specific
  parsers for document tree structure (\secref{sec:sidekick}). The
  \<^emph>\<open>Navigator\<close> plugin is particularly important for hyperlinks within the
  formal document-model (\secref{sec:tooltips-hyperlinks}). Further plugins
  (e.g.\ \<^emph>\<open>ErrorList\<close>, \<^emph>\<open>Code2HTML\<close>) are included to saturate the dependencies
  of bundled plugins, but have no particular use in Isabelle/jEdit.
\<close>


subsection \<open>Options \label{sec:options}\<close>

text \<open>
  Both jEdit and Isabelle have distinctive management of persistent options.

  Regular jEdit options are accessible via the dialogs \<^emph>\<open>Utilities~/ Global
  Options\<close> or \<^emph>\<open>Plugins~/ Plugin Options\<close>, with a second chance to flip the
  two within the central options dialog. Changes are stored in @{path
  "$JEDIT_SETTINGS/properties"} and @{path "$JEDIT_SETTINGS/keymaps"}.

  Isabelle system options are managed by Isabelle/Scala and changes are stored
  in @{path "$ISABELLE_HOME_USER/etc/preferences"}, independently of
  other jEdit properties. See also @{cite "isabelle-system"}, especially the
  coverage of sessions and command-line tools like @{tool build} or @{tool
  options}.

  Those Isabelle options that are declared as \<^verbatim>\<open>public\<close> are configurable in
  Isabelle/jEdit via \<^emph>\<open>Plugin Options~/ Isabelle~/ General\<close>.
  Moreover, there are various options for rendering document content, which
  are configurable via \<^emph>\<open>Plugin Options~/ Isabelle~/ Rendering\<close>. Thus
  \<^emph>\<open>Plugin Options~/ Isabelle\<close> in jEdit provides a view on a subset of
  Isabelle system options. Note that some of these options affect general
  parameters that are relevant outside Isabelle/jEdit as well, e.g.\
  @{system_option threads} or @{system_option parallel_proofs} for the
  Isabelle build tool @{cite "isabelle-system"}, but it is possible to use
  the settings variable @{setting ISABELLE_BUILD_OPTIONS} to change defaults
  for batch builds without affecting the Prover IDE.

  The jEdit action @{action_def isabelle.options} opens the options dialog for
  the Isabelle plugin; it can be mapped to editor GUI elements as usual.

  \<^medskip>
  Options are usually loaded on startup and saved on shutdown of
  Isabelle/jEdit. Editing the generated @{path "$JEDIT_SETTINGS/properties"}
  or @{path "$ISABELLE_HOME_USER/etc/preferences"} manually while the
  application is running may cause surprise due to lost updates!
\<close>


subsection \<open>Keymaps\<close>

text \<open>
  Keyboard shortcuts are managed as a separate concept of \<^emph>\<open>keymap\<close> that is
  configurable via \<^emph>\<open>Global Options~/ Shortcuts\<close>. The \<^verbatim>\<open>imported\<close> keymap is
  derived from the initial environment of properties that is available at the
  first start of the editor; afterwards the keymap file takes precedence and
  is no longer affected by change of default properties.

  Users may change their keymap later, but this may lead to conflicts with
  \<^verbatim>\<open>shortcut\<close> properties in \<^file>\<open>$JEDIT_HOME/src/jEdit.props\<close>.

  The action @{action_def "isabelle.keymap-merge"} helps to resolve pending
  Isabelle keymap changes that are in conflict with the current jEdit keymap,
  while non-conflicting changes are applied implicitly. This action is
  automatically invoked on Isabelle/jEdit startup.
\<close>


section \<open>Command-line invocation \label{sec:command-line}\<close>

text \<open>
  Isabelle/jEdit is normally invoked as a single-instance desktop application,
  based on platform-specific executables for Linux, Windows, Mac OS X.

  It is also possible to invoke the Prover IDE on the command-line, with some
  extra options and environment settings. The command-line usage of @{tool_def
  jedit} is as follows:
  @{verbatim [display]
\<open>Usage: isabelle jedit [OPTIONS] [FILES ...]
  Options are:
    -A NAME      ancestor session for options -R and -S (default: parent)
    -D NAME=X    set JVM system property
    -J OPTION    add JVM runtime option
                 (default $JEDIT_JAVA_SYSTEM_OPTIONS $JEDIT_JAVA_OPTIONS)
    -R NAME      build image with requirements from other sessions
    -S NAME      like option -R, with focus on selected session
    -b           build only
    -d DIR       include session directory
    -f           fresh build
    -i NAME      include session in name-space of theories
    -j OPTION    add jEdit runtime option
                 (default $JEDIT_OPTIONS)
    -l NAME      logic image name
    -m MODE      add print mode for output
    -n           no build of session image on startup
    -p CMD       ML process command prefix (process policy)
    -s           system build mode for session image

  Start jEdit with Isabelle plugin setup and open FILES
  (default "$USER_HOME/Scratch.thy" or ":" for empty buffer).\<close>}

  The \<^verbatim>\<open>-l\<close> option specifies the session name of the logic image to be used
  for proof processing. Additional session root directories may be included
  via option \<^verbatim>\<open>-d\<close> to augment the session name space (see also @{cite
  "isabelle-system"}). By default, the specified image is checked and built on
  demand. The \<^verbatim>\<open>-s\<close> option determines where to store the result session image
  of @{tool build}. The \<^verbatim>\<open>-n\<close> option bypasses the implicit build process for
  the selected session image.

  The \<^verbatim>\<open>-R\<close> option builds an auxiliary logic image with all theories from
  other sessions that are not already present in its parent; it also opens the
  session \<^verbatim>\<open>ROOT\<close> entry in the editor to facilitate editing of the main
  session. The \<^verbatim>\<open>-S\<close> option is like \<^verbatim>\<open>-R\<close>, with a focus on the selected
  session and its descendants: the namespace of accessible theories is
  restricted accordingly. This reduces startup time for big projects, notably
  the ``Archive of Formal Proofs''. The \<^verbatim>\<open>-A\<close> option specifies an alternative
  ancestor session for options \<^verbatim>\<open>-R\<close> and \<^verbatim>\<open>-S\<close>: this allows to restructure the
  hierarchy of session images on the spot.

  The \<^verbatim>\<open>-i\<close> option includes additional sessions into the name-space of
  theories: multiple occurrences are possible.

  The \<^verbatim>\<open>-m\<close> option specifies additional print modes for the prover process.
  Note that the system option @{system_option_ref jedit_print_mode} allows to
  do the same persistently (e.g.\ via the \<^emph>\<open>Plugin Options\<close> dialog of
  Isabelle/jEdit), without requiring command-line invocation.

  The \<^verbatim>\<open>-J\<close> and \<^verbatim>\<open>-j\<close> options pass additional low-level options to the JVM or
  jEdit, respectively. The defaults are provided by the Isabelle settings
  environment @{cite "isabelle-system"}, but note that these only work for the
  command-line tool described here, and not the regular application.

  The \<^verbatim>\<open>-D\<close> option allows to define JVM system properties; this is passed
  directly to the underlying \<^verbatim>\<open>java\<close> process.

  The \<^verbatim>\<open>-b\<close> and \<^verbatim>\<open>-f\<close> options control the self-build mechanism of
  Isabelle/jEdit.
  This is only relevant for building from sources, which also requires an
  auxiliary \<^verbatim>\<open>jedit_build\<close> component from
  \<^url>\<open>https://isabelle.in.tum.de/components\<close>. The official Isabelle release
  already includes a pre-built version of Isabelle/jEdit.

  \<^bigskip>
  It is also possible to connect to an already running Isabelle/jEdit process
  via @{tool_def jedit_client}:
  @{verbatim [display]
\<open>Usage: isabelle jedit_client [OPTIONS] [FILES ...]

  Options are:
    -c           only check presence of server
    -n           only report server name
    -s NAME      server name (default Isabelle)

  Connect to already running Isabelle/jEdit instance and open FILES\<close>}

  The \<^verbatim>\<open>-c\<close> option merely checks the presence of the server, producing a
  process return code accordingly.

  The \<^verbatim>\<open>-n\<close> option reports the server name, and the \<^verbatim>\<open>-s\<close> option provides a
  different server name. The default server name is the official distribution
  name (e.g.\ \<^verbatim>\<open>Isabelle2018\<close>). Thus @{tool jedit_client} can connect to the
  Isabelle desktop application without further options.

  The \<^verbatim>\<open>-p\<close> option allows to override the implicit default of the system
  option @{system_option_ref ML_process_policy} for ML processes started by
  the Prover IDE, e.g.\ to control CPU affinity on multiprocessor systems.

  The JVM system property \<^verbatim>\<open>isabelle.jedit_server\<close> provides a different server
  name, e.g.\ use \<^verbatim>\<open>isabelle jedit -Disabelle.jedit_server=\<close>\<open>name\<close> and
  \<^verbatim>\<open>isabelle jedit_client -s\<close>~\<open>name\<close> to connect later on.
\<close>


section \<open>GUI rendering\<close>

subsection \<open>Look-and-feel \label{sec:look-and-feel}\<close>

text \<open>
  jEdit is a Java/AWT/Swing application with the ambition to support
  ``native'' look-and-feel on all platforms, within the limits of what Oracle
  as Java provider and major operating system distributors allow (see also
  \secref{sec:problems}).

  Isabelle/jEdit enables platform-specific look-and-feel by default as
  follows.

  \<^descr>[Linux:] The platform-independent \<^emph>\<open>Metal\<close> is used by default.

  The Linux-specific \<^emph>\<open>GTK+\<close> often works as well, but the overall GTK theme
  and options need to be selected to suit Java/AWT/Swing. Note that the Java
  Virtual Machine has no direct influence on GTK rendering.

  \<^descr>[Windows:] Regular \<^emph>\<open>Windows\<close> is used by default.

  \<^descr>[Mac OS X:] Regular \<^emph>\<open>Mac OS X\<close> is used by default.

  The bundled \<^emph>\<open>MacOSX\<close> plugin provides various functions that are expected
  from applications on that particular platform: quit from menu or dock,
  preferences menu, drag-and-drop of text files on the application,
  full-screen mode for main editor windows. It is advisable to have the
  \<^emph>\<open>MacOSX\<close> plugin enabled all the time on that platform.

  Users may experiment with different Swing look-and-feels, but need to keep
  in mind that this extra variance of GUI functionality often causes problems.
  The platform-independent \<^emph>\<open>Metal\<close> and \<^emph>\<open>Nimbus\<close> should always work on all
  platforms, although they are technically and stylistically outdated. The
  historic \<^emph>\<open>CDE/Motif\<close> should be ignored.

  Changing the look-and-feel in \<^emph>\<open>Global Options~/ Appearance\<close> may update the
  GUI only partially: a proper restart of Isabelle/jEdit is usually required.
\<close>


subsection \<open>Displays with very high resolution \label{sec:hdpi}\<close>

text \<open>
  In the distant past, displays with $1024 \times 768$ or $1280 \times 1024$
  pixels were considered ``high resolution'' and bitmap fonts with 12 or 14
  pixels as adequate for text rendering. After 2016, we have routinely seen
  much higher resolutions, e.g.\ ``Full HD'' at $1920 \times 1080$ pixels or
  ``Ultra HD'' / ``4K'' at $3840 \times 2160$.

  GUI frameworks are usually lagging behind, with hard-wired icon sizes and
  tiny fonts. Java and jEdit do provide reasonable support for very high
  resolution, but this requires manual adjustments as described below.

  \<^medskip>
  The \<^bold>\<open>operating-system\<close> usually provides some configuration for global
  scaling of text fonts, e.g.\ $120\%$--$250\%$ on Windows. This impacts
  regular GUI elements, when used with native look-and-feel: Linux \<^emph>\<open>GTK+\<close>,
  \<^emph>\<open>Windows\<close>, \<^emph>\<open>Mac OS X\<close>, respectively. Alternatively, it is possible to use
  the platform-independent \<^emph>\<open>Metal\<close> look-and-feel and readjust its main font
  sizes via the jEdit options explained below. The Isabelle/jEdit \<^bold>\<open>application\<close>
  provides further options to adjust font sizes in particular GUI elements.
  Here is a summary of all relevant font properties:

  \<^item> \<^emph>\<open>Global Options / Text Area / Text font\<close>: the main text area font,
  which is also used as reference point for various derived font sizes,
  e.g.\ the \<^emph>\<open>Output\<close> (\secref{sec:output}) and \<^emph>\<open>State\<close>
  (\secref{sec:state-output}) panels.

  \<^item> \<^emph>\<open>Global Options / Gutter / Gutter font\<close>: the font for the gutter area
  left of the main text area, e.g.\ relevant for display of line numbers
  (disabled by default).

  \<^item> \<^emph>\<open>Global Options / Appearance / Button, menu and label font\<close> as well as
  \<^emph>\<open>List and text field font\<close>: this specifies the primary and secondary font
  for the \<^emph>\<open>Metal\<close> look-and-feel (\secref{sec:look-and-feel}).

  \<^item> \<^emph>\<open>Plugin Options / Isabelle / General / Reset Font Size\<close>: the main text
  area font size for action @{action_ref "isabelle.reset-font-size"}, e.g.\
  relevant for quick scaling like in common web browsers.

  \<^item> \<^emph>\<open>Plugin Options / Console / General / Font\<close>: the console window font,
  e.g.\ relevant for the Isabelle/Scala command-line.

  In \figref{fig:isabelle-jedit-hdpi} the \<^emph>\<open>Metal\<close> look-and-feel is configured
  with custom fonts at 30 pixels, and the main text area and console at 36
  pixels. This leads to decent rendering quality, despite the old-fashioned
  appearance of \<^emph>\<open>Metal\<close>.
  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[width=\textwidth]{isabelle-jedit-hdpi}
  \end{center}
  \caption{Metal look-and-feel with custom fonts for very high resolution}
  \label{fig:isabelle-jedit-hdpi}
  \end{figure}
\<close>


chapter \<open>Augmented jEdit functionality\<close>

section \<open>Dockable windows \label{sec:dockables}\<close>

text \<open>
  In jEdit terminology, a \<^emph>\<open>view\<close> is an editor window with one or more \<^emph>\<open>text
  areas\<close> that show the content of one or more \<^emph>\<open>buffers\<close>. A regular view may
  be surrounded by \<^emph>\<open>dockable windows\<close> that show additional information in
  arbitrary format, not just text; a \<^emph>\<open>plain view\<close> does not allow dockables.
  The \<^emph>\<open>dockable window manager\<close> of jEdit organizes these dockable windows,
  either as \<^emph>\<open>floating\<close> windows, or as \<^emph>\<open>docked\<close> panels within one of the four
  margins of the view. There may be any number of floating instances of some
  dockable window, but at most one docked instance; jEdit actions that address
  \<^emph>\<open>the\<close> dockable window of a particular kind refer to the unique docked
  instance.

  Dockables are used routinely in jEdit for important functionality like
  \<^emph>\<open>HyperSearch Results\<close> or the \<^emph>\<open>File System Browser\<close>. Plugins often provide
  a central dockable to access their main functionality, which may be opened
  by the user on demand. The Isabelle/jEdit plugin takes this approach to the
  extreme: its plugin menu provides the entry-points to many panels that are
  managed as dockable windows. Some important panels are docked by default,
  e.g.\ \<^emph>\<open>Documentation\<close>, \<^emph>\<open>State\<close>, \<^emph>\<open>Theories\<close>, \<^emph>\<open>Output\<close>, \<^emph>\<open>Query\<close>. The user
  can change this arrangement easily and persistently.

  Compared to plain jEdit, dockable window management in Isabelle/jEdit is
  slightly augmented according to the following principles:

  \<^item> Floating windows are dependent on the main window as \<^emph>\<open>dialog\<close> in
  the sense of Java/AWT/Swing. Dialog windows always stay on top of the view,
  which is particularly important in full-screen mode. The desktop environment
  of the underlying platform may impose further policies on such dependent
  dialogs, in contrast to fully independent windows, e.g.\ some window
  management functions may be missing.

  \<^item> Keyboard focus of the main view vs.\ a dockable window is carefully
  managed according to the intended semantics, as a panel mainly for output or
  input. For example, activating the \<^emph>\<open>Output\<close> (\secref{sec:output}) or
  \<^emph>\<open>State\<close> (\secref{sec:state-output}) panel via the dockable window manager
  returns keyboard focus to the main text area, but for \<^emph>\<open>Query\<close>
  (\secref{sec:query}) or \<^emph>\<open>Sledgehammer\<close> (\secref{sec:sledgehammer}) the
  focus is given to the main input field of that panel.

  \<^item> Panels that provide their own text area for output have an additional
  dockable menu item \<^emph>\<open>Detach\<close>.
This produces an independent copy of the
current output as a floating \<^emph>\<open>Info\<close> window, which displays that content
independently of ongoing changes of the PIDE document-model. Note that
Isabelle/jEdit popup windows (\secref{sec:tooltips-hyperlinks}) provide a
similar \<^emph>\<open>Detach\<close> operation as an icon.
\<close>


section \<open>Isabelle symbols \label{sec:symbols}\<close>

text \<open>
Isabelle sources consist of \<^emph>\<open>symbols\<close> that extend plain ASCII to allow
infinitely many mathematical symbols within the formal sources. This works
without depending on particular encodings and varying Unicode
standards.\<^footnote>\<open>Raw Unicode characters within formal sources would compromise
portability and reliability in the face of changing interpretation of
special features of Unicode, such as Combining Characters or Bi-directional
Text.\<close> See @{cite "Wenzel:2011:CICM"}.

For the prover back-end, formal text consists of ASCII characters that are
grouped according to some simple rules, e.g.\ as plain ``\<^verbatim>\<open>a\<close>'' or symbolic
``\<^verbatim>\<open>\<alpha>\<close>''. For the editor front-end, a certain subset of symbols is rendered
physically via Unicode glyphs, in order to show ``\<^verbatim>\<open>\<alpha>\<close>'' as ``\<open>\<alpha>\<close>'', for
example. This symbol interpretation is specified by the Isabelle system
distribution in \<^file>\<open>$ISABELLE_HOME/etc/symbols\<close> and may be augmented by the
user in @{path "$ISABELLE_HOME_USER/etc/symbols"}.

The appendix of @{cite "isabelle-isar-ref"} gives an overview of the
standard interpretation of finitely many symbols from the infinite
collection. Uninterpreted symbols are displayed literally, e.g.\
``\<^verbatim>\<open>\<foobar>\<close>''. Overlap of Unicode characters used in symbol
interpretation with informal ones (which might appear e.g.\ in comments)
needs to be avoided. Raw Unicode characters within prover source files
should be restricted to informal parts, e.g.\ to write text in non-Latin
alphabets in comments.
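
\<^medskip>
For illustration, entries in @{path "$ISABELLE_HOME_USER/etc/symbols"}
follow the format of the distributed \<^file>\<open>$ISABELLE_HOME/etc/symbols\<close> file.
A minimal sketch of a single entry, with a hypothetical symbol name and
code point, looks like this:

@{verbatim [display] \<open>\<foobar>  code: 0x0026a0\<close>}

See the distributed file for actual entries and further fields, such as
symbol abbreviations.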
The jEdit menu
operation \<^emph>\<open>Utilities~/ Buffer Options~/ Character encoding\<close> allows to
enforce the \<^verbatim>\<open>UTF-8-Isabelle\<close> encoding, but this will also change original
Unicode text into Isabelle symbols when saving the file!
\<close>

paragraph \<open>Font.\<close>
text \<open>Correct rendering via Unicode requires a font that contains glyphs for
the corresponding codepoints. There are also various unusual symbols with
particular purpose in Isabelle, e.g.\ control symbols and very long arrows.
Isabelle/jEdit prefers its own font collection \<^verbatim>\<open>Isabelle DejaVu\<close>, which
ensures that all standard Isabelle symbols are shown on the screen (or
printer) as expected.

Note that a Java/AWT/Swing application can load additional fonts only if
they are not installed on the operating system already! Outdated versions of
Isabelle fonts that happen to be provided by the operating system prevent
Isabelle/jEdit from using its bundled version. This could lead to missing
glyphs (black rectangles), when the system version of a font is older than
the application version. This problem can be avoided by refraining from
``installing'' any version of \<^verbatim>\<open>IsabelleText\<close> in the first place, although
it might be tempting to use the same font in other applications.

HTML pages generated by Isabelle refer to the same Isabelle fonts as a
server-side resource. Thus a web-browser can use that without requiring a
locally installed copy.
\<close>

paragraph \<open>Input methods.\<close>
text \<open>In principle, Isabelle/jEdit could delegate the problem to produce
Isabelle symbols in their Unicode rendering to the underlying operating
system and its \<^emph>\<open>input methods\<close>. Regular jEdit also provides various ways to
work with \<^emph>\<open>abbreviations\<close> to produce certain non-ASCII characters. Since
none of these standard input methods work satisfactorily for the
mathematical characters required for Isabelle, various specific
Isabelle/jEdit mechanisms are provided.

This is a summary of practically relevant input methods for Isabelle
symbols.

\<^enum> The \<^emph>\<open>Symbols\<close> panel: some GUI buttons allow to insert certain symbols in
the text buffer. There are also tooltips to reveal the official Isabelle
representation with some additional information about \<^emph>\<open>symbol
abbreviations\<close> (see below).

\<^enum> Copy/paste from decoded source files: text that is rendered as Unicode
already can be re-used to produce further text. This also works between
different applications, e.g.\ Isabelle/jEdit and some web browser or mail
client, as long as the same Unicode interpretation of Isabelle symbols is
used.

\<^enum> Copy/paste from prover output within Isabelle/jEdit. The same principles
as for text buffers apply, but note that \<^emph>\<open>copy\<close> in secondary Isabelle/jEdit
windows works via the keyboard shortcuts \<^verbatim>\<open>C+c\<close> or \<^verbatim>\<open>C+INSERT\<close>, while jEdit
menu actions always refer to the primary text area!

\<^enum> Completion provided by the Isabelle plugin (see \secref{sec:completion}).
Isabelle symbols have a canonical name and optional abbreviations.
This can
be used with the text completion mechanism of Isabelle/jEdit, to replace a
prefix of the actual symbol like \<^verbatim>\<open>\<lambda>\<close>, or its name preceded by backslash
\<^verbatim>\<open>\lambda\<close>, or its ASCII abbreviation \<^verbatim>\<open>%\<close> by the Unicode rendering.

The following table is an extract of the information provided by the
standard \<^file>\<open>$ISABELLE_HOME/etc/symbols\<close> file:

\<^medskip>
\begin{tabular}{lll}
\<^bold>\<open>symbol\<close> & \<^bold>\<open>name with backslash\<close> & \<^bold>\<open>abbreviation\<close> \\\hline
\<open>\<lambda>\<close> & \<^verbatim>\<open>\lambda\<close> & \<^verbatim>\<open>%\<close> \\
\<open>\<Rightarrow>\<close> & \<^verbatim>\<open>\Rightarrow\<close> & \<^verbatim>\<open>=>\<close> \\
\<open>\<Longrightarrow>\<close> & \<^verbatim>\<open>\Longrightarrow\<close> & \<^verbatim>\<open>==>\<close> \\[0.5ex]
\<open>\<And>\<close> & \<^verbatim>\<open>\And\<close> & \<^verbatim>\<open>!!\<close> \\
\<open>\<equiv>\<close> & \<^verbatim>\<open>\equiv\<close> & \<^verbatim>\<open>==\<close> \\[0.5ex]
\<open>\<forall>\<close> & \<^verbatim>\<open>\forall\<close> & \<^verbatim>\<open>!\<close> \\
\<open>\<exists>\<close> & \<^verbatim>\<open>\exists\<close> & \<^verbatim>\<open>?\<close> \\
\<open>\<longrightarrow>\<close> & \<^verbatim>\<open>\longrightarrow\<close> & \<^verbatim>\<open>-->\<close> \\
\<open>\<and>\<close> & \<^verbatim>\<open>\and\<close> & \<^verbatim>\<open>&\<close> \\
\<open>\<or>\<close> & \<^verbatim>\<open>\or\<close> & \<^verbatim>\<open>|\<close> \\
\<open>\<not>\<close> & \<^verbatim>\<open>\not\<close> & \<^verbatim>\<open>~\<close> \\
\<open>\<noteq>\<close> & \<^verbatim>\<open>\noteq\<close> & \<^verbatim>\<open>~=\<close> \\
\<open>\<in>\<close> & \<^verbatim>\<open>\in\<close> & \<^verbatim>\<open>:\<close> \\
\<open>\<notin>\<close> & \<^verbatim>\<open>\notin\<close> & \<^verbatim>\<open>~:\<close> \\
\end{tabular}
\<^medskip>

Note that the above abbreviations refer to the input method. The logical
notation provides ASCII alternatives that often coincide, but sometimes
deviate. This occasionally causes user confusion with old-fashioned Isabelle
sources that use ASCII replacement notation like \<^verbatim>\<open>!\<close> or \<^verbatim>\<open>ALL\<close> directly in
the text.

On the other hand, coincidence of symbol abbreviations with ASCII
replacement syntax helps to update old theory sources via explicit
completion (see also \<^verbatim>\<open>C+b\<close> explained in \secref{sec:completion}).
\<close>

paragraph \<open>Control symbols.\<close>
text \<open>There are some special control symbols to modify the display style of a
single symbol (without nesting). Control symbols may be applied to a region
of selected text, either using the \<^emph>\<open>Symbols\<close> panel or keyboard shortcuts or
jEdit actions. These editor operations produce a separate control symbol for
each symbol in the text, in order to make the whole text appear in a certain
style.

\<^medskip>
\begin{tabular}{llll}
\<^bold>\<open>style\<close> & \<^bold>\<open>symbol\<close> & \<^bold>\<open>shortcut\<close> & \<^bold>\<open>action\<close> \\\hline
superscript & \<^verbatim>\<open>\<^sup>\<close> & \<^verbatim>\<open>C+e UP\<close> & @{action_ref "isabelle.control-sup"} \\
subscript & \<^verbatim>\<open>\<^sub>\<close> & \<^verbatim>\<open>C+e DOWN\<close> & @{action_ref "isabelle.control-sub"} \\
bold face & \<^verbatim>\<open>\<^bold>\<close> & \<^verbatim>\<open>C+e RIGHT\<close> & @{action_ref "isabelle.control-bold"} \\
emphasized & \<^verbatim>\<open>\<^emph>\<close> & \<^verbatim>\<open>C+e LEFT\<close> & @{action_ref "isabelle.control-emph"} \\
reset & & \<^verbatim>\<open>C+e BACK_SPACE\<close> & @{action_ref "isabelle.control-reset"} \\
\end{tabular}
\<^medskip>

To produce a single control symbol, it is also possible to complete on
\<^verbatim>\<open>\sup\<close>, \<^verbatim>\<open>\sub\<close>, \<^verbatim>\<open>\bold\<close>, \<^verbatim>\<open>\emph\<close> as for regular symbols.

The emphasized style only takes effect in document output (when used with a
cartouche), but not in the editor.
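
\<^medskip>
For example (a hypothetical identifier, merely for illustration): selecting
the two characters \<^verbatim>\<open>12\<close> within the text \<^verbatim>\<open>x12\<close> and pressing \<^verbatim>\<open>C+e DOWN\<close>
produces a separate subscript control symbol for each selected symbol,
i.e.\ \<^verbatim>\<open>x\<^sub>1\<^sub>2\<close>, which is rendered as \<open>x\<^sub>1\<^sub>2\<close> in the editor.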
\<close>


section \<open>Scala console \label{sec:scala-console}\<close>

text \<open>
The \<^emph>\<open>Console\<close> plugin manages various shells (command interpreters), e.g.\
\<^emph>\<open>BeanShell\<close>, which is the official jEdit scripting language, and the
cross-platform \<^emph>\<open>System\<close> shell. Thus the console provides functionality
similar to the Emacs buffers \<^verbatim>\<open>*scratch*\<close> and \<^verbatim>\<open>*shell*\<close>.

Isabelle/jEdit extends the repertoire of the console by \<^emph>\<open>Scala\<close>, which is
the regular Scala toplevel loop running inside the same JVM process as
Isabelle/jEdit itself. This means the Scala command interpreter has access
to the JVM name space and state of the running Prover IDE application. The
default environment imports the full content of packages \<^verbatim>\<open>isabelle\<close> and
\<^verbatim>\<open>isabelle.jedit\<close>.

For example, \<^verbatim>\<open>PIDE\<close> refers to the Isabelle/jEdit plugin object, and \<^verbatim>\<open>view\<close>
to the current editor view of jEdit. The Scala expression
\<^verbatim>\<open>PIDE.snapshot(view)\<close> makes a PIDE document snapshot of the current buffer
within the current editor view.

This helps to explore Isabelle/Scala functionality interactively. Some care
is required to avoid interference with the internals of the running
application.
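
\<^medskip>
A minimal sketch of such interactive exploration (the concrete API is
subject to change between Isabelle versions, so this is merely indicative):

@{verbatim [display] \<open>scala> val snapshot = PIDE.snapshot(view)
scala> snapshot.is_outdated
scala> Isabelle_System.getenv("ISABELLE_HOME")\<close>}

The first line binds a PIDE document snapshot of the current buffer, the
second checks whether pending edits have invalidated it, and the third reads
a variable from the Isabelle process environment.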
\<close>


section \<open>File-system access\<close>

text \<open>
File specifications in jEdit follow various formats and conventions
according to \<^emph>\<open>Virtual File Systems\<close>, which may be also provided by
additional plugins. This allows to access remote files via the \<^verbatim>\<open>http:\<close>
protocol prefix, for example. Isabelle/jEdit attempts to work with the
file-system model of jEdit as far as possible. In particular, theory sources
are passed directly from the editor to the prover, without indirection via
physical files.

Despite the flexibility of URLs in jEdit, local files are particularly
important and are accessible without protocol prefix. The file path notation
is that of the Java Virtual Machine on the underlying platform. On Windows
the preferred form uses backslashes, but happens to accept forward slashes
like Unix/POSIX as well. Further differences arise due to Windows drive
letters and network shares.

The Java notation for files needs to be distinguished from the one of
Isabelle, which uses POSIX notation with forward slashes on \<^emph>\<open>all\<close>
platforms. Isabelle/ML on Windows uses Unix-style path notation, too, with
drive letter representation as in Cygwin (e.g.\ \<^verbatim>\<open>/cygdrive/c\<close>). Moreover,
environment variables from the Isabelle process may be used freely, e.g.\
\<^file>\<open>$ISABELLE_HOME/etc/symbols\<close> or \<^file>\<open>$POLYML_HOME/README\<close>. There are special
shortcuts: \<^dir>\<open>~\<close> for \<^dir>\<open>$USER_HOME\<close> and \<^dir>\<open>~~\<close> for \<^dir>\<open>$ISABELLE_HOME\<close>.
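
For example, the same local file on Windows may be denoted in Java notation
or in Isabelle/POSIX notation as follows (a hypothetical user directory,
merely for illustration):

@{verbatim [display] \<open>C:\Users\alice\Test.thy
/cygdrive/c/Users/alice/Test.thy\<close>}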

\<^medskip>
Since jEdit happens to support environment variables within file
specifications as well, it is natural to use similar notation within the
editor, e.g.\ in the file-browser. This does not work in full generality,
though, due to the bias of jEdit towards platform-specific notation and of
Isabelle towards POSIX. Moreover, the Isabelle settings environment is not
yet active when starting Isabelle/jEdit via its standard application
wrapper, in contrast to @{tool jedit} run from the command line
(\secref{sec:command-line}).

Isabelle/jEdit imitates important system settings within the Java process
environment, in order to allow easy access to these important places from
the editor: \<^verbatim>\<open>$ISABELLE_HOME\<close>, \<^verbatim>\<open>$ISABELLE_HOME_USER\<close>, \<^verbatim>\<open>$JEDIT_HOME\<close>,
\<^verbatim>\<open>$JEDIT_SETTINGS\<close>. The file browser of jEdit also includes \<^emph>\<open>Favorites\<close> for
these locations.

\<^medskip>
Path specifications in prover input or output usually include formal markup
that turns them into hyperlinks (see also \secref{sec:tooltips-hyperlinks}).
This allows to open the corresponding file in the text editor, independently
of the path notation. If the path refers to a directory, the jEdit file
browser is opened on it.

Formally checked paths in prover input are subject to completion
(\secref{sec:completion}): partial specifications are resolved via directory
content and possible completions are offered in a popup.
\<close>


section \<open>Indentation\<close>

text \<open>
Isabelle/jEdit augments the existing indentation facilities of jEdit to take
the structure of theory and proof texts into account. There is also special
support for unstructured proof scripts.

\<^descr>[Syntactic indentation] follows the outer syntax of Isabelle/Isar.

Action @{action "indent-lines"} (shortcut \<^verbatim>\<open>C+i\<close>) indents the current line
according to command keywords and some command substructure: this
approximation may need further manual tuning.

Action @{action "isabelle.newline"} (shortcut \<^verbatim>\<open>ENTER\<close>) indents the old
and the new line according to command keywords only: this leads to precise
alignment of the main Isar language elements. This depends on option
@{system_option_def "jedit_indent_newline"} (enabled by default).

Regular input (via keyboard or completion) indents the current line
whenever a new keyword is emerging at the start of the line. This depends
on option @{system_option_def "jedit_indent_input"} (enabled by default).

\<^descr>[Semantic indentation] adds additional white space to unstructured proof
scripts (\<^theory_text>\<open>apply\<close> etc.) via number of subgoals, as illustrated by the
example at the end of this section. This requires information about ongoing
document processing and may thus lag behind, when the user is editing too
quickly; see also option @{system_option_def "jedit_script_indent"} and
@{system_option_def "jedit_script_indent_limit"}.

The above options are accessible in the menu \<^emph>\<open>Plugins / Plugin Options /
Isabelle / General\<close>. A prerequisite for advanced indentation is \<^emph>\<open>Utilities
/ Buffer Options / Automatic indentation\<close>: it needs to be set to \<^verbatim>\<open>full\<close>
(default).
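
\<^medskip>
For illustration, here is a small unstructured proof script as semantic
indentation lays it out (a sketch, assuming the default options above):
after \<^theory_text>\<open>apply (rule conjI)\<close> two subgoals remain, so the following line
receives one extra space of indentation.

@{verbatim [display] \<open>lemma "A \<and> B \<longrightarrow> B \<and> A"
  apply (rule impI)
  apply (erule conjE)
  apply (rule conjI)
   apply assumption
  apply assumption
  done\<close>}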
\<close>


section \<open>SideKick parsers \label{sec:sidekick}\<close>

text \<open>
The \<^emph>\<open>SideKick\<close> plugin provides some general services to display buffer
structure in a tree view. Isabelle/jEdit provides SideKick parsers for its
main mode for theory files, ML files, as well as some minor modes for the
\<^verbatim>\<open>NEWS\<close> file (see \figref{fig:sidekick}), session \<^verbatim>\<open>ROOT\<close> files, system
\<^verbatim>\<open>options\<close>, and Bib{\TeX} files (\secref{sec:bibtex}).

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.333]{sidekick}
\end{center}
\caption{The Isabelle NEWS file with SideKick tree view}
\label{fig:sidekick}
\end{figure}

The default SideKick parser for theory files is \<^verbatim>\<open>isabelle\<close>: it provides a
tree-view on the formal document structure, with section headings at the top
and formal specification elements at the bottom. The alternative parser
\<^verbatim>\<open>isabelle-context\<close> shows nesting of context blocks according to \<^theory_text>\<open>begin \<dots>
end\<close> structure.

\<^medskip>
Isabelle/ML files are structured according to semi-formal comments that are
explained in @{cite "isabelle-implementation"}. This outline is turned into
a tree-view by default, by using the \<^verbatim>\<open>isabelle-ml\<close> parser. There is also a
folding mode of the same name, for hierarchic text folds within ML files.

\<^medskip>
The special SideKick parser \<^verbatim>\<open>isabelle-markup\<close> exposes the uninterpreted
markup tree of the PIDE document model of the current buffer. This is
occasionally useful for informative purposes, but the amount of displayed
information might cause problems for large buffers.
\<close>


chapter \<open>Prover IDE functionality\<close>

section \<open>Document model \label{sec:document-model}\<close>

text \<open>
The document model is central to the PIDE architecture: the editor and the
prover have a common notion of structured source text with markup, which is
produced by formal processing. The editor is responsible for edits of
document source, as produced by the user. The prover is responsible for
reports of document markup, as produced by its processing in the background.

Isabelle/jEdit handles classic editor events of jEdit, in order to connect
the physical world of the GUI (with its singleton state) to the mathematical
world of multiple document versions (with timeless and stateless updates).
\<close>


subsection \<open>Editor buffers and document nodes \label{sec:buffer-node}\<close>

text \<open>
As a regular text editor, jEdit maintains a collection of \<^emph>\<open>buffers\<close> to
store text files; each buffer may be associated with any number of visible
\<^emph>\<open>text areas\<close>. Buffers are subject to an \<^emph>\<open>edit mode\<close> that is determined
from the file name extension. The following modes are treated specifically
in Isabelle/jEdit:

\<^medskip>
\begin{tabular}{lll}
\<^bold>\<open>mode\<close> & \<^bold>\<open>file name\<close> & \<^bold>\<open>content\<close> \\\hline
\<^verbatim>\<open>isabelle\<close> & \<^verbatim>\<open>*.thy\<close> & theory source \\
\<^verbatim>\<open>isabelle-ml\<close> & \<^verbatim>\<open>*.ML\<close> & Isabelle/ML source \\
\<^verbatim>\<open>sml\<close> & \<^verbatim>\<open>*.sml\<close> or \<^verbatim>\<open>*.sig\<close> & Standard ML source \\
\<^verbatim>\<open>isabelle-root\<close> & \<^verbatim>\<open>ROOT\<close> & session root \\
\<^verbatim>\<open>isabelle-options\<close> & & Isabelle options \\
\<^verbatim>\<open>isabelle-news\<close> & & Isabelle NEWS \\
\end{tabular}
\<^medskip>

All jEdit buffers are automatically added to the PIDE document-model as
\<^emph>\<open>document nodes\<close>. The overall document structure is defined by the theory
nodes in two dimensions:

\<^enum> via \<^bold>\<open>theory imports\<close> that are specified in the \<^emph>\<open>theory header\<close> using
concrete syntax of the @{command_ref theory} command @{cite
"isabelle-isar-ref"};

\<^enum> via \<^bold>\<open>auxiliary files\<close> that are included into a theory by \<^emph>\<open>load
commands\<close>, notably @{command_ref ML_file} and @{command_ref SML_file}
@{cite "isabelle-isar-ref"}.

In any case, source files are managed by the PIDE infrastructure: the
physical file-system only plays a subordinate role. The relevant version of
source text is passed directly from the editor to the prover, using internal
communication channels.
\<close>


subsection \<open>Theories \label{sec:theories}\<close>

text \<open>
The \<^emph>\<open>Theories\<close> panel (see also \figref{fig:theories}) provides an overview
of the status of continuous checking of theory nodes within the document
model.

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.333]{theories}
\end{center}
\caption{Theories panel with an overview of the document-model, and jEdit
text areas as editable views on some of the document nodes}
\label{fig:theories}
\end{figure}

Theory imports are resolved automatically by the PIDE document model: all
required files are loaded and stored internally, without the need to open
corresponding jEdit buffers. Opening or closing editor buffers later on has
no direct impact on the formal document content: it only affects visibility.

In contrast, auxiliary files (e.g.\ from @{command ML_file} commands) are
\<^emph>\<open>not\<close> resolved within the editor by default, but the prover process takes
care of that.
This may be changed by enabling the system option
@{system_option jedit_auto_resolve}: it ensures that all files are uniformly
provided by the editor.

\<^medskip>
The visible \<^emph>\<open>perspective\<close> of Isabelle/jEdit is defined by the collective
view on theory buffers via open text areas. The perspective is taken as a
hint for document processing: the prover ensures that those parts of a
theory where the user is looking are checked, while other parts that are
presently not required are ignored. The perspective is changed by opening or
closing text area windows, or scrolling within a window.

The \<^emph>\<open>Theories\<close> panel provides some further options to influence the process
of continuous checking: it may be switched off globally to restrict the
prover to superficial processing of command syntax. It is also possible to
indicate theory nodes as \<^emph>\<open>required\<close> for continuous checking: this means
such nodes and all their imports are always processed independently of the
visibility status (if continuous checking is enabled). Big theory libraries
that are marked as required can have significant impact on performance!

The \<^emph>\<open>Purge\<close> button restricts the document model to theories that are
required for open editor buffers: inaccessible theories are removed and will
be rechecked when opened or imported later.

\<^medskip>
Formal markup of checked theory content is turned into GUI rendering, based
on a standard repertoire known from mainstream IDEs for programming
languages: colors, icons, highlighting, squiggly underlines, tooltips,
hyperlinks etc. For outer syntax of Isabelle/Isar there is some traditional
syntax-highlighting via static keywords and tokenization within the editor;
this buffer syntax is determined from theory imports. In contrast, the
painting of inner syntax (term language etc.)\ uses semantic information
that is reported dynamically from the logical context. Thus the prover can
provide additional markup to help the user to understand the meaning of
formal text, and to produce more text with some add-on tools (e.g.\
information messages with \<^emph>\<open>sendback\<close> markup by automated provers or
disprovers in the background). \<close>


subsection \<open>Auxiliary files \label{sec:aux-files}\<close>

text \<open>
Special load commands like @{command_ref ML_file} and @{command_ref
SML_file} @{cite "isabelle-isar-ref"} refer to auxiliary files within some
theory. Conceptually, the file argument of the command extends the theory
source by the content of the file, but its editor buffer may be loaded~/
changed~/ saved separately. The PIDE document model propagates changes of
auxiliary file content to the corresponding load command in the theory, to
update and process it accordingly: changes of auxiliary file content are
treated as changes of the corresponding load command.

\<^medskip>
As a concession to the massive amount of ML files in Isabelle/HOL itself,
the content of auxiliary files is only added to the PIDE document-model on
demand, the first time when opened explicitly in the editor. There are
further tricks to manage markup of ML files, such that Isabelle/HOL may be
edited conveniently in the Prover IDE on small machines with only 8\,GB of
main memory.
Using \<^verbatim>\<open>Pure\<close> as logic session image, the exploration may start
at the top \<^file>\<open>$ISABELLE_HOME/src/HOL/Main.thy\<close> or the bottom
\<^file>\<open>$ISABELLE_HOME/src/HOL/HOL.thy\<close>, for example. It is also possible to
explore the Isabelle/Pure bootstrap process (a virtual copy) by opening
\<^file>\<open>$ISABELLE_HOME/src/Pure/ROOT.ML\<close> like a theory in the Prover IDE.

Initially, before an auxiliary file is opened in the editor, the prover
reads its content from the physical file-system. After the file is opened
for the first time in the editor, e.g.\ by following the hyperlink
(\secref{sec:tooltips-hyperlinks}) for the argument of its @{command
ML_file} command, the content is taken from the jEdit buffer.

The change of responsibility from prover to editor counts as an update of
the document content, so subsequent theory sources need to be re-checked.
When the buffer is closed, the responsibility remains with the editor: the
file may be opened again without causing another document update.

A file that is opened in the editor, while the theory with its load command
is not, is presently inactive in the document model. A file that is loaded
via multiple load commands is associated with an arbitrary one: this
situation is morally unsupported and might lead to confusion.

\<^medskip>
Output that refers to an auxiliary file is combined with that of the
corresponding load command, and shown whenever the file or the command are
active (see also \secref{sec:output}).

Warnings, errors, and other useful markup are attached directly to the
positions in the auxiliary file buffer, in the manner of standard IDEs. By
using the load command @{command SML_file} as explained in
\<^file>\<open>$ISABELLE_HOME/src/Tools/SML/Examples.thy\<close>, Isabelle/jEdit may be used as
fully-featured IDE for Standard ML, independently of theory or proof
development: the required theory merely serves as some kind of project file
for a collection of SML source modules.
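
For example, a minimal project theory may look like this (hypothetical file
names, merely to illustrate the scheme):

@{verbatim [display] \<open>theory Project
  imports Main
begin

SML_file "util.sml"
SML_file "main.sml"

end\<close>}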
\<close>


section \<open>Output \label{sec:output}\<close>

text \<open>
Prover output consists of \<^emph>\<open>markup\<close> and \<^emph>\<open>messages\<close>. Both are directly
attached to the corresponding positions in the original source text, and
visualized in the text area, e.g.\ as text colours for free and bound
variables, or as squiggly underlines for warnings, errors etc.\ (see also
\figref{fig:output}). In the latter case, the corresponding messages are
shown by hovering with the mouse over the highlighted text --- although in
many situations the user should already get some clue by looking at the
position of the text highlighting, without seeing the message body itself.

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.333]{output}
\end{center}
\caption{Multiple views on prover output: gutter with icon, text area with
popup, text overview column, \<^emph>\<open>Theories\<close> panel, \<^emph>\<open>Output\<close> panel}
\label{fig:output}
\end{figure}

The ``gutter'' on the left-hand side of the text area uses icons to
provide a summary of the messages within the adjacent text line. Message
priorities are used to prefer errors over warnings, warnings over
information messages; other output is ignored.

The ``text overview column'' on the right-hand side of the text area uses
similar information to paint small rectangles for the overall status of the
whole text buffer. The graphics is scaled to fit the logical buffer length
into the given window height. Mouse clicks on the overview area move the
cursor approximately to the corresponding text line in the buffer.

The \<^emph>\<open>Theories\<close> panel provides another coarse-grained overview, but without
direct correspondence to text positions. The coloured rectangles represent
the amount of messages of a certain kind (warnings, errors, etc.) and the
execution status of commands. The border of each rectangle indicates the
overall status of processing: a thick border means it is \<^emph>\<open>finished\<close> or
\<^emph>\<open>failed\<close> (with color for errors). A double-click on one of the theory
entries with their status overview opens the corresponding text buffer,
without moving the cursor to a specific point.

\<^medskip>
The \<^emph>\<open>Output\<close> panel displays prover messages that correspond to a given
command, within a separate window. The cursor position in the presently
active text area determines the prover command whose cumulative message
output is appended and shown in that window (in canonical order according to
the internal execution of the command). There are also control elements to
modify the update policy of the output wrt.\ continued editor movements:
\<^emph>\<open>Auto update\<close> and \<^emph>\<open>Update\<close>. This is particularly useful for multiple
instances of the \<^emph>\<open>Output\<close> panel to look at different situations.
Alternatively, the panel can be turned into a passive \<^emph>\<open>Info\<close> window via the
\<^emph>\<open>Detach\<close> menu item.

Proof state is handled separately (\secref{sec:state-output}), but it is
also possible to tick the corresponding checkbox to append it to regular
output (\figref{fig:output-including-state}). This is a globally persistent
option: it affects all open panels and future editor sessions.

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.333]{output-including-state}
\end{center}
\caption{Proof state display within the regular output panel}
\label{fig:output-including-state}
\end{figure}

\<^medskip>
Following the IDE principle, regular messages are attached to the original
source in the proper place and may be inspected on demand via popups. This
excludes messages that are somehow internal to the machinery of proof
checking, notably \<^emph>\<open>proof state\<close> and \<^emph>\<open>tracing\<close>.

In any case, the same display technology is used for small popups and big
output windows. The formal text contains markup that may be explored
recursively via further popups and hyperlinks (see
\secref{sec:tooltips-hyperlinks}), or clicked directly to initiate certain
actions (see \secref{sec:auto-tools} and \secref{sec:sledgehammer}).
\<close>


section \<open>Proof state \label{sec:state-output}\<close>

text \<open>
The main purpose of the Prover IDE is to help the user to edit proof
documents, with ongoing formal checking by the prover in the background.
This can be done to some extent in the main text area alone, especially for
well-structured Isar proofs.

Nonetheless, internal proof state needs to be inspected in many situations
of ``exploration and debugging''. The \<^emph>\<open>State\<close> panel shows exclusively such
proof state messages without further distraction, while all other messages
are displayed in \<^emph>\<open>Output\<close> (\secref{sec:output}).
\Figref{fig:output-and-state} shows a typical GUI layout where both panels
are open.

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.333]{output-and-state}
\end{center}
\caption{Separate proof state display (right) and other output (bottom).}
\label{fig:output-and-state}
\end{figure}

Another typical arrangement has more than one \<^emph>\<open>State\<close> panel open (as
floating windows), with \<^emph>\<open>Auto update\<close> disabled to look at an old situation
while the proof text in the vicinity is changed. The \<^emph>\<open>Update\<close> button
triggers an explicit one-shot update; this operation is also available via
the action @{action "isabelle.update-state"} (keyboard shortcut \<^verbatim>\<open>S+ENTER\<close>).

On small screens, it is occasionally useful to have all messages
concatenated in the regular \<^emph>\<open>Output\<close> panel, e.g.\ see
\figref{fig:output-including-state}.

\<^medskip>
The mechanics of \<^emph>\<open>Output\<close> versus \<^emph>\<open>State\<close> are slightly different:

\<^item> \<^emph>\<open>Output\<close> shows information that is continuously produced and already
present when the GUI wants to show it. This is implicitly controlled by
the visible perspective on the text.

\<^item> \<^emph>\<open>State\<close> initiates a real-time query on demand, with a full round trip
including a fresh print operation on the prover side. This is controlled
explicitly when the cursor is moved to the next command (\<^emph>\<open>Auto update\<close>)
or the \<^emph>\<open>Update\<close> operation is triggered.

This can make a difference in GUI responsiveness and resource usage within
the prover process. Applications with very big proof states that are only
inspected in isolation work better with the \<^emph>\<open>State\<close> panel.
\<close>


section \<open>Query \label{sec:query}\<close>

text \<open>
The \<^emph>\<open>Query\<close> panel provides various GUI forms to request extra information
from the prover, as a replacement of old-style diagnostic commands like
@{command find_theorems}. There are input fields and buttons for a
particular query command, with output in a dedicated text area.

The main query modes are presented as separate tabs: \<^emph>\<open>Find Theorems\<close>,
\<^emph>\<open>Find Constants\<close>, \<^emph>\<open>Print Context\<close>, e.g.\ see \figref{fig:query}. As usual
in jEdit, multiple \<^emph>\<open>Query\<close> windows may be active at the same time: any
number of floating instances, but at most one docked instance (which is used
by default).

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.333]{query}
\end{center}
\caption{An instance of the Query panel: find theorems}
\label{fig:query}
\end{figure}

\<^medskip>
The following GUI elements are common to all query modes:

\<^item> The spinning wheel provides feedback about the status of a pending query
wrt.\ the evaluation of its context and its own operation.

\<^item> The \<^emph>\<open>Apply\<close> button attaches a fresh query invocation to the current
context of the command where the cursor is pointing in the text.

\<^item> The \<^emph>\<open>Search\<close> field allows to highlight query output according to some
regular expression, in the notation that is commonly used on the Java
platform.\<^footnote>\<open>\<^url>\<open>https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html\<close>\<close>
This may serve as an additional visual filter of the result.

\<^item> The \<^emph>\<open>Zoom\<close> box controls the font size of the output area.

All query operations are asynchronous: there is no need to wait for the
evaluation of the document for the query context, nor for the query
operation itself. Query output may be detached as independent \<^emph>\<open>Info\<close>
window, using a menu operation of the dockable window manager. The printed
result usually provides sufficient clues about the original query, with some
hyperlink to its context (via markup of its head line).
\<close>


subsection \<open>Find theorems\<close>

text \<open>
The \<^emph>\<open>Query\<close> panel in \<^emph>\<open>Find Theorems\<close> mode retrieves facts from the theory
or proof context matching all of the given criteria in the \<^emph>\<open>Find\<close> text
field. A single criterion has the following syntax:

@{rail \<open>
('-'?) ('name' ':' @{syntax name} | 'intro' | 'elim' | 'dest' |
'solves' | 'simp' ':' @{syntax term} | @{syntax term})
\<close>}

See also the Isar command @{command_ref find_theorems} in @{cite
"isabelle-isar-ref"}.
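
For example, the following combination of criteria retrieves facts whose
name contains ``comm'' and that contain a term matching the given pattern
(an illustrative query, not an exhaustive one):

@{verbatim [display] \<open>name: comm "_ + _ = _ + _"\<close>}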
1189 \<close> 1190 1191 1192 section \<open>Tooltips and hyperlinks \label{sec:tooltips-hyperlinks}\<close> 1193 1194 text \<open> 1195 Formally processed text (prover input or output) contains rich markup that 1196 can be explored by using the \<^verbatim>\<open>CONTROL\<close> modifier key on Linux and Windows, 1197 or \<^verbatim>\<open>COMMAND\<close> on Mac OS X. Hovering with the mouse while the modifier is 1198 pressed reveals a \<^emph>\<open>tooltip\<close> (grey box over the text with a yellow popup) 1199 and/or a \<^emph>\<open>hyperlink\<close> (black rectangle over the text with change of mouse 1200 pointer); see also \figref{fig:tooltip}. 1201 1202 \begin{figure}[!htb] 1203 \begin{center} 1204 \includegraphics[scale=0.5]{popup1} 1205 \end{center} 1206 \caption{Tooltip and hyperlink for some formal entity} 1207 \label{fig:tooltip} 1208 \end{figure} 1209 1210 Tooltip popups use the same rendering technology as the main text area, and 1211 further tooltips and/or hyperlinks may be exposed recursively by the same 1212 mechanism; see \figref{fig:nested-tooltips}. 1213 1214 \begin{figure}[!htb] 1215 \begin{center} 1216 \includegraphics[scale=0.5]{popup2} 1217 \end{center} 1218 \caption{Nested tooltips over formal entities} 1219 \label{fig:nested-tooltips} 1220 \end{figure} 1221 1222 The tooltip popup window provides some controls to \<^emph>\<open>close\<close> or \<^emph>\<open>detach\<close> the 1223 window, turning it into a separate \<^emph>\<open>Info\<close> window managed by jEdit. The 1224 \<^verbatim>\<open>ESCAPE\<close> key closes \<^emph>\<open>all\<close> popups, which is particularly relevant when 1225 nested tooltips are stacking up. 1226 1227 \<^medskip> 1228 A black rectangle in the text indicates a hyperlink that may be followed by 1229 a mouse click (while the \<^verbatim>\<open>CONTROL\<close> or \<^verbatim>\<open>COMMAND\<close> modifier key is still 1230 pressed). Such jumps to other text locations are recorded by the 1231 \<^emph>\<open>Navigator\<close> plugin, which is bundled with Isabelle/jEdit and enabled by 1232 default. There are usually navigation arrows in the main jEdit toolbar. 1233 1234 Note that the link target may be a file that is itself not subject to formal 1235 document processing of the editor session and thus prevents further 1236 exploration: the chain of hyperlinks may end in some source file of the 1237 underlying logic image, or within the ML bootstrap sources of Isabelle/Pure. 1238 \<close> 1239 1240 1241 section \<open>Formal scopes and semantic selection\<close> 1242 1243 text \<open> 1244 Formal entities are semantically annotated in the source text as explained 1245 in \secref{sec:tooltips-hyperlinks}. A \<^emph>\<open>formal scope\<close> consists of the 1246 defining position with all its referencing positions. This correspondence is 1247 highlighted in the text according to the cursor position, see also 1248 \figref{fig:scope1}. Here the referencing positions are rendered with an 1249 additional border, in reminiscence to a hyperlink: clicking there moves the 1250 cursor to the original defining position. 

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.5]{scope1}
\end{center}
\caption{Scope of formal entity: defining vs.\ referencing positions}
\label{fig:scope1}
\end{figure}

The action @{action_def "isabelle.select-entity"} (shortcut \<^verbatim>\<open>CS+ENTER\<close>)
supports semantic selection of all occurrences of the formal entity at the
caret position. This facilitates systematic renaming, using regular jEdit
editing of a multi-selection, see also \figref{fig:scope2}.

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.5]{scope2}
\end{center}
\caption{The result of semantic selection and systematic renaming}
\label{fig:scope2}
\end{figure}
\<close>


section \<open>Completion \label{sec:completion}\<close>

text \<open>
Smart completion of partial input is the IDE functionality \<^emph>\<open>par
excellence\<close>. Isabelle/jEdit combines several sources of information to
achieve that. Despite its complexity, it should be possible to get some idea
how completion works by experimentation, based on the overview of completion
varieties in \secref{sec:completion-varieties}. The remaining subsections
explain concepts around completion more systematically.

\<^medskip>
\<^emph>\<open>Explicit completion\<close> is triggered by the action @{action_ref
"isabelle.complete"}, which is bound to the keyboard shortcut \<^verbatim>\<open>C+b\<close>, and
thus overrides the jEdit default for @{action_ref "complete-word"}.

\<^emph>\<open>Implicit completion\<close> hooks into the regular keyboard input stream of the
editor, with some event filtering and optional delays.

\<^medskip>
Completion options may be configured in \<^emph>\<open>Plugin Options~/ Isabelle~/
General~/ Completion\<close>. These are explained in further detail below, whenever
relevant. There is also a summary of options in
\secref{sec:completion-options}.

The asynchronous nature of PIDE interaction means that information from the
prover is delayed --- at least by a full round-trip of the document update
protocol. The default options already take this into account, with a
sufficiently long completion delay to speculate on the availability of all
relevant information from the editor and the prover, before completing text
immediately or producing a popup. Although there is an inherent danger of
non-deterministic behaviour due to such real-time parameters, the general
completion policy aims at determined results as far as possible.
\<close>


subsection \<open>Varieties of completion \label{sec:completion-varieties}\<close>

subsubsection \<open>Built-in templates\<close>

text \<open>
Isabelle is ultimately a framework of nested sub-languages of different
kinds and purposes. The completion mechanism supports this by the following
built-in templates:

\<^descr> \<^verbatim>\<open>`\<close> (single ASCII back-quote) or \<^verbatim>\<open>"\<close> (double ASCII quote) support
\<^emph>\<open>quotations\<close> via text cartouches. There are three selections, which are
always presented in the same order and do not depend on any context
information.
The default choice produces a template ``\<open>\<open>\<box>\<close>\<close>'', where the
box indicates the cursor position after insertion; the other choices help
to repair the block structure of unbalanced text cartouches.

\<^descr> \<^verbatim>\<open>@{\<close> is completed to the template ``\<open>@{\<box>}\<close>'', where the box indicates
the cursor position after insertion. Here it is convenient to use the
wildcard ``\<^verbatim>\<open>__\<close>'' or a more specific name prefix to let semantic
completion of name-space entries propose antiquotation names.

With some practice, input of quoted sub-languages and antiquotations of
embedded languages should work fluently. Note that national keyboard layouts
might cause problems with back-quote as dead key, but double quote can be
used instead.
\<close>


subsubsection \<open>Syntax keywords\<close>

text \<open>
Syntax completion tables are determined statically from the keywords of the
``outer syntax'' of the underlying edit mode: for theory files this is the
syntax of Isar commands according to the cumulative theory imports.

Keywords are usually plain words, which means the completion mechanism only
inserts them directly into the text for explicit completion
(\secref{sec:completion-input}), but produces a popup
(\secref{sec:completion-popup}) otherwise.

At the point where outer syntax keywords are defined, it is possible to
specify an alternative replacement string to be inserted instead of the
keyword itself. An empty string means to suppress the keyword altogether,
which is occasionally useful to avoid confusion, e.g.\ the rare keyword
@{command simproc_setup} vs.\ the frequent name-space entry \<open>simp\<close>.
\<close>


subsubsection \<open>Isabelle symbols\<close>

text \<open>
The completion tables for Isabelle symbols (\secref{sec:symbols}) are
determined statically from \<^file>\<open>$ISABELLE_HOME/etc/symbols\<close> and @{path
"$ISABELLE_HOME_USER/etc/symbols"} for each symbol specification as follows:

\<^medskip>
\begin{tabular}{ll}
\<^bold>\<open>completion entry\<close> & \<^bold>\<open>example\<close> \\\hline
literal symbol & \<^verbatim>\<open>\<forall>\<close> \\
symbol name with backslash & \<^verbatim>\<open>\\<close>\<^verbatim>\<open>forall\<close> \\
symbol abbreviation & \<^verbatim>\<open>ALL\<close> or \<^verbatim>\<open>!\<close> \\
\end{tabular}
\<^medskip>

When inserted into the text, the above examples all produce the same Unicode
rendering \<open>\<forall>\<close> of the underlying symbol \<^verbatim>\<open>\<forall>\<close>.

A symbol abbreviation that is a plain word, like \<^verbatim>\<open>ALL\<close>, is treated like a
syntax keyword. Non-word abbreviations like \<^verbatim>\<open>-->\<close> are inserted more
aggressively, except for single-character abbreviations like \<^verbatim>\<open>!\<close> above.

Completion via abbreviations like \<^verbatim>\<open>ALL\<close> or \<^verbatim>\<open>-->\<close> depends on the semantic
language context (\secref{sec:completion-context}). In contrast, backslash
sequences like \<^verbatim>\<open>\forall\<close> or \<^verbatim>\<open>\<forall>\<close> are always possible, but require
additional interaction to confirm (via popup).
This is important in ambiguous
situations, e.g.\ for Isabelle document source, which may contain formal
symbols or informal {\LaTeX} macros. Backslash sequences also help when
input is broken, and thus escapes its normal semantic context: e.g.\
antiquotations or string literals in ML, which do not allow arbitrary
backslash sequences.

Special symbols like \<^verbatim>\<open>\<comment>\<close> or control symbols like \<^verbatim>\<open>\<^cancel>\<close>,
\<^verbatim>\<open>\<^latex>\<close>, \<^verbatim>\<open>\<^binding>\<close> can have an argument: completing on a name
prefix offers a template with an empty cartouche. Thus completion of \<^verbatim>\<open>\co\<close>
or \<^verbatim>\<open>\ca\<close> allows to compose formal document comments quickly.\<^footnote>\<open>It is
customary to put a space between \<^verbatim>\<open>\<comment>\<close> and its argument, while
control symbols do \<^emph>\<open>not\<close> allow extra space here.\<close>
\<close>


subsubsection \<open>User-defined abbreviations\<close>

text \<open>
The theory header syntax supports abbreviations via the \<^theory_text>\<open>abbrevs\<close> keyword
@{cite "isabelle-isar-ref"}. This is a slight generalization of built-in
templates and abbreviations for Isabelle symbols, as explained above.
Examples may be found in the Isabelle sources, by searching for
``\<^verbatim>\<open>abbrevs\<close>'' in \<^verbatim>\<open>*.thy\<close> files.

The \<^emph>\<open>Symbols\<close> panel shows the abbreviations that are available in the
current theory buffer (according to its \<^theory_text>\<open>imports\<close>) in the \<^verbatim>\<open>Abbrevs\<close> tab.
\<close>


subsubsection \<open>Name-space entries\<close>

text \<open>
This is genuine semantic completion, using information from the prover, so
it requires some delay. A \<^emph>\<open>failed name-space lookup\<close> produces an error
message that is annotated with a list of alternative names that are legal.
The list of results is truncated according to the system option
@{system_option_ref completion_limit}. The completion mechanism takes this
into account when collecting information on the prover side.

Already recognized names are \<^emph>\<open>not\<close> completed further, but completion may be
extended by appending a suffix of underscores. This provokes a failed
lookup, and another completion attempt while ignoring the underscores. For
example, in a name space where \<^verbatim>\<open>foo\<close> and \<^verbatim>\<open>foobar\<close> are known, the input
\<^verbatim>\<open>foo\<close> remains unchanged, but \<^verbatim>\<open>foo_\<close> may be completed to \<^verbatim>\<open>foo\<close> or
\<^verbatim>\<open>foobar\<close>.

The special identifier ``\<^verbatim>\<open>__\<close>'' serves as a wild-card for arbitrary
completion: it exposes the name-space content to the completion mechanism
(truncated according to @{system_option completion_limit}). This is
occasionally useful to explore an unknown name-space, e.g.\ in some
template.
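
For example, entering \<^verbatim>\<open>@{thm __}\<close> in a theory or document context and
waiting for (or explicitly invoking) completion exposes entries of the fact
name space in a popup; a more specific prefix before the underscores narrows
the selection accordingly. (This merely spells out the wild-card mechanism
described above.)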
\<close>


subsubsection \<open>File-system paths\<close>

text \<open>
Depending on prover markup about file-system paths in the source text, e.g.\
for the argument of a load command (\secref{sec:aux-files}), the completion
mechanism explores the directory content and offers the result as completion
popup. Relative path specifications are understood wrt.\ the \<^emph>\<open>master
directory\<close> of the document node (\secref{sec:buffer-node}) of the enclosing
editor buffer; this requires a proper theory, not an auxiliary file.

A suffix of slashes may be used to continue the exploration of an already
recognized directory name.
\<close>


subsubsection \<open>Spell-checking\<close>

text \<open>
The spell-checker combines semantic markup from the prover (regions of plain
words) with static dictionaries (word lists) that are known to the editor.

Unknown words are underlined in the text, using @{system_option_ref
spell_checker_color} (blue by default). This is not an error, but a hint to
the user that some action may be taken. The jEdit context menu provides
various actions, as far as applicable:

\<^medskip>
\begin{tabular}{l}
@{action_ref "isabelle.complete-word"} \\
@{action_ref "isabelle.exclude-word"} \\
@{action_ref "isabelle.exclude-word-permanently"} \\
@{action_ref "isabelle.include-word"} \\
@{action_ref "isabelle.include-word-permanently"} \\
\end{tabular}
\<^medskip>

Instead of the specific @{action_ref "isabelle.complete-word"}, it is also
possible to use the generic @{action_ref "isabelle.complete"} with its
default keyboard shortcut \<^verbatim>\<open>C+b\<close>.

\<^medskip>
Dictionary lookup uses some educated guesses about lower-case, upper-case,
and capitalized words. This is oriented on common use in English, where this
aspect is not decisive for proper spelling (in contrast to German, for
example).
\<close>


subsection \<open>Semantic completion context \label{sec:completion-context}\<close>

text \<open>
Completion depends on a semantic context that is provided by the prover,
although with some delay, because at least a full PIDE protocol round-trip
is required. Until that information becomes available in the PIDE
document-model, the default context is given by the outer syntax of the
editor mode (see also \secref{sec:buffer-node}).

The semantic \<^emph>\<open>language context\<close> provides information about nested
sub-languages of Isabelle: keywords are only completed for outer syntax, and
antiquotations for languages that support them. Symbol abbreviations only
work for specific sub-languages: e.g.\ ``\<^verbatim>\<open>=>\<close>'' is \<^emph>\<open>not\<close> completed in
regular ML source, but is completed within ML strings, comments,
antiquotations. Backslash representations of symbols like ``\<^verbatim>\<open>\foobar\<close>'' or
``\<^verbatim>\<open>\<foobar>\<close>'' work in any context --- after additional confirmation.

The prover may produce \<^emph>\<open>no completion\<close> markup in exceptional situations, to
tell that some language keywords should be excluded from further completion
attempts.
For example, ``\<^verbatim>\<open>:\<close>'' within accepted Isar syntax loses its meaning as abbreviation for symbol ``\<open>\<in>\<close>''.
\<close>


subsection \<open>Input events \label{sec:completion-input}\<close>

text \<open>
  Completion is triggered by certain events produced by the user, with optional delay after keyboard input according to @{system_option jedit_completion_delay}.

  \<^descr>[Explicit completion] works via action @{action_ref "isabelle.complete"} with keyboard shortcut \<^verbatim>\<open>C+b\<close>. This overrides the shortcut for @{action_ref "complete-word"} in jEdit, but it is possible to restore the original jEdit keyboard mapping of @{action "complete-word"} via \<^emph>\<open>Global Options~/ Shortcuts\<close> and invent a different one for @{action "isabelle.complete"}.

  \<^descr>[Explicit spell-checker completion] works via @{action_ref "isabelle.complete-word"}, which is exposed in the jEdit context menu, if the mouse points to a word that the spell-checker can complete.

  \<^descr>[Implicit completion] works via regular keyboard input of the editor. It depends on further side-conditions:

    \<^enum> The system option @{system_option_ref jedit_completion} needs to be enabled (default).

    \<^enum> Completion of syntax keywords requires at least 3 relevant characters in the text.

    \<^enum> The system option @{system_option_ref jedit_completion_delay} determines an additional delay (0.5 by default), before opening a completion popup. The delay gives the prover a chance to provide semantic completion information, notably the context (\secref{sec:completion-context}).

    \<^enum> The system option @{system_option_ref jedit_completion_immediate} (enabled by default) controls whether replacement text should be inserted immediately without popup, regardless of @{system_option jedit_completion_delay}. This aggressive mode of completion is restricted to symbol abbreviations that are not plain words (\secref{sec:symbols}).

    \<^enum> Completion of symbol abbreviations with only one relevant character in the text always enforces an explicit popup, regardless of @{system_option_ref jedit_completion_immediate}.
\<close>


subsection \<open>Completion popup \label{sec:completion-popup}\<close>

text \<open>
  A \<^emph>\<open>completion popup\<close> is a minimally invasive GUI component over the text area that offers a selection of completion items to be inserted into the text, e.g.\ by mouse clicks. Items are sorted dynamically, according to the frequency of selection, with persistent history. The popup may interpret special keys \<^verbatim>\<open>ENTER\<close>, \<^verbatim>\<open>TAB\<close>, \<^verbatim>\<open>ESCAPE\<close>, \<^verbatim>\<open>UP\<close>, \<^verbatim>\<open>DOWN\<close>, \<^verbatim>\<open>PAGE_UP\<close>, \<^verbatim>\<open>PAGE_DOWN\<close>, but all other key events are passed to the underlying text area. This allows to ignore unwanted completions most of the time and continue typing quickly. Thus the popup serves as a mechanism of confirmation of proposed items, while the default is to continue without completion.
The meaning of special keys is as follows:

  \<^medskip>
  \begin{tabular}{ll}
  \<^bold>\<open>key\<close> & \<^bold>\<open>action\<close> \\\hline
  \<^verbatim>\<open>ENTER\<close> & select completion (if @{system_option jedit_completion_select_enter}) \\
  \<^verbatim>\<open>TAB\<close> & select completion (if @{system_option jedit_completion_select_tab}) \\
  \<^verbatim>\<open>ESCAPE\<close> & dismiss popup \\
  \<^verbatim>\<open>UP\<close> & move up one item \\
  \<^verbatim>\<open>DOWN\<close> & move down one item \\
  \<^verbatim>\<open>PAGE_UP\<close> & move up one page of items \\
  \<^verbatim>\<open>PAGE_DOWN\<close> & move down one page of items \\
  \end{tabular}
  \<^medskip>

  Movement within the popup is only active for multiple items. Otherwise the corresponding key event retains its standard meaning within the underlying text area.
\<close>


subsection \<open>Insertion \label{sec:completion-insert}\<close>

text \<open>
  Completion may first propose replacements to be selected (via a popup), or replace text immediately in certain situations and depending on certain options like @{system_option jedit_completion_immediate}. In any case, insertion works uniformly, by imitating normal jEdit text insertion, depending on the state of the \<^emph>\<open>text selection\<close>. Isabelle/jEdit tries to accommodate the most common forms of advanced selections in jEdit, but not all combinations make sense. At least the following important cases are well-defined:

  \<^descr>[No selection.] The original is removed and the replacement inserted, depending on the caret position.

  \<^descr>[Rectangular selection of zero width.] This special case is treated by jEdit as ``tall caret'' and insertion of completion imitates its normal behaviour: separate copies of the replacement are inserted for each line of the selection.

  \<^descr>[Other rectangular selection or multiple selections.] Here the original is removed and the replacement is inserted for each line (or segment) of the selection.

  Support for multiple selections is particularly useful for \<^emph>\<open>HyperSearch\<close>: clicking on one of the items in the \<^emph>\<open>HyperSearch Results\<close> window makes jEdit select all its occurrences in the corresponding line of text. Then explicit completion can be invoked via \<^verbatim>\<open>C+b\<close>, e.g.\ to replace occurrences of \<^verbatim>\<open>-->\<close> by \<open>\<longrightarrow>\<close>.

  \<^medskip>
  Insertion works by removing and inserting pieces of text from the buffer. This counts as one atomic operation on the jEdit history. Thus unintended completions may be reverted by the regular @{action undo} action of jEdit. According to normal jEdit policies, the recovered text after @{action undo} is selected: \<^verbatim>\<open>ESCAPE\<close> is required to reset the selection and to continue typing more text.
\<close>


subsection \<open>Options \label{sec:completion-options}\<close>

text \<open>
  This is a summary of Isabelle/Scala system options that are relevant for completion. They may be configured in \<^emph>\<open>Plugin Options~/ Isabelle~/ General\<close> as usual.
\<^item> @{system_option_def completion_limit} specifies the maximum number of items for various semantic completion operations (name-space entries etc.)

  \<^item> @{system_option_def jedit_completion} guards implicit completion via regular jEdit key events (\secref{sec:completion-input}): it allows to disable implicit completion altogether.

  \<^item> @{system_option_def jedit_completion_select_enter} and @{system_option_def jedit_completion_select_tab} enable keys to select a completion item from the popup (\secref{sec:completion-popup}). Note that a regular mouse click on the list of items is always possible.

  \<^item> @{system_option_def jedit_completion_context} specifies whether the language context provided by the prover should be used at all. Disabling that option makes completion less ``semantic''. Note that incomplete or severely broken input may cause some disagreement of the prover and the user about the intended language context.

  \<^item> @{system_option_def jedit_completion_delay} and @{system_option_def jedit_completion_immediate} determine the handling of keyboard events for implicit completion (\secref{sec:completion-input}).

  A @{system_option jedit_completion_delay}~\<^verbatim>\<open>> 0\<close> postpones the processing of key events, until after the user has stopped typing for the given time span, but @{system_option jedit_completion_immediate}~\<^verbatim>\<open>= true\<close> means that abbreviations of Isabelle symbols are handled nonetheless.

  \<^item> @{system_option_def completion_path_ignore} specifies ``glob'' patterns to ignore in file-system path completion (separated by colons), e.g.\ backup files ending with tilde.

  \<^item> @{system_option_def spell_checker} is a global guard for all spell-checker operations: it allows to disable that mechanism altogether.

  \<^item> @{system_option_def spell_checker_dictionary} determines the current dictionary, taken from the colon-separated list in the settings variable @{setting_def JORTHO_DICTIONARIES}. There are jEdit actions to specify local updates to a dictionary, by including or excluding words. The result of permanent dictionary updates is stored in the directory @{path "$ISABELLE_HOME_USER/dictionaries"}, in a separate file for each dictionary.

  \<^item> @{system_option_def spell_checker_include} specifies a comma-separated list of markup elements that delimit words in the source that is subject to spell-checking, including various forms of comments.

  \<^item> @{system_option_def spell_checker_exclude} specifies a comma-separated list of markup elements that disable spell-checking (e.g.\ in nested antiquotations).
\<close>


section \<open>Automatically tried tools \label{sec:auto-tools}\<close>

text \<open>
  Continuous document processing works asynchronously in the background. Visible document source that has been evaluated may get augmented by additional results of \<^emph>\<open>asynchronous print functions\<close>. An example for that is proof state output, if that is enabled in the Output panel (\secref{sec:output}). More heavy-weight print functions may be applied as well, e.g.\ to prove or disprove parts of the formal text by other means.
Isabelle/HOL provides various automatically tried tools that operate on outermost goal statements (e.g.\ @{command lemma}, @{command theorem}), independently of the state of the current proof attempt. They work implicitly without any arguments. Results are output as \<^emph>\<open>information messages\<close>, which are indicated in the text area by blue squiggles and a blue information sign in the gutter (see \figref{fig:auto-tools}). The message content may be shown as for other output (see also \secref{sec:output}). Some tools produce output with \<^emph>\<open>sendback\<close> markup, which means that clicking on certain parts of the text inserts that into the source in the proper place.

  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{auto-tools}
  \end{center}
  \caption{Result of automatically tried tools}
  \label{fig:auto-tools}
  \end{figure}

  \<^medskip>
  The following Isabelle system options control the behavior of automatically tried tools (see also the jEdit dialog window \<^emph>\<open>Plugin Options~/ Isabelle~/ General~/ Automatically tried tools\<close>):

  \<^item> @{system_option_ref auto_methods} controls automatic use of a combination of standard proof methods (@{method auto}, @{method simp}, @{method blast}, etc.). This corresponds to the Isar command @{command_ref "try0"} @{cite "isabelle-isar-ref"}.

  The tool is disabled by default, since unparameterized invocation of standard proof methods often consumes substantial CPU resources without leading to success.

  \<^item> @{system_option_ref auto_nitpick} controls a slightly reduced version of @{command_ref nitpick}, which tests for counterexamples using first-order relational logic. See also the Nitpick manual @{cite "isabelle-nitpick"}.

  This tool is disabled by default, due to the extra overhead of invoking an external Java process for each attempt to disprove a subgoal.

  \<^item> @{system_option_ref auto_quickcheck} controls automatic use of @{command_ref quickcheck}, which tests for counterexamples using a series of assignments for free variables of a subgoal.

  This tool is \<^emph>\<open>enabled\<close> by default. It requires little overhead, but is a bit weaker than @{command nitpick}.

  \<^item> @{system_option_ref auto_sledgehammer} controls a significantly reduced version of @{command_ref sledgehammer}, which attempts to prove a subgoal using external automatic provers. See also the Sledgehammer manual @{cite "isabelle-sledgehammer"}.

  This tool is disabled by default, due to the relatively heavy nature of Sledgehammer.

  \<^item> @{system_option_ref auto_solve_direct} controls automatic use of @{command_ref solve_direct}, which checks whether the current subgoals can be solved directly by an existing theorem. This also helps to detect duplicate lemmas.

  This tool is \<^emph>\<open>enabled\<close> by default.


  Invocation of automatically tried tools is subject to some global policies of parallel execution, which may be configured as follows:

  \<^item> @{system_option_ref auto_time_limit} (default 2.0) determines the timeout (in seconds) for each tool execution.
\<^item> @{system_option_ref auto_time_start} (default 1.0) determines the start delay (in seconds) for automatically tried tools, after the main command evaluation is finished.


  Each tool is submitted independently to the pool of parallel execution tasks in Isabelle/ML, using hardwired priorities according to its relative ``heaviness''. The main stages of evaluation and printing of proof states take precedence, but an already running tool is not canceled and may thus reduce reactivity of proof document processing.

  Users should experiment how the available CPU resources (number of cores) are best invested to get additional feedback from the prover in the background, by using a selection of weaker or stronger tools.
\<close>


section \<open>Sledgehammer \label{sec:sledgehammer}\<close>

text \<open>
  The \<^emph>\<open>Sledgehammer\<close> panel (\figref{fig:sledgehammer}) provides a view on some independent execution of the Isar command @{command_ref sledgehammer}, with process indicator (spinning wheel) and GUI elements for important Sledgehammer arguments and options. Any number of Sledgehammer panels may be active, according to the standard policies of Dockable Window Management in jEdit. Closing such windows also cancels the corresponding prover tasks.

  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{sledgehammer}
  \end{center}
  \caption{An instance of the Sledgehammer panel}
  \label{fig:sledgehammer}
  \end{figure}

  The \<^emph>\<open>Apply\<close> button attaches a fresh invocation of @{command sledgehammer} to the command where the cursor is pointing in the text --- this should be some pending proof problem. Further buttons like \<^emph>\<open>Cancel\<close> and \<^emph>\<open>Locate\<close> help to manage the running process.

  Results appear incrementally in the output window of the panel. Proposed proof snippets are marked-up as \<^emph>\<open>sendback\<close>, which means a single mouse click inserts the text into a suitable place of the original source. Some manual editing may be required nonetheless, say to remove earlier proof attempts.
\<close>


chapter \<open>Isabelle document preparation\<close>

text \<open>
  The ultimate purpose of Isabelle is to produce nicely rendered documents with the Isabelle document preparation system, which is based on {\LaTeX}; see also @{cite "isabelle-system" and "isabelle-isar-ref"}. Isabelle/jEdit provides some additional support for document editing.
\<close>


section \<open>Document outline\<close>

text \<open>
  Theory sources may contain document markup commands, such as @{command_ref chapter}, @{command_ref section}, @{command subsection}. The Isabelle SideKick parser (\secref{sec:sidekick}) represents this document outline as structured tree view, with formal statements and proofs nested inside; see \figref{fig:sidekick-document}.
\begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{sidekick-document}
  \end{center}
  \caption{Isabelle document outline via SideKick tree view}
  \label{fig:sidekick-document}
  \end{figure}

  It is also possible to use text folding according to this structure, by adjusting \<^emph>\<open>Utilities / Buffer Options / Folding mode\<close> of jEdit. The default mode \<^verbatim>\<open>isabelle\<close> uses the structure of formal definitions, statements, and proofs. The alternative mode \<^verbatim>\<open>sidekick\<close> uses the document structure of the SideKick parser, as explained above.
\<close>


section \<open>Markdown structure\<close>

text \<open>
  Document text is internally structured in paragraphs and nested lists, using notation that is similar to Markdown\<^footnote>\<open>\<^url>\<open>http://commonmark.org\<close>\<close>. There are special control symbols for items of different kinds of lists, corresponding to \<^verbatim>\<open>itemize\<close>, \<^verbatim>\<open>enumerate\<close>, \<^verbatim>\<open>description\<close> in {\LaTeX}. This is illustrated for \<^verbatim>\<open>itemize\<close> in \figref{fig:markdown-document}.

  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{markdown-document}
  \end{center}
  \caption{Markdown structure within document text}
  \label{fig:markdown-document}
  \end{figure}

  Items take colour according to the depth of nested lists. This helps to explore the implicit rules for list structure interactively. There is also markup for individual paragraphs in the text: it may be explored via mouse hovering with \<^verbatim>\<open>CONTROL\<close> / \<^verbatim>\<open>COMMAND\<close> as usual (\secref{sec:tooltips-hyperlinks}).
\<close>


section \<open>Citations and Bib{\TeX} entries \label{sec:bibtex}\<close>

text \<open>
  Citations are managed by {\LaTeX} and Bib{\TeX} in \<^verbatim>\<open>.bib\<close> files. The Isabelle session build process and the @{tool latex} tool @{cite "isabelle-system"} are smart enough to assemble the result, based on the session directory layout.

  The document antiquotation \<open>@{cite}\<close> is described in @{cite "isabelle-isar-ref"}. Within the Prover IDE it provides semantic markup for tooltips, hyperlinks, and completion for Bib{\TeX} database entries. Isabelle/jEdit does \<^emph>\<open>not\<close> know about the actual Bib{\TeX} environment used in {\LaTeX} batch-mode, but it can take citations from those \<^verbatim>\<open>.bib\<close> files that happen to be open in the editor; see \figref{fig:cite-completion}.

  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{cite-completion}
  \end{center}
  \caption{Semantic completion of citations from open Bib{\TeX} files}
  \label{fig:cite-completion}
  \end{figure}

  Isabelle/jEdit also provides IDE support for editing \<^verbatim>\<open>.bib\<close> files themselves. There is syntax highlighting based on entry types (according to standard Bib{\TeX} styles), a context-menu to compose entries systematically, and a SideKick tree view of the overall content; see \figref{fig:bibtex-mode}.
Semantic checking with errors and warnings is performed by the original \<^verbatim>\<open>bibtex\<close> tool using style \<^verbatim>\<open>plain\<close>: different Bib{\TeX} styles may produce slightly different results.

  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{bibtex-mode}
  \end{center}
  \caption{Bib{\TeX} mode with context menu, SideKick tree view, and semantic output from the \<^verbatim>\<open>bibtex\<close> tool}
  \label{fig:bibtex-mode}
  \end{figure}

  Regular document preview (\secref{sec:document-preview}) of \<^verbatim>\<open>.bib\<close> files approximates the usual {\LaTeX} bibliography output in HTML (using style \<^verbatim>\<open>unsort\<close>).
\<close>


section \<open>Document preview \label{sec:document-preview}\<close>

text \<open>
  The action @{action_def isabelle.preview} opens an HTML preview of the current document node in the default web browser. The content is derived from the semantic markup produced by the prover, and thus depends on the status of formal processing.

  Action @{action_def isabelle.draft} is similar to @{action isabelle.preview}, but shows a plain-text document draft.
\<close>


chapter \<open>ML debugging within the Prover IDE\<close>

text \<open>
  Isabelle/ML is based on Poly/ML\<^footnote>\<open>\<^url>\<open>https://www.polyml.org\<close>\<close> and thus benefits from the source-level debugger of that implementation of Standard ML. The Prover IDE provides the \<^emph>\<open>Debugger\<close> dockable to connect to running ML threads, inspect the stack frame with local ML bindings, and evaluate ML expressions in a particular run-time context. A typical debugger session is shown in \figref{fig:ml-debugger}.

  ML debugging depends on the following pre-requisites.

  \<^enum> ML source needs to be compiled with debugging enabled. This may be controlled for particular chunks of ML sources using any of the subsequent facilities.

    \<^enum> The system option @{system_option_ref ML_debugger} as implicit state of the Isabelle process. It may be changed in the menu \<^emph>\<open>Plugins / Plugin Options / Isabelle / General\<close>. ML modules need to be reloaded and recompiled to pick up that option as intended.

    \<^enum> The configuration option @{attribute_ref ML_debugger}, with an attribute of the same name, to update a global or local context (e.g.\ with the @{command declare} command).

    \<^enum> Commands that modify @{attribute ML_debugger} state for individual files: @{command_ref ML_file_debug}, @{command_ref ML_file_no_debug}, @{command_ref SML_file_debug}, @{command_ref SML_file_no_debug}.

    The instrumentation of ML code for debugging causes minor run-time overhead. ML modules that implement critical system infrastructure may lead to deadlocks or other undefined behaviour, when put under debugger control!

  \<^enum> The \<^emph>\<open>Debugger\<close> panel needs to be active, otherwise the program ignores debugger instrumentation of the compiler and runs unmanaged. It is also possible to start debugging with the panel open, and later undock it, to let the program continue unhindered.
\<^enum> The ML program needs to be stopped at a suitable breakpoint, which may be activated individually or globally as follows.

    For ML sources that have been compiled with debugger support, the IDE visualizes possible breakpoints in the text. A breakpoint may be toggled by pointing accurately with the mouse, with a right-click to activate jEdit's context menu and its \<^emph>\<open>Toggle Breakpoint\<close> item. Alternatively, the \<^emph>\<open>Break\<close> checkbox in the \<^emph>\<open>Debugger\<close> panel may be enabled to stop ML threads always at the next possible breakpoint.

    Note that the state of individual breakpoints \<^emph>\<open>gets lost\<close> when the corresponding ML source is re-compiled! This may happen unintentionally, e.g.\ when following hyperlinks into ML modules that have not been loaded into the IDE before.

  \begin{figure}[!htb]
  \begin{center}
  \includegraphics[scale=0.333]{ml-debugger}
  \end{center}
  \caption{ML debugger session}
  \label{fig:ml-debugger}
  \end{figure}

  The debugger panel (\figref{fig:ml-debugger}) shows a list of all threads that are presently stopped. Each thread shows a stack of all function invocations that lead to the current breakpoint at the top.

  It is possible to jump between stack positions freely, by clicking on this list. The current situation is displayed in the big output window, as a local ML environment with names and printed values.

  ML expressions may be evaluated in the current context by entering snippets of source into the text fields labeled \<open>Context\<close> and \<open>ML\<close>, and pushing the \<open>Eval\<close> button. By default, the source is interpreted as Isabelle/ML with the usual support for antiquotations (like @{command ML}, @{command ML_file}). Alternatively, strict Standard ML may be enforced via the \<^emph>\<open>SML\<close> checkbox (like @{command SML_file}).

  The context for Isabelle/ML is optional; it may evaluate to a value of type @{ML_type theory}, @{ML_type Proof.context}, or @{ML_type Context.generic}. Thus the given ML expression (with its antiquotations) may be subject to the intended dynamic run-time context, instead of the static compile-time context.

  \<^medskip>
  The buttons labeled \<^emph>\<open>Continue\<close>, \<^emph>\<open>Step\<close>, \<^emph>\<open>Step over\<close>, \<^emph>\<open>Step out\<close> recommence execution of the program, with different policies concerning nested function invocations. The debugger always moves the cursor within the ML source to the next breakpoint position, and offers new stack frames as before.
\<close>


chapter \<open>Miscellaneous tools\<close>

section \<open>Timing\<close>

text \<open>
  Managed evaluation of commands within PIDE documents includes timing information, which consists of elapsed (wall-clock) time, CPU time, and GC (garbage collection) time. Note that in a multithreaded system it is difficult to measure execution time precisely: elapsed time is closer to the real requirements of runtime resources than CPU or GC time, which are both subject to influences from the parallel environment that are outside the scope of the current command transaction.
The \<^emph>\<open>Timing\<close> panel provides an overview of cumulative command timings for each document node. Commands with elapsed time below the given threshold are ignored in the grand total. Nodes are sorted according to their overall timing. For the document node that corresponds to the current buffer, individual command timings are shown as well. A double-click on a theory node or command moves the editor focus to that particular source position.

  It is also possible to reveal individual timing information via some tooltip for the corresponding command keyword, using the technique of mouse hovering with \<^verbatim>\<open>CONTROL\<close>~/ \<^verbatim>\<open>COMMAND\<close> modifier (\secref{sec:tooltips-hyperlinks}). Actual display of timing depends on the global option @{system_option_ref jedit_timing_threshold}, which can be configured in \<^emph>\<open>Plugin Options~/ Isabelle~/ General\<close>.

  \<^medskip>
  The \<^emph>\<open>Monitor\<close> panel visualizes various data collections about recent activity of the Isabelle/ML task farm and the underlying ML runtime system. The display is continuously updated according to @{system_option_ref editor_chart_delay}. Note that the painting of the chart takes considerable runtime itself --- on the Java Virtual Machine that runs Isabelle/Scala, not Isabelle/ML. Internally, the Isabelle/Scala module \<^verbatim>\<open>isabelle.ML_Statistics\<close> provides further access to statistics of Isabelle/ML.
\<close>


section \<open>Low-level output\<close>

text \<open>
  Prover output is normally shown directly in the main text area or specific panels like \<^emph>\<open>Output\<close> (\secref{sec:output}) or \<^emph>\<open>State\<close> (\secref{sec:state-output}). Beyond this, it is occasionally useful to inspect low-level output channels via some of the following additional panels:

  \<^item> \<^emph>\<open>Protocol\<close> shows internal messages between the Isabelle/Scala and Isabelle/ML side of the PIDE document editing protocol. Recording of messages starts with the first activation of the corresponding dockable window; earlier messages are lost.

  Actual display of protocol messages causes considerable slowdown, so it is important to undock all \<^emph>\<open>Protocol\<close> panels for production work.

  \<^item> \<^emph>\<open>Raw Output\<close> shows chunks of text from the \<^verbatim>\<open>stdout\<close> and \<^verbatim>\<open>stderr\<close> channels of the prover process. Recording of output starts with the first activation of the corresponding dockable window; earlier output is lost.

  The implicit stateful nature of physical I/O channels makes it difficult to relate raw output to the actual command from where it was originating. Parallel execution may add to the confusion. Peeking at physical process I/O is only the last resort to diagnose problems with tools that are not PIDE compliant.

  Under normal circumstances, prover output always works via managed message channels (corresponding to @{ML writeln}, @{ML warning}, @{ML Output.error_message} in Isabelle/ML), which are displayed by regular means within the document model (\secref{sec:output}). Unhandled Isabelle/ML exceptions are printed by the system via @{ML Output.error_message}.
\<^item> \<^emph>\<open>Syslog\<close> shows system messages that might be relevant to diagnose problems with the startup or shutdown phase of the prover process; this also includes raw output on \<^verbatim>\<open>stderr\<close>. Isabelle/ML also provides an explicit @{ML Output.system_message} operation, which is occasionally useful for diagnostic purposes within the system infrastructure itself.

  A limited amount of syslog messages are buffered, independently of the docking state of the \<^emph>\<open>Syslog\<close> panel. This allows to diagnose serious problems with Isabelle/PIDE process management, outside of the actual protocol layer.

  Under normal situations, such low-level system output can be ignored.
\<close>


chapter \<open>Known problems and workarounds \label{sec:problems}\<close>

text \<open>
  \<^item> \<^bold>\<open>Problem:\<close> Odd behavior of some diagnostic commands with global side-effects, like writing a physical file.

  \<^bold>\<open>Workaround:\<close> Copy/paste complete command text from elsewhere, or disable continuous checking temporarily.

  \<^item> \<^bold>\<open>Problem:\<close> Keyboard shortcuts \<^verbatim>\<open>C+PLUS\<close> and \<^verbatim>\<open>C+MINUS\<close> for adjusting the editor font size depend on platform details and national keyboards.

  \<^bold>\<open>Workaround:\<close> Rebind keys via \<^emph>\<open>Global Options~/ Shortcuts\<close>.

  \<^item> \<^bold>\<open>Problem:\<close> The Mac OS X key sequence \<^verbatim>\<open>COMMAND+COMMA\<close> for application \<^emph>\<open>Preferences\<close> is in conflict with the jEdit default keyboard shortcut for \<^emph>\<open>Incremental Search Bar\<close> (action @{action_ref "quick-search"}).

  \<^bold>\<open>Workaround:\<close> Rebind key via \<^emph>\<open>Global Options~/ Shortcuts\<close> according to national keyboard, e.g.\ \<^verbatim>\<open>COMMAND+SLASH\<close> on English ones.

  \<^item> \<^bold>\<open>Problem:\<close> On Mac OS X with native Apple look-and-feel, some exotic national keyboards may cause a conflict of menu accelerator keys with regular jEdit key bindings. This leads to duplicate execution of the corresponding jEdit action.

  \<^bold>\<open>Workaround:\<close> Disable the native Apple menu bar via Java runtime option \<^verbatim>\<open>-Dapple.laf.useScreenMenuBar=false\<close>.

  \<^item> \<^bold>\<open>Problem:\<close> Mac OS X system fonts sometimes lead to character drop-outs in the main text area.

  \<^bold>\<open>Workaround:\<close> Use the default \<^verbatim>\<open>Isabelle DejaVu\<close> fonts.

  \<^item> \<^bold>\<open>Problem:\<close> Some Linux/X11 input methods such as IBus tend to disrupt key event handling of Java/AWT/Swing.

  \<^bold>\<open>Workaround:\<close> Do not use X11 input methods. Note that environment variable \<^verbatim>\<open>XMODIFIERS\<close> is reset by default within Isabelle settings.

  \<^item> \<^bold>\<open>Problem:\<close> Some Linux/X11 window managers that are not ``re-parenting'' cause problems with additional windows opened by Java. This affects either historic or neo-minimalistic window managers like \<^verbatim>\<open>awesome\<close> or \<^verbatim>\<open>xmonad\<close>.

  \<^bold>\<open>Workaround:\<close> Use a regular re-parenting X11 window manager.
\<^item> \<^bold>\<open>Problem:\<close> Various forks of Linux/X11 window managers and desktop environments (like Gnome) disrupt the handling of menu popups and mouse positions of Java/AWT/Swing.

  \<^bold>\<open>Workaround:\<close> Use suitable version of Linux desktops.

  \<^item> \<^bold>\<open>Problem:\<close> Full-screen mode via jEdit action @{action_ref "toggle-full-screen"} (default keyboard shortcut \<^verbatim>\<open>F11\<close>) works on Windows, but not on Mac OS X or various Linux/X11 window managers.

  \<^bold>\<open>Workaround:\<close> Use native full-screen control of the window manager (notably on Mac OS X).

  \<^item> \<^bold>\<open>Problem:\<close> Heap space of the JVM may fill up and render the Prover IDE unresponsive, e.g.\ when editing big Isabelle sessions with many theories.

  \<^bold>\<open>Workaround:\<close> Increase JVM heap parameters by editing platform-specific files (for ``properties'' or ``options'') that are associated with the main app bundle.

  Also note that jEdit provides a heap space monitor in the status line (bottom-right). Double-clicking on that causes full garbage-collection, which sometimes helps in low-memory situations.
\<close>

end
https://zbmath.org/6820447
Fiedler-Ando theorem for Ando-Li-Mathias mean of positive operators. (English) Zbl 06820447

Summary: In this paper, we show several operator inequalities involving the Hadamard product and the Ando-Li-Mathias mean of $n$ positive operators on a Hilbert space, which are regarded as $n$-variable versions of the Fiedler-Ando theorem. As an application, we show an $n$-variable version of a Fiedler-type inequality via the Ando-Li-Mathias mean.

MSC:
47A64 Operator means involving linear operators, shorted linear operators, etc.
47A63 Linear operator inequalities
47A80 Tensor products of linear operators
http://mathoverflow.net/revisions/57114/list
4 fix typo

I will assume the uniform distribution on all (labelled) graphs with $n$ vertices and $k$ edges. An acyclic graph on $n$ vertices has at most $n-1$ edges, so let me further assume $k<n$. More precisely, $k=n-c$, where $c$ is the number of connected components, i.e., the graph is acyclic iff it is a union of $c$ disjoint trees. Now, the number of trees with $m$ vertices is $m^{m-2}$ by Cayley's formula, hence the requested probability is
$$p_{n,k}=\frac1{(n-k)!\binom{\binom n2}k}\sum_{\substack{n_1+\cdots+n_{n-k}=n\\n_1,\dots,n_{n-k}>0}}\binom n{n_1\,\dots\,n_{n-k}}\prod_{i=1}^{n-k}n_i^{n_i-2}.$$
We have $\binom{\binom n2}k\approx\left(\frac{en^2}{2k}\right)^kk^{-1/2}$ by Stirling's approximation (where $f\approx g$ means $c_1f\le g\le c_2f$ for some positive constants $c_1,c_2$), hence in the simplest case $k=n-1$,
$$p_{n,n-1}\approx\frac1{\sqrt n}\left(\frac2e\right)^n.$$
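For anyone who wants to verify the closed form numerically, here is a small Python sketch (the function names are mine, not part of the answer): it evaluates the forest-counting formula above and compares it against brute-force enumeration of all $k$-edge graphs for small $n$.

```python
from itertools import combinations
from math import comb, factorial, prod

def p_acyclic(n, k):
    """P(uniform random n-vertex, k-edge labelled graph is acyclic), k < n."""
    c = n - k  # an acyclic graph with k = n - c edges is a forest of c trees
    def compositions(total, parts):
        # ordered tuples of `parts` positive integers summing to `total`
        if parts == 1:
            yield (total,)
            return
        for first in range(1, total - parts + 2):
            for rest in compositions(total - first, parts - 1):
                yield (first,) + rest
    forests = sum(
        (factorial(n) // prod(factorial(m) for m in ns))    # multinomial
        * prod(m ** (m - 2) if m > 1 else 1 for m in ns)    # Cayley: m^(m-2)
        for ns in compositions(n, c)
    ) // factorial(c)  # the c trees of a forest are unordered
    return forests / comb(comb(n, 2), k)

def p_acyclic_brute(n, k):
    edges = list(combinations(range(n), 2))
    def acyclic(es):  # union-find cycle check
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in es:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True
    subsets = list(combinations(edges, k))
    return sum(acyclic(es) for es in subsets) / len(subsets)

# e.g. p_acyclic(5, 3) = 110/120: the only cyclic 3-edge sets are the 10 triangles
assert abs(p_acyclic(5, 3) - p_acyclic_brute(5, 3)) < 1e-12
assert abs(p_acyclic(6, 5) - p_acyclic_brute(6, 5)) < 1e-12
```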
https://www.nature.com/articles/s42005-019-0169-x?error=cookies_not_supported&code=1ae8b759-70aa-46f7-86bd-d4d5374a253a
# Coherent transport of quantum states by deep reinforcement learning

## Abstract

Some problems in physics can be handled only after a suitable ansatz solution has been guessed, proving to be resilient to generalization. The coherent transport of a quantum state by adiabatic passage through an array of semiconductor quantum dots is an excellent example of such a problem, where it is necessary to introduce a so-called counterintuitive control sequence. Instead, the deep reinforcement learning (DRL) technique has proven to be able to solve very complex sequential decision-making problems, despite a lack of prior knowledge. We show that DRL discovers a control sequence that outperforms the counterintuitive control sequence. DRL can even discover novel strategies when realistic disturbances affect an ideal system, such as detuning or when dephasing or losses are added to the master equation. DRL is effective in controlling the dynamics of quantum states and, more generally, whenever an ansatz solution is unknown or insufficient to effectively treat the problem.

## Introduction

Some problems in physics are solved as a result of the discovery of an ansatz solution, namely a successful test guess, but unfortunately there is no general method to generate one. Recently, machine learning has increasingly proved to be a viable tool for modeling hidden features and effective rules in complex systems. Among the classes of machine learning algorithms, deep reinforcement learning (DRL) [1] is providing some of the most spectacular results due to its ability to identify strategies for achieving a goal in a complex space of solutions without prior knowledge of the system [2,3,4,5,6,7]. Contrary to supervised learning, which has already been applied to quantum systems, such as in the determination of high-fidelity gates and the optimization of quantum memories by dynamic decoupling [8], DRL has only very recently been proposed for the control of quantum systems [9,10,11,12,13,14,15,16], along with a strictly quantum reinforcement learning implementation [14,17].

To show the power of DRL, we apply DRL to the problem of coherent transport by adiabatic passage (CTAP), where an electron (encoding the quantum state) is transferred through an array of quantum dots. The ansatz solution for CTAP is notoriously called counterintuitive because of its nonobvious barrier control gate pulse sequence. During coherent adiabatic passage, an electron spends ideally no time in the central quantum dot, because of the simultaneous modulation of the coupling between the dots that suitably drives the trajectory through the Hilbert space [18,19,20,21,22]. The system moves from an initial equilibrium condition to a different one, namely the one where only the last dot of the array is populated. By exploiting such an ansatz solution, consisting of applying the barrier control gates between the dots in a "reversed order" with respect to what intuition would naturally suggest, the process displays truly quantum mechanical behavior, provided that the array consists of an odd number of dots. Researchers have already explored silicon-based quantum information processing architectures [23,24,25], including the coherent transport of multiple-spin qubits into double quantum dots by adiabatic passage [22], heuristic search methods, such as genetic algorithms, to find a universal set of quantum logic gates [26,27], and the application of DRL to classical systems [28,29,30].
Here, we demonstrate that DRL implemented in a compact neural network can, first of all, autonomously discover an analog of the counterintuitive gate pulse sequence without any prior knowledge, therefore finding a control path in a problem whose solution is far from the equilibrium of the initial conditions. More importantly, this method can outperform the previously introduced analytical solutions in terms of processing speed and when the system deviates from ideal conditions, which are here represented by the imperfect tuning of the ground states of the quantum dots, dephasing, and losses. Under such conditions, no analytical approach exists to the best of our knowledge.

In our approach, we exploit trust region policy optimization (TRPO) [31] to handle the CTAP problem. First, we compare the results discovered by the artificial intelligence algorithm with the ansatz solution given in the literature. Next, we apply the method to solve the system when the ground states of the quantum dots are detuned and when the system is perturbed by an interaction with uncontrollable degrees of freedom of the surrounding environment. This latter condition results in dephasing and loss terms in the master equation describing the system, for which there exists no analytical method. Similar to the case of artificial intelligence learning in the classical Atari framework [2], in our approach, the DRL agent interacts with a QuTiP [32] simulation, which plays the role of the RL environment, by implementing the master equation of the system. The RL agent exploits the information retrieved from the feedback in terms of the temporal evolution of the population of the dots. As a further advantage of such an approach, a 2-step temporal Bayesian network (2TBN) analysis can identify which parameters of the system influence the process to a greater extent.

## Results

### Deep reinforcement learning by actor-critic neural networks

Reinforcement learning (RL) is a set of techniques used to learn behavior in sequential decision-making problems when no prior knowledge about the system dynamics is available or when the control problem is too complex for classical optimal-control algorithms. RL methods can be roughly classified into three main categories: value-based, policy-based, and actor-critic methods [1]. Recently, actor-critic methods have proven to be successful in solving complex continuous control problems [33]. The idea behind actor-critic methods is to use two parametric models (e.g., neural networks) to represent both the policy (actor) and the value function (critic). The actor decides in each state of the system which action to execute, while the critic learns the value (utility) of taking each action in each state (see Supplementary Note 1). Following the critic's advice, the actor modifies the parameters of its policy to improve the performance. Among the many actor-critic methods available in the literature, we selected the TRPO algorithm [31] (Supplementary Note 2) to find an optimal policy of control pulses. The choice of TRPO is motivated both by its excellent performance on a wide variety of tasks and by the relative simplicity of tuning its hyperparameters [31] (see Supplementary Note 3).
### Coherent spatial adiabatic passage of qubits

CTAP is the solid-state version of a method developed for stimulated Raman adiabatic passage [21,34], which is relevant, for instance, in quantum information processing architectures that shuttle a qubit from one location to another, where attention is paid to minimizing the information loss during transport. In solid-state quantum devices based on either silicon [35] or gallium arsenide [36], the qubit can be encoded, for instance, into spin states of either excess electron(s) or hole(s) in quantum dots [24]. CTAP was originally developed for single-electron states in singly occupied quantum dots, but it can also be extended to more complex spin states, such as hybrid qubits based on spin triplets [22]. If one imagines employing, for instance, an array of dopants in silicon [37,38,39], a reasonable interdopant spacing is of the order of 20 nm, and a hopping time of 100 ps [40] can be achieved. Adiabatic passage requires control pulses with a bandwidth an order of magnitude or two lower than the hopping time, which can be managed by conventional electronics [41].

### Steering a quantum system initially far from the final equilibrium

To demonstrate the exploitation of RL, we start with the simplest case of CTAP across a chain of three identical quantum dots. The RL architecture is depicted in Fig. 1a. The simulation of the physical system that supports CTAP consists of an RL environment E that receives as input the updated values of the parameters (in our case the control sequence $\Omega_{i,i+1}$, with $i = 1, 2$, between the adjacent $i$th and $(i+1)$th dots) that reflect the action of the control gates as calculated by the agent A according to the policy $\pi$. In turn, the RL environment E computes the new RL state (here expressed in terms of the density matrix of the triple quantum dot device, see Supplementary Fig. 1) and provides feedback to agent A. Agent A calculates the next input parameters after evaluating the effects of the previous input according to a reward function $r_t$, which is expressed in terms of the system state. It is worth noting that the definition of an RL environment (a black box that receives an action as input and outputs a result) is very different from the usual definition of a physical environment, which corresponds to an external quantum system that interacts with the quantum system under investigation.

More explicitly, the ground state of each quantum dot is tuned with respect to the others by external top metal gates (not shown for simplicity in the sketch of the RL environment E in Fig. 1a), while the coupling between two neighboring quantum dots is in turn controlled by additional barrier control gates. The idealized physical system is prepared so that the ground states of the three quantum dots have the same energy. As the reference energy is arbitrary, without loss of generality we can set $E_1 = E_2 = E_3 = 0$. The Hamiltonian therefore reads:

$$H(t) = \begin{pmatrix} 0 & -\Omega_{12}(t) & 0 \\ -\Omega_{12}(t) & 0 & -\Omega_{23}(t) \\ 0 & -\Omega_{23}(t) & 0 \end{pmatrix}. \tag{1}$$

One of the three eigenstates is referred to as a "dark state", and it is expressed as a function of the states of the first and third dots only. The dark state reads:

$$\left| D_0(t) \right\rangle = \cos\theta_1(t)\left| 1 \right\rangle - \sin\theta_1(t)\left| 3 \right\rangle, \tag{2}$$

where $\theta_1(t) = \arctan(\Omega_{12}(t)/\Omega_{23}(t))$.
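For concreteness, the dynamics generated by Eq. (1) can be integrated directly in QuTiP, the library the paper itself uses as the RL environment. The minimal sketch below drives the system with a pair of Gaussian pulses in which $\Omega_{23}$ precedes $\Omega_{12}$ — the "counterintuitive" ordering discussed next; the pulse centers and widths are ad-hoc illustrative choices of ours, not the paper's values.

```python
import numpy as np
from qutip import Qobj, basis, mesolve

Omega_max = 1.0
t_max = 10 * np.pi / Omega_max          # adiabatic regime quoted in the text
tlist = np.linspace(0.0, t_max, 400)
sigma = t_max / 8                        # ad-hoc pulse width

# Counterintuitive ordering: Omega_23 peaks *before* Omega_12.
def omega12(t, args=None):
    return Omega_max * np.exp(-((t - 0.6 * t_max) / sigma) ** 2)

def omega23(t, args=None):
    return Omega_max * np.exp(-((t - 0.4 * t_max) / sigma) ** 2)

# Coupling terms of the Hamiltonian in Eq. (1)
H12 = Qobj([[0, -1, 0], [-1, 0, 0], [0, 0, 0]])
H23 = Qobj([[0, 0, 0], [0, 0, -1], [0, -1, 0]])
H = [[H12, omega12], [H23, omega23]]

psi0 = basis(3, 0)                       # electron initially in dot |1>
proj = [basis(3, n).proj() for n in range(3)]
result = mesolve(H, psi0, tlist, e_ops=proj)

print("max rho_22:", max(result.expect[1]))   # should stay near zero
print("rho_33(T) :", result.expect[2][-1])    # transfer fidelity
```

Swapping the two pulse centers (the "intuitive" ordering) visibly degrades both the final $\rho_{33}$ and the emptiness of the central dot, which is a quick way to see why the counterintuitive sequence matters.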
A suitable time evolution of the values of the control sequence $\Omega_{12}(t)$ and $\Omega_{23}(t)$ allows one to transform $|D_0(t)\rangle$ from $|1\rangle$ at $t = 0$ to the final target state $|3\rangle$ at $t = t_{\rm max}$. If the Hamiltonian is prepared in $|D_0\rangle$ at $t = 0$, by choosing $\theta_1(0) = 0$ (so that $|D_0(0)\rangle = |1\rangle$), it will remain in the same eigenstate if the adiabaticity criterion is met; that is, $|E_0 - E_\pm(t)| \gg |\langle \dot{D}_0(t) | D_\pm(t) \rangle|$, where the $|D_\pm(t)\rangle$ eigenstates are explicitly given in Supplementary Note 4, and $E_\pm(t)$ are the corresponding eigenenergies.

The effectiveness of the pulse sequence of the barrier control gate, which is reflected in the control sequence $\Omega_{i,i+1}(t)$, is addressed by taking $F(t) = \rho_{33}(t)$ as the fidelity and maximizing it while trying to keep the population $\rho_{22}(t)$ constant at zero, where $\rho_{ij}$, $i,j \in \{1, 2, 3\}$, represents an element of the density matrix $\rho$. The time evolution is governed by the von Neumann equation for the density matrix $\rho$, namely:

$$\dot\rho = -\frac{i}{\hbar}[H, \rho]. \tag{3}$$

Notoriously, a suitable shaping of $\Omega_{12}(t)$ and $\Omega_{23}(t)$ as Gaussian pulses can achieve coherent transport with a high fidelity, if $t_{\rm max} \gtrsim 10\pi/\Omega_{\rm max}$ (see Supplementary Note 4 and Supplementary Figs. 2–5). The two pulses must be applied in a so-called counterintuitive sequence, meaning that the first gate controlling $\Omega_{12}(t)$ is operated after $\Omega_{23}(t)$. Such a pulse sequence drives the occupation of the first dot $\rho_{11}(t_{\rm max})$ to zero and the occupation of the last dot $\rho_{33}(t)$ to 1, while maintaining empty occupation of the central dot ($\rho_{22}(t) = 0$ for all $t \in [0, t_{\rm max}]$) [18]. It is worth mentioning that recently a different ansatz combination of pulse shapes was proposed to speed up this process [42], using the so-called shortcut-to-adiabaticity protocol [43]. Generally speaking, there is no analytical method to optimize the pulses, so further improvements will be based on still undiscovered ideas. Here is where the power of DRL comes into play.

### Deep reinforcement learning setup

The time evolution of the physical system can be mapped into a Markov decision process (MDP). An MDP is defined by a state space $\mathcal{S}$, an action space $\mathcal{A}$, a transition probability $P(s_{t_{i+1}} \mid s_{t_i}, a_{t_i})$ (where $s_{t_i}, s_{t_{i+1}} \in \mathcal{S}$ and $a_{t_i} \in \mathcal{A}$), and a reward function $r$ that is generally defined on the space $\mathcal{S} \times \mathcal{A}$, but in our case $r : \mathcal{S} \to \mathbb{R}$. We define the discretized time step $t_i = i\,\Delta t$, where $i = \{0, 1, \ldots, N\}$ and $\Delta t = t_{\rm max}/N$, such that $t_N = t_{\rm max}$. It is noteworthy to clarify that, albeit the name is the same, the definition of the RL state $s_{t_i}$ differs from that of a quantum state. In this work, $s_{t_i}$ is defined as:

$$s_{t_i} = \left[\, |\rho_{nm}(t_i)| \;\text{for}\; m, n \in \{1, 2, 3\},\; \Omega_{12}(t_{i-1}),\; \Omega_{23}(t_{i-1}) \,\right] \tag{4}$$

such that the state at each time step contains the instantaneous absolute values of the nine entries of the density matrix and both $\Omega_{ij}$ values evaluated at the previous time step. Instead, $a_{t_i}$ is defined as:

$$a_{t_i} = \left[\, \Omega_{12}(t_i),\; \Omega_{23}(t_i) \,\right] \tag{5}$$

such that $a_{t_i} \in [0, \Omega_{\rm max}] \times [0, \Omega_{\rm max}]$.
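To make the actor-critic pair described earlier concrete, here is a minimal PyTorch sketch of the two parametric models. Only the dimensions follow from the text — a state dimension of 11 from Eq. (4) (nine density-matrix moduli plus the two previous couplings) and an action dimension of 2 from Eq. (5); the hidden sizes and the Gaussian policy parameterization are illustrative assumptions of ours, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Gaussian policy over the two couplings (Omega_12, Omega_23)."""
    def __init__(self, state_dim=11, action_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        # State-independent log standard deviation, a common TRPO-style choice.
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        mean = self.net(state)
        return torch.distributions.Normal(mean, self.log_std.exp())

class Critic(nn.Module):
    """State-value estimate V(s), used as the actor's baseline."""
    def __init__(self, state_dim=11, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state):
        return self.net(state).squeeze(-1)
```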
Due to the robust definition of the reward function that allows the agent to judge its performance, the neural network evolves in order to obtain the best temporal evolution of the coupling parameters $\Omega_{i,i+1}$, ensuring that the electron population reaches the target state over time (Fig. 1b). The definition of the best reward function is certainly the most delicate choice in the whole model. The key features of the reward can be summarized by two properties: its generality and its expression as a function of the desired final state. Generality means that the reward should not contain specific information on the characteristics of the two pulses. Regarding the dependence on the desired final state, in our case the goal is to maximize $\rho_{33}$ at the end of the time evolution. The structure of the reward function used in the simulations is:

$$r_{t_i} = \left( -1 + \rho_{33}(t_i) - \rho_{22}(t_i) \right) + A(t_i)\,\delta_{i,N} - B(t_i), \tag{6}$$

where $A(t_i)$ and $B(t_i)$ are the goal-reaching and punishment terms, respectively. We found that the following combinations of $A(t_i)$ and $B(t_i)$ work well:

$$A(t_i) = \begin{cases} 1000 & \rho_{33}(t_i) > \rho_{33}^{\rm th} \\ 0 & \text{elsewhere} \end{cases}
\qquad
B(t_i) = \begin{cases} 100\,\Theta(\rho_{22}(t_i) - 0.05) & \text{standard CTAP} \\ e^{6\rho_{22}(t_i)} & \text{with detuning, dephasing, and loss} \end{cases} \tag{7}$$

The reward functions used in this research are fully accounted for in Supplementary Note 5. The sum of the first three terms is nonpositive at each time step, so the agent will try to bring the sum to 0 by minimizing $\rho_{22}$ and maximizing $\rho_{33}$. We have observed that subtracting $B(t)$ (e.g., punishing the electronic occupation in site 2) and awarding the attainment of the goal with the term $A(t)$ improves the convergence of the learning. Furthermore, in some specific cases, we stop the agent at an intermediate episode if $\rho_{33}$ is greater than an arbitrary threshold $\rho_{33}^{\rm th}$ for a certain number of time steps. This choice can help find fast pulses that achieve high-fidelity quantum information transport at the cost of a higher $\rho_{22}^{\rm max}$ with respect to the analytic pulses (more details are given in Supplementary Note 5).

### Training of the agent

Figure 2 shows the best results achieved by the agent at various epochs. The agent is free to span the whole coupling range at each step. After the training, we smooth the output pulses $\Omega_{i,i+1}$, and we run the simulation using the smoothed values, obtaining a slight improvement over the calculation using the original output values of the RL agent. Smoothing is achieved with a moving average of the pulses (more details in Supplementary Note 6, Supplementary Fig. 6, and Supplementary Table 1). The three occupations $\rho_{ii}$, $i \in \{1, 2, 3\}$, shown in Fig. 2 refer to the smoothed pulse sequences. At the very beginning, the untrained agent tries random values with apparently low success, as the occupation oscillates between the first and the last dot over time. It is worth noting that despite this fluctuation, due to the reward function, the agent learns very quickly, after only 45 epochs, to always keep the occupation of the central dot below 0.5.
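Putting Eqs. (4)–(6) together, the interaction loop of Fig. 1a can be sketched as a small environment class wrapping the QuTiP integrator. The state follows Eq. (4), the action Eq. (5), and the reward implements the first bracket of Eq. (6) plus the goal bonus $A(t_N)$; the punishment term $B$ is omitted and the class, step count, and the 0.99 threshold are simplifications of ours, not the authors' code.

```python
import numpy as np
from qutip import Qobj, fock_dm, mesolve

class CtapEnv:
    """One step = integrate the von Neumann equation over dt with the
    couplings chosen by the agent, then return (state, reward, done)."""

    def __init__(self, omega_max=1.0, n_steps=100):
        self.omega_max = omega_max
        self.n_steps = n_steps
        self.dt = (10 * np.pi / omega_max) / n_steps
        self.reset()

    def reset(self):
        self.rho = fock_dm(3, 0)          # electron starts in dot |1>
        self.prev = np.zeros(2)           # previous (Omega_12, Omega_23)
        self.i = 0
        return self._state()

    def _state(self):                     # Eq. (4): 9 moduli + 2 couplings
        return np.concatenate([np.abs(self.rho.full()).ravel(), self.prev])

    def step(self, action):               # action as in Eq. (5)
        o12, o23 = np.clip(action, 0.0, self.omega_max)
        H = Qobj([[0, -o12, 0], [-o12, 0, -o23], [0, -o23, 0]])
        self.rho = mesolve(H, self.rho, [0.0, self.dt]).states[-1]
        self.prev = np.array([o12, o23])
        self.i += 1
        p22 = self.rho.full()[1, 1].real
        p33 = self.rho.full()[2, 2].real
        reward = -1.0 + p33 - p22         # first bracket of Eq. (6)
        if self.i == self.n_steps and p33 > 0.99:
            reward += 1000.0              # goal-reaching term A(t_N)
        return self._state(), reward, self.i == self.n_steps
```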
### Training of the agent

Figure 2 shows the best results achieved by the agent at various epochs. The agent is free to span the whole coupling range at each step. After the training, we smooth the output pulses Ωi,i + 1 and rerun the simulation using the smoothed values, obtaining a slight improvement over the calculation with the original output values of the RL agent. Smoothing is achieved with a moving average of the pulses (more details in Supplementary Note 6, Supplementary Fig. 6, and Supplementary Table 1). The three occupations ρii, i ∈ {1, 2, 3}, shown in Fig. 2 refer to the smoothed pulse sequences. At the very beginning, the untrained agent tries random values with apparently little success, as the occupation oscillates between the first and the last dot over time. It is worth noting that despite this fluctuation, thanks to the reward function, the agent learns very quickly, after only 45 epochs, to always keep the occupation of the central dot below 0.5. After ~2000 epochs, the agent learns to stabilize the occupations of the first and last dots while maintaining an empty occupation of the central dot. After approximately twice that time, the populations of the first and last dots reverse, even if they do not yet reach their extremal values. Finally, after approximately four times the elapsed time, the agent achieves high-fidelity CTAP. Despite the stochastic nature of DRL, these epoch numbers remain consistent across different trainings (see Supplementary Fig. 7). Notice that the pulse sequence is similar to the ansatz Gaussian counterintuitive pulse sequence, as the second gate, acting on Ω23, is operated first, but the shape of the two pulses is different. It is remarkable that the reward function implicitly pushes the agent to achieve the result as quickly as possible, resulting in a pulse sequence that is significantly faster than the analytical Gaussian case and comparable to the recent proposal of Ban et al.42. The relevant point is that the agent achieves such results irrespective of the actual terms of the Hamiltonian contributing to the master equation. Therefore, DRL can be applied straightforwardly to more complex cases for which there is no generalization of the ansatz solutions found for the ideal case, which we address in the next section.

### Deep reinforcement learning to overcome disturbances

We now turn our attention to the behavior of our learning strategy when applied to a nonideal scenario that includes typical realistic conditions of semiconductor quantum dots, to be compared with ideal transport (Fig. 3a). In particular, we discuss the results produced by DRL when the array is affected by detuning caused by energy differences in the ground states of the dots, by dephasing, and by losses (Fig. 3b–d). These three effects exist, to different degrees, in any practical attempt to implement CTAP of an electron spin in quantum dots. The first effect is typically due to manufacturing defects44, while the last two emerge from the interaction of the dots with the surrounding environment45,46, involving charge and spin fluctuations47,48 and magnetic field noise49. Under such disturbances, neither analytical nor ansatz solutions are available. On the other hand, the robustness and generality of the RL approach can be exploited naturally since, from the point of view of the algorithm, this case does not differ from the ideal one discussed above. Let us consider a system of N = 3 quantum dots with different energies Ei, i ∈ {1, 2, 3}. We define Δij = Ej − Ei, so that the Hamiltonian (1) can be written, without loss of generality, as

$$H(t) = \left( \begin{array}{ccc} 0 & { - {\it{\Omega }}_{12}(t)} & 0 \\ { - {\it{\Omega }}_{12}(t)} & {\Delta _{12}} & { - {\it{\Omega }}_{23}(t)} \\ 0 & { - {\it{\Omega }}_{23}(t)} & {\Delta _{13}} \end{array} \right).$$ (8)

Figure 3b refers to the particular choice Δ12 = Δ23 = 0.15Ωmax (a full 2D scan of both Δ12 and Δ13 and the corresponding pulses are shown in Supplementary Figs. 8 and 9, respectively). In this case, DRL finds a solution that induces significantly faster transfer than that obtained with standard counterintuitive Gaussian pulses. Moreover, the latter are not even able to achieve a fidelity comparable to that of the ideal case. Such speed is a typical feature of the pulse sequences determined by DRL that satisfy the criterion of adiabaticity (see Supplementary Figs. 10 and 11).
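In code, the detuned Hamiltonian of Eq. (8) is a small extension of the ideal sketch above; the values below reproduce the particular choice Δ12 = Δ23 = 0.15Ωmax discussed in the text (so that Δ13 = Δ12 + Δ23):

```python
# Sketch of the detuned three-dot Hamiltonian, Eq. (8); names are illustrative.
import qutip as qt

def hamiltonian_detuned(omega12, omega23, delta12, delta13):
    return qt.Qobj([[0.0,      -omega12,  0.0],
                    [-omega12,  delta12, -omega23],
                    [0.0,      -omega23,  delta13]])

OMEGA_MAX = 1.0
H = hamiltonian_detuned(0.2, 0.8, 0.15 * OMEGA_MAX, 0.30 * OMEGA_MAX)
```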
Besides the energy detuning between the quantum dots, in a real implementation the dots interact with the surrounding environment. Since the microscopic details of such an interaction are unknown, its effects are taken into account through an effective master equation. A master equation of the Lindblad type with time-independent rates is adopted to grant sufficient generality while keeping a simple expression. To show the ability of DRL to mitigate disturbances, we consider, in particular, two major environmental effects: decoherence and losses. The first corresponds to a randomization of the relative phases of the electron states in the quantum dots, which results in a cancellation of the coherence terms, i.e., the off-diagonal elements of the density matrix in the position basis. The losses, instead, model the erasure of the quantum information carried by the electron/hole moving along the dots. In fact, while the carrier itself cannot be reabsorbed, the quantum information, here encoded as a spin state, can be changed by the interaction with some surrounding spin or by magnetic field noise. In the presence of dephasing and losses, the system's master equation is of the Lindblad type. Its general form is

$$\dot \rho = - \frac{i}{\hbar }\left[ {H,\rho } \right] + \mathop {\sum}\limits_n {\Gamma _n} \left( {A_n\rho A_n^\dagger - \frac{1}{2}\left\{ {A_n^\dagger A_n,\rho } \right\}} \right),$$ (9)

where An are the Lindblad operators, Γn are the associated rates, and {A, B} = AB + BA denotes the anticommutator. We first consider each dot being affected by dephasing and assume an equal dephasing rate Γd for each dot. The master equation can then be rewritten as

$$\dot \rho = - \frac{i}{\hbar }\left[ {H,\rho } \right] + \Gamma _d\left[ {{\mathrm{diag}}(\rho ) - \rho } \right].$$ (10)

Figure 3c shows an example of the pulses determined by DRL and the corresponding dynamics for the choice Γd = 0.01Ωmax. CTAP is achieved with Gaussian pulses in a time t ≈ 10π/Ωmax at the cost of a significant occupation of the central dot (inset). The advantage brought by the DRL agent with respect to Gaussian pulses is manifest: DRL (solid lines) realizes population transfer in approximately half of the time required when analytic pulses (dashed lines) are employed, and the occupation of the central dot is lower than that obtained with Gaussian pulses. We now consider the inclusion of a loss term in the master equation. The loss refers to the information carried by the electron, encoded in its spin. While the loss of an electron itself is a very rare event in quantum dots, requiring the sudden capture by a slow trap50,51,52, any random magnetic field and spin fluctuation around the quantum dots can modify the state of the transferred spin. Losses are described by the Lindblad operators |0〉〈k|, with k ∈ {1, 2, 3} and associated rate Γl, modeling the transfer of the amplitude from the k-th dot to an auxiliary vacuum state |0〉. Figure 3d shows the superior performance of DRL versus analytic Gaussian pulses in terms of the maximum value of ρ33 reached during the time evolution. Because of the reward function, DRL minimizes the transfer time so as to minimize the effect of losses. The explicit reward function for each case is given in Supplementary Table 2.
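The dissipative terms map directly onto QuTiP collapse operators. Below is a hedged sketch assuming a fourth auxiliary level |0〉 for the loss channel; Γd = 0.01Ωmax is the value quoted for Fig. 3c, while the loss rate is an illustrative placeholder not given in the text.

```python
# Sketch of the Lindblad propagation, Eqs. (9)-(10): equal pure dephasing on
# each dot plus loss of spin information to an auxiliary vacuum state |0>.
import numpy as np
import qutip as qt

OMEGA_MAX = 1.0
GAMMA_D = 0.01 * OMEGA_MAX   # dephasing rate used for Fig. 3c
GAMMA_L = 0.01 * OMEGA_MAX   # loss rate: illustrative assumption

def h4(omega12, omega23):
    # Three-dot Hamiltonian embedded in levels 1-3; level 0 is the vacuum
    H = np.zeros((4, 4))
    H[1, 2] = H[2, 1] = -omega12
    H[2, 3] = H[3, 2] = -omega23
    return qt.Qobj(H)

kets = [qt.basis(4, k) for k in range(4)]
c_ops  = [np.sqrt(GAMMA_D) * kets[k] * kets[k].dag() for k in (1, 2, 3)]  # dephasing
c_ops += [np.sqrt(GAMMA_L) * kets[0] * kets[k].dag() for k in (1, 2, 3)]  # losses

rho0 = qt.fock_dm(4, 1)      # electron prepared in dot 1
rho = qt.mesolve(h4(0.2, 0.8), rho0, [0.0, 0.1], c_ops).states[-1]
```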
As a further generalization, we applied DRL to the passage through more than one intervening dot, an extension called the straddling CTAP scheme (SCTAP)18,53. To exemplify a generalization to odd N > 3, we set N = 5, according to the sketch depicted in Fig. 4, and no disturbance was considered for simplicity (see Supplementary Note 7 for further details on SCTAP). For five-state transfer, as in the three-state case and for any other odd number of quantum dots, there is one state with zero energy (see Supplementary Fig. 12). The straddling tunneling sequence is realized by augmenting the original pulse sequence with straddling pulses, which are identical for all intervening tunneling rates Ωm. The straddling pulse involves the second and third control gates, as shown in Fig. 4a, b. The pulse sequence discovered by DRL is significantly different from the known sequence based on Gaussian pulses. It achieves population transfer in approximately one third of the time, with only some occupation of the central dot (see Fig. 4c). It is remarkable that the reward function (see Supplementary Note 5) achieves such a successful transfer without any requirement on the occupation of the intermediate dots ρii, i ∈ {2, 3, 4}, as it only rewards the occupation of the last dot.
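The five-dot Hamiltonian with a common straddling pulse on the intervening couplings can be sketched as follows; this is a minimal illustration with names chosen here, and the actual pulse shapes are the agent's output:

```python
# Sketch of the N = 5 straddling-CTAP Hamiltonian: outer pulses on the first
# and last couplings, one common straddling pulse on the intervening ones.
import numpy as np
import qutip as qt

def hamiltonian_sctap(omega_12, omega_45, omega_straddle):
    H = np.zeros((5, 5))
    couplings = {(0, 1): omega_12, (1, 2): omega_straddle,
                 (2, 3): omega_straddle, (3, 4): omega_45}
    for (i, j), v in couplings.items():
        H[i, j] = H[j, i] = -v
    return qt.Qobj(H)

H = hamiltonian_sctap(0.2, 0.8, 1.0)   # straddling pulse typically the largest
```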
### Analysis of the relevant variables within the DRL framework

The advantage of employing DRL with respect to an ansatz solution is further increased by the possibility of determining, from an analysis of the neural network, which factors are most relevant for the effectiveness of the final solution. In fact, the DRL algorithm enables an analysis of the state variables that are actually relevant for solving control problems like the one discussed above. To select the variables needed to solve an MDP, we follow the approach presented by Peters et al.54,55,56. The idea is that a state variable is useful if it helps to explain either the reward function or the dynamics of the state variables that are in turn necessary to explain the reward function. Otherwise, the state variable is useless and can be discarded without affecting the final solution. To represent the dependencies between the variables of an MDP, we use a two-slice temporal Bayesian network (2TBN)57, described in Supplementary Note 8. In the graph of Fig. 5, there are three types of nodes: the gray diamond node represents the reward function, the circle nodes represent state variables, and the squares represent action variables. The nodes are arranged in three vertical layers: the first layer on the left includes the variables at time ti, the second layer includes the state variables at time ti + 1, and the third layer includes the node representing the reward function. If a variable affects the value of another variable, a directed edge connects the former to the latter. Figure 5 shows the 2TBN estimated from a dataset of 100,000 samples obtained from the standard case of ideal CTAP. The weights are quantified in Supplementary Table 3. As expected from Eq. (6), the reward function depends only on the values of the variables ρ22 and ρ33. From the 2TBN, it emerges that the dynamics of these two state variables can be fully explained by knowing their values at the previous step, the values of the two action variables Ω12 and Ω23, and the actions taken in the previous time step (stored in the variables $${\it{\Omega }}_{12}^\prime$$ and $${\it{\Omega }}_{23}^\prime$$). All other state variables do not appear and can therefore be ignored for the computation of the optimal control policy. This finding matches the expectation from the constraints of the physical model: the coherences are not directly involved in the dynamics, and ρ11 is linearly dependent on ρ22 and ρ33 because the trace of the density matrix is constant. To confirm this finding, the agent was successfully trained in the standard case of ideal CTAP using only the input values (ρ22, ρ33, Ω12, and Ω23). Furthermore, the size of the hidden layer H1 could in this case be reduced from 64 to 4 with the same convergence of the total reward as a function of the episodes, or an even faster convergence when the batch size is reduced from 20 to 10 (see Supplementary Figs. 13 and 14). In the cases of detuning and dephasing, the corresponding 2TBNs are more complex, since the dynamics of ρ22 and ρ33 are also affected by the values of the other elements of the matrix ρ. This is also apparent from the changes introduced in the master equation: the trace is no longer constant when losses are present, and dephasing acts directly on the coherences.

## Discussion

Neural networks can discover control sequences for a tunable quantum system whose time evolution starts far from its final equilibrium, without any prior knowledge. Contrary to the employment of special ansatz solutions, we have shown that DRL discovers novel sequences of control operations to achieve a target state, regardless of possible deviations from the ideal conditions. Control by DRL is achieved irrespective of previously discovered ansatz solutions, and it applies when such solutions are unknown. The control sequence found by DRL can also provide insights into novel analytical shapes for optimal pulses. The use of a pretrained neural network as a starting point to identify the solution of a master equation with similar parameters further reduces the computation time by one order of magnitude (see Supplementary Note 9). Our investigation indicates that a key factor is the appropriate definition of the reward function, which deters the system from occupying the central quantum dot and rewards the occupation of the last quantum dot. To apply DRL to a practical quantum system known for its counterintuitive nature, we have studied quantum state transport across arrays of quantum dots, including sources of disturbance such as energy level detuning, dephasing, and loss. In all cases, the solutions found by the agent outperform the known solution in terms of speed, fidelity, or both. Both the pretraining of the network and the 2TBN analysis contribute to accelerating the learning process by reducing the computational effort. In general, we have shown that neural-network-based DRL provides a general tool for controlling physical systems by circumventing the issue of finding ansatz solutions when neither a straightforward method nor a brute-force approach is possible. A comparison with other control strategies for quantum systems16,58,59 will be the objective of future work. As a concluding remark, we observe that in this work we have modeled the effects of the environment and of noisy control fields by master equations in the Lindblad form with time-independent coefficients; by construction, the same DRL strategy can be used to determine the optimal pulse sequence in the presence of non-Markovian noise and effects originating from the interaction of the system with structured environments. In this case, an efficient way to compute the time evolution of the open system, which is required to determine the value of the pulses at each time step, is needed.
Depending on the nature (bosonic, fermionic) and features (spectral density) of the environment, different approximate or numerically exact strategies, ranging from Bloch-Redfield or time-convolutionless master equations60 to path integrals61 or chain-mapping techniques (reaction-coordinate62 or TEDOPA63,64), could be exploited.

## Methods

### Master equation for the Hamiltonian including pure dephasing

Pure dephasing65 is the simplest model that accounts for an environmental interaction of an otherwise closed system. The corresponding Lindblad operators are diagonal and have the form:

$$L_n = \sqrt {\gamma _n} \left| n \right\rangle \left\langle n \right|.$$ (11)

In the case γ1 = γ2 = γ3 = Γd, the master equation becomes:

$$\dot \rho = - \frac{i}{\hbar }\left[ {H,\rho } \right] + \Gamma _d\left[ {{\mathrm{diag}}(\rho ) - \rho } \right].$$ (12)

If the Hamiltonian is time-independent and diagonal in the basis of the dephasing operators, then

$$\frac{d}{{dt}}\rho _{ii} = 0$$ (13)

for every i, so the electronic populations remain unchanged in the presence of pure dephasing. This implies66 that the energy of the system remains unchanged during the evolution, since it cannot be changed by the environment.

### Neural network

The neural network is a multilayered feed-forward network of (Ni, H1, [H2,] No) neurons, where Ni = 11 is the number of inputs (the nine elements of the density matrix ρ plus the two values of the control sequence Ωij), No = 2 is the number of outputs (the two updated values of the control sequence, each spanning a continuous range between 0 and an arbitrary value Ωmax chosen according to experimental considerations), and Hi is the number of neurons of the i-th hidden layer. Different setups for the neural network have been employed. Nonetheless, all the neural networks are feedforward and fully connected, with a ReLU activation function for the neurons. For the implementation of the neural networks and their training, Tensorforce67 is employed. For the RL algorithm adopted for this work, trust region policy optimization (TRPO), the hyperparameters are set as summarized in Supplementary Table 4. The optimal number of neurons of the hidden layer peaks at ~$$2^5$$–$$2^6$$ (see Supplementary Fig. 15).

### Software

The QuTiP routines have been run in parallel using GNU Parallel68.

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

## Code availability

The code and the algorithm used in this study are available from the corresponding author upon reasonable request.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (MIT Press, Cambridge, Massachusetts, 1998).
2. Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518, 529 (2015).
3. Melnikov, A. A. et al. Active learning machine learns to create new quantum experiments. Proc. Natl Acad. Sci. USA 115, 1221–1226 (2018).
4. Nautrup, H. P., Delfosse, N., Dunjko, V., Briegel, H. J. & Friis, N. Optimizing quantum error correction codes with reinforcement learning. https://arxiv.org/abs/1812.08451 (2018).
5. Sweke, R., Kesselring, M. S., van Nieuwenburg, E. P. & Eisert, J. Reinforcement learning decoders for fault-tolerant quantum computation. https://arxiv.org/abs/1810.07207 (2018).
6. Colabrese, S., Gustavsson, K., Celani, A. & Biferale, L. Flow navigation by smart microswimmers via reinforcement learning. Phys. Rev. Lett. 118, 158004 (2017).
7. Prati, E. Quantum neuromorphic hardware for quantum artificial intelligence. J. Phys. Conf. Ser. 880 (2017).
8. August, M. & Ni, X. Using recurrent neural networks to optimize dynamical decoupling for quantum memory. Phys. Rev. A 95, 012335 (2017).
9. Dong, D., Chen, C., Tarn, T. J., Pechen, A. & Rabitz, H. Incoherent control of quantum systems with wavefunction-controllable subspaces via quantum reinforcement learning. IEEE Trans. Syst. Man Cybern. Part B Cybern. 38, 957–962 (2008).
10. Chen, C., Dong, D., Li, H., Chu, J. & Tarn, T. Fidelity-based probabilistic q-learning for control of quantum systems. IEEE Trans. Neural Netw. Learn. Syst. 25, 920–933 (2014).
11. Fösel, T., Tighineanu, P., Weiss, T. & Marquardt, F. Reinforcement learning with neural networks for quantum feedback. Phys. Rev. X 8, 031084 (2018).
12. August, M. & Hernández-Lobato, J. M. Taking gradients through experiments: LSTMs and memory proximal policy optimization for black-box quantum control. https://arxiv.org/abs/1802.04063 (2018).
13. Niu, M. Y., Boixo, S., Smelyanskiy, V. N. & Neven, H. Universal quantum control through deep reinforcement learning. In AIAA Scitech 2019 Forum 0954 (2019).
14. Albarrán-Arriagada, F., Retamal, J. C., Solano, E. & Lamata, L. Measurement-based adaptation protocol with quantum reinforcement learning. Phys. Rev. A 98, 042315 (2018).
15. Zhang, X.-M., Wei, Z., Asad, R., Yang, X.-C. & Wang, X. When does reinforcement learning stand out in quantum control? A comparative study on state preparation. https://arxiv.org/abs/1902.02157 (2019).
16. Bukov, M. et al. Reinforcement learning in different phases of quantum control. Phys. Rev. X 8, 031086 (2018).
17. Yu, S. et al. Reconstruction of a photonic qubit state with reinforcement learning. Adv. Quantum Technol. 0, 1800074 (2019).
18. Greentree, A. D., Cole, J. H., Hamilton, A. R. & Hollenberg, L. C. Coherent electronic transfer in quantum dot systems using adiabatic passage. Phys. Rev. B 70, 1–6 (2004).
19. Cole, J. H., Greentree, A. D., Hollenberg, L. C. L. & Das Sarma, S. Spatial adiabatic passage in a realistic triple well structure. Phys. Rev. B 77, 235418 (2008).
20. Greentree, A. D. & Koiller, B. Dark-state adiabatic passage with spin-one particles. Phys. Rev. A 90, 012319 (2014).
21. Menchon-Enrich, R. et al. Spatial adiabatic passage: a review of recent progress. Rep. Prog. Phys. 79, 074401 (2016).
22. Ferraro, E., De Michielis, M., Fanciulli, M. & Prati, E. Coherent tunneling by adiabatic passage of an exchange-only spin qubit in a double quantum dot chain. Phys. Rev. B 91, 075435 (2015).
23. Rotta, D., De Michielis, M., Ferraro, E., Fanciulli, M. & Prati, E. Maximum density of quantum information in a scalable CMOS implementation of the hybrid qubit architecture. Quantum Inf. Process. 15, 2253–2274 (2016).
24. Rotta, D., Sebastiano, F., Charbon, E. & Prati, E. Quantum information density scaling and qubit operation time constraints of CMOS silicon-based quantum computer architectures. npj Quantum Inf. 3, 26 (2017).
25. Prati, E., Rotta, D., Sebastiano, F. & Charbon, E. From the quantum Moore's law toward silicon based universal quantum computing. In 2017 IEEE International Conference on Rebooting Computing (ICRC) 1–4 (IEEE, 2017).
26. Ferraro, E., De Michielis, M., Mazzeo, G., Fanciulli, M. & Prati, E. Effective Hamiltonian for the hybrid double quantum dot qubit. Quantum Inf. Process. 13, 1155–1173 (2014).
27. De Michielis, M., Ferraro, E., Fanciulli, M. & Prati, E. Universal set of quantum gates for double-dot exchange-only spin qubits with intradot coupling. J. Phys. A Math. Theor. 48, 065304 (2015).
28. Bonarini, A., Caccia, C., Lazaric, A. & Restelli, M. Batch reinforcement learning for controlling a mobile wheeled pendulum robot. In IFIP International Conference on Artificial Intelligence in Theory and Practice 151–160 (Springer, 2008).
29. Tognetti, S., Savaresi, S. M., Spelta, C. & Restelli, M. Batch reinforcement learning for semi-active suspension control. In Control Applications (CCA) & Intelligent Control (ISIC), 2009 IEEE 582–587 (IEEE, 2009).
30. Castelletti, A., Pianosi, F. & Restelli, M. A multiobjective reinforcement learning approach to water resources systems operation: Pareto frontier approximation in a single run. Water Resour. Res. 49, 3476–3486 (2013).
31. Schulman, J., Levine, S., Abbeel, P., Jordan, M. & Moritz, P. Trust region policy optimization. In International Conference on Machine Learning 1889–1897 (2015).
32. Johansson, J., Nation, P. & Nori, F. QuTiP: an open-source Python framework for the dynamics of open quantum systems. Comput. Phys. Commun. 183, 1760–1772 (2012).
33. Duan, Y., Chen, X., Houthooft, R., Schulman, J. & Abbeel, P. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning 1329–1338 (2016).
34. Vitanov, N. V., Halfmann, T., Shore, B. W. & Bergmann, K. Laser-induced population transfer by adiabatic passage techniques. Annu. Rev. Phys. Chem. 52, 763–809 (2001).
35. Maurand, R. et al. A CMOS silicon spin qubit. Nat. Commun. 7, 13575 (2016).
36. Bluhm, H. et al. Dephasing time of GaAs electron-spin qubits coupled to a nuclear bath exceeding 200 μs. Nat. Phys. 7, 109 (2011).
37. Prati, E., Hori, M., Guagliardo, F., Ferrari, G. & Shinada, T. Anderson–Mott transition in arrays of a few dopant atoms in a silicon transistor. Nat. Nanotechnol. 7, 443 (2012).
38. Prati, E., Kumagai, K., Hori, M. & Shinada, T. Band transport across a chain of dopant sites in silicon over micron distances and high temperatures. Sci. Rep. 6, 19704 (2016).
39. Achilli, S. et al. GeVn complexes for silicon-based room-temperature single-atom nanoelectronics. Sci. Rep. 8, 18054 (2018).
40. Hollenberg, L. C. L., Greentree, A. D., Fowler, A. G. & Wellard, C. J. Two-dimensional architectures for donor-based quantum computing. Phys. Rev. B 74, 045311 (2006).
41. Homulle, H. et al. A reconfigurable cryogenic platform for the classical control of quantum processors. Rev. Sci. Instrum. 88, 045103 (2017).
42. Ban, Y., Chen, X. & Platero, G. Fast long-range charge transfer in quantum dot arrays. Nanotechnology 29, 505201 (2018).
43. Torrontegui, E. et al. Chapter 2: shortcuts to adiabaticity. In Advances in Atomic, Molecular, and Optical Physics (eds Arimondo, E., Berman, P. R. & Lin, C. C.) Vol. 62, 117–169 (Academic Press, Amsterdam, NL, 2013).
44. Jehl, X. et al. Mass production of silicon MOS-SETs: can we live with nano-devices variability? Procedia Comput. Sci. 7, 266–268 (2011).
45. Breuer, H.-P. & Petruccione, F. The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2002).
46. Clément, N., Nishiguchi, K., Fujiwara, A. & Vuillaume, D. One-by-one trap activation in silicon nanowire transistors. Nat. Commun. 1, 92 (2010).
47. Pierre, M. et al. Background charges and quantum effects in quantum dots transport spectroscopy. Eur. Phys. J. B 70, 475–481 (2009).
48. Kuhlmann, A. V. et al. Charge noise and spin noise in a semiconductor quantum device. Nat. Phys. 9, 570 (2013).
49. Prati, E. & Shinada, T. Single-Atom Nanoelectronics 5–39 (CRC Press, Singapore, 2013).
50. Prati, E., Fanciulli, M., Ferrari, G. & Sampietro, M. Giant random telegraph signal generated by single charge trapping in submicron n-metal-oxide-semiconductor field-effect transistors. J. Appl. Phys. 103, 123707 (2008).
51. Prati, E. The finite quantum grand canonical ensemble and temperature from single-electron statistics for a mesoscopic device. J. Stat. Mech. Theory Exp. 2010, P01003 (2010).
52. Prati, E., Belli, M., Fanciulli, M. & Ferrari, G. Measuring the temperature of a mesoscopic electron system by means of single electron statistics. Appl. Phys. Lett. 96, 113109 (2010).
53. Malinovsky, V. S. & Tannor, D. J. Simple and robust extension of the stimulated Raman adiabatic passage technique to N-level systems. Phys. Rev. A 56, 4929–4937 (1997).
54. Peters, J. & Schaal, S. Natural actor-critic. Neurocomputing 71, 1180–1190 (2008).
55. Peters, J., Mülling, K. & Altun, Y. Relative entropy policy search. In Twenty-Fourth AAAI Conference on Artificial Intelligence 1607–1612 (Atlanta, 2010).
56. Castelletti, A., Galelli, S., Restelli, M. & Soncini-Sessa, R. Tree-based variable selection for dimensionality reduction of large-scale control systems. In Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 2011 IEEE Symposium on 62–69 (IEEE, 2011).
57. Koller, D., Friedman, N. & Bach, F. Probabilistic Graphical Models: Principles and Techniques (MIT Press, Cambridge, Massachusetts, 2009).
58. Koch, C. P. Controlling open quantum systems: tools, achievements, and limitations. J. Phys. Condens. Matter 28, 213001 (2016).
59. Lamata, L. Basic protocols in quantum reinforcement learning with superconducting circuits. Sci. Rep. 7, 1609 (2017).
60. Breuer, H.-P. & Petruccione, F. The Theory of Open Quantum Systems (Oxford University Press, New York, 2006).
61. Nalbach, P., Eckel, J. & Thorwart, M. Quantum coherent biomolecular energy transfer with spatially correlated fluctuations. New J. Phys. 12, 065043 (2010).
62. Hughes, K. H., Christ, C. D. & Burghardt, I. Effective-mode representation of non-Markovian dynamics: a hierarchical approximation of the spectral density. II. Application to environment-induced nonadiabatic dynamics. J. Chem. Phys. 131, 124108 (2009).
63. Chin, A. W., Rivas, Á., Huelga, S. F. & Plenio, M. B. Exact mapping between system-reservoir quantum models and semi-infinite discrete chains using orthogonal polynomials. J. Math. Phys. 51, 092109 (2010).
64. Tamascelli, D., Smirne, A., Huelga, S. F. & Plenio, M. B. Efficient simulation of finite-temperature open quantum systems. https://arxiv.org/abs/1811.12418 (2018).
65. Taylor, J. M. et al. Relaxation, dephasing, and quantum control of electron spins in double quantum dots. Phys. Rev. B 76, 035315 (2007).
66. Tempel, D. G. & Aspuru-Guzik, A. Relaxation and dephasing in open quantum systems time-dependent density functional theory: properties of exact functionals from an exactly-solvable model system. Chem. Phys. 391, 130–142 (2011).
67. Schaarschmidt, M., Kuhnle, A. & Fricke, K. Tensorforce: a TensorFlow library for applied reinforcement learning. https://github.com/reinforceio/tensorforce (2017).
68. Tange, O. GNU Parallel: the command-line power tool. USENIX Mag. 36, 42–47 (2011).

## Acknowledgements
E.P. gratefully acknowledges the support of NVIDIA Corporation through the donation of the Titan Xp GPU used for this research.

## Author information

R.P. developed the simulation and implemented the algorithms, D.T. elaborated the Hamiltonian framework and the master equation formalism, M.R. operated the reinforcement learning analysis, and E.P. conceived and coordinated the research. All the authors discussed and contributed to the writing of the manuscript.

Correspondence to Enrico Prati.
2019-07-16 02:33:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7687897682189941, "perplexity": 1317.2982147741109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524475.48/warc/CC-MAIN-20190716015213-20190716041213-00474.warc.gz"}
https://cs.stackexchange.com/questions/49881/what-is-the-difference-between-superposition-and-paramodulation
# What is the difference between superposition and paramodulation?

I am currently writing a paper about automated theorem proving in first-order logic. Equality is not uncommon in mathematical problems, and almost every theorem prover, such as VAMPIRE or SPASS, has a calculus for equality. However, most papers write about the "superposition" calculus. A simple Google search did not turn up information about this term, only the Wikipedia page, which says merely that "it can be used for first-order logic with equality". Another paper referenced paramodulation-based theorem proving, which describes the concept of paramodulation for theorem proving. It seems that superposition is some modified version of paramodulation, but I don't understand why, or in which way. So, is there any explanation of this calculus, or can someone give me some hints about what differs from paramodulation?

This is late for you, but probably might be of help to others. I myself had asked this question, and I wrote a summary of my findings to the acl2-help mailing list on 8 and 9 September 2013: https://utlists.utexas.edu/sympa/arc/acl2-help/2013-09/msg00005.html https://utlists.utexas.edu/sympa/arc/acl2-help/2013-09/msg00006.html Disclaimer: What follows is my understanding of the subject, and I am not an expert.

I assume you are familiar with what is called term rewriting. Informally, it's a very simple notion where one uses a rule to reduce or rewrite another expression. A rewrite rule is usually a universally quantified formula of the form $C \Rightarrow l=r$, where $C$ is a formula and $l$ and $r$ are terms. One says that a rule $R$ applies to another formula $\Phi$ if there is some subterm $s$ of $\Phi$ and some substitution $\sigma$ such that $l\sigma = s$ (this process is called matching) and $C\sigma$ is true. If $R$ applies to $\Phi$, then one can rewrite $\Phi$ by replacing the subterm $s$ with $r\sigma$. Demodulation is another name for unconditional term rewriting, i.e., $C$ is simply true, so $R$ is just $l=r$. Rewriting involves matching, i.e., one-way unification. Paramodulation involves full unification. Moreover, the equality ($l=r$) that is used to perform paramodulation is not directed: either the lhs or the rhs can be unified with a subterm in the other literal paramodulated upon. For example, paramodulating the equality $f(x)=g(x)$ into the clause $P(f(a))$ unifies $f(x)$ with the subterm $f(a)$ (unifier $\{x \mapsto a\}$) and yields $P(g(a))$. Superposition is a restriction of paramodulation. In particular, the rules of inference named 'Superposition (left/right)' are restrictions of "ordered" paramodulation: the paramodulation rule is applied only when the equation $l=r$ (to be instantiated/unified) satisfies certain properties, e.g., $l \gg r$ for a given reduction order $\gg$, $l=r$ is maximal (w.r.t. $\gg$) in the clause, and so on. Here are some fundamental papers that are interesting from a historical point of view. [1] The concept of demodulation in theorem proving - L Wos, GA Robinson, DF Carson, L Shalla - Journal of the ACM (JACM), 1967 [2] Handbook of Practical Logic and Automated Reasoning - J Harrison. [3] Paramodulation and theorem-proving in first-order theories with equality - G Robinson, L Wos - Machine intelligence, 1969 [4] Rewrite-based equational theorem proving with selection and simplification - L Bachmair, H Ganzinger - Journal of Logic and Computation, 1994 [5] Simple word problems in universal algebras - DE Knuth, PB Bendix - Computational problems in abstract algebra 1970 [6] A superposition oriented theorem prover - L Fribourg - Theoretical Computer Science, 1985 - Elsevier [7] Completion without failure - L Bachmair, N Dershowitz, DA Plaisted - 1989 • Welcome to the site!
Good answers to older questions are always appreciated. One of the goals of the site is to be useful to anybody who might be wondering about a question, not just the person who originally asked it, so your first sentence is exactly right. :-) – David Richerby May 17 '17 at 9:02 • That's very helpful, thank you. The two links are dead. Any chance you could revive them? – Metaxal Aug 7 at 12:21
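Aside (not part of the original thread): a minimal Python sketch of the matching-and-replacement step that demodulation performs, under a term representation chosen here for illustration (compound terms and constants are tuples, with constants as 0-ary tuples; variables are bare strings).

```python
# Minimal sketch of unconditional term rewriting (demodulation).
def match(pattern, term, subst):
    """One-way unification: extend subst so that pattern instantiated equals term."""
    if isinstance(pattern, str):                       # pattern is a variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(term, tuple) and term[0] == pattern[0] and len(term) == len(pattern):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return None

def apply(term, subst):
    if isinstance(term, str):
        return subst.get(term, term)
    return (term[0],) + tuple(apply(t, subst) for t in term[1:])

def rewrite_once(term, lhs, rhs):
    """Rewrite the first subterm matching lhs with the instantiated rhs."""
    s = match(lhs, term, {})
    if s is not None:
        return apply(rhs, s)
    if isinstance(term, tuple):
        for i, sub in enumerate(term[1:], 1):
            new = rewrite_once(sub, lhs, rhs)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    return None

# rule mult(x, one) = x rewrites f(mult(a, one)) to f(a):
print(rewrite_once(('f', ('mult', ('a',), ('one',))), ('mult', 'x', ('one',)), 'x'))
```

Paramodulation would replace `match` with full unification (both sides may contain variables) and would also try the rule right-to-left; superposition then additionally filters the inferences by a reduction order, as described in the answer above.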
2020-09-24 12:17:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8318485021591187, "perplexity": 862.2093244582586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400217623.41/warc/CC-MAIN-20200924100829-20200924130829-00214.warc.gz"}
https://tex.stackexchange.com/questions/453115/unwanted-scale-with-mathbf-in-polyglossia?noredirect=1
# Unwanted scale with mathbf in polyglossia

I use polyglossia to produce documents written in Arabic, a right-to-left script. I choose the main font with the command \setmainfont, with the option Scale=1.3 to obtain suitable tails on the characters. This option produces a curious effect with the command \mathbf{}: it seems to scale the \mathbf argument, and \mbox{\bf } shows the same effect. How can I avoid this effect while keeping the Scale option, so that the math compiles as it does without it? I give the file I used:

\documentclass{article}
\usepackage{polyglossia}
\setmainlanguage[numerals=maghrib]{arabic}
\setmainfont[Script=Arabic,Scale=1.3]{Arial} % can use all fonts; use or not the Scale option
\begin{document}
$f(X)$ $f(\mathbf{X})$ $f(\mbox{\bf X})$
\end{document}

and its compilation, using the Scale option: and the other case, without this option (but I need it!), which is the good one:

• I'm afraid your code snippet isn't compilable at present. Please tell us (a) whether you use XeLaTeX or LuaLaTeX, (b) whether you load the fontspec and/or unicode-math packages, and (c) which math font you employ. Incidentally, it's doubtful that the issue you're experiencing is related to the polyglossia package. You appear to be loading the text font scaled by a factor of 1.3, but no such scaling appears to be applied to the math font. – Mico Sep 29 '18 at 18:37
• I use the file I gave, and I compile it with XeLaTeX... – Faouzi Bellalouna Sep 29 '18 at 19:56

The best way to get correct scaling is to use unicode-math and load all fonts with the Scale=MatchLowercase option. I've taken the liberty of writing this example with a set of free fonts by Khaled Hosny, one of the top experts on Arabic and math fonts here, or in the world. Khaled Hosny credits the Bulaq Press in Cairo as the inspiration for the Arabic typeface of Amiri. Libertinus is an extension of Linux Libertine, by Philipp H. Poll, and I'll let Dr. Hosny speak to how it influenced Amiri's Latin letters (lowercase j is very different!), but his Amiri and Libertinus Math work together very well.

\documentclass[varwidth, preview, 12pt]{standalone}
\usepackage{polyglossia}
\usepackage{unicode-math}
\setdefaultlanguage[numerals=maghrib]{arabic}
\setotherlanguage{english}
\defaultfontfeatures{Scale=MatchLowercase}
\setmainfont{Libertinus Serif}[
  Scale=1.0,
  Ligatures={Common, TeX}]
\setsansfont{Libertinus Sans}[
  Ligatures={Common, TeX}]
\setmonofont{Libertinus Mono}[
  Ligatures=TeX]
\newfontfamily\arabicfont{Amiri}[
  Language=Arabic,
  Script=Arabic,
  Ligatures={Common, TeX}]
\setmathfont{Libertinus Math}

\begin{document}
\parbox{10em}{
الخط \LRE{Amiri \textbf{Bold}}

\begin{english}
Libertinus Serif \textbf{Bold}
\end{english}

\LR{$f(X)$ $f(\mathbf{X})$ $f(\mbox{\bf X})$ $f(\symbfup{X})$}
}
\end{document}

I set all the text inside a narrow box in order to fit the bidirectional text inside the width this site allows. The $f(\symbfup{X})$ notation is what I would recommend with unicode-math. With that package, \mathbf is intended more for words in math mode, such as VELOCITY.
### Changing the Fonts

That example is a good template for selecting your own fonts, but if you want to stick with the Libertinus family, you might want to use the libertinus-otf package instead of setting everything up manually:

\documentclass[varwidth, preview, 12pt]{standalone}
\usepackage{polyglossia}
\usepackage{unicode-math}
\setdefaultlanguage[numerals=maghrib]{arabic}
\setotherlanguage{english}
\usepackage{libertinus-otf}
\defaultfontfeatures{Scale=MatchLowercase}
\newfontfamily\arabicfont{Amiri}[
  Language=Arabic,
  Script=Arabic,
  Ligatures={Common, TeX}]

\begin{document}
\parbox{10em}{
الخط \LRE{Amiri \textbf{Bold}}

\begin{english}
Libertinus Serif \textbf{Bold}
\end{english}

\LR{$f(X)$ $f(\mathbf{X})$ $f(\mbox{\bf X})$ $f(\symbfup{X})$}
}
\end{document}

You can instead substitute any other font family you choose, but in that case, make sure that your fonts match. You can set your math letters to be the same as your text font by adding additional commands like:

\setmathfont[range=up]{Arial}
\setmathfont[range=it]{Arial Italic}
\setmathfont[range=bfup]{Arial Bold}
\setmathfont[range=bfit]{Arial Bold Italic}

Be sure to put these after your main \setmathfont. I don't need a Scale= option here because I previously declared \defaultfontfeatures{Scale=MatchLowercase}. (Note that the main font overrides this with Scale=1.0 to remain at its natural size, and all other fonts in the document are scaled to match it.) Also note that unicode-math replaces all the math alphabets, including \mathscr, \mathcal, \mathbb and so on. You can also set these individually. For example, to replace \mathcal and \mathbfcal (by default the same as \mathscr) with the versions from STIX Two Math, you can use:

\setmathfont[range={cal,bfcal},Scale=MatchUppercase]{STIX Two Math}

Or to restore the default \mathbb and \mathbbit alphabets:

\setmathfont[range={bb, bbit}, Scale=MatchUppercase]{Latin Modern Math}

• OK, thank you for the answer, useful for my files. I have only the problem of the \mathbb command, which now has a curious appearance. Why? Is there a command that allows me to obtain the usual appearance? – Faouzi Bellalouna Sep 30 '18 at 7:59
• @FaouziBellalouna You're welcome. You can restore the default \mathbb and \mathbbit with \setmathfont[range={bb, bbit}, Scale=MatchUppercase]{Latin Modern Math} after your main \setmathfont, or substitute any other math font of your choice. You might take a look at STIX Two Math. You could even select any OpenType font you please as bb. – Davislor Sep 30 '18 at 8:10
• All is OK now. Many thanks for your help and explanation. – Faouzi Bellalouna Sep 30 '18 at 8:13
• Hello, and excuse me for commenting again so much later. After an update, when I compile this file, I get the following error message: ! Undefined control sequence. l.931 \file_get:nnN Do you have an idea about the cause of that? – Faouzi Bellalouna Apr 6 at 15:13
• @FaouziBellalouna I can't reproduce your bug. The version with \usepackage{libertinus-otf} still compiles for me with XeLaTeX on an up-to-date TeX Live 2018. That error message is from a package, so I'd report it to the maintainer. – Davislor Apr 6 at 16:42
2019-08-26 05:06:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8652151226997375, "perplexity": 3783.366264970928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330968.54/warc/CC-MAIN-20190826042816-20190826064816-00162.warc.gz"}
https://mk-com.com/balanced-nuclear-equation-39
# Balanced nuclear equation

Balancing Equations for Nuclear Reactions: The reaction of an α particle with magnesium-25 ($${}^{25}_{12}\mathrm{Mg}$$) produces a proton and a nuclide of another element. Identify the new nuclide produced.

## Balancing a Nuclear Chemical Equation

## 21.2 Nuclear Equations – Chemistry

Balancing nuclear equations: matter must be conserved, including all protons (p⁺) and neutrons (n⁰).

Example decay reaction (α decay): $${}^{219}_{86}\mathrm{Rn} \rightarrow {}^{4}_{2}\mathrm{He} + {}^{215}_{84}\mathrm{Po}$$

Fission reaction: $${}^{1}_{0}\mathrm{n} + {}^{235}_{92}\mathrm{U} \rightarrow {}^{92}_{36}\mathrm{Kr} + {}^{141}_{56}\mathrm{Ba} + 3\,{}^{1}_{0}\mathrm{n}$$

## A simple organized approach for balancing nuclear equations

So the final balanced equation will be as follows: $$10Cl^{-} + 16H^{+} + 2MnO_{4}^{-} \rightarrow 5Cl_{2} + 2Mn^{2+} + 8H_{2}O$$

Table of Some Common Ions: This detailed map structure of the whole reaction scheme can also be obtained by

## Balancing Nuclear Equations

Follow these six steps to balance nuclear equations: Write out the nuclide symbol of the reactant. Reference the Periodic Table to determine the atomic number of the element.

## Nuclear Equations

The key to balancing nuclear equations is to remember to place reactants on the left side of the arrow and products on the right side. The arrow between reactants and products is used to
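As a quick check of the α-decay example above (worked out here for illustration), mass numbers and atomic numbers must each balance separately:

$$219 = 4 + 215 \;\;\text{(mass numbers)}, \qquad 86 = 2 + 84 \;\;\text{(atomic numbers)}.$$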
2023-01-28 22:48:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4795963168144226, "perplexity": 1784.8610024557279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00421.warc.gz"}
http://openstudy.com/updates/55e093c4e4b022720610f2eb
## anonymous one year ago The equation f(x) = 5x^2 − 30x + 6 represents a parabola. What is the vertex of the parabola? (−5, 281) (5, −19) (−3, 141) (3, −39) 1. triciaal standard: ax^2 + bx + c; vertex form: a(x−h)^2 + k 2. Nnesha $$\huge\rm \frac{ -b }{ 2a }$$ Use this formula to find the x-coordinate of the vertex, then substitute that value for x in the f(x) equation to find the y-coordinate of the vertex. 3. triciaal vertex (h, k) 4. anonymous What would I input into the formula? 5. Nnesha $$\huge\rm Ax^2+Bx+C$$ quadratic equation, where a = leading coefficient, b = middle term, c = constant term 6. Luigi0210 There are two ways to find the vertex... and since your equation is already in standard form, using $$\large \frac{-b}{2a}$$ would be easier, unless you want to complete the square and use @triciaal's way. 7. anonymous Oh, okay. Thank you, I will give you an update on how I do :D 8. anonymous Well, look at the formula. ax^2: how many x^2 terms do you have in your equation? bx: how many terms do you have with x? And c is your only remaining number.
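For completeness (worked out here, not part of the original exchange), applying the suggested formula to this equation gives:

$$x = \frac{-b}{2a} = \frac{-(-30)}{2(5)} = 3, \qquad f(3) = 5(3)^2 - 30(3) + 6 = 45 - 90 + 6 = -39,$$

so the vertex is (3, −39), the last of the answer choices.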
2016-10-22 13:44:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5250579714775085, "perplexity": 2579.3953659990007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718987.23/warc/CC-MAIN-20161020183838-00146-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/2nd-order-linear-homogeneous-de.736280/
# 2nd order linear homogeneous DE

1. Feb 3, 2014

### mathnerd15

When you solve a 2nd-order linear non-homogeneous DE that is equal to a constant, as in Kirchhoff's 2nd law, and the roots of the auxiliary equation are imaginary, you have a superposition of two solutions. So the particular solution is equal to a constant k, and you can solve for it from q(0), q'(0), q''(0); it will have a constant form? I'd like to spend some time studying the theory of this. Is it always necessary to use Euler's identity? $$q(t)=\frac{-1}{25(25e^{20t})}\cos(10\sqrt{46}t)+\frac{21}{50\sqrt{46}e^{20t}}\sin(10 \sqrt{46}t)+\frac{1}{25}$$ On another question: a DE whose right-hand side equals x sin x can only have derivatives with the terms x sin x, x cos x, sin x, cos x, by the product rule, so in this case you guess the solution as Ax sin x + Bx cos x + C sin x + D cos x... How do you prove, then, that these functions are linearly independent?

Last edited: Feb 3, 2014

2. Feb 3, 2014

### SteamKing Staff Emeritus

By definition, a homogeneous ODE is equal to zero.
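Aside (added here; not part of the thread): the standard tool for the linear-independence question is the Wronskian, which is nonzero somewhere on the interval exactly when the functions are linearly independent solutions. For the pair {sin x, cos x}, for instance:

$$W(\sin x, \cos x) = \begin{vmatrix} \sin x & \cos x \\ \cos x & -\sin x \end{vmatrix} = -\sin^2 x - \cos^2 x = -1 \neq 0.$$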
2017-08-24 04:24:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9345384836196899, "perplexity": 588.2887091297702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886126027.91/warc/CC-MAIN-20170824024147-20170824044147-00644.warc.gz"}
http://svn.haxx.se/subdev/archive-2008-04/0011.shtml
# Re: [Subclipse-dev] Conflict Subclipse vs. Apache 2.2 libraries crashes Eclipse (3.2/3.3)? From: Andrew Vaughan <ajv-lists_at_netspace.net.au> Date: Wed, 9 Apr 2008 11:23:35 +1000 Hi Holger On Wednesday 09 April 2008 08:26, Holger Stratmann wrote: > Hello everybody, > > I'm currently running Subversion on my local machine and had to upgrade > to Apache 2.2 yesterday. > The (subversion) binaries for Apache 2.0 and Apache 2.2 are not > compatible. I knew that. > However, to make Apache 2.2 work, the binaries actually have to be in > the path - which gave me quite a headache yesterday... > > Today, I opened one of my Subversion-controlled projects in Eclipse and > Eclipse crashed and wouldn't even start after that (i.e. crash during > startup). Same for Eclipse 3.2. > This also gave me quite a headache because I initially suspected several > other reasons. After quite some forth and back, I narrowed it down to > the Subclipse plugin and to the binaries in > plugins\org.tigris.subversion.javahl.win32_1.2.4 specifically. > > When I (re)move this folder OR remove the Subversion 1.4.6 libraries > from the path, I can use Eclipse. > > Even after removing org.tigris.subversion.javahl.win32_1.2.4, > I still get the decorations and I think Subclipse even works?! (After > all, org.tigris.subversion.subclipse.core_1.2.4.jar and > org.tigris.subversion.subclipse.ui_1.2.4.jar are still in my plugins > directory and the ("other") binaries are still in my path) > If subclipse can't find javahl, I think it falls back using svnkit. > Soo... the first question is: Will Subclipse still work? I now disabled > it because I wasn't sure. > And the second question: Why does it crash and what can be done about > it? I think it's a bug and gave me quite a headache for quite a while! > See http://subclipse.tigris.org/faq.html#win32-crash Cheers Andrew --------------------------------------------------------------------- To unsubscribe, e-mail: dev-unsubscribe_at_subclipse.tigris.org For additional commands, e-mail: dev-help_at_subclipse.tigris.org Received on 2008-04-09 03:24:01 CEST This is an archived mail posted to the Subclipse Dev mailing list.
2016-05-05 08:36:55
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.880806028842926, "perplexity": 12084.269874905493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860126377.4/warc/CC-MAIN-20160428161526-00127-ip-10-239-7-51.ec2.internal.warc.gz"}
https://chem.libretexts.org/Ancillary_Materials/Laboratory_Experiments/Wet_Lab_Experiments/General_Chemistry_Labs/Online_Chemistry_Lab_Manual/Chem_10_Experiments/06%3A_Single_and_Double_Displacement_Reactions_(Experiment)
# 6: Single and Double Displacement Reactions (Experiment)

##### Objectives
• To perform and observe the results of a variety of single and double displacement reactions,
• To become familiar with some of the observable signs of these reactions,
• To identify the products formed in each of these reactions,
• To write balanced chemical equations for each single and double displacement reaction studied.

During a chemical reaction, both the form and composition of matter are changed. Old substances are converted to new substances, which have unique physical and chemical properties of their own. Some of the observable signs that a chemical reaction has occurred include the following:
• A metallic deposit appears
• Bubbles appear
• A temperature change occurs
• A color change occurs
• A precipitate (cloudy, tiny particles) appears

Note that there are many other observable signs for chemical reactions, but these are the ones most likely to be encountered in this lab.

### Single Displacement Reactions

All single displacement reactions have the general form: $A + BC → B + AC$ Here, $$A$$ is an element and $$BC$$ is usually an aqueous ionic compound or an acid (consisting of $$B^+$$ and $$C^-$$ aqueous ions). $$A$$ displaces $$B$$ in $$BC$$, resulting in the formation of a new element $$B$$ and a new ionic compound or acid, $$AC$$. If the new element $$B$$ is a metal, it will appear as a metallic deposit. If it is a gas, it will appear as bubbles. An Activity Series of elements is often used to determine if $$A$$ will displace $$B$$ in a single displacement reaction. An Activity Series is provided at the end of the Background section. As a rule, if $$A$$ has a higher activity than $$B$$, a single displacement reaction will occur. However, if $$A$$ has a lower activity than $$B$$, a single displacement reaction will not occur.

##### Example 6.1: magnesium metal + aqueous aluminum chloride
Since $$\ce{Mg}$$ is more active than $$\ce{Al}$$, a single displacement reaction will occur. The predicted products are aluminum metal and aqueous magnesium chloride.
Reaction Equation: $\ce{3 Mg (s) + 2 AlCl3 (aq) -> 2 Al (s) + 3 MgCl2 (aq)}$

### Double Displacement Reactions
All double displacement reactions have the general form: $AB + CD → AD + CB$ Reactions that can be classified as double displacements include precipitation reactions, neutralization reactions, and gas forming reactions.

### Precipitation Reactions
Here $$AB$$ and $$CD$$ are usually aqueous ionic compounds (or acids) consisting of aqueous ions ($$A^+$$ and $$B^-$$, $$C^+$$ and $$D^-$$). When a double displacement reaction occurs, the cations and anions switch partners, resulting in the formation of two new ionic compounds AD and CB, one of which is in the solid state. This solid product is an insoluble ionic compound called a precipitate. To determine whether a product ionic compound will be soluble or insoluble, consult the Solubility Rules provided at the end of the Background section. Note that if both of the predicted products are soluble, a precipitation reaction will not occur.

##### Example 6.2: aqueous lead(II) nitrate + aqueous potassium chloride
The predicted products are lead(II) chloride (insoluble) and potassium nitrate (soluble). Since one of the predicted products is insoluble, a precipitation reaction will occur.
Reaction Equation: $\ce{Pb(NO3)2 (aq) + 2 KCl (aq) -> 2 KNO3 (aq) + PbCl2 (s)}$

### Neutralization Reactions

Here $$AB$$ is an acid (consisting of $$H^+$$ and $$X^-$$ aqueous ions) and $$CD$$ is a base (consisting of $$M^+$$ and $$OH^-$$ ions). When a double displacement reaction occurs, the cations and anions switch partners, resulting in the formation of water and a new ionic compound (or salt), which is usually soluble. Neutralization reactions are exothermic, and are generally accompanied by a noticeable release of heat.

##### Example 6.3: sulfuric acid + aqueous lithium hydroxide

The predicted products are water and lithium sulfate.

Reaction Equation: $\ce{H2SO4 (aq) + 2 LiOH (aq) -> Li2SO4 (aq) + 2 H2O (l)}$

### Gas Forming Reactions

In these reactions one of the products ($$AD$$ or $$CB$$) after the double displacement is in the gaseous state. One such example is hydrogen sulfide ($$\ce{H2S}$$). However, one of the products could also be carbonic acid ($$\ce{H2CO3}$$) or sulfurous acid ($$\ce{H2SO3}$$). Both carbonic acid and sulfurous acid are unstable and will decompose to form carbon dioxide and sulfur dioxide gases, respectively:

Carbonic acid: $\ce{H2CO3 (aq) -> H2O (l) + CO2 (g)}$

Sulfurous acid: $\ce{H2SO3 (aq) -> H2O (l) + SO2 (g)}$

##### Example 6.4: nitric acid + aqueous sodium sulfite

The predicted products are sulfurous acid and sodium nitrate. However, sulfurous acid decomposes to sulfur dioxide and water:

Reaction Equation: $\ce{2 HNO3 (aq) + Na2SO3 (aq) -> 2 NaNO3 (aq) + H2SO3 (aq)} \text{ (decomposes)}$

Final Equation: $\ce{2 HNO3 (aq) + Na2SO3 (aq) -> 2 NaNO3 (aq) + H2O (l) + SO2 (g)}$

### Writing Equations for Reactions

• Write the correct formulas for each reactant and place a yield arrow (→) after the last reactant.
• Identify the reaction type – single or double displacement – using the guidelines outlined thus far.
• If you determine that a reaction will occur, write the correct formula(s) of the products after the arrow. If you determine that a reaction will not occur, simply write “no reaction” after the arrow.
• Balance the equation (to ensure mass conservation).
• Be sure to include the physical states of all reactants and products in your final equation.

### Solubility Rules and Activity Series

Solubility Rules:

1. Alkali metal compounds, acetates, nitrates, and ammonium compounds are all soluble.
2. Hydroxides of alkali metals and $$\ce{NH4^{+1}}$$, $$\ce{Ca^{+2}}$$, $$\ce{Sr^{+2}}$$, and $$\ce{Ba^{+2}}$$ are soluble. All others are insoluble.
3. All halides (chlorides etc.) are soluble except for those containing $$\ce{Ag^{+1}}$$, $$\ce{Pb^{+2}}$$, and $$\ce{Hg2^{+2}}$$.
4. Most sulfates are soluble, except for $$\ce{BaSO4}$$, $$\ce{SrSO4}$$, $$\ce{Ag2SO4}$$, $$\ce{PbSO4}$$, and $$\ce{CaSO4}$$.
5. Most phosphates, carbonates, chromates and sulfides are insoluble (except those of the alkali metals and ammonium).
6. In addition, all acids are soluble!
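Because the solubility rules above are essentially a lookup table, they lend themselves to a short program. The following is a minimal, hypothetical Python sketch (my own addition, not part of the original lab); the ion names and the reduced rule subset are illustrative assumptions, not a complete encoding of all six rules:

```python
# Minimal sketch: predict a precipitate from a simplified subset of the
# solubility rules above. Ion names and rule coverage are illustrative only.

ALWAYS_SOLUBLE_CATIONS = {"Li+", "Na+", "K+", "NH4+"}          # rule 1
ALWAYS_SOLUBLE_ANIONS = {"NO3-", "CH3COO-"}                    # rule 1
HALIDE_EXCEPTIONS = {"Ag+", "Pb2+", "Hg2^2+"}                  # rule 3
SULFATE_EXCEPTIONS = {"Ba2+", "Sr2+", "Ag+", "Pb2+", "Ca2+"}   # rule 4
MOSTLY_INSOLUBLE_ANIONS = {"PO4^3-", "CO3^2-", "CrO4^2-", "S2-"}  # rule 5

def is_soluble(cation: str, anion: str) -> bool:
    if cation in ALWAYS_SOLUBLE_CATIONS or anion in ALWAYS_SOLUBLE_ANIONS:
        return True
    if anion == "Cl-":
        return cation not in HALIDE_EXCEPTIONS
    if anion == "SO4^2-":
        return cation not in SULFATE_EXCEPTIONS
    if anion in MOSTLY_INSOLUBLE_ANIONS:
        return False
    return True  # default; a full table would cover every case explicitly

# Example 6.2: Pb(NO3)2 + KCl -> the swapped products are PbCl2 and KNO3
print(is_soluble("Pb2+", "Cl-"))   # False -> PbCl2 precipitates
print(is_soluble("K+", "NO3-"))    # True  -> KNO3 stays dissolved
```

Since one of the swapped products is insoluble, the sketch predicts (correctly, per Example 6.2) that a precipitation reaction occurs.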
Activity Series (highest to lowest activity):

$$\ce{Li}$$, $$\ce{K}$$, $$\ce{Ca}$$, $$\ce{Na}$$, $$\ce{Mg}$$, $$\ce{Al}$$, $$\ce{Zn}$$, $$\ce{Cr -> Cr^{+3}}$$, $$\ce{Fe -> Fe^{+2}}$$, $$\ce{Cd}$$, $$\ce{Ni -> Ni^{+2}}$$, $$\ce{Sn -> Sn^{+2}}$$, $$\ce{Pb -> Pb^{+2}}$$, $$\ce{H2}$$, $$\ce{Cu -> Cu^{+2}}$$, $$\ce{Ag}$$, $$\ce{Hg -> Hg^{+2}}$$, $$\ce{Au -> Au^{+3}}$$

## Procedure

Materials and Equipment

Solids: Copper metal, zinc metal, magnesium metal, solid sodium bicarbonate

Solutions: 6 M sodium hydroxide, 6 M hydrochloric acid, 6 M ammonium hydroxide, 5% acetic acid; all other solutions are 0.1 M and include silver nitrate, barium chloride, sodium sulfate, potassium chloride, lead(II) nitrate, iron(III) chloride, sodium carbonate, cobalt(II) nitrate, sodium phosphate, zinc nitrate, copper(II) sulfate, sodium chloride, potassium nitrate, nickel(II) nitrate.

Equipment: 6 large test tubes, 8 small test tubes, plastic test tube rack (or large beaker)

##### Safety

Be especially cautious when using the 6 M $$\ce{HCl}$$ and 6 M $$\ce{NaOH}$$ as they can burn your skin. Also be aware that skin discoloration will result from contact with $$\ce{AgNO3}$$. If you feel any tingling sensations or see any color changes on your skin, flush with water immediately for a minimum of 15 minutes. Inform your instructor of any chemical contact as soon as possible.

### Instructions for Performing Reactions

For the reactions involving solid reactants (#2, 4, 5, 7, 11, 13), use the large test tubes. For reactions involving solutions only, use small test tubes. Always use clean test tubes that have been rinsed with distilled water. The test tubes do not have to be dry.

Use approximately 3-mL quantities of all solutions. A good estimate is to use two full dropper squirts of each chemical. For reactions involving metals, use just 1-2 pieces of each metal. Place the metal in the test tube first, and then add the solution. The metal should be completely immersed in the solution used.

Perform the following reactions and record your observations for each on the report form. Note that some reactions take longer than others. Thus, if results are not obtained immediately, give the reaction some time. All waste is to be disposed of in the plastic container in the hood!

1. Aqueous barium chloride + aqueous sodium sulfate
2. Zinc metal + hydrochloric acid
3. Aqueous sodium phosphate + aqueous copper(II) sulfate
4. Copper metal + aqueous silver nitrate
5. Solid sodium bicarbonate + acetic acid
6. Aqueous nickel(II) nitrate + aqueous sodium hydroxide
7. Copper metal + aqueous zinc nitrate
8. Aqueous potassium chloride + aqueous silver nitrate
9. Hydrochloric acid + aqueous sodium hydroxide
10. Aqueous sodium carbonate + aqueous cobalt(II) nitrate
11. Zinc metal + aqueous lead(II) nitrate
12. Aqueous sodium chloride + aqueous potassium nitrate
13. Magnesium metal + acetic acid
14. Aqueous iron(III) chloride + aqueous ammonium hydroxide

When finished, complete your lab report by writing the balanced equations for each reaction studied.

## Pre-laboratory Assignment: Single and Double Displacement Reactions

1. In this lab you will perform a variety of single and double displacement reactions. What are three observable signs that a chemical reaction has occurred?
2. What is the general equation of a single displacement reaction?
3. For each of the following sets of reactants, write the balanced equation for the single displacement reaction that occurs. If you determine that a reaction will not occur, write “NR”, and provide a brief explanation.
• Aluminum metal + aqueous nickel(II) nitrate
• Gold metal + hydrobromic acid
4. What is the general equation of a double displacement reaction?
5. For each of the following sets of reactants, write the balanced equation for the double displacement reaction that occurs. If you determine that a reaction will not occur, write “NR”, and provide a brief explanation.
• Aqueous zinc chloride + aqueous sodium chromate
• Aqueous lithium hydroxide + phosphoric acid
6. The equipment required for this lab is fairly simple - just 8 small test tubes and 6 large test tubes.
• Using the small test tubes you will mix two aqueous solutions together and observe whether or not a reaction occurs. What quantity of each solution will you use? How will you estimate this quantity?
• What do the reactions studied in the large test tubes all have in common?
• In the reactions involving both a solid and a solution as reactants, which do you place in the test tube first?

## Lab Report: Single and Double Displacement Reactions

For each of the reactions performed,

• predict the reaction type (single or double displacement)
• predict the names and states of the products formed
• write the balanced “molecular” equation, including all physical states.

For each reaction below, record the Reaction Type, your Observations, the Product Names & States (if none, why not?), and the Balanced Equation.

1. Aqueous barium chloride + aqueous sodium sulfate
2. Zinc metal + hydrochloric acid
3. Aqueous sodium phosphate + aqueous copper(II) sulfate
4. Copper metal + aqueous silver nitrate
5. Solid sodium bicarbonate + acetic acid
6. Aqueous nickel(II) nitrate + aqueous sodium hydroxide
7. Copper metal + aqueous zinc nitrate
8. Aqueous potassium chloride + aqueous silver nitrate
9. Hydrochloric acid + aqueous sodium hydroxide
10. Aqueous sodium carbonate + aqueous cobalt(II) nitrate
11. Zinc metal + aqueous lead(II) nitrate
12. Aqueous sodium chloride + aqueous potassium nitrate
13. Magnesium metal + acetic acid
14. Aqueous iron(III) chloride + aqueous ammonium hydroxide

#### Questions

1. Consider Reactions 3 and 14 studied in this lab. Write the balanced molecular equation (identical to what you completed in the previous section), the complete ionic equation and the net ionic equation for these reactions. Include all physical states, and circle the spectator ions in the complete ionic equations.
Reaction 3: Aqueous sodium phosphate + aqueous copper(II) sulfate

• Balanced Molecular Equation (from page 1):
• Complete Ionic Equation:
• Net Ionic Equation:

Reaction 14: Aqueous iron(III) chloride + aqueous ammonium hydroxide

• Balanced Molecular Equation (from page 3):
• Complete Ionic Equation:
• Net Ionic Equation:

2. Predict the products for the following single and double displacement reactions, and write balanced molecular equations (including physical states) for each of them. If you predict that no reaction will occur, write “NR”, followed by a brief explanation.

• Aluminum metal + aqueous silver acetate
• Aqueous zinc nitrate + aqueous lithium chloride
• Hydrobromic acid + solid magnesium sulfite
• Aqueous rubidium hydroxide + perchloric acid
• Tin metal + phosphoric acid
• Aqueous lithium chromate + aqueous gold(III) iodide

6: Single and Double Displacement Reactions (Experiment) is shared under a CC BY-NC license and was authored, remixed, and/or curated by Santa Monica College.
2022-05-17 08:45:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5610043406486511, "perplexity": 6834.425582136929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517018.29/warc/CC-MAIN-20220517063528-20220517093528-00404.warc.gz"}
http://math.stackexchange.com/questions/293412/to-check-linear-span-of-complex-number/293427
# To check the linear span of complex vectors

Consider the set $\{ (1,0,-i),(1+i,1-i,1),(i,i,i)\}$ of three vectors from $\mathbb{C}^3$. Which of the following is true?

a) the linear span of the set is of dimension $1$

b) the linear span of the set is of dimension $2$

c) each element of $\mathbb{C}^3$ can be generated as a real combination of elements of the set

d) the set is a basis of $\mathbb{C}^3$

- You can row reduce as you are probably accustomed to. The vector $(i,i,i)$ is kind of ugly, so I would replace it by $(1,1,1)$. That's OK, since $(1,1,1)$ is $(i,i,i)$ multiplied by a non-zero constant. – André Nicolas Feb 3 '13 at 7:09

If you set up a matrix whose rows are the vectors noted above and find the Reduced Row Echelon Form of the matrix, you will see that the vectors form a basis for $\mathbb C^3$. In fact:
$$\begin{pmatrix} 1 & 0 & -i\\ 1+i & 1-i & 1\\ i&i &i \\ \end{pmatrix}\approx\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$

- I like this answer! +1 – amWhy Feb 3 '13 at 13:21

Alternatively, check linear independence directly. Suppose
$$a(1,0,-i) + b(1+i,1-i,1) + c(i,i,i) = (0,0,0).$$
Componentwise this gives
$$a + b(1+i) + ci = 0 \tag{1}$$
$$b(1-i) + ci = 0 \tag{2}$$
$$-ai + b + ci = 0 \tag{3}$$
Subtracting $(3)$ from $(2)$ gives $i(a-b)=0$, so $a=b$. Substituting $b=a$ into $(1)$ and $(2)$ gives $a(2+i)+ci=0$ and $a(1-i)+ci=0$; subtracting these yields $a(1+2i)=0$, so $a=0$, and then $b=0$ and $c=0$ follow from $(2)$. Thus $a=b=c=0$, and the set is a basis.
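For a quick numerical sanity check (my own addition, not part of the original thread), one can compute the rank of the matrix with NumPy; this is a minimal sketch assuming NumPy is available:

```python
import numpy as np

# Rows are the three given vectors in C^3 (1j is Python's imaginary unit).
M = np.array([
    [1,      0,      -1j],
    [1 + 1j, 1 - 1j,  1 ],
    [1j,     1j,      1j],
])

# Rank 3 over C means the rows are linearly independent,
# hence a basis of C^3 (answer d).
print(np.linalg.matrix_rank(M))  # expected output: 3
```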
2015-07-29 11:39:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9285495281219482, "perplexity": 242.38952910868232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986423.95/warc/CC-MAIN-20150728002306-00284-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathhelpforum.com/geometry/180542-finding-radius-cylinder.html
# Math Help - Finding the radius of a cylinder

1. ## Finding the radius of a cylinder

Hello, I am a very new member (not one who made a new account because I was banned or anything ;-) ). Can anyone please show me how to answer the following question? My exam is tomorrow, so the sooner the better:

Water comes from a cylindrical tank. When 2 cubic metres of water leave the tank, the level of water in the tank goes down by 1.3 millimetres. Calculate the radius of the tank, in metres, correct to one decimal place.

The answer is $r = \sqrt{\dfrac{2}{\pi \times 0.0013}} \approx 22.1$.

Thanks, IBstudent... oh sorry, I mean IGCSEstudent.

2. Hi IGCSE student. Since you want an answer in metres, first change the drop in level to metres. This drop times the cross-sectional area of the cylinder (in square metres) equals 2 cubic metres. Solve that equation for the radius $r$ of the cylinder. bjh

3. Try to completely imagine a new cylinder, one which has a height of 0.0013 m. Since the volume is

$Volume = \pi \cdot Radius^2 \cdot Height$

we have

$2 = \pi r^2 \cdot 0.0013$
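The arithmetic is easy to verify in a couple of lines of Python (my own check, not part of the original thread):

```python
import math

volume_m3 = 2.0    # water removed from the tank, in cubic metres
drop_m = 1.3e-3    # 1.3 mm drop in water level, converted to metres

# volume = pi * r^2 * drop  =>  r = sqrt(volume / (pi * drop))
r = math.sqrt(volume_m3 / (math.pi * drop_m))
print(round(r, 1))  # 22.1 (metres)
```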
2015-03-06 12:05:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7698941826820374, "perplexity": 1463.3750091763902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936468536.58/warc/CC-MAIN-20150226074108-00085-ip-10-28-5-156.ec2.internal.warc.gz"}
http://clay6.com/qa/42626/an-electric-dipole-of-moment-7-2-10-c-m-is-placed-in-a-uniform-electric-fie
# An electric dipole of moment $7.2 \times 10^{−8}\; C\; m$ is placed in a uniform electric field of magnitude $2.42 \times 10^4\; N\; C^{−1}$. Initially, the angle between the dipole and the field is zero. Then, the dipole is turned to $30^{\circ}$ with respect to the direction of the electric field. What is the potential energy stored in the dipole in the given field?

$(A)\; 2.33 \times 10^{−4}\; J$
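The answer can be checked with the standard dipole energy formula $U(\theta) = -pE\cos\theta$: interpreting "stored" as the change in potential energy relative to the initially aligned position, $\Delta U = pE(\cos 0^{\circ} - \cos 30^{\circ})$. A quick numerical check (my own sketch, not from the original page):

```python
import math

p = 7.2e-8   # dipole moment, C m
E = 2.42e4   # field magnitude, N/C

# U(theta) = -p E cos(theta); energy stored rotating from 0 to 30 degrees
delta_U = p * E * (math.cos(0) - math.cos(math.radians(30)))
print(f"{delta_U:.2e} J")  # ~2.33e-04 J, matching option (A)
```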
2016-10-23 22:08:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512666463851929, "perplexity": 100.67491436337218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719437.30/warc/CC-MAIN-20161020183839-00562-ip-10-171-6-4.ec2.internal.warc.gz"}
https://hafegroenteenfruitpakketten.nl/grain/1240/12-1640141622/
# Single machine batch scheduling with release times

### Single machine batch scheduling with release times (22/11/2007)

Motivated by a high-throughput logging system, we investigate the single machine scheduling problem with batching, where jobs have release times and processing times, and batches require a setup time. Our objective is to minimize the total flow time, in the online setting. For the online problem where all jobs have identical processing times, we propose a 2-competitive algorithm.

### Single Machine Batch Scheduling with Release Times

We consider the single-family scheduling problem with batching on a single machine, with a setup time and release times, under batch availability. We investigate the objective of minimizing the total flow time, in both the online and offline setting. For the online problem, we propose a 2-competitive algorithm.

### Single machine batch scheduling with release times and delivery costs (18/10/2011)

We study single machine batch scheduling with release times. Our goal is to minimize the sum of weighted flow times (or completion times) and delivery costs. Since the problem is strongly $\mathcal{NP}$-hard even with no delivery cost and identical weights for all jobs, an approximation algorithm is presented for the problem with identical weights. (Journal of Scheduling, Vol. 16, Issue 1, pp. 69-79, Feb 2013; authors: George Steiner, Esaignani Selvarajah, Rui Zhang.)

### Scheduling a single batching machine with non-identical job release times and sizes using Ant Colony Clustering

This paper studies the problem of scheduling a batching machine with non-identical job release times and sizes using Ant Colony Clustering.

### Single machine batch scheduling problem with family setup times and release dates to minimize makespan

In this paper we consider the single machine batch scheduling problem with family setup times and release dates to minimize makespan. We show that this problem is strongly NP-hard. We also give a polynomial-time approximation scheme (PTAS) for the problem. (Journal of Scheduling, Vol. 9, No. 6.)

### Single machine batch scheduling with release times - EconBiz

Journal of Combinatorial Optimization, Norwell, Mass.: Springer Science + Business Media, ISSN 1382-6905, ZDB-ID 1339574-9, Vol. 17 (2009), No. 3, pp. 323-338.

### Online Single Machine Batch Scheduling

We are concerned with the problem of safely storing a history of actions that happen rapidly in real time, such as in "buy" and ...

### Scheduling Single Machine (lecture notes)

Single machine models. Observation: at each point in time, schedule an available job (a job whose release date has passed) with the earliest due date. The preemptive EDD rule leads to at most k preemptions (k = number of distinct release dates). For the single-machine maximum lateness problem 1|pmtn,rj|Lmax, the preemptive EDD rule is: at each point in time, schedule an available job with the earliest due date. A sketch of this rule is given after the reference list below.

### Single-machine batch scheduling with job processing time compatibility

Theoretical Computer Science (journal article).

### Single machine scheduling with resource dependent release times

[1] T. C. Edwin Cheng, Adam Janiak, Mikhail Y. Kovalyov, Single machine batch scheduling with resource dependent setup and processing times, 2001, Eur. J. Oper. Res.
[2] Stanislaw Zdrzalka, Scheduling jobs on a single machine with release dates, delivery times and controllable processing times: worst-case analysis, 1991, Oper. Res. Lett.
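The preemptive EDD rule quoted in the lecture-note snippet above is simple enough to sketch. The following is a minimal, hypothetical Python illustration (my own, not from any of the cited papers): it simulates 1|pmtn,rj|Lmax in unit time steps, always running the released, unfinished job with the earliest due date. Integer release and processing times are an assumption of the sketch.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    release: int   # r_j
    proc: int      # p_j (integer processing time)
    due: int       # d_j

def preemptive_edd(jobs):
    """Simulate 1|pmtn,rj|Lmax: at every time unit, run the available
    unfinished job with the earliest due date. Returns the max lateness."""
    remaining = {j.name: j.proc for j in jobs}
    completion = {}
    t = 0
    while remaining:
        ready = [j for j in jobs if j.release <= t and j.name in remaining]
        if not ready:
            # idle until the next release
            t = min(j.release for j in jobs if j.name in remaining)
            continue
        job = min(ready, key=lambda j: j.due)  # earliest due date wins
        remaining[job.name] -= 1               # run it for one time unit
        t += 1
        if remaining[job.name] == 0:
            del remaining[job.name]
            completion[job.name] = t
    return max(completion[j.name] - j.due for j in jobs)

jobs = [Job("A", 0, 3, 6), Job("B", 1, 2, 4), Job("C", 2, 1, 3)]
print(preemptive_edd(jobs))  # 0: every job meets its due date here
```

A preemption occurs exactly when a newly released job has an earlier due date than the one currently running, which is why the number of preemptions is bounded by the number of distinct release dates.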
2022-05-20 13:11:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35171353816986084, "perplexity": 7589.568891695599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662532032.9/warc/CC-MAIN-20220520124557-20220520154557-00685.warc.gz"}
https://javalab.org/en/lorenzs_water_mill_en/
# Lorenz’s Water Mill

There is a water wheel. There is a small hole in the bottom of each bucket. When water falls from the faucet above, the wheel rotates with the water flowing down. Let's predict which direction the wheel will turn.

It is easy to assume that the wheel will simply turn in one direction. But if you think about it a bit, you can see that it is not so simple. Depending on the amount of water coming from the faucet, the rotation direction of the wheel changes.

1. If the amount of water is small, the wheel hardly moves.
2. As the amount of water increases, the wheel slowly starts turning in one direction or the other.
3. As the amount of water continues to increase, the wheel rotates faster, and buckets reach the other side before their water has drained out. Because of this, the rotation speed drops again and eventually the direction of rotation reverses. In this way, the direction of rotation changes regularly.
4. Increasing the flow even further makes the motion of the wheel more and more complex, until it eventually turns erratically. In other words, it enters a state of chaos.
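This water wheel is the classic mechanical analogue of the Lorenz system, whose equations show the same transition from steady rotation to chaotic reversals. Below is a minimal Python sketch (my own addition, using the standard textbook parameter values as assumptions) that integrates the Lorenz equations with a simple Runge-Kutta step; sign flips of x correspond to the wheel reversing direction:

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Standard Lorenz equations; at rho = 28 the solution is chaotic,
    # mirroring the erratic direction reversals of the water wheel.
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    # One classical 4th-order Runge-Kutta step.
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state, dt = (1.0, 1.0, 1.0), 0.01
for step in range(2000):
    state = rk4_step(lorenz, state, dt)
    if step % 500 == 0:
        print(f"t={step * dt:5.2f}  x={state[0]:8.3f}")
```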
2018-12-19 05:16:06
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8218519687652588, "perplexity": 355.26507809836716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831334.97/warc/CC-MAIN-20181219045716-20181219071716-00289.warc.gz"}
http://exciting.wikidot.com/oxygen-x-ray-emission-spectra
# X-Ray Emission Spectra

by Christian Vorwerk for exciting oxygen

Purpose: In this tutorial, we will show how to perform calculations of X-ray emission spectra (XES). To do so, we show the example of the F K-edge emission in LiF.

## 1. Introduction

The calculation of X-ray emission spectra is similar to that of X-ray absorption spectra, and we will assume in the following that you have already finished the tutorial Excited states from BSE. The main difference resides in the initial states of the transitions, which are core states. As such, they are solutions of the radial Dirac equation in the muffin-tin spheres. A description of the basic setup for BSE calculations can be found in X-ray Absorption Spectra using BSE. In addition, all the input parameters are described in Input Reference.

Note that, in order to obtain reliable X-ray emission spectra, the calculation needs to be converged with respect to several parameters, such as the size of the k-mesh.

## 2. The F K-edge in LiF

In a first step, we calculate the electronic ground state of LiF with the following input file (do not forget to change the path in speciespath):

<input>
  <title>LiF-BSE</title>
  <structure speciespath="$EXCITINGROOT/species">
    <crystal scale="7.608">
      <basevect>0.5 0.5 0.0</basevect>
      <basevect>0.5 0.0 0.5</basevect>
      <basevect>0.0 0.5 0.5</basevect>
    </crystal>
    <species speciesfile="Li.xml">
      <atom coord="0.0000 0.0000 0.0000" />
    </species>
    <species speciesfile="F.xml">
      <atom coord="0.5000 0.5000 0.5000" />
    </species>
  </structure>
  <groundstate do="fromscratch" ngridk="10 10 10" xctype="GGA_PBE_SOL" gmaxvr="14.0"/>
</input>

Create a new directory, which you can call, e.g., LiF-XES, and move inside it:

$ mkdir LiF-XES
$ cd LiF-XES

The symbol $ indicates the shell prompt.

Copy and paste the text of the example above for the ground state of LiF into the file input.xml. Replace the $EXCITINGROOT variable with the correct path by using the command:

$ SETUP-excitingroot.sh

Run the ground-state calculation with the usual command:

$ time exciting_smp

After the ground-state calculation is completed, we inspect the output file EVALCORE.OUT, which contains the information about core electrons. Since we are interested here in the excitations from the fluoride K-edge, we have to look specifically for the F 1s electron. For this purpose we have to consider the core state with quantum numbers n=1 and l=0:

Species : 2 (F), atom : 1
n = 1, l = 0, k = 1 : -23.88933329

From the file EVALCORE.OUT we obtain the information that the F 1s electron in LiF has a binding energy of about -23.89 Ha. With this knowledge, we can add to the input file the parameters required to perform a XES calculation. Since we are dealing with excited-state properties, we have to insert the xs element. Paste the following block into the input file:

...
<xs xstype="BSE" ngridk="4 4 4" vkloff="0.097 0.273 0.493" ngridq="4 4 4" nempty="30" gqmax="3.0" broad="0.007" scissor="0.20947" tevout="true">
  <energywindow intv="23.9 24.1" points="1200"/>
  <screening screentype="full" nempty="100"/>
  <BSE xes="true" bsetype="IP" xasspecies="2" xasatom="1" nstlxas="1 5" />
  <qpointset>
    <qpoint>0.0 0.0 0.0</qpoint>
  </qpointset>
</xs>
...

You should already be familiar with many of the parameters shown above. However, three parameters are unique to XES calculations:

• The attribute xes = "true" in the BSE element triggers the X-ray emission calculation.
• In XES calculations, the attribute nstlxas describes the number of occupied states used in the BSE Hamiltonian.
  In the XES process, valence electrons fill the core hole, and with this parameter we specify which valence states are included in the calculation. In the example above, we use all 5 valence bands of LiF.
• While for X-ray absorption the interaction of the excited electron and the core hole has a significant influence on the spectrum, there is no core hole in the final state of the X-ray emission process. Therefore, bsetype = "IP" has to be chosen to obtain results that can be compared with experimental ones. More details can be found here.

We are now ready to run the calculation with the usual command (do not forget to skip the ground-state run by inserting the attribute do = "skip" inside the groundstate element):

$ time exciting_smp

The calculation will run for a few seconds. The output will be the same as that described in Excited states from BSE. The X-ray emission spectra are given by the imaginary part of the dielectric function (as in the case of X-ray absorption). In order to visualize the results, we execute the following commands:

$ cp EPSILON/EPSILON_BSE-IP_SCR-full_OC11.OUT epsilon-IP
$ PLOT-files.py -f epsilon-IP -lx 'Energy [eV]' -ly 'Im $\varepsilon_M$' -t 'Macroscopic dielectric function' -g -rc -cy 3 -x 650 656 -nl

The result is stored in PLOT.png and PLOT.eps. (The reference plot of the resulting spectrum is not reproduced here.)
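If PLOT-files.py is not at hand, the same figure can be produced directly with matplotlib. This is a minimal sketch of my own; it assumes (as the -cy 3 flag above suggests) that the EPSILON output file is whitespace-separated with the energy in the first column and Im εM in the third, which you should verify against your own output:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed layout: column 1 = energy (eV, since tevout="true"),
# column 3 = Im eps_M; adjust the indices if your file differs.
# np.loadtxt skips lines starting with '#' by default.
data = np.loadtxt("epsilon-IP")
energy, im_eps = data[:, 0], data[:, 2]

plt.plot(energy, im_eps)
plt.xlim(650, 656)                 # same window as the -x flag above
plt.xlabel("Energy [eV]")
plt.ylabel(r"Im $\varepsilon_M$")
plt.title("Macroscopic dielectric function")
plt.savefig("PLOT.png", dpi=200)
```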
2023-03-30 18:13:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7810267806053162, "perplexity": 1081.9758944459566}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00046.warc.gz"}
http://freewheelings.com/chip-the-wsk/comparison-between-rc4%2C-rc5-and-rc6-9f77ff
# Comparison between RC4, RC5 and RC6

Topic: What's the difference between RC4, RC5 and RC6?

Cryptography literally means the art of secret writing: the word comes from the Greek kryptos (hidden) and graphia (writing). It provides the means of hiding (encryption) and recovering (decryption) information, and symmetric ciphers, where both parties share a private key kept secret between them, are used for everything from encrypting traffic between a server and a client to encrypting data on a disk. RC2, RC4, RC5 and RC6 are a series of symmetric ciphers from RSA Security, all created or co-created by Ron Rivest, one of the creators of RSA and the creator of MD2, MD4 and MD5 and co-creator of MD6 ("RC" is usually taken to stand for "Ron's Code" or "Rivest Cipher"). Interestingly, RC1 was never published, and RC3 was broken at RSA Security during development. Of the family, RC4 and RC6 are by far the most well-researched: RC4 because of its ubiquitous use despite its weaknesses, and RC6 because it was part of a competition that involved extensive analysis.

## RC2

RC2 is a 64-bit source-heavy unbalanced Feistel cipher with an 8 to 1024-bit key size, in steps of 8; the default key size is 64 bits. It has a heterogeneous round structure with a total of 18 rounds (16 "MIXING" rounds and 2 "MASHING" rounds). It is a complex cipher using secret indices to select key material, and it is vulnerable to a related-key attack given 2^34 known plaintexts. It is defined in RFC 2268, though it was originally leaked to a mailing list in 1996 through reverse engineering of software that used it. It is relatively poorly analyzed and could easily have severe security vulnerabilities that have not been discovered.

## RC4

RC4 is a stream cipher with a 40 to 2048-bit key, designed in 1987 by Ron Rivest for RSA Data Security, with byte-oriented operations and a maximum theoretical strength of log2(256!) ≈ 1684 bits. It uses a variable-length key to initialize a 256-byte state table as a random permutation, then generates a keystream by repeatedly swapping values in that permutation; the data stream is merely XORed with the generated key sequence. It was leaked to a mailing list (before RC2 was) in 1994.

RC4 is badly broken and has been since 48 hours after it was published. The first portion of the keystream shows a significant bias, though the bias shrinks as more keystream is generated, so many implementations drop the first few hundred (or even thousand) bytes. Other biases and problems, such as vulnerabilities in how it uses a nonce, make it difficult to use securely: if you ever find yourself needing to use RC4, combine the key and nonce by putting them through a cryptographic hash function first, rather than concatenating them as traditionally done. Google no longer uses RC4 in TLS because biases early in the keystream can make it possible to recover secret values from HTTP headers. Using it is like using MD4: rarely safe, usually fatal.

## RC5

RC5 is a symmetric-key block cipher notable for its simplicity, published in 1994 and developed after RC4. It is a fast, fully parameterized algorithm: the block size (32, 64 or 128 bits), the key size (0 to 2040 bits) and the number of rounds (1 to 255; 12 originally suggested) are all variable. It uses a Feistel-like network built from data-dependent rotations, modular addition and XOR. The data-dependent rotations are a feature that theoretically improves security, but in practice they often make the cipher harder to analyze and can leave weaknesses that are only found later. 12-round RC5 with 64-bit blocks is vulnerable to a differential attack using 2^44 chosen plaintexts; increasing the rounds to 18 or 20 should protect against this, and with sufficient rounds RC5 is considered secure.

## RC6

RC6 is a fast symmetric cipher proposed by Ronald L. Rivest (MIT Lab) together with M.J.B. Robshaw, R. Sidney and Y.L. Yin (RSA Labs) for consideration as the AES standard in 1998, designed to meet the requirements of increased security and better performance. It is a 20-round block cipher based on RC5, using the same basic operations but adding two features: integer multiplication, to improve diffusion, and four working registers instead of RC5's two; in fact, RC6 can be viewed as interweaving two parallel RC5 encryption processes. RC6 proper has a block size of 128 bits and supports key sizes of 128, 192 and 256 bits (up to 2040 bits), but like RC5 it may be parameterized to support a variety of word lengths, key sizes and round counts, so there are different versions denoted RC6-w/r/b. RC6 ultimately lost the AES competition to Rijndael, but it made the final five (along with Twofish, Serpent, Rijndael and MARS); given the extensive analysis it received in the competition, it is thought to be quite secure. Hence it can be more robust than both RC2 and RC5.

## Which should you use?

None of them, if you can help it. If you find yourself wondering whether to choose RC6, the answer is probably no: there are still better alternatives. Stick with the winner of the competition, Rijndael (which we now just call "AES"). Of the ciphers above, RC4 is the only stream cipher; RC5 and RC6 are block ciphers, and other common symmetric ciphers include DES (the Data Encryption Standard, designed at IBM, also called DEA), Triple DES (3DES, which applies DES three times), AES, Blowfish, CAST5 and IDEA. In one forum exchange where a user (robmandu) had to choose between only RC4 and 3DES, the advice was that AES is definitely the way to go, but that 3DES would probably be fine in practice, since the number of folks competent enough to break it is small. A short educational sketch of RC4 is given after the list below.

## Other uses of the names RC4, RC5 and RC6

The names RC1 to RC7 also label several unrelated things:

• cdma2000 radio configurations: RC1-RC6 label reverse and forward channel configurations (R-SCCH, R-SCH, F-SYNCH, etc.); the only differences between RC3 and RC5, or between RC4 and RC6, are the number of Walsh repeats (1 versus 3), corresponding to spreading rates 1 and 3.
• Swedish Rc locomotives: Rc1, Rc2, Rc4 and Rc5 have a maximum allowed speed of 135 km/h; Rc3 and Rc6 have a maximum allowed speed of 160 km/h. Rc7, a rebuild of Rc6 meant to haul replacement trains needed when the X 2000 trains were canceled or delayed, had a maximum allowed speed of 180 km/h, although the maximum speed permitted without emergency electromagnetic track brakes is 160 km/h.
• Burglary resistance classes: security doors are tested according to EN 1627 and certified to resistance classes 4 and 5 (RC4 and RC5), a very robust product class that meets high security requirements.
• ESEE knives: the RC4 and RC6 blades are the same thickness (.188 in); the RC6 is simply an inch longer, which some consider makes it the better camp knife.
• greenpois0n jailbreak releases: the RC6 final added Apple TV 2 support and fixed emulator compatibility issues present in the RC5 beta.
• Philips infrared remote-control protocols: RC5 and RC6 are related IR protocols (distinct from the NEC protocol); one forum thread asks whether an RC6 IR receiver will accept RC5 codes, and another notes that most Philips CD-i players (except the CDI 450, CDI 550 and the CD-i/Hifi combination FW380i) provided an IR receiver for a wireless controller.
• Miscellaneous: Canon wireless remotes RC-1, RC-5 and RC-6; TTE's RC-series band-rejection filters (with normalized response comparisons for RC3 through RC6); PIC microcontroller port C pins (RC4/RC5/RC6); and reinforced-concrete test beams labeled RC0-RC6 (with effective span-to-depth ratios of 1.5 or 2.0 and 20 mm clear cover to the reinforcement bars).
Cipher is a stream cipher that should not be used safely characteristics of the tension compression... Using it is defined in RFC 2040, industrial greenhouse facility RC5 have a maximum theoretical strength of log2 256! Products APPEAR on this site are from COMPANIES from which TECHNOLOGYADVICE RECEIVES COMPENSATION full and curved as n?. Between RC2, RC4, RC5 is.250 inches the RC6 final bits initialize! We now just call AES '' ) and has been since 48 hours after it was published iwasts... … - 16-bit compare, max 216 7 7 ) | answered Jun 26,. Addresses two word blocks at a time was ) in 1994 RC5, RC6, DES,,. That meets high security requirements [ mm ] * * Weight [ kg/m² ] *... Rc3 RC4 RC5 RC6 Forward/Downlink Channels F-SYNCH the optimum I would get RC6. Published in a clean, industrial greenhouse facility, and so few places where it can go fatally wrong and. And asymmetric encryption is widely used for anything the number of rounds are variable, AES-192 and AES-256 are number! Have no idea why it ever got so much traction data on disk! Wrong, and XOR, as well as modular addition encryption are Blowfish, RC4, Y.L. Cover to the RC6 IMO is the only differences between them lie in AES!
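Since the description above is compressed, here is a rough MATLAB sketch (my own, for illustration only, with a made-up key) of the RC4 key scheduling and keystream generation, including the mitigation of discarding the first keystream bytes; RC4 remains broken and should not be used in practice:

key = [1 2 3 4 5];                  % key bytes (0..255); hypothetical
S = 0:255; j = 0;                   % state: a permutation of 0..255
for i = 0:255                       % key scheduling (KSA)
  j = mod(j + S(i+1) + key(mod(i, numel(key)) + 1), 256);
  t = S(i+1); S(i+1) = S(j+1); S(j+1) = t;   % swap
end
ndrop = 3072; nbytes = 16;          % drop the early, biased bytes
ks = zeros(1, nbytes); i = 0; j = 0;
for n = 1:(ndrop + nbytes)          % keystream generation (PRGA)
  i = mod(i + 1, 256);
  j = mod(j + S(i+1), 256);
  t = S(i+1); S(i+1) = S(j+1); S(j+1) = t;   % swap
  if n > ndrop
    ks(n - ndrop) = S(mod(S(i+1) + S(j+1), 256) + 1);
  end
end
% encryption is then just an XOR: c = bitxor(plaintext_bytes, ks)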
2021-04-16 08:53:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1812041699886322, "perplexity": 5857.294871135658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00533.warc.gz"}
https://www.ques10.com/p/31668/effect-of-arithmetic-round-off-errors/
Question: Effect of arithmetic round-off errors. • The difference equation of the FIR filter is given by $y(n)=\sum_{m=0}^{N-1}h(m)\,x(n-m)$ where each variable is represented by a fixed number of bits. Typically the input and output samples x(n−m) and y(n) are each represented by 12 bits and the coefficients h(m) by 16 bits, in two's complement format. • The output of the filter is obtained as a sum of products of h(m) and x(n−m). After each multiplication the product contains more bits than either h(m) or x(n−m). • For example, if a 12-bit input is multiplied by a 16-bit coefficient, the result is 28 bits long and needs to be quantized back to 16 bits before it can be stored in memory, or to 12 bits before it can be sent to the DAC. • This leads to errors whose effects are similar to those of ADC noise. The common ways to quantize the result of an arithmetic operation are: (i) truncation, i.e. retaining the most significant bits and discarding the least significant bits; (ii) rounding, i.e. choosing the representable value closest to the unrounded result, which is achieved by adding half an LSB before truncating. • Round-off errors can be minimized by representing all products exactly in a double-length register and rounding only once, after the final sum has been accumulated.
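A small MATLAB sketch (not part of the original answer, with made-up numbers) makes the two quantization rules concrete: the 28-bit product of a 12-bit sample and a 16-bit coefficient is requantized to 16 bits by discarding its 12 LSBs, either by plain truncation or by adding half an LSB (here 2^11) first:

x = 1234;                             % 12-bit input sample
h = -23456;                           % 16-bit coefficient
p = x * h;                            % double-length (28-bit) product
p_trunc = floor(p / 2^12);            % truncation: discard the 12 LSBs
p_round = floor((p + 2^11) / 2^12);   % rounding: add half an LSB, then truncate
e_trunc = p - p_trunc * 2^12;         % error in [0, 2^12): biased
e_round = p - p_round * 2^12;         % error in [-2^11, 2^11): roughly zero-mean

The different error ranges show why rounding behaves like additive zero-mean noise while truncation introduces a systematic bias.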
2019-10-15 20:27:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6050398349761963, "perplexity": 737.310604647333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00070.warc.gz"}
http://regularize.wordpress.com/category/math/imaging/
### Imaging I fell a little bit behind on reporting on my new preprints. In this post I'll blog on two closely related ones; one of them already a bit old, the other one quite recent: The papers are the following two. As clear from the titles, both papers treat a similar method. In the first paper we propose to view several known algorithms such as the linearized Bregman method, the Kaczmarz method or the Landweber method from a different angle, from which they all are special cases of another algorithm. To start with, consider a linear system $\displaystyle Ax=b$ with ${A\in{\mathbb R}^{m\times n}}$. A fairly simple and old method to solve this is the Landweber iteration, which is $\displaystyle x^{k+1} = x^k - t_k A^T(Ax^k-b).$ Obviously, this is nothing else than a gradient descent for the functional ${\|Ax-b\|_2^2}$ and indeed converges to a minimizer of this functional (i.e. a least squares solution) if the stepsizes ${t_k}$ fulfill ${\epsilon\leq t_k\leq 2\|A\|^{-2} - \epsilon}$ for some ${\epsilon>0}$. If one initializes the method with ${x^0=0}$ it converges to the least squares solution with minimal norm, i.e. to ${A^\dag b}$ (with the pseudo-inverse ${A^\dag}$). A totally different method is even older: The Kaczmarz method. Denoting by ${a_k}$ the ${k}$-th row of ${A}$ and by ${b_k}$ the ${k}$-th entry of ${b}$, the method reads as $\displaystyle x^{k+1} = x^k - a_{r(k)}^T\frac{a_{r(k)}\cdot x^k - b_{r(k)}}{\|a_{r(k)}\|_2^2}$ where ${r(k) = (k\mod m) +1}$ or any other “control sequence” that picks up every index infinitely often. This method also has a simple interpretation: Each equation ${a_k\cdot x = b_k}$ describes a hyperplane in ${{\mathbb R}^n}$. The method does nothing else than projecting the iterates orthogonally onto the hyperplanes in an iterative manner. In the case that the system has a solution, the method converges to one, and if it is initialized with ${x^0=0}$ we have again convergence to the minimum norm solution ${A^\dag b}$. There is yet another method that solves ${Ax=b}$ (but now it’s a bit more recent): The iteration produces two sequences of iterates $\displaystyle \begin{array}{rcl} z^{k+1} & = &z^k - t_k A^T(Ax^k - b)\\ x^{k+1} & = &S_\lambda(z^{k+1}) \end{array}$ for some ${\lambda>0}$, the soft-thresholding function ${S_\lambda(x) = \max(|x|-\lambda,0)\mathrm{sgn}(x)}$ and some stepsize ${t_k}$. For reasons I will not detail here, this is called the linearized Bregman method. It also converges to a solution of the system. The method is remarkably similar to, but different from, the Landweber iteration (if the soft-thresholding function weren’t there, both would be the same). It converges to the solution of ${Ax=b}$ that has the minimum value of the functional ${J(x) = \lambda\|x\|_1 + \tfrac12\|x\|_2^2}$. Since this solution is close, and for ${\lambda}$ large enough identical, to the minimum-${\|\cdot\|_1}$ solution, the linearized Bregman method is a method for sparse reconstruction and applied in compressed sensing. Now we put all three methods in a joint framework, and this is the framework of split feasibility problems (SFP). An SFP is a special case of a convex feasibility problem where one wants to find a point ${x}$ in the intersection of multiple simple convex sets. In an SFP one has two different kinds of convex constraints (which I will call “simple” and “difficult” in the following): 1.
Constraints that just demand that ${x\in C_i}$ for some convex sets ${C_i}$. I call these constraints “simple” because we assume that the projection onto each ${C_i}$ is simple to obtain. 2. Constraints that demand ${A_ix\in Q_i}$ for some matrices ${A_i}$ and simple convex sets ${Q_i}$. Although we assume that projections onto the ${Q_i}$ are easy, these constraints are “difficult” because of the presence of the matrices ${A_i}$. If there were only simple constraints, a very basic method to solve the problem is the method of alternating projections, also known as POCS (projection onto convex sets): Simply project onto all the sets ${C_i}$ in an iterative manner. For difficult constraints, one can do the following: Construct a hyperplane ${H_k}$ that separates the current iterate ${x^k}$ from the set defined by the constraint ${Ax\in Q}$ and project onto the hyperplane. Since projections onto hyperplanes are simple and since the hyperplane separates, we move closer to the constraint set and this is a reasonable step to take. One such separating hyperplane is given as follows: For ${x^k}$ compute ${w^k = Ax^k-P_Q(Ax^k)}$ (with the orthogonal projection ${P_Q}$) and define $\displaystyle H_k = \{x\ : (A^Tw^k)^T\cdot x \leq (A^Tw^k)^T\cdot x^k - \|w^k\|_2^2\}.$ Illustration of projections onto convex sets and separating hyperplanes Now we can already unite the Landweber iteration and the Kaczmarz method as follows: Consider the system ${Ax=b}$ as a split feasibility problem in two different ways: 1. Treat ${Ax=b}$ as one single difficult constraint (i.e. set ${Q=\{b\}}$). Some calculations show that the above proposed method leads to the Landweber iteration (with a special stepsize); a sketch of this projection step is given below. 2. Treat ${Ax=b}$ as ${m}$ simple constraints ${a_i\cdot x = b_i}$. Again, some calculations show that this gives the Kaczmarz method. Of course, one could also work “block-wise” and consider groups of equations as difficult constraints to obtain “block-Kaczmarz methods”. Now comes the last twist: By adapting the notion of “projection” one gets more methods. Particularly interesting is the notion of Bregman projections which comes from Bregman distances. I will not go into detail here, but Bregman distances are associated to convex functionals ${J}$ and by replacing “projection onto ${C_i}$ or hyperplanes” by respective Bregman projections, one gets another method for split feasibility problems. The two things I found remarkable: • The Bregman projection onto hyperplanes is pretty simple. To project some ${x^k}$ onto the hyperplane ${H = \{x\ :\ a^T\cdot x\leq \beta\}}$, one needs a subgradient ${z^k\in\partial J(x^k)}$ (in fact an “admissible” one, but for that detail see the paper) and then performs $\displaystyle x^{k+1} = \nabla J^*(z^k - t_k a)$ (${J^*}$ is the convex dual of ${J}$) with some appropriate stepsize ${t_k}$ (which is the solution of a one-dimensional convex minimization problem). Moreover, ${z^{k+1} = z^k - t_k a}$ is a new admissible subgradient at ${x^{k+1}}$. • If one has a problem with a constraint ${Ax=b}$ (formulated as an SFP in one way or another) the method converges to the minimum-${J}$ solution of the equation if ${J}$ is strongly convex. Note that strong convexity of ${J}$ implies differentiability of ${J^*}$ and Lipschitz continuity of ${\nabla J^*}$ and hence, the Bregman projection can indeed be carried out.
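As an aside, one step of this scheme for a single difficult constraint ${Ax\in Q}$ is easy to write down; here is a minimal MATLAB sketch (mine, not from the papers), where the projection handle proj_Q is an assumed input, e.g. proj_Q = @(y) b for ${Q=\{b\}}$:

w = A*x - proj_Q(A*x);                 % residual w^k = A x^k - P_Q(A x^k)
g = A'*w;                              % normal A^T w^k of the hyperplane H_k
x = x - (norm(w)^2 / norm(g)^2) * g;   % orthogonal projection onto H_k

For ${Q=\{b\}}$ this is exactly a Landweber step ${x^{k+1} = x^k - t_k A^T(Ax^k-b)}$ with the special stepsize ${t_k = \|Ax^k-b\|_2^2/\|A^T(Ax^k-b)\|_2^2}$.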
Now one already sees how this relates to the linearized Bregman method: Setting ${J(x) = \lambda\|x\|_1 + \tfrac12\|x\|_2^2}$, a little calculation shows that $\displaystyle \nabla J^*(z) = S_\lambda(z).$ Hence, using the formulation with a “single difficult constraint” leads to the linearized Bregman method with a specific stepsize. It turns out that this stepsize is a pretty good one, but also that one can show that a constant stepsize works as well, as long as it is positive and smaller than ${2\|A\|^{-2}}$. In the paper we present several examples how one can use the framework. One strength of this approach, as I see it, is that one can add convex constraints to a given problem without getting into any trouble with the algorithmic framework. The second paper extends a remark that we make in the first one: If one applies the framework of the linearized Bregman method to the case in which one considers the system ${Ax=b}$ as ${m}$ simple (hyperplane-)constraints, one obtains a sparse Kaczmarz solver. Indeed one can use the simple iteration $\displaystyle \begin{array}{rcl} z^{k+1} & = &z^k - a_{r(k)}^T\frac{a_{r(k)}\cdot x^k - b_{r(k)}}{\|a_{r(k)}\|_2^2}\\ x^{k+1} & = &S_\lambda(z^{k+1}) \end{array}$ and will converge to the same sparse solution as the linearized Bregman method. This method has a nice application to “online compressed sensing”: We illustrate this in the paper with an example from radio interferometry. There, large arrays of radio telescopes collect radio emissions from the sky. Each pair of telescopes leads to a single measurement of the Fourier transform of the quantity of interest. Hence, for ${k}$ telescopes, each measurement gives ${k(k-1)/2}$ samples in the Fourier domain. In our example we used data from the Very Large Array telescope which has 27 telescopes leading to 351 Fourier samples. That’s not much, if one wants a picture of the emission with several ten thousands of pixels. But the good thing is that the Earth rotates (that’s good for several reasons): When the Earth rotates relative to the sky, the sampling pattern also rotates. Hence, one waits a small amount of time and makes another measurement. Commonly, this is done until the Earth has made a half rotation, i.e. one complete measurement takes 12 hours. With the “online compressed sensing” framework we proposed, one can start reconstructing the image as soon as the first measurements have arrived. Interestingly, one observes the following behavior: If one monitors the residual of the equation, it goes down during iterations and jumps up when new measurements arrive. But from some point on, the residual stays small! This says that the new measurements do not contradict the previous ones, and, more interestingly, this happened precisely when the reconstruction error dropped down such that “exact reconstruction” in the sense of compressed sensing has happened. In the example of radio interferometry, this happened after 2.5 hours! Reconstruction by online compressed sensing You can find slides of a talk I gave at the Sparse Tomo Days here. Today I’d like to collect some comments on a few papers I stumbled upon recently on the arXiv. 1. TGV minimizers in 1D First, about a month ago two very similar papers appeared in the same week: Both papers treat the recently proposed “total generalized variation” model (which is a somehow-but-not-really-higher-order generalization of total variation).
The total variation of a function ${u\in L^1(\Omega)}$ (${\Omega\subset{\mathbb R}^d}$) is defined by duality $\displaystyle TV(u) = \sup\Big\{\int_\Omega \mathrm{div} \phi\, u\,dx\ :\ \phi\in C^\infty_c(\Omega,{\mathbb R}^d), |\phi|\leq 1\Big\}.$ (Note that the demanded high regularity of the test functions ${\phi}$ is not essential here, as we take a supremum over all these functions under the only, but important, requirement that the functions are bounded. Test functions from ${C^1_c(\Omega,{\mathbb R}^d)}$ would also do.) Several possibilities exist for extending and generalizing the total variation by somehow including higher order derivatives. The “total generalized variation” is a particularly successful approach which reads as (now using two non-negative parameters ${\alpha,\beta}$ which do a weighting): $\displaystyle TGV_{\beta,\alpha}^2(u) = \sup\Big\{\int_\Omega \mathrm{div}^2 \phi\, u\,dx\ :\ \phi\in C^\infty_c(\Omega,S^{d\times d}),\ |\phi|\leq \beta,\ |\mathrm{div}\phi|\leq \alpha\Big\}.$ To clarify some notation: ${S^{d\times d}}$ are the symmetric ${d\times d}$ matrices, ${\mathrm{div}^n}$ is the negative adjoint of ${\nabla^n}$, which is the differential operator that collects all partial derivatives of ${n}$-th order in a ${d\times\cdots\times d}$-tensor. Moreover ${|\phi|}$ is some matrix norm (e.g. the Frobenius norm) and ${|\mathrm{div}\phi|}$ is some vector norm (e.g. the 2-norm). Both papers investigate so-called denoising problems with TGV penalty and ${L^2}$ discrepancy, i.e. minimization problems $\displaystyle \min_u \frac12\int_\Omega(u-u^0)^2\, dx + TGV_{\alpha,\beta}^2(u)$ for a given ${u^0}$. Moreover, both papers treat the one dimensional case and investigate very special cases in which they calculate minimizers analytically. In one dimension the definition of ${TGV^2}$ becomes a little more familiar: $\displaystyle TGV_{\beta,\alpha}^2(u) = \sup\Big\{\int_\Omega \phi''\, u\,dx\ :\ \phi\in C^\infty_c(\Omega,{\mathbb R}),\ |\phi|\leq \beta,\ |\phi'|\leq \alpha\Big\}.$ Some images of both papers are really similar: This one from Papafitsoros and Bredies and this one from Pöschl and Scherzer Although both papers have very similar scopes, it is worth reading both. The calculations are tedious but both papers try to make them accessible and try hard (and did a good job) to provide helpful illustrations. Curiously, the earlier paper cites the later one but not conversely… Another paper I found very interesting was this one: It shows a nice duality which I haven’t been aware of, namely the one between subgradient descent methods and conditional gradient methods. In fact the conditional gradient method which is treated is a generalization of the conditional gradient method which Kristian and I also proposed a while ago in the context of ${\ell^1}$-minimization in the paper Iterated hard shrinkage for minimization problems with sparsity constraints: To minimize the sum $\displaystyle F(u) + \Phi(u)$ with a differentiable ${F}$ and a convex ${\Phi}$ for which the subgradient of ${\Phi}$ is easily invertible (or, put differently, for which you can minimize ${\langle u,a\rangle + \Phi(u)}$ easily), perform the following iteration: 1. At iterate ${u^n}$ linearize ${F}$ but not ${\Phi}$ and calculate a new point ${v^n}$ by $\displaystyle v^n = \mathrm{argmin}_v \langle F'(u^n),v\rangle + \Phi(v)$ 2.
Choose a stepsize ${s^n\in [0,1]}$ and set the next iterate as a convex combination of ${u^n}$ and ${v^n}$ $\displaystyle u^{n+1} = u^n + s_n(v^n - u^n).$ Note that for an indicator function $\displaystyle \Phi(u) = \begin{cases} 0 & u\in C\\ \infty & \text{else} \end{cases}$ you obtain the conditional gradient method (also known as Frank-Wolfe method). While Kristian and I derived convergence with an asymptotic rate for the case of ${F(u) = \tfrac12\|Ku-f\|^2}$ and ${\Phi}$ strongly coercive, Francis uses the formulation ${F(u) = f(Au)}$ and the assumption that the dual ${f^*}$ of ${f}$ has a bounded effective domain (which says that ${f}$ has linear growth in all directions). With this assumption he obtains explicit constants and rates also for the primal-dual gap. It was great to see that eventually somebody really took the idea from the paper Iterated hard shrinkage for minimization problems with sparsity constraints (and does not think that we do heuristics for ${\ell^0}$ minimization…). I found this draft of a post in my backlog and decided that it would be better to finish it than to leave it unpublished. Actually, the story happened already over a year ago. Some months ago I stumbled upon this piece of shadow art by Fred Eerdekens and a few days later I received the information that my university was going to celebrate its yearly “open house event” this year as “TU Night”. The theme of the TU Night was “Night, Light, Energy” and all members were invited to submit ideas for talks, experiments and exhibits. The piece of shadow art prompted the question “If this weird piece of metal can cast this sentence as a shadow, wouldn’t it be possible to produce another piece of metal that can produce two different sentences, when illuminated from different directions?” Together with the announcement of the upcoming TU Night I wondered if one could even produce an exhibit like this. Since I am by no means an artist, I looked around at my university and found that there is a Department of Architecture. Since architects are much closer to being artists than I am, I contacted the department and proposed a collaboration, and Henri Greil proposed to have a joint seminar on this topic. Hence, this summer term I made the experience of working with students of architecture. In the end, the students produced very nice pieces of shadow art: Although the exhibits produced interesting and unexpected shadows, no group of students could make it and produce two different shadows out of the same object. However, some nice effects can be produced pretty easily: The basic idea is that moving one object around will move around both shadows rather independently. Well, this is not totally true, but what you can do is to “zoom” one shadow while moving the other sideways (just move the object straight towards one light source). See this movie for a small illustration: I also did my best to produce a more complex object. While it is theoretically not very difficult to see that some given shadows are possible in some given projection geometry, it is not at all straightforward to produce the object theoretically (not to speak of the real-world problems while building the piece). I tried hard but I could not do better than this: In this post I will explore a bit the question of how to calculate the discrete gradient and the discrete divergence of an image and a vector field, respectively. Let ${u_0\in{\mathbb R}^{N\times M}}$ be a discrete image, i.e. ${u_0(i,j)}$ denotes the gray value at the ${i,j}$-th pixel.
The famous total variation denoising amounts to minimizing the functional $\displaystyle \Phi(u) = \tfrac12 \int (u-u_0)^2 + \lambda\int|\nabla u|$ where the integral shall be understood as summation, ${\nabla}$ stands for the gradient, i.e. the vector of the partial derivatives, and ${|\cdot|}$ stands for the euclidean absolute value. When using primal-dual methods, it is of crucial importance that the operators used for the gradient and the divergence are numerically adjoint (up to the minus sign). That is, the numerical operations are adjoint in the following sense: If grad is the operation for the gradient and div is the operation for the divergence, then it should hold for any variables ${u}$ and ${v}$ of suitable size and with gradu = grad(u), divv = div(v) that sum(gradu(:).*v(:)) and -sum(u(:).*divv(:)) are equal up to numerical precision. Due to the boundary treatment, the internal MATLAB operations gradient and divergence do not fulfill this requirement. The most common discretization of the gradient uses discrete forward differences and a constant padding at the boundary (which means that Neumann boundary values are applied). In formula, this reads as $\displaystyle (D_xu)_{i,j} = \begin{cases} u_{i+1,j} - u_{i,j} & i<N\\ 0 & i=N \end{cases}$ (and analogously for ${(D_yu)_{i,j}}$ with forward differences in the second index). The respective adjoints are backward differences with zero boundary treatment (check it!). Apparently, there are many different ways to implement these routines (and the respective adjoints) in MATLAB. Here are four of them: 1. For loops: Run through all pixels in two for-loops and assign the difference to the output. Of course, one should preallocate the output prior to the loop. But you probably know the first two rules of MATLAB coding? If not, here they are: 1. Avoid for-loops. 2. Avoid for-loops, seriously. I put the routines into extra functions and created anonymous functions to call the gradient and the divergence as grad = @(u) cat(3,dxp(u),dyp(u)); div = @(V) dxm(V(:,:,1)) + dym(V(:,:,2)); 2. Shift and subtract: MATLAB is great in using vectors and matrices. And to avoid the for-loop one could also implement the forward difference in ${x}$-direction by shifting the matrix and subtracting the original one, i.e. [u(2:end,:); u(end,:)] - u (and similarly for the other differences). Again, I wrote extra functions and used anonymous functions as above. 3. Small sparse matrices from the left and from the right: MATLAB is also pretty good with sparse matrices. Since the derivatives in ${x}$-direction only involve the subtraction of two elements in the same column, one can realize this by multiplying the image from the left with a sparse diagonal matrix with just two non-zero diagonals. Similarly, the derivative in ${y}$-direction can be realized by multiplying from the right with a suitable matrix. More precisely, this approach is realized by Dy = spdiags([-ones(M,1) ones(M,1)],[0 1],M,M); Dy(M,:) = 0; Dx = spdiags([-ones(N,1) ones(N,1)],[0 1],N,N); Dx(N,:) = 0; Dxu = Dx*u; Dyu = u*Dy'; (check it). Note that the adjoint of the ${x}$-derivative is simply the operation Dx'*u and the adjoint of the ${y}$-derivative is u*Dy. Together, the calculation of the gradient and the divergence was done by the anonymous functions grad = @(u) cat(3,Dx*u,u*Dy'); div = @(V) Dx'*V(:,:,1) + V(:,:,2)*Dy; 4. Large sparse matrices: One could think about the following: Vectorize the image by U = u(:) (which amounts to stacking the columns above each other). Then assemble a large ${NM\times NM}$ sparse matrix which has just two non-zero diagonals to do the forward (and other) differences.
More precisely this can be done by (with Dx and Dy from above) DX = kron(speye(M),Dx); DY = kron(Dy,speye(N)); DxU = DX*U; DyU = DY*U; Here, it is clear that the respective adjoints are just the multiplications with the transposed matrices. Here, the anonymous functions are grad = @(u) [DX*u DY*u]; div = @(V) DX'*V(:,1) + DY'*V(:,2); The different approaches have different pros and cons. Well, the for-loop only has cons: It is presumably slow and uses a lot of indexing which easily leads to bugs. The shift-and-subtract method should go into an extra function to make it easy to use – but this is not necessarily a drawback. For the multiplication with the large matrix, one has to vectorize the image first, and every time one wants to look at the result one needs to do reshape(U,N,M). But let’s focus on speed: I implemented all methods, let them run on square images of different sizes 50 times (after a warmup) and measured the execution times with tic and toc. The assembly of the matrices did not enter the timing. Moreover, no parallel computation was used – just a single core. Finally, memory was not an issue since the largest matrices (of size ${2500\times 2500\times 2}$) only need roughly 100MB of memory. Here is the table with the average times for one calculation of the gradient (in seconds; “left-right” denotes the small sparse matrices, “left” the single large sparse matrix):

${N}$    For-loop   Shift-subtract   left-right   left
100      0.0004     0.0002           0.0001       0.0003
200      0.0018     0.0010           0.0005       0.0015
300      0.0057     0.0011           0.0014       0.0020
400      0.0096     0.0031           0.0022       0.0035
500      0.0178     0.0035           0.0030       0.0054
750      0.0449     0.0114           0.0097       0.0123
1000     0.0737     0.0189           0.0128       0.0212
1500     0.2055     0.0576           0.0379       0.0601
2000     0.3942     0.0915           0.0671       0.1136
2500     0.6719     0.1571           0.1068       0.1788

and here is the one for the divergences:

${N}$    For-loop   Shift-subtract   left-right   left
100      0.0004     0.0003           0.0002       0.0002
200      0.0018     0.0015           0.0005       0.0008
300      0.0048     0.0016           0.0015       0.0012
400      0.0090     0.0036           0.0020       0.0022
500      0.0158     0.0057           0.0027       0.0035
750      0.0409     0.0132           0.0073       0.0069
1000     0.0708     0.0238           0.0130       0.0125
1500     0.2008     0.0654           0.0344       0.0370
2000     0.3886     0.1285           0.0622       0.0671
2500     0.6627     0.2512           0.1084       0.1361

As expected, the for-loop is clearly slower and, also as expected, all methods basically scale quadratically (doubling ${N}$ amounts to multiplying the running time by four) since the work per pixel is constant. A little surprisingly, the multiplication from the left and from the right is fastest and also consistently a little faster than the multiplication from the left with the larger sparse matrix. I don’t know why the results are that different for the gradient and for the divergence. Maybe this is related to my use of anonymous functions or the allocation of memory? The second day of SSVM started with an invited lecture of Tony Lindeberg, who has written a very influential and very early book about scale space theory. His talk was both a tour through scale space and a recap of the recent developments in the field. In particular he showed how the time aspect could be incorporated into scale space analysis by a close inspection of how receptive fields are working. There were more talks but I only took notes from the talk of Jan Lellmann who talked about the problem of generating an elevation map from a few given level lines. One application of this could be to observe coastlines at different tides and then try to reconstruct the full height map at the coast. One specific feature here is that the surface one looks for may have ridges which stem from kinks in the level lines, and these ridges are important features of the surface.
He argued that a pure convex regularization will not work and proposed to use more input, namely a vector field derived from the contour lines such that the vectors somehow “follow the ridges”, i.e. connect the level lines in a correct way. Finally, another observation I had today: Well, this is not a trend, but a notion which I heard for the first time here and which sounds very natural is the informal classification of data terms in variational models as “weak” or “strong”. For example, a denoising data term ${\|u-u^0\|^2_{L^2(\Omega)}}$ is a strong data term because it gives tight information on the whole set ${\Omega}$. On the other hand, an inpainting data term ${u|_{\Omega\setminus\Omega'} = u^0|_{\Omega\setminus\Omega'}}$ is a weak data term because it basically tells nothing within the region ${\Omega'}$. In the afternoon the whole conference went on a tour to three amazing places: • the Riegersburg, which is not only an impressive castle but also features interesting exhibitions about old arms and witches, • the Zotter chocolate factory where they make amazing chocolate in mind-boggling varieties, • and to Schloss Kronberg for the conference dinner (although it was pretty tough to start eating the dinner after visiting Zotter…). I am at the SSVM 13 and the first day is over. The conference is in a very cozy place in Austria, namely the Schloss Seggau which is located on a small hill near the small town Leibnitz in Styria (which is in Austria but totally unaffected by the floods). The conference is also not very large but there is a good crowd of interesting people here and the program looks interesting. The time schedule is tight and I don’t know if I can make it to post something every day. Especially, I will not be able to blog about every talk. From the first day I have the impression that two things have appeared frequently at different places: • Differential geometry: Tools from differential geometry like the notion of geodesics or Riemannian metrics have not only been used by Martin Rumpf to model different types of spaces of shapes but also to interpolate between textures. • Primal dual methods: Variational models should be convex these days and if they are not you should make them convex somehow. Then you can use primal dual methods. Somehow the magic of primal dual methods is that they are incredibly flexible. Also they work robustly and reasonably fast. Probably, there are more trends to see in the next days. Another few notes to myself: This term I am particularly busy as I am writing a book about variational methods in imaging. The book will be a textbook and I am writing it parallel to a lecture I am currently teaching. (And it is planned that Kristian Bredies, the co-author, teaches the same stuff next term – then there will be another few months of editing, so it will be at least a year until publishing.) In the book we will treat variational approaches to a variety of basic imaging problems. Of course we treat denoising and deblurring but there will also be sections about image interpolation, segmentation and optical flow. In the first part of the book, we present the variational problems and model them properly in Lebesgue and Sobolev spaces and of course in the space ${BV}$. Some effort goes into the analysis of the models and the first step is usually to establish existence of solutions, i.e. minimizers of the respective minimization problems.
The work horse is the direct method in the calculus of variations and we mainly use the method for convex functionals in Banach spaces. When I started the section on optical flow I noticed that I hadn’t thought about existence of minimizers before and, moreover, most papers and books do not treat this issue. Let’s recall the method of Horn and Schunck to calculate the optical flow: For two images ${u_0}$, ${u_1}$ defined on a domain ${\Omega\subset{\mathbb R}^2}$ one seeks a flow field ${V:\Omega\rightarrow{\mathbb R}^2}$ such that ${V}$ describes the apparent motion that has happened between both images. Assuming that the points keep their gray value during motion (an assumption known as the brightness constancy constraint) and linearizing this assumption one arrives at the condition $\displaystyle \frac{u_1-u_0}{dt} + V\cdot\nabla u_0 = 0$ (where ${dt}$ is the time between the images ${u_0}$ and ${u_1}$). First, this does not give enough equations to determine ${V}$ and secondly, points with ${\nabla u_0=0}$ are problematic. Horn and Schunck proposed to relax the constraint and to enforce some smoothness of the flow field ${V}$: Their model was to minimize $\displaystyle F(V) = \int_\Omega\big(\frac{u_1-u_0}{dt} + V\cdot\nabla u_0\big)^2{\mathrm d}{x} + \lambda\int_\Omega|\nabla V|^2{\mathrm d}{x}$ for some parameter ${\lambda}$ weighting smoothness of ${V}$ (large ${\lambda}$) against the brightness constancy constraint (small ${\lambda}$). A little bit more generally one could choose exponents ${p}$ and ${q}$ and minimize $\displaystyle F(V) = \int_\Omega\big|\frac{u_1-u_0}{dt} + V\cdot\nabla u_0\big|^q{\mathrm d}{x} + \lambda\int_\Omega|\nabla V|^p{\mathrm d}{x}.$ To apply the direct method to obtain existence of minimizers of ${F}$ one ensures 1. properness, i.e. there is some ${V}$ such that ${F}$ is finite, 2. convexity of ${F}$, 3. lower semi-continuity of ${F}$ and 4. coercivity of ${F}$. To check these things one has to choose an appropriate space to work in. It seems reasonable to choose ${V\in L^{q}(\Omega,{\mathbb R}^2)}$. Then properness of ${F}$ is easy (consider ${V=0}$, of course assuming that ${u_1-u_0\in L^q(\Omega)}$). Convexity is also clear and for lower semi-continuity one has to work a little more, but that is possible if, e.g., ${\nabla u_0}$ is bounded. Coercivity is not that clear and in fact ${F}$ is not coercive in general. Example 1 (Non-coercivity of the Horn-and-Schunck model) Simply consider ${u_0(x,y) = ax + by}$ for some ${a,b\in{\mathbb R}}$. Then ${\nabla u_0(x,y) \equiv [a\ b]^T}$. Set ${V^n(x,y) \equiv [-nb\ na]^T}$ and note that ${\|V^n\|_q\rightarrow\infty}$ while ${F(V^n)}$ stays bounded (in fact constant), since ${V^n\cdot\nabla u_0 = -nba + nab = 0}$ and ${\nabla V^n = 0}$. I just checked the book “Mathematical Problems in Imaging” by Gilles Aubert and Pierre Kornprobst and in Section 5.3.2 they mention that the Horn and Schunck model is not coercive. They add another term to ${F}$ which is roughly a weighted norm of ${V}$ which ensures coercivity. However, it turns out that coercivity of ${F}$ is true under a mild assumption on ${u_0}$. The idea can be found in a pretty old paper by Christoph Schnörr which is called “Determining Optical Flow for Irregular Domains by Minimizing Quadratic Functionals of a Certain Class” (Int. J. of Comp. Vision, 6(1):25–38, 1991).
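A quick numerical sanity check of Example 1 (my own sketch, not from the post): on a grid on ${[0,1]^2}$ with ${u_1 = u_0}$ and ${dt = 1}$, the discrete Horn-Schunck energy of the fields ${V^n}$ stays at zero while their norm blows up:

a = 2; b = -1; n_grid = 64;
[X, Y] = meshgrid(linspace(0,1,n_grid));
u0 = a*X + b*Y; u1 = u0;                           % gradient of u0 is constant [a; b]
for n = [1 10 100 1000]
  Vx = -n*b*ones(n_grid); Vy = n*a*ones(n_grid);   % V^n = n*[-b; a]
  data = ((u1 - u0) + Vx*a + Vy*b).^2;             % brightness constancy term
  F = mean(data(:));                               % smoothness term vanishes: V^n constant
  fprintf('n = %4d: ||V^n|| = %9.1f, F = %g\n', n, norm([Vx(:); Vy(:)]), F);
end

Schnörr’s argument, reproduced next, shows that such degenerate directions are excluded as soon as ${\partial_x u_0}$ and ${\partial_y u_0}$ are linearly independent.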
His argument works for ${q=2}$: Theorem 1 Let ${\Omega\subset{\mathbb R}^2}$ be a bounded Lipschitz domain, ${u_0,u_1\in L^2(\Omega)}$ with ${\nabla u_0\in L^\infty(\Omega)}$ such that ${\partial_x u_0}$ and ${\partial_y u_0}$ are linearly independent in ${L^2(\Omega)}$, and let ${1<p<\infty}$. Then it holds that ${F:L^2(\Omega,{\mathbb R}^2)\rightarrow {\mathbb R}\cup\{\infty\}}$ defined by $\displaystyle F(V) = \int_\Omega\big(\frac{u_1-u_0}{dt} + V\cdot\nabla u_0\big)^2{\mathrm d}{x} + \lambda\int_\Omega|\nabla V|^p{\mathrm d}{x}$ is coercive. Proof: Consider ${V^n}$ such that ${\|V^n\|_2\rightarrow\infty}$. We decompose the components of ${V^n}$ into the constant parts ${QV^n_x}$ and ${QV^n_y}$ and the “zero-mean” parts ${PV^n_x = V^n_x - QV^n_x}$ and ${PV^n_y = V^n_y - QV^n_y}$. First consider that ${PV^n}$ is unbounded, i.e. there is a subsequence (also denoted by ${V^n}$) such that ${\|PV^n\|_2\rightarrow\infty}$. By Sobolev embedding and the Poincaré inequality, we get that ${\int_\Omega|\nabla V^n|^p{\mathrm d}{x}\rightarrow\infty}$. Now consider bounded ${PV^n}$ and hence, unbounded mean values ${QV^n}$. Using a subsequence, we assume that ${|QV^n|\rightarrow\infty}$. Now we use $\displaystyle \Big\|\frac{u_1 - u_0}{\Delta t} + V\cdot \nabla u_0\Big\|_2 \geq \Big\|QV\cdot\nabla u_0\Big\|_2 - \Big\|\frac{u_1 - u_0}{\Delta t} + PV\cdot \nabla u_0\Big\|_2 \ \ \ \ \ (1)$ and estimate the first term from below, noticing that ${QV_x}$ and ${QV_y}$ are constants, by $\displaystyle \begin{array}{rcl} \|QV\cdot\nabla u_0\|_2^2 & = &\|QV_x\,\partial_x u_0 + QV_y\,\partial_y u_0\|_2^2\\ & = & \|QV_x\,\partial_x u_0\|_2^2 + \|QV_y\,\partial_y u_0\|_2^2 + 2\langle QV_x\,\partial_x u_0,QV_y\,\partial_y u_0\rangle\\ & \geq &|QV_x|^2\|\partial_x u_0\|_2^2 + |QV_y|^2\|\partial_y u_0\|_2^2\\ &&\qquad - \|QV_x\,\partial_xu_0\|_2\|QV_y\,\partial_yu_0\|_2\,2\frac{|\langle \partial_x u_0,\partial_y u_0\rangle|}{\|\partial_xu_0\|_2\|\partial_y u_0\|_2}\\ & \geq &(|QV_x|^2\|\partial_x u_0\|_2^2 + |QV_y|^2\|\partial_y u_0\|_2^2) \Big(1 - \frac{|\langle \partial_x u_0,\partial_y u_0\rangle|}{\|\partial_xu_0\|_2\|\partial_y u_0\|_2}\Big). \end{array}$ Since ${\partial_x u_0}$ and ${\partial_y u_0}$ are linearly independent, it holds that ${1 - \frac{|\langle \partial_x u_0,\partial_y u_0\rangle|}{\|\partial_xu_0\|_2\|\partial_y u_0\|_2}>0}$ and we conclude that ${\|QV^{n_k}\|_2\rightarrow\infty}$ implies that ${\|QV^{n_k}\cdot\nabla u_0\|_2^2\rightarrow\infty}$. Together with (1) and boundedness of ${PV^{n_k}}$ we obtain that ${F(V^{n_k})\rightarrow\infty}$. Since for every subsequence of ${V^n}$ we get another subsequence ${V^{n_k}}$ such that ${F(V^{n_k})\rightarrow\infty}$, the same conclusion holds for the whole sequence, showing coercivity of ${F}$. $\Box$ Basically the same argument works for ${TV}$ optical flow, i.e. coercivity of $\displaystyle F(V) = \int_\Omega\big(\frac{u_1-u_0}{dt} + V\cdot\nabla u_0\big)^2{\mathrm d}{x} + \lambda TV(V).$ However, I do not know yet what happens for ${q\neq 2}$ and if the result on coercivity is “sharp” in the sense that linear independence of ${\partial_x u_0}$ and ${\partial_y u_0}$ is necessary. Also, I don’t know yet what is true in dimensions higher than ${2}$. Here I continue my previous post on methods to compare shapes. In that post we started with different metrics between two probability measures ${\mu_1}$ and ${\mu_2}$ defined on the same set ${X}$, namely the Prokhorov metric and the Wasserstein metric.
Then we also had a look at the Hausdorff metric which measures the distance between two compact subsets ${A}$ and ${B}$ of a metric space ${(X,d)}$ and finally considered the Gromov-Hausdorff metric between two metric spaces ${(X,d_X)}$ and ${(Y,d_Y)}$. Our tools have been • set couplings of ${X}$ and ${Y}$, i.e. subsets ${R\subset X\times Y}$ such that for all ${x\in X}$ there is ${y\in Y}$ such that ${(x,y)\in R}$ and for all ${y\in Y}$ there is ${x\in X}$ such that ${(x,y)\in R}$, • metric couplings of ${d_X}$ and ${d_Y}$, i.e. metrics ${d}$ on the disjoint union of ${X}$ and ${Y}$ such that ${d(x,x') = d_X(x,x')}$ if ${x}$ and ${x'}$ are in ${X}$ and ${d(y,y') = d_Y(y,y')}$ if ${y}$ and ${y'}$ are in ${Y}$. In fact, one could also work with semi-metrics ${d}$, i.e. they do not need to be positive definite, and • measure couplings of ${\mu_X}$ and ${\mu_Y}$, i.e. measures ${\nu}$ on ${X\times Y}$ such that ${\nu(A\times Y) = \mu_X(A)}$ and ${\nu(X\times B) = \mu_Y(B)}$ for all ${\mu_X}$-/${\mu_Y}$-measurable sets ${A}$ and ${B}$, respectively. Now we make the next (and final) step and compare metric spaces ${(X,d_X)}$ and ${(Y,d_Y)}$ which are both equipped with a measure. These objects are known as metric measure spaces or mm-spaces and are formally defined as follows: Definition 1 (mm-space) A metric measure space is a triple ${(X,d_X,\mu_X)}$ consisting of a compact metric space ${(X,d_X)}$ and a Borel probability measure ${\mu_X}$ on ${X}$. Note that sometimes it is included that ${\mu_X}$ has full support (i.e., equal to ${X}$) in the definition of an mm-space, but it seems that not everybody does it like that. 1. Comparing mm-spaces: Gromov-Wasserstein Our question is: How do we compare two mm-spaces ${(X,d_X,\mu_X)}$ and ${(Y,d_Y,\mu_Y)}$? The plan is simply to augment the previous versions of the Gromov-Hausdorff distances defined in my previous post here and here by something which takes the measures on the respective metric spaces into account. We recall both formulations of the Gromov-Hausdorff distance: The first is $\displaystyle d_{GH}(X,Y) = \inf_{R,d} \sup_{(x,y)\in R} d(x,y) \ \ \ \ \ (1)$ where the infimum is taken over all set couplings ${R}$ of ${X}$ and ${Y}$ and metric couplings ${d}$ of ${d_X}$ and ${d_Y}$, and the second is $\displaystyle d_{GH}(X,Y) = \tfrac12\inf_R \sup_{\overset{\overset{x_{1/2}\in X}{y_{1/2}\in Y}}{(x_i,y_i)\in R}}\big| d_X(x_1,x_2) - d_Y(y_1,y_2)\big| \ \ \ \ \ (2)$ where the infimum is also taken over all set couplings ${R}$ of ${X}$ and ${Y}$. Basically, we have already seen what we should do to build a metric between mm-spaces. The idea is: if there were some “natural measure” ${\mu_R}$ for any set coupling ${R}$ of ${X}$ and ${Y}$, then we would simply define the distance between ${(X,d_X,\mu_X)}$ and ${(Y,d_Y,\mu_Y)}$ as $\displaystyle d_{GH}(X,Y) = \inf_{R,d} \Big(\int d(x,y)^pd\mu_R\Big)^{1/p}$ (as a generalization of (1)) and $\displaystyle d_{GH}(X,Y) = \tfrac12\inf_R \Big(\int_{R\times R}\big| d_X(x_1,x_2) - d_Y(y_1,y_2)\big|^p d\mu_R(x_1,y_1)d\mu_R(x_2,y_2)\Big)^{1/p}$ (as a generalization of (2)). In both cases we can have ${1\leq p<\infty}$. Note that the obvious modification for ${p=\infty}$ leads to something very similar to the Gromov-Hausdorff metrics. But indeed there are “natural measures” on the set couplings of ${X}$ and ${Y}$: At least for the full coupling ${R= X\times Y}$ there are the measure couplings ${\nu}$ of ${\mu_X}$ and ${\mu_Y}$!
(One does not need to consider smaller set couplings ${R\subsetneq X\times Y}$ since this can be taken into account by the measure couplings; they do not need to have full support anyway.) Applying this idea to the version (1) of the Gromov-Hausdorff metric we arrive at the following expression, which can be called Gromov-Wasserstein metric, $\displaystyle d_{GW}^1(X,Y) = \inf_{\nu,d} \Big(\int_{X\times Y} d(x,y)^pd\nu(x,y)\Big)^{1/p} \ \ \ \ \ (3)$ where the infimum is taken over all measure couplings ${\nu}$ of ${\mu_X}$ and ${\mu_Y}$ and all metric couplings ${d}$ of ${d_X}$ and ${d_Y}$. Starting from the version (2) of the Gromov-Hausdorff metric we arrive at another formulation: $\displaystyle d_{GW}^2(X,Y) = \tfrac12\inf_\nu \Big(\int_{X\times Y}\int_{X\times Y}\big| d_X(x_1,x_2) - d_Y(y_1,y_2)\big|^p d\nu(x_1,y_1)d\nu(x_2,y_2)\Big)^{1/p} \ \ \ \ \ (4)$ where the infimum is taken over all measure couplings ${\nu}$ of ${\mu_X}$ and ${\mu_Y}$. While both versions of the Gromov-Hausdorff metric for compact metric spaces were equal, the same is not true for both generalizations to mm-spaces: In his paper Memoli proves that ${d_{GW}^2\leq d_{GW}^1}$ and gives an example (right after Remark 5.14) where strict inequality holds. 2. Comparing mm-spaces: Gromov-Prokhorov Instead of starting from the Gromov-Hausdorff distance between metric spaces and augmenting their definition with something that takes the measures into account, we could also start from the Prokhorov metric between probability measures and augment the definition with something that takes the metric into account. In fact there are also two possibilities to do so: In the appendix of his paper, Memoli quotes this version (from this paper by Greven, Pfaffelhuber and Winter) of a metric between mm-spaces which we call Gromov-Prokhorov metric $\displaystyle d_{GP}^1(X,Y) = \inf_{\nu,d}\Big\{\epsilon>0\ :\ \nu\{(x,y)\ :\ d(x,y)\geq\epsilon\}\leq\epsilon\Big\} \ \ \ \ \ (5)$ where the infimum is taken over all measure couplings ${\nu}$ of ${\mu_X}$ and ${\mu_Y}$ and all metric couplings ${d}$ of ${d_X}$ and ${d_Y}$. The next version (also from the same paper by Greven, Pfaffelhuber and Winter where it was called Eurandom metric) is $\displaystyle d_{GP}^2(X,Y) = \inf_\nu\Big\{\epsilon>0\ :\ \nu\otimes\nu\{(x_1,y_1,x_2,y_2)\ :\ |d_X(x_1,x_2) - d_Y(y_1,y_2)|\geq\epsilon\}\leq\epsilon\Big\} \ \ \ \ \ (6)$ where the infimum is taken over all measure couplings ${\nu}$ only. 3. A very simple example The calculation of the proposed metrics by hand can be quite cumbersome. Let’s look at the simplest example. We consider metric spaces ${X,Y\subset{\mathbb R}^d}$ (with the euclidean metric) accompanied by the measures ${\mu_X=\delta_{x_0}}$ and ${\mu_Y = \delta_{y_0}}$ for some points ${x_0\in X}$ and ${y_0\in Y}$. In this case there is only one measure coupling of ${\mu_X}$ and ${\mu_Y}$, namely $\displaystyle \nu = \delta_{(x_0,y_0)}.$ Now it is easy to calculate the variant (4) of the Gromov-Wasserstein metric: $\displaystyle \begin{array}{rcl} d_{GW}^2(X,Y) &=& \tfrac12\Big(\int_{X\times Y}\int_{X\times Y}\big| |x_1-x_2| - |y_1 -y_2|\big|^p d\delta_{(x_0,y_0)}(x_1,y_1)d\delta_{(x_0,y_0)}\Big)^{1/p} \\ &=& \tfrac12\Big(\int_{X\times Y}\big| |x_0-x_2| - |y_0 -y_2|\big|^pd\delta_{(x_0,y_0)}\Big)^{1/p} \\ &=& 0. \end{array}$
Let’s have a look at the variant (3): Since there is only one measure coupling, the metric is $\displaystyle \begin{array}{rcl} d_{GW}^1(X,Y) & = &\inf_d\Big(\int_{X\times Y} d(x,y)^pd\delta_{(x_0,y_0)}(x,y)\Big)^{1/p} \\ & = &\inf_d d(x_0,y_0). \end{array}$ As we have learned in Example 4 in the previous post, we can find a metric coupling of ${X}$ and ${Y}$ that brings the points ${x_0}$ in ${X}$ and ${y_0}$ in ${Y}$ arbitrarily close together (by embedding both ${X}$ and ${Y}$ into some ${{\mathbb R}^n}$ such that these points are only ${\epsilon}$-far away from each other). Hence, we see that we have $\displaystyle d_{GW}^1(X,Y) = 0$ similarly to ${d_{GW}^2}$. Now let’s look at the Gromov-Prokhorov metric from (5). Again we only have one measure coupling and we get $\displaystyle d_{GP}^1(X,Y) = \inf\Big\{\epsilon>0\ :\ \exists d\ \text{s.t}\ \delta_{(x_0,y_0)}\{(x,y)\ :\ d(x,y)\geq\epsilon\}\leq\epsilon\Big\}.$ Since the measure coupling is a Dirac, we can evaluate $\displaystyle \delta_{(x_0,y_0)}\{(x,y)\ :\ d(x,y)\geq\epsilon\} = \begin{cases} 1 & d(x_0,y_0)\geq \epsilon\\ 0 & d(x_0,y_0)<\epsilon. \end{cases}$ As observed previously, there are metric couplings which bring the points ${x_0}$ and ${y_0}$ arbitrarily close together, and hence for any ${\epsilon>0}$ there is a metric coupling such that ${\delta_{(x_0,y_0)}\{(x,y)\ :\ d(x,y)\geq\epsilon\} = 0}$ which shows that $\displaystyle d_{GP}^1(X,Y) = 0.$ Finally, consider the second variant of the Gromov-Prokhorov metric from (6). Since we only have the one measure coupling ${\nu = \delta_{(x_0,y_0)}}$ we have $\displaystyle d_{GP}^2(X,Y) = \inf\Big\{\epsilon>0\ :\ \delta_{(x_0,y_0)}\otimes\delta_{(x_0,y_0)}\{(x_1,y_1,x_2,y_2)\ :\ \big| |x_1-x_2| - |y_1-y_2|\big|\geq\epsilon\}<\epsilon\Big\}.$ Evaluating the tensored Dirac delta is easy: We have that ${\delta_{(x_0,y_0)}\otimes\delta_{(x_0,y_0)}\{(x_1,y_1,x_2,y_2)\ :\ \big| |x_1-x_2| - |y_1-y_2|\big|\geq\epsilon\}}$ is either one or zero, and it is one if and only if the point ${(x_0,y_0,x_0,y_0)}$ is in the set that is measured. However, we have that ${\big| |x_0-x_0| - |y_0-y_0|\big|}$ is, of course, always zero (and never larger than any ${\epsilon >0}$). Hence, the measure is always zero and hence, this version of the Gromov-Prokhorov distance also gives $\displaystyle d_{GP}^2(X,Y) = 0.$ Note that all four metrics cannot see that the two Diracs are at different points. The reason seems to be that one can “deform the metric space outside of the support of the measures” arbitrarily, in some sense. It seems that the computation of ${d_{GW}^2}$ and ${d_{GP}^2}$ is easier, since one only needs to know the measure couplings and no metric couplings are involved. 4. Next difficult example: Compare some mm-space to a point Ok, we have seen that mm-spaces which carry their measure in just a single point all look alike in both versions of the Gromov-Wasserstein metric and also in both versions of the Gromov-Prokhorov metric. Let’s look at a slightly more difficult example: We consider some mm-space ${(X,d_X,\mu)}$ and want to calculate its distance to a single point, i.e. the mm-space ${Y=\{0\}}$ with the only possible metric and measure. This should somehow measure the “size” or “spread” of the mm-space ${X}$. First, we need to know all measure couplings and all metric couplings between these spaces. The measure couplings are very easy: There is just one, namely $\displaystyle \nu = \mu\otimes \delta$ (i.e.
all subsets of ${X\times \{0\}}$ are treated as if they were subsets of ${X}$). Concerning metric couplings, there are a few more. We allow semi-metrics ${d}$ on the disjoint union ${X\sqcup\{0\}}$: Since ${d}$ should respect the metric ${d_X}$ on ${X}$ we see that all metric couplings are parametrized by the points ${x_0}$ in ${X}$ by identifying ${0}$ (the element in ${Y}$) with this point ${x_0}$, i.e. all metric couplings are of the form ${d_{x_0}}$ defined by $\displaystyle d_{x_0}(x,y) = d_X(x,y)\ (\text{for}\ x,y\in X),\qquad d_{x_0}(x,0) = d_X(x,x_0).$ (This only gives semi-metrics since we have ${d_{x_0}(x_0,0)=0}$ although, formally, ${x_0\neq 0}$.) Let’s calculate the first Gromov-Wasserstein metric: There is only one measure coupling and we use the parametrization of the metric couplings to deduce $\displaystyle \begin{array}{rcl} d_{GW}^1(X,\{0\}) &=& \inf_d\Big(\int_{X\times\{0\}} d(x,0)^pd(\mu\otimes \delta)(x,y)\Big)^{1/p}\\ &=& \inf_{x_0\in X}\Big(\int_{X} d_X(x,x_0)^pd\mu\Big)^{1/p}. \end{array}$ This quantity seems to be known as the “minimal ${p}$-th central moment” of ${(X,d_X,\mu)}$. The second variant of the Gromov-Wasserstein metric is (remember, there is only one measure coupling) $\displaystyle \begin{array}{rcl} d_{GW}^2(X,\{0\}) &=& \tfrac12\Big(\int_{X\times\{0\}}\int_{X\times\{0\}} |d_X(x_1,x_2) - d_Y(0,0)|^p d(\mu\times\delta)(x_1,y_1) d(\mu\times\delta)(x_2,y_2) \Big)^{1/p}\\ &=& \tfrac12\Big(\int_X\int_X d_X(x_1,x_2)^p d\mu d\mu\Big)^{1/p}. \end{array}$ This quantity (without the factor ${1/2}$) is called the “${p}$-diameter” of ${(X,d_X)}$. Let’s turn to the Gromov-Prokhorov metrics. The first one is (remember that the metric couplings ${d}$ are parametrized by the points in ${X}$) $\displaystyle \begin{array}{rcl} d_{GP}^1(X,\{0\}) &=& \inf_{d}\Big\{\epsilon>0\ :\ (\mu\otimes\delta)\{(x,0)\ :\ d(x,0)\geq\epsilon\}\leq\epsilon\Big\}\\ &=& \inf_{x_0\in X}\Big\{\epsilon>0\ :\ \mu\{x\ :\ d_X(x,x_0)\geq\epsilon\}\leq\epsilon\Big\}. \end{array}$ If this looks familiar, then you may have encountered the Ky-Fan metric already: The Ky-Fan metric is a metric between random variables ${\xi_1}$ and ${\xi_2}$ defined on the same probability space ${(X,\mu)}$ with values in a metric space with metric ${d}$. It reads as $\displaystyle d_{KF}(\xi_1,\xi_2) = \inf\Big\{\epsilon>0\ :\ \mu\{\omega\ :\ d(\xi_1(\omega),\xi_2(\omega))\geq\epsilon\}\leq\epsilon\Big\}.$ Hence, the first version of the Gromov-Prokhorov metric is $\displaystyle d_{GP}^1(X,\{0\}) = \inf_{x_0\in X}\ d_{KF}(\mathrm{id},x_0),$ i.e., the minimal Ky-Fan metric between the identity mapping and the constant mappings. (In other words, it measures how far the identity is from the constant mappings in the sense of Ky Fan.) The second variant of the Gromov-Prokhorov metric is (remember, the only measure coupling is ${\nu = \mu\times \delta}$) $\displaystyle \begin{array}{rcl} d_{GP}^2(X,\{0\}) &=& \inf \Big\{\epsilon>0\ :\ \nu\otimes\nu\{(x_1,0,x_2,0)\ :\ |d_X(x_1,x_2) -0|\geq\epsilon\}\leq\epsilon\Big\}\\ &=& \inf\Big\{\epsilon>0\ :\ \mu\otimes\mu\{(x_1,x_2)\ :\ d_X(x_1,x_2)\geq\epsilon\}\leq\epsilon\Big\}. \end{array}$ I do not have a neat name or a good intuition for this metric yet (although it also looks like it measures “size” or “non-localization” of ${X}$ in some sense). If you have one, let me know! With this post I delve into a topic which is somehow new to me, although I planned to look deeper into this for quite some time already.
I stumbled upon the paper Gromov-Wasserstein distances and the metric approach to object matching by Facundo Mémoli, which was a pleasure to read and motivated this post.

1. Comparing measures with norms and metrics

There are different notions in mathematics to compare two objects; think of the size of real numbers, the cardinality of sets or the length of the difference of two vectors. Here we will deal not only with the comparison of objects but with “measures of similarity”. Two fundamental notions for this are norms in vector spaces and metrics. The norm is the stronger concept in that it uses more structure than a metric; also, every norm induces a metric but not the other way round. There are occasions in which both a norm and a metric are available but lead to different concepts of similarity. One of these instances occurs in sparse recovery, especially in the continuous formulation, e.g. as described in a previous post. Consider the unit interval ${I = [0,1]}$ and two Radon measures ${\mu_1}$ and ${\mu_2}$ on ${I}$ (${I}$ could also be an arbitrary metric space). On the space of Radon measures ${\mathfrak{M}(I)}$ there is the variation norm $\displaystyle \|\mu\|_{\mathfrak{M}}= \sup_\Pi\sum_{A\in\Pi}|\mu(A)|$ where the supremum is taken over all partitions ${\Pi}$ of ${I}$ into a finite number of measurable sets. Moreover, there are different metrics one can put on the space of Radon measures, e.g. the Prokhorov metric which is defined for two probability measures (e.g. non-negative ones with unit total mass) $\displaystyle \begin{array}{rcl} d_P(\mu_1,\mu_2) & = & \inf\{\epsilon>0\ :\ \mu_1(A)\leq \mu_2(A^\epsilon) + \epsilon,\nonumber\\ & & \qquad \mu_2(A)\leq \mu_1(A^\epsilon) + \epsilon\ \text{for all measurable}\ A\} \end{array}$ where ${A^\epsilon}$ denotes the ${\epsilon}$-neighborhood of ${A}$. Another family of metrics is given by the Wasserstein metrics: For ${p\geq 1}$ define $\displaystyle d_{W,p}(\mu_1,\mu_2) = \Big(\inf_\nu\int_{I\times I} |x-y|^p d\nu(x,y)\Big)^{1/p} \ \ \ \ \ (1)$ where the infimum is taken over all measure couplings of ${\mu_1}$ and ${\mu_2}$, that is, all measures ${\nu}$ on ${I\times I}$ such that for measurable ${A}$ it holds that $\displaystyle \nu(A\times I) = \mu_1(A)\ \text{and}\ \nu(I\times A) = \mu_2(A).$ Example 1 We compare two Dirac measures ${\mu_1 = \delta_{x_1}}$ and ${\mu_2 = \delta_{x_2}}$ located at distinct points ${x_1\neq x_2}$ in ${I}$. The variation norm measures their distance as $\displaystyle \|\mu_1-\mu_2\|_{\mathfrak{M}} = \sup_\Pi\sum_{A\in\Pi}|\delta_{x_1}(A) - \delta_{x_2}(A)| = 2$ (choose ${\Pi}$ such that it contains ${A_1}$ and ${A_2}$ small enough that ${x_1\in A_1}$, ${x_2\in A_2}$ but ${x_1\notin A_2}$ and ${x_2\notin A_1}$). To calculate the Prokhorov metric, note that you only need to consider ${A}$‘s which contain only one of the points ${x_{1/2}}$, and hence it evaluates to $\displaystyle d_P(\mu_1,\mu_2) = |x_1-x_2|.$ For the Wasserstein metric we observe that there is only one possible measure coupling of ${\delta_{x_1}}$ and ${\delta_{x_2}}$, namely the measure ${\nu = \delta_{(x_1,x_2)}}$. Hence, we have $\displaystyle d_{W,p}(\mu_1,\mu_2) = \Big(\int_{I\times I}|x-y|^pd\delta_{(x_1,x_2)}(x,y)\Big)^{1/p} = |x_1-x_2|.$ The variation norm distinguishes the two Diracs but is not able to grasp the distance of their supports. On the other hand, both metrics return the geometric distance of the supports in the underlying space ${I}$ as the distance of the Diracs.
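To make Example 1 concrete, here is a tiny Python sketch (plain Python, no libraries; the point locations 0.2 and 0.7 are my own illustrative choices, not taken from the text) that evaluates the variation norm and the Wasserstein distance for two Diracs represented as discrete measures:

def variation_norm(mu1, mu2):
    # ||mu1 - mu2|| for discrete measures given as dicts point -> mass;
    # the supremum over partitions is attained by separating the supports.
    support = set(mu1) | set(mu2)
    return sum(abs(mu1.get(x, 0.0) - mu2.get(x, 0.0)) for x in support)

def wasserstein_diracs(x1, x2, p=2):
    # The only measure coupling of delta_{x1} and delta_{x2} is delta_{(x1,x2)},
    # so the integral in (1) collapses and the metric equals |x1 - x2| for every p.
    return (abs(x1 - x2) ** p) ** (1.0 / p)

x1, x2 = 0.2, 0.7
print(variation_norm({x1: 1.0}, {x2: 1.0}))  # 2.0, independent of the locations
print(wasserstein_diracs(x1, x2))            # 0.5 = |x1 - x2|

Moving the Diracs changes the metric distance but never the norm distance, which is exactly the point of the example.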
Put in pictures: the variation norm of the difference measures the size of this object, while both metrics capture the distance of the measures. It should not go unnoted that convergence in both the Prokhorov metric and the Wasserstein metrics is exactly the weak convergence of probability measures. The above example provides a motivation to study metric structures on spaces, even if they are also equipped with a norm. Another reason to shift attention from normed spaces to metric spaces is the fact that there has emerged a body of work to build a theory of analysis in metric spaces (see, e.g. this answer on mathoverflow or the book Gradient Flows: In Metric Spaces And In The Space Of Probability Measures by Ambrosio, Gigli and Savaré (which puts special emphasis on the space of probability measures)). Yet another motivation for the study of metrics in this way is the problem of comparing shapes (without being precisely defined yet): which of several given shapes look most alike? (Note that shapes need not be two dimensional figures; you may also think of more complex objects like surfaces in three dimensions or Riemannian manifolds.) One may also ask the question how to compare different images defined on different shapes, i.e. different “distributions of colour” on two different shapes.

2. Comparing shapes: Metric spaces

Up to now we tried to compare different measures, defined on the same set. At least to me it seems that both the Prokhorov and the Wasserstein metrics are suited to measure the similarity of measures and, in fact, they do so somewhat more finely than the usual norm does. Let’s try to go one step further and ask ourselves how we could compare two measures ${\mu_1}$ and ${\mu_2}$ which are defined on two different sets. While thinking about an answer one needs to balance several things:

• The setup should be general enough to allow for the comparison of a wide range of objects.
• It should include enough structure to allow meaningful statements.
• It should lead to a measure which is easy enough to handle both analytically and computationally.

For the first and second bullet: We are going to work with measures not on arbitrary sets but on metric spaces. This will allow us to measure distances between points in the sets and, as you probably know, does not pose a severe restriction. Although metric spaces are much more specific than topological spaces, we still aim at quantitative measures which are not provided by topologies. With respect to the last bullet: Note that both the Prokhorov and the Wasserstein metric are defined as infimums over fairly large and not too well structured sets (for the Prokhorov metric one needs to consider all measurable sets and their ${\epsilon}$-neighborhoods; for the Wasserstein metric, one needs to consider all measure couplings). While they can be handled quite well theoretically, their computational realization can be cumbersome. In a similar spirit to Facundo Mémoli’s paper, we work our way up from comparing subsets of metric spaces up to comparing two different metric spaces with two measures defined on them.

2.1. Comparing compact subsets of a metric space: Hausdorff

Let ${(X,d)}$ be a compact metric space.
Almost a hundred years ago Hausdorff introduced a metric on the family of all non-empty compact subsets of a metric space as follows: The Hausdorff metric of two compact subsets ${A}$ and ${B}$ of ${X}$ is defined as $\displaystyle d_H(A,B) = \inf\{\epsilon>0 \ :\ A\subset B_\epsilon,\ B \subset A_\epsilon\}$ (again, using the notion of ${\epsilon}$-neighborhood). This definition seems to be much in the spirit of the Prokhorov metric. Proposition 2.1 in Facundo Mémoli’s paper shows that the Hausdorff metric has an equivalent description as $\displaystyle d_H(A,B) = \inf_R \sup_{(a,b) \in R} d(a,b)$ where the infimum is taken over all correspondences ${R}$ of ${A}$ and ${B}$, i.e., all subsets ${R\subset A\times B}$ such that for all ${a\in A}$ there is ${b\in B}$ such that ${(a,b) \in R}$ and for all ${b\in B}$ there is ${a\in A}$ such that ${(a,b)\in R}$. One may also say set coupling of ${A}$ and ${B}$ instead of correspondence. Example 2 There is always the full coupling ${R = A\times B}$. Slimmer couplings can be obtained from surjective mappings ${f:A\rightarrow B}$ by ${R = \{(a,f(a))\ :\ a\in A\}}$ (or with the roles of ${A}$ and ${B}$ swapped). If you couple a set ${A}$ with itself, there is also the trivial coupling $\displaystyle R = \{(a,a)\ : \ a\in A\}$ which is just the diagonal of ${A\times A}$. Note that the alternative definition of the Hausdorff metric is more in the spirit of the Wasserstein metric: It does not use enlarged objects (by ${\epsilon}$-neighborhoods) but couplings. The Hausdorff metric is indeed a metric on the set ${\mathfrak{C}(X)}$ of all non-empty compact subsets of a metric space ${X}$, and if ${X}$ itself is compact it even holds that ${(\mathfrak{C}(X),d_H)}$ is a compact metric space (a result known as the Blaschke Selection Theorem). One may say that we went up an abstraction ladder one step by moving from ${(X,d)}$ to ${(\mathfrak{C}(X),d_H)}$.

2.2. Comparing compact metric spaces: Gromov-Hausdorff

In the previous subsection we worked within one metric space ${X}$. In the book “Metric Structures for Riemannian and Non-Riemannian Spaces” Misha Gromov introduced a notion to compare two different metric spaces. For compact metric spaces ${X}$ and ${Y}$ the Gromov-Hausdorff metric is defined as $\displaystyle d_{GH}(X,Y) = \inf_{Z,f,g} d_H(f(X),g(Y)) \ \ \ \ \ (2)$ where the infimum is taken over

• all metric spaces ${Z}$ and
• all isometric embeddings ${f}$ and ${g}$ which embed ${X}$ and ${Y}$ into ${Z}$ respectively.

In words: To compute the Gromov-Hausdorff metric, you try to embed both ${X}$ and ${Y}$ into a common larger space isometrically such that they are as close as possible according to the Hausdorff metric in that space. Strictly speaking, the above definition is not well stated as one cannot form an infimum over all metric spaces since this collection does not form a set according to the rules of set theory. More precisely one should write that ${d_{GH}(X,Y)}$ is the infimum over all ${r>0}$ such that there exists a metric space ${Z}$ and isometric embeddings ${f}$ and ${g}$ of ${X}$ and ${Y}$, respectively, such that ${d_H(f(X),g(Y)) < r}$.
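Before moving on, a small computational aside on the plain Hausdorff metric of Section 2.1: for finite subsets, the infimum over correspondences is attained by coupling every point with a nearest partner on the other side, which collapses to the classical two-sided formula. A Python sketch (the sets A and B are illustrative choices):

def hausdorff(A, B, d=lambda x, y: abs(x - y)):
    # d_H(A, B) = inf_R sup_{(a,b) in R} d(a,b); for finite sets an optimal
    # correspondence pairs each point with a nearest partner, which gives
    # max( sup_a inf_b d(a,b), sup_b inf_a d(a,b) ).
    directed = lambda S, T: max(min(d(s, t) for t in T) for s in S)
    return max(directed(A, B), directed(B, A))

A = [0.0, 0.1, 0.5]
B = [0.05, 0.9]
print(hausdorff(A, B))  # 0.4: the point 0.5 is 0.4 away from its nearest partner 0.9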
As the Hausdorff metric could be reformulated with set couplings, there is a reformulation of the Gromov-Hausdorff metric based on metric couplings: A metric coupling of two metric spaces ${(X,d_X)}$ and ${(Y,d_Y)}$ is a metric ${d}$ on the disjoint union ${X\sqcup Y}$ of ${X}$ and ${Y}$ such that for all ${x,x'\in X}$ and ${y,y'\in Y}$ it holds that ${d(x,x') = d_X(x,x')}$ and ${d(y,y') = d_Y(y,y')}$. Example 3 We couple a metric space ${(X,d)}$ with itself. We denote with ${(X',d')}$ an identical copy of ${(X,d)}$ and look for a metric ${D}$ on ${X\sqcup X'}$ that respects the metrics ${d}$ and ${d'}$ in the way a metric coupling has to. To distinguish elements from ${X}$ and ${X'}$ we put a ${'}$ on all quantities from ${X'}$. Moreover, for ${x\in X}$ we denote by ${x'}$ its identical copy in ${X'}$ (and similarly for ${x'\in X'}$, ${x}$ is its identical twin). Then, for any ${\epsilon>0}$ we can define ${D_\epsilon(x,x') = D_\epsilon(x',x) = \epsilon}$ (i.e. the distance between any two identical twins is ${\epsilon}$). By the triangle inequality we get for ${x\in X}$ and ${y'\in X'}$ that ${D_\epsilon(x,y')}$ should fulfill $\displaystyle D_\epsilon(x',y') - D_\epsilon(x',x) \leq D_\epsilon(x,y') \leq D_\epsilon(x,y) + D_\epsilon(y,y')$ and hence $\displaystyle d(x,y) - \epsilon \leq D_\epsilon(x,y') \leq d(x,y) + \epsilon.$ Indeed we can choose ${D_\epsilon(x,y') = d(x,y) + \epsilon}$ for ${x\in X}$ and ${y'\in X'}$ (which is consistent with the distance ${\epsilon}$ between twins), leading to one specific metric coupling for any ${\epsilon}$. These couplings allow us to distinguish identical twins and behave as metrics on the whole disjoint union. In the limiting case ${\epsilon\rightarrow 0}$ we do not obtain a metric but a semi-metric or pseudo-metric, which is just the same as a metric but without the assumption that ${d(x,y) = 0}$ implies that ${x=y}$. Example 4 The above example of a metric coupling of a metric space with itself was somehow “reproducing” the given metric as accurately as possible. There are also other couplings that assign very different distances ${D(x,y')}$, and there is also a way to visualize metric couplings: When building the disjoint union of two metric spaces ${X}$ and ${Y}$, you can imagine this as isometrically embedding both in a larger metric space ${Z}$ in a non-overlapping way and obtaining the metric coupling ${D}$ as the restriction of the metric on ${Z}$ to ${X\sqcup Y}$. For ${X=Y=[0,1]}$ you can embed both into ${Z = {\mathbb R}^2}$. A metric coupling which is similar (but not equal) to the coupling of the previous example is obtained by putting ${X}$ and ${Y}$ side by side at distance ${\epsilon}$. A quite different coupling is obtained by putting ${X}$ and ${Y}$ side by side, but in a reversed way. You may even embed them in weirder ways, but remember that the embeddings have to be isometric; hence, distortions are not allowed. This example illustrates that the idea of a metric coupling is in a similar spirit to “embedding two spaces in a common larger one”. With the notion of metric coupling, the Gromov-Hausdorff metric can be written as $\displaystyle d_{GH}(X,Y) = \inf_{R,d} \sup_{(x,y)\in R} d(x,y) \ \ \ \ \ (3)$ where the infimum is taken over all set couplings ${R}$ of ${X}$ and ${Y}$ and all metric couplings ${d}$ of ${(X,d_X)}$ and ${(Y,d_Y)}$.
In words: To compute the Gromov-Hausdorff metric this way, you look for a set coupling of the base sets ${X}$ and ${Y}$ and a metric coupling ${d}$ of the metrics ${d_X}$ and ${d_Y}$ such that the maximal distance of two coupled points ${x}$ and ${y}$ is as small as possible. While this may look more complicated than the original definition from (2), note that the original definition uses all metric spaces ${Z}$ in which you can embed ${X}$ and ${Y}$ isometrically, which seems nearly impossible to realize. Granted, the new definition also considers a lot of quantities. Also note that this definition is in the spirit of the Wasserstein metric from (1): If there were natural measures ${\mu_R}$ on the set couplings ${R}$ we could write $\displaystyle d_{GH}(X,Y) = \inf_{R,d} \Big(\int d(x,y)^p d\mu_R\Big)^{1/p}$ and in the limit ${p\rightarrow\infty}$ we would recover definition (3). Example 5 The Gromov-Hausdorff distance of a metric space ${(X,d_X)}$ to itself is easily seen to be zero: Consider the trivial coupling ${R = \{(x,x)\ :\ x\in X\}}$ from Example 2 and the family ${D_\epsilon}$ of metric couplings from Example 3. Then we have ${d_{GH}(X,X) \leq \epsilon}$ for any ${\epsilon >0}$, showing ${d_{GH}(X,X) = 0}$. Let’s take one of the next simplest examples and compute the distance of ${X = [0,1]}$ and ${Y=[0,2]}$, both equipped with the euclidean metric. We couple the sets ${X}$ and ${Y}$ by ${R = \{(x,2x)\ : \ x\in X\}}$ and the respective metrics by embedding ${X}$ and ${Y}$ into ${{\mathbb R}^2}$ as follows: Put ${Y}$ on the line segment from ${(0,0)}$ to ${(2,0)}$ and ${X}$ on the segment from ${(\tfrac12,\epsilon)}$ to ${(\tfrac32,\epsilon)}$. Letting ${\epsilon}$ tend to zero, this shows that ${d_{GH}(X,Y) \leq \tfrac12}$ and actually, we have equality here. There is another reformulation of the Gromov-Hausdorff metric, the equivalence of which is shown in Theorem 7.3.25 in the book “A Course in Metric Geometry” by Dmitri Burago, Yuri Burago and Sergei Ivanov: $\displaystyle d_{GH}(X,Y) = \tfrac12\inf_R \sup_{(x_1,y_1),(x_2,y_2)\in R}\big| d_X(x_1,x_2) - d_Y(y_1,y_2)\big| \ \ \ \ \ (4)$ where the infimum is taken over all set couplings ${R}$ of ${X}$ and ${Y}$. In words: Look for a set coupling such that any two coupled pairs ${(x_1,y_1)}$ and ${(x_2,y_2)}$ have the “most equal” distance. This reformulation may have the advantage over the form (3) in that it only considers the set couplings and the given metrics ${d_X}$ and ${d_Y}$, and no metric coupling is needed. Note that, like the previous reformulation (3), it is also in the spirit of the Wasserstein metric: If there were natural measures ${\mu_R}$ on the set couplings ${R}$, we could write $\displaystyle d_{GH}(X,Y) = \tfrac12\inf_R \Big(\int_{R\times R}\big| d_X(x_1,x_2) - d_Y(y_1,y_2)\big|^p d\mu_R(x_1,y_1)d\mu_R(x_2,y_2)\Big)^{1/p}.$ and recover the formulation (4) in the limit ${p\rightarrow\infty}$. One may say that we went up the abstraction ladder one step further by moving from ${(X,d)}$ to ${(\mathfrak{C}(X),d_H)}$ to ${(\text{All compact metric spaces},d_{GH})}$. Since this post has grown pretty long already, I decided to do the next step (which is the already announced metric on metric spaces which additionally carry some measure on them – so-called metric measure spaces) in a later post.
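Formulation (4) also suggests a direct, if hopelessly inefficient, computation: for tiny finite metric spaces one can simply enumerate every set coupling and keep the best one. A brute-force Python sketch (the two example spaces, a two-point space and a three-point path, are my own discrete stand-ins for the intervals [0,1] and [0,2]):

from itertools import product

def gromov_hausdorff(dX, dY):
    # Brute force over all set couplings R of X and Y via formulation (4);
    # exponential in |X|*|Y|, so purely for illustration.
    nX, nY = len(dX), len(dY)
    pairs = [(i, j) for i in range(nX) for j in range(nY)]
    best = float("inf")
    for mask in range(1, 2 ** len(pairs)):       # each bitmask encodes a subset R
        R = [p for k, p in enumerate(pairs) if mask >> k & 1]
        if {i for i, _ in R} != set(range(nX)) or {j for _, j in R} != set(range(nY)):
            continue                             # not a correspondence: a point is uncoupled
        distortion = max(abs(dX[i1][i2] - dY[j1][j2])
                         for (i1, j1), (i2, j2) in product(R, R))
        best = min(best, distortion / 2.0)
    return best

dX = [[0, 1], [1, 0]]                    # two points at distance 1
dY = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]   # a path 0 - 1 - 2 with unit steps
print(gromov_hausdorff(dX, dY))          # 0.5, matching Example 5 for [0,1] vs [0,2]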
2014-04-24 23:49:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 623, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9207561612129211, "perplexity": 369.94157111301365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
https://terrytao.wordpress.com/tag/four-moment-theorem/
Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of local spectral statistics of non-Hermitian matrices“. The main result of this paper is a “Four Moment Theorem” that establishes universality for local spectral statistics of non-Hermitian matrices with independent entries, under the additional hypotheses that the entries of the matrix decay exponentially, and match moments with either the real or complex gaussian ensemble to fourth order. This is the non-Hermitian analogue of a long string of recent results establishing universality of local statistics in the Hermitian case (as discussed for instance in this recent survey of Van and myself, and also in several other places). The complex case is somewhat easier to describe. Given a (non-Hermitian) random matrix ensemble ${M_n}$ of ${n \times n}$ matrices, one can arbitrarily enumerate the (geometric) eigenvalues as ${\lambda_1(M_n),\ldots,\lambda_n(M_n) \in {\bf C}}$, and one can then define the ${k}$-point correlation functions ${\rho^{(k)}_n: {\bf C}^k \rightarrow {\bf R}^+}$ to be the symmetric functions such that $\displaystyle \int_{{\bf C}^k} F(z_1,\ldots,z_k) \rho^{(k)}_n(z_1,\ldots,z_k)\ dz_1 \ldots dz_k$ $\displaystyle = {\bf E} \sum_{1 \leq i_1 < \ldots < i_k \leq n} F(\lambda_{i_1}(M_n),\ldots,\lambda_{i_k}(M_n)).$ In the case when ${M_n}$ is drawn from the complex gaussian ensemble, so that all the entries are independent complex gaussians of mean zero and variance one, it is a classical result of Ginibre that the asymptotics of ${\rho^{(k)}_n}$ near some point ${z \sqrt{n}}$ as ${n \rightarrow \infty}$ with ${z \in {\bf C}}$ fixed are given by the determinantal rule $\displaystyle \rho^{(k)}_n(z\sqrt{n} + w_1,\ldots,z\sqrt{n}+w_k) \rightarrow \hbox{det}( K(w_i,w_j) )_{1 \leq i,j \leq k} \ \ \ \ \ (1)$ for ${|z| < 1}$ and $\displaystyle \rho^{(k)}_n(z\sqrt{n} + w_1,\ldots,z\sqrt{n}+w_k) \rightarrow 0$ for ${|z| > 1}$, where ${K}$ is the reproducing kernel $\displaystyle K(z,w) := \frac{1}{\pi} e^{-|z|^2/2 - |w|^2/2 + z \overline{w}}.$ (There is also an asymptotic for the boundary case ${|z|=1}$, but it is more complicated to state.) In particular, we see that ${\rho^{(1)}_n(z \sqrt{n}) \rightarrow \frac{1}{\pi} 1_{|z| \leq 1}}$ for almost every ${z}$, which is a manifestation of the well-known circular law for these matrices; but the circular law only captures the macroscopic structure of the spectrum, whereas the asymptotic (1) describes the microscopic structure. Our first main result is that the asymptotic (1) for ${|z|<1}$ also holds (in the sense of vague convergence) when ${M_n}$ is a matrix whose entries are independent with mean zero, variance one, exponentially decaying tails, and which all match moments with the complex gaussian to fourth order. (Actually we prove a stronger result than this which is valid for all bounded ${z}$ and has more uniform bounds, but is a bit more technical to state.) An analogous result is also established for real gaussians (but now one has to separate the correlation function into components depending on how many eigenvalues are real and how many are strictly complex; also, the limiting distribution is more complicated, being described by Pfaffians rather than determinants). Among other things, this allows us to partially extend some known results on complex or real gaussian ensembles to more general ensembles.
For instance, there is a result of Rider which establishes a central limit theorem for the number of eigenvalues of a complex gaussian matrix in a mesoscopic disk; from our results, we can extend this central limit theorem to matrices that match the complex gaussian ensemble to fourth order, provided that the disk is small enough (for technical reasons, our error bounds are not strong enough to handle large disks). Similarly, extending some results of Edelman-Kostlan-Shub and of Forrester-Nagao, we can show that for a matrix matching the real gaussian ensemble to fourth order, the number of real eigenvalues is ${\sqrt{\frac{2n}{\pi}} + O(n^{1/2-c})}$ with probability ${1-O(n^{-c})}$ for some absolute constant ${c>0}$. There are several steps involved in the proof. The first step is to apply the Girko Hermitisation trick to replace the problem of understanding the spectrum of a non-Hermitian matrix with that of understanding the spectrum of various Hermitian matrices. The two identities that realise this trick are, firstly, Jensen’s formula $\displaystyle \log |\det(M_n-z_0)| = - \sum_{1 \leq i \leq n: \lambda_i(M_n) \in B(z_0,r)} \log \frac{r}{|\lambda_i(M_n)-z_0|}$ $\displaystyle + \frac{1}{2\pi} \int_0^{2\pi} \log |\det(M_n-z_0-re^{i\theta})|\ d\theta$ that relates the local distribution of eigenvalues to the log-determinants ${\log |\det(M_n-z_0)|}$, and secondly the elementary identity $\displaystyle \log |\det(M_n - z)| = \frac{1}{2} \log|\det W_{n,z}| + \frac{1}{2} n \log n$ that relates the log-determinants of ${M_n-z}$ to the log-determinants of the Hermitian matrices $\displaystyle W_{n,z} := \frac{1}{\sqrt{n}} \begin{pmatrix} 0 & M_n -z \\ (M_n-z)^* & 0 \end{pmatrix}.$ The main difficulty is then to obtain concentration and universality results for the Hermitian log-determinants ${\log|\det W_{n,z}|}$. This turns out to be a task that is analogous to the task of obtaining concentration for Wigner matrices (as we did in this recent paper), as well as central limit theorems for log-determinants of Wigner matrices (as we did in this other recent paper). In both of these papers, the main idea was to use the Four Moment Theorem for Wigner matrices (which can now be proven relatively easily by a combination of the local semi-circular law and resolvent swapping methods), combined with (in the latter paper) a central limit theorem for the gaussian unitary ensemble (GUE). This latter task was achieved by using the convenient Trotter normal form to tridiagonalise a GUE matrix, which has the effect of revealing the determinant of that matrix as the solution to a certain linear stochastic difference equation, and one can analyse the distribution of that solution via such tools as the martingale central limit theorem. The matrices ${W_{n,z}}$ are somewhat more complicated than Wigner matrices (for instance, the semi-circular law must be replaced by a distorted Marchenko-Pastur law), but the same general strategy works to obtain concentration and universality for their log-determinants. The main new difficulty that arises is that the analogue of the Trotter normal form for gaussian random matrices is not tridiagonal, but rather Hessenberg (i.e. upper-triangular except for the lower diagonal). This ultimately has the effect of expressing the relevant determinant as the solution to a nonlinear stochastic difference equation, which is a bit trickier to solve for.
Fortunately, it turns out that one only needs good lower bounds on the solution, as one can use the second moment method to upper bound the determinant and hence the log-determinant (following a classical computation of Turan). This simplifies the analysis of the equation somewhat. While this result is the first local universality result in the category of random matrices with independent entries, there are still two limitations to the result which one would like to remove. The first is the moment matching hypothesis on the matrix. Very recently, one of the ingredients of our paper, namely the local circular law, was proved without moment matching hypotheses by Bourgade, Yau, and Yin (provided one stays away from the edge of the spectrum); however, at this time of writing the other main ingredient – the universality of the log-determinant – still requires moment matching. (The standard tool for obtaining universality without moment matching hypotheses is the heat flow method (and more specifically, the local relaxation flow method), but the analogue of Dyson Brownian motion in the non-Hermitian setting appears to be somewhat intractable, being a coupled flow on both the eigenvalues and eigenvectors rather than just on the eigenvalues alone.)

Van Vu and I have just uploaded to the arXiv our paper “Random matrices: The Universality phenomenon for Wigner ensembles“. This survey is a longer version (58 pages) of a previous short survey we wrote up a few months ago. The survey focuses on recent progress in understanding the universality phenomenon for Hermitian Wigner ensembles, of which the Gaussian Unitary Ensemble (GUE) is the most well known. The one-sentence summary of this progress is that many of the asymptotic spectral statistics (e.g. correlation functions, eigenvalue gaps, determinants, etc.) that were previously known for GUE matrices are now known for very large classes of Wigner ensembles as well. There are however a wide variety of results of this type, due to the large number of interesting spectral statistics, the varying hypotheses placed on the ensemble, and the different modes of convergence studied, and it is difficult to isolate a single such result currently as the definitive universality result. (In particular, there is at present a tradeoff between generality of ensemble and strength of convergence; the universality results that are available for the most general classes of ensemble are only presently able to demonstrate a rather weak sense of convergence to the universal distribution (involving an additional averaging in the energy parameter), which limits the applicability of such results to a number of interesting questions in which energy averaging is not permissible, such as the study of the least singular value of a Wigner matrix, or of related quantities such as the condition number or determinant. But it is conceivable that this tradeoff is a temporary phenomenon and may be eliminated by future work in this area; in the case of Hermitian matrices whose entries have the same second moments as those of the GUE ensemble, for instance, the need for energy averaging has already been removed.) Nevertheless, throughout the family of results that have been obtained recently, there are two main methods which have been fundamental to almost all of the recent progress in extending from special ensembles such as GUE to general ensembles.
The first method, developed extensively by Erdos, Schlein, Yau, Yin, and others (and building on an initial breakthrough by Johansson), is the heat flow method, which exploits the rapid convergence to equilibrium of the spectral statistics of matrices undergoing Dyson-type flows towards GUE. (An important aspect of this method is the ability to accelerate the convergence to equilibrium by localising the Hamiltonian, in order to eliminate the slowest modes of the flow; this refinement of the method is known as the “local relaxation flow” method. Unfortunately, the translation mode is not accelerated by this process, which is the principal reason why results obtained by pure heat flow methods still require an energy averaging in the final conclusion; it would be of interest to find a way around this difficulty.) The other method, which goes all the way back to Lindeberg in his classical proof of the central limit theorem, and which was introduced to random matrix theory by Chatterjee and then developed for the universality problem by Van Vu and myself, is the swapping method, which is based on the observation that spectral statistics of Wigner matrices tend to be stable if one replaces just one or two entries of the matrix with another distribution, with the stability of the swapping process becoming stronger if one assumes that the old and new entries have many matching moments. The main formalisations of this observation are known as four moment theorems, because they require four matching moments between the entries, although there are some variant three moment theorems and two moment theorems in the literature as well. Our initial four moment theorems were focused on individual eigenvalues (and were later extended to eigenvectors), but it was later observed by Erdos, Yau, and Yin that simpler four moment theorems could also be established for aggregate spectral statistics, such as the coefficients of the Green’s function, and Knowles and Yin also subsequently observed that these latter theorems could be used to recover a four moment theorem for eigenvalues and eigenvectors, giving an alternate approach to proving such theorems. Interestingly, it seems that the heat flow and swapping methods are complementary to each other; the heat flow methods are good at removing moment hypotheses on the coefficients, while the swapping methods are good at removing regularity hypotheses. To handle general ensembles with minimal moment or regularity hypotheses, it is thus necessary to combine the two methods (though perhaps in the future a third method, or a unification of the two existing methods, might emerge). Besides the heat flow and swapping methods, there are also a number of other basic tools that are also needed in these results, such as local semicircle laws and eigenvalue rigidity, which are also discussed in the survey. We also survey how universality has been established for a wide variety of spectral statistics; the ${k}$-point correlation functions are the most well known of these statistics, but they do not tell the whole story (particularly if one can only control these functions after an averaging in the energy), and there are a number of other statistics, such as eigenvalue counting functions, determinants, or spectral gaps, for which the above methods can be applied.
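As a toy numerical illustration at the coarsest scale (the results surveyed above concern far finer local statistics), the following numpy sketch compares the empirical spectral distributions of a gaussian and a ±1 Bernoulli Wigner matrix; the two agree because both follow the semicircle law:

import numpy as np

def wigner_spectrum(sample_entries, n):
    # Build a real symmetric Wigner matrix from an n x n array of iid entries
    # and return its eigenvalues, rescaled by sqrt(n).
    A = sample_entries((n, n))
    H = np.triu(A, 1)
    H = H + H.T + np.diag(np.diag(A))
    return np.linalg.eigvalsh(H) / np.sqrt(n)

rng = np.random.default_rng(1)
n = 1000
gauss = wigner_spectrum(lambda shape: rng.standard_normal(shape), n)
bern = wigner_spectrum(lambda shape: rng.choice([-1.0, 1.0], size=shape), n)

# A few quantiles of the two empirical distributions nearly coincide,
# since both converge to the semicircle law on [-2, 2].
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(gauss, q), np.quantile(bern, q))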
In order to prevent the survey from becoming too enormous, we decided to restrict attention to Hermitian matrix ensembles, whose entries off the diagonal are identically distributed, as this is the case in which the strongest results are available. There are several results that are applicable to more general ensembles than these which are briefly mentioned in the survey, but they are not covered in detail. We plan to submit this survey eventually to the proceedings of a workshop on random matrix theory, and will continue to update the references on the arXiv version until the time comes to actually submit the paper. Finally, in the survey we issue some errata for previous papers of Van and myself in this area, mostly centering around the three moment theorem (a variant of the more widely used four moment theorem), for which the original proof of Van and myself was incomplete. (Fortunately, as the three moment theorem had many fewer applications than the four moment theorem, and most of the applications that it did have ended up being superseded by subsequent papers, the actual impact of this issue was limited, but still an erratum is in order.)

Van Vu and I have just uploaded to the arXiv our short survey article, “Random matrices: The Four Moment Theorem for Wigner ensembles“, submitted to the MSRI book series, as part of the proceedings on the MSRI semester program on random matrix theory from last year.  This is a highly condensed version (at 17 pages) of a much longer survey (currently at about 48 pages, though not yet finished) that we are working on, devoted to the recent advances in understanding the universality phenomenon for spectral statistics of Wigner matrices.  In this abridged version of the survey, we focus on a key tool in the subject, namely the Four Moment Theorem which roughly speaking asserts that the statistics of a Wigner matrix depend only on the first four moments of the entries.  We give a sketch of proof of this theorem, and two sample applications: a central limit theorem for individual eigenvalues of a Wigner matrix (extending a result of Gustavsson in the case of GUE), and the verification of a conjecture of Wigner, Dyson, and Mehta on the universality of the asymptotic k-point correlation functions even for discrete ensembles (provided that we interpret convergence in the vague topology sense). For reasons of space, this paper is very far from an exhaustive survey even of the narrow topic of universality for Wigner matrices, but should hopefully be an accessible entry point into the subject nevertheless.

Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Universality of eigenvectors“, submitted to Random Matrices: Theory and Applications. This paper concerns an extension of our four moment theorem for eigenvalues. Roughly speaking, that four moment theorem asserts (under mild decay conditions on the coefficients of the random matrix) that the fine-scale structure of individual eigenvalues of a Wigner random matrix depends only on the first four moments of each of the entries. In this paper, we extend this result from eigenvalues to eigenvectors, and specifically to the coefficients of, say, the ${i^{th}}$ eigenvector ${u_i(M_n)}$ of a Wigner random matrix ${M_n}$. Roughly speaking, the main result is that the distribution of these coefficients also only depends on the first four moments of each of the entries.
In particular, as the distributions of the coefficients of eigenvectors of invariant ensembles such as GOE or GUE are known to be asymptotically gaussian real (in the GOE case) or gaussian complex (in the GUE case), the same asymptotic automatically holds for Wigner matrices whose coefficients match GOE or GUE to fourth order. (A technical point here: strictly speaking, the eigenvectors ${u_i(M_n)}$ are only determined up to a phase, even when the eigenvalues are simple. So, to phrase the question properly, one has to perform some sort of normalisation, for instance by working with the coefficients of the spectral projection operators ${P_i(M_n) := u_i(M_n) u_i(M_n)^*}$ instead of the eigenvectors, or rotating each eigenvector by a random phase, or by fixing the first component of each eigenvector to be positive real. This is a fairly minor technical issue here, though, and will not be discussed further.) This theorem strengthens a four moment theorem for eigenvectors recently established by Knowles and Yin (by a somewhat different method), in that the hypotheses are weaker (no level repulsion assumption is required, and the matrix entries only need to obey a finite moment condition rather than an exponential decay condition) and the conclusion is slightly stronger (less regularity is needed on the test function, and one can handle the joint distribution of polynomially many coefficients, rather than boundedly many coefficients). On the other hand, the Knowles-Yin paper can also handle generalised Wigner ensembles in which the variances of the entries are allowed to fluctuate somewhat. The method used here is a variation of that in our original paper (incorporating the subsequent improvements to extend the four moment theorem from the bulk to the edge, and to replace exponential decay by a finite moment condition). That method was ultimately based on the observation that if one swapped a single entry (and its adjoint) in a Wigner random matrix, then an individual eigenvalue ${\lambda_i(M_n)}$ would not fluctuate much as a consequence (as long as one had already truncated away the event of an unexpectedly small eigenvalue gap). The same analysis shows that the projection matrices ${P_i(M_n)}$ obey the same stability property. As an application of the eigenvalue four moment theorem, we establish a four moment theorem for the coefficients of resolvent matrices ${(\frac{1}{\sqrt{n}} M_n - zI)^{-1}}$, even when ${z}$ is on the real axis (though in that case we need to make a level repulsion hypothesis, which has already been verified in many important special cases and is likely to be true in general). This improves on an earlier four moment theorem for resolvents of Erdos, Yau, and Yin, which required ${z}$ to stay some distance away from the real axis (specifically, that ${\hbox{Im}(z) \geq n^{-1-c}}$ for some small ${c>0}$).

Van Vu and I have just uploaded to the arXiv our paper “Random matrices: Localization of the eigenvalues and the necessity of four moments“, submitted to Probability Theory and Related Fields. This paper concerns the distribution of the eigenvalues $\displaystyle \lambda_1(M_n) \leq \ldots \leq \lambda_n(M_n)$ of a Wigner random matrix ${M_n}$. More specifically, we consider ${n \times n}$ Hermitian random matrices whose entries have mean zero and variance one, with the upper-triangular portion of the matrix independent, with the diagonal elements iid, and the real and imaginary parts of the strictly upper-triangular portion of the matrix iid.
For technical reasons we also assume that the distribution of the coefficients decays exponentially or better. Examples of Wigner matrices include the Gaussian Unitary Ensemble (GUE) and random symmetric complex Bernoulli matrices (which equal ${\pm 1}$ on the diagonal, and ${\pm \frac{1}{\sqrt{2}} \pm \frac{i}{\sqrt{2}}}$ off the diagonal). The Gaussian Orthogonal Ensemble (GOE) is also an example once one makes the minor change of setting the diagonal entries to have variance two instead of one. The most fundamental theorem about the distribution of these eigenvalues is the Wigner semi-circular law, which asserts that (almost surely) one has $\displaystyle \frac{1}{n} \sum_{i=1}^n \delta_{\lambda_i(M_n)/\sqrt{n}} \rightarrow \rho_{sc}(x)\ dx$ (in the vague topology) where ${\rho_{sc}(x) := \frac{1}{2\pi} (4-x^2)_+^{1/2}}$ is the semicircular distribution. (See these lecture notes on this blog for more discussion of this law.) One can phrase this law in a number of equivalent ways. For instance, in the bulk region ${\epsilon n \leq i \leq (1-\epsilon) n}$, one almost surely has $\displaystyle \lambda_i(M_n) = \gamma_i \sqrt{n} + o(\sqrt{n}) \ \ \ \ \ (1)$ uniformly in ${i}$, where the classical location ${\gamma_i \in [-2,2]}$ of the (normalised) ${i^{th}}$ eigenvalue ${\frac{1}{\sqrt{n}} \lambda_i}$ is defined by the formula $\displaystyle \int_{-2}^{\gamma_i} \rho_{sc}(x)\ dx := \frac{i}{n}.$ The bound (1) also holds in the edge case (by using the operator norm bound ${\| M_n\|_{op} = (2+o(1)) \sqrt{n}}$, due to Bai and Yin), but for sake of exposition we shall restrict attention here only to the bulk case. From (1) we see that the semicircular law controls the eigenvalues at the coarse scale of ${\sqrt{n}}$. There has been a significant amount of work in the literature in obtaining control at finer scales, and in particular at the scale of the average eigenvalue spacing, which is of the order of ${\sqrt{n}/n = n^{-1/2}}$. For instance, we now have a universal limit theorem for the normalised eigenvalue spacing ${\sqrt{n}(\lambda_{i+1}(M_n) - \lambda_i(M_n))}$ in the bulk for all Wigner matrices, a result of Erdos, Ramirez, Schlein, Vu, Yau, and myself. One tool for this is the four moment theorem of Van and myself, which roughly speaking shows that the behaviour of the eigenvalues at the scale ${n^{-1/2}}$ (and even at the slightly finer scale of ${n^{-1/2-c}}$ for some absolute constant ${c>0}$) depends only on the first four moments of the matrix entries. There is also a slight variant, the three moment theorem, which asserts that the behaviour of the eigenvalues at the slightly coarser scale of ${n^{-1/2+\epsilon}}$ depends only on the first three moments of the matrix entries. It is natural to ask whether these moment conditions are necessary. From the result of Erdos, Ramirez, Schlein, Vu, Yau, and myself, it is known that to control the eigenvalue spacing ${\lambda_{i+1}(M_n) - \lambda_i(M_n)}$ at the critical scale ${n^{-1/2}}$, no knowledge of any moments beyond the second (i.e. beyond the mean and variance) is needed. So it is natural to conjecture that the same is true for the eigenvalues themselves. The main result of this paper is to show that this is not the case; that at the critical scale ${n^{-1/2}}$, the distribution of eigenvalues ${\lambda_i(M_n)}$ is sensitive to the fourth moment, and so the hypothesis of the four moment theorem cannot be relaxed. Heuristically, the reason for this is easy to explain.
One begins with an inspection of the expected fourth moment $\displaystyle \sum_{i=1}^n {\bf E}(\lambda_i(M_n)^4) = {\bf E} \hbox{tr} M_n^4.$ A standard moment method computation shows that the right hand side is equal to $\displaystyle 2n^3 + 2a n^2 + \ldots$ where ${a}$ is the fourth moment of the real part of the off-diagonal coefficients of ${M_n}$. In particular, a change in the fourth moment ${a}$ by ${O(1)}$ leads to a change in the expression ${\sum_{i=1}^n {\bf E}(\lambda_i(M_n)^4)}$ by ${O(n^2)}$. Thus, for a typical ${i}$, one expects ${{\bf E}(\lambda_i(M_n)^4)}$ to shift by ${O(n)}$; since ${\lambda_i(M_n) = O(\sqrt{n})}$ on the average, we thus expect ${\lambda_i(M_n)}$ itself to shift by about ${O(n^{-1/2})}$ by the mean-value theorem. To make this rigorous, one needs a sufficiently strong concentration of measure result for ${\lambda_i(M_n)}$ that keeps it close to its mean value. There are already a number of such results in the literature. For instance, Guionnet and Zeitouni showed that ${\lambda_i(M_n)}$ was sharply concentrated in an interval of size ${O(n^\epsilon)}$ around ${\sqrt{n} \gamma_i}$ for any ${\epsilon > 0}$ (in the sense that the probability that one was outside this interval was exponentially small). In one of my papers with Van, we showed that ${\lambda_i(M_n)}$ was also weakly concentrated in an interval of size ${O(n^{-1/2+\epsilon})}$ around ${\sqrt{n} \gamma_i}$, in the sense that the probability that one was outside this interval was ${O(n^{-c})}$ for some absolute constant ${c>0}$. Finally, if one made an additional log-Sobolev hypothesis on the entries, it was shown by Erdos, Yau, and Yin that the average variance of ${\lambda_i(M_n)}$ as ${i}$ varied from ${1}$ to ${n}$ was of size ${O(n^{-c})}$ for some absolute ${c>0}$. As it turns out, the first two concentration results are not sufficient to justify the previous heuristic argument. The Erdos-Yau-Yin argument suffices, but requires a log-Sobolev hypothesis. In our paper, we argue differently, using the three moment theorem (together with the theory of the eigenvalues of GUE, which is extremely well developed) to show that the variance of each individual ${\lambda_i(M_n)}$ is ${O(n^{-c})}$ (without averaging in ${i}$). No log-Sobolev hypothesis is required, but instead we need to assume that the third moment of the coefficients vanishes (because we want to use the three moment theorem to compare the Wigner matrix to GUE, and the coefficients of the latter have a vanishing third moment). From this we are able to make the previous arguments rigorous, and show that the mean ${{\bf E} \lambda_i(M_n)}$ is indeed sensitive to the fourth moment of the entries at the critical scale ${n^{-1/2}}$. One curious feature of the analysis is how differently the median and the mean of the eigenvalue ${\lambda_i(M_n)}$ react to the available technology. To control the global behaviour of the eigenvalues (after averaging in ${i}$), it is much more convenient to use the mean, and we have very precise control on global averages of these means thanks to the moment method. But to control local behaviour, it is the median which is much better controlled. For instance, we can localise the median of ${\lambda_i(M_n)}$ to an interval of size ${O(n^{-1/2+\epsilon})}$, but can only localise the mean to a much larger interval of size ${O(n^{-c})}$.
Ultimately, this is because with our current technology there is a possible exceptional event of probability as large as ${O(n^{-c})}$ for which all eigenvalues could deviate as far as ${O(n^\epsilon)}$ from their expected location, instead of their typical deviation of ${O(n^{-1/2})}$. The reason for this is technical, coming from the fact that the four moment theorem method breaks down when two eigenvalues are very close together (less than ${n^{-c}}$ times the average eigenvalue spacing), and so one has to cut out this event, which occurs with a probability of the shape ${O(n^{-c})}$. It may be possible to improve the four moment theorem proof to be less sensitive to eigenvalue near-collisions, in which case the above bounds are likely to improve.

Van Vu and I have just uploaded to the arXiv our paper “Random covariance matrices: Universality of local statistics of eigenvalues“, to be submitted shortly. This paper draws heavily on the technology of our previous paper, in which we established a Four Moment Theorem for the local spacing statistics of eigenvalues of Wigner matrices. This theorem says, roughly speaking, that these statistics are completely determined by the first four moments of the coefficients of such matrices, at least in the bulk of the spectrum. (In a subsequent paper we extended the Four Moment Theorem to the edge of the spectrum.) In this paper, we establish the analogous result for the singular values of rectangular iid matrices ${M = M_{n,p}}$, or (equivalently) the eigenvalues of the associated covariance matrix ${\frac{1}{n} M M^*}$. As is well-known, there is a parallel theory between the spectral theory of random Wigner matrices and that of covariance matrices; for instance, just as the former has asymptotic spectral distribution governed by the semi-circular law, the latter has asymptotic spectral distribution governed by the Marcenko-Pastur law. One reason for the connection can be seen by noting that the singular values of a rectangular matrix ${M}$ are essentially the same thing as the eigenvalues of the augmented matrix $\displaystyle \begin{pmatrix} 0 & M \\ M^* & 0\end{pmatrix}$ after eliminating sign ambiguities and degeneracies. So one can view singular values of a rectangular iid matrix as the eigenvalues of a matrix which resembles a Wigner matrix, except that two diagonal blocks of that matrix have been zeroed out. The zeroing out of these elements prevents one from applying the entire Wigner universality theory directly to the covariance matrix setting (in particular, the crucial Talagrand concentration inequality for the magnitude of a projection of a random vector to a subspace does not work perfectly once there are many zero coefficients). Nevertheless, a large part of the theory (particularly the deterministic components of the theory, such as eigenvalue variation formulae) carries through without much difficulty. The one place where one has to spend a bit of time to check details is to ensure that the Erdos-Schlein-Yau delocalisation result (that asserts, roughly speaking, that the eigenvectors of a Wigner matrix are about as small in ${\ell^\infty}$ norm as one could hope to get) is also true in the covariance matrix setting, but this is a straightforward (though somewhat tedious) adaptation of the method (which is based on the Stieltjes transform).
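The relation between singular values and the augmented matrix mentioned above is easy to check numerically; a tiny numpy sketch (the dimensions are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(3)
p, n = 3, 5
M = rng.standard_normal((p, n))

# The (p+n) x (p+n) augmented matrix [[0, M], [M^*, 0]].
aug = np.block([[np.zeros((p, p)), M], [M.T, np.zeros((n, n))]])

# Its spectrum is {+-sigma_i} together with |n - p| zeros, so the distinct
# magnitudes reproduce the singular values of M.
eigs = np.sort(np.abs(np.linalg.eigvalsh(aug)))[::-1]
print(eigs[::2][:p])                       # the p singular values
print(np.linalg.svd(M, compute_uv=False))  # the same values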
As an application, we extend the sine kernel distribution of local covariance matrix statistics, first established in the case of Wishart ensembles (when the underlying variables are gaussian) by Nagao and Wadati, and later extended to gaussian-divisible matrices by Ben Arous and Peche, to any distribution which matches one of these distributions up to four moments, which covers virtually all complex distributions with independent iid real and imaginary parts, with basically the lone exception of the complex Bernoulli ensemble. Recently, Erdos, Schlein, Yau, and Yin generalised their local relaxation flow method to also obtain similar universality results for distributions which have a large amount of smoothness, but without any matching moment conditions. By combining their techniques with ours as in our joint paper, one should probably be able to remove both smoothness and moment conditions, in particular now covering the complex Bernoulli ensemble. In this paper we also record a new observation that the exponential decay hypothesis in our earlier paper can be relaxed to a finite moment condition, for a sufficiently high (but fixed) moment. This is done by carefully rearranging the order of steps of the original argument.
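Finally, the global Marchenko-Pastur picture underlying these local results is easy to reproduce numerically; a short numpy sketch (with illustrative parameters of my own choosing, not from the paper):

import numpy as np

rng = np.random.default_rng(2)
p, n = 500, 2000                       # aspect ratio y = p/n = 0.25
M = rng.standard_normal((p, n))
evals = np.linalg.eigvalsh(M @ M.T / n)

# Marchenko-Pastur support is [(1 - sqrt(y))^2, (1 + sqrt(y))^2] for y <= 1.
y = p / n
a, b = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2
print(evals.min(), a)                  # both close to 0.25
print(evals.max(), b)                  # both close to 2.25

# The density sqrt((b - x)(x - a)) / (2 pi y x) integrates to one over [a, b].
xs = np.linspace(a + 1e-9, b, 2000)
density = np.sqrt((b - xs) * (xs - a)) / (2 * np.pi * y * xs)
print(np.trapz(density, xs))           # approximately 1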
2015-08-04 05:22:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 124, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8732333779335022, "perplexity": 274.15028213455577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990445.44/warc/CC-MAIN-20150728002310-00190-ip-10-236-191-2.ec2.internal.warc.gz"}
https://eddiema.ca/
# Ed's Big Plans

## On SSL/TLS and HTTPS — Briefings for my Computer Security class

Brief: This is a technical briefing I made for fellow students in a Computer Security class (Winter 2011) about Secure Socket Layer [SSL] (now Transport Layer Security [TLS]). This briefing fit in nicely since it came after Dr. Obimbo explained the number theory behind RSA (modulus prime exponentiation). I went to the primary source on this topic to write this document: thanks to the Internet Engineering Task Force. I’ve posted this document as a consumable reference for anyone who needs it.

>>> Download: A Network Security Spotlight on SSL/TLS and HTTPS (pdf) <<<

Figure: A Schematic of the SSL/TLS Handshaking Procedure.

Eddie Ma February 16th, 2012 at 1:29 pm

## Fuzzy c-means for greylevel image segmentation

Here’s a script I threw together to do grey-level segmentation using fuzzy c-means. This appeared as a small part of a project in the image processing course I took. The algorithm deployed was really a proof of concept meant to replicate and verify the results of another author — as such, I don’t recommend ever using fuzzy c-means for this task as it’s pretty inefficient. This software will handle any number of grey-level segments you desire, but I recommend eight as a maximum. The code and course project paper are originally dated April 20th, 2011.

( Requires pypng — Python PNG encoder/decoder )

Here are the examples included in the above archive. I like pictures. Sunny in 8-bit greyscale, 3-bit greyscale, 2-bit greyscale. Auryo in 8-bit greyscale, 3-bit greyscale, 2-bit greyscale. Enjoy 😀

Eddie Ma December 20th, 2011 at 11:21 pm

## My talk at Barcode of Life, Adelaide (2011)

I’ve just finished my presentation in Adelaide. This is the first real biology-heavy conference I’ve been to. Sujeevan has brought me along with the BOLD team from BIO in order to present my work — and more importantly — to acquire some resolution about the barcoding culture and its biological significance. The Consortium for The Barcode of Life (CBOL) co-hosted this event with many biodiversity parties in Australia. Another huge group present was the International Barcode of Life (iBOL) project. The Barcode of Life Conference is held every other year and is attended by researchers interested in the concerted international barcoding effort. I presented my preliminary findings with a data analysis session and had excellent feedback — it’s pretty clear where to go next with my thesis! My talk describes the first steps to automating barcode (contig-like) assembly from ab1 sequencer trace files. It covers the present need for automation, trends that we can readily detect in currently assembled data and, most importantly, detectable patterns in how human experts perform manual barcode assembly. The full name of the conference is Fourth International Barcode of Life Conference. Slides 6, 16, 24 from my presentation — The need for automation; Compositional bias and human edits [null hypothesis]; Where are human edits occurring [in Lepidoptera]? This has been a very enjoyable conference 😀

Eddie Ma December 3rd, 2011 at 5:12 pm

## My talk at Complex Adaptive Systems, Chicago (2011)

I’ve just returned from the Complex Adaptive Systems conference on Wednesday after an eight hour drive — well, most of the driving was done by Dr. Obimbo and Haochen. I presented my paper An evolutionary computation attack on one-round TEA.
This paper is built on top of my course project in Computer Security (University of Guelph, Winter 2011). This is my first cryptanalysis paper, and is an aside to the bioinformatics focus of my thesis. My slides introduce Tiny Encryption Algorithm (TEA) pretty well, along with Genetic Algorithm (GA) and Harmony Search (HS). The slides detailing the results aren’t quite as self-explanatory, but are bearable since the theme is fairly easy to establish: simpler keys are easier to break than more complicated ones. I might look at cryptanalysis again in future — but I’ll certainly use Evolutionary Computation (EC) again. It’s just too readily available in my toolkit, and is far too easy to deploy. One of the major lessons of this project that became very clear during discussion with the audience is that the operators that are part of an EC algorithm should reflect the kind of problem we’re trying to solve. This might seem obvious at first, but I think it’s more subtle than that. For this project, HS enabled the EC to probe a keyspace with many repetitions — something that GA operators alone didn’t provide us. In general however, the solution space is lumpy enough that using ECs against stronger encryption schemes is just not viable — unless the EC had some magic to overcome the linearly inseparable lumps. I haven’t yet met such an operator and am not convinced one way or another about its existence. I’ll certainly introduce you if I ever do bump into it 😛

Eddie Ma November 4th, 2011 at 12:13 am

## Partial Derivatives for Residuals of the Gaussian Function

I needed to get the partial derivatives for the residuals of the Gaussian Function this week. This is needed for a curve fit I’ll use later. I completely forgot about Maxima, which can do this automatically — so I did it by hand (Maxima is like Maple, but it’s free). I’ve included my work in this post for future reference. If you want a quick refresh on calculus or a step-by-step for this particular function, enjoy :D. The math below is rendered with MathJax. The Gaussian Function is given by …

$$f(x) = ae^{-\frac{(x-b)^2}{2c^2}}$$

• a, b, c are the curve parameters with respect to which we differentiate the residual function
• e is Euler’s number

Given a set of coordinates I’d like to fit (xi, yi), i ∈ [1, m], the residuals are given by …

$$r_i = y_i - ae^{-\frac{(x_i-b)^2}{2c^2}}$$

We want to get …

$$\frac{\partial{r}}{\partial{a}}, \frac{\partial{r}}{\partial{b}}, \frac{\partial{r}}{\partial{c}}$$

Eddie Ma October 10th, 2011 at 11:40 am

## Exposing MATLAB data with JFrame & JTextArea

Today, I’ll describe my first foray into MATLAB. My task is simple — there’s already a GUI developed as a toolbox. The user inputs a bunch of data, and it crunches it away and provides a graph and displays a dialogue box about the moments of the data. The issue here is that we’d like to know more about the data — particularly, we’d like to retrieve the ordered pairs corresponding to each of the plotted points in the graph. In this exercise, I’ll show you how to use JFrame & JTextArea to display the coordinates of a graph in an already existing MATLAB GUI toolbox or plugin. The reason why I’ve decided on this approach rather than outputting directly to the MATLAB console is because I eventually want to add additional functions to reshape and reformat the text, and also to save the text that appears in its own window using additional Java swing components. But that’s a story for another day.
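(An aside closing out the Gaussian-residual post above: the three partials themselves follow from the chain rule applied to the residual r_i defined there. This is my own worked sketch, using only the definitions quoted above:

$$\frac{\partial r_i}{\partial a} = -e^{-\frac{(x_i-b)^2}{2c^2}}, \qquad \frac{\partial r_i}{\partial b} = -\frac{a(x_i-b)}{c^2}e^{-\frac{(x_i-b)^2}{2c^2}}, \qquad \frac{\partial r_i}{\partial c} = -\frac{a(x_i-b)^2}{c^3}e^{-\frac{(x_i-b)^2}{2c^2}}$$

These are exactly the Jacobian entries that a Gauss-Newton or Levenberg-Marquardt curve fit assembles from the residuals.)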
The Quick How-To … (3 steps)

Choose the toolbox or plugin you'd like to modify and open up its ".m" file. This is where you'll want to add your custom code. There are three parts to this exercise — first, we need to get a few classes from Java; then, we need to store the data we want to display; finally, we make the display appear.

In this example, I'll import the bare minimal amount of Java classes — you can extend this functionality by adding more swing classes if you like. I've arbitrarily prefixed my Java variables below with the prefix "expose".

    import javax.swing.JFrame;
    import javax.swing.JTextArea;
    import javax.swing.JScrollPane;  % Scrollbars -- optional.

    exposeReport = JFrame('Data Report');
    exposeTa = JTextArea(24, 80);
    exposeScroll = JScrollPane(exposeTa);
    exposeScroll.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
    exposePane = exposeReport.getContentPane();
    exposePane.add(exposeScroll);  % Attach the scrollable text area to the frame.
    exposeReport.pack();

I'm not exactly sure why MATLAB doesn't include scrollbars automatically. Of the above, there are only two variables which we'll need to refer to again later — "exposeReport" and "exposeTa".

The next step is to use your JTextArea as an output terminal. Just append strings to it as the graph is being built — you'll have to look at your specific plugin to figure out the logic behind the graph — in general, you'll be looking for a for-loop and a function that alters the graph.

    % Look for a 'for' loop with a line that adds data to the graph.
    % Here, I've used the variable SomeX for a point's x-coordinate,
    % and SomeY for a point's y-coordinate.
    exposeTa.append(sprintf('%d\t%d\n', SomeX, SomeY));

The loop spins around and incrementally adds all of the points. Note that I've used "%d" as a conversion to numbers expressed in base ten (including floating point values). This is different from the conversion character in C, where "%d" indicates only integers.

We add the final code after the loop exits. This next code makes your data report window visible — just as you'd expect from Java.

    exposeReport.setVisible(1);

I opted to put this line after the code that reveals the toolbox's original summary window — this causes my report to appear on top. That's all there is to it! Enjoy 😀

MATLAB from a CS student perspective

MATLAB has always been a bit of an oddball language for me. It's dynamically typed but also exposes Java's classes if you ask nicely. It's C-looking and provides functions that the C programmer would be happy accepting, such as strcat and sprintf, but again — putting on the dynamic spin by returning the constructed string rather than modifying the contents of a buffer. All in all, the design choices do a good job of making MATLAB do what it's intended to do; it gives scientists and engineers a way to succinctly express math to a machine without having to take too many computer science courses.

Eddie Ma July 11th, 2011 at 1:18 am
2019-10-17 05:43:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39511996507644653, "perplexity": 1899.7227036405188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00472.warc.gz"}
https://math.stackexchange.com/questions/3364948/use-integration-by-parts-to-express-int-fxdx-in-terms-of-int-gxdx
# Use integration by parts to express $\int f(x)dx$ in terms of $\int g(x)dx$

This is the first question from Single Variable Calculus, 6th Edition, Chapter 7, Section 2. (Authors: Hughes-Hallett, Gleason, McCallum, et al.)

I've formatted the title of this question to be in a more general form, but the question from the book is as follows:

Use integration by parts to express $$\int x^2e^xdx$$ in terms of:

$$a. \int x^3e^xdx \qquad \qquad b. \int xe^xdx$$

The book also provides the solution in the back, but not the arithmetic to justify it.

1. (a) $$\frac {x^3e^x}{3} - \frac {1}{3} \int x^3e^xdx$$ (b) $$x^2e^x-2 \int xe^xdx$$

What I understand: Integration by parts.

What I do not understand: How they got that solution.

• Hint: apply parts with $u=x^2, dv=e^xdx$. Then apply parts with $u=e^x, dv=x^2dx$. – lulu Sep 21 '19 at 22:50
• Just to clarify, the book is basically asking that the anti-derivative of g(x) be constructed using u and dv from f(x)? And what comes before the anti-derivative of g(x) as a result of integration by parts using f(x) doesn't matter? Thank you for the hint by the way, it helped a lot. – Evilscaught Sep 21 '19 at 23:28
• Well, I'm not sure of your notation. The version of parts I am using is $\int u\,dv=uv-\int v \,du$. – lulu Sep 21 '19 at 23:32
• The integration by parts notation I'm using is from the book; it is as follows: $\int uv'dx = uv - \int u'vdx$. But I have previously seen the notation that you're using. – Evilscaught Sep 21 '19 at 23:37
• Well, that's more or less the same... as $dv=v'\,dx$ and $du=u'\,dx$. – lulu Sep 21 '19 at 23:39

Integration by parts just exploits the product rule for derivatives and the fundamental theorem of calculus: $$fg=\int (fg)'=\int f'g+\int fg'\implies \int f'g=fg-\int fg'.$$

So, $$\left(\dfrac{x^3e^x}{3}\right)'=x^2e^x+\dfrac {x^3e^x}{3}\implies \int x^2e^x=\dfrac {x^3e^x}{3}-\int\dfrac {x^3e^x}{3}.$$

$$b)$$ is done the same way.
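Spelling $b)$ out explicitly (this step is implicit in the answer above, but it follows the same pattern):

$$\left(x^2e^x\right)' = 2xe^x + x^2e^x \implies \int x^2e^x = x^2e^x - 2\int xe^x,$$

which matches the book's answer for part (b).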
2021-06-21 08:08:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8387517333030701, "perplexity": 327.78499564992023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488268274.66/warc/CC-MAIN-20210621055537-20210621085537-00464.warc.gz"}
https://texfaq.org/FAQ-compactbib
Frequently Asked Question List for TeX

# Reducing spacing in the bibliography

Bibliographies are, in fact, implemented as lists, so all the confusion about reducing list item spacing also applies to bibliographies.

If the `natbib` package "works" for you (it may not if you are using some special-purpose bibliography style), the solution is relatively simple — add

```
\usepackage{natbib}
\setlength{\bibsep}{0.0pt}
```

to the preamble of your document.

The `compactbib` package has a similar effect. Its primary purpose is to produce two bibliographies, and it seems to preclude use of BibTeX (though the package documentation, in the package file itself, isn't particularly clear).

Otherwise, one is into unseemly hacking of something or other. The `mdwlist` package actually does the job, but it doesn't work here, because it makes a different-named list, while the name `thebibliography` is built into LaTeX and BibTeX. Therefore, we need to patch the underlying macro:

```
\let\oldbibliography\thebibliography
\renewcommand{\thebibliography}[1]{%
  \oldbibliography{#1}%
  \setlength{\itemsep}{0pt}%
}
```

The `savetrees` package performs such a patch, among a plethora of space-saving measures: you can, in principle, suppress all its other actions, and have it provide you a compressed bibliography only.

FAQ ID: Q-compactbib Tags: citations
2019-06-27 09:15:45
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8461454510688782, "perplexity": 2672.056380068884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001014.85/warc/CC-MAIN-20190627075525-20190627101525-00281.warc.gz"}
http://hitay.sg/6/xa1zp/article.php?id=4e9e6a-how-to-calculate-percentage-purity-in-titration
These revision notes and worked examples on how to calculate percent purity should prove useful for the new AQA, Edexcel and OCR GCSE (9–1) science-chemistry courses as well as Advanced A Level Chemistry courses. Quantitative chemistry calculations: help for problem solving in percentage purity and assay calculations.

14.1 Percentage purity of a chemical reaction product

The percent purity of a sample describes what proportion of that sample, by mass, is composed of a specific compound or element:

% purity = mass of pure substance x 100 / total mass of sample

In any chemical process it is almost impossible to get 100.00% purity, and so samples are always analysed in industry to monitor the quality of the product. Purity is very important for pharmaceutical products, where impurities could have dangerous side effects in a drug or medicine, and for analytical standards in laboratories. Similarly, in fuels for road vehicles (which themselves contain additives to enhance engine performance), you wouldn't want impurities that may cause engine damage. The more a product is processed, e.g. by distillation or crystallisation, the more costly the process, but the purer the product gets, so there is always a compromise; before sale, the product is assayed. To 'assay' means to analyse a sample for its purity: an assay is any procedure used to analyse the % content of a specified component in a mixture, e.g. an active drug, an element or an ion. Percent recovery computes the percentage of an original substance that is recovered after a chemical reaction is completed.

Example 1. A 12.00 g sample of a crystallised pharmaceutical product was found to contain 11.57 g of the active drug.

% purity = actual amount of desired material x 100 / total amount of material
% purity = 11.57 x 100 / 12.00 = 96.4% (to 1 dp)

Example 2. The medical drug aspirin is made from salicylic acid; the formula for salicylic acid is C7H6O3 and the chemical formula for aspirin is C9H8O4, and 1 mole of salicylic acid gives 1 mole of aspirin. In an experiment, 100.0 grams of salicylic acid gave 121.2 grams of solid, but analysis showed that only 109.2 g of it was aspirin.

Percent purity = 109.2 / 121.2 x 100% = 90.0%

14.2 Assaying purity by titration

A titration measures the volume of one solution required to react exactly with a known volume of a different solution; titration is a method for accurately determining the concentrations of solutions, and titrations can be used to analyse the purity of a substance.

Example 3: an aspirin assay. Even at pre-A level you can do a simple titration and analyse an aspirin sample without using the mole concept. Here an acid (aspirin) is titrated with standard sodium hydroxide solution. The simplified equation for the reaction is

C6H4(OCOCH3)COOH + NaOH ==> C6H4(OCOCH3)COONa + H2O

Mr(aspirin) = 180, Mr(NaOH) = 40 (atomic masses: C = 12, H = 1, O = 16, Na = 23), therefore the reacting mass ratio is 180 g of aspirin to 40 g of sodium hydroxide.

0.300 g of aspirin was dissolved in ethanol solvent, diluted with deionised water and titrated with sodium hydroxide solution of concentration 4.00 g/dm3, using phenolphthalein indicator; the end-point is the first permanent pink colour. The titration was 16.45 cm3.

Step 1: Calculate the mass of NaOH used in the titration = 4.00 x 16.45/1000 = 0.0658 g (converting the cm3 to dm3).
Step 2: Calculate the mass of aspirin titrated = 0.0658 x 180 / 40 = 0.296 g.
Step 3: % purity of the aspirin = 100 x 0.296 / 0.300 = 98.7%.

Example 4: an impure sodium hydrogencarbonate sample. To determine the percentage purity of a sample of an impure carbonate or hydrogencarbonate, we must first find out how much standard acid it takes to neutralise a solution of the sample (this is an acid and base titration and will need an indicator). Suppose 28.3 cm3 of 0.05 mol dm-3 sulfuric acid were required:

mole = concentration x volume, so moles of H2SO4 = 0.05 x 0.0283 = 0.001415 mol
The mole ratio between H2SO4 and NaHCO3 is 1:2, so 0.001415 mol of H2SO4 will titrate double its own moles of NaHCO3 = 2 x 0.001415 = 0.00283 mol
0.00283 mol of NaHCO3 has mass 0.00283 x 84 = 0.237 g
% purity = 0.237 x 100 / (mass of the impure sample)

Example 5: purity of chalk. Chalk is almost pure calcium carbonate; we can work out its purity by measuring how much carbon dioxide is given off when it reacts with an excess of acid:

CaCO3(s) + 2HCl(aq) → CaCl2(aq) + H2O(l) + CO2(g)

10 g of chalk was reacted with an excess of dilute hydrochloric acid, and 2.128 litres of carbon dioxide gas was collected at standard temperature and pressure (STP).

Step 1: Calculate the Mr of calcium carbonate: 100.
Step 2: Calculate the moles of CO2: 2.128 / 22.4 = 0.095 mol.
Step 3: 1 mol of CO2 comes from 1 mol of CaCO3, so the mass of pure CaCO3 = 0.095 x 100 = 9.5 g.
Step 4: % purity of the sample = 9.5 x 100 / 10 = 95%.

(A similar back-titration approach can be used to find the percent of calcium carbonate in a commercial antacid.)

Example 6: dryness of a prepared salt. Sodium chloride was prepared by neutralising sodium hydroxide solution with dilute hydrochloric acid. The solution was gently heated to evaporate most of the water and allow the salt to crystallise. The crystals were separated from any remaining solution and dried on a filter paper, but such crystals are not necessarily completely dry. The salt may be required to be completely anhydrous, so the prepared salt was analysed for water by heating a sample in an oven at 110 oC to measure the evaporation of any residual water. The following results were obtained:

Mass of evaporating dish empty = 51.32 g
Mass of dish + salt before heating = 56.47 g
Mass of dish + salt after heating = 56.15 g

Therefore the mass of original salt = 56.47 - 51.32 = 5.15 g, and the mass of pure salt remaining = 56.15 - 51.32 = 4.83 g.

% salt purity = 4.83 x 100 / 5.15 = 93.8% (to 1 dp)

14.3 Finding an unknown concentration by titration

Suppose that a titration is performed and 20.70 mL of 0.500 M NaOH is required to reach the end point when titrated against 15.00 mL of HCl of unknown concentration. Since the acid and base react 1:1, the molarity of the acid is

M_A = M_B x V_B / V_A = 0.500 M x 20.70 mL / 15.00 mL = 0.690 M

14.4 Standardising sodium hydroxide and assaying acids

The purity of benzoic acid can be calculated from the titration volume of sodium hydroxide:

C6H5COOH + NaOH → C6H5COONa + H2O

Preparation of an NaOH standard titration solution: prepare 0.5 M NaOH by dissolving 5 g NaOH in 250 mL of water in a volumetric flask (since 1 M NaOH = 40 g NaOH in 1000 mL of water, 0.5 M = 20 g per 1000 mL, i.e. 5 g per 250 mL). Other molarities can be used, but ideally in the range 0.4 M to 1.2 M. Use pure water without dissolved carbon dioxide.

For an assay of potassium hydrogen phthalate (KHP): at the end point, moles NaOH = moles KHP, and moles KHP x formula mass gives the mass of KHP present in the flask. Dividing this by the sample mass and multiplying by 100 gives the percentage KHP in the sample. Average the values from three titrations to give the answer. (Near the end point, stop the titration, boil the sample to expel carbon dioxide, and continue the procedure as was done for the primary standard.)

14.5 Practice questions

1. A sample of impure potassium sulphite was dissolved in water. A 25 cm3 portion of this solution was acidified with an excess of dilute sulfuric acid and then titrated with 0.02 mol dm-3 potassium manganate(VII) solution; 24.0 cm3 was required to reach the end-point. KMnO4 is a self-indicator: the end point is colourless to the first permanent pink colour, with KMnO4 in the burette. The reaction is 5K2SO3 + 2KMnO4 + 3H2SO4 → 6K2SO4 + 2MnSO4 + 3H2O. Calculate the mass of potassium sulphite titrated and, given the mass of sample taken, its percentage purity.

2. A 1.456 g sample of an unknown alloy was dissolved in acid and diluted to 100 mL. What was the percentage of iron in the sample, if titration of a 20.00 mL aliquot of the diluted solution requires on average 33.45 mL of 0.01005 M potassium permanganate solution?

3. A sample containing both HCl and HNO3 had a density of 1.083 g/mL. A 25.00 mL amount of this sample required 34.04 mL of 0.2644 M NaOH for titration, and another 25.00 mL amount required 8.96 mL of 0.1341 M AgNO3 for titration. Calculate the mass percent of each acid in the sample.

4. A vinegar was claimed to contain 10% acid. The claim was tested using a titration where 16 mL of 0.1 M sodium hydroxide was needed to neutralize 25 mL of the vinegar. Comment on the claim.

5. Find the percentage purity of an impure sample of oxalic acid, given a supplied M/100 KMnO4 solution. Procedure: weigh exactly 2.0 g of the oxalic acid sample, dissolve in water to prepare a standard solution, and titrate portions against the KMnO4 (no added indicator is needed because KMnO4 is a self-indicator).
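The titration arithmetic above is mechanical, so it scripts easily. Below is a minimal Python sketch (not part of the original notes) that reproduces the aspirin assay in Example 3; the function name is made up here, and the hard-coded 180:40 reacting mass ratio is taken from that example.

```python
# Percent purity of an aspirin sample from a simple mass-ratio titration.
# Reproduces Example 3 above: 0.300 g sample, NaOH at 4.00 g/dm^3,
# titre of 16.45 cm^3, reacting mass ratio 180 g aspirin : 40 g NaOH.

def aspirin_purity(sample_g, naoh_g_per_dm3, titre_cm3):
    naoh_g = naoh_g_per_dm3 * titre_cm3 / 1000.0   # mass of NaOH delivered (cm3 -> dm3)
    aspirin_g = naoh_g * 180.0 / 40.0              # mass of aspirin neutralised
    return 100.0 * aspirin_g / sample_g            # percent purity

print(f"{aspirin_purity(0.300, 4.00, 16.45):.1f} %")  # -> 98.7 %
```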
2021-06-21 01:37:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6871223449707031, "perplexity": 4735.141572392362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488259200.84/warc/CC-MAIN-20210620235118-20210621025118-00476.warc.gz"}
https://www.jobilize.com/trigonometry/test/section-exercises-quadratic-functions-by-openstax
# 5.1 Quadratic functions (Page 7/15)

Access these online resources for additional instruction and practice with quadratic equations.

## Key equations

general form of a quadratic function: $f(x)=ax^2+bx+c$

standard form of a quadratic function: $f(x)=a(x-h)^2+k$

## Key concepts

• A polynomial function of degree two is called a quadratic function.
• The graph of a quadratic function is a parabola. A parabola is a U-shaped curve that can open either up or down.
• The axis of symmetry is the vertical line passing through the vertex. The zeros, or x-intercepts, are the points at which the parabola crosses the x-axis. The y-intercept is the point at which the parabola crosses the y-axis. See [link], [link], and [link].
• Quadratic functions are often written in general form. Standard or vertex form is useful to easily identify the vertex of a parabola. Either form can be written from a graph. See [link].
• The vertex can be found from an equation representing a quadratic function. See [link].
• The domain of a quadratic function is all real numbers. The range varies with the function. See [link].
• A quadratic function's minimum or maximum value is given by the y-value of the vertex.
• The minimum or maximum value of a quadratic function can be used to determine the range of the function and to solve many kinds of real-world problems, including problems involving area and revenue. See [link] and [link].
• The vertex and the intercepts can be identified and interpreted to solve real-world problems. See [link].

## Verbal

Explain the advantage of writing a quadratic function in standard form.

When written in that form, the vertex can be easily identified.

How can the vertex of a parabola be used in solving real-world problems?

Explain why the condition of $a\ne 0$ is imposed in the definition of the quadratic function.

If $a=0$ then the function becomes a linear function.

What is another name for the standard form of a quadratic function?

What two algebraic methods can be used to find the horizontal intercepts of a quadratic function?

If possible, we can use factoring. Otherwise, we can use the quadratic formula.

## Algebraic

For the following exercises, rewrite the quadratic functions in standard form and give the vertex.

$f(x)=x^2-12x+32$

$g(x)=x^2+2x-3$

$f(x)=(x+1)^2-4$, Vertex $(-1,-4)$

$f(x)=x^2-x$

$f(x)=x^2+5x-2$

$f(x)=\left(x+\frac{5}{2}\right)^2-\frac{33}{4}$, Vertex $\left(-\frac{5}{2},-\frac{33}{4}\right)$

$h(x)=2x^2+8x-10$

$k(x)=3x^2-6x-9$

$f(x)=3(x-1)^2-12$, Vertex $(1,-12)$

$f(x)=2x^2-6x$

$f(x)=3x^2-5x-1$

$f(x)=3\left(x-\frac{5}{6}\right)^2-\frac{37}{12}$, Vertex $\left(\frac{5}{6},-\frac{37}{12}\right)$

For the following exercises, determine whether there is a minimum or maximum value to each quadratic function. Find the value and the axis of symmetry.
$y(x)=2x^2+10x+12$

$f(x)=2x^2-10x+4$

Minimum is $-\frac{17}{2}$ and occurs at $\frac{5}{2}$. Axis of symmetry is $x=\frac{5}{2}$.

$f(x)=-x^2+4x+3$

$f(x)=4x^2+x-1$

Minimum is $-\frac{17}{16}$ and occurs at $-\frac{1}{8}$. Axis of symmetry is $x=-\frac{1}{8}$.

$h(t)=-4t^2+6t-1$

$f(x)=\frac{1}{2}x^2+3x+1$

Minimum is $-\frac{7}{2}$ and occurs at $-3$. Axis of symmetry is $x=-3$.

$f(x)=-\frac{1}{3}x^2-2x+3$

For the following exercises, determine the domain and range of the quadratic function.

$f(x)=(x-3)^2+2$

Domain is $(-\infty ,\infty)$. Range is $[2,\infty)$.
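The minimum values and axes of symmetry listed above follow from the vertex formula $h=-\frac{b}{2a}$, $k=f(h)$. As a quick check, here is a short Python sketch (not part of the original exercises) that recomputes the three answered minimum-value problems:

```python
# Vertex of a quadratic f(x) = a*x**2 + b*x + c:
# axis of symmetry x = h = -b/(2a); minimum/maximum value k = f(h).

def vertex(a, b, c):
    h = -b / (2 * a)
    k = a * h**2 + b * h + c
    return h, k

for a, b, c in [(2, -10, 4), (4, 1, -1), (0.5, 3, 1)]:
    h, k = vertex(a, b, c)
    kind = "minimum" if a > 0 else "maximum"
    print(f"{kind} {k} at x = {h}")
# -> minimum -8.5 at x = 2.5
# -> minimum -1.0625 at x = -0.125
# -> minimum -3.5 at x = -3.0
```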
Source: OpenStax, Algebra and trigonometry. OpenStax CNX. Nov 14, 2016. Download for free at https://legacy.cnx.org/content/col11758/1.6
2019-09-20 13:49:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 44, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6440393924713135, "perplexity": 1432.7539316076532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574039.24/warc/CC-MAIN-20190920134548-20190920160548-00489.warc.gz"}
https://design.tutsplus.com/tutorials/floral-illustration-text-effect--cms-27195
How to Create a Romantic Rose Text Effect in Adobe Photoshop

This post is part of a series called Learn Adobe Photoshop.

This tutorial will show you how to use a bunch of floral illustration images, with a couple of layer masks, textures, and adjustment layers, to create a fresh, floral text effect. Let's get started!

This text effect was inspired by the many Layer Styles available on Envato Market.

Tutorial Assets

The following assets were used during the production of this tutorial:

## 1. Create the Background

Step 1: Create a new 1500 x 1000 px document, click the Create new fill or adjustment layer icon at the bottom of the Layers panel, and choose Gradient.

Step 2: Create the gradient fill using the colors #ecf8f5 to the left and #c9bcc2 to the right, and then change the Style to Radial and the Scale to 166%.

## 2. Isolate the Flower Images

Step 1: Open one of the flower images you're using, and pick the Magic Wand Tool. Make sure that the Contiguous box in the Options bar is unchecked, and then click once in any white area to select it.

Step 2: Press the Shift-Command-I keys to invert the selection, and then press Command-C to copy the selected flower.

Step 3: Go back to the original document, and press Command-V to paste the copied image.

Step 4: Repeat that for the rest of the flower images you're using. Once you're done, select all the flower layers you have.

Step 5: Press Command-G to place all the selected flower layers in a group, and call it Original.

## 3. Create the Text

Step 1: Make the Original group invisible by clicking the eye icon next to it.

Step 2: Create the text in White using the font Bolina. Set the Size to 350 pt and the Kerning to Optical.

Step 3: Right-click the text layer and choose Convert to Smart Object. This will help you work with the text non-destructively.

## 4. Warp the Text

Step 1: Press Command-T to enter Free Transform Mode, and then click the Warp icon in the Options bar to enter Warp Mode.

Step 2: Choose the Arc preset from the Warp menu in the Options bar. Once you do that, you can click-drag the control point to adjust the effect you have.

Step 3: Click the Warp icon again to get out of Warp Mode and back into Free Transform Mode, and then resize the text and move it around as needed. Once you're done, hit the Return key to commit the changes.

## 5. Duplicate and Transform the Flower Images

Step 1: Next, you will start adding the flowers behind the text. So select the layer of the flower you want to work with from the Original group, and then press Command-J to duplicate it.

Step 2: Drag the copy layer outside the Original group. As you're adding more flowers later on, you can also control the layer order based on the other flowers you have.

Step 3: Press Command-T, and then resize, rotate, and move the flower to place it behind a part of the text.

Step 4: When Free Transform Mode is active, you can right-click the bounding box to access more options, like the Flip options, when needed.

Step 5: Remember to hit the Return key to commit the changes.

## 6. Work on the Details

Step 1: If you want to use only a part of an image, pick the Eraser Tool to remove any unwanted parts. Also, keep in mind that you can turn the visibility of any layer on and off. This will help you work on one image at a time, and not get confused by the other images you have.
Step 2: Zoom in and out to make sure you're not missing anything or removing any areas that are connected to other parts of the image. Sometimes, you might delete an area that leaves behind overlapping parts. You can fix that using the Healing or Stamp Tools.

Step 3: Here, the leaf was fixed using the Healing Brush Tool. But feel free to use any tool you like.

Step 4: Make sure to check the result with all the layers visible before adding another flower image. Keep adding the flowers until you fill the area behind the text. You can use the same image more than once if you change its size, rotation angle, and/or parts used. Just take your time with this step, as it is the most important one.

## 7. Create the Flower Groups and Masks

Step 1: Once you're done adding the flowers, place all the used flower layers in a group and call it Back.

Step 2: Duplicate the Back layer, place the copy on top of the text layer, and rename it Top.

Step 3: Press and hold the Option key, and click the Add layer mask icon at the bottom of the Layers panel. This will add a black-filled mask to the Top group, hiding all its contents.

Step 1: Change the text layer's Opacity value to 55%, or any other value that lets you see both the text and the flowers behind it clearly.

Step 2: Use the Move Tool to reposition the text if needed.

Step 3: Check the Auto-Select box in the Options bar, and choose Layer from the menu. This will help you select the flower image's layer by clicking any part of the image itself.

Step 1: Zoom in to the first letter you have, and decide what part(s) of the flowers beneath it you want to show on top of it. With the Move Tool, click the part you chose to select its layer in the Back group. Then, use the Quick Selection Tool to select that part. This might not be precise, but you can fix that later.

Step 2: Select the Top group's mask thumbnail, and pick the Brush Tool. Set the Foreground Color to White, and then start painting the area in.

Step 3: Press Command-D to deselect, and then fix any areas that need work. If you switch the Foreground Color to Black, you can paint to hide any areas that were revealed by mistake.

Step 4: Follow the same steps to add more overlapping parts, but make sure to keep the text readable.

Step 1: Add a Hue/Saturation adjustment layer on top of the Back layer, and click the Clip to layer icon. Next, change the Saturation to 7, and then pick the colors you want to adjust separately, and use the values you like.

Step 2: If needed, you can add multiple Hue/Saturation layers, and mask parts of each one of them, in order to affect different areas of the images separately.

Step 3: Duplicate and clip the Hue/Saturation layer(s) you have to the Top group.

Step 1: Double-click the Back group to apply a Drop Shadow effect with these settings:

• Color: #242c25
• Opacity: 21%
• Distance: 0
• Size: 75

Double-click the text layer to apply the following layer style:

Step 2: Add a Color Overlay with these settings:

• Color: #fdfdfb
• Opacity: 50%
• Distance: 7
• Size: 13

## 12. Paint the Overlapping Parts' Shadows

Step 1: Create a new layer on top of the text layer and call it Shadow, and then pick the Brush Tool and set the Foreground Color to Black. Use a soft round brush tip to click-drag slightly wherever there are flower parts on top of the text to add some shadows. You can change the Opacity value in the Options bar to control the intensity of the shadow.

Step 2: You can also use the Eraser Tool, with a slightly bigger soft round tip, to delete any unwanted parts of the painted shadows.
Just take your time to make the shadows look as subtle and as soft as possible. 13. Add the Textures and Sharpen the Image Step 1 Place the Wall Texture - 35 image on top of the Gradient layer, and rename its layer to BG Texture. Rotate and resize the texture as needed, and then change its layer's Opacity to 15%. Step 2 Create a new layer on top of all layers and call it High Pass, and then press the Command-Option-Shift-E keys to create a stamp. Convert the High Pass layer to a Smart Object. Step 3 Go to Filter > Other > High Pass, and change the Radius to 1. Step 4 Change the High Pass layer's Blend Mode to Soft Light. This will sharpen the image a little bit. Step 5 Finally, place the Paper Grain Texture image on top of all layers, and then change its layer's Blend Mode to Multiply and its Opacity to 30%. Congratulations! You're Done In this tutorial, we created a simple gradient background and isolated a couple of flower images to use for the effect. Then we created the text, converted it into a smart object, and warped it. After that, we started adding the flowers behind the text, and then we worked on transforming them, deleting unwanted parts, and fixing any spots that needed work. Finally, we adjusted the colors, added some textures, and sharpened the final result.
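The High Pass + Soft Light trick in section 13 is not Photoshop-specific. Here is a minimal Python sketch (my own illustration, assuming Pillow and NumPy are installed; the file names are placeholders) that approximates the same sharpening: build a high-pass layer by subtracting a radius-1 Gaussian blur from the image, re-center it around mid-gray, and recombine with a soft-light-style blend. The soft-light formula used is one common variant, not Photoshop's exact one.

```python
# Approximate Photoshop's "High Pass (radius 1) + Soft Light" sharpening.
# Assumptions: Pillow and NumPy installed; file names are placeholders.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("rose_text.png").convert("RGB")
base = np.asarray(img, dtype=np.float32) / 255.0

# High Pass: original minus a radius-1 Gaussian blur, centered on mid-gray.
blurred = np.asarray(img.filter(ImageFilter.GaussianBlur(radius=1)),
                     dtype=np.float32) / 255.0
high_pass = np.clip(base - blurred + 0.5, 0.0, 1.0)

def soft_light(bottom, top):
    """One common soft-light blend formula (Photoshop's differs slightly)."""
    return np.where(top <= 0.5,
                    bottom - (1 - 2 * top) * bottom * (1 - bottom),
                    bottom + (2 * top - 1) * (np.sqrt(bottom) - bottom))

sharpened = np.clip(soft_light(base, high_pass), 0.0, 1.0)
Image.fromarray(np.uint8(sharpened * 255)).save("rose_text_sharp.png")
```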
2021-04-19 12:10:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2533881962299347, "perplexity": 3017.7314774205342}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038879374.66/warc/CC-MAIN-20210419111510-20210419141510-00531.warc.gz"}
https://mmpose-ly015.readthedocs.io/en/latest/topics/wholebody.html
# Wholebody

## COCO-WholeBody Dataset

### Associative Embedding + HRNet on COCO-WholeBody

Associative Embedding (NIPS'2017)

    @inproceedings{newell2017associative,
      title={Associative embedding: End-to-end learning for joint detection and grouping},
      author={Newell, Alejandro and Huang, Zhiao and Deng, Jia},
      booktitle={Advances in neural information processing systems},
      pages={2277--2287},
      year={2017}
    }

HRNet (CVPR'2019)

    @inproceedings{sun2019deep,
      title={Deep high-resolution representation learning for human pose estimation},
      author={Sun, Ke and Xiao, Bin and Liu, Dong and Wang, Jingdong},
      booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
      pages={5693--5703},
      year={2019}
    }

COCO-WholeBody (ECCV'2020)

    @inproceedings{jin2020whole,
      title={Whole-Body Human Pose Estimation in the Wild},
      author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2020}
    }

Results on COCO-WholeBody v1.0 val without multi-scale test

| Arch | Input Size | Body AP | Body AR | Foot AP | Foot AR | Face AP | Face AR | Hand AP | Hand AR | Whole AP | Whole AR | ckpt | log |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HRNet-w32+ | 512x512 | 0.551 | 0.650 | 0.271 | 0.451 | 0.564 | 0.618 | 0.159 | 0.238 | 0.342 | 0.453 | ckpt | log |
| HRNet-w48+ | 512x512 | 0.592 | 0.686 | 0.443 | 0.595 | 0.619 | 0.674 | 0.347 | 0.438 | 0.422 | 0.532 | ckpt | log |

Note: + means the model is first pre-trained on original COCO dataset, and then fine-tuned on COCO-WholeBody dataset. We find this will lead to better performance.

### Associative Embedding + HigherHRNet on COCO-WholeBody

Associative Embedding (NIPS'2017)

    @inproceedings{newell2017associative,
      title={Associative embedding: End-to-end learning for joint detection and grouping},
      author={Newell, Alejandro and Huang, Zhiao and Deng, Jia},
      booktitle={Advances in neural information processing systems},
      pages={2277--2287},
      year={2017}
    }

HigherHRNet (CVPR'2020)

    @inproceedings{cheng2020higherhrnet,
      title={HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation},
      author={Cheng, Bowen and Xiao, Bin and Wang, Jingdong and Shi, Honghui and Huang, Thomas S and Zhang, Lei},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      pages={5386--5395},
      year={2020}
    }

COCO-WholeBody (ECCV'2020)

    @inproceedings{jin2020whole,
      title={Whole-Body Human Pose Estimation in the Wild},
      author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2020}
    }

Results on COCO-WholeBody v1.0 val without multi-scale test

| Arch | Input Size | Body AP | Body AR | Foot AP | Foot AR | Face AP | Face AR | Hand AP | Hand AR | Whole AP | Whole AR | ckpt | log |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HigherHRNet-w32+ | 512x512 | 0.590 | 0.672 | 0.185 | 0.335 | 0.676 | 0.721 | 0.212 | 0.298 | 0.401 | 0.493 | ckpt | log |
| HigherHRNet-w48+ | 512x512 | 0.630 | 0.706 | 0.440 | 0.573 | 0.730 | 0.777 | 0.389 | 0.477 | 0.487 | 0.574 | ckpt | log |

Note: + means the model is first pre-trained on original COCO dataset, and then fine-tuned on COCO-WholeBody dataset. We find this will lead to better performance.

### Topdown Heatmap + HRNet + Dark on COCO-WholeBody

HRNet (CVPR'2019)

    @inproceedings{sun2019deep,
      title={Deep high-resolution representation learning for human pose estimation},
      author={Sun, Ke and Xiao, Bin and Liu, Dong and Wang, Jingdong},
      booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
      pages={5693--5703},
      year={2019}
    }

DarkPose (CVPR'2020)

    @inproceedings{zhang2020distribution,
      title={Distribution-aware coordinate representation for human pose estimation},
      author={Zhang, Feng and Zhu, Xiatian and Dai, Hanbin and Ye, Mao and Zhu, Ce},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      pages={7093--7102},
      year={2020}
    }

COCO-WholeBody (ECCV'2020)

    @inproceedings{jin2020whole,
      title={Whole-Body Human Pose Estimation in the Wild},
      author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2020}
    }

Results on COCO-WholeBody v1.0 val with detector having human AP of 56.4 on COCO val2017 dataset

| Arch | Input Size | Body AP | Body AR | Foot AP | Foot AR | Face AP | Face AR | Hand AP | Hand AR | Whole AP | Whole AR | ckpt | log |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| pose_hrnet_w32_dark | 256x192 | 0.694 | 0.764 | 0.565 | 0.674 | 0.736 | 0.808 | 0.503 | 0.602 | 0.582 | 0.671 | ckpt | log |
| pose_hrnet_w48_dark+ | 384x288 | 0.742 | 0.807 | 0.705 | 0.804 | 0.840 | 0.892 | 0.602 | 0.694 | 0.661 | 0.743 | ckpt | log |

Note: + means the model is first pre-trained on original COCO dataset, and then fine-tuned on COCO-WholeBody dataset. We find this will lead to better performance.

### Topdown Heatmap + HRNet on COCO-WholeBody

HRNet (CVPR'2019)

    @inproceedings{sun2019deep,
      title={Deep high-resolution representation learning for human pose estimation},
      author={Sun, Ke and Xiao, Bin and Liu, Dong and Wang, Jingdong},
      booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
      pages={5693--5703},
      year={2019}
    }

COCO-WholeBody (ECCV'2020)

    @inproceedings{jin2020whole,
      title={Whole-Body Human Pose Estimation in the Wild},
      author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2020}
    }

Results on COCO-WholeBody v1.0 val with detector having human AP of 56.4 on COCO val2017 dataset

| Arch | Input Size | Body AP | Body AR | Foot AP | Foot AR | Face AP | Face AR | Hand AP | Hand AR | Whole AP | Whole AR | ckpt | log |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| pose_hrnet_w32 | 256x192 | 0.700 | 0.746 | 0.567 | 0.645 | 0.637 | 0.688 | 0.473 | 0.546 | 0.553 | 0.626 | ckpt | log |
| pose_hrnet_w32 | 384x288 | 0.701 | 0.773 | 0.586 | 0.692 | 0.727 | 0.783 | 0.516 | 0.604 | 0.586 | 0.674 | ckpt | log |
| pose_hrnet_w48 | 256x192 | 0.700 | 0.776 | 0.672 | 0.785 | 0.656 | 0.743 | 0.534 | 0.639 | 0.579 | 0.681 | ckpt | log |
| pose_hrnet_w48 | 384x288 | 0.722 | 0.790 | 0.694 | 0.799 | 0.777 | 0.834 | 0.587 | 0.679 | 0.631 | 0.716 | ckpt | log |

### Topdown Heatmap + ResNet on COCO-WholeBody

SimpleBaseline2D (ECCV'2018)

    @inproceedings{xiao2018simple,
      title={Simple baselines for human pose estimation and tracking},
      author={Xiao, Bin and Wu, Haiping and Wei, Yichen},
      booktitle={Proceedings of the European conference on computer vision (ECCV)},
      pages={466--481},
      year={2018}
    }

COCO-WholeBody (ECCV'2020)

    @inproceedings{jin2020whole,
      title={Whole-Body Human Pose Estimation in the Wild},
      author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2020}
    }

Results on COCO-WholeBody v1.0 val with detector having human AP of 56.4 on COCO val2017 dataset

| Arch | Input Size | Body AP | Body AR | Foot AP | Foot AR | Face AP | Face AR | Hand AP | Hand AR | Whole AP | Whole AR | ckpt | log |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| pose_resnet_50 | 256x192 | 0.652 | 0.739 | 0.614 | 0.746 | 0.608 | 0.716 | 0.460 | 0.584 | 0.457 | 0.578 | ckpt | log |
| pose_resnet_50 | 384x288 | 0.666 | 0.747 | 0.635 | 0.763 | 0.732 | 0.812 | 0.537 | 0.647 | 0.573 | 0.671 | ckpt | log |
| pose_resnet_101 | 256x192 | 0.670 | 0.754 | 0.640 | 0.767 | 0.611 | 0.723 | 0.463 | 0.589 | 0.533 | 0.647 | ckpt | log |
| pose_resnet_101 | 384x288 | 0.692 | 0.770 | 0.680 | 0.798 | 0.747 | 0.822 | 0.549 | 0.658 | 0.597 | 0.692 | ckpt | log |
| pose_resnet_152 | 256x192 | 0.682 | 0.764 | 0.662 | 0.788 | 0.624 | 0.728 | 0.482 | 0.606 | 0.548 | 0.661 | ckpt | log |
| pose_resnet_152 | 384x288 | 0.703 | 0.780 | 0.693 | 0.813 | 0.751 | 0.825 | 0.559 | 0.667 | 0.610 | 0.705 | ckpt | log |

## Halpe Dataset

### Topdown Heatmap + HRNet + Dark on Halpe

HRNet (CVPR'2019)

    @inproceedings{sun2019deep,
      title={Deep high-resolution representation learning for human pose estimation},
      author={Sun, Ke and Xiao, Bin and Liu, Dong and Wang, Jingdong},
      booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
      pages={5693--5703},
      year={2019}
    }

DarkPose (CVPR'2020)

    @inproceedings{zhang2020distribution,
      title={Distribution-aware coordinate representation for human pose estimation},
      author={Zhang, Feng and Zhu, Xiatian and Dai, Hanbin and Ye, Mao and Zhu, Ce},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      pages={7093--7102},
      year={2020}
    }

Halpe (CVPR'2020)

    @inproceedings{li2020pastanet,
      title={PaStaNet: Toward Human Activity Knowledge Engine},
      author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
      booktitle={CVPR},
      year={2020}
    }

Results on Halpe v1.0 val with detector having human AP of 56.4 on COCO val2017 dataset

| Arch | Input Size | Whole AP | Whole AR | ckpt | log |
| --- | --- | --- | --- | --- | --- |
| pose_hrnet_w48_dark+ | 384x288 | 0.531 | 0.642 | ckpt | log |

Note: + means the model is first pre-trained on original COCO dataset, and then fine-tuned on Halpe dataset. We find this will lead to better performance.
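If you want to try one of the top-down checkpoints above, the sketch below shows the general shape of single-image inference with the MMPose 0.x Python API. The config and checkpoint file names are placeholders (substitute the files linked as ckpt/log in the tables), the demo image is assumed to exist, and the hand-written bounding box stands in for the person detector (the tables assume one with 56.4 human AP on COCO val2017).

```python
# Sketch of top-down COCO-WholeBody inference with the MMPose 0.x API.
# Config/checkpoint paths and 'demo.jpg' are placeholders.
from mmpose.apis import (init_pose_model, inference_top_down_pose_model,
                         vis_pose_result)

config = 'hrnet_w48_coco_wholebody_384x288_dark.py'   # placeholder path
checkpoint = 'hrnet_w48_coco_wholebody_dark.pth'      # placeholder path
model = init_pose_model(config, checkpoint, device='cpu')

# One hand-written person box (x1, y1, x2, y2); normally detector output.
person_results = [{'bbox': [50, 50, 400, 600]}]

pose_results, _ = inference_top_down_pose_model(
    model, 'demo.jpg', person_results, format='xyxy',
    dataset='TopDownCocoWholeBodyDataset')

vis_pose_result(model, 'demo.jpg', pose_results,
                dataset='TopDownCocoWholeBodyDataset', out_file='vis.jpg')
```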
2022-09-30 06:12:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3450634181499481, "perplexity": 10970.64244755575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00075.warc.gz"}
http://wordpress.discretization.de/vismathws12/2013/12/
# 7-9.01.2014 Assignment6: Project description

Today in class I presented a project which I have been working on, on and off, for several years. It is a visualization of the Schatz linkage. You can access the webstart I showed here. The video I showed at the beginning of the class can be seen here. The pictures of the wooden oloid which Thomas Neukirchner made can be found here. Thomas's diagrams and plywood models of the oloid can be found here. The latest edition of Paul Schatz's book Die Welt ist Umstülpbar: Rhythmusforschung und Technik can be found here on Amazon. If you are interested in this theme then mark your calendar for April 4-6, when there will be a weekend event in Berlin devoted to themes related to Paul Schatz's research. A few more details can be found here.

On Thursday, Jan. 9, we will take some time to evaluate this "project presentation" in regard to the various criteria mentioned in the evaluation handout. Be there or be square! I will also discuss the half-edge data structure and demonstrate some advanced features of the discretegroup package, and, if time permits, more!

Assignment 6: As discussed in class on our final meeting before the holidays, the next assignment is not a programming assignment but rather a planning assignment. I ask you to enter a description of your project as a comment to this blog post. Use the account name studentmv with the clever password. Please sign your comment with either your name(s) or with your student number(s). Your comments are due as usual on Friday at midnight. (That's Friday, Jan. 10.) The details of the description are provided below, but first comes a "news flash" regarding a change in the way property files are handled in the basic template.Assignment class.

Improved way to provide property files. Although I didn't do much work over the holiday break, I did play around some with webstarts — enough to realize that the current way that property files are handled doesn't work with webstart technology. So I've figured out a new way to handle property files that does work with webstarts, and has other advantages. Here's how it works:

• If you have a class named MyAssignment.java that inherits from template.Assignment, then store off the properties in a file named myAssignment.xml (note the leading lower-case letter!) in the same folder as MyAssignment.java. Implement the method getPropertiesFile() to return null. Or, remove your implementation of the method; it's no longer an abstract method and by default it returns null.
• That's all you have to do. Now your class should read the property file correctly whether run "normally" or as a webstart application (which your project application will ultimately need to do).
• Please try this out to make sure it works for you — if you don't change anything the old code will continue to work, but you should get used to using the new approach.

What to include in your description.

1. Mathematics. The description should begin with the mathematical content of the project. Try to be as precise and accurate as possible. Zoom in gently — make clear to which domain of mathematics this content belongs. Use technical language if necessary to specify this. If possible, include one or two references (either articles or books) which provide a more detailed description of the context of your project. None of the projects exists "in a vacuum"; give a pointer to your point(s) of departure.

2. Software design. Begin this section with a statement describing the concrete goal of your project.
For example, “We intend to create an application to allow the user to explore the 3-dimensional kaleidoscopes introduced above [in the mathematical section].”  Describe then the software tools and building blocks you think you will need in order to realize this aim, and how they will be put together.  For example, building blocks might include “jReality geometry factories, the de.jtem.halfedge package, and the de.jtem.discretegroup package”; additionally indicate which parts of these building blocks you will use and how they will be combined.  Think if this as the blueprint for your project.  Of course at this stage of the planning it shouldn’t be too detailed. 3. User experience. Finally, briefly indicate how you envision a typical “usage cycle” of your program.  What choices does the user have, how can he/she interact with the software sketched in the previous section?  Can the user save the state of the program from one invocation to the next? Will you rely on sliders and dials, or have you ideas for innovative interactive tools?  Will your application be particularly aimed at the PORTAL? Will there be a connection to 3D printing? # 17-19.12 Magic Theorems New Math this week:  the main results this week involved the so-called Magic Theorems, that establish important results about the orbifold notation we have been learning to use to name symmetry groups.  To understand the theorems we need to introduce the basic symbols for the notation. • $\large *$ Represents a mirror line • X Represents a glide reflection • $n\in \mathbb{N}$ represents a natural number $n > 1$, a rotation center of order $n$. • $O$ represents a pattern without any of the above features. To each of these symbols there is a cost associated.  By considering all possible valid strings consisting of these symbols, one can then associate to each string a cost by summing over the costs associated to the constituent symbols.  One obtains some beautiful theorems.  But first we need to introduce some further concepts. The Euler characteristic of a surface.   Suppose I have a set of polygons with the property that every edge is shared by exactly two polygons.  This set of polygons is called a 2-complex and determines a two-dimensional surface without boundary. Consider 2-complexes which yield the 2-sphere as surface.  Let V, E, and F denote the number of vertices, edges, and faces, resp. in the 2-complex. Then we showed by considering the platonic solids that $V-E+F=2$ for these surfaces, and gave reasons to believe that this relation is always true.  We define the associated function for a 2-complex $c$: the Euler characteristic $\lambda(c) := V-E+F$.  For a 2-sphere, $\lambda(c) = 2$.  Considering the 1-holed torus we showed that $\lambda(c)=0$ and, in general, for n-holed tori, $\lambda(c) = 2-2n$. Euler characteristic of orbifolds. By considering the orbifolds we have been studying, we were able to extend the notion of Euler characteristic to these spaces, too.  The key idea is to introduce fractional points and edges at points and edges which are fixed by some group element.  One determines the fractional weights so that the resulting weights add up to 1 wherever the stabilizer of the group is non-trivial.  That is, make sure that the statement: “The copies of the fundamental domain cover the plane (or sphere) without overlap” remains true. The cost function.  The cost function is defined as follows: • $k(*) = k(X) = 1$, • $k(n) = \dfrac{n-1}{n}$ if it comes before any $*$ in the string, otherwise $k(n) = \dfrac{n-1}{2n}$. 
• $k(O) = 0$.

Example: Let $s = 235$. Then $k(s) = \dfrac{1}{2} + \dfrac{2}{3} + \dfrac{4}{5} = \dfrac{59}{30}$.

Magical theorem for spherical groups: Let $s$ be a valid string consisting of the above symbols. Then the cost function satisfies $k(s) < 2$ exactly when $s$ describes a spherical orbifold $\mathcal{O}$. In this case, the Euler characteristic of $\mathcal{O}$ is $\lambda(\mathcal{O}) = 2-k(s)$, and the symmetry group of $\mathcal{O}$ has order $\dfrac{2}{\lambda(\mathcal{O})}$.

Example: Applied to $235$, one obtains that this orbifold has Euler characteristic $2 - \dfrac{59}{30} = \dfrac{1}{30}$, and the symmetry group has order 60.

There is a similar, simpler theorem for wallpaper groups.

Magical theorem for wallpaper groups: Let $s$ be a valid string consisting of the above symbols. Then the cost function satisfies $k(s) = 2$ exactly when $s$ describes a euclidean (wallpaper) orbifold $\mathcal{O}$.

Further research: Note that the theorem makes no claims regarding the order of the group, since they are all infinite groups. One important invariant of a wallpaper group is the index $O_T$ of the translation subgroup within the full group. Can you find a way to derive this invariant from the Conway-Thurston orbifold notation for the group?

Further reading: See the book The Symmetries of Things by John Conway et al., Chs. 3, 4, and 6 for details and proofs of all the above themes and claims. The book can be found in the math library "Apparat".
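To make the cost bookkeeping concrete, here is a small Python sketch (my own illustration, not from the lecture) that computes $k(s)$ for a signature written with single-digit rotation orders, '*' and 'x', and applies the magic theorems:

```python
from fractions import Fraction

def cost(signature):
    """Total cost k(s) of an orbifold signature such as '*235' or '632'.
    Handles single-digit rotation orders, '*' (mirror) and 'x' (glide)."""
    total, seen_star = Fraction(0), False
    for ch in signature:
        if ch in '*x':
            total += 1                      # k(*) = k(x) = 1
            seen_star = seen_star or ch == '*'
        elif ch.isdigit():
            n = int(ch)
            # (n-1)/n before any '*', (n-1)/(2n) after
            total += Fraction(n - 1, n) if not seen_star else Fraction(n - 1, 2 * n)
        else:
            raise ValueError('unexpected symbol: ' + ch)
    return total

for s in ['235', '*235', '3*2', '632', '*632']:
    k = cost(s)
    if k < 2:
        # spherical: lambda = 2 - k(s), group order 2/lambda
        print(s, 'is spherical, group order', Fraction(2) / (2 - k))
    elif k == 2:
        print(s, 'is a wallpaper group')
    else:
        print(s, 'is hyperbolic')
```

Running it reports order 60 for 235, order 120 for *235, and order 24 for 3*2, and classifies 632 and *632 as wallpaper groups, consistent with the groups discussed in these posts.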
# 10-12.12 Still more symmetry groups

Important scheduling decision: Today in class we determined that the presentations for the summer semester would be held during the first two weeks, Tuesday, April 1 – Friday, April 11, although lectures begin only on April 14. This choice was proposed by Mr. Gunn, since Easter school vacations run from April 14-April 28 and he has plans for a family vacation during this time. It's also likely that getting the presentations over before the semester lectures start will be healthy for everyone involved.

Important calendar dates this year. Mr. Gunn can't resist pointing out that the following dates from this semester are … well, if not significant, at least interesting:

• 7.11.13 The last time three consecutive primes will occur in a date until next century.
• 5.12.13 A Pythagorean triple date. This won't happen again until 17.8.15 — Mr. Gunn believes. Can anyone prove or disprove this?
• 11.12.13 Three consecutive whole numbers. Won't happen again until next century.

List of Projects. Here is a list of possible projects from Mr. Gunn's fantasy land. Also, take a look at last year's projects to see how such an idea can be implemented. The projects entitled "Creation of Escher Tilings", "A 3D Kaleidoscope" and "Dynamical Wallpaper Patterns" could be reworked again this year as there remains quite a lot to do. Of course you're encouraged to come up with your own project ideas also. I would like to meet with each team before Christmas to discuss the state of your project! One possibility would be to use the tutorials next week for this purpose. If there are no objections I'll prepare a sign-up sheet to pass around at Thursday's lecture (12.12.13).

Wallpaper group resource: The following chart contains a list of all 17 wallpaper groups, including the standard fundamental domains and the generators used in the discretegroup WallpaperGroup class.

[Chart: The 17 wallpaper groups]

George Hart videos. I encourage you to take a look at the mathematical videos from George Hart, an American mathematician who does lots of interesting stuff with symmetry groups. In particular, check out the videos involving group mathematical sculpture constructions ("Geometric barn-raising"). For example, this one. I think we should consider doing something like this as a class after we come back from Christmas and hanging the result on the ground floor of the math building (outside of the lecture hall 005, etc.).

10.12 Mathematics. Mr. Gunn introduced the remaining spherical groups, the ones that do not preserve 2 antipodal points. They are the symmetry groups of the platonic solids. To be precise, the groups and the platonic solids are:

• *233 tetrahedron
• *234 cube and octahedron
• *235 pentagonal dodecahedron

There are also the rotational subgroups of these groups: 233, 234, and 235. Mr. Gunn explained how the fundamental region for these groups can be arrived at by splitting an equilateral spherical triangle into 6 smaller right triangles with angles $\dfrac{\pi}{2}, \dfrac{\pi}{3}, \dfrac{\pi}{n}$ for $n \in \{3,4,5\}$. As a result these groups are known as triangle groups. Check out the triangle group application here, or access it in eclipse via "Referenced Libraries->discretegroupCGG.jar->discreteGroup.demo.TriangleGroupDemo.class", then right-click and "Run as …->Java application".

3*2. Finally there is the group 3*2, which has three mutually perpendicular mirrors which split the sphere into 8 right-angled triangles; in the center of each triangle is a rotation center of order 3. Assignment5 doesn't work quite right with this group; you are encouraged to fix it and send Mr. Gunn the result.

Euler characteristic. In discussing the platonic solids Mr. Gunn pointed out that the number of vertices V, the number of edges E, and the number of faces F in all the platonic solids obey the law $V-E+F=2$. He showed that standard operations to insert faces, edges, or vertices also leave the value of $V-E+F$ invariant. This value is called the Euler characteristic of the surface, E(S). It is a topological invariant of the surface (in this case the sphere). By considering a torus rolled up out of a single quadrilateral, one was able to calculate that for the torus $E(S) = V-E+F=0$. For a two-holed torus one can show (by folding it up from 1 octagon) that $E(S) = V-E+F=-2$. In general, $E(S) = 2(1-g)$ where $g$ is the genus of the surface (the number of holes it has).

Mr. Gunn claimed but postponed proving that the Thurston-Conway notation for 2-orbifolds and 2-manifolds has a close connection to the Euler characteristic. He indicated that if one takes the name of an orbifold $\mathcal{O}$ as a sequence of symbols, one can attach a "cost" function $K(s)$ to each symbol $s$ in such a way that the sum $\Sigma$ of the costs of all the symbols satisfies $E(\mathcal{O}) = 2-\Sigma$. Of course one also has to define what is meant by the Euler characteristic of $\mathcal{O}$, since up till now we only know what it means for a sphere or other compact orientable surface. So that will happen on Thursday 12.12.

Christmas ornament application. To demonstrate these groups, Mr. Gunn hooked up the beamer and fired up template.Assignment5 from the git repository. There's a blog post on this application here and an image album here. You're encouraged to play with this application to get a feel for these spherical groups.
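As a quick sanity check of the relation $V-E+F=2$, here are the counts for all five platonic solids (a small illustration of my own, not part of the lecture notes):

```python
# V - E + F for the five platonic solids: always 2, the Euler
# characteristic of the sphere.
solids = {
    'tetrahedron':  (4, 6, 4),
    'cube':         (8, 12, 6),
    'octahedron':   (6, 12, 8),
    'dodecahedron': (20, 30, 12),
    'icosahedron':  (12, 30, 20),
}
for name, (V, E, F) in solids.items():
    print(f'{name:12s} V - E + F = {V - E + F}')
```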
# 9.12 Assignment5

Update 16.12.13: I've fetched all the Assignment5's I could find. If there was a problem with yours, I've sent you an e-mail. Otherwise, I've created a picture gallery containing all the images so far received — there are 13, so we could make a lunar calendar with them! As you see, about half are derived from the Assignment5 I distributed and the other half are original. I'm pleased with the variety of the images. One small problem is that I don't know how to label the images. If you'd like to have a label attached to your image — and I think that would be a good thing — please e-mail me the text for the label. Try describing it in 50 words or less.

I've described what I expect for Assignment5 in this blog post. However, if you are having trouble generating an image, I've checked in an Assignment5 class to the repository. It's an application which uses some symmetry groups of the 2-dimensional sphere to create Christmas ornaments. You are free to adapt it to create an image for the Christmas calendar. Just be sure that you find a way to customize the code to add new functionality; using the existing code to make an image is not acceptable. I wasn't able to document the code exhaustively, but it should be possible to understand what's going on well enough to modify it.

The application uses instances of the class TriangleGroup. There is a choice of 3 different plugins:

1. Default: use the default triangle, in which the three vertices lie on the surface of the sphere.
2. Perpendicular cut is more complex. One chooses one of the 3 vertices and creates a new triangle lying in the tangent plane to the sphere at this vertex. To obtain the other vertices, the lines joining the original vertices to the origin are extended until they cut this tangent plane. Furthermore, for rotation groups, the original triangle is doubled to obtain a true fundamental domain. Also, a piece of the boundary of this region is saved off into a field curve of type double[][]. This isn't used directly by this class, but subclasses (such as Twirly below) can use it to generate interesting geometry.
3. Twirly builds on the polygon P found above to create a surface by iteratively applying a matrix to a subset of the boundary of P. The matrix is determined by 3 real parameters: a scale, a translate, and a rotation angle. The code shows the use of the class CurveCollector, which allows you to construct a surface swept out by a series of curves (each with the same number of vertices). For a related application showing the use of such an "iterated matrix", run the ElephantTrunk webstart, or create a run configuration in eclipse: run "charlesgunn.jreality.viewer.PluginSceneLoader" with the program argument "charlesgunn.jreality.worlds.ElephantTrunk".

Images. The following images show first the default fundamental region. The next three show the perpendicular cut with the 3 different choices of vertex. Finally, there are some images showing the Twirly plugin at work. You can also use template.Assignment5 as the basis for the actual Assignment5, to create an image for our Christmas calendar. To use it for this you'll need to write a plugin class that does more than what the template version provides. The possibilities are endless (IMHO). You can see all the images as a single album at my Google+ photo site.

[Image: The twirly plugin acting on the group 234 with perpendicular cut at the vertex with order-4 rotation.]
[Image: A single copy of the Twirly surface for the group 234 (group of order 24).]

The remaining images show results for the Twirly plugin, using a variety of groups and a variety of settings for the parameters for constructing the matrix.

[Image: An example of the group 3*2, the only point group containing a mirror which is not a mirror group.]
[Image: Group 235 flattened at order-2 rotation center.]
[Image: Group 235 flattened at order-5 rotation.]
[Image: Group 234 flattened at order-2 rotation.]
[Image: Group 234 flattened at the order-2 rotation.]
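The "iterated matrix" construction behind Twirly and ElephantTrunk is easy to prototype outside of jReality. Here is a small NumPy sketch (my own illustration; the scale, rotation and translation parameters are made up) that sweeps out the rows of points from which CurveCollector-style quad strips could be built:

```python
import numpy as np

# One fixed transform: scale a little, rotate about z, translate upward.
s, angle, dz = 0.95, np.pi / 12, 0.1          # made-up parameters
c, q = np.cos(angle), np.sin(angle)
M = np.array([[s * c, -s * q, 0, 0],
              [s * q,  s * c, 0, 0],
              [0,      0,     s, dz],
              [0,      0,     0, 1]])

# Starting curve: a small arc, in homogeneous coordinates (rows: x, y, z, w).
t = np.linspace(0, np.pi / 2, 8)
curve = np.vstack([np.cos(t), np.sin(t), np.zeros_like(t), np.ones_like(t)])

# Sweep: each entry of `rows` is one copy of the curve; consecutive copies
# can be joined by quad strips to form the surface.
rows = [curve]
for _ in range(40):
    rows.append(M @ rows[-1])
points = np.stack(rows)      # shape (41, 4, 8): 41 curves of 8 points each
print(points.shape)
```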
# 3-5.12 Normal Subgroups and More

Note: The tutorials next week will meet in the PORTAL (second floor, same end of the building as BMS). We will have a demo of the facilities there, including the 3D printing and scanning area.

Assignments. The continuation of Assignment 4 is described in the previous post. It's due this Friday, Dec. 6. Assignment 5 — creation of a Christmas calendar image — is also described there. It's due on Dec. 13. December 9: See this update.

03.12.13: Mr. Gunn continued the guided tour of the planar symmetry groups by considering symmetry groups of the plane with 2 independent translations: the wallpaper groups. In contrast to the frieze groups, they can have rotational symmetries of order greater than 2.

Wallpaper group application. You can run a jReality wallpaper group application from your eclipse project by navigating in the Package Explorer to "Referenced Libraries->discretegroupCGG.jar->discreteGroup.wallpaper.WallpaperPluggedIn.class" and then using right-mouse click to choose "Run as … Java application". (Here is a Java webstart of the same application. But be careful — if you run it under Java 7, which is sometimes difficult to avoid with webstarts, make sure the left hand panel is hidden or it runs v e r y slowly.) This application allows you to experiment with all 17 wallpaper groups — see the "Wallpaper" menu and the left inspection panel. Here's a picture generated with the group 244 and the automatic painting option (choose the "Wallpaper->run" option).

Wallpaper group resource: The following chart contains a list of all 17 wallpaper groups, including the standard fundamental domains and the generators used in the discretegroup WallpaperGroup class.

[Chart: The 17 wallpaper groups]

More group theory. To obtain a good answer for Assignment 4 it's useful to understand a bit more about the internal structure of the frieze groups. It turns out that the translation subgroup plays an important role. To understand this, we need to introduce some notation.

Definition: A subgroup $H$ of a group $G$ is normal if $gHg^{-1} = H$ for all $g \in G$. We write $H \trianglelefteq G$.

Theorem: The translation subgroup $T$ of a euclidean group $G$ is a normal subgroup.

Proof: A general isometry $g$ can be written as $g(\mathbf{x}) = A\mathbf{x} + T$, where $A \in O(n)$ is an orthogonal transformation and $T \in \mathbb{R}^n$ is the vector of the translation. Then $g^{-1}(\mathbf{x}) = A^{-1}\mathbf{x} - A^{-1}T$, and writing out the composition $gSg^{-1}$, where $S$ is the translation $\mathbf{x} \rightarrow \mathbf{x} + S$, yields the result $gSg^{-1}(\mathbf{x}) = \mathbf{x} + AS$, which is clearly a translation. QED

Cosets. A subgroup $H \trianglelefteq G$ determines an equivalence relation on the group $G$ as follows: define $\mathbf{x} \sim \mathbf{y} \iff \mathbf{x}^{-1}\mathbf{y} \in H$. We leave it to the reader to verify that this is an equivalence relation. Hence, it partitions $G$ into equivalence classes of elements. Each equivalence class is called a left coset of $G$ with respect to $H$. A similar definition involving $\mathbf{x}\mathbf{y}^{-1}$ yields the right cosets. The coset containing $id$ is clearly $H$ itself. [Exercise: when $H$ is normal, the left cosets and the right cosets coincide.] The coset containing $\mathbf{x} \in G$ is written $[\mathbf{x}]$. [Exercise: Every element of $[\mathbf{x}]$ can be uniquely written as $\mathbf{x}\mathbf{h}_x$ for some $\mathbf{h}_x \in H$.] Let us choose a set of representatives $\{\mathbf{a}_i\}$ for the cosets. Then $\mathbf{x} \in [\mathbf{a}_i]$ can be written as $\mathbf{x} = \mathbf{a}_i \mathbf{h}_x$ for some $\mathbf{h}_x \in H$.
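Here is a tiny concrete example of cosets (my own, not from the lecture): the two cosets of the normal subgroup $A_3$ of even permutations inside $S_3$, computed by brute force. The quotient group construction below turns these two cosets into a group of order 2.

```python
from itertools import permutations

def compose(p, q):           # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def sign(p):                 # parity via inversion count
    inv = sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))
    return 1 if inv % 2 == 0 else -1

G = list(permutations(range(3)))         # S3, six elements
H = [p for p in G if sign(p) == 1]       # A3, a normal subgroup

# Left coset of x is { x*h : h in H }; exactly two distinct cosets appear.
cosets = {tuple(sorted(compose(x, h) for h in H)) for x in G}
print(len(cosets))           # 2  ->  S3 / A3 is the group of order 2
```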
The quotient group. The cosets of a normal subgroup have an induced group structure. Indeed, let $[\mathbf{x}]$ and $[\mathbf{y}]$ be two cosets. Then define a product on cosets by $[\mathbf{x}][\mathbf{y}] = [\mathbf{x}\mathbf{y}]$. Using our canonical representatives we can write the RHS as $[(\mathbf{a}_i \mathbf{h}_x)(\mathbf{a}_j \mathbf{h}_y)]$. Now since right and left cosets are the same, there exists $\mathbf{h}'_x \in H$ such that $\mathbf{h}_x \mathbf{a}_j = \mathbf{a}_j \mathbf{h}'_x$. Then, moving parentheses, $[\mathbf{a}_i (\mathbf{h}_x \mathbf{a}_j) \mathbf{h}_y] = [\mathbf{a}_i \mathbf{a}_j \mathbf{h}'_x \mathbf{h}_y] = [\mathbf{a}_i \mathbf{a}_j]$, so the result is clearly independent of the choice of $\mathbf{x}$ and $\mathbf{y}$. We write $G/H$ and call it the quotient group of $G$ by $H$.

Connection to Assignment 4. At the conclusion of Thursday's lecture, Mr. Gunn showed how these ideas could be applied to Assignment 4 to generate the desired list of elements of the frieze groups. Here's what the resulting code for updateCopies() looks like:

    Matrix[] cosets = null;
    Matrix generatingTranslation = null;
    int numCosets = 0;
    switch(whichGroup) {
    case 0:
        numCosets = 1;
        cosets = new Matrix[1];
        cosets[0] = new Matrix();
        generatingTranslation = MatrixBuilder.euclidean().translate(1, 0, 0).getMatrix();
        break;
    case 1:
        numCosets = 2;
        cosets = new Matrix[numCosets];
        cosets[0] = new Matrix(); // identity
        cosets[1] = MatrixBuilder.euclidean().reflect(new double[]{1,0,0,0}).getMatrix();
        generatingTranslation = MatrixBuilder.euclidean().translate(2, 0, 0).getMatrix();
        break;
    // implement this group (* infinity infinity) here as above in case 0;
    // there are two vertical mirrors a distance 1 apart.
    case 3:
        // a glide reflection: reflect in the line y = 0, then translate in x
        generatingTranslation = MatrixBuilder.euclidean().reflect(new double[]{0,1,0,0}).translate(1,0,0).getMatrix();
        numCosets = 1;
        cosets = new Matrix[1];
        cosets[0] = new Matrix();
        break;
    case 6:
        // implement this group (* 2 2 infinity) here as above in case 0;
        // there are two vertical mirrors a distance 1 apart and a horizontal mirror in the line y = 0.
        generatingTranslation = MatrixBuilder.euclidean().translate(2,0,0).getMatrix();
        numCosets = 4;
        cosets = new Matrix[4];
        cosets[0] = new Matrix();                    // identity
        cosets[1] = MatrixBuilder.euclidean().reflect(new double[]{1,0,0,0}).getMatrix();   // vertical mirror
        cosets[2] = MatrixBuilder.euclidean().reflect(new double[]{0,1,0,0}).getMatrix();   // horizontal mirror
        cosets[3] = MatrixBuilder.euclidean().rotateZ(Math.PI).getMatrix();                 // half-turn
        break;
    ....    // other cases not included here
    }
    int outerLoopCount = count/numCosets, counter = 0;
    // run through the powers of the generating translation ...
    for (int j = 0; j <= outerLoopCount; ++j) {
        Matrix powerOfTranslation = Matrix.power(generatingTranslation, (j - outerLoopCount/2));
        // and multiply it on the right by each of the coset representatives
        for (int k = 0; k < numCosets; ++k) {
            // skip over non-existing scene graph components at end
            if (counter >= count) continue;
            SceneGraphComponent child = translationListSGC.getChildComponent(counter++);
            Matrix tmp = new Matrix(cosets[k].getArray().clone());
            tmp.multiplyOnLeft(powerOfTranslation);
            tmp.assignTo(child);
        }
    }
2020-07-02 05:49:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.770055890083313, "perplexity": 2531.251456485546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878519.27/warc/CC-MAIN-20200702045758-20200702075758-00519.warc.gz"}
https://math.stackexchange.com/questions/3737651/is-a-right-invariant-equivalent-relation-necessarily-for-some-finite-aut
# Is a right invariant equivalence relation necessarily $R_M$ for some finite automaton $M$?

In Hopcroft and Ullman's Introduction to Automata Theory, Languages, and Computation (1979):

Let $M = (Q, \Sigma, \delta, q_0, F)$ be a DFA. Relation $R_M$ is defined as: for any $x$ and $y$ in $\Sigma^*$, let $xR_My$ if and only if $\delta(q_0, x) = \delta(q_0, y)$.

An equivalence relation $R$ such that $xRy$ implies $xzRyz$ is said to be right invariant (with respect to concatenation). We see that every finite automaton induces a right invariant equivalence relation.

Is a right invariant equivalence relation necessarily $R_M$ for some finite automaton $M$?

Theorem 3.9 (The Myhill-Nerode theorem). The following three statements are equivalent:

1. The set $L \subseteq \Sigma^*$ is accepted by some finite automaton.
2. $L$ is the union of some of the equivalence classes of a right invariant equivalence relation over $\Sigma^*$ of finite index.
3. Let equivalence relation $R_L$ over $\Sigma^*$ be defined by: $x R_L y$ if and only if for all $z \in \Sigma^*$, $xz$ is in $L$ exactly when $yz$ is in $L$. Then $R_L$ is of finite index.

Suppose $L$ is a regular language. If $L$ is the union of some of the equivalence classes of a right invariant equivalence relation $R$ over $\Sigma^*$ of finite index, is there some finite machine $M$ s.t. $L$ is the language which $M$ accepts and $R=R_M$?

My questions above come from having difficulty understanding what the Myhill-Nerode theorem means. Thanks.

## 1 Answer

Question 1. A relation of the form $R_M$ for some finite automaton has finite index. So no, a right invariant equivalence relation is not necessarily of the form $R_M$: take for instance the equivalence $\sim$ defined by $u \sim v$ if and only if $|u| = |v|$, which has infinitely many classes.

Question 2. Let $L$ be a regular language of $A^*$ which is the union of some of the equivalence classes of a right invariant equivalence relation $\sim$ of finite index. Let $Q$ be the (finite) set of $\sim$-equivalence classes. For $q \in Q$ and $a \in A$, let $q \cdot a$ be the $\sim$-class of $ua$ for some $u \in q$. This is well-defined, that is, it does not depend on the choice of $u \in q$: if $v \in q$, then $u \sim v$ and hence $ua \sim va$. Since $L$ is the union of some $\sim$-classes, there is a subset $F$ of $Q$ such that $L = \bigcup_{q \in F} q$. Let $i$ be the $\sim$-class of the empty word. Then the DFA ${\cal A} = (Q, A, \cdot, i, F)$ accepts $L$ and ${\sim} = R_{\cal A}$.
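To make the construction in the answer concrete, here is a small Python sketch (my own illustration, not from the original post). Take $A = \{a, b\}$ and the right invariant relation "same number of a's modulo 2", which has index 2; let $L$ be the class of words with an even number of a's. The classes become the states of a DFA:

```python
from itertools import product

# Right invariant equivalence of finite index: u ~ v iff
# (number of a's in u) mod 2 == (number of a's in v) mod 2.
def cls(word):
    return word.count('a') % 2

A = 'ab'
Q = {0, 1}                       # the two ~-classes
# q . a := class of (u a) for any u in q; well-defined by right invariance.
delta = {(q, ch): (q + 1) % 2 if ch == 'a' else q for q, ch in product(Q, A)}
i, F = 0, {0}                    # class of the empty word; L = union over F

def accepts(word):
    q = i
    for ch in word:
        q = delta[(q, ch)]
    return q in F

for w in ['', 'a', 'ab', 'aab', 'baba']:
    assert accepts(w) == (cls(w) == 0)
print('DFA agrees with the equivalence classes')
```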
2020-10-22 01:47:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 63, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9225414395332336, "perplexity": 72.87373900668183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878662.15/warc/CC-MAIN-20201021235030-20201022025030-00617.warc.gz"}
https://math.libretexts.org?title=TextMaps/Linear_Algebra/Map:_Linear_Algebra_(Waldron,_Cherney,_%26_Denton)/01:_What_is_Linear_Algebra%3F/1.2:_What_Are_Linear_Functions%3F
# 1.2: What Are Linear Functions?

In calculus classes, the main subject of investigation was functions and their rates of change. In linear algebra, functions will again be the focus of your attention, but now functions of a very special type. In calculus, you probably encountered functions $f(x)$, but were perhaps encouraged to think of this as a machine "$f$", whose input is some real number $x$. For each input $x$ this machine outputs a single real number $f(x)$.

In linear algebra, the functions we study will take vectors, of some type, as both inputs and outputs. We just saw that vectors were objects that could be added or scalar multiplied (a very general notion), so the functions we are going to study will look novel at first. So things don't get too abstract, here are five questions that can be rephrased in terms of functions of vectors:

Example 2 (Functions of Vectors in Disguise)

a) What number $x$ solves $10x=3$?

b) What vector $u$ from 3-space satisfies the cross product equation $\begin{pmatrix}1\\ 1\\ 0\end{pmatrix} \times u = \begin{pmatrix}0\\ 1\\ 1\end{pmatrix}$?

c) What polynomial $p$ satisfies $\int_{-1}^{1} p(y)\, dy = 0$ and $\int_{-1}^{1} y\, p(y)\, dy=1$?

d) What power series $f(x)$ satisfies $x\frac{d}{dx} f(x) -2f(x)=0$?

e) What number $x$ solves $4 x^2=1$?

For part (a), the machine needed takes in $x$ and outputs $10x$. This is just like a function $f(x)$ from calculus that takes in a number $x$ and spits out the number $f(x)=10x$.

For part (b), we need something more sophisticated: a machine that takes in $\begin{pmatrix}x\\ y\\ z\end{pmatrix}$ and outputs $\begin{pmatrix}z\\ -z\\ y-x\end{pmatrix}$. The inputs and outputs are both 3-vectors.

You are probably getting the gist by now, but here is the machine needed for part (c): one that takes in $p$ and outputs $\begin{pmatrix}\int_{-1}^{1} p(y)\, dy\\ \int_{-1}^{1} y\, p(y)\, dy\end{pmatrix}$. Here we input a polynomial and get a 2-vector as output!

By now you may be feeling overwhelmed and thinking that absolutely any function with any kind of vector as input and any other kind of vector as output can pop up next to strain your brain! Rest assured that linear algebra involves the study of only a very simple (yet very important) class of functions of vectors; it's time to describe the essential characteristics of linear functions.

Let's use the letter $L$ for these functions and think again about vector addition and scalar multiplication. Let's suppose $v$ and $u$ are vectors and $c$ is a number. Then we already know that $u+v$ and $cu$ are also vectors. Since $L$ is a function of vectors, if we input $u$ into $L$, the output $L(u)$ will also be some sort of vector. The same goes for $L(v)$, $L(u+v)$ and $L(cu)$. Moreover, we can now also think about adding $L(u)$ and $L(v)$ to get yet another vector $L(u)+L(v)$, or of multiplying $L(u)$ by $c$ to obtain the vector $cL(u)$.

Perhaps a picture of all this helps: the "blob" on the left represents all the vectors that you are allowed to input into the function $L$, and the blob on the right denotes the corresponding outputs. Hopefully you noticed that there are two vectors apparently not shown on the blob of outputs:

$$L(u)+L(v)\quad \text{and}\quad cL(u)\, .$$

You might already be able to guess the values we would like these to take. If not, here's the answer; it's the key pair of equations of the whole class, from which everything else follows:

1. Additivity: $$L(u+v)=L(u)+L(v)\, .$$

2. Homogeneity: $$L(cu)=cL(u)\, .$$

Most functions of vectors do not obey this requirement; linear algebra is the study of those that do. Notice that the additivity requirement says that the function $L$ respects vector addition: it does not matter if you first add $u$ and $v$ and then input their sum into $L$, or first input $u$ and $v$ into $L$ separately and then add the outputs. The same holds for scalar multiplication; try writing out the scalar multiplication version of the previous sentence. When a function of vectors obeys the additivity and homogeneity properties we say that it is linear (this is the "linear" of linear algebra). Together, additivity and homogeneity are called linearity. Other, equivalent, names for linear functions are:

Function = Transformation = Operator

The questions in cases (a) - (d) of our example can all be restated as a single equation:

$$Lv = w$$

where $v$ is an unknown and $w$ a known vector, and $L$ is a linear transformation. To check that this is true, one needs to know the rules for adding vectors (both inputs and outputs) and then check linearity of $L$. Solving the equation $Lv=w$ often amounts to solving systems of linear equations, the skill you will learn in Chapter 2.

A great example is the derivative operator:

Example 3 (The derivative operator is linear)

For any two functions $f(x)$, $g(x)$ and any number $c$, in calculus you probably learnt that the derivative operator satisfies

1. $\frac{d}{dx} (cf)=c\frac{d}{dx} f$,
2. $\frac{d}{dx}(f+g)=\frac{d}{dx}f+\frac{d}{dx}g$.

If we view functions as vectors with addition given by addition of functions and scalar multiplication just multiplication of functions by a constant, then these familiar properties of derivatives are just the linearity property of linear maps.

Before introducing matrices, notice that for linear maps $L$ we will often write simply $Lu$ instead of $L(u)$. This is because the linearity property of a linear transformation $L$ means that $L(u)$ can be thought of as multiplying the vector $u$ by the linear operator $L$. For example, the linearity of $L$ implies that if $u,v$ are vectors and $c,d$ are numbers, then

$$L(c u + d v) = c L u + d L v,$$

which feels a lot like the regular rules of algebra for numbers. Notice, though, that "$u L$" makes no sense here.

Remark: A sum of multiples of vectors $cu + dv$ is called a linear combination of $u$ and $v$.
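As a small numerical illustration (my own sketch, not part of the text): the map from part (b) of Example 2, $L(u) = \begin{pmatrix}1\\ 1\\ 0\end{pmatrix} \times u$, passes the linearity test $L(cu+dv)=cL(u)+dL(v)$ for randomly chosen inputs.

```python
import numpy as np

a = np.array([1.0, 1.0, 0.0])
L = lambda u: np.cross(a, u)          # the cross-product map from Example 2(b)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
c, d = 2.5, -1.0

lhs = L(c * u + d * v)
rhs = c * L(u) + d * L(v)
print(np.allclose(lhs, rhs))          # True: L is linear
```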
2017-09-25 02:24:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9098197817802429, "perplexity": 609.4027164572169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690307.45/warc/CC-MAIN-20170925021633-20170925041633-00587.warc.gz"}
https://scicomp.stackexchange.com/questions/34268/2-dimensional-gauss-hermite-quadrature-in-r
# 2-dimensional Gauss-Hermite quadrature in R

A similar question was asked here and the given answer is perfect for a one-dimensional integration. I need to do a two-dimensional integration in R with a Gauss-Hermite quadrature:

$$\int_{\mathbb{R}^2} h(p_1,p_2)\, \phi(p_1,p_2)\, dp_1\, dp_2$$

with $p_1$ and $p_2$ two parameters that follow a multivariate normal distribution of density $\phi(\cdot)$, such that

$$(p_1,p_2) \sim \mathcal{N}_2(\mu, \Sigma)$$

with $\mu = \begin{pmatrix}0\\ 1\end{pmatrix}$ and $\Sigma = \begin{bmatrix}0.6 & 0.5\\ 0.5 & 0.25\end{bmatrix}$.

I tried some packages in R but the results I got were very different from one another. I also tried the solution here but I'm not sure of the results I got. I am looking for a simple solution, preferably based on gauss.quad. Not being an expert, I have no idea when to divide by $\pi$, whether I should rescale in my case, and so on. Thanks in advance for any help!
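No answer is recorded in this snapshot, but for reference the standard recipe is: factor $\Sigma = LL^T$, substitute $p = \mu + \sqrt{2}\, L z$, and multiply the tensor-product weights by $\pi^{-d/2}$ (this is where the division by $\pi$ comes from when $d=2$). One caveat about the question as posed: the stated $\Sigma$ has determinant $0.6 \cdot 0.25 - 0.5^2 = -0.1 < 0$, so it is not a valid covariance matrix and has no Cholesky factor. The sketch below (in Python rather than R, with a made-up integrand, and with the off-diagonal entry changed to 0.3 so that $\Sigma$ is positive definite) shows the mechanics:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss   # nodes/weights for exp(-x^2)

mu = np.array([0.0, 1.0])
Sigma = np.array([[0.60, 0.30],    # note: the Sigma in the question
                  [0.30, 0.25]])   # (covariance 0.5) is not positive definite
Lchol = np.linalg.cholesky(Sigma)

h = lambda p1, p2: p1 * p2**2      # made-up integrand for illustration

n = 20
z, w = hermgauss(n)
total = 0.0
for zi, wi in zip(z, w):
    for zj, wj in zip(z, w):
        p = mu + np.sqrt(2.0) * Lchol @ np.array([zi, zj])
        total += wi * wj * h(p[0], p[1])
total /= np.pi                      # pi^{-d/2} with d = 2

# Monte Carlo cross-check of E[h(p1, p2)]
samples = np.random.default_rng(1).multivariate_normal(mu, Sigma, 200_000)
print(total, h(samples[:, 0], samples[:, 1]).mean())
```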
2021-08-01 04:58:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7455645203590393, "perplexity": 188.14924996182307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00068.warc.gz"}
https://www.physicsforums.com/threads/longitudes-and-latitudes.70051/
# Longitudes and latitudes

1. Apr 5, 2005

### lwymarie

Why are latitudes not parallel to each other but longitudes are?

2. Apr 5, 2005

### rachmaninoff

Actually, latitudes are parallel to each other - the longitudinal lines are the ones that meet at the poles. It's an arbitrary coordinate convention, there is no deep significance to it; you could, for example, use two sets of "longitudinal" coordinates to describe points on a sphere - you'd need four different poles for that.

3. Apr 5, 2005

### lwymarie

But I've heard a definition of 'parallel': if line A is perpendicular to both line B and line C, then line B and line C are parallel. Since the equator is perpendicular to every longitude, are longitudes then parallel?

4. Apr 5, 2005

### locke

I'd imagine that definition was only intended to apply to straight lines.

5. Apr 5, 2005

### Integral Staff Emeritus

Look at a globe, think about it.

6. Apr 5, 2005

### Integral Staff Emeritus

That is a definition from PLANE geometry. The surface of the earth is not a plane; it is a sphere.

7. Apr 5, 2005

### dextercioby

Both the Riemann and the Bolyai-Lobachevsky-Gauss geometries invalidate Euclid's parallel axiom... An overall view one can get here.

Daniel.

Last edited: Apr 5, 2005

8. Apr 5, 2005

### Staff: Mentor

For your question though, longitude lines were designed to slice the earth through the poles. It just makes sense to do it that way.

Last edited: Apr 5, 2005

9. Apr 5, 2005

### DaveC426913

Both sets of lines are referenced to the same source: the Earth's rotational axis. But one is parallel, while the other is perpendicular. Latitude lines slice the Earth along planes perpendicular to the axis, while longitudes slice the Earth along planes that pass through the axis. Latitudes form parallel concentric rings. Longitudes form slices, like an orange. BTW, longitudes are always "great circles", i.e. they pass through the centre of the Earth and are at Earth's maximum radius. Latitudes are not great circles, with one exception: the equator.

10. Apr 5, 2005

### dextercioby

Not really, you see: the centers of the circles lie on (actually form) the polar axis...

Daniel.

11. Apr 5, 2005

### lwymarie

Sorry, I've typed something wrong... That means longitudes are not parallel but latitudes are? And my definition of parallel (if line A is perp. to both line B and line C, then B and C are parallel) is not correct?

12. Apr 5, 2005

### dextercioby

It is correct only in a Euclidean geometry...

Daniel.

13. Apr 5, 2005

### lwymarie

So why did the geographers design non-parallel longitudes instead of parallel ones? Also, whether the longitudes are parallel to each other or meet at the poles, they are perpendicular to the equator. Doesn't it sound odd?

14. Apr 5, 2005

### Staff: Mentor

This is starting to hurt my head - latitude lines are parallel, longitude lines are not. Longitude lines are perpendicular to the equator. It was done that way simply because it makes sense to do it that way. Try figuring out time zones (for example) with parallel longitude lines....

It's fine, but what the others are saying (I'll say it another way) is that lines traced on the surface of the earth are no longer lines, they are curves. When drawing maps, we pretend the earth is flat (generally - most maps are Mercator projections), but it isn't.

15. Apr 5, 2005

### dextercioby

The really troubling thing when looking at a planiglobe is that Greenland seems larger in surface than Australia... Australia is bigger, in a ratio of ~11/3...

Daniel.

16. Apr 5, 2005

### DaveC426913

Right. Which is what makes them concentric - they all have the same centre (well, in two dimensions anyway). They are concentric circles that have been translated along the axis.

17. Apr 5, 2005

### DaveC426913

Are you imagining a globe while you are asking your questions? I think that will clear up your confusion very quickly. (see attached)

If you try to draw a globe with parallel longitudes, you will end up creating two extra poles - and they'll be on the equator.

Last edited: Nov 28, 2006

18. Apr 5, 2005

### DaveC426913

Something else you should note:

"2 lines perpendicular to the same line are parallel"
"2 parallel lines do not meet"

These two statements are only true in flat (Euclidean) geometry. On curved surfaces, 2 perpendicular lines are not parallel, and parallel lines can meet. Though the longitudes are perpendicular to the equator, they still cross - at the poles. See attachment.

Last edited: Nov 28, 2006

19. Apr 5, 2005

### pervect Staff Emeritus

Curves of constant latitude are not really lines in spherical geometry, because they are not the shortest distance between two points. On a sphere, great circles are the shortest distance between two points. Therefore lines of longitude are true lines in spherical geometry - they are great circles, and represent the shortest distance between two points. Lines of latitude are not true lines.

As nearly as I can tell, the definition of parallel lines in 2d is that they are infinite lines that do not intersect. http://www.cs.unm.edu/~joel/NonEuclid/parallel.html This corresponds with the Mathworld definition, except that Mathworld only gives a definition for Euclidean geometry.

Neither lines of latitude nor longitude are parallel lines - one fails the intersection test, the other fails the line test. Since any two great circles on a sphere will intersect, there are no true parallel lines on a sphere. Wikipedia also mentions that there are no parallel lines in elliptic geometry: http://en.wikipedia.org/wiki/Non-euclidean_geometry

20. Apr 5, 2005

### Gokul43201 Staff Emeritus

This is addressing the question of "why" that Dave and Russ have responded to: parallel latitudes and great-circle longitudes designate, respectively, the polar and azimuthal angles. These are none other than the $\theta$ and $\phi$ of a spherical co-ordinate system. What's special about these co-ordinates is that they form what is known as an "orthonormal basis". This is a fancy math way of saying that the co-ordinate directions at any point are always perpendicular to each other. You can see this readily from Dave's attached picture. Any latitude intersects any longitude at right angles (it's not JUST the equator that's perpendicular to the longitudes). This system of using an orthonormal basis is very useful for doing mathematical calculations relating to the earth.

Notice that this orthonormality condition is not met by the second system described in Dave's picture; you will even find points where the vertical and horizontal "latitudes" are parallel. Such things make for a giant mathematical mess, giving everyone involved a terrible headache. The same kind of problem arises when you try to use two sets of "longitudes".

What's additionally nice about the system used (and this has been covered by Dave and Russ) is that it designates a single axis to the earth. This is a good thing, because the Earth naturally has one axis - the axis of rotation. The reason this is a good thing is that it becomes easy to describe azimuthally varying quantities, such as local time.
Last edited: Apr 5, 2005
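To make pervect's point concrete - that a curve of constant latitude is not the shortest route between two points on a sphere - here is a small numeric sketch (my illustration, not part of the thread; a unit sphere and two points at 60° N separated by 90° of longitude are assumed):

```python
import math

R = 1.0                    # unit sphere; multiply by Earth's radius for km
lat = math.radians(60.0)   # both points sit at latitude 60 N
dlon = math.radians(90.0)  # separated by 90 degrees of longitude

# Route 1: walk along the 60 N latitude circle, whose radius is R*cos(lat).
along_latitude = R * math.cos(lat) * dlon

# Route 2: the great-circle (geodesic) route, via the spherical law of cosines.
central_angle = math.acos(math.sin(lat) ** 2
                          + math.cos(lat) ** 2 * math.cos(dlon))
great_circle = R * central_angle

print(f"along the latitude circle: {along_latitude:.4f}")  # ~0.7854
print(f"along the great circle:    {great_circle:.4f}")    # ~0.7227 (shorter)
```

The geodesic route is shorter, which is exactly why latitude circles (other than the equator) fail pervect's "line" test.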
2018-08-17 00:09:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6236205697059631, "perplexity": 1523.9330804904027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211316.39/warc/CC-MAIN-20180816230727-20180817010727-00594.warc.gz"}
https://www.gatecseit.in/library-information-science/lis-mcq-set-130/
# LIS MCQ set 130

November 25, 2018

## Library and Information Science MCQs mock test 130

Question 1
A non-linear model of information seeking behaviour was presented by
A T.D. Wilson
B Ellis
C Kuhlthau
D Allen Foster

Question 2
Information retrieval is fastest from
A Floppy Disk
B Magnetic Tape
C Hard Disk
D None of the above

Question 3
Which of the following citation style manuals are from the same publisher?
i. A.P.A.
ii. Turabian
iii. Chicago
iv. M.L.A.
A (i) and (ii) are correct.
B (ii) and (iii) are correct.
C (iii) and (iv) are correct.
D (ii) and (iv) are correct.

Question 4
Who coined the functions of management as POSDCORB?
A Luther Gulick
B Henri Fayol
C F.W. Taylor
D W. Schulze

Question 5
Who developed PRECIS?
A S.R. Ranganathan
B Derek Austin
C C.A. Cutter
D R. Williams

Question 6
What does WLAN stand for?
A Wireless Local Area Network
B Wide Local Area Network
C Wide Logical Area Network
D World Local Area Network

Question 7
In which year did the University of Madras start the B.Lib.Sc. course?
A 1931
B 1935
C 1942
D 1947

Question 8
Match the following (the row for item A in List I and item i in List II is not present in the source):

| List - I | List - II |
|---|---|
| B. Subject book publisher | ii. American Psychological Society |
| C. Government Press | iii. Harper Collins |
| D. Scholarly Publisher | iv. W.B. Saunders |

A $A-ii, B- iv, C- iii, D- i$
B $A- iv, B- iii, C- i, D- ii$
C $A-i, B- iv, C- iii, D- ii$
D $A- iii, B- iv, C- i, D- ii$

Question 9
Financial allocation made to a library for the purchase of furniture and equipment usually comes under the budget head
A Recurring grant
B Endowments
C Non-recurring grant
D Annual grant

There are 9 questions to complete.
2018-12-14 10:02:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28089669346809387, "perplexity": 12381.652350015444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825512.37/warc/CC-MAIN-20181214092734-20181214114234-00605.warc.gz"}
http://openstudy.com/updates/518d6414e4b062a8d1da2bc5
## ValentinaT 2 years ago

Identify the asymptotes of the graph of each function. Then graph the function.

1. ValentinaT

$y = \frac{ 1 }{ x + 2 }$

2. Mertsj

The vertical asymptote occurs where the denominator is 0. What value of x makes the denominator 0?

3. ValentinaT

-2.

4. Mertsj

So the vertical asymptote is x = -2

5. ValentinaT

Okay, now to graph the function.

6. Mertsj

[drawing]

7. Mertsj

Now we know that the degree of the numerator is less than the degree of the denominator, so the x-axis (the line y = 0) is the horizontal asymptote.

8. Mertsj

So you need to figure out which two areas the two branches will be in.

9. Mertsj

[drawing]

10. Mertsj

Can you figure that out?
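A quick symbolic check of the answers discussed above (my addition, using sympy; not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
y = 1 / (x + 2)

# Vertical asymptote: the denominator vanishes at x = -2.
print(sp.solve(sp.denom(y), x))      # [-2]

# Horizontal asymptote: y -> 0 as x -> +/- infinity, so the line y = 0
# (the x-axis) is the horizontal asymptote.
print(sp.limit(y, x, sp.oo))         # 0
print(sp.limit(y, x, -sp.oo))        # 0
```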
2016-05-03 07:10:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9761859774589539, "perplexity": 801.7620257043188}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860118807.54/warc/CC-MAIN-20160428161518-00136-ip-10-239-7-51.ec2.internal.warc.gz"}
https://questions.examside.com/past-years/jee/question/the-length-of-a-magnet-is-large-compared-to-its-width-and-br-2004-marks-4-qv4vdlhci7bu9ulg.htm
### JEE Mains Previous Years Questions with Solutions

1

### AIEEE 2004

The length of a magnet is large compared to its width and breadth. The time period of its oscillation in a vibration magnetometer is $2s.$ The magnet is cut along its length into three equal parts and these parts are then placed on each other with their like poles together. The time period of this combination will be

A $2\sqrt 3 \,s$
B ${2 \over 3}\,\,s$
C $2\,s$
D ${2 \over {\sqrt 3 }}\,s$

## Explanation

$T = 2\pi \sqrt {{I \over {MB}}}$, where $I = {1 \over {12}}m{\ell ^2}$ is the moment of inertia.

When the magnet is cut into three pieces the pole strength will remain the same, and the new moment of inertia is

$I' = {1 \over {12}}\left( {{m \over 3}} \right){\left( {{\ell \over 3}} \right)^2} \times 3 = {I \over 9}$

We have, magnetic moment $(M)$ $=$ pole strength $\left( m \right) \times \ell$

$\therefore$ New magnetic moment, $M' = m \times \left( {{\ell \over 3}} \right) \times 3 = m\ell = M$

$\therefore$ $T' = {T \over {\sqrt 9 }} = {2 \over 3}s.$

2

### AIEEE 2004

The magnetic field due to a current carrying circular loop of radius $3$ $cm$ at a point on the axis at a distance of $4$ $cm$ from the centre is $54\,\mu T.$ What will be its value at the centre of the loop?

A $125\,\mu T$
B $150\,\mu T$
C $250\,\mu T$
D $75\,\mu T$

## Explanation

The magnetic field at a point on the axis of a circular loop at a distance $x$ from the centre is

$B = {{{\mu _0}i\,{a^2}} \over {2{{\left( {{x^2} + {a^2}} \right)}^{3/2}}}}$

and at the centre it is $B' = {{{\mu _0}i} \over {2a}}$.

$\therefore$ $B' = {{B{{\left( {{x^2} + {a^2}} \right)}^{3/2}}} \over {{a^3}}}$

Put $x = 4$ cm and $a = 3$ cm $\Rightarrow B' = {{54\left( {{5^3}} \right)} \over {3 \times 3 \times 3}} = 250\,\mu T$

3

### AIEEE 2004

A long wire carries a steady current. It is bent into a circle of one turn and the magnetic field at the centre of the coil is $B.$ It is then bent into a circular loop of $n$ turns. The magnetic field at the centre of the coil will be

A $2n$ $B$
B ${n^2}\,B$
C $nB$
D $2{n^2}\,B$

## Explanation

KEY CONCEPT: The magnetic field at the centre of a circular coil of radius $R$ carrying current $i$ is $B = {{{\mu _0}i} \over {2R}}$

Given: $n \times \left( {2\pi r'} \right) = 2\pi R$ $\Rightarrow nr' = R\,\,\,\,\,\,\,\,\,\,\,...\left( 1 \right)$

$B' = {{n{\mu _0}i} \over {2r'}}\,\,\,\,\,\,\,\,\,\,\,...\left( 2 \right)$

From $\left( 1 \right)$ and $\left( 2 \right)$: $B' = {{n{\mu _0}i \cdot n} \over {2R}} = {n^2}B$

4

### AIEEE 2004

A current $i$ ampere flows along an infinitely long straight thin-walled tube. The magnetic induction at any point inside the tube is then

A ${{{\mu _0}} \over {4\pi }} \cdot {{2i} \over r}$ tesla
B zero
C infinite
D ${{2i} \over r}$ tesla

## Explanation

Using Ampere's law on a circle of radius $r$ about the axis ($B$ is the same everywhere on it by symmetry):

$\oint {B\,dl} = {\mu _0}\,i_{enc}$, i.e., $B \times 2\pi r = {\mu _0}\,i_{enc}$

Here the enclosed current is zero for $r < R$, where $R$ is the radius of the tube, $\therefore$ $B=0$
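A quick numeric check of these explanations (my addition; the numbers come from the questions above):

```python
import math

# Q1: T' = T * sqrt((I'/M') / (I/M)); here I' = I/9 and M' = M, so T' = T/3.
T = 2.0
print(T * math.sqrt((1 / 9) / 1))            # 0.666... s -> option B

# Q2: B_axis = mu0*i*a^2 / (2*(x^2 + a^2)^(3/2)), B_centre = mu0*i / (2*a),
# hence B_centre = B_axis * (x^2 + a^2)^(3/2) / a^3.
a, x, B_axis = 0.03, 0.04, 54e-6             # 3 cm, 4 cm, 54 uT
print(B_axis * (x**2 + a**2)**1.5 / a**3)    # 2.5e-04 T = 250 uT -> option C
```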
2022-01-25 07:58:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6986830234527588, "perplexity": 562.0850027236216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304798.1/warc/CC-MAIN-20220125070039-20220125100039-00487.warc.gz"}
https://chemistry.stackexchange.com/questions/130867/seeing-solubility-in-various-cases/130872
# Seeing solubility in various cases

Match the following:

$$\begin{array}{|c|c|c|c|} \hline &\textrm{Column-I}&&\textrm{Column-II} \\ \hline \textrm{(A)}&\ce{AgBr}&\mathrm{(p)}&\textrm{Solubility in water is more than expected} \\ \hline \textrm{(B)}&\ce{AgCN}&\mathrm{(q)}&\textrm{Solubility in acidic solution is more than in pure water (consider no common-ion effect from the anion of the acid)} \\ \hline \textrm{(C)}&\ce{Fe(OH)3}&\mathrm{(r)}&\textrm{Solubility in strongly basic solution is more than in pure water} \\ \hline \textrm{(D)}&\ce{Zn(OH)2}&\mathrm{(s)}&\textrm{Solubility decreases in presence of a common anion} \\ \hline \end{array}$$

Link to original image of the question.

The problem is to 'match' options in the two columns (possibly with multiple answers). How do I determine the solubility? I tried this through a reaction like

$$\ce{AgBr + H2O -> AgOH + HBr}$$

How do I know whether its solubility is greater or smaller than expected?

• Feel free to add your attempt (a thought process), and what you have tried/read/analyzed to solve the problem. Mar 31, 2020 at 11:29

A is wrong. $$\ce{AgBr}$$ is one of the least soluble compounds of $$\ce{Ag}$$. Why don't you have a look at a table of solubility products to be convinced? Furthermore, $$\ce{AgBr}$$ will never react with $$\ce{H_2O}$$ to produce a strong acid like $$\ce{HBr}$$ and a base like $$\ce{AgOH}$$. The reaction goes the other way round: strong acids react with hydroxides to produce a salt and water.

To solve D, you must know that $$\ce{Zn(OH)_2}$$ is soluble in an excess of $$\ce{OH^-}$$ ions.

• @Buck Thorn. OK, I know, but I thought we should not give whole answers without the author showing us some beginning of reasoning. So let me give him two partial answers. $\ce{AgBr}$ is poorly soluble, but its solubility decreases further if $\ce{Br-}$ ions are in excess, due to its solubility product and the absence of any complexation reactions. So $A$ is related to $s$. Now the only compound which is soluble in alkaline medium is $\ce{Zn(OH)2}$, giving the ion $\ce{[Zn(OH)4]^{2-}}$. So $D$ is related to $r$. The remaining substances are easy to match. Aug 22, 2021 at 10:15
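As a quantitative illustration of why (A) pairs with (s) (my added example; the $K_{sp}$ value is a typical 25 °C literature figure, not part of the question): for $\ce{AgBr}$ with $K_{sp} \approx 5\times10^{-13}$,

$$s_{\text{water}} = \sqrt{K_{sp}} \approx 7\times10^{-7}\ \text{M}, \qquad s_{0.1\ \text{M}\ \text{Br}^-} = \frac{K_{sp}}{[\text{Br}^-]} \approx 5\times10^{-12}\ \text{M},$$

so an excess of the common anion suppresses the solubility by roughly five orders of magnitude.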
2022-10-03 04:52:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5444634556770325, "perplexity": 992.1444197720215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00099.warc.gz"}
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.18/share/doc/Macaulay2/DGAlgebras/html/_free__D__G__Algebra.html
# freeDGAlgebra -- Constructs a DGAlgebra

## Synopsis

• Usage: A = freeDGAlgebra(R,degreeList)
• Inputs:
  • R, a ring, The ring over which the DGAlgebra is defined
  • degreeList, a list, A list of degrees of the algebra generators of R.
• Optional inputs:
  • Variable => ..., default value T
• Outputs:

## Description

This function returns a DGAlgebra A whose underlying algebra is a graded commutative polynomial ring in a number of variables equal to the number of the degrees input. The current version of this package does not handle algebras A whose underlying algebra is not a polynomial ring.

i1 : R = ZZ/101[x,y,z]
o1 = R
o1 : PolynomialRing

i2 : A = freeDGAlgebra(R,{{1},{1},{1},{3}})
o2 = {Ring => R, Underlying algebra => R[T_1..T_4], Differential => null}
o2 : DGAlgebra

i3 : A.natural
o3 = R[T_1..T_4]
o3 : PolynomialRing, 4 skew commutative variables

i4 : setDiff(A,{x,y,z,x*T_2*T_3-y*T_1*T_3+z*T_1*T_2})
o4 = {Ring => R, Underlying algebra => R[T_1..T_4], Differential => {x, y, z, z*T_1*T_2 - y*T_1*T_3 + x*T_2*T_3}}
o4 : DGAlgebra

The resulting DGAlgebra will not be graded, since the differential given does not respect the grading due to the degrees assigned in the definition.

i5 : isHomogeneous(A)
o5 = false

i6 : Add = toComplex A
o6 = R^1 <-- R^3 <-- R^3 <-- R^2 <-- R^3 <-- R^3 <-- R^1   (homological degrees 0 through 6)
o6 : ChainComplex

i7 : B = freeDGAlgebra(R,{{1,1},{1,1},{1,1},{3,3}})
o7 = {Ring => R, Underlying algebra => R[T_1..T_4], Differential => null}
o7 : DGAlgebra

i8 : B.natural
o8 = R[T_1..T_4]
o8 : PolynomialRing, 4 skew commutative variables

i9 : setDiff(B,{x,y,z,x*T_2*T_3-y*T_1*T_3+z*T_1*T_2})
o9 = {Ring => R, Underlying algebra => R[T_1..T_4], Differential => {x, y, z, z*T_1*T_2 - y*T_1*T_3 + x*T_2*T_3}}
o9 : DGAlgebra

The result of the above declaration will be graded.

i10 : isHomogeneous(B)
o10 = true

i11 : Bdd = toComplex B
o11 = R^1 <-- R^3 <-- R^3 <-- R^2 <-- R^3 <-- R^3 <-- R^1   (homological degrees 0 through 6)
o11 : ChainComplex

Note that the differential is not passed into the constructor. The reason for this (at the moment) is that Macaulay2 does not know what ring the differentials are defined over until after the underlying algebra is constructed, so the differential is set later with setDiff. Many DG algebras that one encounters in commutative algebra have been implemented, however, and do not need to be defined 'by hand'. For example, if one wants to work with the Koszul complex as a DG algebra, then one should see the command koszulComplexDGA. Also, if one wishes to specify the name of the variables used, specify the Variable option; see the example in dgAlgebraMap.

## Caveat

There is currently a bug handling DG algebras that have no monomials in some degree, but some monomials in a later degree; for example if one replaces the 3 in the above example with a 5.

## Ways to use freeDGAlgebra :

• "freeDGAlgebra(Ring,List)"

## For the programmer

The object freeDGAlgebra is a method function with options.
2021-09-28 11:00:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45809781551361084, "perplexity": 1434.046986617673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060677.55/warc/CC-MAIN-20210928092646-20210928122646-00381.warc.gz"}
http://blog.inf.ed.ac.uk/gdutton/category/scripty/
not-a-blog, not-a-service, not-a-clue.

## a big hand for rfe

Here are some tools that I've put together that make a huge difference to a clunky rfe workflow. Bored already? Don't worry, I've written the script already: tl;dr

Imagine power cycling a DICE server with redundant PSUs, using our lovely power bar control files. You don't know where it's installed, so you have to search for it. Read the rest of this entry »

Written by gdutton

7 March 2013 at 2109

Posted in DICE, howto, scripty

## alpine, nagios and display filters

I've been aware of alpine's "display filter" feature for some time, used as it is for on-the-fly GPG interpretation amongst other things. But I'd never really examined the feature before now. The manual says:

The [filter] command is executed and the message is piped into its standard input. The standard output of the command is read back by Alpine.

This says it all: display filters turn out to be an extremely powerful generic mechanism for reformatting and enhancing text; it works particularly well when applied to machine-generated messages. Maybe its power is best explained by the example which caused me to investigate it in the first place:

### An example (the nagios bit):

A longstanding irritant to me has been my difficulty in shutting nagios up. For a long time I've been relying on a filter to parse nagios' incoming emails and generate a URL. The display filter closes the loop, automatically injecting that magic URL at the end of the message. Here's a simplified version of the filter, reminiscent of the one in the previous post:

#!/usr/bin/gawk -f
# Crude detection of problem type for acknowledgement link
# Don't forget to validate these inputs...
/Notification Type: / { TYPE=$3; }
/Service:/ { SERVICE=substr($0,length($1)+1,length($0)); }
/Host:/ { HOST=$2; }
# Important: this is a filter, so don't forget to print input lines back out!
// { print; }
# Now add the acknowledgement link below:
END {
  if (HOST && TYPE == "PROBLEM") {
    # this is the script which generates the URL.
    # ideally this should be replaced with some awk to do the same thing
    cmd="~/bin/nagack "HOST" "SERVICE
    cmd | getline url
    close(cmd)
    # now add the link to the email.
    print "[Acknowledgement link: "url" ]"
  }
}

Now, to alpine's Display Filters setting, add:

Display Filters = _LEADING("***** Nagios")_ /path/to/nagios-filter-script

That's it! My emails from nagios now look like:

***** Nagios *****
Notification Type: PROBLEM
Service: ssh
Host: myhost
Address: 192.168.12.34
State: CRITICAL
...
[Acknowledgement link: https://nagiosserver/nagios/cgi-bin/cmd.cgi?cmd_typ=3... ]

### Important caveats:

• If you're not careful, by adding these filters you will have introduced a trivial local shell injection attack to your mail client. Validate your inputs - just like I didn't above!
• The developers have this to note about running filters on every message:

Testing for the trigger and invoking the filter doesn't come for free. There is overhead associated with searching for the trigger string, testing for the filter's existence and actually piping the text through the filter. The impact can be reduced if the Trigger Modifying Tokens [...] are employed.

I've certainly noticed a small (immeasurable, but noticeable) delay in opening messages with triggers. Large enough to be annoying if I'd planned to filter every message, even using a trivial bash filter which itself is quick to complete.
• One additional caveat on DICE: if your alpine session outlives your AFS credentials, and you've stored your display filters in your home directory, you will find that the display filters simply disappear. As good a reminder as any to renew, and thankfully a "renc" is all that's required to restore your filters to former glory.

That's it! Surprisingly trivial, and with a handful of these triggers, the benefits are huge. I'm using five so far, mostly generating clickable links to some of our automated systems, but I'd be pleased to hear what other people are doing with these filters.

Written by gdutton

9 July 2012 at 1204

## Editing component files with vim

Editing LCFG component source files using Vim is of course The Right Thing to do, but due to the way these source files are named (typically filename.ext.cin) vim doesn't necessarily pick up on the filetype, and goodies such as syntax highlighting are lost. This is easy to fix using vim's ftdetect system. Some examples for simple types:

" These files are always POD in disguise
au BufRead,BufNewFile *.pod.cin : set filetype=pod
" Slightly contentious: a new filetype is needed, really, but this is a decent match.
au BufRead,BufNewFile *.def.cin : set filetype=cpp
" For other, unknown types, detect from the as-yet undefined shebang:
au BufRead,BufNewFile *.cin : if getline(1) =~ '^#!@SHELL@' | set filetype=sh | endif
au BufRead,BufNewFile *.cin : if getline(1) =~ '^#!@PERL@' | set filetype=perl | endif

(note the latter two lines are specified separately, rather than elseifed, purely for readability).

It's fairly obvious that this can be extended to any file type, and there's also scope for adding an automatic mapping to allow all files of form file.typ.cin to be mapped automatically to their default .typ "sub-extension" file type. Anyway, the above has already improved my productivity no end so I'll leave the latter exercise to the reader. Comments and contributions are welcome, as always, so long as they're not suggestions to use Emacs(!)

Written by gdutton

1 May 2012 at 1657

Posted in DICE, howto, scripty

## get on the rpm bus

This is a quickie script which streamlines my RPM building and submission to a single command. Note that this is entirely dependent on our shiny new Package Forge system, which feeds RPMs to multiple platforms for building and eventual submission into our RPM buckets. All it does is chain up "rpmbuild -bs [spec]; pkgforge submit [srpm]" but it's a nice timesaver nonetheless. Side-benefits include the automatic generation of a readable ID and provision of a tracking link for pkgforge so that you can anxiously refresh the page to watch the build progress (or you could just wait for the report email...).

So, here it is; my very simple and stupid RPM automation. Suggested name: 'rpmbus'.

#!/bin/bash
# rpmbus: build a source RPM and submit it to Package Forge in one step.
if [[ -z $2 ]]; then
  echo "RPMbus: build -> submit assist"
  echo "Usage: $(basename $0) <bucket> <spec> [pkgforge args]"
  exit 1
fi
bucket=$1; shift
spec=$1; shift
args=$*

# Build the source package and capture the path rpmbuild reports.
output=$(rpmbuild -bs ${spec} | tail -n 1)
pkg=$(echo ${output} | sed -e 's_^Wrote: __')
if [[ ! -e ${pkg} ]]; then
  echo "Package wasn't built: ${output}"
  exit 1
fi

# Generate a readable, unique submission ID from the spec name plus a timestamp.
id=$(basename ${spec} | sed -e 's_\.spec__' -e 's_\.__g')-$(date +"%s")

echo -e "Found source package:\n${pkg}"
echo " Extra args: ${args:-none}"
read -p "Submit to '${bucket}'? " foo
if [[ ${foo} != 'y' ]]; then
  echo "Cancelled"
  exit 1
fi
echo "Submitting to ${bucket}..."
pkgforge submit --id ${id} -B ${bucket} ${args} ${pkg} && \
  echo " https://pkgforge.inf.ed.ac.uk/job/view?id=${id}"

Caveats: well, they're numerous and they're pretty apparent. But it took five minutes to write and it WFM :)

Written by gdutton

11 May 2011 at 0855

Posted in DICE, scripty

## losing locate

As much as MacOS' Spotlight is an integral and indispensable part of my interaction with my laptop, a part of me still begrudges the "gratuitous" CPU and disk utilisation which is of course a necessary part of its operation. However, as a hardened Linux user unprepared to do without the luxury of the locate database, my inner resource miser was further upset on discovering that these databases were not shared, and with even more irritation enabled the periodic updatedb cron job, as was suggested by locate itself.

Whether it was SSD envy, a nagging sense of a job half-done or sheer procrastination I'm not sure, but last week I felt compelled to do away with the needless platter-spinning and found the answer far too quickly, in the form of the Spotlight shell utility mdfind. One alias later:

alias locate="mdfind -name"

and I was able to destroy the locate database, and discontinue its indexing:

launchctl unload /System/Library/LaunchDaemons/com.apple.locate.plist

whew!

#### Some obligatory qualifications…

This applies largely to OS X 10.6, Snow Leopard. Leopard's arrangement is slightly different, and I know nothing about earlier versions... And no, it doesn't support all of locate's arguments, but I rarely used them (apart from -i) anyway (and don't have any case-sensitive, indexed filesystems on the mac right now). man mdfind provides workarounds for many of the more unusual uses of locate, and grep provides the rest...

Written by gdutton

5 May 2011 at 1222

Posted in Apple, howto, scripty

## Chrome and SPNEGO

Update: The landscape on OS X has changed since this post was written. Hugh Cole Baker provided in a comment an excellent mechanism for setting self-managed policy on OS X which beats my clunky wrapper; Lion's Kerberos support has changed in a way which prevents SPNEGO working for our Cosign servers (though a fix at our end is planned); Chrome on Linux gained a proper managed configuration, which we use locally (I produced the lcfg-chrome component for this purpose).

I was most impressed by the efficient conclusion to the enhancement request for SPNEGO on Chrome, but having read that the request had been met, I struggled for far too long to discover how to activate it. Irritated by Firefox 4 beta 7's breakage of SPNEGO on the Mac*, but reluctant to revert to 3.6, I felt it was time to reinvestigate the alleged Chrome support. Read the rest of this entry »

Written by gdutton

20 November 2010 at 1818

## Nag nag nag nag nagios

Nagios is an extremely useful tool, until it isn't. Which is to say, it's nothing but a hindrance to have nagios continue to bombard you with IMs and emails when you're already working on the problem.

### Surely you can just acknowledge the fault and shut it up…?

Well, sometimes, but it is hardly convenient to break out a Firefox session when you're attached to a serial console with your lovely secure-shell-enabled phone. And even if you are on a DICE machine it's a bit of a pain to have to navigate the slightly clunky Nagios UI to find the host and service you wish to silence.

I started with a dumb bash script. Read the rest of this entry »

Written by gdutton

30 July 2010 at 0921
2013-12-06 11:53:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2653288245201111, "perplexity": 6099.146103156574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051509/warc/CC-MAIN-20131204131731-00002-ip-10-33-133-15.ec2.internal.warc.gz"}
http://www.physics.umd.edu/grt/seminars/abstracts01spring.html
GRAVITY THEORY SEMINARS, Spring 2001

Abstracts:

Feb. 2, Fri., 2:00pm, Room 4208
Stefano Liberati, University of Maryland
"Analog models of GR from Bose-Einstein Condensates"

We shall discuss the prospects for the use of Bose-Einstein condensates as systems able to mimic the kinematical aspects of general relativity. Taking a long-term view, we ask what the ultimate limits of such a system might be. To this end, we consider a very general version of the Gross-Pitaevskii equation that can be used in discussing Bose-Einstein condensates in heterogeneous and highly nonlinear systems. We demonstrate that at low momenta linearized excitations of the phase of the condensate wavefunction obey a (3+1)-dimensional d'Alembertian equation coupling to a (3+1)-dimensional Lorentzian-signature "effective metric" that is generic, and depends algebraically on the background field.

Feb. 23, Fri., 2:00pm, Room 4208
Charis Anastopoulos, University of Maryland
"Quantum phases: the basic structure of quantum theory?"

We explain how quantum phases are operationally determined in measurements of systems at more than one moment of time. They do not correspond to any self-adjoint operator. We study how they can be quantified and then explain an axiomatic framework that takes the notion of phases as primitive. In fact, such a description need not make reference at all to a Hilbert space, which provides a stepping stone for novel interpretational schemes for quantum theory.

Mar. 2, Fri., 2:00pm, Room 4208
Ted Jacobson, University of Maryland
"Black holes, Hawking radiation, entropy, and information loss in a thin film of 3He-A"

The Hawking process---quantum decay of a black hole via emission of thermal radiation---is theoretically compelling, yet it raises deep puzzles about quantum gravity and is observationally remote. Condensed matter analogs of black holes have already yielded insights into one of the puzzles, and may one day allow us to experimentally study Hawking radiation analogs in the laboratory. This talk will focus on a black hole analog employing a domain wall texture in a thin film of superfluid helium-3 in the A-phase. The causal structure of the effective black hole will be discussed, and a model of black hole formation and subsequent evaporation will be presented which exhibits the transfer of information to a disconnected "sister universe".

Mar. 9, Fri., 2:00pm, Room 4208
Tatsu Koike, Keio University, Yokohama
"Critical behavior in gravitational collapse."

Critical behavior analogous to the one in condensed matter physics occurs in gravitational collapse of many systems. It is characterized by (1) scaling and (2) universality. Namely, for a generic one-parameter family of initial data parametrized by $p$, where those with $p > p^*$ evolve into a black hole and those with $p < p^*$ do not, solutions near $p=p^*$ satisfy the following: (1) the mass $M$ of the black hole formed obeys the scaling relation $M\propto |p-p^*|^\beta$; (2) the critical exponent $\beta$ does not depend on the one-parameter family. This is an introductory talk about the subject focused on what it

Mar. 16, Fri., 2:00pm, Room 4208
Violeta A. Prieto-Gortcheva, University of Maryland
"BF gravity and the Immirzi parameter"

A novel BF-type formulation of real four-dimensional gravity is proposed, which generalizes previous models. In particular, it allows for an arbitrary Immirzi parameter. In addition, the analogue of the Urbantke metric is constructed for this model.
Finally, the canonical analysis of the BF action is performed, leading to Ashtekar-Barbero phase variables.

Apr. 6, Fri., 2:00pm, Room 4208
Dieter Brill, University of Maryland
"2+1 dimensional black holes: the inside story"

Apr. 13, Fri., 2:00pm, Room 4208
Hugo A. Morales Técotl, UAMI (Mexico) and CGPG-PSU
"On effective Maxwell and Dirac equations in loop quantum gravity"

Simple arguments indicate cosmological GRBs might probe quantum gravity scales. Candidate theories have been studied accordingly. In this talk a coarse-grained approximation in loop quantum gravity with matter fields is described, yielding Planckian modifications to the Maxwell and Dirac equations in flat spacetime. The corresponding Planckian dispersion relations are hence derived.

Apr. 16, Mon., 1:30pm, Room 1201. Joint with Particle Theory Group
Robert Brandenberger, Brown Univ.
"Inflationary Cosmology: Progress and Problems"

Inflationary cosmology has become a cornerstone of modern cosmology. In spite of the success in terms of explaining many observations, current realizations of inflation suffer from several conceptual problems. In this lecture I will mention a couple of areas of inflationary cosmology in which there has been substantial progress in our understanding, such as the theory of reheating (with applications to Anderson localization) and the theory of cosmological perturbations. I will then focus on several important conceptual problems (e.g. the trans-Planckian problem) and mention some possible new avenues.

Apr. 27, Fri., 2:00pm, Room 4208
Peter Fritschel, MIT
"Pursuing gravitational wave astrophysics with LIGO"

May 4, Fri., 2:00pm, Room 1201
Neil Cornish, U. Montana
"Future directions in gravitational wave astronomy"

Space-based gravitational wave observatories, such as the Laser Interferometer Space Antenna (LISA), will open an exciting new window on the universe. I will discuss the LISA mission and some potential LISA follow-on missions in terms of their science potential. One exciting possibility is a mission to measure the cosmic gravitational wave background that is produced during the formation of the universe.

May 11, Fri., 2:00pm, Room 4208
Ho Jung Paik, UMD
"Search for a string-theory predicted violation of the gravitational inverse-square law"

I will discuss the design of a cryogenic null test of the 1/r^2 law with expected resolution of 10^-3 at 50 mm. Achieving the desired sensitivity requires not only extremely low intrinsic noise but careful attention to metrology errors and extremely low coupling to seismic, gravitational, and magnetic disturbances. A null source is employed to minimize Newtonian errors. A circular disk of large diameter with a correction ring produces two second-order null points near its center, where the test masses are located. The test masses are thin sapphire disks with superconducting coils deposited. Magnetic flux trapped in these coils allows levitation and self-alignment of the test masses with respect to the magnetic shield and the source mass. Superconducting sensing coils, located near the levitated test masses, are connected together to a SQUID to form a sensitive differential accelerometer. Due to the reflection symmetry in the source-detector configuration, the 1/r^2 violation signal appears at the second harmonic of the source motion. This frequency discrimination, combined with the differential measurement, gives the required isolation for source-induced vibration.
Most metrology and density errors average out due to the cylindrical symmetry in the design. Superconducting magnetic levitation allows soft, low-loss suspension of the test masses. The low intrinsic noise of the detector, combined with reduced metrology errors, leads to high resolution of the experiment.

May 16, Wed., 10:30am, Room 4208
Carlos Barcelo, Washington University, Missouri
"Generation and detection of gravitational waves in the presence of large compact extra dimensions"

The concepts underlying the physics of generation and detection of gravitational waves are substantially changed when one considers the existence of large compact extra dimensions. However, in normal situations these new concepts give place to the same measurable results. In this talk, I will explain what we mean by normal situations and in which form the existence of large compact extra dimensions could show up when leaving these normal regimes.

May 18, Fri., 2:00pm, Room 4208
Fred Cooperstock, UVic (Canada)
"Energy localization in general relativity and gravitational waves"

May 25, Fri., 1:30pm, Room 4208
Renaud Parentani, University of Tours (France)
"Hawking radiation as a probe of Quantum Gravity"
2018-01-21 16:44:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.446478933095932, "perplexity": 4113.405623416797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890795.64/warc/CC-MAIN-20180121155718-20180121175718-00119.warc.gz"}
https://www.breathinglabs.com/prevention-wellbeing/contactless-radar-based-breathing-monitoring-of-premature-infants-in-the-neonatal-intensive-care-unit/
# Contactless radar-based breathing monitoring of premature infants in the neonatal intensive care unit

Nearly 15 million infants are born annually before the 37th week of pregnancy, meaning that about 10% of all births worldwide are premature [1]. Due to their immature organ systems and associated functions, as well as their immune system, these infants are at a higher risk of infections, chronic diseases and respiratory problems. The immaturity of breathing regulation and lungs often leads to apnea-bradycardia and respiratory distress syndromes. This is commonly followed by bronchopulmonary dysplasia in 27% of the infants born at less than 30 weeks of gestation [2-5]. Consequently, further development of these premature infants has to continue ex-utero, and they usually have to spend several weeks at a Neonatal Intensive Care Unit (NICU). During this period, continuous monitoring of their underdeveloped organs is necessary. Often, newborns are dependent on parenteral nutrition, respiratory support, and invasive diagnostic interventions which, albeit essential for survival, may cause stress to the child. Basic vital parameters such as respiration, heart rate and oxygen saturation also need to be monitored. To this end, several sensors are directly attached to their fragile skin and connected to the monitoring systems through cables. Besides mobility restrictions, these sensors often cause skin irritation and may eventually lead to pressure necrosis [6-11].

In order to promote the development of premature babies, a number of efforts have been made toward non-invasive monitoring and diagnostic solutions. The use of sensors that can monitor a variety of vital signs without a cable connection, but bonded to the skin, is being investigated in [12,13]. Current studies are also investigating the potential of different non-contact techniques for non-invasive diagnostics in children. Efforts are underway to detect pathological changes in body excretions by analyzing volatile organic compounds [14,15]. There are approaches using optical methods to monitor pulse rate and oxygen saturation without direct skin contact and cable connection, based on e.g. dynamic light scattering [16], video [17] or photoplethysmography [18,19]. Of high relevance for preterm infants is also the diagnosis of respiratory pathologies and classification regarding periodic breathing and apneas [20,21]. This task is addressed using different non-contact techniques, which require redundant measurements of various vital signs, e.g. respiration motion, heart rate, oxygen saturation or nasal breathing [22-24].

The contactless monitoring of cardiorespiratory activity neither confines nor inhibits the patient, reduces hygiene risks and does not cause any discomfort, irritation or skin damage [25,26]. In this context, radars have already been proven to be a promising technology [27-29], being intrinsically low-power, low-cost and privacy preserving. Unlike camera-based systems [30,31], radar signals can penetrate through different materials (such as plexiglass, clothing, mattresses and blankets), and are not affected by skin pigmentation or ambient light levels. However, due to the reduced transmitted power, these signals can be easily buried in the background noise, or masked by stronger external interference, including body movements from the monitored patient [32]. This interference is a major challenge for accurate estimation in contactless solutions as well as for cabled devices.
Specific signal processing techniques are thus needed in order to ensure reliable and robust measurements. Recent works [33,34] have demonstrated that an ultrawideband radar can provide reliable breathing rate estimates for neonates under specific conditions. However, these investigations were limited to a single scenario, where the neonates were lying in an open-air crib, always in supine position. In addition, radar performance was evaluated only during minimal movements of the monitored patients. In this article, we take one step further by using a simpler continuous-wave (CW) radar device, and investigating premature infants under different scenarios common to the NICU routine, irrespective of the amount of movement or external interference. The specificity of the monitored patients in a real clinical setup creates several challenges, which were addressed through different contributions in the proposed signal processing framework. Particularly, rather than just discarding measurements under strong interference [33-37], we present a novel random body movement mitigation technique based on the time-frequency decomposition of the recovered signal. Additionally, we propose a simple and accurate frequency estimator, which exploits the harmonic structure of the breathing signal.

### Problem formulation

The activity of the cardiovascular and respiratory systems causes some physical and physiological effects on the human body. The chest wall moves during the inspiration/expiration cycle as a result of the diaphragm and intercostal muscle movements. This small and periodic displacement can be detected by radar, allowing accurate estimation of the breathing rates under certain conditions. Figure 1a illustrates the basic operational principle of a CW radar. The transmitted signal propagates through free space and reaches every object in the radar's field-of-view, being reflected back with additional phase information regarding each object's position. The received signal can thus be modeled as a scaled and time-shifted version of the transmitted signal, in which the phase variation over time contains valuable information regarding the scene movement. This time-varying phase $$\theta(t)$$ can usually be recovered as

$$\theta(t) = \frac{4 \pi d(t)}{\lambda}, \qquad (1)$$

where $$\lambda$$ is the radar operating wavelength, and d(t) represents the displacement signal which, ideally, would correspond only to the chest wall motion due to the breathing mechanism. As seen by the radar, this movement mainly originates from the reflecting points over the moving chest surface, but it may additionally include residual motion from the belly, sides and back, depending on the patient's relative position. In healthy adults, standard amplitudes for this motion range between 4 mm and 12 mm [38], with breathing rates varying from 5 to 25 breaths per minute (bpm) [39]. For premature infants, these amplitudes can be smaller than 1 mm, while average breathing rates can normally reach 60 bpm [40], and go up to 80 bpm under specific conditions [41]. Perfect recovery of the chest wall motion d(t) would allow precise estimation of the breathing frequency $$f_b$$ by simple analysis of the movement periodicity.
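As a minimal numeric sketch of inverting (1) (my illustration, not the authors' code; the arctangent phase recovery used here corresponds to the AD method discussed below):

```python
import numpy as np

def displacement_from_iq(i_ch, q_ch, wavelength):
    """Recover the displacement d(t) from radar I/Q samples via Eq. (1):
    theta(t) = 4*pi*d(t)/lambda  =>  d(t) = lambda*theta(t)/(4*pi).
    np.unwrap removes the 2*pi jumps left by the arctangent."""
    theta = np.unwrap(np.arctan2(q_ch, i_ch))
    return wavelength * theta / (4 * np.pi)

# Toy example: a 24 GHz radar (lambda ~ 12.5 mm) watching a 0.5 mm,
# 45 bpm sinusoidal chest motion sampled at 100 Hz.
fs, wavelength = 100.0, 3e8 / 24e9
t = np.arange(0, 10, 1 / fs)
d_true = 0.5e-3 * np.sin(2 * np.pi * (45 / 60) * t)
theta = 4 * np.pi * d_true / wavelength
d_hat = displacement_from_iq(np.cos(theta), np.sin(theta), wavelength)
assert np.allclose(d_hat, d_true, atol=1e-9)   # ideal, noise-free recovery
```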
However, in a real clinical setup, besides unavoidable hardware imperfections, the received radar signal is usually mixed with additional reflections from the external environment, arising not only from different body movements of the monitored patient, but also from every moving object in the scene. These interfering signals are usually much stronger than those induced by the chest wall's millimeter displacement, and this makes accurate recovery and subsequent estimation of the breathing frequency a challenging task. In addition, when considering premature infants, the reduced amplitudes of the chest wall motion and the wider range of possible breathing rates pose an additional signal processing challenge in relation to previously reported research with adults.

### Clinical setup and protocol

The study was performed in the Department of Pediatrics, at the Saarland University Medical Center (Homburg, Germany). Figure 2a shows a premature neonate being monitored with the conventional method. Besides the sensors attached to the chest and abdomen, and connected by cables to the central monitoring unit (for oxygen saturation, heart rate, and respiration), an additional peripheral venous catheter and a gastric tube are also necessary in this stage. The clinical setup, including the neonatal cot, the radar device, and the reference monitoring system, is shown in Fig. 2b-d. The radar is certified for operation in the 24-GHz ISM (industrial, scientific and medical) band, and it was installed outside the cot, attached to a low-vibration tripod. The relative distance to the monitored infant was around 45 cm to 50 cm. Due to the radar's inherent capabilities, no modification to the cot structure was necessary, and the plastic cover could remain closed during the measurements. In Fig. 2d, twins are sharing the same bed, with only one being monitored with the contactless method. Cobedding of twins is a common procedure in the NICU, with several studies reporting physiological benefits to the infants [42,43].

A total of 12 premature infants were included in the study. Supplementary Table 1 shows a summary of the patients' information. They were selected on the basis of medical opinion, taking into account the medical safety of participating in the study. For each infant, the measurements were carried out on three different days, at noon (after feeding), over a period of 25 minutes each. Their natural position was not changed during each measurement. Besides the supine (with the chest facing the radar), prone (with the back facing the radar) and side positions, we have also investigated cobedding cases with only one infant being monitored using the contactless method. The idea was to investigate the different effects when collecting radar data from the chest/abdomen and back; additionally, whether monitoring twins is possible, and what would be a safe distance (in terms of radar interference) between them. The basic principle that guided the data collection protocol was to ensure seamless operation at the NICU. A detailed description of the patients' protocols is shown in Supplementary Tables 2a-d, including all interventions and additional transients manually annotated during the measurements.

### Signal processing background

Figure 1b shows the basic block diagram of the signal processing chain. The initial signal processing step for CW systems is commonly known as phase demodulation.
It is essentially the process where the received in-phase and quadrature (I and Q) signals from the radar's analog-to-digital converter (ADC) are combined with the aim of recovering the displacement signal d(t). Among several methods, the two most used are the arctangent demodulation (AD) [44] and the complex signal demodulation (CSD) [45]. While the AD enables precise recovery of the chest wall motion, it is highly sensitive to hardware calibration, and to the presence of DC offsets, noise, and external interference. The CSD is more robust to these effects, but it relies on small displacements for recovering an approximation of the breathing motion (please refer to the Methods).

Figure 3 shows examples of the recovered breathing motion from radar data, in comparison to the actual (reference) displacement acquired from the cabled device. Initially, to precisely reconstruct the chest wall displacement, we selected "clean" segments of data (no external interference), and the AD was used in both cases. While Fig. 3a depicts a normal breathing pattern obtained in supine position, Fig. 3b shows an occurrence of the Cheyne-Stokes (periodic) breathing pattern [46], with the infant in prone position. This special form of breathing is physiologically found in neonates, and is defined by a cyclic variation between hyperpnea and hypopnea [47-49], i.e. repetitive short cycles of pauses and breaths. Despite small differences between the recovered radar signals and the reference device, the periodic breathing movement can still be clearly identified in both cases. The small amplitudes of the chest wall motion can also be visualized, with displacements around 2 mm in supine position and 0.5 mm in prone position. These amplitudes are well below typical values for adults reported in previous research [50-52]. The approximated breathing motion, obtained with the CSD, is shown in Fig. 3d,e. Despite noticeable differences when compared to the AD, Fig. 3g,h show that both techniques yield signals with the same fundamental frequency, corresponding to the average breathing frequency. Given the small displacements we aim to detect, and the challenging conditions of this real clinical environment, the CSD was adopted in our solution for long-term monitoring. The harmonic structure of the breathing signal can also be visualized, with the second harmonic being clearly distinguishable. This harmonic structure can be used for improving estimation, as we will show later. However, as depicted in Fig. 3c,f,i, under external interference and eventual ADC saturation, both demodulation methods fail to reconstruct the chest wall motion. The spectrum becomes dominated by the interference components, which will then prevent accurate breathing frequency estimation. Therefore, additional processing is necessary in order to attenuate these effects.

### Random body movement mitigation

Most research on contactless vital sign monitoring with radar sensors focuses on a single-person setup under ideal motionless conditions [53]. In practical monitoring situations, the subject may often move body parts like hands, legs or torso, and even the entire body. These unwanted but unavoidable movements are usually called random body movements (RBM). The amplitude of their reflected signals is often much stronger than the millimeter-scale breathing motion, which will potentially be masked by this interference. Since spontaneous RBM are inevitable, solving this problem is fundamental to reliable vital sign detection in practical applications.
Several methods for RBM mitigation have already been proposed [54], and even though specific types of movements could be effectively cancelled out, they usually require more complex systems. Most solutions rely on additional or duplicated hardware, thus suffering from practical limitations such as misalignment, synchronization, and cost [45,55,56]. Another direction of research basically tries to identify segments of vital sign data with RBM, and simply discard these corrupted segments before estimation [33-37]. However, depending on the processing window duration, even very short RBM will affect several seconds of good signal. Therefore, rather than simply discarding segments of data, an approach which allows useful exploitation of these episodes with moderate RBM is desired. Recent work has begun to address RBM using a single sensor and within more challenging scenarios [32]. Nonetheless, experimental validation is still performed under controlled situations, with RBM being emulated through predefined behavior, which results in limited interference over the desired signal.

First, let us assume that RBM are sparse, i.e. they are not frequent and, when they occur, their duration is small in relation to the observed time window. This contrasts with the constant and periodic nature of the breathing movement. Additionally, their amplitudes are usually much stronger than the standard breathing signal. These specific time and frequency features will be present in the spectrogram of the recovered signal, which can be analyzed toward identifying and possibly removing this interference. For addressing this, we will use the nonnegative matrix factorization (NMF) [57,58], a matrix decomposition technique usually employed for extracting features from a set of nonnegative data. If x(t) (Fig. 1b) is the recovered signal containing the chest wall motion and eventual RBM interference, its magnitude spectrogram $$|\mathbf{X}|$$ can be obtained through the Short-Time Fourier Transform (STFT) of x(t). The NMF will then decompose $$|\mathbf{X}|$$ as

$$|\mathbf{X}| \approx \mathbf{W}\mathbf{H} = \sum_{i=1}^{K} \mathbf{w}_i \mathbf{h}^{\mathrm{T}}_i, \qquad (2)$$

where the matrices $$\mathbf{H}$$ and $$\mathbf{W}$$ contain, respectively, the associated time and frequency basis components of $$|\mathbf{X}|$$, with K being a predefined number of bases. In other words, $$\mathbf{W}$$ can be seen as the set of frequency templates of $$|\mathbf{X}|$$, while $$\mathbf{H}$$ contains the timing information related to the activation of these templates. If we look into the time activation matrix $$\mathbf{H}$$, the basis components with sparse behavior and higher amplitudes will often indicate the epochs when the RBM interference is present. Despite the unpredictable frequency spectrum, which will eventually overlap with breathing frequencies, the RBM's distinct time behavior can be captured by the NMF time activation bases $$\mathbf{H}$$, whereas the corresponding bases in $$\mathbf{W}$$ will retain their frequency content. This allows additional flexibility for filtering the RBM interference when compared to standard spectral analysis methods. We can thus reconstruct the filtered spectrogram $$\widehat{|\mathbf{X}|}$$, by simply adding back all the $$\mathbf{w}_i \mathbf{h}^{\mathrm{T}}_i$$ matrices, except for the ones containing the interfering components.
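A minimal sketch of this NMF filtering step (my illustration using scipy and scikit-learn, not the authors' implementation; the rule used below to flag RBM bases - sparse but strong activations in H - is a simplified stand-in for the selection procedure described in the Methods, and a real-valued input signal is assumed):

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

def rbm_filter(x, fs, K=11, nperseg=256):
    """Suppress sparse, high-amplitude (RBM-like) bases in the spectrogram of x."""
    f, t, X = stft(x, fs=fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)

    model = NMF(n_components=K, init='nndsvda', max_iter=500)
    W = model.fit_transform(mag)       # frequency templates, shape (freq, K)
    H = model.components_              # time activations,    shape (K, time)

    # Crude RBM flag: a basis that is active in few time bins yet very strong.
    peak = H.max(axis=1)
    active = (H > 0.1 * peak[:, None]).mean(axis=1)
    keep = ~((active < 0.2) & (peak > 2 * np.median(peak)))

    mag_hat = W[:, keep] @ H[keep, :]  # rebuild the spectrogram without them
    _, x_hat = istft(mag_hat * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return x_hat
```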
Figure 4a shows a 60-s processing window for illustrative purposes, where the recovered signal x(t) (after CSD) is corrupted by segments of RBM, with its normalized spectrogram $$|\mathbf{X}|$$ in Fig. 4b. In the case of the CSD, the spectrogram is calculated based on the complex samples of the recovered signal x(t), and therefore considers both I and Q channels simultaneously. The NMF decomposition into $$K=11$$ frequency ($$\mathbf{W}$$) and time basis ($$\mathbf{H}$$) components is depicted in Fig. 4c,d, where each color represents a pair of basis components, with the frequency content in Fig. 4c, and the corresponding time activation in Fig. 4d. It can be seen that (please refer to the green and blue bases, for instance), due to its random nature, the RBM interference has frequency components spread over the entire spectrum, overlapping with the breathing frequency region. While a variety of frequencies can be visualized in $$\mathbf{W}$$, the sparse and strong bases corresponding to RBM can be clearly identified in $$\mathbf{H}$$ (please refer to the Methods). Removing the selected bases allows the reconstruction of the filtered spectrogram in Fig. 4e, where the breathing frequency variation over time (around 45 bpm) is now evident. After the inverse STFT, the RBM-filtered time signal is depicted in Fig. 4f. Finally, Fig. 4g shows the bandpass spectrum of both the original and the RBM-filtered signals. The corresponding detected values are highlighted respectively with the blue and red markers. The dashed black line shows the reference value for the average breathing frequency associated with this processing window. Because of the strong RBM interference, the maximum value of the original spectrum would indicate an erroneous breathing frequency of 52.1 bpm, very distant from the true value of 42 bpm. After RBM filtering with the NMF, the modified spectrum indicates a closer value of 42.4 bpm, where the estimation error would be only 0.4 bpm.

### Breathing rate estimation

Different models have already been proposed for representing the back-and-forth breathing movement d(t), from simple sinusoidal approximations [59,60] to more complicated patterns as described in [38,61]. The breathing movement is a complex phenomenon which involves different patterns of motion, not only from the chest wall surface, but also from the belly, shoulders and back [62,63]. Therefore, it is difficult to identify time-domain models that fully characterize it in a robust way, for every subject and monitoring scenario. However, due to the inherent periodic nature of breathing, any function representing this movement can eventually be decomposed into Fourier terms, containing the fundamental frequency and harmonics that correspond to the breathing rates we aim to estimate. Hence, the displacement signal can be modeled as a sum of harmonically related complex sinusoids, having frequencies that are integer multiples of the fundamental breathing frequency. To better exploit this harmonic structure, in this section we propose a simple and accurate Nonlinear Least Squares (NLS) estimator [64], which is asymptotically efficient for large processing windows, even in colored noise scenarios [65].

Initially, for removing any residual DC values and possible high-frequency noise components, the RBM-filtered displacement signal $${\hat{x}}(t)$$ is further filtered using a bandpass Kaiser window ($$\beta =6.5$$), from 0.3 Hz to 3 Hz ($$18-180$$ bpm).
This corresponds to the physiological range of breathing frequencies, also including possible harmonics. The bandpass-filtered signal $\hat{d}(t)$ will ideally be an accurate approximation of the true chest wall motion d(t) (Fig. 1), and can finally be used for breathing frequency estimation. Before estimation, for improving the signal-to-noise ratio (SNR) [66,67,68], we calculate the autocorrelation function r(t) of the bandpass-filtered signal. The estimation is first performed in the time domain, directly over the autocorrelated signal. An initial (coarse) estimate is obtained through a peak detection algorithm, where the time distance between peaks provides an estimate of the time between breaths. Detected peaks can be excluded if the distance to their neighbors corresponds to a frequency outside the expected physiological range. The initial breathing frequency is thus calculated as the inverse of the time between selected peaks, averaged over the entire processing window. This initial estimate will be used to simplify the NLS algorithm. The following step is to calculate the NLS frequency estimates $\hat{\omega}$, which are obtained by maximizing the similarity between $\hat{d}(t)$ and the displacement signal model in (12). Under certain conditions (please refer to the Methods), the solution to this problem (the resulting cost function in (17)) can be efficiently implemented using a Fast Fourier Transform (FFT) and a linear grid search algorithm [69], where the estimator reduces to a summation of the breathing harmonics over the power spectral density of $\hat{d}(t)$. The initial time-domain estimate is used for limiting the search range, thus avoiding stronger low-frequency components which may still be present in real data. This strategy also reduces the computational effort needed to perform the grid search.
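For illustration, the harmonic-summation step can be sketched in a few lines of Python (again an illustration, not the code used here; the number of harmonics, the grid bounds, and the zero-padding factor are assumptions, since in practice the search range comes from the coarse time-domain estimate):

```python
import numpy as np

def harmonic_breathing_rate(d_hat, fs, L=3, f_lo=0.15, f_hi=1.0, grid=2000):
    """NLS-style estimate: sum the periodogram over the first L harmonics of
    each candidate fundamental and return the best one, in breaths/min."""
    nfft = 8 * len(d_hat)                        # zero-pad for a fine grid
    psd = np.abs(np.fft.rfft(d_hat, nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1 / fs)

    cands = np.linspace(f_lo, f_hi, grid)
    def score(f0):
        idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, L + 1)]
        return psd[idx].sum()

    return cands[np.argmax([score(f0) for f0 in cands])] * 60.0
```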
2022-10-03 18:49:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46599575877189636, "perplexity": 1610.9609635496956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00231.warc.gz"}
https://pypi.org/project/freeform/
natural interpretation of loosely structured user phrases

Project description

freeform makes it easy to correctly interpret loosely structured user text input without imposing any syntax requirements on the end user. This is particularly useful for providing text-based interfaces over media like SMS or instant messaging.
2022-08-20 03:40:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31165072321891785, "perplexity": 7900.320175900187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00386.warc.gz"}
https://jpmccarthymaths.com/2014/02/06/ms3011-week-5-3/
I am emailing a link of this to everyone on the class list every week. If you are not receiving these emails or want to have them sent to another email address feel free to email me at jpmccarthymaths@gmail.com and I will add you to the mailing list.

## Week 5

We began our study of the Logistic Family. We postulated the equation as a model of population growth with two assumptions and ended up with $Q_\mu(x)=\mu x(1-x)$ where $x\in[0,1]$ can be interpreted as the proportion of a maximum population with a growth rate $\mu\in[0,4]$. We began analysing when zero was attracting/repelling and when $\displaystyle \frac{\mu-1}{\mu}$ was attracting/repelling. We were rudely interrupted by the fire drill!

http://kkcb.com/one-of-the-best-scenes-of-the-office-is-dwights-fire-drill/

## Week 6

In Week 6 we will finish off our study of the Logistic Mapping and perhaps begin our study of the Tent Mapping.

## Exercises

I have emailed ye a copy of the exercises and ye should be able to look at questions 38, 39, 42, 43 for the Week 6 tutorial (assuming that ye are ready for Wednesday's Test).

## Test and Other CA

The test will take place on February 12 in Week 6. Everything up to but not including section 3.4 in the typeset notes is examinable. We have everything for the test covered. Please find a sample test and the test I gave last year after page 31 in these typeset notes. The following theorems from the notes are examinable: the very bottom of page 12 and start of page 13. Also the Fixed-Point Factor-Theorem (which was called such on the board; it is not in the notes but is found here). When I say examinable you should be able to

• state the theorem
• prove the theorem
• understand the theorem and the proof

Learning off the proof letter by letter won't do you!

The Concept MCQ will take place in Week 8. The homework will be given to you towards the end of the semester and I will give ye three weeks to do it. It will probably be on complex numbers and won't be as long as last year's homework. You will be given marks for the best two out of Test, Concept MCQ and Homework.

## Math.Stack Exchange

If you find yourself stuck and for some reason feel unable to ask me the question you could do worse than go to the excellent site math.stackexchange.com. If you are nice and polite, and show due deference to these principles you will find that your questions are answered promptly. For example this question about finding a formula for the iterates of $Q_1(x)$.
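(A small numerical illustration of the Week 5 fixed-point analysis, as a Python snippet of my own rather than part of the course materials: a fixed point $x^*$ of $Q_\mu$ is attracting when $|Q_\mu'(x^*)|<1$, with $Q_\mu'(x)=\mu(1-2x)$.)

```python
def Q(mu, x):
    return mu * x * (1 - x)

def Qprime(mu, x):
    return mu * (1 - 2 * x)

for mu in (0.5, 1.5, 2.8, 3.2):
    fixed_points = [0.0] if mu <= 1 else [0.0, (mu - 1) / mu]
    for xs in fixed_points:
        kind = "attracting" if abs(Qprime(mu, xs)) < 1 else "repelling"
        print(f"mu={mu}: fixed point {xs:.4f} is {kind}")
    x = 0.3                      # iterate to see where orbits actually go
    for _ in range(1000):
        x = Q(mu, x)
    print(f"   orbit after 1000 iterations: {x:.4f}")
```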
2022-09-25 10:57:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4195237159729004, "perplexity": 1166.0815594685264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00799.warc.gz"}
https://www.aviraldg.com/
# Posts

• ## GSoC Wrapup! For the past few weeks I've been working with the Matrix.org team on its Vector.im client. Most of the work described here has been continuously merged into the /develop branch of matrix-react-sdk and vector-web and hence has been available on Vector (Develop) for feedback from the community throughout GSoC, and has been developed accordingly. What follows is a brief outline of the work I've done (not all commits are mentioned or linked to).

• ## Matrix ∙ Vector Matrix is an open standard for decentralised communication, providing simple HTTP APIs and open source reference implementations for securely distributing and persisting JSON over an open federation of servers.

• ## Firebase in Production Firebase is Google's lightweight Backend as a Service solution which includes:

• ## GCI 2012 and Sahana Software Foundation This is a post about my experience working with the Sahana Software Foundation, a nonprofit organisation that develops software like Eden, Vesuvius and Agasti to help organisations like the UN in disaster management efforts.

• ## WikiPDF TCS IT Wiz is coming up (the Kolkata edition's on the 9th) and that means a lot of research work, especially from Wikipedia. So, I wrote up a quick tool to export Wikipedia pages as PDFs. First, install Python 2.7 from here and download the following script:

• ## Limbo. I recently managed to grab Limbo (and a few other excellent videogames) as a part of the 5th Humble Indie Bundle. I'd seen Limbo long before HIB5, but back then it'd only been released on XBLA, which is why I was overjoyed to see it on HIB5 (which usually means Linux compatibility.) Unfortunately it appears that the Linux version is a simple WINE wrapper over the Windows version. Either way, I had to have a go at it (on my dad's laptop which still runs XP.)

• ## How I Got My School's ICSE Exam Results. First, have a look at ICSE Results (Loyola) and Prizewinners/Toppers (ICSE).

• ## The Journey I stand by the window of an old train, Screeching along through torrents of rain. By meadows of joy and forests of dark sorrow. Through unknown towns, unknown villages and the countryside.

• ## Throwback: ETRIX '08 I just did a fresh install of Ubuntu 12.04 and found this photograph among some of my old files. It was the first computer-related event I participated in (and won!) — the web designing contest from ETRIX '08.

• ## The Value of Software I recently showed one of my collaborators on Quizzardous! the private alpha version that I came up with after around a month. I expected him to shower praises upon it, and call it the best thing since the Internet. After all, I was doing revolutionary work, no?

• ## Say less, do more. Shut up! Announcing your plans makes you less motivated to accomplish them. - Derek Sivers
2019-08-19 04:04:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22104975581169128, "perplexity": 5956.651876658107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314641.41/warc/CC-MAIN-20190819032136-20190819054136-00265.warc.gz"}
https://www.numerade.com/questions/how-many-unpaired-electrons-do-you-expect-each-complex-ion-to-have-a-crcn64-b-mnf64-c-ruen32/
### How many unpaired electrons do you expect the com…

Problem 50

How many unpaired electrons do you expect each complex ion to have?
a. [Cr(CN)6]4-
b. [MnF6]4-
c. [Ru(en)3]2+

A) Hence the number of unpaired electrons in the d shell of the central metal ion is two.
B) Hence there are five unpaired electrons in the d shell of the central metal atom.
C) Hence there are zero unpaired electrons in the d shell of the central metal.

## Video Transcript

The number of unpaired electrons a complex ion can have depends on the strength of the field induced by the ligands in that complex. If a ligand is strong-field, the crystal field splitting is large, electrons pair up in the lower levels, and the number of unpaired electrons is low. If the ligand is weak-field, the splitting is small and the number of unpaired electrons is high.

Consider the first complex, [Cr(CN)6]4-. Since it is an octahedral complex, the d orbitals split into a lower t2g set and a higher eg set. Cyanide is a strong-field ligand, so the energy gap is large and the electrons fill the lower set, pairing up rather than being promoted. The metal here is chromium(II), which has the electron configuration [Ar] 3d4. The first three electrons occupy the lower orbitals one at a time, and the fourth pairs with the first, leaving two unpaired electrons.

In the second complex, [MnF6]4-, each fluoride carries a -1 charge, so for a total charge of -4 the manganese must be +2. Mn2+ has a 3d5 configuration. Fluoride is a weak-field ligand, so the energy gap is small and the electrons occupy both the lower and the higher orbitals singly: three in the lower set and two in the upper set, giving five unpaired electrons.

For the third complex, [Ru(en)3]2+, the central metal atom carries a +2 charge, and Ru2+ has the configuration [Kr] 4d6. Ethylenediamine is a strong-field ligand, so the energy gap is large and all six d electrons pair up in the lower orbitals instead of going to the higher level. In that case there are zero unpaired electrons.
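The high-spin/low-spin counting used above is mechanical enough to put in code. A small Python sketch (my illustration, not from the original page):

```python
def unpaired_octahedral(d_electrons, strong_field):
    """Unpaired d-electrons of an octahedral complex (d1-d10).
    Strong field -> low spin: fill the lower t2g set (3 orbitals) first.
    Weak field -> high spin: singly occupy all 5 orbitals before pairing."""
    if strong_field:
        t2g = min(d_electrons, 6)      # lower set holds at most 6 electrons
        eg = d_electrons - t2g         # remainder goes to the upper set
        u_t2g = t2g if t2g <= 3 else 6 - t2g
        u_eg = eg if eg <= 2 else 4 - eg
        return u_t2g + u_eg
    singly = min(d_electrons, 5)       # Hund's rule: one per orbital first
    return singly - (d_electrons - singly)

print(unpaired_octahedral(4, strong_field=True))   # Cr2+ d4 with CN-  -> 2
print(unpaired_octahedral(5, strong_field=False))  # Mn2+ d5 with F-   -> 5
print(unpaired_octahedral(6, strong_field=True))   # Ru2+ d6 with en   -> 0
```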
2020-07-16 13:38:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49689310789108276, "perplexity": 2191.4323447738034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657169226.65/warc/CC-MAIN-20200716122414-20200716152414-00299.warc.gz"}
https://usaco.guide/silver/prefix-sums-2
# More on Prefix Sums

Authors: Darren Yao, Eric Wei, Neo Wang, Qi Wang

Prerequisites: prefix sums. This module covers max subarray sum, prefix sums in two dimensions, and a more complicated example.

## Max Subarray Sum

Focus Problem – read through this problem before continuing!

This problem has a solution known as Kadane's Algorithm. Please don't use that solution; try to solve it with prefix sums. Why are the two methods equivalent?

## 2D Prefix Sums

Focus Problem – read through this problem before continuing!

Now, what if we wanted to process $Q$ queries for the sum over a subrectangle of a 2D matrix with $N$ rows and $M$ columns? Let's assume both rows and columns are 1-indexed, and we use the following matrix as an example:

0 0 0 0 0 0
0 1 5 6 11 8
0 1 7 11 9 4
0 4 6 1 3 2
0 7 5 4 2 3

Naively, each sum query would then take $\mathcal{O}(NM)$ time, for a total of $\mathcal{O}(QNM)$. This is too slow.

Let's take the following example region, which we want to sum: rows 2 through 3, columns 2 through 4. Manually summing all the cells, we have a submatrix sum of $7+11+9+6+1+3 = 37$.

The first logical optimization would be to do one-dimensional prefix sums of each row. Then, we'd have the following row-prefix sum matrix. The desired subarray sum of each row in our region is simply the row-prefix value at the right end of the row segment minus the row-prefix value just before its left end. We do this for each row to get $(28-1) + (14-4) = 37$.

0 0 0 0 0 0
0 1 6 12 23 31
0 1 8 19 28 32
0 4 10 11 14 16
0 7 12 16 18 21

Now, if we wanted to find a submatrix sum, we could break up the submatrix into a subarray for each row, and then add their sums, which would be calculated using the prefix sums method described earlier. Since the matrix has $N$ rows, the time complexity of this is $\mathcal{O}(QN)$. This might be fast enough for $Q=10^5$ and $N=10^3$, but we can do better. In fact, we can do two-dimensional prefix sums. In our two-dimensional prefix sum array, we have

$\texttt{prefix}[a][b]=\sum_{i=1}^{a} \sum_{j=1}^{b} \texttt{arr}[i][j].$

This can be calculated as follows for row index $1 \leq i \leq n$ and column index $1 \leq j \leq m$:

\begin{aligned} \texttt{prefix}[i][j] =& \, \texttt{prefix}[i-1][j]+ \texttt{prefix}[i][j-1] \\ &- \texttt{prefix}[i-1][j-1]+ \texttt{arr}[i][j] \end{aligned}

For example, $\texttt{prefix}[2][3]$ is the sum of the top-left $2 \times 3$ subrectangle, $1+5+6+1+7+11 = 31$. Notice how adding $\texttt{prefix}[i-1][j]$ and $\texttt{prefix}[i][j-1]$ overcounts the subrectangle they share, and how we fix this by subtracting $\texttt{prefix}[i-1][j-1]$.

The submatrix sum between rows $a$ and $A$ and columns $b$ and $B$ can thus be expressed as follows:

\begin{aligned} \sum_{i=a}^{A} \sum_{j=b}^{B} \texttt{arr}[i][j]=&\,\texttt{prefix}[A][B] - \texttt{prefix}[a-1][B] \\ &- \texttt{prefix}[A][b-1] + \texttt{prefix}[a-1][b-1] \end{aligned}

Summing the example region from above using the 2D prefix sums method, we add the value of $\texttt{prefix}[A][B]$ (the green square), subtract the values of $\texttt{prefix}[a-1][B]$ and $\texttt{prefix}[A][b-1]$ (the red squares), and then add back the value of $\texttt{prefix}[a-1][b-1]$ (the gray square). In this example, we have $65-23-6+1 = 37,$ as expected.

0 0 0 0 0 0
0 1 6 12 23 31
0 2 14 31 51 63
0 6 24 42 65 79
0 13 36 58 83 100
Since no matter the size of the submatrix we are summing, we only need to access four values of the 2D prefix sum array, this runs in $\mathcal{O}(1)$ per query after an $\mathcal{O}(NM)$ preprocessing.

Warning! We need to be cautious of off-by-one errors, as intervals can be inclusive, exclusive, 1-indexed, etc.

## A More Complicated Example

What if we want to quickly answer the following type of query?

Find $1\cdot a_l+2\cdot a_{l+1}+3\cdot a_{l+2}+\cdots+(r-l+1)\cdot a_{r}$.

In addition to taking prefix sums over $a_i$, we'll also need to take prefix sums over $i\cdot a_i$. First, define the following:

$\texttt{ps}[i] = a_1+a_2+a_3+a_4+\cdots+a_i$

$\texttt{ips}[i] = 1\cdot a_1+2\cdot a_2+\cdots+i\cdot a_i$

Then, we have:

$l\cdot a_l + (l+1) \cdot a_{l+1} + \cdots + r \cdot a_r = \texttt{ips}[r]-\texttt{ips}[l-1]$

$(l-1) \cdot a_l + (l-1) \cdot a_{l+1} + \cdots + (l-1) \cdot a_r = (l-1)(\texttt{ps}[r]-\texttt{ps}[l-1])$

And so,

$1\cdot a_l + 2 \cdot a_{l+1} + \cdots + (r-l+1) \cdot a_r = \texttt{ips}[r]-\texttt{ips}[l-1]-(l-1)(\texttt{ps}[r]-\texttt{ps}[l-1])$

Which is what we were looking for!
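A short runnable version of the 2D technique (the module's own solutions are in C++ and Java; this Python sketch just mirrors the formulas above on the example matrix):

```python
# Build the 2D prefix sum array, 1-indexed with a zero padding row/column.
arr = [
    [1, 5, 6, 11, 8],
    [1, 7, 11, 9, 4],
    [4, 6, 1, 3, 2],
    [7, 5, 4, 2, 3],
]
N, M = len(arr), len(arr[0])
prefix = [[0] * (M + 1) for _ in range(N + 1)]
for i in range(1, N + 1):
    for j in range(1, M + 1):
        prefix[i][j] = (prefix[i - 1][j] + prefix[i][j - 1]
                        - prefix[i - 1][j - 1] + arr[i - 1][j - 1])

def rect_sum(a, b, A, B):
    """Sum of arr over rows a..A, columns b..B (1-indexed, inclusive)."""
    return (prefix[A][B] - prefix[a - 1][B]
            - prefix[A][b - 1] + prefix[a - 1][b - 1])

print(rect_sum(2, 2, 3, 4))  # rows 2..3, cols 2..4 -> 37, as computed above
```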
2021-03-02 10:45:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 33, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7632405161857605, "perplexity": 801.2348473085344}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363809.24/warc/CC-MAIN-20210302095427-20210302125427-00104.warc.gz"}
http://mathhelpforum.com/calculus/128242-alternative-form-derivative-print.html
# alternative form of a derivative?

• Feb 10th 2010, 03:14 PM
maybnxtseasn
alternative form of a derivative?
the definition of a derivative is to find the slope of a tangent line..... the alternate form of the derivative is for???

• Feb 10th 2010, 03:21 PM
skeeter
Quote:

Originally Posted by maybnxtseasn
the definition of a derivative is to find the slope of a tangent line..... the alternate form of the derivative is for???

the slope of a tangent line at a specific point.

• Feb 10th 2010, 03:52 PM
Keithfert488
$\lim_{x\to c}\frac{f(x)-f(c)}{x-c}$ is the alternate form of a derivative, while $\lim_{\Delta x\to 0} \frac{f(x+\Delta x)-f(x)}{\Delta x}$ is the definition of the derivative. The alternate form will give you the value of the derivative at $x=c$. So really, the definition of the derivative ${f}'(x)$ maps out all of the alternate-form values of the derivative at all $x$ values in the domain of $f(x)$.
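(A quick symbolic check, using Python/sympy, that the two forms agree, e.g. for $f(x)=x^2$:)

```python
import sympy as sp

x, c, dx = sp.symbols('x c dx')
f = lambda t: t**2

alternate  = sp.limit((f(x) - f(c)) / (x - c), x, c)    # value at a point c
definition = sp.limit((f(x + dx) - f(x)) / dx, dx, 0)   # function of x
print(alternate, definition)  # prints 2*c and 2*x: same rule, pointwise vs. everywhere
```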
2017-10-18 08:12:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9432967305183411, "perplexity": 555.5194123292466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822822.66/warc/CC-MAIN-20171018070528-20171018090528-00477.warc.gz"}
http://clay6.com/qa/40237/given-x-0-the-value-of-f-x-3-cos-sqrt-lie-in-the-interval-
Q) # Given $x > 0$, the value of $f(x)=-3\cos \sqrt{3+x+x^2}$ lies in the interval _________

$\begin{array}{1 1}(A)\;-3,3&(B)\;-2,2\\(C)\;-1,1&(D)\;-4,4\end{array}$
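(Since $|\cos| \le 1$ and $\sqrt{3+x+x^2}$ sweeps an unbounded range for $x>0$, $f$ fills out $[-3,3]$, i.e. option (A). A quick numerical check in Python:)

```python
import numpy as np

x = np.linspace(1e-6, 50, 200_000)       # sample x > 0
f = -3 * np.cos(np.sqrt(3 + x + x**2))
print(f.min(), f.max())                  # approaches -3 and 3
```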
2021-01-17 02:35:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9709614515304565, "perplexity": 1555.2214159244402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509104.12/warc/CC-MAIN-20210117020341-20210117050341-00570.warc.gz"}
http://mathoverflow.net/questions/71673/fourier-transform-in-mathematica?sort=oldest
# Fourier transform in Mathematica [closed]

R and Mathematica differ when computing fft(c(1,1)) and Fourier[{1,1}], giving 2+0i 0+0i and {1.41421+0i, 0} respectively. How can this be????

## closed as off topic by Gerald Edgar, Yemon Choi, Torsten Ekedahl, Felipe Voloch, Andres Caicedo Jul 30 '11 at 22:21

Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here. If this question can be reworded to fit the rules in the help center, please edit the question.

off topic, see the FAQ – Gerald Edgar Jul 30 '11 at 18:44

Obviously, they're using different definitions. The notation and conventions associated with the Fourier transform differ between different authors, although it's usually easy to figure out the differences and adjust your results accordingly. A look at the documentation for the R and Mathematica functions should help you figure this out. – Brian Borchers Jul 30 '11 at 18:51

Normalizing factor. It looks like R defines the Discrete Fourier Transform matrix as $F = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ while Mathematica defines it as $F = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$. If you do the inverse FFT, R would define it to be $F^{-1} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ while Mathematica would define it as $F^{-1} = F^{H} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$, where $H$ is the Hermitian transpose.
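(numpy exposes both conventions, which makes the discrepancy easy to reproduce:)

```python
import numpy as np

x = np.array([1.0, 1.0])
print(np.fft.fft(x))                # [2.+0.j 0.+0.j]         (R's convention)
print(np.fft.fft(x, norm="ortho"))  # [1.41421356+0.j 0.+0.j] (unitary, like Mathematica's default)
```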
2015-03-05 20:43:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38140687346458435, "perplexity": 907.7973845863313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464840.47/warc/CC-MAIN-20150226074104-00057-ip-10-28-5-156.ec2.internal.warc.gz"}
http://www.philipzucker.com/2019/11/
## Linear Relation Algebra of Circuits with HMatrix

Oooh this is a fun one. I've talked before about relation algebra and I think it is pretty neat. http://www.philipzucker.com/a-short-skinny-on-relations-towards-the-algebra-of-programming/. In that blog post, I used finite relations. In principle, they are simple to work with. We can perform relation algebra operations like composition, meet, and join by brute force enumeration. Unfortunately, brute force may not always be an option. First off, the finite relations grow so enormous as to make this infeasible. Secondly, it is not insane to talk about relations or regions with an infinite number of elements, such as some continuous blob in 2D space. In that case, we can't even in principle enumerate all the points in the region. What are we to do? We need to develop some kind of finite parametrization of regions to manipulate. This parametrization basically can't possibly be complete in some sense, and we may choose more or less powerful systems of description for computational reasons. In this post, we are going to be talking about linear or affine subspaces of a continuous space. These subspaces are hyperplanes. Linear subspaces have to go through the origin, while affine spaces can have an offset from the origin.

In the previous post, I mentioned that the finite relations formed a lattice, with operations meet and join. These operations were the same as set intersection and union, so the introduction of the extra terminology meet and join felt a bit unwarranted. Now the meet and join aren't union and intersection anymore. We have chosen not to have the capability to represent the union of two vectors; instead we can only represent the smallest subspace that contains them both, which is the union closed under vector addition. For example, the join of a line and a point will be the plane that goes through both.

Linear/affine stuff is great because it is so computational. Most questions you want to ask are answerable by readily available numerical linear algebra packages. In this case, we'll use the Haskell package HMatrix, which is something like a numpy/scipy equivalent for Haskell. We're going to use type-level indices to denote the sizes and partitioning of these spaces, so we'll need some helper functions. In case I miss any extensions, make typos, etc, you can find a complete compiling version here https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs

In analogy with sets of tuples for defining finite relations, we partition the components of the linear spaces to be "input" and "output" indices/variables $\begin{bmatrix} x_1 & x_2 & x_3 & ... & y_1 & y_2 & y_3 & ... \end{bmatrix}$. This partition is somewhat arbitrary and easily moved around, but the weakening of strict notions of input and output as compared to functions is the source of the greater descriptive power of relations. Relations are extensions of functions, so linear relations are an extension of linear maps. A linear map has the form $y = Ax$. A linear relation has the form $Ax + By = 0$. An affine map has the form $y = Ax + b$ and an affine relation has the form $Ax + By = b$.

There are at least two useful concrete representations for subspaces.

1. We can write down a matrix $A$ and vector $b$ that correspond to affine constraints, $Ax = b$. The subspace described is the nullspace of $A$ plus a solution of the equation. The rows of $A$ are orthogonal to the space.
2. We can hold onto generators of the subspace: $x = A'l + b$, where $l$ parametrizes the subspace.
In other words, the subspace is generated by / is the span of the columns of $A'$. It is the range of $A'$. We'll call these two representations the H-Rep and V-Rep, borrowing terminology from similar representations in polytopes (describing a polytope by the inequalities that define its faces or as the convex combination of its vertices). https://inf.ethz.ch/personal/fukudak/lect/pclect/notes2015/PolyComp2015.pdf These two representations are dual in many respects. It is useful to have both reps and interconversion routines, because different operations are easy in the two representations. Any operations defined on one can be defined on the other by sandwiching between these conversion functions. Hence, we basically only need to define operations for one of the reps (if we don't care too much about efficiency loss which, fair warning, is out the window for today). The bulk of computation will actually be performed by these interconversion routines. The HMatrix function nullspace performs an SVD under the hood and gathers up the space with 0 singular values.

These linear relations form a category. I'm not using the Category typeclass because I need BEnum constraints hanging around. The identity relation is $x = y$, aka $Ix - Iy = 0$. Composing relations is done by combining the constraints of the two relations and then projecting out the interior variables. Taking the conjunction of constraints is easiest in the H-Rep, where we just need to vertically stack the individual constraints. Projection is easily done in the V-Rep, where you just need to drop the appropriate section of the generator vectors. So we implement this operation by flipping between the two.

We can implement the general cadre of relation operators: meet, join, converse. I feel the converse is the most relational thing of all. It makes inverting a function nearly a no-op. Relational inclusion is the question of subspace inclusion. It is fairly easy to check if a V-Rep is in an H-Rep (just plug the generators into the constraints and see if they obey them) and by using the conversion functions we can define it for arbitrary combos of H and V.

It is useful to use the direct sum of the spaces as a monoidal product. A side note: Void causes some consternation. Void is the type with no elements and is the index type of a 0-dimensional space. It is the unit object of the monoidal product. Unfortunately, by an accident of the standard Haskell definitions, actual Void is not a BEnum. So, I did a disgusting hack. Let us not discuss it more.

### Circuits

Baez and Fong have an interesting paper where they describe building circuits using a categorical graphical calculus. We have the pieces to go about something similar. What we have here is a precise way in which circuit diagrams can be thought of as string diagrams in a monoidal category of linear relations.

An idealized wire has two quantities associated with it, the current flowing through it and the voltage it is at. When we connect wires, the currents must be conserved and the voltages must be equal. hid and hcompose from above still achieve that. Composing two independent circuits in parallel is achieved by hpar. We will want some basic tinker toys to work with. A resistor has the same current at both ends and a voltage drop proportional to the current. Composing two resistors in series adds the resistance:
(resistor r1) <<< (resistor r2) == resistor (r1 + r2)

A bridging resistor allows current to flow between the two branches. Composing two bridge circuits is putting the bridge resistors in parallel, and the conductance $G=\frac{1}{R}$ of resistors in parallel adds: hcompose (bridge r1) (bridge r2) == bridge (1 / (1/r1 + 1/r2)).

An open circuit allows no current to flow and ends a wire: open ~ resistor infinity.

At branching points, the voltage is maintained, but the current splits. This cmerge combinator could also be built using short == bridge 0, composing a branch with open, and then absorbing the Void away. We can bend wires up or down by using a composition of cmerge and open. Voltage and current sources enforce current and voltage to be certain values. Measurements of circuits proceed by probes. Inductors and capacitors could be included easily, but would require the entries of the HMatrix values to be polynomials in the frequency $\omega$, which it does not support (but it could!). We'll leave those off for another day. We actually can determine by computation that the rules suggested above are being followed.

### Bits and Bobbles

• Homogeneous systems are usually a bit more elegant to deal with, although a bit more unfamiliar and abstract.
• Could make a pandas-like interface for linear relations that uses numpy/scipy.sparse for the computation. All the swapping and associating is kind of fun to design, not so much to use. Labelled n-way relations are nice for users.
• Implicit/Lazy evaluation. We should let the good solvers do the work when possible. We implemented our operations eagerly. We don't have to. By allowing hidden variables inside our relations, we can avoid the expensive linear operations until it is useful to actually compute on them.
• Relational division = quotient spaces?
• DSL. One of the beauties of the pointfree/categorical approach is that you avoid the need for binding forms. This makes for a very easily manipulated DSL. The transformations feel like those of ordinary algebra and you don't have to worry about the subtleties of index renaming or substitution under binders.
• Sparse is probably really good. We have lots of identity matrices and simple rearrangements. It is very wasteful to use dense operations on these.
• Schur complements https://en.wikipedia.org/wiki/Schur_complement are the name of the game for projecting out pieces of linear problems. We have some overlap.
• Linear relations -> Polyhedral relations -> Convex relations. Linear is super computable, polyhedral can blow up. Rearrange a DSL to abuse linear programming as much as possible for queries.
• Network circuits. There is an interesting subclass of circuits that is designed to be pretty composable: https://en.wikipedia.org/wiki/Two-port_network Two-port networks are a very useful subclass of electrical circuits. They model transmission lines fairly well, and are easily composable for filter construction. It is standard to describe these networks by giving a linear function between two of the variables and the other two variables. Depending on your choice of which variables depend on which, these are called the z-parameters, y-parameters, h-parameters, scattering parameters, abcd parameters. There are tables of formulas for converting from one form to the others. The different parameters hold different use cases for composition and combining in parallel or series. From the perspective of linear relations this all seems rather silly. The necessity for so many descriptions and the confusing relationship between them comes from the unnecessary and overly rigid requirement of having a linear function-like relationship rather than just a general relation, which depending on the circuit may not even be available (there are degenerate configurations where two of the variables do not imply the values of the other two). A function relationship is always a lie (although a sometimes useful one), as there is always back-reaction of new connections.
The necessity for so many descriptions and the confusing relationship between them comes from the unnecessary and overly rigid requirement of have a linear function-like relationship rather than just a general relation, which depending of the circuit may not even be available (there are degenerate configurations where two of the variables do not imply the values of the other two). A function relationship is always a lie (although a sometimes useful one), as there is always back-reaction of new connections. The relation model also makes clearer how to build lumped models out of continuous ones. https://en.wikipedia.org/wiki/Lumped-element_model null • Because the type indices have no connection to the actual data types (they are phantom) it is a wise idea to use smart constructors that check that the sizes of the matrices makes sense. • Nonlinear circuits. Grobner Bases and polynomial relations? • Quadratic optimization under linear constraints. Can’t get it to come out right yet. Clutch for Kalman filters. Nice for many formulations like least power, least action, minimum energy principles. • Quadratic Operators -> Convex operators. See last chapter of Rockafellar. • Duality of controllers and filters. It is well known (I think) that for ever controller algorithm there is a filter algorithm that is basically the same thing. • LQR – Kalman • Viterbi filter – Value function table • particle filter – Monte Carlo control • Extended Kalman – iLQR-ish? Use local approximation of dynamics • unscented kalman – ? ## Failing to Bound Kissing Numbers https://en.wikipedia.org/wiki/Kissing_number Cody brought up the other day the kissing number problem.Kissing numbers are the number of equal sized spheres you can pack around another one in d dimensions. It’s fairly self evident that the number is 2 for 1-d and 6 for 2d but 3d isn’t so obvious and in fact puzzled great mathematicians for a while. He was musing that it was interesting that he kissing numbers for some dimensions are not currently known, despite the fact that the first order theory of the real numbers is decidable https://en.wikipedia.org/wiki/Decidability_of_first-order_theories_of_the_real_numbers I suggested on knee jerk that Sum of Squares might be useful here. I see inequalities and polynomials and then it is the only game in town that I know anything about. Apparently that knee jerk was not completely wrong https://arxiv.org/pdf/math/0608426.pdf Somehow SOS/SDP was used for bounds here. I had an impulse that the problem feels SOS-y but I do not understand their derivation. One way the problem can be formulated is by finding or proving there is no solution to the following set of equations constraining the centers $x_i$ of the spheres. Set the central sphere at (0,0,0,…) . Make the radii 1. Then$\forall i. |x_i|^2 = 2^2$ and $\forall i j. |x_i - x_j|^2 \ge 2^2$ I tried a couple different things and have basically failed. I hope maybe I’ll someday have a follow up post where I do better. So I had 1 idea on how to approach this via a convex relaxation Make a vector $x = \begin{bmatrix} x_0 & y _0 & x_1 & y _1 & x_2 & y _2 & ... \end{bmatrix}$ Take the outer product of this vector $x^T x = X$ Then we can write the above equations as linear equalities and inequalities on X. If we forget that we need X to be the outer product of x (the relaxation step), this becomes a semidefinite program. Fingers crossed, maybe the solution comes back as a rank 1 matrix. Other fingers crossed, maybe the solution comes back and says it’s infeasible. 
In either case, we have solved our original problem. Didn't work though. Sigh. It's conceivable we might do better if we start packing higher powers into x?

Ok, Round 2. Let's just ask z3 and see what it does. I'd trust z3 with my baby's soft spot. It solves for 5 and below. Z3 grinds to a halt on N=6 and above. It ran for days doing nothing on my desktop.

Ok. A different tack. Try to use a positivstellensatz proof. If you have a bunch of polynomial inequalities and equalities, and you sum polynomial multiples of these constraints, with the inequalities having sum-of-squares multiples, in such a way that the total equals -1, it shows that there is no real solution to them. We have the distance from origin as an equality constraint and the distance from each other as an inequality constraint. I intuitively think of the positivstellensatz as deriving an impossibility from false assumptions. You can't add a bunch of 0 and positive numbers and get a negative number, hence there is no real solution.

I have a small set of helper functions for combining sympy and cvxpy for sum of squares optimization. I keep it here along with some other cute little constructs https://github.com/philzook58/cvxpy-helpers and here is the attempted positivstellensatz. It worked in 1-d, but did not work in 2d. At order 3 polynomials, N=7, I maxed out my ram. I also tried doing it in Julia, since sympy was killing me. Julia already has a SOS package. It was faster to encode, but it's using the same solver (SCS), so basically the same thing. I should probably be reducing the system with respect to the equality constraints since they're already in a Groebner basis. I know that can be really important for reducing the size of your problem. I dunno.

Blah blah blah blah. A bunch of unedited trash:

https://github.com/peterwittek/ncpol2sdpa Peter Wittek has probably died in an avalanche? That is very sad.

These notes https://web.stanford.edu/class/ee364b/lectures/sos_slides.pdf Positivstellensatz. Kissing number. Review of sum of squares: the minimum of a set of samples as an LP. Ridiculous problem: max t s.t. f(x_i) – t >= 0. Dual -> one dual variable per sample point. The only dual that will be nonzero is the one actually selecting the minimum. Hm. Yeah, that's a decent analogy. How does the dual even have a chance of knowing about poly arithmetic? It must be during the SOS conversion process. In building the SOS constraints, we build a finite, limited version of polynomial multiplication: x as a matrix. x is a shift matrix. In producing the characteristic polynomial, x is a shift matrix, with the last line using the polynomial known to be zero; eigenvalues of this matrix are zeros of the poly. SOS does not really rely on polynomials per se. It relies on closure of the squaring operation. Maybe set one sphere just at x=0, y=2. That breaks some symmetry. Set next sphere in plane something. Random plane through origin? Order y components – breaks some of the permutation symmetry. No, why not order in a random direction. That seems better for symmetry breaking.

## Learn Coq in Y

I've been preparing a Learn X in Y tutorial for Coq. https://learnxinyminutes.com/ I've been telling people this and been surprised by how few people have heard of the site. It's super quick intros to syntax and weirdness for a bunch of languages with inline code tutorials. I think that for me, a short description of the mundane syntactic and programming constructs of Coq is helpful. Some guidance on the standard library, what is available by default.
And dealing with Notation scopes, which is a pretty weird feature that most languages don't have. The manual actually has all this now. It's really good. Like check this section out https://coq.inria.fr/refman/language/coq-library.html . But the manual is an intimidating document. It starts with a BNF description of syntax and things like that. The really useful pedagogical stuff is scattered throughout it. Anyway, here is my draft (also here https://github.com/philzook58/learnxinyminutes-docs/blob/master/coq.html.markdown where the syntax highlighting isn't so janked up). Suggestions welcome. Or if this gets accepted, you can just make pull requests.

Bonus: an unedited list of tactics. You'd probably prefer https://pjreddie.com/coq-tactics/
2019-11-16 01:40:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7052082419395447, "perplexity": 873.4121744693297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00504.warc.gz"}
https://slawekk.wordpress.com/2013/02/25/isarmathlib-1-8-0-released/
## IsarMathLib 1.8.0 released I have released the 1.8.0 version of IsarMathLib. This version works with the new Isabelle2013 (released about two weeks ago). The new material in this release mostly comes from contributions by Daniel de la Concepción Sáez, who added 5 theory files with about a hundred theorems about general topology. The subjects range from convergence of nets and filters and related continuity properties of functions, through some generalized notions of compactness and connectedness, to rather abstract (like, category theory abstract) subjects of heredity and spectra of topological properties. Unfortunately, it will take some time before those theories will be presented on the FormalMath.org site, as my parser is not good enough to parse Daniel’s advanced Isar style.  I will work on this though and in the meantime an interested reader can have a look at the Isabelle generated full proof document or the outline. As for myself, I have finished the proof that the function $(x_0, x_1,...,x_{n-1})\mapsto \sum_{i=0}^{n-1}x_i$ is continuous in the natural topology on the space of $n$-element sequences of a topological group.
2019-04-19 21:06:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5263925790786743, "perplexity": 824.688484584413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528058.3/warc/CC-MAIN-20190419201105-20190419223105-00326.warc.gz"}
http://math.stackexchange.com/questions/93239/greens-function
# Green's function

Why is the Green's function $G(r,r_0)$ of Laplace's equation $\nabla^2 u=0$, the domain being the half plane, equal to $0$ on the boundary? How can I interpret Laplace's equation physically? Is there a way to intuitively interpret the Green's function in general?

Based on @Matt's comment, I am guessing that it is not generally true that the Green's function is 0 on the boundary?

I hate to keep writing this in comments, but can you post more information. Is your domain just the disk? Where is Laplace's equation coming up for you? I ask that because there are entire books on physical interpretations of Laplace's equation, so it is surprising that wherever you are seeing this doesn't have any. – Matt Dec 21 '11 at 17:00

@Matt: Thanks. Based on your comment then I am guessing that it is not generally true that the Green's function is 0 on the boundary? I am thinking of the domain being the half plane. But it would be nice if some other cases could be considered. – Jonathan Dec 21 '11 at 17:40

If the Green's function is equal to $0$ at the boundary (regardless of the domain), it can be used to solve the Dirichlet problem (prescribed boundary values). You often find the name 'Green's function of the first kind' for this. Another problem which is of interest is prescribing the normal derivative of the solution at the boundary. For this, the Green's function should have zero normal derivative at the boundary (...Green's function of the second kind). See any general textbook about PDE in general or about elliptic PDE. The concept as such is not restricted to the Laplace equation. – user20266 Dec 21 '11 at 18:02

@Thomas: Thanks. Would you mind elaborating why for a Dirichlet problem we should have G=0 on the boundary? – Jonathan Dec 21 '11 at 18:07

One more clarification. What is your definition of Green's function? I just looked up what it was when I saw it and the definition is a function $G(x,y)$ such that $\Delta_y G(x,y)=\delta_x$ for points inside the domain and $G(x,y)=0$ on the boundary. I only asked about the domain because common domains like the disk and upper half plane have straightforward-to-derive Green's functions that you can then just check directly are zero. – Matt Dec 21 '11 at 18:07

Regarding the physical interpretation of Green's functions, for elliptic operators it makes sense to think of the Poisson equation in electrostatics $$\nabla^2\phi(\vec x)=\rho(\vec x),$$ where $\phi(\vec x)$ is the electric potential. Here $\rho(\vec x)$ is the charge density, which you can think of as being composed of point charges localized at positions $\vec y$ with densities described by $\delta(\vec x-\vec y)$. This thought can somewhat tautologically be represented by $$\rho(\vec x)=\int\rho(\vec y)\delta(\vec x-\vec y)d\vec y.$$ Now by the superposition principle, which holds for the Maxwell equations (or mathematically by the fact that your differential operator is linear), if you know the potential $G(\vec x)$ of a point particle $$\nabla^2G(\vec x)=\delta(\vec x),$$ you already know the solution to the full problem. With $$\phi(\vec x)=\int G(\vec x-\vec y)\rho(\vec y)d\vec y,$$ which resembles the summing up of all the point potentials, you find that Poisson's equation is solved: $$\nabla^2\phi(\vec x)=\int \nabla^2G(\vec x-\vec y)\rho(\vec y)d\vec y =\int \delta(\vec x-\vec y)\rho(\vec y)d\vec y=\rho(\vec x).$$ Now what is the potential of the point particle?
It's useful to think of $\nabla^2$ as the divergence of the gradient $$\text{div}(\vec \nabla G(\vec x))=\delta(\vec x).$$ The gradient of the potential is the electric field, which is proportional to the force imposed on other point charges. Now what is the point-particle force field which has zero divergence but is singular for $\vec x =0$? In three dimensions, the surface area $A$ of a sphere goes with $A\propto r^2$, so if the divergence is to be zero, the radial solution should go as $\frac 1 {r^2}$, which is just Coulomb's law. Integrating the remaining gradient, we find $$G(\vec r-\vec r_0)=\frac{c}{|r-r_0|}.$$ Similarly, if you're in two dimensions, then the surface goes with $r$, the field must go inversely to that, and the integral, i.e. the Green's function, goes as $\log(r)$.

In physics, very often you think of the delta peak as a source of a perturbation of some field. The differential operator comes from some Lagrangian density which encodes conservation laws, and the associated Green's function describes how information propagates away from the source. The field decays spatially and (in contrast to the Poisson equation with a charge density $\rho(\vec x)$) the remaining Laplace equation $\nabla^2\phi(\vec x)=0$ describes free dispersal/propagation of the potential/wave. All this business is a major theme in Field Theories (or 'its applications' like signal processing), where the operators involve time derivatives. At the source points, there is some interaction and the field gets perturbed, and then the information travels away from there. In Quantum Theories, these are "just" probability waves. Basically, if you know your free propagators and how to knot them together using Feynman diagrams, you know the whole theory. A graphic and therefore illustrative such example is the Green's function of the heat equation, where you can literally watch the density dissolve.

'Why for a Dirichlet problem we should have $G=0$ on the boundary': this does not fit into a comment. The reason is -- technically -- Green's representation formula, which holds for any $u\in C^1(\overline{\Omega})\cap C^2(\Omega)$: $$u(y) = \int_{\partial \Omega} (u \frac{\partial G}{\partial \nu} - G\frac{\partial u}{\partial \nu})dS + \int_\Omega G \Delta u\,dx$$ If $G=0$ on the boundary, then the second term in the first integral vanishes, which allows you to represent solutions to $\Delta u = 0$ (which makes the second integral vanish) for given boundary values by an integral over $\partial \Omega$. See, e.g., D. Gilbarg, N.S. Trudinger, Elliptic Partial Differential Equations of 2nd Order, section 2.4. There you will also find the notation explained if you don't recognize it. This formula also shows why it is desirable that the normal derivative vanishes if you want to prescribe the normal derivative at the boundary.
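To tie this back to the original half-plane question: there the Dirichlet Green's function can be written down by the method of images, and it vanishes on the boundary by construction. A quick numerical confirmation in Python (my sketch, not from the thread):

```python
import numpy as np

def G_halfplane(x, y, x0, y0):
    """Green's function of the Laplacian on the upper half plane with
    Dirichlet boundary (2D convention, up to normalization): subtract
    the potential of a mirror charge at the image point (x0, -y0)."""
    r = np.hypot(x - x0, y - y0)
    r_image = np.hypot(x - x0, y + y0)
    return (np.log(r) - np.log(r_image)) / (2 * np.pi)

x = np.linspace(-5, 5, 11)
print(G_halfplane(x, 0.0, 1.0, 2.0))   # exactly 0 along the boundary y = 0
```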
2015-11-28 07:52:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8821048140525818, "perplexity": 217.78190733781122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451744.67/warc/CC-MAIN-20151124205411-00198-ip-10-71-132-137.ec2.internal.warc.gz"}
https://tutorbin.com/questions-and-answers/a-system-with-unity-feedback-is-shown-in-figure-2-below-use-the-routh-
Question

A system with unity feedback is shown in Figure 2 below. Using the Routh-Hurwitz stability criterion, determine if the closed-loop feedback control system shown in Figure 2 is stable for the case where

$$G(s)=\frac{\alpha(s+4)(s+1)}{(s+2)\left(s^{2}+10 s+20\right)}, \qquad \alpha = 36.$$
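As a worked sketch (an addition for reference, not part of the original question page): with unity feedback, the closed-loop characteristic equation is $1 + G(s) = 0$, i.e.

$$(s+2)\left(s^{2}+10s+20\right) + 36(s+4)(s+1) = s^{3} + 48s^{2} + 220s + 184 = 0.$$

For a cubic $s^{3} + a_{2}s^{2} + a_{1}s + a_{0}$, the Routh-Hurwitz conditions reduce to all coefficients being positive and $a_{2}a_{1} > a_{0}$. Here $48 \cdot 220 = 10560 > 184$, so the first column of the Routh array has no sign changes and the closed-loop system is stable.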
2023-03-25 02:16:11
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8446333408355713, "perplexity": 2158.14778453786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00586.warc.gz"}
https://www.vedantu.com/question-answer/different-words-can-be-formed-with-the-le-class-11-maths-cbse-5eeb0d82309073303c5f9215
Question

# How many different words can be formed with the letters of the word ORDINATE so that (i) the vowels occupy odd places, (ii) the word begins with O, (iii) the word begins with O and ends with E?

Hint: For the first case, count the arrangements of the vowels in the odd places and of the consonants in the even places, then multiply to get the total number of words. For the second case, fix O at the first position; for the third case, fix O at the first position and E at the last. Then count the permutations of the remaining letters to get the answer.

Complete step by step solution:

We have the word 'ORDINATE', which consists of the letters O, R, D, I, N, A, T, E. All the letters are different.

(i) The vowels occupy odd places.

ORDINATE is an 8-letter word, so there are 4 odd places and 4 even places. The vowels in the given word are O, I, A, E. (In the place diagram, O marks the odd places and E marks the even places.) The remaining letters, the consonants, are R, D, N, T.

There are 4 odd places and 4 vowels, so the vowels fill the odd places and can be arranged among them in $4!$ ways. Similarly, the 4 consonants fill the 4 even places and can be arranged in $4!$ ways. Hence the total number of such words is $4!\times 4! = 576$.

So the number of words where the vowels occupy odd places is 576.

(ii) Beginning with O.

Here we need the number of words beginning with O. Fix 'O' at the start of the word, leaving 7 positions empty. We then have the 7 letters R, D, I, N, A, T, E to arrange, and the number of arrangements of 7 different letters is $7! = 5040$. So the number of words from ORDINATE beginning with 'O' is 5040.

(iii) Beginning with O and ending with E.

Here we need the number of words starting with O and ending with E. Fix O at the beginning and E at the end, leaving 6 positions empty. We then have R, D, I, N, A, T to arrange, and the number of permutations of 6 different letters is $6!$. So the number of words beginning with O and ending with E is $6! = 720$.

Note: The word ORDINATE has no repeated letters. If repeated letters are involved, the number of permutations is given by the formula $\dfrac{n!}{r_1!\,r_2!\,r_3!\cdots r_p!}$, where $r_1, r_2, \ldots, r_p$ are the numbers of letters of each repeated kind, so take care of that as well. Count the vowels and consonants carefully so as not to miss any cases in the first part. Try to solve these kinds of problems with the help of boxes for the positions; it makes the problem very flexible and easy.
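For readers who like to double-check such counts, here is a small brute-force sketch (an illustrative addition, not part of the original solution):

```python
# Brute-force verification of all three counts for ORDINATE.
from itertools import permutations

letters = "ORDINATE"          # 8 distinct letters
vowels = set("OIAE")

words = list(permutations(letters))   # 8! = 40320 arrangements

# (i) vowels in the odd places (positions 1, 3, 5, 7 -> indices 0, 2, 4, 6)
count_odd = sum(all(w[i] in vowels for i in (0, 2, 4, 6)) for w in words)

# (ii) words beginning with O
count_o = sum(w[0] == "O" for w in words)

# (iii) words beginning with O and ending with E
count_oe = sum(w[0] == "O" and w[-1] == "E" for w in words)

print(count_odd, count_o, count_oe)   # prints: 576 5040 720
```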
2021-05-09 16:30:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9424406290054321, "perplexity": 662.2452200582726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989006.71/warc/CC-MAIN-20210509153220-20210509183220-00504.warc.gz"}
http://tucs.fi/publications/view/?pub_id=cFuMeVa10a
## Some Examples of Computing the Possibilistic Correlation Coefficient from Joint Possibility Distributions

Robert Fullér, József Mezei, Péter Várlaki, Some Examples of Computing the Possibilistic Correlation Coefficient from Joint Possibility Distributions. In: Imre J. Rudas, János Fodor, Janusz Kacprzyk (Eds.), Computational Intelligence in Engineering, Studies in Computational Intelligence Series, Studies in Computational Intelligence 313, 153–169, Springer, 2010.

http://dx.doi.org/10.1007/978-3-642-15220-7_13

### Abstract:

In this paper we show some examples of computing the possibilistic correlation coefficient between marginal distributions of a joint possibility distribution. First we consider the joint possibility distributions $(1-x-y)$, $(1-x^2-y^2)$, $(1-\sqrt{x}-\sqrt{y})$ and $(1-x^2-y)$ on the set $\left\lbrace (x,y) \in \mathbb{R}^{2} \mid x \geq 0, y \geq 0, x+y \leq 1 \right\rbrace$; then we show (i) how the possibilistic correlation coefficient of two linear marginal possibility distributions changes from zero to -1/2, and from -1/2 to -3/5, by taking out bigger and bigger parts from the level sets of their joint possibility distribution; (ii) how to compute the autocorrelation coefficient of fuzzy time series with linear fuzzy data.

### BibTeX entry:

@INBOOK{cFuMeVa10a,
  title = {Some Examples of Computing the Possibilistic Correlation Coefficient from Joint Possibility Distributions},
  booktitle = {Computational Intelligence in Engineering, Studies in Computational Intelligence Series},
  author = {Fullér, Robert and Mezei, József and Várlaki, Péter},
  volume = {313},
  series = {Studies in Computational Intelligence},
  editor = {Rudas, Imre J. and Fodor, János and Kacprzyk, Janusz},
  publisher = {Springer},
  pages = {153–169},
  year = {2010},
  keywords = {Correlation coefficient, autocorrelation},
}

Belongs to TUCS Research Unit(s): Institute for Advanced Management Systems Research (IAMSR)

Publication Forum rating of this publication: level 2
2018-05-23 18:26:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7096554636955261, "perplexity": 5040.64698829905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865702.43/warc/CC-MAIN-20180523180641-20180523200641-00560.warc.gz"}
http://curiouscheetah.com/BlogMath/category/algebra/
# Algebra

## Deriving Euler's Identity

Euler's Identity has been called "the most beautiful equation" in mathematics. It neatly encapsulates five key values and three operators into a true equation: $e^{\pi i} + 1 = 0$ But why is it true? In this entry, I'm going…

## Is Zero a Factor of Zero?

Generally speaking, if $$a \times b = c$$, then $$a$$ and $$b$$ are factors of $$c$$. This concept appears at the secondary level in two contexts: the factors of positive integers, and the factors of a polynomial. If we limit…

## De Gua and the Pythagoreans

The Pythagorean Theorem states that, given a right triangle, squares placed along the two legs will have the same total area as a square placed on the hypotenuse. This is normally written as $$a^2 + b^2 = c^2$$,…

## Geometry for multiplication, division, and roots

Contemporary plane geometry of the sort taught in the standard American high school is most heavily informed by two books and a third mathematician. The first of these is Euclid's Elements, which is so conceptually tied to planar geometry that…

## Factoring and long division

This morning, I've been watching YouTube videos. I started with Tarleen Kaur's video on Middle Term Splitting. What I find interesting about Kaur's Chapter to Chapter videos is that, because she's a student in India, her methods are often different…

## Positive numbers and absolute value

They say that when you're a hammer, everything looks like a nail. Since I'm currently thinking about conceptual vs procedural teaching, I'm noticing examples. Here's a good definition of absolute value: "the magnitude of a real number without regard to…

## The smallest angle

I have been thinking about procedural vs conceptual thinking, which Skemp's seminal article refers to as relational vs instrumental. One of the questions on this year's geometry final asks: Given a triangle ABC with sides AB = 5, BC = 6,…

## Concepts vs procedures

A persistent topic in mathematics education is whether to focus on conceptual or procedural knowledge. After reading Kris Boulton's recent post that argues, "It depends," I found myself thinking about the disconnect between arithmetic and algebra. What is needed to understand…

## Graphing and the coordinate plane

Dan Meyer's latest post is on an exercise involving using a gridless coordinate plane to place fruit along two dimensions. The goal is a worthy one: to give students the opportunity to explore what the coordinate plane is without getting tied…

## Transformations as Functions

Most high school geometry textbooks will proclaim that there are four basic transformations. Three of these (translations, reflections, and rotations) are rigid transformations; the resulting copy (image) is congruent to the original version (pre-image). Here are examples: The first example…
2020-10-31 12:48:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4105907082557678, "perplexity": 1334.7540653684246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107918164.98/warc/CC-MAIN-20201031121940-20201031151940-00716.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-center-and-radius-of-the-circle-x-2-y-2-10x-6y-18-0
# How do you find the center and radius of the circle: x^2 + y^2 - 10x + 6y + 18 = 0?

Mar 23, 2018

The centre is $(5, -3)$ and the radius is $4$.

#### Explanation:

We must write this equation in the form $(x - a)^2 + (y - b)^2 = r^2$, where $(a, b)$ are the coordinates of the centre of the circle and $r$ is the radius.

The equation is $x^2 + y^2 - 10x + 6y + 18 = 0$.

Complete the squares: first add 25 to both sides of the equation:

$x^2 + y^2 - 10x + 25 + 6y + 18 = 0 + 25$
$(x - 5)^2 + y^2 + 6y + 18 = 0 + 25$

Now add 9 to both sides:

$(x - 5)^2 + y^2 + 6y + 18 + 9 = 0 + 25 + 9$
$(x - 5)^2 + (y + 3)^2 + 18 = 0 + 25 + 9$

This becomes $(x - 5)^2 + (y + 3)^2 = 16$.

So we can see that the centre is $(5, -3)$ and the radius is $\sqrt{16} = 4$.

Mar 23, 2018

centre: $C(5, -3)$; radius: $r = 4$

#### Explanation:

The general equation of a circle is
$$x^2 + y^2 + 2gx + 2fy + c = 0 \qquad (1)$$
whose centre is $C(-g, -f)$ and radius is $r = \sqrt{g^2 + f^2 - c}$.

We have $x^2 + y^2 - 10x + 6y + 18 = 0$.

Comparing with equation $(1)$, we get $2g = -10$, $2f = 6$ and $c = 18$, so $g = -5$, $f = 3$ and $c = 18$.

So the radius is $r = \sqrt{(-5)^2 + 3^2 - 18} = \sqrt{25 + 9 - 18} = \sqrt{16} = 4$, i.e. $r = 4 > 0$.

The centre is $C(-g, -f) = C(-(-5), -3)$, i.e. $C(5, -3)$.
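If you want to confirm the algebra mechanically, here is a small SymPy sketch (an addition for illustration; it is not part of either original answer):

```python
# Check that x^2 + y^2 - 10x + 6y + 18 = (x - 5)^2 + (y + 3)^2 - 16,
# which gives centre (5, -3) and radius sqrt(16) = 4.
import sympy as sp

x, y = sp.symbols("x y")
original = x**2 + y**2 - 10*x + 6*y + 18
completed = (x - 5)**2 + (y + 3)**2 - 16

print(sp.expand(original - completed))  # 0, so the two forms agree
```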
2019-09-23 03:02:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 26, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8958791494369507, "perplexity": 600.5279028581749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575860.37/warc/CC-MAIN-20190923022706-20190923044706-00457.warc.gz"}
https://physics.stackexchange.com/questions/392078/the-geiger-counter
# The Geiger Counter

In my class we talked about the GM counter, and we were shown a graph with the voltage on the x-axis and the intensity on the y-axis. But I do not understand why the graph has the characteristics it has: below a certain voltage the counter does not work, then the curve reaches a plateau, and at still higher voltage it suddenly rises again. I asked my teacher for clarification, but he was rather vague. He said that a certain voltage is needed for ionization to occur (I understand this partly), but he also said that there is a continuous discharge above a certain voltage (the part I do not completely understand). I could not find the same graph as in my textbook, but I found one similar enough, so I have enclosed that picture. I hope someone can help me understand this concept. Thank you in advance.
2019-12-06 02:36:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8399055600166321, "perplexity": 200.05218947536588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484477.5/warc/CC-MAIN-20191206023204-20191206051204-00116.warc.gz"}
https://stackovergo.com/es/q/3386563/adding-image-to-blob-mysql-via-c
# Adding an image to a blob (MySQL) via C#

I have made a C# application that adds images from the hard disk to a blob field in MySQL. I don't get errors and the blob field is filled. When I check the blob field in MySQL Workbench it says the field is not correct (it cannot show the image). Whatever I do, whatever example I try, nothing seems to work. So there is no point in placing any example code here, because nothing is working so far. Does anyone have good example code that I can try? Many thanks.

asked Sep 1 '12 at 16:09

- The code from your C# app might be useful, if for no other reason than to verify that what you're doing is correct. –
- What kind of image is it? Maybe MySQL Workbench doesn't know how to show a TIF (for example). Try grabbing the bytes and seeing if you CAN view the image. –
- @Cory: Some link is posted with code. I will check it, try it and post here. –
- @aquinas: Nothing special, just a 16x16 PNG. When I add the image directly with Workbench, it shows correctly. –
- OK. Then POST YOUR CODE. Write a 5-line console application that can replicate the problem. Then we can help you. It's highly doubtful that MySQL has a problem. :) –

## 3 Answers

Okay, since some people wanted some code to be shown, I created a very simple console application with the help described in the link from Alex. I literally took that code and placed it in the console app. Some miracle happened, because it worked as soon as I started the app. I don't know what I did wrong in the main application, but it has to be something (duh). Here is the code:

```csharp
string MyConString = "SERVER=localhost;" +
                     "DATABASE=database;" +
                     "UID=root;" +
                     "PASSWORD=yourpassword;";  // placeholder: the password part was cut off in the original post

System.IO.FileStream fs = new FileStream(@"D:\link\to\image.png", FileMode.Open);
System.IO.BufferedStream bf = new BufferedStream(fs);
byte[] buffer = new byte[bf.Length];
bf.Read(buffer, 0, buffer.Length);              // read the file contents into the buffer

MySqlConnection connection = new MySqlConnection(MyConString);
connection.Open();
MySqlCommand command = new MySqlCommand("", connection);
command.CommandText = "insert into table(fldImage) values(@image);";
command.Parameters.AddWithValue("@image", buffer);  // bind the image bytes to the @image parameter
command.ExecuteNonQuery();
connection.Close();
```

Answered Sep 2 '12 at 13:09

- Be careful in here. You never close the fs FileStream or the bf buffered stream. Your main problem could be that buffer doesn't contain all of the bytes of your image, resulting in a corrupted one in MySQL Workbench. You really should be applying the using statement to streams:

```csharp
using (var fs = new FileStream(@"D:\link\to\image.png", FileMode.Open))
{
    using (var bf = new BufferedStream(fs))
    {
        /* your existing code */
    }
}
```

– Cᴏʀʏ

Did you try to recover the blob via the MySQL connector (the way you store it)? MySQL Workbench not displaying the image doesn't mean that it isn't being stored. Try to recover the image:

```csharp
while (reader.Read())
{
    var image = (byte[])reader["fldImage"];  // read the blob column back out
    File.WriteAllBytes(@"c:\image.extension", image);
}
```

This way you can ensure it's saving.

Answered Sep 1 '12 at 16:09

- Tried your code; the image is saved, but the image cannot be shown by the Windows image viewer. So... now I can say the image is saved incorrectly? – user1140479
- If you try with the proper extension then yes. It's not being saved the way it should be. I'd suggest you post the code you're using, to see where the problem is. – Erre Efe

I find when working with MySQL and .NET that the MySQL Connector makes things really easy! This guy: Save image to MySQL via C# application seems to be on the right lines. Good luck!
answered May 23 '17 at 13:05
2022-06-30 17:29:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2069113552570343, "perplexity": 4756.513733241627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103850139.45/warc/CC-MAIN-20220630153307-20220630183307-00527.warc.gz"}
https://www.vedantu.com/question-answer/a-invested-rs-25300-for-7-months-b-invested-rs-class-10-maths-cbse-5edcbcaee5b56371c59317c5
Question

A invested Rs. 25,300 for 7 months, B invested Rs. 25,200 for 11 months and C invested Rs. 27,500 for 7 months. Find the share of A and C together out of a total profit of Rs. 33,600.

Hint – The total investment of each person is the respective number of months multiplied by the respective amount invested. The share of profit earned by A and C is proportional to their investment. (Use properties of ratios.)

Given data:

A invested Rs. 25,300 for 7 months. Therefore the total investment of A $= \left( {25300 \times 7} \right)$.

B invested Rs. 25,200 for 11 months. Therefore the total investment of B $= \left( {25200 \times 11} \right)$.

C invested Rs. 27,500 for 7 months. Therefore the total investment of C $= \left( {27500 \times 7} \right)$.

Therefore the ratio of their investments is $A:B:C = \left( {25300 \times 7} \right):\left( {25200 \times 11} \right):\left( {27500 \times 7} \right)$.

Dividing by 1100 we get
$\begin{gathered} A:B:C = \left( {23 \times 7} \right):\left( {252} \right):\left( {25 \times 7} \right) \\ A:B:C = 161:252:175 \\ \end{gathered}$

Now it is given that the total profit is Rs. 33,600. Let the shares of A, B and C be x, y and z respectively.

So the share x is
$\begin{gathered} \Rightarrow x = \dfrac{{161}}{{161 + 252 + 175}} \times 33600 \\ \Rightarrow x = \dfrac{{161}}{{588}} \times 33600 \\ \end{gathered}$

So the share y is
$\begin{gathered} \Rightarrow y = \dfrac{{252}}{{161 + 252 + 175}} \times 33600 \\ \Rightarrow y = \dfrac{{252}}{{588}} \times 33600 \\ \end{gathered}$

So the share z is
$\begin{gathered} \Rightarrow z = \dfrac{{175}}{{161 + 252 + 175}} \times 33600 \\ \Rightarrow z = \dfrac{{175}}{{588}} \times 33600 \\ \end{gathered}$

So the share of A and C together is
$\begin{gathered} x + z = \dfrac{{161}}{{588}} \times 33600 + \dfrac{{175}}{{588}} \times 33600 \\ \Rightarrow x + z = \dfrac{{336}}{{588}} \times 33600 = 19200 \\ \end{gathered}$

So the share of A and C together out of a total profit of Rs. 33,600 is Rs. 19,200.

Note – In such questions, first calculate the total investment of each person, then calculate the ratio of their investments and simplify. From this we get the fraction each person invested, which can then be multiplied by the total profit earned to get the share of profit earned by each person.
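A quick numeric cross-check of these shares (an illustrative addition, not part of the original solution):

```python
# Profit is split in proportion to (amount invested) x (months invested).
investments = {"A": 25300 * 7, "B": 25200 * 11, "C": 27500 * 7}
total = sum(investments.values())          # 646800, i.e. the ratio 161 : 252 : 175
profit = 33600

shares = {name: profit * v / total for name, v in investments.items()}
print(shares)                              # {'A': 9200.0, 'B': 14400.0, 'C': 10000.0}
print(shares["A"] + shares["C"])           # 19200.0
```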
2021-05-17 16:59:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7198849320411682, "perplexity": 1364.3745328821444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991258.68/warc/CC-MAIN-20210517150020-20210517180020-00523.warc.gz"}
https://fr.maplesoft.com/support/help/maple/view.aspx?path=examples%2FLA_options
Linear Algebra Options - Maple Programming Help

> restart
> with(LinearAlgebra):
2021-03-08 00:07:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 70, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9177228808403015, "perplexity": 2001.5109152208397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381230.99/warc/CC-MAIN-20210307231028-20210308021028-00545.warc.gz"}
https://edwardfhughes.wordpress.com/tag/2015/
# Conference Amplitudes 2015 – Integrability, Colorful Duality and Hiking

The middle day of a conference. So often this is the graveyard slot – when initial hysteria has waned and the final furlong seems far off. The organisers should take great credit that today was, if anything, the most engaging thus far! Even the weather was well-scheduled, breaking overnight to provide us with more conducive working conditions.

Integrability was our wake-up call this morning. I mentioned this hot topic a while back. Effectively it's an umbrella term for techniques that give you exact answers. For amplitudes folk, this is the stuff of dreams. Up until recently the best we could achieve was an expansion in small or large parameters!

So what's new? Dr. Amit Sever brought us up to date on developments at the Perimeter Institute, where the world's most brilliant minds have found a way to map certain scattering amplitudes in $4$ dimensions onto a $2$ dimensional model which can be exactly solved. More technically, they've created a flux tube representation for planar amplitudes in $\mathcal{N}=4$ super-Yang-Mills, which can then be solved using spin chain methods.

The upshot is that they've calculated $6$ particle scattering amplitudes to all values of the ('t Hooft) coupling. Their method makes no mention of Feynman diagrams or string theory – the old-fashioned ways of computing this amplitude for weak and strong coupling respectively. Nevertheless the answer matches exactly known results in both of these regimes.

There's more! By putting their computation under the microscope they've unearthed unexpected new physics. Surprisingly the multiparticle poles familiar from perturbative quantum field theory disappear. Doing the full calculation smoothes out divergent behaviour in each perturbative term. This is perhaps rather counterintuitive, given that we usually think of higher-loop amplitudes as progressively less well-behaved. It reminds me somewhat of Regge theory, in which the UV behaviour of a tower of higher spin states is much better than that of each one individually.

The smorgasbord of progress continued in Mattias Wilhelm's talk. The Humboldt group have a completely orthogonal approach linking integrability to amplitudes. By computing form factors using unitarity, they've been able to determine loop corrections to anomalous dimensions. Sounds technical, I know. But don't get bogged down! I'll give you the upshot as a headline – New Link between Methods, Form Factors Say.

Coffee consumed, and it was time to get colorful. You'll hopefully remember that the quarks holding protons and neutrons together come in three different shades. These aren't really colors that you can see. But they are internal labels attached to the particles which seem vital for our theory to work! About 30 years ago, people realised you could split off the color-related information and just deal with the complicated issues of particle momentum. Once you've sorted that out, you write down your answer as a sum. Each term involves some color stuff and a momentum piece. Schematically

$\displaystyle \textrm{gluon amplitude}=\sum \textrm{color}\times \textrm{kinematics}$

What they didn't realise was that you can shuffle momentum dependence between terms to force the kinematic parts to satisfy the same equations as the color parts! This observation, made back in 2010 by Zvi Bern, John Joseph Carrasco and Henrik Johansson, has important consequences for gravity in particular. Why's that?
Well, if you arrange your Yang-Mills kinematics in the form suggested by those gentlemen then you get gravity amplitudes for free. Merely strip off the color bit and replace it by another copy of the kinematics! In my super-vague language above

$\displaystyle \textrm{graviton amplitude}=\sum \textrm{kinematics}\times \textrm{kinematics}$

Dr. John Joseph Carrasco himself brought us up to date with a cunning method of determining the relevant kinematic choice at loop level. I can't help but mention his touching modesty. Even though the whole community refers to the relations by the acronym BCJ, he didn't do so once! Before that Dr. Donal O'Connell took us on an intriguing detour of solutions to classical gravity theories with an appropriate dual Yang-Mills theory, obtainable via a BCJ procedure. The idea is beautiful, and seems completely obvious once you've been told! Kudos to the authors for thinking of it.

After lunch we enjoyed a well-earned break with a hike up the Uetliberg mountain. I learnt that this large hill is colloquially called Gmuetliberg. Yvonne Geyer helpfully explained that this is a derogatory reference to the tame nature of the climb! Nevertheless the scenery was very pleasant, particularly given that we were mere minutes away from the centre of a European city. What I wouldn't give for an Uetliberg in London!

Evening brought us to Heidi and Tell, a touristic yet tasty burger joint. Eager to offset some of my voracious calorie consumption I took a turn around the Altstadt. If you're ever in Zurich it's well worth a look – very little beats medieval streets, Alpine water and live swing music in the evening light.

Conversations

It was fantastic to meet Professor Lionel Mason and discuss various ideas for extending the ambitwistor string formalism to form factors. I also had great fun chatting to Julio Martinez about linking CHY and BCJ. Finally huge thanks to Dr. Angnis Schmidt-May for patiently explaining the latest research in the field of massive gravity. The story is truly fascinating, and could well be a good candidate for a tractable quantum gravity model!

Erratum: An earlier version of this post mistakenly claimed that Chris White spoke about BCJ for equations of motion. Of course, it was his collaborator Donal O'Connell who delivered the talk. Many thanks to JJ Carrasco for pointing out my error!

# Conference Amplitudes 2015!

It's conference season! I'm hanging out in very warm Zurich with the biggest names in my field – scattering amplitudes. Sure it's good fun to be outside the office. But there's serious work going on too! Research conferences are a vital forum for the exchange of ideas. Inspiration and collaboration flow far more easily in person than via email or telephone. I'll be blogging the highlights throughout the week.

Monday | Morning Session

To kick off we have some real physics from the Large Hadron Collider! Professor Nigel Glover's research provides a vital bridge between theory and experiment. Most physicists in this room are almost mathematicians, focussed on developing techniques rather than computing realistic quantities. Yet the motivation for this quest lies with serious experiments, like the LHC. We're currently entering an era where the theoretical uncertainty trumps experimental error. With the latest upgrade at CERN, particle smashers will reach unprecedented accuracy. This leaves us amplitudes theorists with a large task. In fact, the experimentalists regularly draw up a wishlist to keep us honest!
According to Nigel, the challenge is to make our predictions twice as good within ten years. At first glance, this 2x challenge doesn't seem too hard! After all, Moore's Law guarantees us a doubling of computing power in the next few years. But the scale of the problem is so large that more computing power won't solve it! We need new techniques to get to NNLO – that is, corrections that are multiplied by $\alpha_s^2$, the square of the strong coupling. (Of course, we must also take into account electroweak effects, but we'll concentrate on the strong force for now.)

Nigel helpfully broke down the problem into three components. Firstly we must compute the missing higher order terms in the amplitude. The state of the art is lacking at present! Next we need better control of our input parameters. Finally we need to improve our model of how protons break apart when you smash them together in beams.

My research helps in a small part with the final problem. At present I'm finishing up a paper on subleading soft loop corrections, revealing some new structure and developing a couple of new ideas. The hope is that one day someone will use this to better eliminate some irritating low energy effects which can spoil the theoretical prediction.

In May, I was lucky enough to meet Bell Labs president Dr. Marcus Weldon in Murray Hill, New Jersey. He spoke about his vision for a 10x leap forward in every one of their technologies within a decade. This kind of game-changing goal requires lateral thinking and truly new ideas. We face exactly the same challenge in the world of scattering amplitudes. The fact that we're aiming for only a 2x improvement is by no means a lack of ambition. Rather it underlines the problem that doubling our predictive power entails far more than a 10x increase in the complexity of calculations using current techniques.

I've talked a lot about accuracy so far, but notice that I haven't mentioned precision. Nigel was at pains to distinguish the two, courtesy of this amusing cartoon. Why is this so important? Well, many people believe that NNLO calculations will reduce the renormalization scale uncertainty in theoretical predictions. This is a big plus point! Many checks on known NNLO results (such as W boson production processes) confirm this hunch. This means the predictions are much more precise. But it doesn't guarantee accuracy! To hit the bullseye there's still much work to be done. This week we'll be sharpening our mathematical tools, ready to do battle with the complexities of the universe. And with that in mind – it's time to get back to the next seminar. Stay tuned for further updates!

Update | Monday Evening

Only time for the briefest of bulletins, following a productive and enjoyable evening on the roof of the ETH main building. Fantastic to chat again to Tomek Lukowski (on ambitwistor strings), Scott Davies (on supergravity 4-loop calculations and soft theorems) and Philipp Haehnal (on the twistor approach to conformal gravity). Equally enlightening to meet many others, not least our gracious hosts from ETH Zurich.

My favourite moment of the day came in Xuan Chen's seminar, where he discussed a simple yet powerful method to check the numerical stability of precision QCD calculations. It's well known that these should factorize in appropriate kinematic regions, well described by imaginatively named antenna functions. By painstakingly verifying this factorization in a number of cases Xuan detected and remedied an important inaccuracy in a Higgs to 4 jet result.
Of course it was a pleasure to hear my second supervisor, Professor Gabriele Travaglini, speak about his latest papers on the dilatation operator. The rederivation of known integrability results using amplitudes opens up an enticing new avenue for those intrepid explorers who yearn to solve $\mathcal{N}=4$ super-Yang-Mills!

Finally, Dr. Simon Badger's update on the Edinburgh group's work was intriguing. One challenge for NNLO computations is to understand 2-loop corrections in QCD. The team have taken an important step towards this by analysing 5-point scattering of right-handed particles. In principle this is a deterministic procedure: draw some pictures and compute. But getting a compact formula requires some ingenuity. First you need an integral reduction to identify appropriate master integrals. Then you must apply KK and BCJ relations to weed out the dead wood that's cluttering up the formula unnecessarily. Trouble is, neither of these procedures is uniquely defined – so intelligent guesswork is the order of the day!

That's quite enough for now – time for some sleep in the balmy temperatures of central Europe.
2020-08-07 21:36:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.434807687997818, "perplexity": 1907.358565511182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737225.57/warc/CC-MAIN-20200807202502-20200807232502-00094.warc.gz"}
https://geometrycourse.wordpress.com/category/uncategorized/
# High School Algebra in Ancient Mesopotamia

I.

On an online forum for discussing math, a user named Mr. Javascript ("If you've ever gone to the doctor, purchased insurance, or used a credit card, my code may have been executed" is his bio) took a swing at polynomial factoring:

The wife and I are sitting here on a Saturday night doing some algebra homework. We are factoring polynomials and we both had the same thought at the same time: when are we going to use this?

For some reason, factoring makes people angry. I've never entirely understood why. Is it useless? Mostly, but it's unclear to me that it's especially useless. It's probably a bit more useless than knowing how to ride a unicycle, a bit less useless than the punchline of the first Peanuts comic ("How I hate him!"), which means that it fits nicely within the range of useless stuff I learned and enjoyed as a teen.

Part of the problem is that factoring is too much of one thing, not enough of another. It's typically introduced to students as a method for solving polynomial equations. But it's never the only method taught. If you hate or fear algebraic manipulation, are you going to solve an equation by factoring? Not if you can graph it. And if algebraic manipulation is your speed, why bring a spoon to a knife fight? The quadratic formula or completing the square could be your go-to.

So, nobody's students like factoring. (Sit down, Honors Algebra.) It seems frivolous and useless. Which is why I was a bit surprised to see it coming up again and again while reading about ancient mathematics.

I've been on a bit of a math history kick lately. I started with The Beginnings and Evolution of Algebra, a book I found while scanning the shelves at school for some summer reading. Beginnings and Evolution seems to heavily rely on van der Waerden's dry but important Geometry and Algebra in Ancient Civilizations. A search for an up-to-date, well-written version of all this led me to Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, which is the best of the bunch for my needs…

…and then things started spiraling out of control. Katz's material on Mesopotamia sent me to Eleanor Robson, and this gem of a piece by her is fantastic. Read Robson for sentences like "the conceptual transition from arc to angle was slow and halting" and "Sherlock Holmes, if he ever made it to Babylon, would have been over 100 miles away from the action."

"Using the history of algebra, teachers of the subject can increase students' overall understanding of the material." This is from Katz and Parshall, at the start of Taming the Unknown. Are we to take teaching tips from Mesopotamian scribes? What exactly can a modern teacher glean from mathematical history?

II.

Not many people have five words in their name, but most people aren't Bartel Leendert van der Waerden. Though a student of Emmy Noether (who was Jewish) he managed to hold on to his university position in Germany during Nazi rule. (On one hand, when talking to the Nazis he made a point of his "full-blooded Aryanness". On the other, he seemed disposed against the regime in fairly conventional ways. In sum, he's guilty of cowardice and self-interest. Aren't we all?) He wrote the first comprehensive textbook on modern algebra, and later his interests turned to the history of mathematics. In both Scientific Awakening and Geometry and Algebra in Ancient Civilizations, he put ancient sources in conversation with a modern mathematical perspective.
Often, he discovered modern theorems lurking in the work of the ancients. These included various identities that today we would teach as factoring:

$(a + b)^2 = a^2 + 2ab + b^2$

$(a - b)^2 = a^2 - 2ab + b^2$

$(a + b)(a - b) = a^2 - b^2$

This third identity — the difference of squares — is studied by high school math students all over the world. Personally, I love the way this identity can be used as a mental math shortcut, allowing for seemingly heroic acts of paperless computation. So, for example, if you transform $101 \times 99$ into a difference of squares, you can conclude that the product must be 9,999:

$101 \times 99 \leftrightarrow (100 + 1)(100 - 1) \leftrightarrow 100^2 - 1^2$

Similarly:

$203 \times 197 \leftrightarrow 200^2 - 3^2$

And, for extra fun, you can sometimes play the game backwards, starting with a difference of squares and transforming it into a product. From brilliant.org, here is a particularly clever use of this transformation:

$99^2 - 98^2 \leftrightarrow (99 + 98) \times 1$

In school math, however, the identity is typically used as an exercise in and of itself:

$x^2 - 9 \leftrightarrow (x + 3)(x - 3)$

$a^2 x^2 - 9b^2 \leftrightarrow (ax + 3b)(ax - 3b)$

$\frac{a^2 x^2}{100} - \frac{9b^2}{121} \leftrightarrow (\frac{ax}{10} + \frac{3b}{11})(\frac{ax}{10} - \frac{3b}{11})$

And so on, and so on, until the font of the terms becomes far too small to read.

Van der Waerden is very clear; he believes the ancient Mesopotamians knew of these (and other) factoring identities. For the longest time, I pored over van der Waerden's books in search of a source that could justify his claim. I found none, and grew frustrated. Where is the proof that the Babylonians knew and used the difference of squares identity? And why doesn't van der Waerden bother to carefully show this?

Now, though, my best read is that he thought it was just blindingly obvious that they used the difference of squares. This is because the Mesopotamian method for solving quadratic equations ("completing the square") just is an instance of the difference of squares transformation. When presented with a problem such as $x^2 + bx = c$, the Mesopotamians would typically transform this expression into a difference of squares. Pictorially, this rectangle of area "$c$"…

…is cut and pasted into a difference-of-squares arrangement:

This was the fundamental step in their solution of a quadratic equation. If the initial diagram had area $c$ (as it does, since $x^2 + bx = c$) then this difference of two squares does too. And then things get rolling: the area of the full square is $\frac{b^2}{4}+c$; the side length is $\sqrt{\frac{b^2}{4}+c}$; the missing length, $x$, is $\sqrt{\frac{b^2}{4}+c} - \frac{b}{2}$. We have just come very, very close to deriving the quadratic formula.

This, I think, is why van der Waerden doesn't bother to carefully source his claim. The difference of squares identity is in constant use by Mesopotamian mathematicians, as they go about their business of solving area problems. Shocking, right? A factoring identity was crucially useful.

III. (a) it's not so "shocking"; (b) modern historians don't accept his defence

This is lovely, but is it algebra? Van der Waerden certainly thought so. As in other ancient sources, Mesopotamian math was couched entirely in the language of shapes — squares, lengths, sides, rectangles, area. There was no symbolic language, no equations or expressions. Just words and numbers. Nevertheless, he thought that all of this was essentially a geometric sheen over an algebraic core.
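Since the scribal recipe above is effectively an algorithm, here it is in modern form, a minimal sketch of my own (not something from the tablets, or from van der Waerden):

```python
# The Babylonian "completing the square" recipe for x^2 + b*x = c:
# the completed square has area b^2/4 + c, so its side is sqrt(b^2/4 + c),
# and the unknown length x is that side minus b/2.
from math import sqrt

def babylonian_solve(b: float, c: float) -> float:
    """Positive root of x^2 + b*x = c, cut-and-paste style."""
    return sqrt(b * b / 4 + c) - b / 2

print(babylonian_solve(10, 39))  # 3.0 -- the much-quoted x^2 + 10x = 39 example
```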
After speculating on how Mesopotamian mathematicians might have geometrically derived the difference of squares identity, he cautions his reader:

But we must guard against being led astray by the geometric terminology. The thought processes of the Babylonians were chiefly algebraic. It is true that they illustrated unknown numbers by means of lines and areas, but they always remained numbers.

If this is so, then perhaps ancient mathematicians would use the difference of squares identity just as we do, for quick mental computation in special cases, or for factoring expressions.

In time, though, van der Waerden's view would become extremely controversial. (Well, at least as heated as these things get in the history of math.) All of this is new to me, so I'm worried that I'm going to get this wrong, but here's what I'm gathering.

The first major breakthroughs in the history of ancient math came from modern mathematicians — people like good-old Nazi-tolerating van der Waerden. Their contributions can't be minimized — they put ancient mathematics in terms moderns can understand. More recently, the study of ancient mathematical texts has been taken over by historians trained as historians. These scholars understand ancient societies in a way that the mathematicians-turned-historians could not. These more recent scholars believe that mathematicians like van der Waerden erred by seeing too much of the past in modern mathematical terms.

The current consensus seems to be that it's an error to see the Mesopotamians as engaging in algebra. But what should we call it? It was clearly algorithmic — they used the same procedure for solving quadratic situations again and again. It was also, clearly, geometric. The problems were couched in geometric language, and it seems that rather than conceiving of their solutions as numbers they conceived of them as lengths and areas that were constructed.

The earlier, debated term for describing this style of math was "geometrical algebra," but that's fallen out of favor. In Taming the Unknown the authors seem to prefer "cut and paste" to describe this mathematical style — I like that! (I also sort of like quasi-algebraic geometry, not despite but because it's a mouthful. Fun to say!)

It's not that ancient Mesopotamians weren't "abstract thinkers" or something. The scribes who produced these quadratic equations weren't solving practical problems. From Taming: "The problems in geometrical algebra that show up on the clay tablets are artificial, in the sense that they do not reflect actual problems that the surveyors (or the scribes) needed to solve."

In fact, no one can quite figure out why they were solving them in the first place. Taming summarizes two interesting proposed contexts for the math:

Various scholars have speculated about these origins, however. Jens Hoyrup, for example, believes that they were originally developed as riddles to demonstrate the professional competence of surveyors and scribes, while Eleanor Robson holds that the development of techniques for accurately calculating lines and areas reflected a concern with justice, part of the ideology of the Mesopotamian states around 2000 BCE.
And while our students might study factoring identities, that surely doesn't mean that they would see their work in what is found on clay tablets in Iraq. We tend to think of mathematics as universal, Eleanor Robson points out. Studying the history of math shows us that this is not so: for example, the Mesopotamians didn't use radii to find the area of circles. (Area was entirely a function of circumference.) They had no concept of angle measure. And their math was largely concerned with geometric algorithms, rather than numerical ones.

IV.

Each year, in my high school geometry classes, we study a bit of cut-and-paste math. We do this while studying area formulas. (I first learned this approach from the CME Geometry text.) Can you cut a triangle, and rearrange its pieces into a rectangle? How about a trapezoid? Remember: you can't throw pieces away, and you can't add anything either! (Somehow I always forget to say this and kids end up giving the triangle a haircut and declaring victory.)

Most years, a student ends up wondering whether it's possible to cut and rearrange the pieces of a rectangle to make a square. My first instinct has always been, sure, it must be possible, but I've never known how to. (Come to think of it, is it really always possible? Suppose the rectangle is 1 ft by 19 ft. We would have to find a way to cut and rearrange the pieces of the original into a square whose sides are each $\sqrt{19}$. That feels impossible…)

While reading ancient Greek mathematics, I learned that Euclid provided an algorithm for constructing a square with area equal to that of a rectangle. (Come to think of it, if you can construct the square, you must be able to cut and paste the original rectangle into it. Right? Right?? I'm confused.)

This isn't quite how Euclid did it, but it's closely related. Start with a rectangle of a certain height (H) and width (W). Its area would be $W \times H$. Suppose that we were able to make a right triangle that looked like this:

Then, the Pythagorean Theorem would tell us, this diagram follows:

Do you see it? Do you see the difference of squares in the bottom left square?

Here's the rest of the argument. If:

$a^2 - b^2 \leftrightarrow (a + b)(a - b)$

Then:

$(W+H)^2 - (W-H)^2 \leftrightarrow (W + H + W - H)(W + H - W + H) = 2W \times 2H = 4 \times WH$

Which means: that square is four times bigger than our original rectangle. A fourth of it, though, is a square of area equal to what we started with.

The way Euclid did it doesn't exactly involve the difference of squares identity, but I think there's something important about this version of the proof. The difference of squares identity is deeply connected to the Mesopotamian cut-and-paste method. It's also deeply connected to the Pythagorean Theorem, though this rarely comes up in school math.

We typically introduce the Pythagorean Theorem as a sum of squares relationship:

$A^2 + B^2 = C^2$

But it's equally true that the Pythagorean Theorem is saying something about a difference of squares:

$A^2 = C^2 - B^2$

Which means that you could just as well put it like this:

$A^2 = (C + B)(C - B)$

If you want to go down a number theory road, this line of reasoning can lead to a method for generating an unending collection of integers that make the Pythagorean Theorem true. (See APPENDIX below.)
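The appendix works out the algebra; as a preview (my illustration, not the blog's), the recipe it lands on is easy to check in code: pick two odd numbers $s > t \geq 1$ and set $a = st$, $b = \tfrac{1}{2}(s^2 - t^2)$, $c = \tfrac{1}{2}(s^2 + t^2)$.

```python
# Generate Pythagorean triples from pairs of odd numbers s > t >= 1.
def triple(s: int, t: int) -> tuple:
    return s * t, (s * s - t * t) // 2, (s * s + t * t) // 2

for s, t in [(3, 1), (5, 1), (5, 3), (7, 1)]:
    a, b, c = triple(s, t)
    assert a * a + b * b == c * c          # a genuine right triangle
    print(a, b, c)                         # 3 4 5 / 5 12 13 / 15 8 17 / 7 24 25
```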
The older generation of scholars — the mathematicians-turned-historians — thought that they had recognized this Pythagoras/Factoring one-two punch in a Mesopotamian clay tablet called "Plimpton 322." They thought that the tablet represented a list of Pythagorean triples, and they inferred that the difference of squares method was used to generate them. Here's what Plimpton 322 looks like, by the way:

Plimpton 322: We used to think these were Pythagorean triples.

Eleanor Robson wrote a fantastic article challenging this view. She argues on both mathematical and contextual grounds that this table can't represent Pythagorean triples. For her, this is just another example of mathematicians not understanding Mesopotamia on its own mathematical and social terms.

But even if this idea is not quite as ancient as initially supposed, there is no doubt that Plato, Euclid and Diophantus used functions like this one to generate Pythagorean triples. And the connection between factoring the difference of squares and the Pythagorean Theorem is mathematically true, no matter the historical record.

A curriculum is a funny thing. In high school, students study area formulas, the Pythagorean Theorem and factoring a difference of squares. And yet, it would seem to never do a teacher any good to get into all of the above, except as an extension or something. What was a natural part of an ancient curriculum has no place in our own.

V.

So: can studying the past help us better teach factoring?

At one level, I think the answer is "no." The needs of Mesopotamian scribes and Greek philosopher-mathematicians are too different from the needs of our students. We live in different worlds, and that applies to mathematics too.

At a less-practical level — the level of understanding — I think this historical excursion helped me better understand a few important points about teaching factoring to students.

Mr. and Mrs. Javascript were worrying about the usefulness of factoring. Almost certainly, they were worrying about problems like this:

Solve by factoring: $x^2 + 11x + 10 = 0$

These sorts of problems, though, basically never showed up in the ancient history, because the ancients almost never studied these trinomial equations. Because these situations always had an area interpretation, it didn't make sense to set an expression equal to zero. Instead, the equations always involved two terms. Factoring didn't involve generic trinomials. It involved the special factoring formulas.

I really have no taste for curriculum advocacy — I teach what I'm expected to teach — but I do think it could make sense to separate the difference of squares from other factoring cases. Perhaps I'd teach difference of squares (and the binomial expansion, i.e. $(a + b)^2$) first.

The other thought I have is mushier, and more general. The mistake of the early mathematician-historians was to see too much of algebra in the cut-and-paste geometry of the Mesopotamians and Greeks. What they failed to understand was the extent to which this ancient math was fully geometrified. It was fully and thoroughly geometry, all the way down.

It seems weird, then. Why didn't the Mesopotamians and Greeks make the leap to algebra? And why don't our students make these same connections?

In the history of education there have been people who have made very strong claims about the similarity of children's development to the historical development of cultures.
This is wrong — and often racist and colonialist, as it assumes that other cultures are further behind on an inevitable path towards the present. But historians of mathematics have a more nuanced view of Mesopotamia now. It's not that ancient cultures knew — or failed to know — algebra, as much as they had their own sort of algorithmic geometry. It made sense to them, and it needs to be understood in its own context and time.

All of this, though, might make us a little more pessimistic about the usefulness of geometry for helping students learn algebraic concepts. The geometry of cut-and-paste really is different from the algebra of factoring. It's only when you understand both that you can look back and see the connections between them, as van der Waerden did. When faced with a tough topic, math teachers often like to change the context — add a story, move to pictures, put things in geometric terms. A lesson from the history of math is to be very, very worried about whether these more comprehensible contexts are really aids for understanding the difficult things. There is nothing simple about moving from geometry to algebra.

APPENDIX: ON PYTHAGOREAN TRIPLES AND THE DIFFERENCE OF SQUARES

Where were we? Ah, right:

$a^2 = (c + b)(c - b)$

It's not obvious that both $(c + b)$ and $(c-b)$ have to be square numbers, but they do (I think), and let's call the first square number $s^2$ and the second $t^2$. Which means that the following two equations are true:

$c + b = s^2$

$c - b = t^2$

Add those two equations together, and you get a new one.

$2c = s^2 + t^2 \leftrightarrow c = \frac{1}{2}(s^2 + t^2)$

Subtract them, and you get an equation for $b$.

$2b = s^2 - t^2 \leftrightarrow b = \frac{1}{2}(s^2 - t^2)$

We had already said that $a^2 = s^2 t^2 \leftrightarrow a = st$.

So, there you have it. Pick two numbers (both odd, say, so the halves come out whole), swap them in for $s$ and $t$ and you get yourself a new Pythagorean triple! Thanks again, difference of squares.

# A fun problem, as long as you know the Pythagorean Theorem

I wanted to start the year off with this, until I realized (duh) that it really uses the Pythagorean Theorem. But it's a great problem, and so rich for spinning off. What about triangles? What about pentagons? What about irregular triangles? Can you always get a circle around the polygon, no matter where you start? (An honest-to-god context for drawing a circle through three points.)

More than ever, I'd like to really focus on problem posing this year, and this problem is very rich for jumping off from. I love it, and can't wait to bring it in to class.

# Arithmagons and Polygons

Arithmagons are cool, right? (The sum of the vertices is the value of the edge.) Are triangles special, or can you solve them for any polygon? What determines when you can solve one of these? Why do triangles always work, but square puzzles don't always work?

What shapes* can you make with the bendy straws?

* Do the ends have to connect? We can either require that the ends connect, or not.

You can try to make a triangle. But can you always make a triangle out of the two straws? What requirements are there? Did you try extending the straw out all the way? Did you try scrunching it in?

I see a shape there, but I notice that the ends don't connect perfectly. Can you make them connect perfectly? What categories of shapes are we able to make if the ends have to connect perfectly? (And we can use a bit of tape to make sure that the ends are sticking together.) Are there more categories? How do we know? What's special about these shapes?
What happens if you flip one of these shapes onto its back side? What changes? Why?

I think that this might be a nice mini-investigation that could motivate some proofy concerns.

# Game: Polygon Races

Rules: Each partner gets a board that looks like this, and you win the race by crossing the edge (any edge) of the box before your opponent does. Inside the circle, you're going to draw a shape. Maybe you'll draw a triangle, or a square, or something else. The next thing that you draw is going to be the same shape, but it has to touch all the corners of your original shape. So, if your first move looks like this:

Then your second move can look like this:

And your third move will be a third square, touching all four points of the second square. If you keep on drawing squares, eventually you'll hit the edge. If you get there before your opponent does, you win.

I'd have kids play this game a couple of times and have them figure out strategies. I'd encourage them to try various shapes, and see if any of them get them to the edge faster. If a kid thinks that certain shapes are winners, I'd encourage them to explain why. (There's a quick growth-rate sketch at the end of this post.)

That's one possible opener for a series of days investigating inscribed polygons.

# Parallel Lines and Transversals

Standard:

Flipped: Make your own parallel line puzzle. Make an easy one and a hard one.

Meta: What's the minimum number of angles that you have to provide in a puzzle with 2 intersecting pairs of parallel lines?

There's a worthwhile impulse here, but I don't like the way the meta-problem turned out. But I think that there's something really worthwhile here, the notion that after solving a problem we then study the class of problems itself.
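And here's the promised growth-rate sketch for the Polygon Races game. One assumption on my part, based on the square pictures: each move rotates the new shape so that the old corners land at the midpoints of its sides. Under that convention, the circumradius of a regular n-gon grows by a factor of $1/\cos(\pi/n)$ per move:

```python
# Which regular polygon races to the edge fastest? Each move scales the
# circumradius by 1/cos(pi/n), assuming the old corners sit at the
# midpoints of the new polygon's sides.
import math

for n in [3, 4, 5, 6, 8, 12]:
    factor = 1 / math.cos(math.pi / n)
    moves = math.ceil(math.log(20) / math.log(factor))  # moves to grow 20x
    print(f"{n}-gon: x{factor:.3f} per move, {moves} moves to grow 20x")
```

Triangles double each move and win; the more sides a shape has, the slower it races outward.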
https://www.geosci-instrum-method-data-syst.net/7/209/2018/
Geosci. Instrum. Method. Data Syst., 7, 209–221, 2018
https://doi.org/10.5194/gi-7-209-2018

Research article | 25 Jul 2018

# The development and test research of a multichannel synchronous transient electromagnetic receiver

Fanqiang Lin1,2, Xuben Wang2,3, Kecheng Chen1, Depan Hu4, Song Gao1, Xue Zou1, and Cai Zeng1

• 1College of Information Science and Technology, Chengdu University of Technology, Chengdu 610059, China
• 2College of Geophysics, Chengdu University of Technology, Chengdu 610059, China
• 3Key Lab of Earth Exploration and Information Technique of Ministry of Education, Chengdu 610059, China
• 4Geoenvironment Monitoring Institute in Chengdu, Chengdu 610071, China

Correspondence: Fanqiang Lin (linfq@cdut.edu.cn)

Abstract

As a result of the drastic reduction in shallow mineral resources, the exploitable potential and reserves of proven mines are insufficient, and mineral resources deep underground need to be explored in a more refined way. Existing instruments have disadvantages such as few channels and slow sampling rates. Therefore, a multiparameter transient electromagnetic instrument with synchronous receiving has been developed and tested. The instrument is composed of two controllers – an embedded controller and a programmable logic controller – which can provide a diversified information combination for follow-up information processing. Under the grounding electrode source emission mode, real-time synchronous transient electromagnetic acquisition over six channels is achieved with a sampling rate of 128,000 samples per second (SPS). The data acquired by the six channels are recorded over the full time range of the time domain. Furthermore, experiments were carried out in the laboratory, in open areas, and in an actual mine. Data analysis shows that the measured curves from the mining area are highly consistent with the existing geochemical exploration curves and the geological profile.

1 Introduction

Figure 1: System block diagram of hardware.

Figure 2: INA114 instrument amplifier and programmable gain amplifier circuit.

The transient electromagnetic method (TEM) belongs to the active field source methods of time-domain electromagnetics. James R. Wait first proposed the transient electromagnetic method to search for conducting ore bodies in 1951. In recent decades, TEM receivers have been widely used in metal mineral, petroleum, and natural gas exploration and other fields (Danielsen et al., 2003; Haroon et al., 2015), so many kinds of TEM receivers have been developed and manufactured (Li et al., 2012), such as the V8 receiver from Phoenix Geophysics (Phoenix Geophysics, 2017), the ADU-07e from Metronix (Metronix, 2017), and the KMS-820 from KMS (KMS, 2017), which are easy to use and have good performance. Meanwhile, TEM detection theories have also been developed (Wright, 2004). In 2010, the internationally renowned geophysicist Zhdanov proposed the following directions for future electromagnetic exploration instruments and methods at the 75th Annual Conference of Geophysics: multicomponent emission, multichannel reception, and pseudo-seismic data collection (Qi et al., 2015; Ayuso et al., 2016).
Xue Guoqiang proposed the short-offset transient electromagnetic method (SOTEM) (Xue et al., 2013; Chen et al., 2016, 2017); reducing the transceiver distance can both greatly enhance the exploration depth and improve the signal-to-noise ratio (SNR). Because of the demand for increased exploration depth and the need to obtain more diverse subsurface geological information, the number of channels, a high SNR, and synchronization are the key elements for TEM receiver development; they are useful for obtaining more abundant underground geological information and improving the accuracy of detection. Therefore, we propose a multichannel synchronous transient electromagnetic receiver, which has the characteristics of multichannel synchronous parallel acquisition, large-capacity (64 GB) storage with full-time synchronization, and a sampling rate of up to 128,000 SPS, whereas the sampling rate of the V8 receiver is only 96,000 SPS.

Figure 4: DAC7714 potential compensation and reference voltage circuit.

Figure 5: Single-ended-to-differential converter circuit.

Figure 6: Analog-to-digital converter circuit.

The main purpose of this paper is to introduce a receiver system for the synchronous acquisition of multiple electromagnetic signals in transient electromagnetic prospecting, to achieve multiparameter and multichannel synchronous reception. High-speed programmable logic devices are used to achieve a high level of synchronization between different channels. Transmitting-current waveform acquisition and multichannel reception can be synchronized by using a high-precision Global Positioning System (GPS) timing unit controlled by a serial port of the microcontroller, and a high-precision counter is used to further improve the synchronization of data acquisition, especially when multiple receivers work in distributed mode (X. H. Zhang et al., 2017).

Figure 7: FPGA controller unit circuit.

2 Multichannel receiver hardware and software design

The receiving system consists of four parts, namely a field-programmable gate array (FPGA) unit, an advanced reduced instruction set computer (RISC) machine (ARM) unit, an analog acquisition unit, and a power supply circuit. The system block diagram is shown in Fig. 1. The analog boards are numbered as channels 1/2, 3/4, and 5/6, and each board has two channels. Each board contains the signal-conditioning circuit and the channels for amplification and acquisition, which are completely independent. Hence, multichannel real-time synchronized acquisition is achieved.

Figure 8: STM32F407 unit circuit.

Figure 9: High-precision power circuit.

Table 1: Magnification test of six channels.

## 2.2 Design of analog circuit board

The analog board mainly includes the signal conditioner (comprising the first-stage instrumentation amplifier, the second-stage programmable amplifier, and a low-pass filter), a protection circuit (input-stage clamp diodes), the natural potential compensation circuit (Liu et al., 2016), a single-ended-to-differential circuit, and an analog-to-digital converter.

Figure 10: Flowchart of system program.

Figure 11: The relation of input signal amplitude and amplification ratio in DC testing.

The first-stage amplifier uses a dedicated high-precision, low-noise amplifier, the INA114, and a high-resolution digital potentiometer, the AD5272, is used in the precision-gain op-amp design to achieve high-precision amplification and range adjustment. The two circuits are shown in Figs. 2 and 3.
In Fig. 2, the circuit utilizes the programmable gain amplifier PGA103 and a general input and output (IO) port to complete the three-level amplification ratio adjustment. Two natural potential compensation voltages and two full-scale reference voltages are supplied by the DAC7714, a 12-bit digital-to-analog converter with four output channels. The DAC7714 circuit is shown in Fig. 4.

The multichannel receiver has a wide input voltage range, from −5 to +5 V, while the amplitude of the signals actually collected ranges from dozens to hundreds of millivolts. The ADA4941 is a single-ended-to-differential converter that is used as the front end of the analog-to-digital converter. Each channel of the receiver uses a separate AD7767, a 24-bit high-resolution, wide-dynamic-range converter. The AD7767 is a Σ-Δ ADC with oversampling characteristics, which reduces front-end noise and the need for a front-end anti-aliasing filter, and its daisy-chain capability is used to realize the multi-chip cascade connection for an efficient parallel synchronous acquisition method (Liu et al., 2017). These two circuits are shown in Figs. 5 and 6.

Table 2: Error analysis of interchannel consistency.

By adopting a modular design, two AD7767 chips are cascaded on each board, and an FPGA controller controls the three analog circuit boards separately; the overall data packaging and storage is therefore achieved in real time. The AD7767 chip has a maximum output data rate of 128 kHz, so the bandwidth of the receiver ranges from direct current (DC) to 12.8 kHz. For good acquisition accuracy, the frequency of the input signal should be less than 12.8 kHz.

Figure 12: Sine waveforms received by six channels synchronously.

Figure 13: Waveform and its spectrum diagram.

## 2.3 Design of the digital logic controller board

The multichannel synchronous receiver is made up of an embedded controller and an FPGA controller (Oballe-Peinado et al., 2017), an arrangement that is flexible and widely used. The FPGA is the core controller of the receiving system, and the synchronization between the various functional units is easily achieved by its parallel processing: a number of logic modules are driven by the same clock source. This receiver uses an XC6SLX9 chip as the core acquisition controller, which belongs to the Spartan-6 series from the well-known FPGA maker Xilinx. It is shown in Fig. 7. The FPGA unit is mainly composed of an SD card unit circuit, a static random access memory (SRAM) buffer circuit, the serial peripheral interface (SPI) used to communicate with the STM32 controller, the synchronous signal acquisition and control interface of the six channels, the GPS pulse-per-second (PPS) signal interface, and the power interface.

The STM32F4 series from STMicroelectronics, which is popular in industry and instrumentation, is adopted as the controller unit. The embedded controller unit has the following functions: keyboard input, communication with the FPGA, GPS location information collection, and a liquid crystal display (LCD). The collected GPS information is composed of latitude, longitude, time, and so on, which is output by the GPS module through the serial port and simultaneously shown on the LCD. A 6×8 matrix keyboard is designed to set the system parameters and select the channels for data collection. When the GPS satellite signal is locked, the receiving system can collect data according to the preset instructions via a start key.
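To make the two-stage gain arithmetic of Sect. 2.2 concrete, a short Python sketch follows. It is an illustration rather than the instrument's firmware: it assumes the INA114 datasheet gain formula G = 1 + 50 kΩ/R_G, the three PGA103 gain steps (1, 10, and 100), and a 20 kΩ, 1024-position AD5272 used as the gain-setting resistor R_G (the specific AD5272 variant is an assumption).

```python
# Sketch of the INA114 + PGA103 gain chain (component values assumed).
INA114_R_INT = 50_000       # INA114: G = 1 + 50 kOhm / R_G
AD5272_R_FS = 20_000        # assuming the 20 kOhm AD5272 variant
AD5272_STEPS = 1024         # 1024-position digital rheostat

def total_gain(wiper_code: int, pga_gain: int) -> float:
    """Overall gain for a given AD5272 wiper code and PGA103 setting."""
    assert 0 < wiper_code < AD5272_STEPS
    assert pga_gain in (1, 10, 100)          # PGA103 gain steps
    r_g = AD5272_R_FS * wiper_code / (AD5272_STEPS - 1)
    return (1 + INA114_R_INT / r_g) * pga_gain

print(total_gain(512, 10))   # ~60: (1 + 50k/10k) * 10
```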
A general-purpose IO port is used to control the DAC7714 chip to achieve natural potential compensation, reference voltage setting, and fine gain tuning. The STM32F407 diagram is shown in Fig. 8.

Figure 15: Single-cycle waveform (40 ms).

Figure 16: Profile of the open area.

## 2.4 High-precision linear power circuit

Low-dropout linear power chips with high precision and a high ripple rejection ratio are selected (Joo et al., 2017; Duong et al., 2017). On the power board, there are three independent connecting plugs, one for each analog board (Ren et al., 2015). The receiver uses TPS7A series chips, manufactured by Texas Instruments. These chips have the following characteristics: a wide input voltage range, low noise, and a high ripple rejection ratio. The power board has four different output voltages (+10, −10, +5, and −5 V), which are used for the preamplifiers and the programmable gain amplifier (PGA). A separate +5 V supply powers the digital potentiometer on the analog board and the DAC7714 chip, and a 3.3 V supply provides digital logic power for the analog-to-digital converter AD7767 (Yun et al., 2017). The power diagram is shown in Fig. 9. Since all chips are low-power, the overall power consumption of the receiver is less than 10 W.

Figure 17: Waveform diagram of data collected in mining area.

Figure 18: Survey line profile of mining area.

## 2.5 Software design

The software mainly includes two parts: (1) the program of the FPGA unit, which involves data storage on an SD card, real-time communication with the STM32, data processing in the data buffer area, and so on; and (2) the program of the STM32 unit, which includes the gain adjustment of the first-stage preamplifier, the ratio setting of the second-stage programmable amplifier, and natural potential compensation control (Khomutov et al., 2017).

Figure 19: Anomaly of geochemical exploration of mining area along the survey line.

Figure 20: Geological profile of mining area.

The FPGA mainly controls the front-end analog-to-digital converter and data storage, and interacts with the STM32 via SPI. First, the FPGA controller loads its program from flash after power-on and then waits for the operation control commands sent from the STM32, such as system initialization, parameter setting, and sensor calibration (Wang et al., 2017). Time information and the channel serial number are stored on the SD card through the FPGA program; this information is stored in the first 8 bytes of each sector of the SD card. The PPS provided by the GPS unit is used to update the internal base time of the FPGA every 2 s, which reduces the cumulative time error. The embedded controller is responsible for coordinating the operation of the entire system, including the gain adjustment of each channel, GPS time information reception and status monitoring, real-time display of acquired data, and communication with the FPGA controller (X. Y. Zhang et al., 2017). The system flowchart is shown in Fig. 10.

Figure 22: T4 transmitter.

Figure 23: The mining area map.

3 Performance testing

Performance testing of the receiver system mainly includes DC testing, AC testing, and field testing (Ziolkowski et al., 2010; Zhou et al., 2015). In the DC stability testing, we first tested the magnification range of each channel and the magnification consistency. Taking the first channel as an example, as shown in Fig. 11 below, the input signal is the blue curve, with a range of ±50 mV.
The green curve shows the amplification ratio of the first channel, calculated for each data value of the input signal. Under the same test conditions, the six channels were connected in parallel, and the same input signals were received separately by the six channels at the same time. The average amplification ratio of each channel is shown in Table 1.

Next was the conformance test for the channels of the receiver, which aimed to confirm high consistency between channels. In the test, the positive terminals of the six channels were connected together as a positive port, while the negative terminals were linked to each other as a negative port. The two ports were fed a standard sine wave. The peak-to-peak voltage of the input signal was 20 mV and its frequency was 150 Hz. The extracted values of each channel at the same instants were compared to obtain the average error, the average absolute error, and the error percentage, as shown in Table 2. The average error is less than 2.2 µV.

In the AC testing, the positive and negative terminals of the six channels were connected separately, and sine waves generated by a standard signal generator were input to the receiver. The waveforms generated from the collected data are shown in Fig. 12. The peak-to-peak value of the input signal was 10 mV, and its frequency was 20 Hz. As can be seen from the figure, the consistency between the channels is quite high and there is no phase offset. After the input signals are amplified by the two-stage amplifier on the analog board, the output peak-to-peak value of each channel is up to 1000 mV.

$$\mathrm{SNR} = 20 \log_{10} \frac{\text{full-scale value}}{\text{root mean square value of noise}} \tag{1}$$

Figure 13a shows the waveform of three cycles of data acquired by the first channel shown in Fig. 12. Figure 13b presents the frequency spectrum obtained by fast Fourier transform (FFT). As shown in Fig. 13b, the frequency of the input sine wave is 20 Hz, and the spectrum has few harmonic components, which indicates the excellent performance of the analog circuit board and the high stability of the power circuit. In Fig. 14, with all six channels connected to zero voltage in the same way, 2000 data points of each channel are displayed. According to Eq. (1), the SNR of each channel is calculated to be approximately 100 dB.
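As a numerical illustration of Eq. (1) — this is our sketch, and the noise figure below is an assumed example value rather than a measurement from the paper:

```python
import math

def snr_db(full_scale: float, noise_rms: float) -> float:
    """Eq. (1): SNR = 20 * log10(full-scale value / RMS noise)."""
    return 20 * math.log10(full_scale / noise_rms)

# e.g. a 5 V full scale with 50 uV of RMS noise reproduces the ~100 dB
# reported for the zero-input test:
print(snr_db(5.0, 50e-6))   # 100.0
```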
4 Field testing

In the field testing, a hollow coil with 400 turns and a diameter of 50 cm was used as the receiving sensor (Brunke et al., 2017); its effective area is 40 m². The Phoenix T-4 is a small, low-power transmitter manufactured by Phoenix. It can output currents of up to 40 A, is powered by a battery group, and can transmit many kinds of waveforms, such as TD50 (Phoenix's designation), a bipolar pulse waveform with a 50 % duty cycle (Wang et al., 2015), which was used in the actual test. The V8 system is the eighth generation of receiver technology developed by Phoenix since 1975. The V8 sampling rate is 96,000 SPS and its flash memory capacity is only 512 MB; it cannot store all received data, and it is synchronized by a GPS unit. In contrast, the sampling rate of the multichannel TEM receiver is 128,000 SPS, and its synchronization mechanism is realized by GPS together with a 28-bit high-precision counter. The accuracy of synchronization is thereby improved, and all the time information and collected data are stored on an SD card simultaneously.

Two sites were selected for the field testing: one a relatively open area, the other an actual mining area. The near-field measurement method was used in the test. A grounded electrode source was adopted as the field source. Four aluminum plates were used as electrodes; they were linked together, buried 40 cm deep in the ground, and covered with salt water and soil, so that the grounding resistance was less than 10 Ω. The distance between the transmitting electrodes A and B was 400 m, and four batteries were used as the power supply, giving a voltage between 40 and 50 V. The emission current was 3 A, and the emission frequency was 25 Hz. During the test, the power supply electrodes were stationary, while the preset measurement line was parallel to the transmitting line. The offset between the transmitting line and the receiving line was 40 m. The changing rate of the magnetic flux density (dB/dt) was received by the coil. Figure 15 shows one original cycle.

The open area where the test was carried out is formed by a mixture of soil and building slag. The measurement line was observed at eight measuring points. After the data were stored, the data of the eight collecting points were processed by a MATLAB program, and the secondary-field information of each testing point was extracted to form the resulting profile curve. Figure 16 shows the profile formed by the extracted data; it indicates that the overall response tends to be flat. The signals received at the measurement points close to the transmitting nodes are relatively strong, while the central part of the profile flattens out slowly.

In order to test the performance of the receiver further, a mining area in Leshan, Sichuan Province, China, was selected for a field test, and a relatively gentle geodetic survey line was planned. Electrode source emission was adopted, and the distance between the transmitting electrodes was 400 m. The testing line offset was 80 m, and the distance between the measuring points was 40 m. A continuous launch mode was used in the mining area test, and the transmitting frequency of the signal was 25 Hz. The multichannel synchronous transient electromagnetic receiver can be used to collect the transmitting current waveform, the change rate of the magnetic field, and the intensity of the electric field. Figure 17 is composed of three sub-figures from the mining area: Fig. 17a shows the change rate of the magnetic field (dB/dt), Fig. 17b presents the electric field intensity, and Fig. 17c shows the transmitting current waveform. All sub-figures display two periods of the signals (80 ms). Figures 17a and b were formed from data acquired by one receiver, and Fig. 17c was formed from data acquired by another receiver.

The high-precision synchronization mechanism is implemented with a high-reliability GPS module and a 28-bit counter in the FPGA. The receiver's time information is refreshed every 2 s by the PPS of the GPS module. The 28-bit counter records the pulses of a 25 MHz clock in the FPGA chip. The 28-bit counter value, the time information, and the data collected by the receiver are packaged and stored on the SD card.
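The tick arithmetic behind the 40 ns figure quoted below can be sketched as follows; this is our illustration, and the function and its arguments are hypothetical rather than the receiver's actual storage format:

```python
CLOCK_HZ = 25_000_000     # 25 MHz FPGA clock -> 40 ns per tick
COUNTER_BITS = 28         # counter refreshed every 2 s by the GPS PPS

def sample_time(pps_epoch_s: float, counter: int) -> float:
    """Absolute sample time: last PPS epoch plus the recorded tick count."""
    assert 0 <= counter < 2 ** COUNTER_BITS
    return pps_epoch_s + counter / CLOCK_HZ

print(1 / CLOCK_HZ)                   # 4e-08 s = 40 ns resolution
print(2 ** COUNTER_BITS / CLOCK_HZ)   # ~10.7 s span, well above 2 s PPS
print(sample_time(100.0, 1_250_000))  # 100.05
```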
The time error between any two points is 40 ns, given the period of the 25 MHz clock and the refresh frequency. Each collected sample can be tracked through the time information and data information stored on the SD card. This full-time-range storage technique greatly enhances the synchronization accuracy, even when many receivers work at the same time in the mining area.

The data acquired in the mining area were secondary-field extracted, filtered, and interpolated, and the waveforms were formed. The time-domain waveform is smoother after the waveforms are superimposed 200 times, and it reflects the response of the underground geological bodies to the TEM well. The data of each measuring point were processed to obtain the pure secondary-field curves, and the time-domain waveforms were extracted to form the profile of the measuring line. In Fig. 18, t1–t12 represent the extraction times (Zhong et al., 2016), and the extracted values of the different collection points were gathered together to form 12 curves at different time points. As can be seen by comparing Figs. 19 and 20 with Fig. 18, the high anomaly point is near Point 1160, which happens to be a sloped metal vein. This verifies that the receiver can acquire transient electromagnetic signals very well. By analyzing and comparing the profile with the actual geochemical distribution, it can clearly be seen that the anomalies of the curves are highly consistent at Point 1160. In addition, the anomaly is consistent with the geological structure in this area. Figure 21 is a photo of the receiver, Fig. 22 shows the T4 transmitter working in the field, and Fig. 23 is the map of the mining area.

5 Conclusions

The purpose of this paper was to develop a multichannel synchronous receiver that can be applied in transient electromagnetic prospecting. First, the hardware circuits and software programs were designed to realize the functions mentioned above. By means of a dual controller, the receiver can acquire signals synchronously through six independent channels. The data stored on the SD card were then processed by computer programs to generate profile graphs, and the overall performance of the receiver was tested and verified. The collection error of each channel is less than 0.35 %, and each channel can connect to different sensors, such as coils, magnetic probes, and electrodes. The receiver can be used to collect transient electromagnetic information; owing to its high precision and high sampling rate, it can capture the fast falling edge of the TEM signal with ultralow noise. Hence, by adopting this receiver, the multichannel synchronous acquisition of magnetic field information in three directions, electric field information in two directions, and the changing rate of the magnetic induction intensity can be realized in time-domain reception. In addition, the reliability, practicability, and data validity of the receiver were verified by field testing in the mining area. The receiver can also be used for pseudorandom signal reception and distributed three-dimensional data reception, which can improve geophysical exploration efficiency.

Code availability

The codes of the receiver are available upon request (linfq@cdut.edu.cn).

Data availability

The circuit schematics of the receiver are available upon request (linfq@cdut.edu.cn).

Author contributions

FQ developed the main hardware and part of the software, and XB instructed all the authors.
The other authors participated in the experiments and software development.

Competing interests

The authors declare that they have no conflict of interest.

Acknowledgements

This work was supported by the national key research and development project (no. 2016YFC0600300), the National Natural Science Foundation of China (no. 41674078), and the Field Observation and Research Base of the Ministry of Land and Resources of China for Geohazards in the earthquake-disturbed area of the Longmen Mountains in Chengdu.

Edited by: Luis Vazquez
Reviewed by: Taifei Zhao and one anonymous referee

References

Ayuso, S., Blanco, J. J., Medina, J., Gomez-Herrero, R., Garcia-Poblacion, O., and Gracia, T. I.: A coincidence detection system based on real-time software, Geosci. Instrum. Meth., 2, 1–36, https://doi.org/10.5194/gi-5-437-2016, 2016.
Brunke, H. P., Widmer-Schnidrig, R., and Korte, M.: Merging fluxgate and induction coil data to produce low-noise geomagnetic observatory data meeting the INTERMAGNET definitive 1 s data standard, Geosci. Instrum. Meth., 2, 487–493, https://doi.org/10.5194/gi-6-487-2017, 2017.
Chen, W. Y., Khan, M. Y., Xue, G. Q., Cui, J. W., and Zhong, H. S.: Study on the response and optimal observation area for SOTEM, Chinese J. Geophys.-Ch., 2, 739–748, https://doi.org/10.6038/cjg20160231, 2016.
Chen, W. Y., Khan, M. Y., and Xue, G. Q.: Response of surface-to-borehole SOTEM method on 2D earth, J. Geophys. Eng., 3, 987–997, https://doi.org/10.1088/1742-2140/aa6fcc, 2017.
Danielsen, J. E., Auken, E., Jørgensen, F., Søndergaard, V., and Sørensen, K. I.: The application of the transient electromagnetic method in hydrogeophysical surveys, J. Appl. Geophys., 53, 181–198, https://doi.org/10.1016/j.jappgeo.2003.08.004, 2003.
Duong, Q. H., Nguyen, H. H., Kong, J. W., Shin, H. S., Ko, Y. S., Yu, H. Y., Lee, Y. H., Bea, C. H., and Park, H. J.: Multiple-Loop Design Technique for High-Performance Low-Dropout Regulator, IEEE J. Solid-St. Circ., 10, 2533–2549, https://doi.org/10.1109/JSSC.2017.2717922, 2017.
Haroon, A., Adrian, J., Bergers, R., Gurk, M., Tezkan, B., Mammadov, A. L., and Novruzov, A. G.: Joint inversion of long-offset and central-loop transient electromagnetic data: Application to a mud volcano exploration in Perekishkul, Azerbaijan, Geophys. Prospect., 63, 478–494, 2015.
Li, W. Y., Yan, C. W., Zou, Z. W., and Wang, R. H.: Research progress of transient electromagnetic apparatus, Journal of YunNan University (natural science edition), 34, 233–241, 2012.
Liu, H., Wang, W. D., Xiang, C. L., Han, L. J., and Nie, H. Z.: A de-noising method using the improved wavelet threshold function based on noise variance estimation, Mechanical Systems and Signal Processing, 99, 30–46, doi:10.1016/j.ymssp, 2017.
Liu, J. N., Ai, W., Wen, F., and Liu, D. H.: Design of Cascaded Multi-channel Signal Isolated Acquisition Circuit, Instrument Technique and Sensor, 12, 148–151, 2016.
Joo, S. and Kim, S.: PSR enhancement techniques for output-capacitor-free LDO regulator design, Analog Integrated Circuits and Signal Processing, 2, 319–327, https://doi.org/10.1007/s10470-017-1045-9, 2017.
Khomutov, S. Y., Mandrikova, O. V., Budilova, E. A., Arora, K., and Manjula, L.: Noise in raw data from magnetic observatories, Geosci. Instrum. Meth., 2, 329–343, https://doi.org/10.5194/gi-6-329-2017, 2017.
KMS: KMS-820, http://www.kmstechnologies.com, last access: 1 January 2017.
Metronix: ADU-07e, https://www.metronix.de/metronixweb, last access: 1 January 2017.
Oballe-Peinado, O., Hidalgo-Lopez, J. A., Castellanos-Ramos, J., Sanchez-Duran, J. A., Navas-Gonzalez, R., Herran, J., and Vidal-Verdu, F.: FPGA-Based Tactile Sensor Suite Electronics for Real-Time Embedded Processing, IEEE T. Ind. Electron., 12, 9657–9665, https://doi.org/10.1109/TIE.2017.2714137, 2017.
Qi, Y. F., Yin, C. C., Wang, R., and Cai, J.: Multi-transient EM full-time forward modeling and inversion of m-sequences, Chinese J. Geophys., 7, 2566–2577, https://doi.org/10.6038/cjg20150731, 2015.
Ren, Z. D., Guo, C. S., and Lin, P. F.: Design of High-Performance LDO with Buffer Impedance Attenuation, Microelectronics, 2, 225–227, 2015.
Wang, H. F., Chen, S. D., Zhang, S., Yuan, Z. W., Zhang, H. Y., Fang, D., and Zhu, J.: A High-Performance Portable Transient Electro-Magnetic Sensor for Unexploded Ordnance Detection, Sensors (Basel, Switzerland), 11, 2651–2668, https://doi.org/10.3390/s17112651, 2017.
Wang, X. L., Gao, J. X., Tian, J. S., and Zhang, Y. T.: Design of power transformer in MTEM transmitter power, Atlantis Press, International Conference on Civil, Materials and Environmental Sciences (CMES 2015), 161–164, https://doi.org/10.2991/cmes-15.2015.47, 2015.
Wright, D.: Detection of hydrocarbons and their movement in a reservoir using time-lapse Multichannel Transient ElectroMagnetic data, PhD thesis, University of Edinburgh, Edinburgh, 2004.
Xue, G. Q., Chen, W. Y., Zhou, N. N., and Li, H.: Short offset TEM technique with a grounded wire source for deep sounding, Chinese J. Geophys., 1, 255–261, https://doi.org/10.6038/cjg20130126, 2013.
Yun, S. J., Kim, J. S., and Kim, Y. S.: Capless LDO Regulator Achieving −76 dB PSR and 96.3 fs FOM, IEEE T. Circuits Syst., 10, 1147–1151, https://doi.org/10.1109/TCSII.2016.2628965, 2017.
Zhang, X. H., Du, J. L., Fan, C. G., Liu, D., Fang, J. L., and Wang, L. S.: A Wireless Sensor Monitoring Node Based on Automatic Tracking Solar-Powered Panel for Paddy Field Environment, IEEE Internet Things, 5, 1304–1311, https://doi.org/10.1109/JIOT.2017.2706418, 2017.
Zhang, X. Y., Zhang, Q. S., Wang, M., Kong, Q., Zhang, S. Q., He, R. H., Liu, S. H., Li, S. H., and Yuan, Z. Z.: Development of a full-waveform voltage and current recording device for multichannel transient electromagnetic transmitters, Geosci. Instrum. Meth., 2, 495–503, https://doi.org/10.5194/gi-6-495, 2017.
Zhong, H. S., Xue, G. Q., Li, X., Zhi, Q. Q., and Di, Q. Y.: Pseudo wave field extraction in the multi-channel transient electromagnetic (MTEM) method, Chinese J. Geophys., 59, 4424–4431, 2016.
Zhou, N. N., Xue, G. Q., Chen, W. Y., and Chen, J. L.: Large-depth hydrogeological detection in the North China-type coalfield through short-offset grounded-wire TEM, Environ. Earth Sci., 3, 2393–2404, https://doi.org/10.1007/s12665-015-4240-y, 2015.
Ziolkowski, A., Parr, R., Wright, D., Nockles, V., Limond, C., Morris, E., and Linfoot, J.: Multi-transient electromagnetic repeatability experiment over the North Sea Harding field, Geophys. Prospect., 58, 1159–1176, 2010.
https://math.stackexchange.com/questions/2584928/tree-graphs-colorings-with-k-colors
# Tree Graphs Colorings With K Colors

For each of the trees, how many different ways are there of coloring the vertices with k colors such that adjacent vertices are colored with different colors, and so that two colorings of the graph are considered different if there is no rearrangement of the vertices so that they look the same? I don't understand the last bit.

For example, in the first graph the coloring:

$A\\B-C\\D\\E-F\\G$

is "the same" as the coloring:

$G\\E-F\\D\\B-C\\A$

• Okay thanks, so let's say there's only two colours, because the first graph only needs two colours to be coloured. Would it just be – Mahlissa LECKY Dec 30 '17 at 0:22
• Hmm, but you need the minimum? – Martín Vacas Vignolo Dec 30 '17 at 0:24
• He never specified, but what I meant was: for each shape there would only be two unique colourings with two colours B W--B B W--B – Mahlissa LECKY Dec 30 '17 at 0:47
• Ah, yes. It's right – Martín Vacas Vignolo Dec 30 '17 at 0:48
• Okay great, thanks. Now he also asked a similar question but I never got to answer it because of his wording: **For each of the trees in the first problem, how many different ways are there of coloring the vertices with k colors (no restriction on which colors can be used) where two colorings of the graph are considered different if there is no rearrangement of the vertices so that the colorings look the same?** – Mahlissa LECKY Dec 30 '17 at 0:52

### Algorithm and answer to query

A Google search will reveal that these graph colorings are counted by so-called orbital chromatic polynomials (as opposed to ordinary chromatic polynomials). These count proper colorings under the action of the automorphism group of the graph. The algorithm to compute these is documented e.g. at Peter Cameron's blog. It is really quite straightforward, supposing that the number of automorphisms of the graph is of a reasonable order, e.g. linear in the number of vertices as in the case of a bracelet.

What we do here is to apply Burnside's lemma, iterating over all automorphisms, computing the number of proper colorings that each one fixes, and averaging the results. To do this we factorize the automorphisms into cycles, on which the color has to be constant. Therefore an automorphism contributes nothing if one of its cycles contains two vertices linked by an edge, since these may not be monochrome in a proper coloring. Otherwise we shrink all the cycles to vertices, creating a reduced graph, where two new vertices are adjacent if there existed an edge between the vertices on the corresponding cycles in the source graph. The colorings fixed by this automorphism are then counted by the ordinary chromatic polynomial of the reduced graph. Hence the orbital chromatic polynomial is obtained by averaging these over the number of automorphisms.

It is not difficult to implement this in Maple, where the goal was to get a functioning program to answer the question, which may of course be optimized in many ways. We decided on a simple data structure representing graphs and their automorphism group by a list of the edges, the number of vertices, and the permutations from the group. We use these as inputs to Burnside, where we carry out the iteration that we described, obtaining the OCP.
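Before the Maple below, here is a small brute-force cross-check of the Burnside procedure in Python (mine, not part of the original answer): for each automorphism we count the proper colorings it fixes directly, then average over the group. For tree number three this reproduces the enumeration quoted below.

```python
from itertools import permutations, product

def fixed_proper_colorings(n, edges, perm, k):
    """Proper k-colorings fixed by the permutation i -> perm[i]."""
    count = 0
    for cols in product(range(k), repeat=n):
        if any(cols[u] == cols[v] for u, v in edges):
            continue                                   # not proper
        if all(cols[perm[i]] == cols[i] for i in range(n)):
            count += 1                                 # fixed by perm
    return count

def ocp_value(n, edges, automs, k):
    """Orbital chromatic polynomial evaluated at k, by Burnside."""
    return sum(fixed_proper_colorings(n, edges, p, k)
               for p in automs) // len(automs)

# Tree number three: path 0-1-2 plus four leaves 3..6 on vertex 0;
# its automorphisms permute the four leaves.
edges = [(0, 1), (1, 2), (0, 3), (0, 4), (0, 5), (0, 6)]
automs = [[0, 1, 2] + list(p) for p in permutations([3, 4, 5, 6])]
print([ocp_value(7, edges, automs, k) for k in range(1, 6)])
# -> [0, 2, 60, 540, 2800]
```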
Numbering the four trees from left to right, we thus obtain for the first tree

$$\frac{1}{8}k^7 - \frac{1}{2}k^6 + \frac{3}{4}k^5 - \frac{1}{4}k^4 - \frac{3}{8}k^3 + \frac{1}{4}k^2,$$

for the second one

$$\frac{1}{6}k^7 - k^6 + 3k^5 - \frac{16}{3}k^4 + \frac{35}{6}k^3 - \frac{11}{3}k^2 + k,$$

for the third one

$$\frac{1}{24}k^7 - \frac{1}{6}k^5 + \frac{1}{12}k^4 + \frac{1}{8}k^3 - \frac{1}{12}k^2,$$

and for the last one

$$\frac{1}{12}k^7 - \frac{1}{6}k^6 + \frac{1}{6}k^4 - \frac{1}{12}k^3.$$

The admissible colorings under the action of the automorphism group of the tree are then computed by instantiating $k$ to the number of colors.

### Sanity checks and more

With these trees having a reasonable number of automorphisms we can check the correctness of the OCP by enumerating colorings with few colors. This was done and may be seen in the attached Maple code. E.g. for tree number three the enumeration routine produces

$$0, 2, 60, 540, 2800, 10500, \ldots$$

which is indeed given by the polynomial listed above. Similarly, the question of proper colorings of bracelets (dihedral symmetry) recently appeared at this MSE link. This forms the second sanity check, where we obtain matching OCPs from the cited link and the present document. E.g. for bracelets on five and six beads we obtain by both methods

$$\frac{1}{10}k^5 - \frac{1}{2}k^4 + k^3 - k^2 + \frac{2}{5}k$$

and

$$\frac{1}{12}k^6 - \frac{1}{2}k^5 + \frac{3}{2}k^4 - \frac{7}{3}k^3 + \frac{23}{12}k^2 - \frac{2}{3}k.$$

### Maple code

We now present the Maple code, which uses only one routine from a library of Polya Enumeration code while the rest translates the specification of the algorithm with little auxiliary effort required.

with(GraphTheory); with(combinat);

T1 :=
proc()
option remember;
    return [7, {{1,2}, {1,3}, {2,4}, {2,5}, {3,6}, {3,7}},
            [[1,2,3,4,5,6,7], [1,2,3,5,4,6,7],
             [1,2,3,4,5,7,6], [1,2,3,5,4,7,6],
             [1,3,2,6,7,4,5], [1,3,2,7,6,4,5],
             [1,3,2,6,7,5,4], [1,3,2,7,6,5,4]]];
end;

T2 :=
proc()
option remember;
local automs, src, perm;
    src := [[2,3], [4,5], [6,7]];
    automs := [];
    for perm in permute(3) do
        automs := [op(automs), [1, seq(op(src[perm[q]]), q=1..3)]];
    od;
    return [7, {{1,2}, {1,4}, {1,6}, {2,3}, {4,5}, {6,7}}, automs];
end;

T3 :=
proc()
option remember;
local automs, perm;
    automs := [];
    for perm in permute(4) do
        automs := [op(automs), [1, 2, 3, seq(perm[q]+3, q=1..4)]];
    od;
    return [7, {{1,2}, {2,3}, {1,4}, {1,5}, {1,6}, {1,7}}, automs];
end;

T4 :=
proc()
option remember;
local automs, perm1, perm2;
    automs := [];
    for perm1 in permute(2) do
        for perm2 in permute(3) do
            automs := [op(automs),
                       [1, 2, seq(perm1[q]+2, q=1..2),
                        seq(perm2[q]+4, q=1..3)]];
        od;
    od;
    return [7, {{1,2}, {1,5}, {1,6}, {1,7}, {2,3}, {2,4}}, automs];
end;

BRACELET :=
proc(n)
option remember;
local automs, rot, shft, edges;
    if n=1 then return [1, {}, [[1]]] fi;
    automs := [];
    for rot to n do
        shft := [seq(q, q=rot..n), seq(q, q=1..rot-1)];
        automs := [op(automs), shft, [seq(shft[n-q], q=0..n-1)]];
    od;
    edges := {{n, 1}, seq({q, q+1}, q=1..n-1)};
    return [n, edges, automs];
end;

pet_autom2cyclesA :=
proc(src, aut)
local numa, numsubs, marks, pos, cycs, data, item, cpos, clen;
    numsubs := [seq(src[k]=k, k=1..nops(src))];
    numa := subs(numsubs, aut);
    marks := Array([seq(true, pos=1..nops(aut))]);
    cycs := []; pos := 1; data := [];
    while pos <= nops(aut) do
        if marks[pos] then
            clen := 0; item := []; cpos := pos;
            while marks[cpos] do
                marks[cpos] := false;
                item := [op(item), aut[cpos]];
                cpos := numa[cpos];
                clen := clen+1;
            od;
            cycs := [op(cycs), clen];
            data := [op(data), item];
        fi;
        pos := pos+1;
    od;
    return [data, mul(a[cycs[k]], k=1..nops(cycs))];
end;

OCP :=
proc(tdata)
option remember;
local n, edges, automs, autom, src, cycs, ccount, admit, cidx, edg,
      potedg, rededgs, cdx1, cdx2, c1, c2, redG, ocp;
    n := tdata[1]; edges := tdata[2]; automs := tdata[3];
    src := [seq(q, q=1..n)];
    ocp := 0;
    for autom in automs do
        cycs := pet_autom2cyclesA(src, autom)[1];
        ccount := nops(cycs);
        # an automorphism contributes only if no cycle contains an edge
        admit := true; cidx := 1;
        while admit and cidx <= ccount do
            for edg in choose(cycs[cidx], 2) do
                if {edg[1], edg[2]} in edges then
                    admit := false; break;
                fi;
            od;
            cidx := cidx + 1;
        od;
        if admit then
            # build the reduced graph on the shrunken cycles
            rededgs := {};
            for cdx1 to ccount do
                for cdx2 from cdx1+1 to ccount do
                    c1 := cycs[cdx1]; c2 := cycs[cdx2];
                    potedg := {seq(seq({c1[p], c2[q]},
                                       p=1..nops(c1)), q=1..nops(c2))};
                    if edges intersect potedg <> {} then
                        rededgs := {op(rededgs), {cdx1, cdx2}};
                    fi;
                od;
            od;
            redG := Graph([seq(q, q=1..ccount)], rededgs);
            ocp := ocp + ChromaticPolynomial(redG, 'k');
        fi;
    od;
    expand(ocp/nops(automs));
end;

X := (tdata, kval) -> subs('k'=kval, OCP(tdata));

ENUM :=
proc(tdata, k)
option remember;
local n, edges, edg, admit, automs, autom, orbits, orbit, idx, cols;
    n := tdata[1]; edges := tdata[2]; automs := tdata[3];
    if k=1 then return `if`(nops(edges)=0, 1, 0); fi;
    orbits := table();
    for idx from k^n to 2*k^n-1 do
        cols := convert(idx, base, k)[1..n];
        # keep proper colorings only
        admit := true;
        for edg in edges do
            if cols[op(1, edg)] = cols[op(2, edg)] then
                admit := false; break;
            fi;
        od;
        if not admit then next fi;
        # record the canonical representative of this coloring's orbit
        orbit := [];
        for autom in automs do
            orbit := [op(orbit), [seq(cols[autom[q]], q=1..n)]];
        od;
        orbits[sort(orbit)[1]] := 1;
    od;
    numelems(orbits);
end;
https://proofwiki.org/wiki/Definition:Cauchy_Principal_Value
# Definition:Cauchy Principal Value

## Definition

The Cauchy principal value is an extension of the concept of an improper integral when the latter might not exist.

### Real Integral

Let $f: \R \to \R$ be a real function which is piecewise continuous everywhere. Then the Cauchy principal value of $\displaystyle \int f$ is defined as:

$\PV_{-\infty}^{+\infty} \map f t \rd t := \lim_{R \mathop \to +\infty} \int_{-R}^R \map f t \rd t$

where $\displaystyle \int_{-R}^R \map f t \rd t$ is a Riemann integral.

### Complex Integral

Let $f: \R \to \C$ be a bounded complex function. Then the Cauchy principal value of $\displaystyle \int f$ is defined as:

$\PV_{-\infty}^{+\infty} \map f t \rd t := \lim_{R \mathop \to +\infty} \int_{-R}^R \map f t \rd t$

where $\displaystyle \int_{-R}^R \map f t \rd t$ is a complex Riemann integral.

### Contour Integral

Let $C$ be a contour defined by a directed smooth curve. Let $C$ be parameterized by the smooth path $\phi: \closedint {-R} R \to \C$, where $R > 0$. Let $f: \Img C \to \C$ be a continuous complex function, where $\Img C$ denotes the image of $C$. Then the Cauchy principal value of $\displaystyle \int f$ is defined as:

$\PV_C \map f z \rd z = \PV_{\map \phi {-\infty} }^{\map \phi {+\infty} } \map f z \rd z := \lim_{R \mathop \to +\infty} \int_{-R}^R \map f {\map \phi t} \map {\phi'} t \rd t$

where $\displaystyle \int_{-R}^R \map f {\map \phi t} \map {\phi'} t \rd t$ is a complex Riemann integral defining a contour integral.

## Also denoted as

Variants of the notation $\PV$ for the Cauchy principal value can often be seen, such as:

• $\operatorname {P.V.} \displaystyle \int$
• $\operatorname {p.v.} \displaystyle \int$
• $PV \displaystyle \int$

and so on.

## Source of Name

This entry was named for Augustin Louis Cauchy.

## Technical Note

The $\LaTeX$ code for $\PV$ is \PV .
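A quick numerical illustration (ours, not part of the ProofWiki entry): for $f(t) = t/(1+t^2)$ the one-sided improper integrals diverge, yet every symmetric integral vanishes, so the Cauchy principal value is $0$.

```python
# PV of t/(1 + t^2) over R is 0, though the improper integral diverges.
from scipy.integrate import quad

f = lambda t: t / (1 + t**2)

for R in [10, 100, 1000]:
    sym, _ = quad(f, -R, R)       # symmetric integral: ~0 (odd integrand)
    one_sided, _ = quad(f, 0, R)  # grows like (1/2) * log(1 + R^2)
    print(f"R={R}: symmetric={sym:.1e}, one-sided={one_sided:.3f}")
```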
http://mathhelpforum.com/calculus/50235-relative-rates.html
Math Help - Relative rates

1. Relative rates

When the area in square units of an expanding circle is increasing twice as fast as its radius in linear units, the radius is

a) 1/4pi b) 1/4 c) 1/pi d) 1 e) pi

Anyone know how to explain this? I just learned related rates and I'm not that good at it. (Oh yeah, the title's supposed to be "Related Rates".)

2. Ok so let's start with our basic area formula for a circle.

$A= \pi r^2$

Now let's differentiate with respect to t, or time.

$\frac{dA}{dt} = 2 \pi r \frac{dr}{dt}$

Now the problem says something particular about the ratio of the changing area to the changing radius. Can you fill that in and solve for r at that moment?

3. I don't think I can.
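To finish the thread's line of reasoning (our addition, not part of the original thread): "increasing twice as fast" means dA/dt = 2 dr/dt, so 2πr (dr/dt) = 2 (dr/dt), giving r = 1/π, i.e. answer (c). A one-line SymPy check:

```python
# Solve dA/dt = 2*dr/dt with A = pi*r^2, i.e. 2*pi*r*drdt = 2*drdt.
import sympy as sp

r, drdt = sp.symbols("r drdt", positive=True)
print(sp.solve(sp.Eq(2 * sp.pi * r * drdt, 2 * drdt), r))  # [1/pi]
```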
https://eprints.soton.ac.uk/265025/
# Robust Stability of Iterative Learning Control Schemes

French, Mark (2008) Robust Stability of Iterative Learning Control Schemes. International Journal of Robust and Nonlinear Control, 18 (10), pp. 1018–1033.

## Description/Abstract

A notion of robust stability is developed for iterative learning control in the context of disturbance attenuation. The size of the unmodelled dynamics is captured via a gap distance, which in turn is related to the standard ${\cal H}^2$ gap metric, and the resulting robustness certificate is qualitatively equivalent to that obtained in classical robust ${\cal H}^\infty$ theory. A bound on the robust stability margin for a specific adaptive ILC design is established.

Item Type: Article
ISSNs: 1049-8923 (print)
Organisations: Southampton Wireless Group
ePrint ID: 265025
Date: 2008 (Published)
Deposited: 11 Jan 2008 15:35
https://www.groundai.com/project/quantum-hall-system-in-tao-thouless-limit/
# Quantum Hall system in Tao-Thouless limit

E.J. Bergholtz and A. Karlhede

Department of Physics, Stockholm University, AlbaNova University Center, SE-106 91 Stockholm, Sweden

September 4, 2019

###### Abstract

We consider spin-polarized electrons in a single Landau level on a torus. The quantum Hall problem is mapped onto a one-dimensional lattice model with lattice constant $2\pi/L_1$, where $L_1$ is a circumference of the torus (in units of the magnetic length). In the Tao-Thouless limit, $L_1 \to 0$, the interacting many-electron problem is exactly diagonalized at any rational filling factor $\nu = p/q$. For odd $q$, the ground state has the same qualitative properties as a bulk ($L_1 \to \infty$) quantum Hall hierarchy state and the lowest energy quasiparticle excitations have the same fractional charges as in the bulk. These states are the limits of the Laughlin/Jain wave functions for filling fractions where these exist. We argue that the exact solutions generically, for odd $q$, are continuously connected to the two-dimensional bulk quantum Hall hierarchy states, i.e., that there is no phase transition as $L_1 \to \infty$ for filling factors where such states can be observed. For even denominator fractions, a phase transition occurs as $L_1$ increases. For $\nu = 1/2$ this leads to the system being mapped onto a Luttinger liquid of neutral particles at small but finite $L_1$; this then develops continuously into the composite fermion wave function that is believed to describe the bulk system. The analysis generalizes to non-abelian quantum Hall states.

###### pacs: 73.43.Cd, 71.10.Pm, 75.10.Pq

## I Introduction

The two-dimensional electron gas in a perpendicular magnetic field, the quantum Hall (QH) system, is remarkably rich. The integer QH effect shows a conductance precisely quantized in integer units of $e^2/h$ vonk, and in the fractional QH regime the conductance is quantized in fractions of $e^2/h$ and there are fractionally charged quasiparticles that obey (abelian) fractional statistics tsui; stat1; stat2. Whereas the integer effect can be understood in terms of non-interacting electrons it1; it2, the fractional effect is caused by the interaction between the electrons Laughlin83. Other states observed in this strongly correlated electron system are metallic states that resemble a free two-dimensional Fermi gas jiang; willett90; wang and inhomogeneous striped states in higher Landau levels striped1; striped2. A further example of an exotic quantum state that may form is a non-abelian state where the quasiparticles obey non-abelian fractional statistics mr; nayak; readrezayi.

The fractional QH effect is understood as an incompressible quantum liquid with fractionally charged quasiparticles; this is based on Laughlin's wave functions for filling fractions $\nu = 1/q$, $q$ odd Laughlin83. For other fractions, a hierarchy construction where quasiparticles condense to form new quantum liquids just as electrons form the Laughlin states was proposed by Haldane hierarchyHaldane, by Laughlin hierarchyLaughlin and by Halperin hierarchyHalperin. An alternative view, where electrons supposedly capture magnetic flux to form composite fermions that see a reduced magnetic field, was developed by Jain jain89; jainrev1; jainrev2; jainbook. Successful mean field theories that support this idea of flux attachment have been developed zhang; read89; lopez. The composite fermion approach has received strong support from experiments performed in the half-filled Landau level.
Near $\nu = 1/2$, ballistic transport is consistent with particles moving in a reduced magnetic field in accordance with the composite fermion prediction ballistic1 ; ballistic2 ; ballistic3 . (However, an alternative consistent interpretation is that it is a quasiparticle, with a reduced charge compared to the electron, that moves in the original full magnetic field; what is measured is the product of the charge and the magnetic field.) At $\nu = 1/2$, the external magnetic field is completely absorbed by the electrons and the composite fermions see no magnetic field—they form a two-dimensional Fermi gas. The mean field theory of this state hlr ; kalmeyer is in excellent agreement with surface acoustic wave experiments willett90 ; Willettreview .

In spite of these impressive results it is our opinion that there are basic questions concerning the QH system that remain to be answered. (A critical discussion of the status of the theory of the QH effect is presented by Dyakonov Dyakonov .) Most importantly, a microscopic understanding is lacking. It is true that good many-body wave functions exist, but there is no microscopic understanding or derivation of them. For example, according to the composite fermion picture the fractional QH ground state is formed when composite fermions fill an integer number of effective Landau levels in the reduced magnetic field. No derivation of this scenario from the many body wave function exists. Mean field theory, which is successful, does give support to the idea of electrons binding flux quanta; however, it is not understood why mean field theory works as well as it does. A further question is the relation between the original hierarchy description and composite fermions; are they alternative descriptions of the same thing as argued by Read read90 and by Blok and Wen blokwen , or are they fundamentally different as argued by Jain jain89 ; jainrev1 ; jainrev2 ; jainbook ? Our aim is to contribute to a solution to these problems.

In this article we consider spin-polarized interacting electrons within a single Landau level. It has recently been realized that there is a limit in which this problem can be exactly solved for any rational filling factor $\nu = p/q$—and that the solution is physically relevant bk1 ; bk2 ; we06 ; hierarchy ; conformal . We here expand on, and provide details of, our work presented in Refs. bk1, ; bk2, . We consider the interacting electron gas on a torus where it becomes equivalent to a one-dimensional lattice model with a complicated long-range interaction. The solvable limit is the thin torus: $L_1 \to 0$, where $L_1$ is one of the circumferences of the torus (the other circumference being infinite). In this limit the interaction in the one-dimensional lattice problem becomes purely electrostatic and the ground state is a ”crystal” of electrons occupying fixed positions on the lattice as far apart from each other as possible. For $\nu = 1/3$ every third site is occupied; interestingly enough this is the state introduced by Tao and Thouless in 1983 to explain the fractional QH effect Tao83 and we call these crystal states Tao-Thouless (TT) states. The fractionally charged quasiparticles are domain walls separating the degenerate ground states that exist on the torus.
It should be noted that the mapping of the lowest Landau level, or of any single Landau level for that matter, onto a one-dimensional lattice problem is exact and that varying $L_1$ may alternatively be thought of as varying a parameter in the hamiltonian that controls the range of the interaction while keeping the two-dimensional space fixed (and possibly infinite). To stress this we refer to the thin limit as the Tao-Thouless (TT) limit. If the ground state in the TT-limit remains the ground state as $L_1 \to \infty$ we may thus conclude that the experimentally accessible ground state is adiabatically connected to the ground state in the TT-limit.

The simple limit $L_1 \to 0$ may at first seem to be of little physical interest—it is really an extreme case: the interaction is both very short range and anisotropic and is furthermore purely electrostatic. The surprising fact is that the ground state in this limit has all the qualitative properties of a fractional quantum Hall state, such as a gap, the correct quantum numbers and quasiparticles with the correct fractional charge. We argue that the simple TT-ground states obtained in the TT-limit develop continuously, without a phase transition, into the fractional QH hierarchy states, as $L_1 \to \infty$, for filling factors where such states are observed. Thus, we argue that the TT-state in general describes the fractional QH phase observed as $L_1 \to \infty$, in the sense that these states are adiabatically connected. We show that the TT-states are the limits of the Laughlin and Jain wave functions for filling fractions where these exist. In the TT-limit the original hierarchy construction is manifest: the TT-ground states are condensates of quasiparticles. The hierarchy structure of states in the TT-limit has recently emerged within a conformal field theory (CFT) construction of bulk hierarchy wave functions for all fractions that are obtained by successive condensation of quasielectrons (as opposed to quasiholes) hierarchy ; conformal . These wave functions reduce to the correct TT-states in the TT-limit and are obtained by a natural generalization of the conformal construction of the composite fermion wave functions hans . This supports the adiabatic continuity from the TT-limit to the bulk for general hierarchy states.

The TT-states are the ground states also for the even denominator fractions in the TT-limit; however, for these fractions we claim that there is always a phase transition when $L_1$ increases. This is supported by numerical studies and by a detailed analysis of $\nu = 1/2$, which we believe is a representative case for the even denominator fractions. At $\nu = 1/2$, there is a phase transition from the gapped TT-state to a one-dimensional gapless state at $L_1 \approx 5.3$. (Lengths are measured in units of the magnetic length, $\ell_B$.) Also for this gapless phase there is an exact solution: For a hamiltonian that is a good approximation at these small $L_1$, the low energy sector consists of non-interacting neutral fermions (dipoles); the ground state is a one-dimensional Fermi sea and there are gapless neutral excitations. This provides an explicit example of interacting electrons in a magnetic field being equivalent to free particles that do not couple to the magnetic field. The ground state in the exact solution is a version of the composite fermion state jainbook given by Rezayi and Read rr . There is strong numerical evidence bk2 that this gapless one-dimensional state develops continuously, without a phase transition, into the two-dimensional bulk version of the Rezayi-Read state that is believed to describe the observed metallic state at $\nu = 1/2$.
The Moore-Read pfaffian state mr , believed to describe the half-filled second Landau level, $\nu = 5/2$, also exists in the TT-limit. A simple construction gives the quarter-charged quasiholes and quasielectrons as domain walls between the six-fold degenerate ground states, and the non-trivial degeneracies for these excitations are obtained haldaneAPS ; we06 . Similar results for other non-abelian states exist seidel06 ; read06 ; weunpublished ; see also Ref. haldanejack, .

Over the years there have been many interesting attempts to improve the understanding of the QH system and it is impossible to here mention them all. In our work what emerges is a one-dimensional theory of the quantum Hall system which depends on a dimensionless parameter $L_1$ (the circumference in units of the magnetic length), where the Tao-Thouless states Tao83 are the exact solutions as $L_1 \to 0$ and the fractionally charged quasiparticles are domain walls between the degenerate ground states. The TT-states are adiabatically connected to bulk QH states as $L_1 \to \infty$—there is no phase transition if a QH state is observed. Searching the literature one finds hints and suggestions for such a scenario. Anderson noted in 1983 that Laughlin’s wave function has a broken discrete symmetry and that the quasiparticles are domain walls between the degenerate ground states anderson . Furthermore he noted that the TT-state is non-orthogonal to the Laughlin state and suggested that it can be thought of as a parent state that develops into the Laughlin wave function, without a phase transition, as the electron-electron interaction is turned on. In 1984, Su concluded, based on exact diagonalization of small systems on the torus, that the QH state at $\nu = p/q$ is $q$-fold degenerate and that the lowest energy excitations are quasiparticles with charge $\pm e/q$ that are domain walls between the degenerate ground states su1984 ; su2 . In 1994, Rezayi and Haldane studied the Laughlin wave function on a cylinder as a function of its radius and noted that it approaches the Tao-Thouless state on the thin cylinder. Implicit in their work is the fact that the Laughlin wave function is the exact and unique ground state to a short range pseudo-potential interaction on a cylinder for any circumference $L_1$ Haldane94 . In retrospect, this makes a very strong case for an adiabatic evolution from the TT-state to the Laughlin wave function. A one-dimensional approach to QH states was also considered by Chui in 1985 chui1985 ; chui86 . More recently, this has been explored in connection with Bose-Hubbard models by Heiselberg heiselberg , and it should also be mentioned that Dyakonov presents a one-dimensional toy QH-model Dyakonov .

At $\nu = 1/2$, we find that the low energy sector consists of weakly interacting dipoles. This relates to earlier descriptions in terms of dipoles, in particular to the field theory of Murthy and Shankar ms and the work by Read readearly ; read.5 , Pasquier and Haldane pasq , Lee dhlee and Stern et al stern ; for reviews see Ref. rev1, . It is of course also reminiscent of composite fermions in general jainbook in that the particles do not couple to the magnetic field and are weakly interacting. We would also like to draw the attention to the construction of composite fermion wave functions directly in the lowest Landau level by Ginocchio and Haxton haxton , and the series of work by Wojs, Quinn and collaborators, see Ref. wojs, and references therein. Explicit wave functions have been obtained within the original hierarchy construction by Greiter greiter94 . The content of the article is as follows.
The one-dimensional lattice model for interacting electrons in a single Landau level is introduced in Sec. II. In Sec. III we solve this problem exactly in the TT-limit $L_1 \to 0$, ie we diagonalize the interacting electron hamiltonian for any rational filling factor in this limit. The ground states and the fractionally charged quasiparticles are identified and it is found that the former are condensates of the latter, thus proving the original hierarchy construction in this limit hierarchy . The quasiparticles are domain walls between the degenerate ground states and their charge is determined by the Su-Schrieffer counting argument. The energy of a quasielectron-quasihole pair is determined, and it is found that the gap to creating an infinitely separated such pair at $\nu = p/q$ decreases monotonously with increasing $q$ but is independent of $p$; this is in surprisingly good agreement with experiments, which are performed in the two-dimensional bulk system, $L_1 \to \infty$. The Laughlin and Jain fractions, as well as those observed by Pan et al pan , are considered explicitly as examples, and finally the relation to composite fermions and emergent Landau levels is commented on. The transition to the two-dimensional bulk system as $L_1 \to \infty$ is considered in Sec. IV, first for odd denominator fractions leading to the QH hierarchy states, then for the half-filled Landau level; comments on other even denominator fractions and on non-abelian states are included. The conclusion is that the rich structure and different phases of matter present in the QH system exist also in the TT-limit, where it can be studied in detail starting from a microscopic hamiltonian. Technical details are buried in a series of appendices. The mathematics that we need for a single Landau level on a torus Haldane85PRL , including the construction of the lattice hamiltonian, is given in Appendix A. In Appendix B we prove that the relaxation procedure given in Sec. III actually gives the ground state. In Appendix C, we show that the quasiparticle charge at $\nu = p/q$ is $\pm e/q$; the result is obtained in the limit $L_1 \to 0$ using the Su-Schrieffer counting argument. The energy of a quasielectron-quasihole pair is obtained in Appendix D. In Appendix E we show that the $L_1 \to 0$ limit of Laughlin’s and Jain’s wave functions are the TT-ground states obtained in Sec. III. Appendix F shows that Laughlin’s wave function at $\nu = 1/q$ is the exact ground state, and that there is a gap to excitations, for all $L_1$ for a short-range interaction. Appendix G contains details of the exact solution at $\nu = 1/2$.

## Ii Model

We consider a single Landau level of spin-polarized electrons on a torus with lengths $L_1$ and $L_2$ in the $x$- and $y$-directions respectively. We use units such that the magnetic length $\ell_B = 1$ and choose one-particle states $\psi_n$, $n = 0, 1, \ldots, N_s - 1$, that have $y$-momentum $k_n = 2\pi n/L_1$ and are gaussians centered at $x_n = 2\pi n/L_1$, see Appendix A. This provides an exact mapping of the Landau level onto a one-dimensional lattice model with lattice constant $2\pi/L_1$, see Fig. 1. Each site can be either empty, 0, or occupied by an electron, 1. Thus a basis of many particle states is provided by $|n_0 n_1 \ldots n_{N_s-1}\rangle$, where $n_i = 0, 1$; alternatively, states can be characterized by the positions (or, equivalently, the $y$-momenta) of the particles. A general two-body interaction, $V(\mathbf{r})$, that depends on the distance, $r$, between two electrons only, such as a Coulomb or a short-range delta function interaction, leads to the one-dimensional hamiltonian

$$H = \sum_{n=0}^{N_s-1} \sum_{k>|m|} V_{km}\; c^{\dagger}_{n+m}\, c^{\dagger}_{n+k}\, c_{n+m+k}\, c_{n}\;. \qquad (1)$$

Here $c^{\dagger}_n$ creates an electron in state $\psi_n$, the site index is periodic, $c_{n+N_s} = c_n$, and $V_{km} = V_{k,-m}$. $H$ consists of the two-body terms that preserve the $y$-momentum, ie, the position of the center of mass of the electron pair, see Fig. 2.
$V_{km}$, $m \neq 0$, is the amplitude for two electrons separated $k+m$ lattice constants to hop symmetrically to a separation of $k-m$ lattice constants. $V_{k0}$ is the electrostatic repulsion (including the exchange interaction) between two electrons separated $k$ lattice constants. (Note that $V_{N_s/2,0}$ is multiplied by $1/2$ because of the delta-function in (2) below—this compensates for the fact that for even $N_s$ every process with $k = N_s/2$ appears twice in (1).) For a given real-space interaction $V(\mathbf{r})$, when $L_1$ is small the lattice constant $2\pi/L_1$ is large and, hence, the dominant $V_{km}$ are those with small $k$ and $m$. Furthermore, the wave functions are gaussians in the $x$-direction, with width of the order of the magnetic length, hence their overlap vanishes rapidly as $L_1 \to 0$, and the electrostatic terms $V_{k0}$ dominate in this limit. The physics is thus very simple in the TT-limit $L_1 \to 0$—it is determined by electrostatic repulsion only. In the two-dimensional limit, $L_1 \to \infty$, the range of the one-dimensional interaction measured in number of lattice constants goes to infinity for any real space two-dimensional interaction; this is true also for a local interaction such as a delta function.

The symmetry analysis of many-electron states on the torus was given by Haldane Haldane85PRL ; a simple version adapted to our needs can be found in Appendix A. We give here the results. There are two translation operators, $T_1$, $T_2$, that commute with the hamiltonian $H$; they obey $T_1 T_2 = e^{i 2\pi N_e/N_s}\, T_2 T_1$, see (38). These operators have eigenvalues $e^{i 2\pi K_\alpha/N_s}$. $T_1$ corresponds to $y$-translations and $K_1$ is the total $y$-momentum (in units of $2\pi/L_1$). $T_2$ translates the system one lattice constant in the $x$-direction, ie along the one-dimensional lattice, and increases $K_1$ by $N_e$. At filling factor $\nu = p/q$ (where $p$ and $q$ are relatively prime) $T_2^q$ commutes with $T_1$: $\{H, T_1, T_2^q\}$ is a maximal set of commuting operators. $T_2, T_2^2, \ldots, T_2^{q-1}$ generate degenerate orthogonal states, which have different $K_1$, when acting on any state—this is the $q$-fold center of mass degeneracy. Hence, each energy eigenstate is (at least) $q$-fold degenerate and we choose to characterize it by the smallest $K_1$. Thus, the energy eigenstates are characterized by a two-dimensional vector $\mathbf{K} = (K_1, K_2)$, where $K_2$ is the $T_2^q$-eigenvalue.

## Iii Exact solution

In this section we solve the problem of interacting spin-polarized electrons exactly at any rational filling factor $\nu = p/q$ in the Tao-Thouless limit; we diagonalize the hamiltonian (1) and construct explicitly the ground state as well as the low energy charged excitations which turn out to have charge $\pm e/q$. As noted above, the hopping elements, $V_{km}$ with $m \neq 0$, vanish rapidly as $L_1$ decreases, whereas the electrostatic elements, $V_{k0}$, decrease much more slowly. Hence in the limit $L_1 \to 0$ effectively only the electrostatic interaction survives and the hamiltonian (1) becomes

$$H_0 = \sum_{i=0}^{N_s-1}\; \sum_{1 \le k \le N_s/2} \frac{V_{k0}}{1+\delta_{k,N_s/2}}\; \hat{n}_i \hat{n}_{i+k}\;, \qquad (2)$$

where $\hat{n}_i = c^{\dagger}_i c_i$ and periodic boundary conditions have been imposed, $\hat{n}_{i+N_s} = \hat{n}_i$. $L_1 \to 0$ in (2) defines the Tao-Thouless limit. At filling fraction $\nu = p/q$, the energy eigenstates are simply the states

$$\{|n_0 n_1 \ldots n_{N_s-1}\rangle\}\;, \quad n_i = 0, 1\;, \qquad (3)$$

where electrons occupy fixed positions on lattice sites,

$$\sum_{i=0}^{N_s-1} n_i = N_e = N_s\, p/q\;. \qquad (4)$$

### iii.1 Ground states, quasiparticles and the hierarchy

Here we determine the ground state at $\nu = p/q$ in the TT-limit, ie for the hamiltonian $H_0$ in (2). This is the classical electrostatics problem of finding the positions of $N_e$ electrons, on a circle with $N_s$ sites, that minimize the energy. Hubbard gave an algorithm for constructing this ground state hubbard ; we will present two simple alternative constructions.
The first gives the ground state for given $\nu = p/q$, and provides an intuitive understanding of why the energy is minimized; the second starts from $\nu = 1/q$ and constructs the ground states at all other filling factors iteratively from this state as repeated condensations of quasiparticles—in addition to the ground states this gives the fractionally charged quasiparticles and makes the Haldane-Halperin hierarchy construction manifest in the TT-limit.

The states obtained here are the ground states for any interaction that obeys the concavity condition

$$V_{k+1,0} + V_{k-1,0} > 2V_{k0}\;, \qquad (5)$$

which implies, by iteration,

$$V_{k+n,0} + V_{k-1,0} > V_{k+n-1,0} + V_{k0}\;, \qquad (6)$$

for all $n \geq 1$. In the limit $L_1 \to 0$, $V_{k0}$ is simply the electrostatic interaction energy between two rings of circumference $L_1$ separated a distance $2\pi k/L_1$; hence the concavity condition is fulfilled by a generic electron-electron interaction when $L_1 \to 0$. It implies that the interaction energy of one electron with two other electrons that have fixed positions is minimized if the first electron is as close to the midpoint between the fixed electrons as possible, ie if the distances to the two fixed electrons differ by at most one lattice constant, see Fig. 3. (Hubbard assumes, in addition, that $V_{k0}$ decreases with $k$; this need not be fulfilled when periodic boundary conditions are assumed as is the case here.)

The crucial observation in obtaining the ground state is to realize that it is possible to minimize the energies of the $n$:th nearest neighbors separately for each $n$ for an interaction that obeys (6) hubbard . In our first construction, the ground state is obtained by placing the electrons equidistantly on a circle and then letting them relax to the closest lattice sites. The ground state is periodic with a unit cell of length $q$ containing $p$ electrons and this cell is obtained as follows. Consider a circle with $p$ equidistant electrons and a lattice with $q$ (equidistant) lattice sites as in Fig. 4. Move each electron to its closest lattice site; an electron that is equally far from two sites is moved to one of these sites. (There is at most one such electron, since otherwise the period would be less than $q$.) The obtained configuration is the unit cell of the ground state; for the proof we refer to Appendix B. An algebraic expression for the unit cell is obtained by noting that electron $\alpha$ is at site

$$i(\alpha) = I[\alpha/\nu]\;, \qquad (7)$$

where $\alpha = 1, 2, \ldots, p$ and $I[x]$ denotes the integer closest to $x$. For example, for $\nu = 2/5$, (7) gives $i(1) = 2$ (or 3) and $i(2) = 5$, and hence the unit cell is a translate of $00101$, in agreement with Fig. 4. We use a chemical notation where the subscript denotes the number of times the quantity is repeated, eg $0_2 1 = 001$. (Hubbard uses a more compact notation where the distance between consecutive ones is given.) That the unit cell at $\nu = p/q$ has length $q$ implies that the ground state is $q$-fold degenerate. This is the center of mass degeneracy of any state at $\nu = p/q$ discussed in Sec. II Haldane85PRL . For the quantum Hall states this is the topological degeneracy of the ground state identified by Wen and Niu wen ; wenreview . Incidentally, the initial configuration with equidistant electrons is clearly the ground state in the continuum problem when there is no lattice—when there is a lattice the ground state is as similar to the continuum one as possible. Because of the periodicity, the picture of course generalizes immediately to the full system consisting of $N_e/p$ unit cells and periodic boundary conditions: the ground state for $N_e$ electrons on $N_s$ sites is obtained by starting with $N_e$ equidistant electrons and moving each electron to its closest site. At $\nu = 1/q$ the procedure gives the unit cell $0_{q-1}1$.
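The relaxation construction is straightforward to put in code. The sketch below is our own illustration, not part of the original paper: the function names are ours, and the pure $1/k$ form of $V_{k0}$ is only an illustrative stand-in for the true matrix elements. It builds the unit cell from (7) and brute-force checks, for a small system, that the relaxed configuration indeed minimizes the electrostatic energy (2):

```python
from fractions import Fraction
from itertools import combinations

def tt_unit_cell(p, q):
    """Unit cell of the TT ground state at nu = p/q, via Eq. (7):
    p equidistant electrons on q sites, each moved to the closest site."""
    cell = [0] * q
    for alpha in range(1, p + 1):
        site = round(Fraction(alpha * q, p))  # I[alpha/nu]; ties may go either way
        cell[site % q] = 1
    return ''.join(map(str, cell))

def electrostatic_energy(occupied, Ns, V=lambda k: 1.0 / k):
    """Eq. (2) rewritten as a pairwise sum: each pair interacts through the
    shorter arc of the ring. V(k) = 1/k is an assumed toy potential."""
    return sum(V(min(b - a, Ns - (b - a)))
               for a, b in combinations(sorted(occupied), 2))

print(tt_unit_cell(2, 5))    # a translate of 00101
print(tt_unit_cell(4, 11))   # a translate of {001}_3 01, the 4/11 cell

# Brute force check at nu = 1/3 on 9 sites: the minimum energy
# configuration is the equidistant (TT) one.
best = min(combinations(range(9), 3),
           key=lambda occ: electrostatic_energy(occ, 9))
print(best)                  # (0, 3, 6)
```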
Thus, at $\nu = 1/q$ the ground state is obtained by placing one electron on every $q$:th site; this obviously minimizes the electrostatic repulsion. For odd $q$, these are the Laughlin fractions and the $\nu = 1/3$ state with unit cell $001$ was in fact proposed by Tao and Thouless in 1983 as an explanation of the fractional quantum Hall effect Tao83 . Although this state has a small overlap with the exact ground state (for large $L_1$), and with the Laughlin state, it does in fact play an important role in the quantum Hall effect and we will call the crystal ground states at general $\nu = p/q$ Tao-Thouless (TT) states. We claim that the TT-states are QH-states in the sense that they are adiabatically connected to the bulk QH states bk2 ; hierarchy . Note that the TT-states have a gap to excitations—the lattice sites are fixed in space and there are no phonons in these crystals; the TT-states should not be confused with Wigner crystals, which have gapless excitations due to the broken translational invariance. This is a further reason to call the states considered here TT-states rather than crystal states.

We will now present an alternative, iterative, construction of the TT-states that brings out the connection to the hierarchy of fractional quantum Hall states hierarchyHaldane ; hierarchyHalperin and determines the fractionally charged quasiparticles. Let $C^{(n)}$ be the unit cell for a TT-state at level $n$; this level is defined by the iteration process and is identical to the one in the hierarchy construction. At the first levels we have

$$C^{(0)} = 0\;, \quad \nu_0 = 0$$
$$C^{(1)} = 0_{t_1-1}1\;, \quad \nu_1 = \frac{1}{t_1}\;, \qquad (8)$$

where $t_1 \geq 2$ is an integer. The unit cells for the TT-states at level $n$ are obtained iteratively as

$$C^{(n)} = C^{(n-1)}_{t_n}\, C^{(n-2)}\;, \quad \nu_n = \frac{t_n p_{n-1} + p_{n-2}}{t_n q_{n-1} + q_{n-2}}$$
$$C^{(n)} = C^{(n-1)}_{t_n-1}\, \overline{C^{(n-2)}}\;, \quad \nu_n = \frac{t_n p_{n-1} - p_{n-2}}{t_n q_{n-1} - q_{n-2}}\;, \qquad (9)$$

where $C_{t_n}$ indicates that $C$ is repeated $t_n$ times; $\overline{C^{(n-2)}}$ is the complement of $C^{(n-2)}$ in the unit cell $C^{(n-1)}$, ie $C^{(n-1)} = C^{(n-2)}\, \overline{C^{(n-2)}}$, and $\nu_n = p_n/q_n$. At the second level we obtain:

$$C^{(2)} = \{0_{t_1-1}1\}_{t_2}\, 0\;, \quad \nu_2 = \cfrac{1}{t_1 + \cfrac{1}{t_2}}$$
$$C^{(2)} = \{0_{t_1-1}1\}_{t_2-1}\, 0_{t_1-2}1\;, \quad \nu_2 = \cfrac{1}{t_1 - \cfrac{1}{t_2}}\;. \qquad (10)$$

We can now connect to the original hierarchy construction. As will be further discussed below, $0$ is the quasihole, and $0_{t_1-2}1$ is the quasielectron, with charges $\pm e/t_1$, in the ground state $0_{t_1-1}1$. Thus the unit cells at level two consist of $t_2$ (or $t_2-1$) copies of the unit cell at level one, followed by a quasihole (or quasielectron) in the level one ground state—the new ground state at level two is a condensate of quasiparticles in the level one ground state. At the next level, level three, we find the unit cells

$$C^{(3)} = \{\{0_{t_1-1}1\}_{t_2}0\}_{t_3}\, 0_{t_1-1}1$$
$$C^{(3)} = \{\{0_{t_1-1}1\}_{t_2}0\}_{t_3-1}\, \{0_{t_1-1}1\}_{t_2-1}0$$
$$C^{(3)} = \{\{0_{t_1-1}1\}_{t_2-1}0_{t_1-2}1\}_{t_3}\, 0_{t_1-1}1 \qquad (11)$$
$$C^{(3)} = \{\{0_{t_1-1}1\}_{t_2-1}0_{t_1-2}1\}_{t_3-1}\, \{0_{t_1-1}1\}_{t_2-2}0_{t_1-2}1$$

and the corresponding filling factors

$$\nu_3 = \cfrac{1}{t_1 + \cfrac{1}{t_2 \pm \cfrac{1}{t_3}}}\;, \quad \nu_3 = \cfrac{1}{t_1 - \cfrac{1}{t_2 \pm \cfrac{1}{t_3}}}\;. \qquad (12)$$

Generalizing this, we find at level $n$ the filling factors

$$\nu_n = \cfrac{1}{t_1 + \cfrac{\alpha_2}{t_2 + \cfrac{\alpha_3}{\ddots\; t_{n-1} + \cfrac{\alpha_n}{t_n}}}}\;, \qquad (13)$$

where $\alpha_i = \pm 1$ and the corresponding unit cells are obtained using (9). In this construction, $\alpha_n = +1$ ($\alpha_n = -1$) if $C^{(n)}$ is constructed with the first (second) line in (9). A state at level $n$ is uniquely characterized by the parameters $t_1, \ldots, t_n$; $\alpha_2, \ldots, \alpha_n$. Equation (13) is the continued fraction form of the filling factors as given by the hierarchy scheme, except that now also even denominators are obtained. Restricting to odd $t_1$ and even $t_i$ for $i \geq 2$, (13) is identical to Haldane’s formula for the filling factors hierarchyHaldane , which is known to give each odd denominator fraction once.
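The iteration (9) is just string concatenation and is easy to check mechanically. The sketch below is our own illustration; it assumes $t_i \geq 2$, so that $C^{(n-1)}$ always begins with $C^{(n-2)}$ and the complement can be read off as the remaining suffix. It builds $C^{(n)}$ from the parameters $t_i$, $\alpha_i$ and confirms the continued fraction (13) on a few examples:

```python
from fractions import Fraction

def hierarchy_cell(ts, alphas):
    """Unit cell from Eq. (9); ts = [t1, t2, ...] with t_i >= 2,
    alphas = [a2, a3, ...] with a_i = +1 or -1."""
    prev2, prev = '0', '0' * (ts[0] - 1) + '1'   # C(0) and C(1)
    for t, a in zip(ts[1:], alphas):
        # complement of C(n-2) in C(n-1): C(n-1) = C(n-2) + complement
        assert prev.startswith(prev2)
        comp = prev[len(prev2):]
        cell = prev * t + prev2 if a == +1 else prev * (t - 1) + comp
        prev2, prev = prev, cell
    return prev

def filling(cell):
    return Fraction(cell.count('1'), len(cell))

for ts, alphas in [([3], []), ([3, 2], [-1]), ([3, 2, 2], [-1, -1]), ([3, 3], [+1])]:
    cell = hierarchy_cell(ts, alphas)
    print(ts, alphas, cell, filling(cell))
# [3] []          -> 001, 1/3
# [3,2] [-1]      -> 00101, 2/5
# [3,2,2] [-1,-1] -> 0010101, 3/7
# [3,3] [+1]      -> 0010010010, 3/10 (a quasihole condensate in 1/3)
```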
The construction (9) gives each rational filling factor, and by inspecting the unit cells one finds that they are such that the distances between $n$:th nearest neighbors differ by one lattice constant at most; hence they minimize the energy and are identical to the ground states obtained by the relaxation procedure above. To be precise, we have not proven that the unit cells obtained by the two methods are always identical, but we have checked this in many examples and are convinced that this is the case.

The interpretation of (10) as a condensate of quasiparticles generalizes to arbitrary level in (9):

$$C^{(n-2)} \;\; \mathrm{and} \;\; \overline{C^{(n-2)}} \qquad (14)$$

are the quasihole and the quasielectron (which is which depends on the state) in the ground state with unit cell $C^{(n-1)}$, and the ground state at level $n$, $C^{(n)}$, is a condensate of these quasiparticles in accordance with the hierarchy construction. That the proposed quasiparticles, $C^{(n-2)}$ and $\overline{C^{(n-2)}}$, have the expected charges $\pm e/q_{n-1}$ at $\nu_{n-1} = p_{n-1}/q_{n-1}$ follows from the Su-Schrieffer counting argument Schrieffer , see Appendix C. Furthermore, they are domain walls between the degenerate TT ground states. This was noted by Anderson anderson and stressed by Su su1984 ; su2 based on exact diagonalization studies on the torus. The quasiparticles discussed here are the ones with the elementary charge and the QH states are the simple abelian ones; as pointed out by Wen wenreview other quasiparticles may also in principle condense to form more complicated ground states. The iteration formula (9) gives the one quasiparticle excitations in the state with unit cell $C^{(n-1)}$ and shows that these are the lowest energy excitations at the corresponding filling factors; for example, inserting $\overline{C^{(n-2)}}$ in the ground state with unit cell $C^{(n-1)}$ gives a single quasiparticle and this is the lowest energy state at the corresponding filling factor.

The energy of a quasielectron-quasihole pair at $\nu = p/q$ can be calculated in the limit $L_1 \to 0$. According to the discussion above, the quasiparticles in a TT-state with unit cell $C^{(n)} = C^{(n-1)}\overline{C^{(n-1)}}$ are $C^{(n-1)}$ and $\overline{C^{(n-1)}}$. Thus, a minimally separated particle-hole pair is obtained by the replacement

$$C^{(n)} = C^{(n-1)}\, \overline{C^{(n-1)}} \;\to\; \overline{C^{(n-1)}}\, C^{(n-1)}\;, \qquad (15)$$

in the ground state. Note that the replacement in (15) amounts to a translation of the unit cell with periodic boundary conditions on the cell itself; thus it creates two domain walls (with the expected charge). It can be shown, see Appendix D, that $\overline{C^{(n-1)}}C^{(n-1)}$ differs from $C^{(n)}$ only in that one electron has been moved one lattice constant. A separated particle-hole pair is obtained by translating $s$ consecutive cells as in (15), ie with periodic boundary conditions on each cell separately, or, equivalently, on all of them together. This is equivalent to inserting a string of unit cells between the particle and hole,

$$C^{(n)}_s \;\to\; \overline{C^{(n-1)}}\, C^{(n)}_{s-1}\, C^{(n-1)}\;, \qquad (16)$$

and moves $s$ electrons (one in each unit cell) one lattice constant in the same direction. The replacement (15) or (16) implies an ordering of the particle-hole pair. The opposite ordering is obtained by instead making the reverse replacement

$$C^{(n)}_s \;\to\; C^{(n-1)}\, C^{(n)}_{s-1}\, \overline{C^{(n-1)}}\;. \qquad (17)$$

Again the replacement (16) or (17) amounts to a translation of $C^{(n)}_s$ assuming periodic boundary conditions. It is shown in Appendix D that the energy of the separated particle-hole pair in (16,17) is

$$E_{ph}(s) = \sum_{k=1}^{s-1} k\, \Delta_{kq} + s \sum_{s \le k \le N_s/2q} \frac{\Delta_{kq}}{1+\delta_{k,N_s/2q}}\;, \qquad (18)$$

where

$$\Delta_{kq} \equiv V_{kq-1,0} - 2V_{kq,0} + V_{kq+1,0} \qquad (19)$$

is the change in energy for an electron with its two neighbors $k$ unit cells away when the initial electron is moved one lattice constant.
This energy, which is the second (lattice) derivative of $V_{k0}$, is positive due to the concavity condition (5), which thus ensures the stability of the TT-ground state to particle-hole formation. Note that $\Delta_{kq}$ is independent of the numerator $p$ of the filling factor. It can be shown that the nearest neighbor pair excitation $s = 1$, which has energy

$$E_{ph}(1) = \sum_{1 \le k \le N_s/2q} \frac{\Delta_{kq}}{1+\delta_{k,N_s/2q}}\;, \qquad (20)$$

according to (18), is the lowest energy excitation at fixed filling fraction, see Appendix D. An important quantity is the energy of an infinitely separated particle-hole pair; this is what is measured in activated transport and is a measure of the stability of a quantum Hall state. When the separation of the particles goes to infinity, $s \to \infty$, we find

$$E_{gap} \equiv E_{ph}(\infty) = \sum_{k=1}^{\infty} k\, \Delta_{kq}\;. \qquad (21)$$

Note that the gap depends only on the denominator $q$, ie on the fractional charge $e/q$, and not on the numerator $p$ in $\nu = p/q$. This is natural in the sense that the denominator determines the charge of the quasiparticles, but the result is non-trivial since the properties of the ground states depend on both $p$ and $q$. Furthermore, $E_{gap}$ is a monotonic function of $q$ that approaches zero from above as the denominator increases. (This statement is true for most imaginable interactions. A sufficient (but not necessary) condition is that $V_{k0}$, or its derivative, is monotonic in $k$.) In this context it should be mentioned that Halperin, within the original hierarchy construction of fractional QH states, predicted a gap that is predominantly determined by $q$ and decreases monotonously with increasing $q$ hierarchyHalperin .

Thus we find that the gap to creating a separated quasiparticle pair decreases monotonously with increasing $q$. This motivates Fig. 5 which shows, for each odd denominator $q$, the filling factors $\nu = p/q$ hierarchy . Each point corresponds to a TT-state that we claim is adiabatically connected to a bulk QH-state. The higher up a point is, the larger is the gap in the corresponding TT/QH-state and the more stable is the state. The points marked by plusses are filling fractions where a dip in the longitudinal resistance is reported in the experiment by Pan et al pan . At the crosses, we infer a very small dip in the same data. This experiment, which covers the range marked in the figure, is performed on the highest available mobility samples and exhibits the largest number of QH states. We note that to a surprisingly good approximation a dip in the longitudinal resistance is observed at a filling factor $\nu = p/q$ if, and only if, the denominator $q$ is below a sample-dependent maximum. This is in agreement with the gap, $E_{gap}$, being independent of the numerator $p$. The structure shown in Fig. 5 is a fractal, self-similar, one: enlarging any region of filling factors reproduces the original figure mani ; goerbig . This fractal structure is connected to the hierarchy construction of fractional QH states. The TT-states and their quasiparticles obtained when $L_1 \to 0$ make the hierarchy construction of fractional QH states manifest as discussed above. According to (9), condensation of quasiparticles in any given state—the parent state—gives rise to two sequences of daughter states with filling fractions approaching that of the parent state from above and below respectively and with decreasing $E_{gap}$, see Fig. 6. This holds for each parent and explains the fractal structure in Fig. 5. For a more complete discussion of the connection to the hierarchy theory, including the connection to the global phase diagram global ; lutken , we refer to Ref. hierarchy, . Using Fig. 5 we can predict what QH-states are next in line to be discovered when higher mobility samples become available.
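Such predictions require nothing more than tabulating (21) for a model of $V_{k0}$. The small sketch below is our own illustration; the pure $1/k$ form of $V_{k0}$ is an assumed stand-in for the true interaction between two charged rings, but it already exhibits the monotonic decrease of the gap with $q$:

```python
def V(k):
    # Electrostatic repulsion between electrons k lattice constants apart;
    # crudely modeled as 1/k (an assumption, for illustration only).
    return 1.0 / k

def Delta(k, q):
    # Eq. (19): second difference of V at separation k*q lattice constants
    return V(k * q - 1) - 2 * V(k * q) + V(k * q + 1)

def E_gap(q, kmax=10000):
    # Eq. (21): energy of an infinitely separated quasielectron-quasihole
    # pair at nu = p/q; note that the numerator p never enters
    return sum(k * Delta(k, q) for k in range(1, kmax))

for q in (3, 5, 7, 9, 11, 13):
    print(q, E_gap(q))   # decreases monotonically with q
```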
For example, in the region covered by the experiment we first expect, in addition to 7/17 and 8/13 included above and further Jain states, states at 10/17, 11/17, 6/19, 7/19 and 8/19.

### iii.2 Examples

We here give explicit examples of TT ground states and quasiparticles for prominent filling fractions.

#### iii.2.1 Laughlin/Jain fractions

For the Jain sequences $\nu = n/(2pn+1)$ that for fixed $p$ approach $\nu = 1/2p$ from below as $n$ increases, the unit cell of the TT-state is $0_{2p}1\{0_{2p-1}1\}_{n-1}$; in the hierarchy notation this corresponds to $t_1 = 2p+1$, $t_2 = \cdots = t_n = 2$, $\alpha_i = -1$. Thus the states in the Jain sequences are those in the hierarchy where all but the first condensate have maximal density. At $\nu = 1/2p$, the unit cell is $0_{2p-1}1$. Explicitly, for the experimentally most prominent sequence, $p = 1$, the unit cells are

1/3 : 001
2/5 : 00101
3/7 : 0010101
4/9 : 001010101
5/11 : 00101010101
⋮
1/2 : 01 .  (22)

By taking the particle-hole conjugate, $0 \leftrightarrow 1$, of the states in (22) one obtains the Jain series that approaches $1/2$ from above.

We now turn to the fractionally charged quasiparticles. Consider to start with the TT-state at $\nu = 1/3$. According to (9), its unit cell is $001$ and the quasiparticles are $0$ and $01$. A quasihole with charge $+e/3$ is created by inserting $0$ somewhere:

001001001001001001001001001001001001001
0010010001001001001001001001001001001001
001001001000100100100010010010001001001 .  (23)

The first line in (23) is the ground state; the second line has one extra 0 inserted—comparing to the ground state it is clear that this creates a domain wall between two degenerate ground states. In the third line, three well-separated 0’s are inserted, creating three domain walls. Comparing to the ground state in the first line one sees that far away from the three 0’s the state is unchanged, and to maintain the size of the system one unit cell 001 has to be removed. In between the inserted 0’s the state is a rigid translation of the original ground state and thus indistinguishable from this state by a measurement that only refers to the translated state. Thus, the net change is that one electron has been removed and this charge is divided on the three well-separated domain walls, which thus have charge $+e/3$ each. This is of course nothing but the Su-Schrieffer counting argument Schrieffer . The argument is also closely related to Laughlin’s original argument for the fractional charge, where a quasihole was created by adiabatic insertion of a flux quantum Laughlin83 —this corresponds to adding one empty site as there is one site per flux quantum.

The quasielectron is created by inserting $01$: 00100100101001001001001. Again this is a domain wall and the charge is, by the counting argument, $-e/3$. Note that particle-hole symmetry is manifest; this is not the case for the bulk wave functions. It is straightforward to show that the excitations just given are the limits of the two-dimensional bulk quasiparticles constructed by Laughlin, see Appendix E.

It follows from Sec. III that inserting or removing 01 creates quasiparticles in all the TT ground states in the Jain sequence that approaches $1/2$ as $n$ increases. For example, a quasielectron at $\nu = 2/5$ is obtained as:

00101001010010100101001010010100101
0010100101010010100101001010010100101 .  (24)

For Jain sequences with general $p$, starting from the Laughlin state at $\nu = 1/(2p+1)$, quasiholes (quasielectrons) are created by removing (inserting) $0_{2p-1}1$. In all cases, the Su-Schrieffer counting argument gives the expected charge, $\pm e/(2pn+1)$, for these quasiparticles and they are the limits of the Laughlin and Jain quasiparticles in the two-dimensional bulk system, see Appendix E.
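The counting argument is easily mechanized: compare the electron number of an inserted defect string with the charge $\nu$ per site that the unperturbed ground state would place on the same number of sites. The toy version below is our own; the function name is made up for illustration:

```python
from fractions import Fraction

def domain_wall_charge(cell, defect):
    """Charge (in units of e) of the domain wall created by inserting
    `defect` into the TT state with unit cell `cell`: the wall binds the
    difference between the defect's electron number and the average
    nu * len(defect) carried by the displaced background."""
    nu = Fraction(cell.count('1'), len(cell))
    excess = defect.count('1') - nu * len(defect)
    return -excess   # electrons carry charge -e

print(domain_wall_charge('001', '0'))     # +1/3 : quasihole at nu = 1/3
print(domain_wall_charge('001', '01'))    # -1/3 : quasielectron at nu = 1/3
print(domain_wall_charge('00101', '01'))  # -1/5 : quasielectron at nu = 2/5
```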
#### iii.2.2 Non-Jain fractions

Until recently, all experimentally well-established fractional quantum Hall states were at Laughlin-Jain filling fractions. However, in 2003 Pan et al reported a new set of states in ultra-high mobility samples pan . Preliminary indications of such a state at 4/11 were reported by Goldman and Shayegan goldman . Fractional quantum Hall states were seen at a number of new odd denominator filling factors, among them 4/11 and 5/13 (some of these are inferred by us from the data in Ref. pan, but were not claimed in this reference). (The experiment shows similar features, a dip in the longitudinal resistance, at even denominator fractions, eg at $\nu = 3/8$ and $3/10$—these are presumably not QH states, see Sec. IV.2.2.) Using the methods above, we readily find the TT ground states and the quasiparticles with charge $\pm e/q$ at these filling fractions. In Table 1 the unit cells are given in the hierarchy form, ie as in (9). The anti-quasiparticle, which is not given in the table, is obtained by taking the complement of the quasiparticle in the given ground state unit cell, according to (14). The unit cell for a given $\nu = p/q$ is most easily obtained using the relaxation method, (7); it is then easily transformed to the hierarchy form by identifying its parent using Figs. 5 and 6. The parent is one of the two nearest neighbors whose denominator is smaller than $q$.

### iii.3 Emergent Landau levels and composite fermions

We have seen that in the TT-limit the TT ground states give a microscopic realization of the original hierarchy construction of fractional quantum Hall states. Today the alternative construction of quantum Hall states in terms of composite fermions has become the dominant framework and one may ask how the TT-construction is related to this. We believe, as argued by Read read90 and by Blok and Wen blokwen but contrary to the opinion of Jain jainrev1 ; jainrev2 ; jainbook , that the original hierarchy construction and composite fermions are just two different ways to view the same phenomena. In our approach, this is supported by the fact that both the hierarchy construction and the emergent Landau level structure are manifest in the TT-limit and that the TT-ground states and the quasiparticles are the limits of Jain’s composite fermion wave functions.

According to the composite fermion idea, fractional QH states are created when composite fermions fill an integer number of Landau levels in a reduced magnetic field. This implies the existence of an emergent structure of effective Landau levels within the lowest Landau level; such a structure is indeed seen in numerical studies emergent ; jainrev1 . We here describe how this comes about in the TT-limit. To be specific we imagine that we start at filling factor $\nu = 1/3$ with the state $001001\cdots001$, ie the ground state has $N_e$ unit cells $001$. When the magnetic field is decreased, quasielectrons with charge $-e/3$ are created by inserting 01. One such quasielectron can be inserted in many different equivalent places; these degenerate states form an effective Landau level for these quasiparticles. When the field is further decreased, more quasielectrons are added to this effective Landau level; the quasielectrons repel each other and are hence pushed as far apart as possible. Eventually, one 01 has been added per unit cell 001, the effective Landau level is filled and one has reached the state $0010100101\cdots00101$, ie the 2/5 state with unit cell 00101, see Fig. 7.
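At the level of occupation strings, walking down this Jain sequence is a one-line operation—append one quasielectron 01 per unit cell. A toy sketch (ours, for illustration only):

```python
from fractions import Fraction

cell = '001'                      # nu = 1/3
for _ in range(5):
    print(Fraction(cell.count('1'), len(cell)), cell)
    cell += '01'                  # condense one quasielectron per cell
# 1/3 001, 2/5 00101, 3/7 0010101, 4/9 001010101, 5/11 00101010101
```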
Of course, when decreasing the magnetic field and adding quasielectrons 01 to the 1/3 TT-state many other, infinitely many in fact, TT-ground states are obtained before the 2/5 state is reached, see Fig. 5. Among them are the states with unit cells $\{001\}_t 01$, see Fig. 6; these are, just as 2/5, obtained by a condensation of quasielectrons in the 1/3 state albeit with a lower density of quasielectrons. However, these states have a filling factor with larger denominator and hence a smaller gap and are thus less stable to disorder and may not form in a given sample. Continuing decreasing the magnetic field creates quasielectrons in the 2/5-state by adding 01, which now have charge $-e/5$ (adding 01 just before reaching 2/5 removes quasiholes of charge $+e/5$). After having added 01 once more per unit cell 00101 one reaches the 3/7 ground state with unit cell 0010101. Continuing the process gives all the TT ground states in the Jain sequence approaching $\nu = 1/2$. (As well as states at all fractions in between, just as was the case when going from 1/3 to 2/5.) Note that whereas the operation of inserting 01 is one and the same throughout this process, its interpretation in terms of quasiparticles varies: Near 1/3, adding 01 creates quasielectrons with charge $-e/3$ and when approaching 2/5, adding 01 removes quasiholes of charge $+e/5$.

In the discussion above we started from the 1/3 state. However, due to the self-similar fractal structure established for the TT-ground states, it is clear that the procedure is general—it applies mutatis mutandis to any fraction $\nu = p/q$.

## Iv Transition to bulk

We have seen in the previous section that the TT-limit is remarkably simple but still rich and non-trivial. However, this is a limit that is most likely impossible to realize experimentally. Its raison d’être lies in what it says about the experimentally realizable two-dimensional bulk limit, $L_1 \to \infty$—this is the topic of this section.

We propose that the TT-ground state is the limit of a two-dimensional bulk QH state for any $\nu = p/q$, $q$ odd. If this QH state is the ground state (in the bulk) then it is adiabatically connected to the TT-state—there is in general no phase transition as $L_1$ goes from 0 to $\infty$. (Of course more complicated scenarios are possible where there are intermediate phases and the QH phase is reentrant.) This statement may need clarification. When $L_1$ changes, the size of the physical system changes; normally when one talks about adiabatic continuity one has in mind that a parameter in the hamiltonian changes. However, all that is happening when $L_1$ changes is that the matrix elements $V_{km}$ change. Thus for any $L_1$, we can think of the problem as the infinite two-dimensional QH system with an interaction that depends on $L_1$. This interaction is unusual: for finite $L_1$ it is anisotropic, it becomes purely electrostatic in the TT-limit, and it cannot, presumably, be written as a real space interaction $V(\mathbf{r})$. However, in the limit $L_1 \to \infty$ it approaches the chosen isotropic electron-electron interaction, whereas it is exactly solvable in the TT-limit. Thus, we have the standard situation when adiabatic continuity can be discussed. In passing, we note that this point of view shows that the TT-states give an exact solution to the two-dimensional QH-problem albeit with a peculiar interaction. We will present arguments for our proposal about adiabatic continuity below; the strength of these arguments varies with the filling factor.
For the Laughlin states adiabatic continuity holds for a short-range pseudo-potential interaction, for the Jain states it can be strongly argued, and for more general hierarchy states a case for it is emerging. In our opinion, the overall evidence for the correctness of the claim is convincing. Apart from fractional QH states, there may of course be other ground states in the bulk. An example of this are the Wigner crystals that are expected for small enough filling factors. In these cases, there is a phase transition from the TT-state as $L_1$ increases.

The even denominator filling factors are special. For them we propose that there is always a phase transition from the gapped TT-state as $L_1$ increases. The suggestion is based on: 1) a detailed analysis of the case $\nu = 1/2$ and 2) exact diagonalization of small systems. For $\nu = 1/2$, we find a phase transition to a gapless Luttinger liquid that can be identified with a version of the Rezayi-Read state rr ; when $L_1$ increases further the state develops smoothly into the Rezayi-Read state that is believed to describe the system in the limit $L_1 \to \infty$. Exact diagonalization of small systems at all fractions with small denominators shows that for each filling factor with even denominator there is an abrupt change in the ground state, where its quantum numbers change, as $L_1$ increases, whereas there is no such change for any odd denominator filling factor bk2 .

Below we first discuss the odd denominator hierarchy states, concentrating on the Laughlin and Jain fractions; results for more general hierarchy fractions are summarized. We then turn to even denominator fractions; we discuss the half-filled Landau level in detail and comment on general such fractions and on the difference between $\nu = 1/2$ and $\nu = 5/2$. The TT-limit is useful also to analyze non-abelian states; we end by discussing the Moore-Read pfaffian state at $\nu = 5/2$ as an example of this.

### iv.1 Hierarchy states

We begin by comparing the qualitative properties of the TT-states and the bulk hierarchy states. First of all, both have a gap to excitations. Note that the crystalline TT-states have no phonons: once the boundary conditions, ie the flux through the torus, are fixed the lattice sites are fixed in space. Thus the TT-states are not the Wigner crystals expected in the quantum Hall system at low filling factor; these would have gapless phonons. Second, the TT-state and the bulk hierarchy state at $\nu = p/q$ both have quasiparticle excitations with charge $\pm e/q$. A further similarity is that the ground state degeneracy on the torus in both cases is $q$, but this is rather trivial as it is true for any state at $\nu = p/q$. On the torus, the symmetries are the magnetic translations and the TT and Laughlin-Jain states have the same quantum numbers, $\mathbf{K}$, with respect to these. This follows since the former are the $L_1 \to 0$ limit of the latter, see Appendix E. The TT-state is inhomogeneous, whereas the bulk hierarchy state should be homogeneous—a property that is normally considered fundamental for the quantum Hall fluids. However, it should be noted that imposing periodic boundary conditions on the problem of electrons moving in a perpendicular magnetic field, ie considering the problem on a cylinder or a torus, implies that the continuous translational invariance present on the infinite plane is broken, in one direction, to a discrete invariance, translation by one lattice constant $2\pi/L_1$ along the cylinder, see Appendix A. As a consequence, for any state on a finite cylinder or torus there will be ripples in the density that disappear completely only as the size of the torus goes to infinity haldanebook .
Furthermore, it should be noted that if one takes the alternative ”adiabatic” point of view where the TT-limit is just a change in the interaction in the infinite two-dimensional system (where the lattice constant vanishes), then the TT-state is homogeneous.

#### iv.1.1 Laughlin-Jain fractions

For the Laughlin and Jain filling fractions, explicit wave functions are known for the ground states as well as for the quasiparticle excitations—these allow us to further investigate the connection to the TT-states. Rezayi and Haldane noted in 1994 that the $L_1 \to 0$ limit of Laughlin’s wave function (at $\nu = 1/3$) is the TT-state with unit cell 001 Haldane94 . This analysis can be generalized to general Jain states and also to quasiparticle excitations thereof. Taking the $L_1 \to 0$ limit of these wave functions, one finds the TT-states and the quasiparticles of Sec. III bk2 . One direct consequence of this is that the TT-state has the same quantum numbers as the Laughlin/Jain state. The details of this analysis are given in Appendix E.

On the plane, the Laughlin state at $\nu = 1/q$ is the unique ground state to a certain short range interaction and there is a gap to all excitations haldanebook ; trugman ; pokrovsky . Since this result on the plane is a consequence of the behavior of the wave function when two electrons approach each other, it is natural that it holds in other geometries as well. This is indeed the case: For the short range interaction on the torus the Laughlin state is the unique ground state (up to the center of mass degeneracy) and there is a gap to all excitations for any circumference $L_1$. An explicit statement or proof of this has not, to the best of our knowledge, appeared, but it is implicit in Ref. Haldane94, . (However, there is a confusing comment in Ref. Haldane94, on observed gapless modes which seems to say that such bulk modes are expected.) For a proof see Appendix F. The transition from an inhomogeneous charge density wave state to a homogeneous-looking state as $L_1$ grows was studied in Haldane94 for the Laughlin state and a rapid crossover was found at a finite $L_1$ of a few magnetic lengths. This crossover was recently studied numerically in detail by Seidel et al Lee05 . They confirm that there is no phase transition but an exponentially fast crossover to a virtually homogeneous state. That the Laughlin state spontaneously breaks the discrete translational symmetry on the thin cylinder has been rigorously proven by Jansen et al Jansen .

To conclude, we consider it firmly established that the TT ground states are adiabatically connected to the bulk QH states for the Laughlin fractions and that a strong case has been made for this being true also for the Jain fractions.

#### iv.1.2 Non-Jain hierarchy fractions

For the fractional QH states observed by Pan et al pan that are not of the Laughlin-Jain type discussed above, no agreed upon wave functions exist. However, recently wave functions were constructed for all filling fractions in the hierarchy scheme that are obtained as repeated condensates of quasielectrons (as opposed to quasiholes) hierarchy ; conformal . These wave functions are obtained by a natural generalization of the conformal field theory construction of Jain’s composite fermion wave functions hans . The $L_1 \to 0$ limit of the proposed wave functions are the TT ground states, and the wave function at 4/11 has a large overlap with the exact ground state for small systems. We take this as a strong indication that the non-Jain hierarchy states in the bulk are also adiabatically connected to the TT ground states.
Details of this conformal field theory construction of hierarchy wave functions will be presented elsewhere conformal . Wave functions at these fractions can also be constructed within the hierarchy scheme as condensates of quasiparticles greiter94 and as fractional QH states of composite fermions JainGoldman ; Quinn ; jain4over11 . For further discussion of these states, see Refs. lopez2, ; wojs, ; scomp2, ; pashitskii, .

### iv.2 Even denominator fractions

So far we have discussed odd denominator filling factors and argued that the TT-states that are the ground states in the TT-limit are adiabatically connected to the two-dimensional bulk QH hierarchy states. In this section we discuss the even denominator filling factors. We begin by considering the half-filled Landau level; this case has been analyzed in detail and we believe it is representative for other gapless even denominators. (The gapped state at $\nu = 5/2$ is discussed in Sec. IV.3.)

#### iv.2.1 Half-filled Landau level

In the half-filled lowest Landau level, $\nu = 1/2$, a metallic, ie, gapless state is observed jiang . The composite fermion explanation for this behaviour is that each electron binds two flux quanta, thereby removing all the external magnetic flux. The state is then supposedly described by non-interacting, or weakly interacting, composite fermions in zero magnetic field, ie, by a free two-dimensional Fermi gas. This is also the picture that emerges from the mean field theory where the statistical magnetic field completely cancels the external magnetic field hlr . These descriptions successfully explain experimental data and constitute the main evidence for the correctness of the composite fermion idea. The microscopic wave function of composite fermion type describing the state was provided by Rezayi and Read rr —on the torus it reads rrhaldane

$$\Psi_{RR} = \det_{ij}\left[e^{i\mathbf{k}_i \cdot \mathbf{R}_j}\right]\, \Psi_{1/2}\;, \qquad (25)$$

where $\mathbf{R}_j$ are the guiding center coordinates and $\Psi_{1/2}$ is the bosonic Laughlin state at $\nu = 1/2$, see Appendix A. The wave function is not unique but depends on a set of parameters, ”momenta”, $\mathbf{k}_i$. In the two-dimensional bulk limit these are assumed to form a circular Fermi sea.

The TT-state at $\nu = 1/2$ has unit cell 01, ie it is a state with one electron on every other site. This clearly minimizes the electrostatic repulsion, and hence is the ground state, in the TT-limit $L_1 \to 0$. As any TT-state it has a gap to excitations and it has quasiparticles with charge $\pm e/2$. Clearly, this is very different from the gapless state that is observed in the two-dimensional bulk limit and is believed to be described by (25).

A gapless state does in fact exist also for small but finite $L_1$ and there is strong numerical evidence that this state develops without a phase transition into the gapless two-dimensional state described by (25). The first evidence for this was obtained in a numerical study of the QH system on a thin cylinder bkdmrg , using the density matrix renormalization group method white . At $\nu = 1/2$, a sharp transition from the TT-state with a finite amplitude of the density oscillations to a virtually homogeneous state was observed at $L_1 \approx 5.3$ (for a short-range interaction) and there were indications of gapless excitations. This behaviour was dramatically different from other fractions such as $\nu = 1/3$ where no transition was observed as $L_1$ was varied—the amplitude of the density wave decreased continuously as $L_1$ increased. The transition at $\nu = 1/2$ takes place in a region where only the first few shortest-range terms in (1) are important.
This made it possible to find an exact solution—ground state and low energy excitations—for a hamiltonian that is a very good approximation to the short-range interaction in this region bk1 . We now describe the exact solution; for details see Appendix G. Consider to start with a hamiltonian (1) that consists only of the shortest range hopping term, $V_{21}$. Define a restricted Hilbert space $\mathcal{H}_1$ by requiring that each pair of sites $\{2p, 2p+1\}$ contains exactly one electron, ie include the states 10 and 01, but not 00 or 11. (If one instead groups sites $\{2p+1, 2p+2\}$, one gets a restricted Hilbert space $\mathcal{H}_2$ that is related to $\mathcal{H}_1$ by a symmetry operation.) The hamiltonian in (1) has vanishing matrix elements between the states in $\mathcal{H}_1$ and $\mathcal{H}_2$ since the center of mass position is conserved and these spaces have no common states except for the two TT-states with unit cell 01. Note that the interaction $V_{21}$ preserves the restricted Hilbert space $\mathcal{H}_1$. In spin notation, where the pair $(2p, 2p+1)$ defines a spin, $10 \to\ \uparrow$ and $01 \to\ \downarrow$, the model with the shortest range hopping term is, within $\mathcal{H}_1$, equivalent to a nearest neighbour spin-1/2 $xy$-chain,

$$H_{xy} = V_{21} \sum_{p=0}^{N_s/2-1} \left(s^{+}_{p+1}s^{-}_{p} + s^{-}_{p+1}s^{+}_{p}\right)\;, \qquad (26)$$

where $s^{+}_p = c^{\dagger}_{2p}c_{2p+1}$, $s^{-}_p = c^{\dagger}_{2p+1}c_{2p}$, and $s^{\pm}_{p+N_s/2} = s^{\pm}_p$. Note that $s^{+}_p$ creates a neutral particle, a dipole. The hamiltonian (26) is diagonalized by mapping the spins to fermions, $d_p$, via a Jordan-Wigner transformation—the hamiltonian then becomes that of free one-dimensional fermions with nearest neighbor hopping. Thus, within $\mathcal{H}_1$, the shortest range hopping term gives a one-dimensional free Fermi gas of neutral particles. But to make this a reasonable approximation to (1) for small but finite $L_1$ we must consider also the short-range electrostatic terms. Including the two shortest range such terms, $V_{10}$ and $V_{20}$, with relative strength $V_{10} = 2V_{20}$, one finds that all states in $\mathcal{H}_1$ have the lowest possible electrostatic energy. There are states not in $\mathcal{H}_1$ (or $\mathcal{H}_2$) with the same electrostatic energy but a strong case can be made that the hopping term creates a gap to these ($\mathcal{H}_1$ contains the most hoppable states) bk1 ; this is also supported by the numerics discussed below. Thus, the low energy sector for the short-range hamiltonian

$$H_{sr} = \alpha \sum_{p=0}^{N_s-1} \hat{n}_p\left(2\hat{n}_{p+1} + \hat{n}_{p+2}\right) + V_{21} \sum_{p=0}^{N_s/2-1} \left(s^{+}_{p+1}s^{-}_{p} + s^{-}_{p+1}s^{+}_{p}\right)\;, \qquad (27)$$

where $\alpha = V_{20}$, is contained in $\mathcal{H}_1 \oplus \mathcal{H}_2$ and is that of a free one-dimensional Fermi gas of neutral particles. The ground state is a filled Fermi sea

$$|GS\rangle = \prod_{|k| > \pi/2} \tilde{d}^{\dagger}_{k}\, |010101\ldots\rangle\;, \qquad (28)$$

and excitations out of this sea give gapless neutral excitations. This provides an exact and explicit mapping of the low-energy sector of a system of interacting electrons in a magnetic field onto a system of non-interacting neutral particles, ie onto free particles that do not interact with the magnetic field.

We note that dipoles have many of the properties expected of composite fermions, most notably that they do not couple to the magnetic field. Earlier approaches to a microscopic theory in terms of dipoles were made by Read readearly ; read.5 , Murthy and Shankar ms , Pasquier and Haldane pasq , Lee dhlee and by Stern et al stern . An important feature of the exact solution is that the number of dipoles is not conserved. These particles are neutral and their number is not tied to the number of electrons in the system as is the case for the composite fermions. For example, there are excitations where a single dipole is added to, or removed from, the Fermi sea. Moreover, the number of dipoles making up the Fermi sea in (28) is determined dynamically, ie by filling the negative energy dipole states; this leads to there being one dipole for every second electron. As this is a consequence of energetics, the number of dipoles presumably changes as $L_1$ increases.
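The free-fermion content of (26) is easy to verify numerically: the single-particle spectrum of the nearest neighbour hopping chain is the cosine band $2V_{21}\cos k$, whose negative energy half ($|k| > \pi/2$ for $V_{21} > 0$) is filled in (28)—one dipole per two electrons. The sketch below is our own illustration; it ignores the Jordan-Wigner boundary sign, which only shifts the allowed $k$ grid:

```python
import numpy as np

V21, L = 1.0, 10          # L = number of spins = Ns/2, periodic chain

# single-particle hopping matrix of the Jordan-Wigner fermions
H = np.zeros((L, L))
for p in range(L):
    H[p, (p + 1) % L] = H[(p + 1) % L, p] = V21

energies = np.sort(np.linalg.eigvalsh(H))
band = np.sort(2 * V21 * np.cos(2 * np.pi * np.arange(L) / L))
print(np.allclose(energies, band))   # True: the cosine band

# ground state: fill all negative energy dipole modes (|k| > pi/2),
# i.e. half the band -- L/2 dipoles for the L electron pairs
print((energies < 0).sum(), energies[energies < 0].sum())
```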
Assuming the ground state develops into a two-dimensional Fermi gas of dipoles as $L_1 \to \infty$, this suggests that the number of dipoles, and hence the Fermi momentum, is determined dynamically rather than being simply given by the number of electrons. In this context we note that the Fermi momentum measured in surface acoustic wave experiments Willettreview disagrees with the mean field theory result, which follows from the assumption that the number of composite fermions in the Fermi sea is equal to the number of electrons hlr .

The solvable case is a good approximation for a small finite $L_1$. For a real-space short-range interaction, the hopping term $V_{21}$ and the electrostatic terms $V_{10}$ and $V_{20}$ then dominate, whereas the leading ignored terms, such as $V_{30}$ and $V_{31}$, are small. This is close to the solvable point. Longer range hopping terms in (1) will in general not preserve the subspace $\mathcal{H}_1$. However, since there is a gap to states not in $\mathcal{H}_1$, the effect of small such terms can be included perturbatively in an effective spin-1/2 hamiltonian that acts within $\mathcal{H}_1$. To zeroth order the effective hamiltonian is

$$H_{eff} = \sum_{p,n}\left[V_{2n,1}\left(s^{+}_{p+n}s^{-}_{p} + s^{-}_{p+n}s^{+}_{p}\right) - \Delta_{2n}\, s^{z}_{p}s^{z}_{p+n}\right]\;, \qquad (29)$$

where $\Delta_{2n} = V_{2n-1,0} - 2V_{2n,0} + V_{2n+1,0}$ is of the form given in (19). (Note that $\Delta_{2n}$ need not be positive for finite $L_1$.) In particular, this effective hamiltonian, $H_{eff}$, contains an Ising term that changes the non-interacting Fermi gas to an interacting Luttinger liquid with a nontrivial interaction parameter. If such terms come with large coefficients, they may cause a phase transition that opens up a gap; the results below strongly indicate that this does not happen in this system. In higher order perturbation theory, one gets increasingly more complicated spin-1/2 models containing renormalized quadratic as well as higher order terms. However, our numerical calculations suggest that these higher order terms may be very small for a range of $L_1$—we find that the projection of the exact ground state of the Coulomb interaction on $\mathcal{H}_1$ is virtually unity in the ”solvable” region as discussed below.

We have numerically identified the state obtained in exact diagonalization in the solvable region with the exact solution (28). We continuously varied the coefficients $V_{km}$ from the values that give the exact solution to values that correspond to a realistic interaction at an $L_1$ in this region and verified that the ground state develops continuously without any level crossing. Whereas $H_{sr}$ gives a qualitative understanding, we suggest that $H_{eff}$ gives a quantitatively very accurate description of the low energy sector within this region. Thus we conclude that the exact solution is stable in a finite neighborhood of the solvable point where it develops into an interacting gapless one-dimensional model, a Luttinger liquid. However, the perturbative argument for this is of course restricted to hamiltonians close to the solvable one—when $L_1$ increases, hopping terms of increasing range and with large coefficients will appear—leading to a hamiltonian that is very different from (27) and that definitely does not preserve the restricted hilbert space $\mathcal{H}_1$. To investigate this region we compared the ground states obtained in exact diagonalization with the Rezayi-Read states (25) bk2 . The plane waves in (25) are periodic on the torus; with our normalization this means that the $k_{i\alpha}$ are integers. A straightforward calculation shows that these momenta determine the conserved quantum numbers of the state:

$$K_{\alpha} = \sum_{i=0}^{N_e-1} k_{i\alpha} \;\;\mathrm{mod}\;\left((1+\delta_{1\alpha})N_e\right)\;, \qquad (30)$$

where $k_{i\alpha}$ is the $\alpha$-component of the momentum of particle $i$.
2020-12-05 06:28:52
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8222887516021729, "perplexity": 675.0419518231346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746320.91/warc/CC-MAIN-20201205044004-20201205074004-00595.warc.gz"}
https://learn.co/lessons/gradient-descent-step-sizes
In the last section, we took a first look at the process for improving regression lines. We began with some data, then used a simple regression line in the form $\hat{y}= mx + b$ to predict an output, given an input. Finally, we measured the accuracy of our regression line by calculating the differences between the outputs predicted by the regression line and the actual values. We quantify the accuracy of the regression line by squaring all of the errors (to eliminate negative values) and adding these squares together to get our residual sum of squares (RSS). Armed with a number that describes the line's accuracy (or goodness of fit), we iteratively try new regression lines by adjusting our y-intercept value, $b$, or slope value, $m$, and then comparing these RSS values. By finding the values $m$ and $b$ that minimize the RSS, we can find our "best fit line". In our cost function below, you can see the sequential values of $b$ and the related RSS values (given a constant value $m$).

import plotly
from plotly.offline import init_notebook_mode, iplot
from graph import m_b_trace, trace_values, plot, build_layout
init_notebook_mode(connected=True)

b_values = list(range(70, 150, 10))
rss = [10852, 9690, 9128, 9166, 9804, 11042, 12880, 15318]
# build the cost-curve trace with the course's graph helper
# (the exact signature of trace_values is assumed here)
cost_curve_trace = trace_values(b_values, rss)
layout = build_layout(options = {'title': 'RSS with changes to y-intercept', 'xaxis': {'title': 'y-intercept value'}, 'yaxis': {'title': 'RSS'}})
plot([cost_curve_trace], layout)

The bottom of the blue curve displays the $b$ value that produces the lowest RSS.

Things are not so simple

At this point, our problem of finding the minimum RSS may seem simple. For example, why not simply try all of the different values for the y-intercept and find the value where RSS is the lowest? So far, we have held one variable constant in order to experiment with the other. We need an approach that will continue to work as we change both of the variables in our regression line. Altering the second variable makes things far more complicated. Here is a quick look at our cost curve if we can change both our y-intercept and slope values:

As we can see, exploring both variables, the slope and the y-intercept, requires plotting the second variable along the horizontal axis and turning our graph into a three-dimensional representation. And in the future we'll be able to change more than just those two. Furthermore, because we need to explore multiple variables in our regression lines, we are forced to rule out some approaches that are more computationally expensive, or simply not possible:

• We cannot simply use the derivative (more on that later) to find the minimum. That approach will be impossible in many scenarios as our regression lines become more complicated.
• We cannot alter all of the variables of our regression line across all points and calculate the result. It would take too much time, as we have more variables to alter.

However, we are on the right track by altering our regression line and calculating the resulting RSS values. Remember that in the last lesson we evaluated our regression line by changing our y-intercept by 10 to determine whether that produced a higher or lower RSS.

b      residual sum of squares
140    24131
130    21497
120    19864
110    19230
100    19597
90     20963
80     23330
70     26696

Rather than arbitrarily changing our variables, as we have done by decrementing the y-intercept by 10 in the example above, we need to move carefully down the cost curve to be certain that our changes are reducing the RSS.
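Before moving on, here is a small self-contained sketch of the brute-force search described above: compute the RSS for a grid of y-intercepts at a fixed slope. The data points and the slope value are invented purely for illustration; only the shape of the resulting curve matters.

data = [(1, 110), (2, 115), (3, 125), (4, 130)]   # made-up (x, y) points
m = 5                                             # fixed slope

def rss(b):
    # residual sum of squares for the line y-hat = m*x + b
    return sum((y - (m * x + b)) ** 2 for x, y in data)

for b in range(70, 150, 10):
    print(b, rss(b))   # RSS falls, bottoms out, then rises again as b passes the minimum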
Our approach

We don't want to adjust the y-intercept value or another variable and just hope that the RSS decreases. Doing so is like trying to fly a plane just by sitting down and pressing buttons. We want an approach that lets us be certain that we're moving in the right direction with every change. Also, we want to know how much of a change to make to minimize RSS.

Let's call each of these changes a step, and the size of the change our step size. Our new task is to find step sizes that bring us to the best RSS quickly without overshooting the mark.

The slope of the cost curve tells us our step size

Believe it or not, we can determine the proper step size just by looking at the slope of our cost function. Imagine yourself standing on our cost curve like a skateboarder at the top of a half-pipe. Even with your eyes closed, you could tell simply by the way you tilted whether to move forwards or backwards to approach the bottom of the cost curve.

• If the slope tilts downwards, then we should walk forward to approach the minimum.
• And if the slope tilts upwards, then we should walk backwards to approach the minimum.
• The steeper the tilt, the further away we are from our cost curve's minimum, so we should take a larger step.

So by looking at the tilt of a cost curve at a given point, we can discover the direction of our next step and how large a step to take. The beauty of this is that as our regression lines become more complicated, we need not plot all of the values of our regression line. We can find the next variation of the regression line to study simply by looking at the slope of the cost curve.

To demonstrate this, let's zoom in on our cost function and look at just one part of it. Looking at our zoomed-in cost function below, we can get a sense of the direction and magnitude of change required to alter our y-intercept in the next iteration.

import plotly
from plotly.offline import init_notebook_mode, iplot
from graph import m_b_trace, trace_values, plot, build_layout
init_notebook_mode(connected=True)

layout = build_layout(options = {'title': 'RSS with changes to y-intercept', 'xaxis': {'title': 'y-intercept value'}, 'yaxis': {'title': 'RSS'}})
b_values = list(range(70, 150, 10))[:3]
rss = [10852, 9690, 9128, 9166, 9804, 11042, 12880, 15318][:3]
# build the zoomed-in cost-curve trace (signature of the course helper assumed, as above)
cost_curve_trace = trace_values(b_values, rss)
plot([cost_curve_trace], layout)

Stepping according to the slope

We can follow our technique with more precision by adding some numbers to our slope. The slope of the curve at any given point is equal to the slope of the tangent line at that point. By tangent line, we mean the line that just barely touches the curve at that point. In the graph above, the orange, green, and red lines are tangent to our cost curve at the points where $b$ equals 70, 85, and 90, respectively. The slopes of our tangent lines, and therefore the slopes of the cost curve at those points, are labeled above.

Let's see how this works. We use the following procedure to find the ideal $b$:

1. Randomly choose a value of $b$, and
2. Update $b$ with the formula $b_{i+1} = (-.1) * slope_{b_i} + b_i$.

The formula above tells us which $b$ value to look at next. We start by choosing a random $b$ value that we can plug into our formula. We take the slope of the curve at that $b$ value, multiply it by $-.1$, then add it to our original $b$ value to produce our next $b$ value. As we can surmise, the larger the slope, the larger the resulting step to the next $b$ value.

Here's an example. We randomly choose a $b$ value of 70.
Then:

• $b_{t=0} = 70$
• $b_{t=1} = (-.1) * -146.17 + 70 = 14.61 + 70 = 84.61$
• $b_{t=2} = (-.1) * -58.51 + 85 = 5.85 + 85 = 90.85$
• $b_{t=3} = (-.1) * -21.07 + 90.85 = 2.11 + 90.85 = 92.96$

Notice that we don't update our values of $b$ by just adding or subtracting the slope at that point. The reason we multiply the slope by a fraction like .1 is to avoid the risk of overshooting the minimum. This fraction is called the learning rate. Here, the fraction is negative because we always want to move in the opposite direction of the slope. When the slope of the cost curve points downwards, we want to move to a higher y-intercept. Conversely, when we are on the right side of the curve and the slope is rising, we want to move backwards to a lower y-intercept.

This technique is pretty magical. By looking at the tangent line at each point, we are no longer changing our $b$ value and just hoping that it has the correct impact on our RSS. For one, the slope of the tangent line points us in the right direction. And as you can see above, our technique properly adjusts the amount to change the $b$ value by, without even knowing the ideal $b$ value. When our $b$ was far away from the ideal $b$ value, our formula increased $b$ by over 14. By the third step, we were updating our $b$ value by only about 2, because we were closer to the ideal value for minimizing the RSS.

Summary

We started this section by saying that we wanted a technique to find a $b$ value that would minimize our RSS, given a value of $m$. We did not want to simply try all of the values of $b$, as doing so would be inefficient. Instead, we went with the approach of gradient descent, where we try variations of regression lines by iteratively changing our $b$ variable and assessing our RSS to see if we are making progress.

In this lesson, we focused on how to know in which direction to alter a given variable, $m$ or $b$, as well as a technique for determining the size of the change to apply. We used the line tangent to our cost curve at a given point to indicate the direction and size of the update to $b$: the further away we are, the steeper the curve, and the larger the step the tangent-line slope has us take; the closer we are to the ideal $b$ value, the flatter the tangent line, and the smaller the step we take.
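To tie the lesson together, here is a minimal end-to-end sketch of the update rule, again with invented data and a numerically estimated slope (the lesson labels exact tangent slopes on its plots; a finite difference is a stand-in for that):

data = [(1, 110), (2, 115), (3, 125), (4, 130)]   # made-up (x, y) points
m = 5                                             # fixed slope

def rss(b):
    return sum((y - (m * x + b)) ** 2 for x, y in data)

def slope_at(b, h=0.001):
    # central finite difference: slope of the cost curve at b
    return (rss(b + h) - rss(b - h)) / (2 * h)

b = 70                  # random starting guess
learning_rate = 0.1
for t in range(8):
    b = b + (-learning_rate) * slope_at(b)        # the lesson's update formula
    print(t, round(b, 2), round(rss(b), 2))       # steps shrink as b approaches the minimum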
2019-04-23 15:01:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5358536243438721, "perplexity": 424.0801427439704}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578605510.53/warc/CC-MAIN-20190423134850-20190423160850-00260.warc.gz"}
https://tex.stackexchange.com/questions/587230/referencing-the-subequations-block-doesnt-show-the-section-number
# Referencing the subequations block doesn't show the section number

I am new to this stackexchange group so forgive me if my question is not proper. If some extra information is required then kindly comment and I will attach those details. I am using the following lines

\begin{subequations}
\renewcommand{\theequation}{\arabic{section}.\theparentequation.\arabic{equation}}
\begin{align}
& y = (c-r_x)*x + r_x*gbest, \label{eq:0.1} \\
& y = (c-r_n)*x - r_n*gbest, \label{eq:0.2} \\
\end{align}
\label{eq:eqn}
\end{subequations}

But calling \cref{eq:eqn} prints eq. (1) instead of eq. (1.1); the section number doesn't show. This doesn't happen when I am referencing the individual subequations. Can someone please guide me on how to achieve this?

• Welcome to TeX.SE. – Mico Mar 14 at 8:02
• Please clarify the meaning of \cerf. Is it maybe a typo and should say \cref, which is a macro provided by the cleveref package? Please also tell us which document class you employ. – Mico Mar 14 at 8:02
• Please provide a minimal working example (MWE). Mar 14 at 11:56

You should use \numberwithin{equation}{section} in the preamble instead of putting the section number inside \theequation.

\documentclass{article}
\usepackage{amsmath}
\usepackage{cleveref}
\numberwithin{equation}{section}
\begin{document}
\section{TEST}
Intro
\begin{subequations}
\renewcommand{\theequation}{\theparentequation.\arabic{equation}}
\begin{align}
& y = (c-r_x)*x + r_x*gbest, \label{eq:0.1} \\
& y = (c-r_n)*x - r_n*gbest \label{eq:0.2}
\end{align}
\label{eq:eqn}
\end{subequations}
And then refer to equation \ref{eq:0.2}, which is within \cref{eq:eqn}.
\end{document}

Something like this?

\documentclass{article}
\usepackage{amsmath,cleveref}
\numberwithin{equation}{section}
\begin{document}
\setcounter{section}{3} % just for this example
\begin{subequations}
\renewcommand{\theequation}{\theparentequation.\arabic{equation}}
\begin{align}
y_x &= (c-r_x)x + r_x g_{\mathrm{best}}, \label{eq:0.1} \\
y_n &= (c-r_n)x - r_n g_{\mathrm{best}} \label{eq:0.2}
\end{align}
\label{eq:eqn}
\end{subequations}
\verb|\cref{eq:eqn} and \cref{eq:0.1}| produces ``\cref{eq:eqn} and \cref{eq:0.1}''.
\end{document}
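An aside not from the original thread: if I remember right, LaTeX kernels from around 2018 onwards provide \counterwithin, so the per-section numbering can be set up without amsmath's \numberwithin (though amsmath is still needed for align and subequations). A minimal sketch, with a dummy equation just for illustration:

\documentclass{article}
\usepackage{amsmath,cleveref}
\counterwithin{equation}{section} % kernel-level equivalent of \numberwithin
\begin{document}
\section{Test}
\begin{equation}\label{eq:a} E = mc^2 \end{equation}
\cref{eq:a} % should print "eq. (1.1)"
\end{document}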
2021-09-22 16:33:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9814106225967407, "perplexity": 4761.410310360192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00154.warc.gz"}
http://ibip.asso-agrobiotech.fr/r-subset-partial-string-match.html
Re: how to Subset based on partial matching of columns? From Sarah's data frame you can get what you want directly with the table() function, which will create a table object, mydf; as.matrix() then makes mydf a matrix. Be aware, though, that unstable return types and implicit partial matching can lead to surprises and bugs that are hard to catch.

Some background. In R, a dataframe is a list of vectors of the same length. In computer science, string-searching algorithms, sometimes called string-matching algorithms, are an important class of string algorithms that try to find a place where one or several strings (also called patterns) occur within a larger string or text. For each entry of the haystack, grep checks whether at least one string in the needle matches a substring; the haystack entries with at least one match are returned. A trie is a tree of nodes which supports Find and Insert operations. Approximate matching trades exactness for speed: it is fast, but approximate. apropos uses regexps and has more examples.

A typical question: "Hi, I'm very new to R, and am not at all a software programmer of any sort. How to filter dataframes by using partial string match? I'm a bit stuck with what feels like it should not be a difficult thing to work out, however I cannot find something online for this. I want to filter a dataframe column which contains strings with a partial match and get the rows that match the string. Example data frame:

a = c("Alpha", "Beta", "Gamma", "Beeta", "Alpha", "beta")
b = c(1:6)
example = data.frame(a, b)"

The key tools: the default interpretation of a pattern is a regular expression, as described in stringi::stringi-search-regex, and you can instead compare literal strings (i.e. by comparing only bytes) using fixed(). str_extract(fruit, "[aeiou]") returns the first pattern match found in each string, and str_match(string, pattern) returns the first match as a matrix, one column per capture group. The sub function has three required parameters: a string with the regular expression, a string with the replacement text, and the input vector; if a regex match is found in a string element, it is replaced with the replacement text, and sub returns a new vector with the same length as the input vector. In the generic spreadsheet version of the same task, substring is the specific text you are looking for, and text represents text in the cell you are testing.
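As a concrete answer to the question above, here is a minimal base-R sketch using that example data frame; the pattern "eta" is an arbitrary choice for illustration.

a <- c("Alpha", "Beta", "Gamma", "Beeta", "Alpha", "beta")
b <- 1:6
example <- data.frame(a, b)

# keep the rows whose 'a' value contains "eta" anywhere, ignoring case
matches <- example[grepl("eta", example$a, ignore.case = TRUE), ]
print(matches)                          # the Beta, Beeta and beta rows

# value = TRUE returns the matching strings instead of their indices
grep("eta", a, value = TRUE)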
If the group matches in several places, all captures get added to the capture collection for Group 1 (or the named group token). A word is any contiguous set of letters, numbers or underscore characters, or a single other character, and square brackets negated with ^ match any characters except those inside the brackets. Regular expressions are used to identify whether a pattern exists in a given sequence of characters (string) or not; in rebus, if you want to match a specific character, or a specific sequence of characters, you simply specify them as a string. grep() returns either a vector giving the indices of the elements that yielded a match or, if value is TRUE, the matched elements (after coercion, preserving names but no other attributes); grepl() returns a TRUE/FALSE vector indicating which elements of the input match.

R's subsetting operators are fast and powerful. Vectors can also be subset using a logical vector, which is exactly what grepl() produces; in x[y], if length(y) < length(x), then y is recycled to match length(x). Sub-setting a data frame by partial column names ("I need to subset a df to include certain strings") works the same way, by matching patterns against names(df). When subsetting a dataset, you will only have a single new dataset as a result.

Then there is fuzzy matching. Fuzzy string matching is basically rephrasing the yes/no question "Are string A and string B the same?" as "How similar are string A and string B?", and to compute the degree of similarity (called "distance") the research community has been consistently suggesting new methods over the last decades. In record linkage, a match can have a partial agreement category calculated, in addition to disagreement and absolute agreement, as a function of Jaro-Winkler distance, and the returned dataset has a variable indicating the type of match. FuzzyWuzzy-style tools offer a Partial Ratio (matching based on best substrings: if the shorter string is length m and the longer string is length n, we're basically interested in the score of the best matching length-m substring) and a Token Sort Ratio (tokenize the strings and sort them alphabetically before matching); when two strings are of noticeably different lengths, a "best partial" heuristic is used. In R this is available via fuzzywuzzyR and the reticulate package; traditional approaches such as the Jaro-Winkler or Levenshtein distance measure can be too slow for very large datasets.

The same ideas exist in most environments. The SAS character functions are essential to this kind of work. In SQL, the LIKE keyword is used to search for patterns and to retrieve rows based on partial information (pattern matching employs wildcard characters, and LIKEC uses Unicode complete characters). In Excel it's easy to countif cells filled with certain strings, and with a little more work you can countif cells containing only a partial string or substring, or compare partial matches across two columns. Python's re library covers basic and complex patterns, repetitions, and (non-)greedy matching. Bash supports a surprising number of string manipulation operations; Tcl's string wordstart returns the index of the first character in the word containing the index'th character of a string; and C#'s Substring method takes two parameters, a start index and the desired length.
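To make the partial-column-name case concrete, a small base-R sketch follows; the data frame and the "var_" prefix are invented for the example.

df <- data.frame(var_height = 1:3, var_weight = 4:6, id = 7:9)

# columns whose names start with "var_"; drop = FALSE keeps a data frame
partial <- df[, grepl("^var_", names(df)), drop = FALSE]
names(partial)            # "var_height" "var_weight"

# grep() gives the column indices instead
grep("^var_", names(df))  # 1 2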
In grep() and friends, the pattern argument is a character string containing a regular expression (or a character string for fixed = TRUE) to be matched in the given character vector. Anchoring matters: the ^ and $ surrounding alpha are there to ensure that the entire string matches; without them, if there were a column named alphabet, it would also match, and the replacement would be onebet. Matching can also be inverted ("match string not containing string": given a list of strings, only return the strings that do not match, e.g. by negating grepl()), and engines can return the position of the first character of the matched substring. Partial key matching between tables can be framed as: the field in table2 must start with all characters in the ID field of table1 and can have extra characters following, or must contain all characters of the ID field. Often you would like to return some sort of confirmation statement that denotes that your data has a partial match (such as "ABC Learning Centers" and "ABC Learning Cents.").

A tweet from @coneee yesterday about merging two datasets using columns of data that don't quite match got me wondering about a possible R recipe for handling partial matching. Value matching is the exact end of the spectrum: match identifies common elements between two vectors and returns the positions in the second vector of these matching elements, in the order they appear in the first vector; if an element of vector 1 does not exist in vector 2, NA is returned. pmatch adds partial matching: exact matches are considered first, then unique partial matches are considered and, if found, recorded. pmatch(), agrep(), grep() and grepl() are functions that, if you take the time to look through them, will provide some insight into approximate string matching, either by approximate string or by approximate regex. NA values are treated as the string constant "NA".

Removing rows if certain elements are found in a character string is again just logical subsetting: you just have to remember that a data frame is a two-dimensional object and contains rows as well as columns, so you need to specify the subset for rows and columns independently. The arules package carries the idea over to itemsets, rules and transactions with subset(), %in%, and %pin% (%in% with partial matching). For comparison, Stata has three string functions related to regular expressions, regexm for matching, regexr for replacing and regexs for subexpressions; JavaScript's match() method searches a string against a regular expression and returns the matches as an Array object, and a pattern can be built at runtime with myregexp = new RegExp(regexstring); Python's re module returns a MatchObject with additional information about which part of the string the match was found in.
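A tiny session contrasting the three flavours, exact, partial and approximate, all in base R (the vector of names is invented for the example):

cols <- c("mean", "median", "mode")

match("mean", cols)                    # 1  : exact matching only
pmatch("me", cols)                     # NA : "me" is ambiguous (mean, median)
pmatch("mo", cols)                     # 3  : unique partial match
agrep("meen", cols, max.distance = 1)  # 1  : within one edit of "mean"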
Strings are not glamorous, high-profile components of R, but they do play a big role in many data cleaning and preparation tasks, and manipulating character (non-numerical) data is an essential skill for anyone looking to visualize or analyze text data. A string is a sequence of characters: in the string 'abcdef', the character at position 0 is 'a', the character at position 2 is 'c', and so forth; when a string with a leading delimiter is split, it produces an empty string at index 0 (that's because, for instance, a "From r" string precedes the first email in a mailbox file, and note that such a match is case-sensitive). Working with statistical data in R involves a great deal of text data or character string processing, including adjusting exported variable names to the R variable name format (Introduction to String Matching and Modification in R Using Regular Expressions, Svetlana Eden, March 6, 2007). The gsub() function replaces all matches of a string; if the parameter is a string vector, it returns a string vector of the same length and with the same attributes (after possible coercion to character). One caution: there exist very simple looking regular expressions that force searches to take exponential (in the length of the pattern) time.

It is not surprising that two dataframes do not have the same common key variables. The latest posting by Tony Hirst sparked my attention because I was thinking about a very similar issue recently: anyway, you often end up with only a partial text match in your key columns. Subsetting is a very important component of data management and there are several ways that one can subset data in R; "5 Ways to Subset a Data Frame in R" notes that often, when you're working with a large data set, you will only be interested in a small portion of it for your particular analysis. If you rely on code that requires the original data frame behavior, coerce to a data frame via as.data.frame(). Reading large tables from text files into R is possible, but knowing a few tricks will make your life a lot easier and make R run a lot faster; on the database side a full-text index may be an option, though the documentation for MERGE has a pretty strongly worded warning that it is important to specify only the columns from the target table that are used for matching purposes. Equivalents elsewhere include PHP's preg_match() to test whether a regular expression matches a specific string, Excel's INDEX MATCH with wildcards to look up based on a partial phrase or string, and Oracle's REGEXP_SUBSTR-style functions (similar to REGEXP_INSTR, but returning the substring itself; the return type is VARCHAR2 if the first argument is not a LOB and CLOB if it is — Oracle's UTL_MATCH package, first documented and therefore supported in 11g Release 2, adds similarity measures).

The R language is a dialect of S, which was designed in the 1980s and has been in widespread use in the statistical community since. Note that R has no 0-dimensional, or scalar, types. If you are new to R, or lack experience working with character data, the books in this space will help you get started with the basics of handling strings and aim to provide a panoramic perspective of the wide array of string manipulations that you can perform with R; "Advanced R solutions" by Malte Grosser and Henning Bumann provides worked solutions to the exercises in the book it accompanies, and the Johns Hopkins University "R Programming" course covers the basics to get you started. Just as a chemist learns how to clean test tubes and stock a lab, you'll learn how to clean data and draw plots—and many other things besides. Finally, remember that you can pass a regular expression as the pattern argument to any stringr function that has a pattern argument.
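Tying the fuzzy-matching thread together in base R, without any extra packages (the name vectors, echoing the "ABC Learning Centers" example above, are invented):

ref   <- c("ABC Learning Centers", "Data Corp", "Acme Ltd")
dirty <- c("ABC Learning Cents.", "Dta Corp", "ACME ltd")

# generalized Levenshtein (edit) distance between every pair of names
d <- adist(dirty, ref, ignore.case = TRUE)

# pick the closest reference entry for each dirty entry
best <- ref[apply(d, 1, which.min)]
data.frame(dirty, best)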
2020-03-28 14:35:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2260833978652954, "perplexity": 1409.370002883817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491998.11/warc/CC-MAIN-20200328134227-20200328164227-00450.warc.gz"}
https://gamedev.stackexchange.com/questions/103067/how-do-game-engines-mitigate-cpu-cost-for-many-trigger-zones-entities/103085
# How do game engines mitigate CPU cost for many trigger zones & entities? [closed]

It's a rather simple question, but a somewhat fundamental one in computing in general. Let's say we have a player controlled character in an open world. The world is large. In this game-world we have a vast amount of trigger zones that are basically responsible for changing music, changing lighting, changing animation and so on. And the game clock runs at 60 fps.

In this situation we have so many subroutines sensitive to "if a player has entered the area" that we can count them by the thousands. Here's where I'm not sure. Since we have so many trigger subroutines, it seems we can't just have each of them check, every 60th of a second, whether the player has entered its area. Even though one such "listening" check is not going to take up a lot of resources, scaling the world bigger and bigger will create a performance hit with this approach. So, there must be some other mechanism to manage all of those trigger zones.

I'm inclined to think that the mechanism goes like this:

• a player's character subroutine constantly (at 60 fps) updates its position. With each update frame the playable character subroutine goes something like this: Hey, I'm at the position x(5.0), y(3.0), z(10.5), so what branch of code should I execute being at this position? And the response comes from an indexed map where we have every possible coordinate and the corresponding code to execute. (Of course we can have a lossless compression going on for saving up the memory.)

Am I correct? ...or does every subroutine listen for an input every update frame? (hope it's not :) )

Edit addition: I'm not asking whether games divide their worlds into chunks, areas or grid cells for saving memory, but about their approach to saving CPU/GPU cycles and how unactivated triggers are treated.

• An SQL request and a bunch of events can take care of anything! :P And some routines don't need to be called every frame, such as AI. Jun 28 '15 at 0:53
• I think your edit indicates a strange incarnation of the XY problem. You started from the assumption that you have to run each of those checks every frame, so you asked how to save CPU while doing so. The real answer is: don't run all the checks. @Chris covers it with "only listen to events in this chunk". Jun 28 '15 at 18:10

Typical approaches to reduce processing are:

Spatial subdivision / coherence - Don't process everything in the whole world, instead only do so in pertinent regions - usually only those in close proximity to the player(s). - Group units close together as a single unit to reduce overall count (Rome: Total War). Common in real-time rendering fields.

Selective processing - Don't process every entity type for a particular effect. For example, maybe only magical objects need to be processed for particle emissions, only baddies need to be processed for collision detection against your forcefield, only non-flying units must be tested against pit trap triggers, etc. You will find that there are often discrete processing regimes for certain closely-related types; this is why in many engines a type Particle is discrete from the more general type Entity... the only thing they share is transform information, outside of that Particle is an entirely different beast without collision physics, pathfinding, depth write during render, mesh animation etc.

Temporal discretization - Don't process every 60th of a second. Rather, do, say, 1 update in 4 frames, i.e. 15 updates a second (Age of Empires).
This enables 4x more logic per update than you would have at 60fps. You can work this idea further by having triggers act, say, only once in those 15 updates, i.e. once a second. There is nothing that says you need to do all processing, every frame.

Outside of these things, it comes down to improving engineering through adaptation of game design / loosening design constraints. Oftentimes, you just don't need as much processing as you originally thought, and in those areas where you overrun capacity, you adapt your design to reduce complexity.

P.S. Conditionals tend to be costly because of branch mispredictions. Forward-running, linear loops don't in themselves suffer from the degree of branch mispredictions that arbitrary conditionals may suffer; and arbitrary conditionals may take (compiler-specific) hints to indicate which branch they are likely to take in most cases, though this only applies in certain, rarer logical cases. Most of the time you want to avoid deeply nested conditionals, and too many conditionals per se. This is where the above approaches come into play. Having said that, these days you also have multiple cores to work with.

One way to go is to split the whole world into chunks, and load just the chunk the player character is currently in (or any number of chunks that are closest). This way you only have to listen for events in this chunk. This does not mean you have to split everything into "levels". Minecraft is a fairly good example of splitting a giant world into chunks that are only loaded when necessary. All of the other data can be stored in files so you don't have to keep everything in memory.

• I believe that the main reason Minecraft divides the world into chunks is the procedurally generated nature of the world. Since the world is potentially "infinite" and the memory is not, it has to offload some generated data and only keep what's relevant at the moment. Jun 27 '15 at 22:29
• @FloatingPoint Even games which are not procedurally generated tend to stream in level data in chunks - open world games most notably. Game levels have long been too big to fit entirely in memory with everything else going on, especially on consoles. Even within the loaded level chunks, games tend to use spatial datastructures like quad/oct/bsp-trees to be able to quickly query objects & triggers close to the player, and shift more distant objects to lower levels of detail, both in graphics and sometimes in their gameplay logic. Jun 27 '15 at 23:19
• Even if you disregard the part about only keeping one part of the world in memory at a time, the main takeaway from this good answer is that you don't check the triggers of the entire world each frame (or each time the player moves). You break the world up into sections (like, say, a grid) and you only check against the triggers in the current grid cell. This is super fast if the grid cell itself has the list of triggers it contains, so you have only very short lists to check. Jun 28 '15 at 2:19
• Well, I didn't say "the only reason..." I said "the main reason...", about Minecraft. I know that games with big worlds do all sorts of techniques to ease memory use: LoD, tessellation, dynamic loading, screen-space detailing, etc... but I'm not really talking about memory issues, more of a CPU/GPU cycles issue.
I understand that if we have a time-triggered event driven by a counter, for example a spike going in and out of a wall at a 0.5 Hz frequency, and the spike is in close proximity or in screen space, then yes, we have to evaluate it every time the counter fires. Jun 28 '15 at 8:10

As mentioned in one of the comments, you want to partition the world into sections using oct/quad trees. Then, as the player moves, you can use the tree to quickly get all the nearest elements in your world. Only those closest to the player are the ones you have to react to.
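To make the bucketing idea concrete, here is a minimal sketch of a uniform spatial hash grid for triggers, in Python for brevity (a real engine would do this in native code). All names here (SphereTrigger, TriggerGrid, CELL_SIZE) are hypothetical, not from any particular engine; a quadtree or octree refines the same idea when triggers are unevenly distributed. The staggered-update helper at the end illustrates the temporal discretization point from the first answer.

```python
from collections import defaultdict
from dataclasses import dataclass

CELL_SIZE = 32.0  # world units per grid cell; tune to the typical trigger size

@dataclass
class SphereTrigger:
    """A toy trigger zone: a sphere that fires a callback while the player is
    inside it (entered/exited edge detection is omitted for brevity)."""
    center: tuple
    radius: float
    on_inside: object  # a callable taking the player position

    def contains(self, p):
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

class TriggerGrid:
    """Uniform spatial hash: maps an integer cell coordinate to the triggers in it."""
    def __init__(self):
        self.cells = defaultdict(list)

    @staticmethod
    def cell_of(p):
        return tuple(int(c // CELL_SIZE) for c in p)

    def add(self, trigger):
        # A large trigger volume would be registered in every cell it overlaps;
        # this sketch registers only the cell containing its center.
        self.cells[self.cell_of(trigger.center)].append(trigger)

    def candidates_near(self, p):
        # The player's cell plus its neighbors, so zones straddling a cell
        # border are not missed.
        cx, cy, cz = self.cell_of(p)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    yield from self.cells.get((cx + dx, cy + dy, cz + dz), ())

def update_triggers(grid, player_pos):
    # Per-frame cost is proportional to the few triggers near the player,
    # not to the thousands of triggers that exist in the whole world.
    for trigger in grid.candidates_near(player_pos):
        if trigger.contains(player_pos):  # the only exact test we pay for
            trigger.on_inside(player_pos)

def update_entities_staggered(entities, frame_index, groups=4):
    # Temporal discretization: update a quarter of the entities each frame,
    # so each entity is touched 15 times a second instead of 60.
    for i in range(frame_index % groups, len(entities), groups):
        entities[i].update()

grid = TriggerGrid()
grid.add(SphereTrigger((5.0, 3.0, 10.5), 4.0, lambda p: print("change music")))
update_triggers(grid, (5.0, 3.0, 10.5))  # prints "change music"
```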
2022-01-25 03:01:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30609554052352905, "perplexity": 1187.678286983426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00194.warc.gz"}
https://itprospt.com/num/18364278/ho-the-distributions-of-the-two-populations-are-the-same
## Question

H0: The distributions of the two populations are the same.
Ha: The distributions of the two populations are not the same.
α = 0.10. The test statistic is χ² = 21.480. The critical value is χ²(0.10) = 4.605.

Which is the correct conclusion of our homogeneity test at the 10% significance level? Select all that apply:

- We should reject H0.
- We should not reject H0.
- At the 10% significance level, there IS sufficient evidence to conclude that the distributions of the two populations are not the same.
- At the 10% significance level, there is not sufficient evidence to conclude that the distributions of the two populations are not the same.
2022-08-15 21:43:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49363014101982117, "perplexity": 14792.617914074295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00443.warc.gz"}
https://fastballs.wordpress.com/2007/09/
### September 2007

There's some very informative commentary from Ike at his blog on how the reconstruction algorithm of PITCHf/x works and how that affects measurement error in the data. He also has a couple of previous posts on the PITCHf/x topic. It's nice to see a fellow Sooner and, as best I can tell, a fellow OU Physics alum writing on this topic.

Some of the other PITCHf/x analysts out there are looking at the hitting aspects or other things that can be divined from the data, but I'm still quite fascinated by the ability to classify a pitcher's pitches. I know that's not the be-all-end-all of baseball or of PITCHf/x, but I'm learning so much about the game from pursuing this angle, I may camp in this corner for a while.

I decided to take a look at one of my favorite players, who happens to have had quite a resurgence in the second part of this year: the Royals' former and once-again wunderkind Zack Greinke. I split the data into three parts.

The first part was his first seven starts this year, in which he compiled a 5.71 ERA on the strength of 49 hits, 11 walks, and 19 strikeouts in 34 2/3 innings. This performance resulted in his banishment to the bullpen in hopes of salvaging something from 2007 for Greinke. We only have one start recorded in PITCHf/x from this period. In that start, his fastball was recorded at 87-93 mph. Other than noting that fact, I've chosen to ignore the rest of the data from this start.

The second part was his relief performance, which lasted from May 10 to August 20, and for which we have 295 pitches recorded by PITCHf/x. In 38 relief appearances, he compiled a 3.54 ERA on the strength of 43 hits, 15 walks, and 55 strikeouts in 53 1/3 innings. In the data we have, his fastball as a reliever ran in the 92-98 mph range. Here's the speed versus spin direction chart:

As a reliever, he threw 67% fastballs, 24% sliders, 5% changeups, and 4% curveballs. The fastball and changeup groupings are pretty obvious. I used the spin rate parameter to help me separate the sliders and curveballs. I won't reproduce that graph for his relief outings, but suffice it to say that the curveballs are the pitches with slower speed, higher spin rate, and lower spin direction.

What's interesting to me is comparing the reliever graph to the same graph for his return to the starting rotation, which began on August 24. We have PITCHf/x data for four of his five starts since then, missing only his September 15 start at Cleveland. In all five starts, he's compiled a 1.71 ERA on the strength of 21 hits, 8 walks, and 15 strikeouts in 21 innings. His fastball still has a lot of pop in the 91-97 mph range. It will be interesting to see if he can keep that life on his fastball as he stretches out beyond 4 or 5 innings at a time. He's also using his changeup a bit more: 75% fastballs, 15% sliders, 8% changeups, and 3% curveballs.

Here's the spin rate vs. spin direction graph for his starter outings. I've labeled the x axis to show how the spin direction corresponds to break to a right-handed hitter. A pitch that broke straight down would have a spin direction of 0 degrees, break away from a righty corresponds to a 90-degree spin direction, break up (or a "rising" fastball) corresponds to 180 degrees, and break in on a righty's hands corresponds to 270 degrees. So the spin direction tells us which way the pitch will break, and the spin rate tells us how much the pitch will break.

Here's the vertical break vs. horizontal break displayed in inches.
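That compass is easy to encode. Here is a small Python sketch (my own helper, not anything from the PITCHf/x feed) that maps a spin-direction angle to the rough break description just given; the function name and label wording are made up for illustration.

```python
def break_direction(spin_dir_deg):
    """Nearest compass point for a right-handed hitter: 0 = breaks straight down,
    90 = breaks away from a righty, 180 = "rises", 270 = breaks in on the hands."""
    labels = {0: "breaks straight down",
              90: "breaks away from a righty",
              180: "rides up (a 'rising' fastball)",
              270: "breaks in on a righty's hands"}
    angle = spin_dir_deg % 360.0
    # Pick the compass point with the smallest angular distance (wrapping at 360).
    nearest = min(labels, key=lambda a: min(abs(angle - a), 360.0 - abs(angle - a)))
    return labels[nearest]

print(break_direction(200))  # a backspun fastball: rides up
print(break_direction(30))   # a curveball-like angle: breaks straight down
```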
I’m excited to see Zack Greinke back on top of his game, and I hope he can stay there for years to come. So far we’ve classified the pitches for four pretty orthodox pitchers: Joba Chamberlain, Jonathan Papelbon, Edinson Volquez, and Greg Maddux. It’s time to try someone a little more novel. Like Josh Kalk, I’ve been thinking about ways of automating the pitch classification process, although he’s much further along that process than I am. One thing I have wondered is whether such a system can ever be developed that will handle all pitches from all pitchers, or whether we will have to restrict it to mainstream pitchers and pitches only. So I’ve been thinking about pitchers who might be exceptions to various rules that seem to apply well to everyone else. Take Chad Bradford, for instance. Could a pitch classification system handle a pitcher who throws underhand? He plays in a home park, Baltimore, that doesn’t have the PITCHf/x system running, but we have 188 pitches recorded for him in other parks, and that seems to be enough to figure out his repertoire. The scouting information on Bradford couldn’t be much muddier for a guy who doesn’t throw very many different pitches and uses the same basic delivery for all of them. One scouting report says he throws a fastball, a slider, and an occasional changeup. The Sporting News says he “throws in the high 80s and has a solid changeup and curveball.” Another article (of which I seem to have lost track) said he threw a two-seam sinking fastball. The Washington Post says he throws an 83-mph fastball, a slider, and a changeup. The Post article is well worth reading if you’re interested in Bradford. Everyone seems to agree he has three pitches, but they can’t agree what they are. Surely we can tell if we dive into the data we have. Let’s start with what is becoming my traditional pitch classification graph, pitch speed versus spin direction. (As usual, I have normalized the start_speed parameter to y0 = 50 feet.) I’ve not labeled the slow breaking pitch in the graph because, as we will discuss later, it’s name is not so clear. I have color-coded his three pitches and circled the groupings, and now I’ll go through the process I used to determine those groupings and the pitch types. There are two pretty obvious groupings, one containing the fastballs and the other some sort of slower breaking pitch. There are several questions to answer. First, are the multiple pitches hiding in the cluster on the upper right, and if so, what are they? Second, what is the 65-70 mph pitch? None of my usual secondary graphs were very illuminating on these questions. To help answer the first question, I went to another pair of graphs I like to see, the same speed vs. spin direction plot from above but split out for right-handed and left-handed hitters. The first thing I notice is that lefties don’t get many of the slow breaking pitches in the lower left corner of the graph, and righties don’t see pitches slower than 76 mph on the right side of the graph. That implies that there are at least two distinct pitches in our grouping on the right side, since I see no reason to believe Bradford would purposefully throw his already-slow fastball even slower to lefties on occasion. Most probably this is a changeup. We can’t tell from this graph whether righties also see some changeups from Bradford or whether 76 mph is really the cutoff between fastballs and changeups. For that, we move to another plot that exposes a limitation in our data. 
This graph shows Bradford's pitch speeds (normalized to y0 = 50 feet) throughout the season, recorded when the Orioles visited a park equipped with PITCHf/x and Bradford happened to pitch. The x-axis lists the pitch id number from my database, but that corresponds closely to time, with pitch #1 at the beginning of the season and pitch #637,220 occurring on September 12. Here we can see on a game-by-game basis that Bradford clearly throws three different speeds of pitches. Our overall speed data is being clouded by two games in Boston on July 31 and August 1, with pitch speed on average 3.5 mph slower than in other parks. Fenway Park's PITCHf/x system is the source of all sorts of measurement errors, so this does not come as a surprise.

Now that we've separated the three groups of pitches, can we tell what they are? First, it's reasonable to classify the mid-speed pitch as a changeup since it has similar spin to the fastball, only thrown a little slower. But is the fastball a regular four-seam fastball or a two-seam "sinking" fastball as some people suggested? And is the breaking pitch a slider or a curveball?

As an aside, in terms of pitch classification algorithms, do we really care what the names of Bradford's pitches are or how he pronates his wrist? Shouldn't a mathematical description of how the pitch moves be sufficient? I think the answer to the latter question is probably yes, but pitch classification can be interesting beyond just the search for a universal classification system. It's interesting to learn about a particular pitcher's approach, and for that description, it helps to know what the pitcher is attempting from his perspective.

How do we determine whether a fastball is a four-seamer or a two-seamer? The classic four-seamer, if thrown from the 12 o'clock position, would have only backspin, which would show up on our graph as a spin direction of 180 degrees. If the pitcher drops down to a 3/4 delivery, the four-seamer gets a little sidespin component, and the spin direction shifts to the neighborhood of 210 degrees. For example, we saw Volquez's four-seam fastball right in this area, in the range from 200-220 degrees, and Papelbon's four-seamer was in a similar range of spin directions, 195-225 degrees.

The two-seam fastball is thrown with the fingers along the seams, with the middle finger applying pressure to the ball to produce sidespin. If the two-seamer were thrown from the 12 o'clock position, we would expect to see a spin direction greater than 180 degrees by an amount dependent on how much sidespin the pitcher applied to the ball. From a 3/4 delivery, the spin direction would shift over to a greater angle by another 30 degrees or so. Greg Maddux's two-seamer had spin directions in the range 215-265 degrees (a wide range consistent with his reputation of varying the movement on his fastball), Papelbon's two-seamer was at 210-235 degrees, and Volquez's two-seamer was at 220-245 degrees.

For a pure underhand pitcher with a delivery from 6 o'clock, the classic four-seam fastball with pure backspin (now switched to pure topspin by the change in delivery) would have a spin direction of 0 degrees (equivalent to 360 degrees). For a more realistic delivery from 5 o'clock, which appears to be consistent with pictures I can find of Bradford's motion, the spin direction would shift back by about 30 degrees to 330 degrees.
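The clock-position arithmetic in the last few paragraphs is regular enough to write down as a one-line rule of thumb. Here is a small sketch (my own simplification, which ignores the wrist-pronation effects discussed in the text) that predicts the spin direction of a pure-backspin four-seamer from the arm slot, at 30 degrees per clock hour; it reproduces the 180-, 210-, 330-, and 360-degree figures quoted above.

```python
def fourseam_spin_direction(arm_slot_hours):
    """Expected spin direction (degrees) of a pure-backspin four-seam fastball,
    given the arm slot as a clock position: 12 = over the top, roughly 1 = a 3/4
    delivery, 5-6 = submarine. Each clock hour tilts the spin axis 30 degrees.
    Simplified: real wrist pronation shifts these numbers, as noted in the text."""
    hours_from_12 = arm_slot_hours % 12   # 12 o'clock counts as 0 hours
    return (180 + 30 * hours_from_12) % 360

print(fourseam_spin_direction(12))  # 180: over the top
print(fourseam_spin_direction(1))   # 210: roughly a 3/4 delivery
print(fourseam_spin_direction(5))   # 330: Bradford's near-submarine slot
print(fourseam_spin_direction(6))   # 0 (i.e. 360): true underhand
```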
If Bradford were applying pressure to the ball to produce the sidespin of a two-seamer, that would tend to move the spin direction back toward 360 degrees because of the direction that the human wrist pronates. Instead, we see a spin direction of 295-330 degrees, consistent with a delivery between 5 and 6 o'clock and little or no sidespin applied to the ball. Therefore, I conclude he is throwing a four-seam fastball.

This conclusion squares with one we could have made via logic alone, without regard to the data. Pitchers generally attempt to throw their fastball as hard as they can, which is accomplished with the four-seam grip. They use different grips and accept slower speeds for the purposes of movement or deception. Because of his underhand motion, Bradford gets plenty of sink on his fastball without needing to sacrifice speed to put sidespin on the ball.

Here's the chart of vertical and horizontal movement. His fastball sinks 5 to 12 inches, which is just incredible: almost as much downward break as a Barry Zito curveball! His fastball also moves in on the hands of a right-hander by 7 to 13 inches, which is comparable to the best two-seam sinkers thrown overhand from guys like Brandon Webb and Derek Lowe. You wonder why right-handers can't hit this guy? My guess is that's why. And it's all attributable to the spin direction, which in turn comes from the underhand delivery, and not to the speed of the pitch, which is only in the low 80's.

Finally, we come to the last question for this post: is that a curve or a slider? It has the horizontal break of a good curveball, and sometimes a little more. It has a vertical break somewhere between a typical slider and curveball. Which is it? We already know that Bradford's unorthodox delivery can significantly affect the movement on a pitch. For the answer, we turn back to the spin direction graph.

Bradford's breaking pitch checks in with spin directions in the range of 75-130 degrees. That's not too different from an overhand curveball delivered from the 1 o'clock position, although not a perfect fit. A curveball with pure topspin delivered from the 12 o'clock position has a spin direction of 0 degrees. Dropping down to the 1 o'clock position shifts the spin direction to 30 degrees. However, the pronation of the actual human wrist can't deliver pure topspin to the ball without also imparting some sidespin that pushes the spin direction up to higher angles. For example, Maddux's curve lands in the 50-100 degree range of spin direction, Volquez throws his curve at 35-80 degrees, and the few curves we saw from Papelbon ranged from 40-80 degrees.

A theoretical underhand 6 o'clock curveball delivered with pure topspin (now switched to pure backspin by the change in delivery) would have a spin direction of 180 degrees. Coming back up to the 5 o'clock position would shift the spin direction back up to 150 degrees. The sidespin imparted by pronation would move us back toward 180 degrees; however, we don't see Bradford's breaking pitch around 180 degrees.

So Joe Slider, our blog turns its lonely eyes to you. Woo, woo, woo… What's that you say, Kerry Robinson? Slidin' Joe has left and gone to the plate. Hey, hey, hey…hey, hey, hey!

Sorry for that, I needed a distraction. This post is getting a little long in the tooth.
I now return you to "The Slider Teaches Johnny to Pronate", already in progress…

A pitcher throwing a typical slider overhanded applies sidespin in the opposite direction of a two-seamer, as if it were trying to become a topspin curveball, which you might say it was. In the process, it also gets a significant spin component around the direction of travel, which we ignore in terms of affecting the break of the pitch. The significant sidespin of the slider would put its spin direction around 90 degrees in theory, 120 degrees if it's thrown from 1 o'clock, but in reality the pronating wrist ends up turning more toward the backspin of the fastball than the topspin of the curveball, and our typical slider ends up at slightly higher angles than 90 or 120 degrees. Checking back on our previous pitch classifications, a Greg Maddux slider falls in the range of 130-175 degrees, and Papelbon's slider is in the range 140-190 degrees.

If Bradford were throwing a slider from the 5 o'clock position, at approximately what angle would we expect to see it spin? If he were throwing from the 6 o'clock position and getting pure sidespin on a slider, we'd expect it to have a spin direction of 270 degrees. Moving his delivery toward 5 o'clock moves the spin direction on the slider toward 240 degrees. But then the wrist action tends to pull it back toward the fastball, maybe in the neighborhood of 280 degrees.

Huh. We don't see Bradford's breaking pitch centered anywhere close to that. In fact, it's centered around a spin direction of 100 degrees, closer to the 180 degrees we'd expect from a submarine curveball than the 280 degrees we'd expect of a submarine slider, but still not a great match for the curveball. In fact, it seems to have a strong screwball component, which makes some sense out of the movement we see on the vertical/horizontal break, but makes no sense to me in terms of why Bradford would throw a screwball or screwball-like pitch rather than a curveball/slider. Screwballs are hard to throw; they're hard on your arm, which makes no sense for Bradford, who already has to deal with back pain from his submarine delivery. Nobody reports Bradford throwing a screwball. Maybe this is because it moves more like a traditional overhand slider or curve, but more likely it's because he doesn't throw a screwball. Maybe some of you can help me by finding an error in my calculations or reasoning or pointing me to a more authoritative scouting report on Bradford. Until then, I'll have to leave his third pitch as a mystery.

Update: I've done more reading on Bradford and the underhand delivery, and I'm finding some support for the idea of Bradford throwing a screwball. Most people don't call it that because it doesn't move like a screwball from an overhander, but it appears I may not be as off base as I thought I was. From the Sporting News: "Chad Bradford is a submariner with a tough slider and a circle changeup. Because of his delivery, Bradford essentially twists his hand to the left on each pitch, turning the ball over–almost a screwball-type action." Some of you who know more about pitching motions and grips might be able to make something out of the pictures in Chris O'Leary's analysis of Bradford.

Once again, Josh Kalk has some good things brewing on his blog. He's working on a clustering algorithm to distinguish pitch types for all pitchers, and he has player cards up for almost 300 pitchers. He's seeking input to improve his algorithm from the first pass.
According to Josh, one of the worst performances of the algorithm was for Greg Maddux, so I thought I'd try out my tools on Maddux and see what I found in terms of pitch types. My conclusion is that Maddux throws mostly two-seam fastballs (67%), a lot of changeups (21%), some cut fastballs (10%), and an occasional slider (1%) and curveball (1%). This more or less agrees with scouting reports, although you can find mention of Maddux throwing just about every pitch under the sun other than the knuckleball. For example, his Wikipedia article will tell you Maddux throws the splitter and the screwball, but I found no evidence for either in the PITCHf/x games in the 2007 season.

On to the graphs… Let's start with what's fast becoming my bread and butter, the pitch speed versus spin direction graph. I've color-coded the pitches in this graph based on my conclusions from all the data. I don't claim that they are all easily identifiable based solely on this first graph. (All pitch speeds are normalized to y0 = 50 feet.)

The curveball is the easiest to identify. At 70-76 mph, it is the slowest pitch, and it's the only one with topspin, with a spin direction of 50-100 degrees. The slider is also fairly easy to distinguish, although I had to work a little at the exact boundaries between it and the cutter and changeup. The slider runs 79-83 mph, with mostly backspin and a little sidespin, corresponding to a spin direction of 120-180 degrees.

Maddux's three main pitches are a bit tougher to separate. Let's start with the changeup, whose most prevalent characteristic is its slower speed, 78-83 mph, with similar spin direction to the fastballs. It has a spin direction ranging from 190-270 degrees, from mostly backspin to mostly sidespin.

Next, let's go to the fastballs, the cutter and the two-seamer. From the speed vs. spin direction graph, you can tell that there are probably two separate fast pitches, but it's hard to tell exactly where the line between them would go. Also, we can see another interesting fact. Usually the harder fastball is a four-seamer on the left, with more backspin (i.e., closer to 180 degrees), and the slower fastball is a two-seamer on the right, with more sidespin (i.e., shifted somewhat toward 270 degrees). Maddux doesn't have that arrangement. His harder pitch is on the right, which is where the two-seamer should be. Based on scouting descriptions of Maddux's repertoire, his main fastball is in fact a two-seamer, and he also throws a cut fastball. A cut fastball is usually a little slower than a four-seamer or two-seamer, so that correlates with our mystery pitch on the left half of the fastball grouping. In addition, the cut fastball typically has some slider-like characteristics, so it makes sense that the cut fastball would be found toward the slider side of the spin direction (i.e., toward 180 degrees and lower angles). So I think there's good evidence to believe we've identified a two-seam fastball and a cut fastball on the graph.

Finding the boundary between the two will take us on a tour through some other graphs which will also help us define the changeup a little better and reaffirm our identification of the slider and curveball. First, let's visit an old standby graph, the speed versus horizontal break. In this graph, the curveball is clearly evident again, as the slowest pitch with the most break away from a right-handed hitter. There is a group of pitches 78-83 mph also with a positive horizontal break, although slightly less break than the curveball.
This, of course, is a signature of the slider. The changeup group is pretty readily identifiable here as the pitches 78-83 mph with negative horizontal break (in toward a right-hander). We can see some thinning out between the two-seamer group on the right and the cutter on the left of the fastest pitches. The boundary between the two is very smeared and hard to delineate on this graph. At least some of that smearing may come from the fact that we have three different y0 initial distances represented in our data set, and the closer y0 is set to home plate, the less break we will measure on the pitches.

I don't see any evidence for a screwball on this graph. A screwball should be a very slow pitch, like a curveball, but breaking in on a righty, opposite of the curve. There are no pitches slower than 77 mph on the left side of the graph, hence, no sign of a screwball.

Next on tour comes the speed versus spin rate graph. It turns out that spin rate is a very useful parameter in helping us separate two-seamers from cutters and changeups. The two-seamer generally has a faster spin rate, mostly in the 1500-2500 rpm range, but with some significant tails at both ends. The cutter and the changeup have slower spin rates, mostly in the 500-2000 rpm range. There is some overlap between the spin rates of different pitches, but it is a helpful tool in our pitch classification toolbox.

Another graph we can make is spin rate versus spin direction, and this one is useful mainly for identifying two-seam fastballs that might otherwise look like borderline changeups. I didn't find it very helpful in telling the other pitches apart. It's possible there are some splitters hiding out on the extreme right edge of this graph in what I've labelled as changeups. If there were a separate grouping hanging out farther to the right, as Papelbon's splitter did, I'd tend to believe they were splitters, but at this time, apart from any other evidence, I don't see a reason to believe they aren't all just changeups.

Finally, we can look at vertical break versus horizontal break. This is the graph presented in Josh Kalk's player card for Greg Maddux. You can see the great big blob that his algorithm couldn't separate. I've nicely color-coded the pitches, so you can see there's some order left-to-right, but they're still pretty much a mess. Ugh! Other than the curveball, I wouldn't try to pick anything out of that graph alone. However, it was useful in identifying a few more two-seamers that were trying to masquerade as changeups.

So that's the story on Maddux and his five pitches. The speed vs. spin direction graph once again comes through as the star, but this time it needed more help from its friends. Of course, there are more interesting things in the data, for Maddux or for any pitcher, beyond just classifying their pitch types. Which pitches does Maddux prefer to throw to lefties or righties? Answer: he throws the cutter more to lefties (15%) than to righties (6%), and with righties he relies more on his two-seamer instead. Which pitches does he throw in various counts? How does he locate them? Which pitches get swings and misses and which ones see more contact? Which pitches turn into home runs most often? Et cetera. These will be left as an exercise to the reader. Or, what the heck, do what I do and move on to another topic before this one is even cold.
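Pulling those boundaries together, a first-cut rule-based classifier for Maddux might look like the sketch below. The cutoffs are my own reading of the ranges above; they are fuzzy in the data, so treat them as illustrative, not as the clustering algorithm Josh is building.

```python
def classify_maddux(speed_mph, spin_dir_deg, spin_rpm):
    """First-cut rules from the ranges read off the graphs above.
    Boundary values are approximate; real pitches straddle them."""
    if speed_mph <= 77 and spin_dir_deg <= 110:
        return "curveball"            # 70-76 mph, topspin (50-100 degrees)
    if speed_mph <= 84:
        if spin_dir_deg < 185:
            return "slider"           # 79-83 mph, 120-180 degrees
        return "changeup"             # 78-83 mph, 190-270 degrees
    # The fastballs: the two-seamer spins faster and sits at higher angles,
    # the cutter spins slower and leans toward the slider side (~180 degrees).
    if spin_rpm >= 2000:
        return "two-seam fastball"    # mostly 1500-2500 rpm
    return "cut fastball"             # mostly 500-2000 rpm, nearer 180 degrees

print(classify_maddux(73, 80, 1200))   # curveball
print(classify_maddux(86, 230, 2300))  # two-seam fastball
```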
I've been impressed for a while by the analysis that Steve West does of the Rangers' pitchers using PITCHf/x data. His recent article about Edinson Volquez made me eager to try out my new spin rate toy on Volquez's fastballs. Sure enough, the speed versus spin direction graph clearly shows the difference between the "rising" four-seam fastball and the "sinking" two-seam fastball. It's a difference that's visible in the horizontal and vertical break graphs, too, and Steve comments that there may be two separate fastballs hiding in the data, but using spin direction we can see them plain as day.

The four-seamer runs 91-96 mph with mostly backspin and a little sidespin, corresponding to a spin direction of 200-220 degrees. The two-seamer runs 90-95 mph with a larger component of sidespin, corresponding to a spin direction of 220-245 degrees. This graph also shows the changeup sitting at 80-85 mph with the spin direction varying from mostly backspin (210 degrees) to all sidespin (270 degrees). I left the curveball off this graph so that we could see the difference between the fastballs a little better, but the curve runs 77-82 mph with mostly topspin and some sidespin, corresponding to a spin direction of 35-80 degrees.

The spin rate graph confirms the diagnosis of the four-seam and two-seam fastballs, showing that the two-seamer has slightly slower spin. Again, I left the curveball off the graph, but its spin rate ranges from 600-1600 rpm, similar to the changeup.

Finally, I want to take a look at the vertical and horizontal break on the pitches and show that our four pitch groupings do show up on that graph, too. The four-seamer has a vertical break of +8 to +13 inches and a horizontal break of -3 to -7 inches. The two-seamer has a vertical break of +5 to +9 inches and a horizontal break of -7 to -11 inches, consistent with the "sinking" nature of the two-seam fastball. The changeup has a vertical break of 0 to +5 inches and a horizontal break of -3 to -8 inches, and the curveball has a vertical break of -2 to -7 inches and a horizontal break of +3 to +7 inches.

What little scouting information I could find on Edinson Volquez agreed with the diagnosis of four pitches: a four-seam fastball, a two-seam sinking fastball, a changeup, and a curveball.

I've tied up some of the loose ends from my first post on Jonathan Papelbon, and I wanted to share those findings here.

First, I mentioned that polar plots were preferable to standard x-y plots when graphing spin direction, measured in degrees. I've decided that is untrue. The polar plot does a better job of conveying the idea that we're looking at an angle, but the x-y plot does a better job of spreading the data out for examination, and that seems to be more important for pitch classification.

Second, at the time of the first article, I had not classified the major pitch types. Now I have, and what I found was consistent with the scouting reports. Papelbon's main pitches are his four-seam and two-seam fastballs. Against lefties, he uses his split-fingered fastball as his offspeed pitch, and against righties, he uses the slider as his offspeed pitch, mixing in a few curves and a few splitters (and slutters?). I didn't find any strong evidence of a slider/cutter hybrid, the semi-famous "slutter". I found two pitches sitting on the edge of the slider group that had a much higher spin rate. (Actually, the slider often has a significant component of spin with its axis along the direction of travel, which we cannot measure. These pitches may have the same spin rate as the slider but as sidespin rather than spin along the direction of travel.)
These might be the slutter, but with only two instances, I'm hesitant to say. Without further ado, here is the graph reprised from the previous post, with pitch types marked. Also, pitches to lefties are shown in red, and pitches to righties are shown in green.

The four-seam fastball is thrown the hardest, running 94-99 mph. It has the highest spin rate, about 2800 rpm. This is mostly backspin but also some sidespin that suggests it's coming from a 3/4 delivery. It rises the most compared to a theoretical pitch without spin, about 12 inches on average, and rides in on a right-handed hitter by about 7 inches.

The two-seam fastball is thrown slightly slower, roughly 91-95 mph, with a slightly slower spin rate, about 2200 rpm. This pitch has about equal amounts of backspin and sidespin. It "rises" a little less than the four-seamer, only about 9 inches, and rides in on a righty by about 8 inches.

The split-fingered fastball is thrown about 84-91 mph, and has a much slower spin rate, between 1000 and 2000 rpm. The splitter has a significant amount of sidespin. Compared to the other fastballs, it has a large drop. Compared to a pitch without spin, it rises about 2 inches and breaks away from a lefty by about 8 inches.

The slider is thrown about 81-88 mph, and has a very slow spin rate in the x-z plane that we measure. This is mostly backspin, but as mentioned before, much of the actual spin is probably around the direction of travel, as is typical for a slider. The slider "rises" by about 5 inches compared to a pitch without spin and breaks away from a righty by an inch or two.

The curveball is thrown 78-81 mph, and has a very slow spin rate of about 600 rpm. This is mostly topspin, with a component of sidespin. The curveball drops a couple of inches and breaks away from a right-hander by about 3 inches.

The two slutters, if in fact that is what they are, were thrown at 85 and 92 mph, with a spin rate of 2200 rpm. They had mostly backspin, rose 11-12 inches compared to a non-spinning pitch, and broke in on a righty by 0-2 inches.

Finally, moving on to a third topic, I found that the "release point" (initial position) data was not very helpful in classifying pitch types. The groupings we saw in the previous article were due to park variations. The park-to-park dependence washes out anything else. Josh Kalk's work on correcting for park variance is going to be very important. To drive home that point, here is the initial horizontal position, x0, for all of Papelbon's pitches since the All-Star Break, when the y0 distance was set to 50 feet. Notice the strong dependence on park above all else. I'm not completely sure if the data is consistent even within one park. Note how the initial position has increased between the first stint in Boston and the second stint in Boston. It's possible Papelbon changed his release point during this time, but given the other inconsistencies in the data set, my confidence that it's not a system change is not very high.

Things have been slow around here lately due to some other life circumstances, so I haven't gotten back to the Brandon Webb analysis, nor have I posted some of my other work. I've been fiddling with some ideas from Dr. Alan Nathan's paper on modeling pitch trajectories from a Jon Lester start, but I haven't had the time to get them into the final format or chase down all the loose ends, of which there are many. Josh Kalk encouraged me to post my preliminary work, anyway, for both my own benefit and that of the analysis community, and he has a good point.
So what you have here are some of my experimental ideas tested out on Boston Red Sox pitcher Jonathan Papelbon. I originally started this expedition in search of his so-called "slutter" (slider/cutter hybrid), and it may well be in the data, but one of the loose ends is that I haven't differentiated his harder-thrown pitches yet. It seems like a pretty big loose end to not have accomplished my main goal, but perhaps more important than whether Papelbon throws a slutter and whether I can identify it from the PITCHf/x data is the detour I followed in the process of looking.

In Dr. Nathan's paper, he solves the equations of motion for each pitch Jon Lester threw and uses this data to determine the spin direction and spin rate of the baseball. Traditionally (if something in its first year of existence can have a tradition), people have classified pitches using the speed, horizontal break, and vertical break from the PITCHf/x data. Dr. Nathan's method holds promise as an alternate viewpoint on the data, classifying a pitch based on what the pitcher does to the baseball as opposed to classifying it based on how the pitched baseball moves from the perspective of the batter/catcher/umpire. In addition, Dr. Nathan pointed out to me that the spin direction and spin rate are independent of the y0 "release point" measurement distance, which has been set first at 55 feet, then 40 feet, and now 50 feet during different parts of the season. Since the spin parameters are invariant across y0, it's much easier to classify pitches for a whole season's data set, with only the start_speed needing to be adjusted based on changes in y0.

The method presented in the paper is based on fitting the pitch trajectory to the data using iterative numerical analysis. However, Dr. Nathan also presents some approximations for the Magnus force and drag force which he says yield comparable results to his more accurate method. I took these approximations and solved for the spin direction and spin rate. I then applied these new simpler equations to the same Jon Lester data and confirmed that I achieved very similar results to those presented in the paper.

Spin direction (in degrees): $\theta = \arctan\left(\frac{a_z + 32.174}{a_x}\right) \cdot \frac{180}{\pi}$, then add 270 degrees if $a_x < 0$ or 90 degrees if $a_x > 0$.

Spin rate (in rpm): $\omega = \frac{\sqrt{a_x^2 + (a_z + 32.174)^2}}{|v_{y0}| \cdot 0.121 \cdot 0.00544 \cdot 2\pi} \cdot 60$

(I'm in the process of figuring out how to use LaTeX in WordPress to make the formulas look nice.)

Next, I decided to test these equations out on Papelbon to see if I could classify his pitches. Here we come to another loose end. Graphs with angles beg to be put on polar plots, for more reasons than one. Excel does not do polar plots without jumping through a lot of hoops, which is another reason I turned to R for graphing. However, I haven't learned how to do polar plots in R yet. So, I present these Papelbon graphs in Excel and apologize for the poor formatting. It obscures some of the information I want to communicate about the value of classifying pitches by spin direction, but I think I can still make my point.

First we have pitch speed plotted versus spin direction. Note that this data set includes all three possible y0 values, yet the pitches are grouping nicely. I have normalized the start_speed values in this graph to compensate for y0 values different than 50 ft.
The normalization factor was based on extrapolating or interpolating to 50 ft based on the start_speed and end_speed values and assuming the y deceleration due to drag was constant. Also, I cut the graph off at 300 degrees in order to better see most of the data. In doing so, I excluded two outlier pitches at around 350 degrees with a speed of 97 mph. I think there is some sort of error in the data with those two pitches, but if you run the analysis yourself, you'll see them there.

Compare that to the mess we get when we plot pitch speed versus horizontal break, once again including data from all three of the y0 values. A typical technique to clean up the speed versus horizontal break graph, besides separating the data by y0 measurement points, is to plot the vertical break, either on the same graph as a second set of points, or to make a separate graph showing vertical break versus horizontal break and indicating speed groups by color. Ignoring the y0 problem for the moment, that's usually sufficient to tease out the different pitch types, although rising and sinking fastballs can still be tough to differentiate. However, using our first graph, we don't have to do any of that.

We also have spin rate data, but I found, at least for Papelbon, that it has a high correlation with pitch speed ($R^2 = 0.68$). The correlation isn't perfect, but it's high enough that I didn't find much helpful information in the way of classifying pitches by looking at spin rate versus any of the other parameters on the whole. However, spin rate can sometimes be helpful in deciding how to classify an individual pitch on the borderline between groups in the speed versus spin direction graph. Intuitively this conclusion makes sense to me when I think about how a fastball is thrown. The harder you throw it, the more backspin you will put on the ball.

There are (at least) two other avenues for exploration in terms of pitch classification. One is separation of the data by handedness of the batter. When you do this for Papelbon, you can clearly see that there are some pitches he prefers to throw to righties and some to lefties. I haven't spent much time actually trying to figure out which pitches are which, so the story isn't quite as fun. (Is there a slutter in there?) Hopefully I can come back and do that. Nonetheless, I think the main point is made.

The remaining avenue for investigation is classifying pitches by "release point", which is really just the initial measurement point for the PITCHf/x system. The following graphs show z0, the vertical distance from the ground to the pitch, versus x0, the horizontal distance from the centerline of home plate, with the negative direction to the catcher's left. Here I found it interesting to see the graphs separated by y0. Measured at 55 feet from home plate, you can hardly tell anything apart. When y0 is set to 50 feet, we can start to see some separation into groups. I haven't correlated these groups to the groups we saw above (another of those niggling loose ends), but clearly there's something worthwhile in this data. The different pitches are starting to move a little differently and we can pick that up a few feet out of the pitcher's hand. When we go to 40 feet, the separation of the groups gets a little better, but it appears that groups might also be getting a little smeared or stretched out. I don't know if that was one of the factors in MLB/Sportvision settling on the 50-foot measurement distance, but in any case it works out well for us in this instance.
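For anyone who wants to compute these at home, here is a literal transcription of the two approximations above into Python (my transcription, not code from Dr. Nathan's paper). The inputs ax, az (ft/s²) and vy0 (ft/s) are the raw PITCHf/x fields; the example values at the bottom are made up, chosen only so the outputs land near the ranges quoted earlier for Papelbon's four-seamer (roughly 195-225 degrees and about 2800 rpm).

```python
import math

G = 32.174  # gravitational acceleration, ft/s^2

def spin_direction(ax, az):
    """Spin direction in degrees from the PITCHf/x accelerations ax and az
    (ft/s^2): arctan((az + g)/ax) * 180/pi, plus 270 degrees if ax < 0
    or 90 degrees if ax > 0, exactly as in the formula above."""
    theta = math.degrees(math.atan((az + G) / ax))
    return theta + (270.0 if ax < 0 else 90.0)

def spin_rate(ax, az, vy0):
    """Spin rate in rpm from ax, az and the initial y velocity vy0 (ft/s)."""
    magnus = math.sqrt(ax ** 2 + (az + G) ** 2)
    return magnus / (abs(vy0) * 0.121 * 0.00544 * 2.0 * math.pi) * 60.0

# Made-up values of roughly four-seam fastball scale (vy0 of -138 ft/s is ~94 mph):
print(round(spin_direction(-12.0, -8.5), 1))    # ~206.9 degrees
print(round(spin_rate(-12.0, -8.5, -138.0)))    # ~2790 rpm
```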
So, there you have it. There are a lot of loose ends yet to chase in this little voyage through Jonathan Papelbon's repertoire. Hopefully there is also something useful to be gained from what I've found so far. Please feel free to comment either on the work I've presented, on the suggestions I've made for further investigation, or on your own ideas for what can be done with this data.

Part 2 of the Papelbon article can be found here.
2018-04-21 17:10:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5079286098480225, "perplexity": 1569.9994430321435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945272.41/warc/CC-MAIN-20180421164646-20180421184646-00421.warc.gz"}