https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;a9d7aed5.9806 | ## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
Subject: Re: Modules
Date: Thu, 25 Jun 1998 14:58:08 +0100
Content-Type: text/plain (15 lines)
> > I guess there will be a command called \tex_def or \tex/def or something, which one can use.

Indeed, that was my very point: David was suggesting that \def could be made inaccessible by the format; I argued that as long as TeX lacks the equivalent of PostScript's "bind", primitives which are used by macros defined in the format source, and which must be accessible to the user code, can never be made totally inaccessible. You can "hide" them but you can't remove them, so why bother even to hide them if an astute programmer can work his way around the hiding mechanism?

** Phil. (The earlier proposal to use commercial-at is just one way of hiding such things, neither better nor worse, in this context, than any other mechanism.)
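The "commercial-at" mechanism mentioned here can be sketched in a few lines of LaTeX, along with the workaround Phil alludes to; the macro name below is purely illustrative:

```latex
% Format source: "hide" an internal macro by giving @ letter catcode
\makeatletter
\def\my@internal{secret}% user documents cannot type this name directly
\makeatother

% An astute programmer simply restores the catcode and reaches it anyway:
\catcode`\@=11
\show\my@internal  % the "hidden" macro is fully accessible again
```

Because the catcode trick is itself available to user code, the hiding is cosmetic, which is exactly the point being argued above.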
https://physics.stackexchange.com/questions/561712/difference-between-chronological-future-and-domain-of-dependence | # Difference between chronological future and domain of dependence
Given the following standard definitions of the two concepts, I fail to see how the chronological future differs from the future domain of dependence.
Chronological future: The chronological future of $S$, denoted $I^+(S)$, is the set of points that can be reached from $S$ by following a future-directed timelike curve.

Future domain of dependence: Let $(\mathcal{M},g)$ be a spacetime and $S \subset \mathcal{M}$ such that no two points of $S$ can be connected by a timelike curve (we also say that $S$ is achronal). The future domain of dependence of $S$, denoted $D^+(S)$, is the set of all points $p \in \mathcal{M}$ with the property that every past-directed inextendible (i.e. with no endpoints) timelike curve starting at $p$ intersects $S$.
• It is the word every that makes them different. In the first definition it says that a point can be reached by a timelike curve, but it doesn't mean that every timelike curve that goes through it must also go through S. – MBN Jun 25 at 12:03
The chronological future is the light cone. It's a cone that expands outwards as time goes on, because it consists of all the events that $S$ can influence. In other words, the events in $I^+(S)$ have at least one point of $S$ in their past.

On the other hand, the domain of dependence, also called the Cauchy development, consists of those events that are completely determined by what happens in $S$, because all of their past intersects $S$. This means that it's a cone that closes up: as you go far enough into the future you can't predict anymore, because information from events outside $S$ has had enough time to arrive.
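A concrete sketch (my own example, not from the original answers): in 1+1-dimensional Minkowski space, take $S$ to be the achronal segment $\{(0,x) : |x| \le 1\}$. Then

```latex
I^{+}(S) = \{(t,x) : t > 0,\ |x| < t + 1\}
  \quad\text{(an ever-widening cone)},
\qquad
D^{+}(S) = \{(t,x) : 0 \le t \le 1,\ |x| \le 1 - t\}
  \quad\text{(a triangle closing at } (1,0)\text{)}.
```

Every point with $t > 1$ on the axis still lies in $I^+(S)$, but no such point lies in $D^+(S)$: from there, a past-directed timelike curve can dodge $S$ entirely by crossing $t = 0$ at some $|x| > 1$.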
https://gateoverflow.in/57917/%23operating-system-%23paging
I want to split a page table into two levels (outer and inner). The single-level page table has $2^{20}$ entries, the page size is $2^5$ bytes, and each page-table entry is 4 bytes.
What should be the number of entries in the outer page table, and what is the formula?
The single-level page table has $2^{20}$ entries, and the size of each entry is 4 B.
Hence the size of the first-level page table is $2^{20} \times 4\,\text{B} = 4\,\text{MB}$.
Page size = 32 B.
Therefore the page table occupies $4\,\text{MB} / 32\,\text{B} = 2^{17} = 128\text{K}$ pages, and since the outer table needs one entry per page of the inner table, the outer page table has 128K entries.
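The arithmetic can be checked mechanically; this is a hedged sketch with the values from the question (variable names are mine):

```python
# Two-level paging: how many entries does the outer page table need?
entries = 2 ** 20          # entries in the single-level page table
entry_size = 4             # bytes per page-table entry
page_size = 2 ** 5         # 32-byte pages

table_size = entries * entry_size         # 2^22 B = 4 MB
outer_entries = table_size // page_size   # one outer entry per page of the table

print(outer_entries)  # 131072, i.e. 128K
```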
http://www.gradesaver.com/textbooks/math/statistics-probability/elementary-statistics-12th-edition/chapter-6-normal-probability-distributions-6-7-normal-as-approximation-to-binomial-basic-skills-and-concepts-page-313/23 | ## Elementary Statistics (12th Edition)
p = 0.2, so $q = 1 - p = 1 - 0.2 = 0.8$.
$n p = 1000 \cdot 0.2 = 200 \ge 5$ and $n q = 1000 \cdot 0.8 = 800 \ge 5$, hence the requirements for the normal approximation are satisfied.
Mean: $\mu = n p = 1000 \cdot 0.2 = 200$.
Standard deviation: $\sigma = \sqrt{n p q} = \sqrt{1000 \cdot 0.2 \cdot 0.8} = 12.65$.
With the continuity correction, 170.5 is the boundary for "fewer than 17%" (17% of 1000 is 170), hence: $z = \frac{x - \mu}{\sigma} = \frac{170.5 - 200}{12.65} = -2.33$.
From the table, the probability belonging to z = -2.33 is 0.0099, hence the probability is 0.0099. This probability is very close to 0, hence the evidence is strong.
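The same computation can be reproduced numerically with only the standard library; a small sketch (the 0.0099 above comes from a z-table, so the exact CDF value differs slightly):

```python
import math

n, p = 1000, 0.2
q = 1 - p
mu = n * p                    # 200.0
sigma = math.sqrt(n * p * q)  # ~12.649

# Continuity correction: "fewer than 17%" of 1000 means x < 170, so use 170.5
z = (170.5 - mu) / sigma      # ~ -2.33

# Standard normal CDF expressed via the error function
prob = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(round(z, 2), round(prob, 4))  # -2.33 0.0098
```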
https://extensionsq.xwiki.org/xwiki/bin/view/Documentation/DevGuide/DataModel/?viewer=changes&rev1=16.4&rev2=16.5 | Changes for page Data Model
From version edited by BrianJones on 2008/10/23
To version edited by Vincent Massol on 2009/08/13
Change comment: removed striked out text
Details

Page properties

Author
@@ -1,1 +1,1 @@
-XWiki.BrianJones
+XWiki.VincentMassol
Content
@@ -29,7 +29,6 @@
 * User List
 * Group List

---You may find the labels for the property types somewhat confusing because in the XWiki Class Editor, they also contain the word 'Class'. For example, you will see Number Class, String Class, TextArea Class, and so forth. It is best if you just ignore the word 'Class' and think of them as simple property datatypes in the traditional sense for now.-- (this was fixed starting with version 1.4)

 *Objects*

 Objects are unique instances of a class with unique values defined for each of the properties that make up the class.

 An object is attached to a specific page. Each page can have multiple objects.
Date
@@ -1,1 +1,1 @@
-2008-10-23 17:13:18.715
+2008-10-23 17:13:18.0

Date
@@ -1,1 +1,1 @@
-2008-10-23 17:16:39.154
+2008-10-23 17:16:39.0

Date
@@ -1,1 +1,1 @@
-2008-10-23 17:27:13.867
+2008-10-23 17:27:13.0
https://www.pks.mpg.de/research/highlights | # Highlights
Institute's News
### Call for Distinguished PKS Postdoctoral Fellowship 2022 now open!
Application deadline: 24 November 2021. Distinguished PKS postdoctoral fellows are listed personally alongside the departments and groups on the institute's main research page and are expected to have at least one year of postdoctoral experience at an institution other than the one at which their PhD was awarded. Applications for this fellowship directly after completion of the PhD may be considered in exceptional cases. Please click on the link button to see the full advertisement!
Institute's News
### New Research Group - The Physics of Living Matter
We cordially welcome Ricard Alert at the institute! Ricard joins MPI-PKS from Princeton University and establishes the research group "The Physics of Living Matter". The group aims at uncovering physical principles of living matter. In particular, it will develop the physics of active matter to understand collective behaviors in cells and tissues. The group's research topics include collective cell migration, self-organization in bacterial colonies, active turbulence, mechanochemical patterns in tissues, mechanically-regulated tissue growth, and active fluctuations in cells. Ricard is also a member of the Center for Systems Biology Dresden and strengthens the collaboration of MPI-PKS with experimental groups at MPI-CBG.
Publication Highlights
### Reinforcement Learning for Digital Quantum Simulation
Digital quantum simulation on quantum computers provides the potential to simulate the unitary evolution of any many-body Hamiltonian with bounded spectrum by discretizing the time evolution operator through a sequence of elementary quantum gates. A fundamental challenge in this context originates from experimental imperfections, which critically limits the number of attainable gates within a reasonable accuracy and therefore the achievable system sizes and simulation times. In this work, we introduce a reinforcement learning algorithm to systematically build optimized quantum circuits for digital quantum simulation upon imposing a strong constraint on the number of quantum gates. With this we consistently obtain quantum circuits that reproduce physical observables with as little as three entangling gates for long times and large system sizes up to 16 qubits. As concrete examples we apply our formalism to a long-range Ising chain and the lattice Schwinger model. Our method demonstrates that digital quantum simulation on noisy intermediate scale quantum devices can be pushed to much larger scale within the current experimental technology by a suitable engineering of quantum circuits using reinforcement learning.
A. Bolens et al., Phys. Rev. Lett. 127, 110502 (2021).
Publication Highlights
### Machine learning universal bosonic functionals
The one-body reduced density matrix $\gamma$ plays a fundamental role in describing and predicting quantum features of bosonic systems, such as Bose-Einstein condensation. The recently proposed reduced density matrix functional theory for bosonic ground states establishes the existence of a universal functional $F[\gamma]$ that recovers quantum correlations exactly. Based on a decomposition of $\gamma$, we have developed a method to design reliable approximations for such universal functionals: Our results suggest that for translational invariant systems the constrained search approach of functional theories can be transformed into an unconstrained problem through a parametrization of a Euclidian space. This simplification of the search approach allows us to use standard machine learning methods to perform a quite efficient computation of both $F[\gamma]$ and its functional derivative. For the Bose-Hubbard model, we present a comparison between our approach and the quantum Monte Carlo method.
J. Schmidt et al., Phys. Rev. Res. 3, L032063 (2021).
Publication Highlights
### Unitary long-time evolution with quantum renormalization groups and artificial neural networks
We combine quantum renormalization group approaches with deep artificial neural networks for the description of the real-time evolution in strongly disordered quantum matter. We find that this allows us to accurately compute the long-time coherent dynamics of large many-body localized systems in nonperturbative regimes including the effects of many-body resonances. Concretely, we use this approach to describe the spatiotemporal buildup of many-body localized spin-glass order in random Ising chains. We observe a fundamental difference to a noninteracting Anderson insulating Ising chain, where the order only develops over a finite spatial range. We further apply the approach to strongly disordered two-dimensional Ising models, highlighting that our method can be used also for the description of the real-time dynamics of nonergodic quantum matter in a general context.
H. Burau et al., Phys. Rev. Lett. 127, 050601 (2021)
Publication Highlights
### Unraveling forces within the nucleus
Understanding how individual proteins work together to perform complex cellular processes such as transcription, DNA replication, and repair represents a crucial goal in cell biology. Transcription is a process in the nucleus where protein complexes work together to generate transcripts of RNA from genes. For proper transcriptional regulation, enhancers—short strips of DNA—must be brought into close proximity of the gene’s promoter. Given that enhancers and promoters are often located far apart within the genome, the question then arises: how do proteins bring these enhancers and promoters together in space and time? And what are the physics behind it?
Answering these questions would provide deep insights into the proper regulation of transcription in the cell nucleus. However, extracting such information is far from trivial. But recent work from the research group of Jan Brugués at the Max Planck Institute of Molecular Cell Biology and Genetics in collaboration with Frank Jülicher at the MPI for the Physics of Complex Systems has revealed an important clue: Forces. Jan’s lab is also located at the MPI for the Physics of Complex Systems and is affiliated with the Center for Systems Biology Dresden.
Interactions between liquids and solids have long been known to generate forces, such as those maintaining the tension of a spider web or those that allow insects to walk on water. However, whether such forces play a role inside the cell has remained unclear. With the development of precise biophysical methods and advanced imaging techniques, we are getting closer to not only observing such forces but also measuring them.
The Brugués lab imaged the interactions between single molecules of DNA and the transcription factor FoxA1, a protein responsible for determining cell fate in many species. They discovered that FoxA1 molecules brought distant regions of DNA together, generating forces that condensed the DNA. When the single molecule of DNA was stretched tightly — like a tightened elastic band — FoxA1 molecules could not bring DNA together. However, when the DNA molecule was floppy, FoxA1 molecules worked together to condense the DNA, overcoming the DNA’s intrinsic tension. This new information helps paint a clearer picture of the interactions between transcriptional regulators and the surface of the DNA.
Remarkably, the physics underlying these FoxA1-DNA interactions are reminiscent of the forces that maintain the tension of a spider web. Similar to how liquid droplets on a spider web generate forces that reel in broken strands of silk, FoxA1 acts as the liquid phase that condenses DNA and brings it together.
This study demonstrated how proteins work together to generate forces in the cell nucleus. Such a result opens an exciting research direction to understanding other complex processes in the cell. Thomas Quail, post-doctoral researcher in the Brugués lab says: “Our findings set forth a novel mechanism that the cell nucleus may use to organize its chromatin and DNA. It’s possible that these condensation forces generated between solid and liquid surfaces could also be relevant for other cellular bodies such as the mitotic spindle and membranes.”
T. Quail et al., Nature Physics (2021)
Awards and Honors
### Physik-Preis Dresden awarded for the fifth time

On 6 July 2021, the "Physik-Preis Dresden" of TU Dresden and the Max Planck Institute for the Physics of Complex Systems (MPI-PKS) was awarded for the fifth time. The Physik-Preis Dresden 2021 goes to Professor Gijsje Koenderink of Delft University of Technology. Gijsje Koenderink is an outstanding experimental biophysicist with a series of groundbreaking works on cell mechanics and cellular force generation. In recognition of her excellent contributions to the physics of cells, she receives the Physik-Preis Dresden 2021, awarded jointly by the Max Planck Institute for the Physics of Complex Systems and TU Dresden. Her research is of great interest to a number of research groups in Dresden, in particular within the Cluster of Excellence "Physics of Life". Awarding the Physik-Preis Dresden 2021 to Professor Koenderink creates a valuable link between her research group and Dresden research in polymer physics, soft condensed matter, biophysics, and cell biology. The host of the evening, Prof. Dr. Frank Jülicher, director at the Max Planck Institute for the Physics of Complex Systems, was delighted that such a distinguished international researcher was honored. The Physik-Preis Dresden was endowed in 2015 by the Dresden physicist Prof. Peter Fulde, founding director of the MPI-PKS. The laureates are chosen by a joint committee of TU Dresden and the MPI-PKS. Besides the central criterion of scientific excellence, the key consideration is that the laureates' work is of particular significance for the collaboration between the two Dresden-concept partners, MPI-PKS and TU Dresden, and strengthens their connection in the long term.
Publication Highlights
### Hydraulic instability decides who’s to die and who’s to live
In many species including humans, the cells responsible for reproduction, the germ cells, are often highly interconnected and share their cytoplasm. In the hermaphrodite nematode Caenorhabditis elegans, up to 500 germ cells are connected to each other in the gonad, the tissue that produces eggs and sperm. These cells are arranged around a central cytoplasmic “corridor” and exchange cytoplasmic material fostering cell growth, and ultimately produce oocytes ready to be fertilized.
In past studies, researchers have found that C. elegans gonads generate more germ cells than needed. Only half of them grow to become oocytes, while the rest shrinks and die by physiological apoptosis, a programmed cell death that occurs in multicellular organisms. Now, scientists from the Biotechnology Center of the TU Dresden (BIOTEC), the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), the Cluster of Excellence Physics of Life (PoL) at the TU Dresden, the Max Planck Institute for the Physics of Complex Systems (MPI-PKS), the Flatiron Institute, NY, and the University of California, Berkeley, found evidence to answer the question of what triggers this cell fate decision between life and death in the germline. Prior studies revealed the genetic basis and biochemical signals that drive physiological cell death, but the mechanisms that select and initiate apoptosis in individual germ cells remained unclear. As germ cells mature along the gonad of the nematode, they first collectively grow in size and in volume homogenously. In the study just published in Nature Physics, the scientists show that this homogenous growth suddenly shifts to a heterogenous growth where some cells become bigger and some cells become smaller.
The researcher Nicolas Chartier in the group of Stephan Grill, and co-first author of the study, explains: “By precisely analyzing germ cell volumes and cytoplasmic material fluxes in living worms and by developing theoretical modeling, we have identified a hydraulic instability that amplifies small initial random volume differences, which causes some germ cells to increase in volume at the expense of the others that shrink. It is a phenomenon, which can be compared to the two-balloon instability, well known of physicists. Such an instability arises when simultaneously blowing into two rubber balloons attempting to inflate them both. Only the larger balloon will inflate, because it has a lower internal pressure than the smaller one, and is therefore easier to inflate.” This is what is at play in the selection of germ cells: such pressure differences tend to destabilize the symmetric configuration with equal germ cell volumes, so-called hydraulic instabilities, leading to the growth of the larger germ cell at the expense of the smaller one. By artificially reducing germ cell volumes via thermoviscous pumping (FLUCS method: focused-light-induced cytoplasmic streaming), the team demonstrated that the reduction in cell volumes leads to their extrusion and cell death, indicating that once a cell is below a critical size, apoptosis is induced and the cell dies. By using confocal imaging, the researchers could image the full organism of the living worm to receive a global and precise picture of the volumes of all the gonad cells, as well as the exchange of fluids between the cells. Stephan Grill, Speaker of the PoL and director at the MPI-CBG and supervisor of the multidisciplinary work, adds: “These findings are very exciting because they reveal that the life and death decision in the cells is of mechanical nature and related to tissue hydraulics. It helps to understand how the organism auto-selects a cell that will become an egg. 
Furthermore, the study is another example of the excellent cooperation between biologists, physicists and mathematicians in Dresden.”
T. Chartier et al., Nature Physics (2021)
Publication Highlights
### Disorder-Free Localization in an Interacting 2D Lattice Gauge Theory
Disorder-free localization has been recently introduced as a mechanism for ergodicity breaking in low-dimensional homogeneous lattice gauge theories caused by local constraints imposed by gauge invariance. We show that also genuinely interacting systems in two spatial dimensions can become nonergodic as a consequence of this mechanism. This result is all the more surprising since the conventional many-body localization is conjectured to be unstable in two dimensions; hence the gauge invariance represents an alternative robust localization mechanism surviving in higher dimensions in the presence of interactions. Specifically, we demonstrate nonergodic behavior in the quantum link model by obtaining a bound on the localization-delocalization transition through a classical correlated percolation problem implying a fragmentation of Hilbert space on the nonergodic side of the transition. We study the quantum dynamics in this system by introducing the method of “variational classical networks,” an efficient and perturbatively controlled representation of the wave function in terms of a network of classical spins akin to artificial neural networks. We identify a distinguishing dynamical signature by studying the propagation of line defects, yielding different light cone structures in the localized and ergodic phases, respectively. The methods we introduce in this work can be applied to any lattice gauge theory with finite-dimensional local Hilbert spaces irrespective of spatial dimensionality.
P. Karpov et al., Phys. Rev. Lett. 126, 130401 (2021).
### Anatomy of $\mathbb{Z}_2$ fluxes in anyon Fermi liquids and Bose condensates
We study in detail the properties of $\pi$-fluxes embedded in a state with a finite density of anyons that form either a Fermi liquid or a Bose-Einstein condensate. By employing a recently developed exact lattice bosonization in 2D, we demonstrate that such $\pi$-flux remains a fully deconfined quasi-particle with a finite energy cost in a Fermi liquid of emergent fermions coupled to a $\mathbb{Z}_2$ gauge field. This $\pi$-flux is accompanied by a screening cloud of fermions, which in the case of a Fermi gas with a parabolic dispersion binds exactly 1/8 of a fermionic hole. In addition there is a long-ranged power-law oscillatory disturbance of the liquid surrounding the $\pi$-flux akin to Friedel oscillations. These results carry over directly to the $\pi$-flux excitations in orthogonal metals. In sharp contrast, when the $\pi$-flux is surrounded by a Bose-Einstein condensate of particles coupled to a $\mathbb{Z}_2$ gauge field, it binds a superfluid half-vortex, becoming a marginally confined excitation with a logarithmic energy cost divergence. 
http://mathoverflow.net/revisions/23753/list | 3 added 119 characters in body
The closure of this set in $[0,1]$ is exactly the subset $\mathcal C$ of all real numbers whose continued fraction expansions involve only distinct integers. Rational numbers qualify for membership in $\mathcal C$ if they have a continued fraction expansion involving only distinct integers.

The set $\mathcal C$ is a Cantor set, and it is also the closure of the set of real numbers whose continued fraction is a permutation of $\mathbb N$ with finite support (a permutation that moves only a finite number of elements).
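As an illustration (my own sketch, not part of the original answer), membership of a rational number in $\mathcal C$ can be tested by computing its continued fraction and checking that the partial quotients are distinct:

```python
from fractions import Fraction

def cf_expansion(x: Fraction) -> list[int]:
    """Continued fraction [a0; a1, a2, ...] of a rational number."""
    terms = []
    while True:
        a = x.numerator // x.denominator  # integer part
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x  # recurse on the reciprocal of the fractional part
    return terms

# 7/10 = [0; 1, 2, 3]: all partial quotients distinct
terms = cf_expansion(Fraction(7, 10))
print(terms, len(terms) == len(set(terms)))  # [0, 1, 2, 3] True
```

(For rationals the expansion has two canonical forms; the sketch uses the one produced by the standard Euclidean algorithm.)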
http://www.reddit.com/r/math/comments/11ddpl/generating_sums_to_an_integer_using_the_set_of/c6li287
[–]
I think your sample solution is incorrect. For instance, there are two ways to sum to 5 using primes: 2+3 and 5. Also, there are three ways to sum to 7: 2+2+3, 2+5, and 7. You need to add one more solution whenever your target number is prime.
For any given number n, the number of ways to sum to n is the sum, over each prime p less than n-1, of the number of ways to sum to n-p, plus 1 more if n is itself prime. If you want to know the lists of numbers that make up the sum, keep a list of them as well as a count.
Then you can solve your problem by working up to the desired n. First find the solution for 2. Then find the solution for 3. Then find the solution for 4. Then find the solution for 5. And so on. Store each of those in an array.
Let's take an example. If n=17, the primes less than n-1 are 2, 3, 5, 7, 11 and 13. Thus the number of ways to compute 17 as the sum of primes is the number of ways to compute 15 as the sum of primes, plus the number of ways to compute 14 as the sum of primes, plus the number of ways to compute 12 as the sum of primes, plus the number of ways to compute 10 as the sum of primes, plus the number of ways to compute 6 as the sum of primes, plus the number of ways to compute 4 as the sum of primes, plus 1 because 17 is itself prime. Each of those smaller computations is already known, so making the computation for 17 is simple.
It should be possible to perform trial division to 100000 in far less than 30 seconds -- you should be thinking in terms of 30 milliseconds, or even less. I have an essay about prime numbers at my blog that you may find helpful. The essay includes code in Java.
[–][S]
I'll take a look at the essay shortly but this is big. This may be what I was looking toward finding. I will try to code this today on my own, but your essay may be able to help if I am stuck. Thank you.
[–]
My algorithm is incorrect. When n=2, there are no primes less than n-1, so the number of ways to compute 2 as the sum of primes is 1 because 2 is itself prime. When n=3, there are no primes less than n-1, so the number of ways to compute 3 as the sum of primes is 1 because 3 is itself prime. When n=4, there is one prime less than n-1, 2, which can be computed as the sum of primes in 1 way, and 4 is composite, so there is only 1 way to compute the sum of primes. So far so good, but the algorithm breaks when n=5; the primes less than n-1 are 2 and 3, which can each be computed as the sum of primes in 1 way, and 5 is itself prime, so there are 3 ways to compute 5 as the sum of primes, which is incorrect. The algorithm is computing the three ways as 2+3, 3+2, and 5, but 2+3 and 3+2 are duplicates, and only one should be counted.
I did a little bit of searching and found a proper algorithm for computing the number of ways that a number can be computed as the sum of primes. Here is my code:
Chez Scheme Version 7.4
> (define (prime-parts n)
(let ((sopf (make-vector (+ n 1) 0))
(kappa (make-vector (+ n 1) 0)))
(vector-set! sopf 0 1)
(vector-set! kappa 0 1)
(do ((p 2 (+ p 1))) ((< n p))
(when (zero? (vector-ref sopf p))
(do ((i p (+ i p))) ((< n i))
(vector-set! sopf i
(+ (vector-ref sopf i) p)))))
(do ((i 2 (+ i 1))) ((< n i))
(vector-set! kappa i
(let loop ((j 1) (s (vector-ref sopf i)))
(if (= j i) (/ s i)
(loop (+ j 1)
(+ s (* (vector-ref sopf j)
(vector-ref kappa (- i j)))))))))
(vector-ref kappa n)))
> (time (prime-parts 1000))
(time (prime-parts 1000))
3 collections
365 ms elapsed cpu time, including 0 ms collecting
380 ms elapsed real time, including 0 ms collecting
12455200 bytes allocated, including 12611800 bytes reclaimed
48278613741845757
Thus I confirm frud's calculation of the number of ways 1000 can be computed as the sum of primes.
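As a cross-check (not part of the original thread), the same count can be reproduced with a short coin-change-style dynamic program in Python. Iterating the primes in the outer loop is what makes the count unordered, avoiding the 2+3 / 3+2 duplication discussed above:

```python
def prime_partition_count(n):
    # Sieve of Eratosthenes for the primes up to n.
    is_comp = [False] * (n + 1)
    primes = []
    for p in range(2, n + 1):
        if not is_comp[p]:
            primes.append(p)
            for m in range(p * p, n + 1, p):
                is_comp[m] = True
    # Coin-change style DP: with primes in the OUTER loop, each multiset
    # of primes is counted exactly once, so 2+3 and 3+2 are not both counted.
    ways = [1] + [0] * n
    for p in primes:
        for i in range(p, n + 1):
            ways[i] += ways[i - p]
    return ways[n]
```

The results match the table and the big count posted in this thread (2 ways for 5, 3 for 7, 5 for 10, and 48278613741845757 for 1000).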
I'll write a full explanation of the correct algorithm and post it on my blog as my next regularly-scheduled algorithm on Friday.
I apologize for the original incorrect post.
[–]
If n is small, and you want a list of the prime partitions instead of just a count, the algorithm I described above is close; the only change required is to eliminate duplicate partitions from the list, which is most easily accomplished by never adding them to the list in the first place. This can be done by storing the elements of a partition in some canonical form (easiest is just to sort them) and then comparing newly-discovered partitions to the items already in the list, rejecting any that already exist; in other words, instead of accumulating a list, accumulate a set without duplicates. Here's my version: primes uses a Sieve of Eratosthenes to generate lists of primes, set-cons conses an item onto a list only if it is not already present in the list, and parts uses nested loops on the primes to n and the already-discovered prime partitions to compute the result.
Chez Scheme Version 7.4
> (define (primes n)
(let ((bits (make-vector (+ n 1) #t)))
(let loop ((p 2) (ps '()))
(cond ((< n p) (reverse ps))
((vector-ref bits p)
(do ((i (+ p p) (+ i p))) ((< n i))
(vector-set! bits i #f))
(loop (+ p 1) (cons p ps)))
(else (loop (+ p 1) ps))))))
> (define (set-cons x xs)
(if (member x xs) xs
(cons x xs)))
> (define (parts n)
(if (zero? n) (list (list))
(let ((xs (list)))
(do ((ps (primes n) (cdr ps))) ((null? ps) xs)
(do ((yss (parts (- n (car ps))) (cdr yss))) ((null? yss))
(set! xs (set-cons (sort < (cons (car ps) (car yss))) xs)))))))
> (do ((i 0 (+ i 1))) ((< 10 i))
(let ((ps (parts i)))
(display i) (display #\tab)
(display (length ps))
(display #\tab)
(display ps) (newline)))
0 1 (())
1 0 ()
2 1 ((2))
3 1 ((3))
4 1 ((2 2))
5 2 ((5) (2 3))
6 2 ((3 3) (2 2 2))
7 3 ((7) (2 2 3) (2 5))
8 3 ((3 5) (2 2 2 2) (2 3 3))
9 4 ((3 3 3) (2 2 5) (2 2 2 3) (2 7))
10 5 ((5 5) (3 7) (2 2 3 3) (2 2 2 2 2) (2 3 5))
> (parts 25)
((7 7 11) (5 5 5 5 5) (5 7 13) (3 3 19) (3 3 5 7 7)
(3 3 3 3 13) (3 3 3 3 3 3 7) (3 3 3 3 3 5 5) (3 3 3 5 11)
(3 5 17) (3 5 5 5 7) (3 11 11) (2 2 7 7 7) (2 2 5 5 11)
(2 2 3 3 5 5 5) (2 2 3 3 3 3 3 3 3) (2 2 3 3 3 5 7)
(2 2 3 5 13) (2 2 3 7 11) (2 2 2 2 17) (2 2 2 2 5 5 7)
(2 2 2 2 3 3 11) (2 2 2 2 3 3 3 3 5) (2 2 2 2 3 7 7)
(2 2 2 2 2 2 13) (2 2 2 2 2 2 3 3 7) (2 2 2 2 2 2 3 5 5)
(2 2 2 2 2 2 2 2 3 3 3) (2 2 2 2 2 2 2 2 2 2 5)
(2 2 2 2 2 2 2 2 2 2 2 3) (2 2 2 2 2 2 2 2 2 7)
(2 2 2 2 2 2 2 3 3 5) (2 2 2 2 2 2 2 11) (2 2 2 2 2 3 5 7)
(2 2 2 2 2 3 3 3 3 3) (2 2 2 2 2 5 5 5) (2 2 2 3 5 11)
(2 2 2 3 3 3 5 5) (2 2 2 3 3 3 3 7) (2 2 2 3 3 13)
(2 2 2 5 7 7) (2 2 2 19) (2 3 7 13) (2 3 5 5 5 5)
(2 3 3 3 7 7) (2 3 3 3 3 3 3 5) (2 3 3 3 3 11) (2 3 3 5 5 7)
(2 3 3 17) (2 5 7 11) (2 5 5 13) (2 23))
Obviously this soon becomes unwieldy. And with exponential run time, it is also excruciatingly slow. Memoization on the recursive calls to parts would obviously help, but still the rapid increase in the number of prime partitions makes this impractical. | 2014-07-29 13:39:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7531761527061462, "perplexity": 488.91775162430594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267330.29/warc/CC-MAIN-20140728011747-00061-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://www.gamedev.net/forums/topic/696931-relative-to-camera-rendering/ | OpenGL Relative to Camera rendering.
Recommended Posts
Posted (edited)
Hello fellow programmers,
For a couple of days now i've decided to build my own planet renderer just to see how floating point precision issues
can be tackled. As you probably imagine, i've quickly faced FPP issues when trying to render absurdly large planets.
I have used the classical quadtree LOD approach;
I've generated my grids with 33 vertices, (x: -1 to 1, y: -1 to 1, z = 0).
Each grid is managed by a TerrainNode class that, depending on the side it represents (top, bottom, left right, front, back),
creates a special rotation-translation matrix that moves and rotates the grid away from the origin so that when i finally
normalize all the vertices on my vertex shader i can get a perfect sphere.
T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, 1.0));
R = glm::rotate(glm::dmat4(1.0), glm::radians(180.0), glm::dvec3(1.0, 0.0, 0.0));
sides[0] = new TerrainNode(1.0, radius, T * R, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_FRONT));
T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, -1.0));
R = glm::rotate(glm::dmat4(1.0), glm::radians(0.0), glm::dvec3(1.0, 0.0, 0.0));
sides[1] = new TerrainNode(1.0, radius, R * T, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_BACK));
// So on and so forth for the rest of the sides
As you can see, for the front side grid, i rotate it 180 degrees to make it face the camera and push it towards the eye;
the back side is handled almost the same way only that i don't need to rotate it but simply push it away from the eye.
The same technique is applied for the rest of the faces (obviously, with the proper rotations / translations).
The matrix that results from the multiplication of R and T (in that particular order) is sent to my vertex shader as 'r_Grid'.
// spherify
vec3 V = normalize((r_Grid * vec4(r_Vertex, 1.0)).xyz);
gl_Position = r_ModelViewProjection * vec4(V, 1.0);
The 'r_ModelViewProjection' matrix is generated on the CPU in this manner.
// Not the most efficient way, but it works.
glm::dmat4 Camera::getMatrix() {
// Create the view matrix
// Roll, Yaw and Pitch are all quaternions.
glm::dmat4 View = glm::toMat4(Roll) * glm::toMat4(Pitch) * glm::toMat4(Yaw);
// The model matrix is generated by translating in the oposite direction of the camera.
glm::dmat4 Model = glm::translate(glm::dmat4(1.0), -Position);
// Projection = glm::perspective(fovY, aspect, zNear, zFar);
// zNear = 0.1, zFar = 1.0995116e12
return Projection * View * Model;
}
I managed to get rid of z-fighting by using a technique called Logarithmic Depth Buffer described in this article; it works amazingly well, no z-fighting at all, at least not visible.
Each frame i'm rendering each node by sending the generated matrices this way.
// set the r_ModelViewProjection uniform
// Sneak in the mRadiusMatrix which is a matrix that contains the radius of my planet.
// set the r_Grid matrix uniform i created earlier.
grid->render();
My planet's radius is around 6400000.0 units, absurdly large, but that's what i really want to achieve;
Everything works well, the nodes split and merge as you'd expect; however, whenever i get close to the surface
of the planet the rounding errors start to kick in giving me that lovely stairs effect.
I've read that if i could render each grid relative to the camera i could get better precision on the surface, effectively
getting rid of those rounding errors.
My question is how can i achieve this relative to camera rendering in my scenario here?
I know that i have to do most of the work on the CPU with double, and that's exactly what i'm doing.
I only use double on the CPU side where i also do most of the matrix multiplications.
As you can see from my vertex shader i only do the usual r_ModelViewProjection * (some vertex coords).
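To make the failure mode concrete, here is a small Python sketch (not from the thread) that simulates fp32 rounding with the `struct` module, using the planet radius from the post. It shows why subtracting the camera position in double precision first, and only then converting the small offset to float, preserves surface detail:

```python
import struct

def as_f32(x):
    # Round-trip a Python float (f64) through IEEE-754 binary32.
    return struct.unpack('f', struct.pack('f', x))[0]

radius = 6400000.0      # planet radius from the post, in units
cam    = radius         # camera sitting on the surface
vert   = radius + 0.05  # a vertex 5 cm above the camera

# World-space f32: the 0.05 offset is far below one ulp (0.5) at this
# magnitude, so it rounds away entirely -> the "stairs" effect.
assert as_f32(vert) == as_f32(cam)

# Camera-relative: subtract in f64 FIRST, then convert the small
# offset to f32; the 5 cm survives with full f32 precision.
rel = as_f32(vert - cam)
assert abs(rel - 0.05) < 1e-8
```

The same reasoning applies to matrices: compose the grid/model/view matrices in doubles on the CPU, fold the camera translation in there, and only hand small camera-relative values to the GPU as floats.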
Edited by Cristian Decu
Share on other sites
r_Grid * vec4(r_Vertex, 1.0)
is what's giving you grief. In order to avoid fp32 precision errors you can never allow an fp32 calculation to contain an excessively large value.
r_ModelViewProjection * vec4(V, 1.0);
Is taking an already stairsteppy V and transforming it to projection space. What you need is a GridModelViewProjection matrix that is calculated as doubles before being converted to floats. How to pull off the normalize trick after that is a good question...
Bikeshedding bonus: Personally, I prefer to name my mats in the style of ProjectionFromGrid rather than GridModelViewProjection. That way it connects nicely when I write out "projectionPosition = ProjectionFromGrid * gridPosition"
Share on other sites
On 5/23/2018 at 8:48 PM, corysama said:
r_Grid * vec4(r_Vertex, 1.0)
is what's giving you grief. In order to avoid fp32 precision errors you can never allow an fp32 calculation to contain an excessively large value.
r_ModelViewProjection * vec4(V, 1.0);
Is taking an already stairsteppy V and transforming it to projection space. What you need is a GridModelViewProjection matrix that is calculated as doubles before being converted to floats. How to pull off the normalize trick after that is a good question...
Bikeshedding bonus: Personally, I prefer to name my mats in the style of ProjectionFromGrid rather than GridModelViewProjection. That way it connects nicely when I write out "projectionPosition = ProjectionFromGrid * gridPosition"
First of all, thank you for taking your time to read my long post; second of all, it worked, even though i had to rethink a few things.
I quit normalizing the vertices on the GPU, and i started creating a VBO for each node's grid on the CPU, projecting each vertex on an imaginary sphere before uploading everything in the VBO. This way, it was easy to apply the method you described and now i can render absurdly large planets with no visible fp32 precision problems at all!
It may not be the best solution, however with some caching involved, i believe i can optimize it a little.
Thank you again!
Share on other sites
What about splitting each vertex of your grid(s) into 2 x f32? And then doing some tricks in the vertex shader... I don't know... to improve precision?
× | 2018-08-22 08:10:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32643580436706543, "perplexity": 3387.91027765126}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219692.98/warc/CC-MAIN-20180822065454-20180822085454-00474.warc.gz"} |
http://archive.numdam.org/item/M2AN_2010__44_3_455_0/ | Skipping transition conditions in a posteriori error estimates for finite element discretizations of parabolic equations
ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 44 (2010) no. 3, p. 455-484
In this paper we derive a posteriori error estimates for the heat equation. The time discretization strategy is based on a θ-method and the mesh used for each time-slab is independent of the mesh used for the previous time-slab. The novelty of this paper is an upper bound for the error caused by the coarsening of the mesh used for computing the solution in the previous time-slab. The technique applied for deriving this upper bound is independent of the problem and can be generalized to other time dependent problems.
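As orientation for readers unfamiliar with the θ-method the abstract refers to, here is a minimal 1D sketch (pure Python, not code from the paper): one θ-weighted step for the heat equation u_t = u_xx with homogeneous Dirichlet conditions, solved with the Thomas algorithm; θ = 1/2 gives Crank-Nicolson.

```python
import math

def theta_step(u, dt, dx, theta):
    # One theta-method step for u_t = u_xx on [0,1] with homogeneous
    # Dirichlet BCs; u holds the interior values only.
    n = len(u)
    r = dt / dx**2
    # Explicit part: rhs = (I - (1-theta)*r*T) u, T = tridiag(-1, 2, -1).
    rhs = [(1 - 2*(1 - theta)*r) * u[i]
           + (1 - theta)*r * ((u[i-1] if i > 0 else 0.0)
                              + (u[i+1] if i < n - 1 else 0.0))
           for i in range(n)]
    # Implicit part: solve (I + theta*r*T) u_new = rhs (Thomas algorithm).
    a = [-theta*r] * n       # sub-diagonal
    b = [1 + 2*theta*r] * n  # main diagonal
    c = [-theta*r] * n       # super-diagonal
    for i in range(1, n):
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        rhs[i] -= m * rhs[i-1]
    x = [0.0] * n
    x[-1] = rhs[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - c[i] * x[i+1]) / b[i]
    return x

# Crank-Nicolson (theta = 1/2), initial data u(x,0) = sin(pi*x),
# whose exact solution is exp(-pi^2 t) * sin(pi*x).
n, steps, T = 49, 100, 0.1
dx = 1.0 / (n + 1)
dt = T / steps
u = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]
for _ in range(steps):
    u = theta_step(u, dt, dx, 0.5)
exact = [math.exp(-math.pi**2 * T) * math.sin(math.pi * (i + 1) * dx)
         for i in range(n)]
err = max(abs(p - q) for p, q in zip(u, exact))
```

On this grid the maximum error stays below 1e-3, consistent with second-order accuracy for θ = 1/2; the paper's contribution concerns a posteriori estimates when the mesh changes between time-slabs, which this fixed-mesh sketch does not address.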
DOI : https://doi.org/10.1051/m2an/2010009
Classification: 65N30, 65N15, 65N50, 65J15
Keywords: a posteriori error estimates, transition condition, parabolic problems
@article{M2AN_2010__44_3_455_0,
author = {Berrone, Stefano},
title = {Skipping transition conditions in a posteriori error estimates for finite element discretizations of parabolic equations},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
publisher = {EDP-Sciences},
volume = {44},
number = {3},
year = {2010},
pages = {455-484},
doi = {10.1051/m2an/2010009},
zbl = {1195.65117},
mrnumber = {2666651},
language = {en},
url = {http://www.numdam.org/item/M2AN_2010__44_3_455_0}
}
Berrone, Stefano. Skipping transition conditions in a posteriori error estimates for finite element discretizations of parabolic equations. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Volume 44 (2010) no. 3, pp. 455-484. doi : 10.1051/m2an/2010009. http://www.numdam.org/item/M2AN_2010__44_3_455_0/
[1] I. Babuška and W.C. Rheinboldt, Error estimates for adaptive finite element method. SIAM J. Numer. Anal. 15 (1978) 736-754. | Zbl 0398.65069
[2] R. Becker and R. Rannacher, An optimal control approach to a posteriori error estimation in finite element methods. Acta Numer. 10 (2001) 1-102. | Zbl 1105.65349
[3] A. Bergam, C. Bernardi and Z. Mghazli, A posteriori analysis of the finite element discretization of some parabolic equations. Math. Comp. 74 (2004) 1117-1138. | Zbl 1072.65124
[4] C. Bernardi and R. Verfürth, Adaptive finite element methods for elliptic equations with non-smooth coefficients. Numer. Math. 85 (2000) 579-608. | Zbl 0962.65096
[5] S. Berrone, Robust a posteriori error estimates for finite element discretizations of the heat equation with discontinuous coefficients. ESAIM: M2AM 40 (2006) 991-1021. | Numdam | Zbl 1121.65098
[6] P.G. Ciarlet, The Finite Element Method for Elliptic Problems. North-Holland Publishing Company, Amsterdam (1978). | Zbl 0511.65078
[7] P. Clément, Approximation by finite element functions using local regularization. RAIRO Anal. Numér. 9 (1975) 77-84. | Numdam | Zbl 0368.65008
[8] W. Dörfler, A convergent adaptive algorithm for Poisson's equation. SIAM J. Numer. Anal. 33 (1996) 1106-1124. | Zbl 0854.65090
[9] M. Dryja, M.V. Sarkis and O.B. Widlund, Multilevel Schwarz methods for elliptic problems with discontinuous coefficients in three dimensions. Numer. Math. 72 (1996) 313-348. | Zbl 0857.65131
[10] K. Eriksson and C. Johnson, Adaptive finite element methods for parabolic problems. V. Long-time integration. SIAM J. Numer. Anal. 32 (1995) 1750-1763. | Zbl 0835.65117
[11] K. Eriksson, D. Estep, P. Hansbo and C. Johnson, Introduction to adaptive methods for differential equations. Acta Numer. 4 (1995) 105-158. | Zbl 0829.65122
[12] B.S. Kirk, J.W. Peterson, R. Stogner and S. Petersen, LibMesh. The University of Texas, Austin, CFDLab and Technische Universität Hamburg, Hamburg, http://libmesh.sourceforge.net.
[13] B.P. Lamichhane and B.I. Wohlmuth, Higher order dual Lagrange multiplier spaces for mortar finite element discretizations. Calcolo 39 (2002) 219-237. | Zbl 1168.65414
[14] B.P. Lamichhane, R.P. Stevenson and B.I. Wohlmuth, Higher order mortar finite element methods in 3D with dual Lagrange multiplier bases. Numer. Math. 102 (2005) 93-121. | Zbl 1082.65120
[15] P. Morin, R.H. Nochetto and K.G. Siebert, Convergence of adaptive finite element methods. SIAM Rev. 44 (2002) 631-658. | Zbl 1016.65074
[16] M. Petzoldt, A posteriori error estimators for elliptic equations with discontinuous coefficients. Adv. Comput. Math. 16 (2002) 47-75. | Zbl 0997.65123
[17] M. Picasso, Adaptive finite elements for a linear parabolic problem. Comput. Methods Appl. Mech. Engrg. 167 (1998) 223-237. | Zbl 0935.65105
[18] L.R. Scott and S. Zhang, Finite element interpolation of nonsmooth functions satisfying boundary conditions. Math. Comput. 54 (1990) 483-493. | Zbl 0696.65007
[19] R. Verfürth, A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques. John Wiley & Sons, Chichester-New York (1996). | Zbl 0853.65108
[20] R. Verfürth, A posteriori error estimates for finite element discretization of the heat equations. Calcolo 40 (2003) 195-212. | Zbl 1168.65418
[21] B.I. Wohlmuth, A mortar finite element method using dual spaces for the Lagrange multiplier. SIAM J. Numer. Anal. 38 (2000) 989-1012. | Zbl 0974.65105
[22] O.C. Zienkiewicz and J.Z. Zhu, A simple error estimator and adaptive procedure for practical engineering analysis. Internat. J. Numer. Methods Engrg. 24 (1987) 337-357. | Zbl 0602.73063 | 2020-10-26 18:25:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42423537373542786, "perplexity": 2725.556815710476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891624.95/warc/CC-MAIN-20201026175019-20201026205019-00357.warc.gz"} |
http://codeforces.com/problemset/problem/1036/B | B. Diagonal Walking v.2
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
Mikhail walks on a Cartesian plane. He starts at the point $(0, 0)$, and in one move he can go to any of eight adjacent points. For example, if Mikhail is currently at the point $(0, 0)$, he can go to any of the following points in one move:
• $(1, 0)$;
• $(1, 1)$;
• $(0, 1)$;
• $(-1, 1)$;
• $(-1, 0)$;
• $(-1, -1)$;
• $(0, -1)$;
• $(1, -1)$.
If Mikhail goes from the point $(x_1, y_1)$ to the point $(x_2, y_2)$ in one move, and $x_1 \ne x_2$ and $y_1 \ne y_2$, then such a move is called a diagonal move.
Mikhail has $q$ queries. For the $i$-th query Mikhail's target is to go to the point $(n_i, m_i)$ from the point $(0, 0)$ in exactly $k_i$ moves. Among all possible movements he wants to choose one with the maximum number of diagonal moves. Your task is to find the maximum number of diagonal moves or find that it is impossible to go from the point $(0, 0)$ to the point $(n_i, m_i)$ in $k_i$ moves.
Note that Mikhail can visit any point any number of times (even the destination point!).
Input
The first line of the input contains one integer $q$ ($1 \le q \le 10^4$) — the number of queries.
Then $q$ lines follow. The $i$-th of these $q$ lines contains three integers $n_i$, $m_i$ and $k_i$ ($1 \le n_i, m_i, k_i \le 10^{18}$) — $x$-coordinate of the destination point of the query, $y$-coordinate of the destination point of the query and the number of moves in the query, correspondingly.
Output
Print $q$ integers. The $i$-th integer should be equal to -1 if Mikhail cannot go from the point $(0, 0)$ to the point $(n_i, m_i)$ in exactly $k_i$ moves described above. Otherwise the $i$-th integer should be equal to the maximum number of diagonal moves among all possible movements.
Example
Input
3
2 2 3
4 3 7
10 1 9
Output
1
6
-1
Note
One of the possible answers to the first test case: $(0, 0) \to (1, 0) \to (1, 1) \to (2, 2)$.
One of the possible answers to the second test case: $(0, 0) \to (0, 1) \to (1, 2) \to (0, 3) \to (1, 4) \to (2, 3) \to (3, 2) \to (4, 3)$.
In the third test case Mikhail cannot reach the point $(10, 1)$ in 9 moves. | 2019-10-22 02:47:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7391243577003479, "perplexity": 411.83239402193686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795403.76/warc/CC-MAIN-20191022004128-20191022031628-00411.warc.gz"} |
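One common accepted approach (a hedged sketch, not an official editorial) follows from a parity argument: a diagonal move changes both coordinates, so after covering max(n, m) any leftover moves must cancel out in pairs, and a coordinate-parity mismatch costs exactly one straight move:

```python
def max_diagonal_moves(n, m, k):
    # Unreachable: each move changes each coordinate by at most 1.
    if max(n, m) > k:
        return -1
    if n % 2 != m % 2:
        return k - 1   # one straight move fixes the parity mismatch
    if (k - n) % 2 != 0:
        return k - 2   # leftover moves must cancel out in pairs
    return k

# The three sample queries from the statement:
answers = [max_diagonal_moves(2, 2, 3),
           max_diagonal_moves(4, 3, 7),
           max_diagonal_moves(10, 1, 9)]
```

This reproduces the sample output 1, 6, -1 above.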
https://homework.cpm.org/category/CCI_CT/textbook/int2/chapter/4/lesson/4.2.2/problem/4-73 | ### Problem 4-73
4-73.
Eugene wants to use the cosine ratio to calculate $y$ for the triangle below.
1. Which angle should he use to write an equation and solve for $y$ using the cosine ratio? Why?
$\cos \theta = \frac{\text{adj.}}{\text{hyp.}}$
The side labeled $y$ is adjacent (next to) which angle?
2. Set up an equation, and solve for $y$ using cosine.
$\cos(\text{?})=\frac{y}{42}$ | 2020-09-24 09:38:10 | {"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.968673586845398, "perplexity": 1965.2181328294878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400214347.17/warc/CC-MAIN-20200924065412-20200924095412-00481.warc.gz"} |
http://math.stackexchange.com/questions/506030/integrating-int-fracdtt22t1/506045 | # integrating $\int \frac{dt}{(t+2)^2(t+1)}$
I'm practicing solving integrals, and I am not able to solve this one; could you help me? $$\int \frac{dt}{(t+2)^2(t+1)}$$I tried $$\frac{1}{(t+2)^2(t+1)}=\frac{A}{(t+2)^2}+\frac{B}{(t+2)}+\frac{C}{(t+1)}\\1=A(t+1)+B(t+2)(t+1)+C(t+2)^2\\t=-2\Longrightarrow 1=-A\Longrightarrow \boxed{A=-1}\\t=-1\Longrightarrow1=C(-1)^2\Longrightarrow\boxed{C=1}$$Making $t = 0$ and substituting $A$ and $C$ we have $$1=A+2B+4C=-1+2B+4=2B+3\\\boxed{B=-1}$$Then $$\int \frac{dt}{(t+2)^2(t+1)}=\int\left(-\frac{1}{(t+2)^2}-\frac{1}{t+2}+\frac{1}{t+1}\right)dt\\=-\int \frac{dt}{(t+2)^2}-\int\frac{dt}{t+2}+\int\frac{dt}{t+1}=-\int u^{-2}\,du-\ln|t+2|+\ln|t+1|\qquad(u=t+2)\\=-\frac{u^{-1}}{-1}-\ln|t+2|+\ln|t+1|=\boxed{\frac{1}{t+2}-\ln|t+2|+\ln|t+1|+c}$$I just could not differentiate the result to "take the test"; can you help me? Or is there an error in my solution?
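A quick numeric sanity check (added here, not part of the original question) of both the partial-fraction coefficients and the boxed antiderivative, using only the Python standard library:

```python
import math

# The integrand and the candidate antiderivative F from the question.
integrand = lambda t: 1.0 / ((t + 2)**2 * (t + 1))
F = lambda t: 1.0/(t + 2) - math.log(abs(t + 2)) + math.log(abs(t + 1))

for t in (0.5, 1.0, 2.0, 10.0):
    # Partial fractions: A = -1 on (t+2)^2, B = -1 on (t+2), C = 1 on (t+1)
    rhs = -1.0/(t + 2)**2 - 1.0/(t + 2) + 1.0/(t + 1)
    assert abs(integrand(t) - rhs) < 1e-12
    # "Take the test": a central difference of F should reproduce the integrand.
    h = 1e-6
    deriv = (F(t + h) - F(t - h)) / (2 * h)
    assert abs(deriv - integrand(t)) < 1e-6
```

Both checks pass, so the coefficients and the antiderivative are consistent.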
seems okay according to wolf – Santosh Linkha Sep 26 '13 at 16:30
$$\left(\frac1{t+2}+\log\frac{t+1}{t+2}\right)'=-\frac1{(t+2)^2}+\frac{t+2}{t+1}\frac1{(t+2)^2}=$$
$$=-\frac1{(t+2)^2}+\frac1{(t+1)(t+2)}=\frac{-t-1+t+2}{(t+1)(t+2)^2}=\frac1{(t+1)(t+2)^2}\;\color{red}{\checkmark}$$
(+1) for the checkmark sign at the end :) – Caran-d'Ache Sep 27 '13 at 8:21
Differentiating $$\frac{1}{t+2} - \ln |t + 2| + \ln|t+1| + c$$ gives you $$-\frac{1}{(t+2)^2} - \frac{1}{t + 2} + \frac{1}{t+1}$$ which is certainly what you got when you split the fraction initially. To check that your A, B and C were correct, see that this is just $$\frac{-(t + 1) - (t + 2)(t+1) + (t+2)^2}{(t+2)^2(t+1)}.$$ Simplifying, we get $$-(t+1)-(t+2)(t+1)+(t+2)^2 = -(t+1) - (t^2 + 3t + 2) + (t^2 + 4t +4) = 1,$$ as required.
$$D_t\;\;\frac{1}{t+2}-\ln|t+2|+\ln|t+1|+c=\\=\frac{(t+2)\cdot0-1\cdot1}{(t+2)^2}-\frac{1}{t+2}+\frac{1}{t+1}+0=\\=\frac{-(t+1)-(t+1)(t+2)+(t+2)^2}{(t+2)^2(t+1)}=\\=\frac{-t-1-t^2-3t-2+t^2+4t+4}{(t+2)^2(t+1)}=\\=\frac{1}{(t+2)^2(t+1)}$$ Correct.
\begin{align} \int{{\rm d}t \over \left(t + \mu\right)\left(t + 1\right)} &= {1 \over 1 - \mu} \int\left({1 \over t + \mu} - {1 \over t + 1}\right)\,{\rm d}t = {\ln\left(t + \mu\right) \over 1 - \mu} - {\ln\left(t + 1\right) \over 1 - \mu} \\[1cm]& \mbox{Differentiate both sides with respect to}\ \mu \\[3mm] -\int{{\rm d}t \over \left(t + \mu\right)^{2}\left(t + 1\right)} &= {1 \over \left(t + \mu\right)\left(1 - \mu\right)} + {\ln\left(t + \mu\right) \over \left(1 - \mu\right)^{2}} - {\ln\left(t + 1\right) \over \left(1 - \mu\right)^{2}} \end{align} Set $\mu = 2$ and change the sign of both sides: $$\begin{array}{|c|}\hline\\ \color{#ff0000}{\large\quad \int{{\rm d}t \over \left(t + 2\right)^{2}\left(t + 1\right)} = {1 \over t + 2} - \ln\left(t + 2\right) + \ln\left(t + 1\right)\quad} \\ \\ \hline \end{array}$$
http://www.chegg.com/homework-help/questions-and-answers/-q3654280 | ## Potential Energy
• Anonymous commented
I have tried this problem a couple times and I have yet to get the answer in the back of the book. I included the answer along with the photo, so you will know what the book states is correct. Thanks. | 2013-05-20 09:36:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9053067564964294, "perplexity": 221.51866518220595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698693943/warc/CC-MAIN-20130516100453-00027-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://electronics.stackexchange.com/questions/507230/power-waves-and-scattering-matrix | # power waves and scattering matrix
I have some doubts about the concept of power waves used in the description of the scattering matrix of an N-port component. In particular, I was wondering what the most general and correct definition of the incident and reflected power waves is. In almost all the texts and sites I have searched, the normalization impedance is taken to be the characteristic impedance of the transmission line connected to the port. However, from what I understand, in the more general case it is possible to assume a normalization impedance different from the characteristic impedance of the transmission line connected to the considered port. In this case, how are the incident and reflected power waves defined? Is there any text where I can find this topic treated?
However, from what I understand, in the more general case it is possible to assume a normalization impedance different from the characteristic impedance of the transmission line connected to the considered port.
That doesn't sound useful or right.
If you don't use the same impedance as the transmission line ($Z_0$) you can't properly define reflections based on the reflection coefficient ($\Gamma$): -
$$\Gamma = \dfrac{Z_L - Z_0}{Z_L + Z_0}$$
And this means you can't define standing wave ratio: -
$$SWR = \dfrac{1 + |\Gamma|}{1 - |\Gamma|}$$
And you can't define return loss: -
$$RL = -20\log_{10}(|\Gamma|)$$
I'm at a loss to think how using a non-$Z_0$ value has any significant use or merit. I've been wrong before of course!
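For a concrete feel for these three formulas, here is a small numeric example (a purely resistive 75 Ω load in a 50 Ω system; the values are illustrative only):

```python
import math

Z0, ZL = 50.0, 75.0                        # reference and load impedances

gamma = (ZL - Z0) / (ZL + Z0)              # reflection coefficient
swr = (1 + abs(gamma)) / (1 - abs(gamma))  # standing wave ratio
rl_db = -20 * math.log10(abs(gamma))       # return loss in dB

print(gamma)   # 0.2
print(swr)     # 1.5
print(rl_db)   # ~13.98
```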
• You can calculate the S-parameters of a segment of (for example) 75 ohm line in a 50 ohm system. You might use it as an equivalent (in a certain band) of a series inductor in your system. Or you might want to know the effect if your PCB trace comes out as 45 ohms or 55 ohms due to manufacturing variations. – The Photon Jun 24 at 17:03
• but in that case, if I choose the reference impedance Zo not equal to the Zc of the line connected to that port, when I have to find the parameters S11 and S21, I have to put as load for port 2 an impedance equal to Zo and not equal to Zc, so the incident power wave a2 will be zero, right? – Marco Toni Jun 24 at 17:23
The choice of Z0 is arbitrary and does not affect the actual V and I in a circuit.
The S parameters are defined w.r.t. a Z0 in order to normalize V and I to a common "square-root of power" quantity. The magnitude of S depends on Z0.
Now careful...
S parameters for a section of a network describe reflection, absorption and transference as a constituent of super-imposed waves in each section. The real reflection is a sum of reflections etc... from various sections. So the choice of Z0 affects the magnitude of the constituent waves, not their sum. In a ZG and ZL matched system, and a choice of Z0 that equals neither, the reflections will cancel out regardless Z0.
If you choose a Z0=100Ohm in a 50Ohm system (coax, source, load...) there will be non-zero Gammas all around, but the resultant sum of super-imposed waves, the absorption into a load etc... are unaffected by that choice, and will still "match".
Where the constituent parts DO matter is in transient systems. In order to match up a calculated Gamma with any observed/real reflections (e.g. spikes/pulses reflecting in a long open-ended or mismatched wire) you will have to use Z0 equal to the nominal Z of your system. If you don't, your Gamma is different, still correct, but it only paints part of the picture, a layer to be super-imposed so to say. | 2020-09-23 17:02:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7329857349395752, "perplexity": 901.4661713631525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400211096.40/warc/CC-MAIN-20200923144247-20200923174247-00471.warc.gz"}
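The Photon's 75-ohm-segment example from the comments can be sketched numerically. This uses the standard ABCD-to-S conversion for a lossless line (assumed formulas, not taken from the posts above): with Zc equal to the reference Z0 the constituent reflection vanishes at any length, while a 75-ohm segment in a 50-ohm reference shows a non-zero S11 even though nothing is "wrong" with the line itself.

```python
import cmath
import math

def s11_line(Zc, Z0, theta):
    """S11 of a lossless line of impedance Zc and electrical length theta,
    referenced to Z0, via the line's ABCD matrix."""
    A = cmath.cos(theta)
    B = 1j * Zc * cmath.sin(theta)
    C = 1j * cmath.sin(theta) / Zc
    D = cmath.cos(theta)
    return (A + B / Z0 - C * Z0 - D) / (A + B / Z0 + C * Z0 + D)

# Matched reference: the segment is invisible at any length.
assert abs(s11_line(50.0, 50.0, 1.234)) < 1e-12

# 75-ohm segment in a 50-ohm reference: non-zero constituent reflection.
print(abs(s11_line(75.0, 50.0, math.pi / 4)))

# Quarter-wave check: S11 should equal the reflection from Zin = Zc^2/Z0.
zin = 75.0 ** 2 / 50.0
assert abs(s11_line(75.0, 50.0, math.pi / 2) - (zin - 50.0) / (zin + 50.0)) < 1e-9
```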
http://mymathforum.com/number-theory/19227-writing-prime-sum-difference-different-primes.html | My Math Forum Writing a prime as sum or difference of different primes
Number Theory Number Theory Math Forum
May 11th, 2011, 10:05 AM #1 Senior Member Joined: Apr 2011 From: Recife, BR Posts: 352 Thanks: 0 Writing a prime as sum or difference of different primes Can any prime $p > 3$ be written as $p = p_1 \pm p_2 \pm ... \pm p_n$, if $p_k$ are all primes different from $p$ (not necessarily the $k$-th prime) and $p_i \neq p_j$ for all $i,j$? For example, $7 = 5 + 2$ or $7 = 17 - 5 - 3 - 2$. Let $p_n = p_{n-1} + k$ (in this case, I do mean the $n$-th prime). Then for the statement to be false, $k$ must not be able to be written as $p_m = p_{m-1} + k$, otherwise $p_n = p_{n-1} + p_m - p_{m-1}$. So if there isn't a unique difference between two consecutive primes greater than 3, then the statement is true. Is it true that every even number greater than zero can be written as a difference of two or more pairs of consecutive primes?
May 11th, 2011, 11:29 AM #2
Global Moderator
Joined: Nov 2006
From: UTC -5
Posts: 16,046
Thanks: 938
Math Focus: Number theory, computational mathematics, combinatorics, FOM, symbolic logic, TCS, algorithms
Re: Writing a prime as sum or difference of different primes
Quote:
Originally Posted by proglote Can any prime $p > 3$ be written as $p = p_1 \pm p_2 \pm ... \pm p_n$, if $p_k$ are all primes different from $p$ (not necessarily the $k$-th prime) and $p_i \neq p_j$ for all $i,j$?
Almost surely, and you don't need to exclude 2 = 5 - 3 or 3 = 5 - 2.
Quote:
Originally Posted by proglote Is it true that every even number greater than zero can be written as a difference of two or more pairs of consecutive primes?
Alphonse de Polignac conjectured that it would only take one pair. It hasn't been proven yet in the 160 years since.
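Not a proof, but the claim in the question is easy to probe by brute force over small cases. The sketch below (hypothetical helper names, pure standard library) searches for a representation $p = p_1 \pm p_2 \pm ... \pm p_n$ with distinct primes different from $p$:

```python
from itertools import combinations, product

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def representation(p, limit=30, max_terms=4):
    """Return signed primes summing to p, all distinct and != p, or None."""
    pool = [q for q in primes_up_to(limit) if q != p]
    for n in range(1, max_terms + 1):
        for combo in combinations(pool, n):
            # first (smallest) prime is taken positive; the rest get +/- signs
            for signs in product((1, -1), repeat=n - 1):
                terms = (combo[0],) + tuple(s * q for s, q in zip(signs, combo[1:]))
                if sum(terms) == p:
                    return terms
    return None

for p in [5, 7, 11, 13, 17, 19, 23]:
    print(p, representation(p))   # e.g. 7 -> (2, 5)
```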
| 2019-04-19 12:28:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 13, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6178000569343567, "perplexity": 1104.4439554414587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527720.37/warc/CC-MAIN-20190419121234-20190419143234-00182.warc.gz"}
https://gis.stackexchange.com/questions/263256/unable-to-load-shapefile-in-pyqgis | # Unable to load shapefile in PyQGIS
I've been trying to load a .shp file using PyQGIS, but layer.isValid() is always false. I've tried looking around at past solutions for the problem and nothing is working for me. This is my code currently:
from qgis.core import *
from PyQt4.QtGui import QApplication
app = QApplication([])
QgsApplication.setPrefixPath('C:\\OSGeo4W\\apps\\qgis\\', True)
QgsApplication.initQgis()
layer = QgsVectorLayer('C:\\path\\to\\file.shp', 'name', 'ogr')
print layer.isValid()
I've also tried using forward slashes instead of double backslashes in my paths, which doesn't work either. I'm on Windows 10, running Python 2.7.5 with OSGeo4W.
• Can you load the layer in QGIS or any other GIS programs? – Liam G Nov 28 '17 at 5:12
• Yes, I'm using in a QGIS project just fine. – distant_quasar Nov 28 '17 at 5:17
• You probably need to set your paths correctly. You could run this batch file described in this post before running your script. – Joseph Nov 28 '17 at 10:39
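As a side note on the path spellings tried in the question (illustrative only; this does not touch QGIS), the three common Windows forms all denote the same file, so the backslash style is unlikely to be the culprit:

```python
import ntpath  # Windows path semantics, usable on any OS

p1 = 'C:\\path\\to\\file.shp'                 # escaped backslashes
p2 = r'C:\path\to\file.shp'                   # raw string
p3 = ntpath.normpath('C:/path/to/file.shp')   # forward slashes, normalized

assert p1 == p2 == p3
print("all three spellings are the same path")
```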
| 2021-03-05 08:18:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4033871591091156, "perplexity": 3980.4472377869456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370239.72/warc/CC-MAIN-20210305060756-20210305090756-00061.warc.gz"}
https://bobsegarini.wordpress.com/tag/moe-berg/ | ## GWNtertainment #14 – Jaimie Vernon
Posted in music, Opinion, Review on April 26, 2021 by segarini
## Pat Blythe – MIrror, Mirror, Little Magic Sam, Jeans ‘N Classics …and Music!
Posted in Opinion, Review on February 27, 2019 by segarini
So……some asshole, who decided he needed one of my car parts more than I did, expertly stole my passenger side mirror. Parked on Queen St., directly in front of the Cameron House, against a snowbank. F********K!!!! Not a “smash and grab” but carefully popped out, heating wires neatly disconnected and presto, I’m out \$80.
## Roxanne Tellier – The Fun Never Ends!
Posted in Opinion, Review on November 20, 2016 by segarini
Well, actually … ‘fun’ would not be how I would describe the last two weeks. Incomprehensible, horrific, blacker than the blackest black humour .. yeah, that’s more like it. Maybe what I meant to say was, “When does the fun begin?”
When we last left our plucky heroine, she was picking over the broken pieces of her shattered life, accompanied by her furry companions, who were cowering under the bed.
## JAIMIE VERNON – CLEVER HAS A NAME: JAMES CLARK INSTITUTE
Posted in Opinion, Review on October 23, 2016 by segarini
We’re in an interesting transitional period with music. Not the industry part that I’ve frequently beaten up and kicked to the curb time and time again. No, I’m talking about musical direction. I had hoped upon hope that post-9/11 the singer-songwriter would make a massive mainstream comeback to usher in the silver age of thinking man’s music. To say I was disappointed would be an understatement. The simpletons in charge of what we listen to doubled down on the stupid with the vacant tap dancing and lip-syncing of nubile females, effeminate boy bands and reality stars-turned-prime-time-embarrassments. ‘American Idol’ was the new A & R department for the future of music.
## JAIMIE VERNON – Life’s a Canadian Rock: Book 2, Chapter 4
Posted in Opinion on August 4, 2013 by segarini
Read Chapter 1 here:
Read Chapter 2 here:
Read Chapter 3 here:
PHASE II, MACH III – PHOENIX RISING
Having spent the better part of 5 years taking my band Moving Targetz from a suburban Scarborough, Ontario basement (a trend later to be repeated by the Barenaked Ladies), burning up the Queen Street circuit, forming a record label and releasing not one, but two, 12” slabs of polyvinyl chloride only to have the band self-destruct on the eve of global domination, I did some soul searching and decided that I really didn’t want the musical ideology of Targetz to die. | 2023-03-31 05:33:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9243904948234558, "perplexity": 1814.4640686181422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00042.warc.gz"} |
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/12/lesson/12.1.4/problem/12-48 | ### Home > CALC > Chapter 12 > Lesson 12.1.4 > Problem 12-48
12-48.
Calculate the area of the cardioid r = 1 − cos θ from [0, 2π]. | 2020-04-02 01:12:20 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831463634967804, "perplexity": 12115.197759481864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506477.26/warc/CC-MAIN-20200401223807-20200402013807-00517.warc.gz"}
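For reference (a hedged numeric check, not the CPM solution), the polar area formula A = ½ ∫ r² dθ gives 3π/2 for this cardioid:

```python
import math

# A = (1/2) * integral of (1 - cos t)^2 over [0, 2*pi],
# approximated by a Riemann sum (very accurate for a periodic integrand)
N = 100_000
h = 2 * math.pi / N
area = 0.5 * h * sum((1 - math.cos(k * h)) ** 2 for k in range(N))

print(area)                               # ~4.712389, i.e. 3*pi/2
assert abs(area - 1.5 * math.pi) < 1e-9
```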
http://physics.stackexchange.com/questions/19614/electron-fermi-gas | # Electron Fermi gas
My question is about a 2-dimensional Fermi gas of electrons.
1. What is the magnetic susceptibility when $T \ll T_F$ (where $T_F$ is the Fermi temperature)?
and,
2. What is the ratio between the Pauli and Curie spin susceptibilities?
This sounds a little like homework-- the calculation is straightforward if you know the Landau solution for a particle in a magnetic field (it is contained in most quantum mechanics textbooks) and the standard electron magnetic moment. – Ron Maimon Jan 17 '12 at 5:45
@RonMaimon: yes, it is. I'm new to this forum and was not familiar to the conventions. – Shalandeh Jan 17 '12 at 6:00
@RonMaimon: could you be more specific? in which textbooks I can look them up? – Shalandeh Jan 17 '12 at 6:03
Start here. en.wikipedia.org/wiki/Landau_quantization . I am sure it's in Sakurai and Landau and Lifshitz, perhaps also in Dirac. – Ron Maimon Jan 17 '12 at 6:24
Chapter 31 in "Solid State Physics" by Ashcroft and Mermin. – Maksim Zholudev Jan 17 '12 at 14:30 | 2016-02-12 05:44:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7962706089019775, "perplexity": 960.3846049711916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163438.83/warc/CC-MAIN-20160205193923-00279-ip-10-236-182-209.ec2.internal.warc.gz"} |
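Without working the full Landau calculation the textbooks cover, the scaling behind part 2 can still be illustrated. In 2D the density of states is constant, so n = g(E_F)·E_F and the Pauli-to-Curie ratio reduces to T/T_F (standard textbook result; the numbers below are arbitrary illustrative values, and μ_B² cancels in the ratio):

```python
k_B = 1.380649e-23   # J/K
E_F = 1.0e-19        # J, assumed Fermi energy
g = 1.0e37           # 1/(J m^2), assumed constant 2D density of states
T = 10.0             # K, chosen so that T << T_F

n = g * E_F                 # 2D electron density (constant DOS)
T_F = E_F / k_B             # Fermi temperature, ~7243 K here
chi_pauli = g               # Pauli: mu_B^2 * g(E_F), with mu_B^2 dropped
chi_curie = n / (k_B * T)   # Curie: n * mu_B^2 / (k_B T), with mu_B^2 dropped

ratio = chi_pauli / chi_curie
assert abs(ratio - T / T_F) < 1e-15
print(ratio)   # small: Pauli susceptibility is the Curie value suppressed by T/T_F
```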
http://nag.com/numeric/fl/nagdoc_fl24/html/F08/f08xsf.html | F08 Chapter Contents
F08 Chapter Introduction
NAG Library Manual
# NAG Library Routine DocumentF08XSF (ZHGEQZ)
Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.
## 1 Purpose
F08XSF (ZHGEQZ) implements the $QZ$ method for finding generalized eigenvalues of the complex matrix pair $\left(A,B\right)$ of order $n$, which is in the generalized upper Hessenberg form.
## 2 Specification
SUBROUTINE F08XSF ( JOB, COMPQ, COMPZ, N, ILO, IHI, A, LDA, B, LDB, ALPHA, BETA, Q, LDQ, Z, LDZ, WORK, LWORK, RWORK, INFO)
INTEGER N, ILO, IHI, LDA, LDB, LDQ, LDZ, LWORK, INFO
REAL (KIND=nag_wp) RWORK(N)
COMPLEX (KIND=nag_wp) A(LDA,*), B(LDB,*), ALPHA(N), BETA(N), Q(LDQ,*), Z(LDZ,*), WORK(max(1,LWORK))
CHARACTER(1) JOB, COMPQ, COMPZ
The routine may be called by its LAPACK name zhgeqz.
## 3 Description
F08XSF (ZHGEQZ) implements a single-shift version of the $QZ$ method for finding the generalized eigenvalues of the complex matrix pair $\left(A,B\right)$ which is in the generalized upper Hessenberg form. If the matrix pair $\left(A,B\right)$ is not in the generalized upper Hessenberg form, then the routine F08WSF (ZGGHRD) should be called before invoking F08XSF (ZHGEQZ).
This problem is mathematically equivalent to solving the matrix equation
$\det\left(A - \lambda B\right) = 0.$
Note that, to avoid underflow, overflow and other arithmetic problems, the generalized eigenvalues ${\lambda }_{j}$ are never computed explicitly by this routine but defined as ratios between two computed values, ${\alpha }_{j}$ and ${\beta }_{j}$:
$\lambda_j = \alpha_j / \beta_j.$
The parameters ${\alpha }_{j}$, in general, are finite complex values and ${\beta }_{j}$ are finite real non-negative values.
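The point of returning the pairs $(\alpha_j, \beta_j)$ rather than the ratios can be seen with a singular $B$, where one eigenvalue is infinite. The sketch below uses SciPy's `qz` (which wraps the same LAPACK QZ machinery; this is an illustration, not part of the NAG document):

```python
import numpy as np
from scipy.linalg import qz

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0]])   # singular: one infinite eigenvalue

AA, BB, Q, Z = qz(A, B, output='complex')
alpha, beta = np.diag(AA), np.diag(BB)

# One beta_j is (numerically) zero: lambda_j = alpha_j/beta_j would overflow,
# but the (alpha, beta) pair encodes the infinite eigenvalue safely.
assert min(abs(beta)) < 1e-8

# The finite eigenvalue of this pair is -1/2: det(A - lambda*B) = -4*lambda - 2.
k = int(np.argmax(abs(beta)))
assert abs(alpha[k] / beta[k] + 0.5) < 1e-8
print(beta)
```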
If desired, the matrix pair $\left(A,B\right)$ may be reduced to generalized Schur form. That is, the transformed matrices $A$ and $B$ are upper triangular and the diagonal values of $A$ and $B$ provide $\alpha$ and $\beta$.
The parameter JOB specifies two options. If ${\mathbf{JOB}}=\text{'S'}$ then the matrix pair $\left(A,B\right)$ is simultaneously reduced to Schur form by applying one unitary transformation (usually called $Q$) on the left and another (usually called $Z$) on the right. That is,
$A \leftarrow Q^{H} A Z, \quad B \leftarrow Q^{H} B Z$
If ${\mathbf{JOB}}=\text{'E'}$, then at each iteration the same transformations are computed but they are only applied to those parts of $A$ and $B$ which are needed to compute $\alpha$ and $\beta$. This option could be used if generalized eigenvalues are required but not generalized eigenvectors.
If ${\mathbf{JOB}}=\text{'S'}$ and ${\mathbf{COMPQ}}=\text{'V'}$ or $\text{'I'}$, and ${\mathbf{COMPZ}}=\text{'V'}$ or $\text{'I'}$, then the unitary transformations used to reduce the pair $\left(A,B\right)$ are accumulated into the input arrays Q and Z. If generalized eigenvectors are required then JOB must be set to ${\mathbf{JOB}}=\text{'S'}$ and if left (right) generalized eigenvectors are to be computed then COMPQ (COMPZ) must be set to ${\mathbf{COMPQ}}=\text{'V'}$ or $\text{'I'}$ rather than ${\mathbf{COMPQ}}=\text{'N'}$.
If ${\mathbf{COMPQ}}=\text{'I'}$, then eigenvectors are accumulated on the identity matrix and on exit the array Q contains the left eigenvector matrix $Q$. However, if ${\mathbf{COMPQ}}=\text{'V'}$ then the transformations are accumulated in the user-supplied matrix ${Q}_{0}$ in array Q on entry and thus on exit Q contains the matrix product $Q{Q}_{0}$. A similar convention is used for COMPZ.
## 4 References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
Moler C B and Stewart G W (1973) An algorithm for generalized matrix eigenproblems SIAM J. Numer. Anal. 10 241–256
Stewart G W and Sun J-G (1990) Matrix Perturbation Theory Academic Press, London
## 5 Parameters
1: JOB – CHARACTER(1)Input
On entry: specifies the operations to be performed on $\left(A,B\right)$.
${\mathbf{JOB}}=\text{'E'}$
The matrix pair $\left(A,B\right)$ on exit might not be in the generalized Schur form.
${\mathbf{JOB}}=\text{'S'}$
The matrix pair $\left(A,B\right)$ on exit will be in the generalized Schur form.
Constraint: ${\mathbf{JOB}}=\text{'E'}$ or $\text{'S'}$.
2: COMPQ – CHARACTER(1)Input
On entry: specifies the operations to be performed on $Q$:
${\mathbf{COMPQ}}=\text{'N'}$
The array Q is unchanged.
${\mathbf{COMPQ}}=\text{'V'}$
The left transformation $Q$ is accumulated on the array Q.
${\mathbf{COMPQ}}=\text{'I'}$
The array Q is initialized to the identity matrix before the left transformation $Q$ is accumulated in Q.
Constraint: ${\mathbf{COMPQ}}=\text{'N'}$, $\text{'V'}$ or $\text{'I'}$.
3: COMPZ – CHARACTER(1)Input
On entry: specifies the operations to be performed on $Z$.
${\mathbf{COMPZ}}=\text{'N'}$
The array Z is unchanged.
${\mathbf{COMPZ}}=\text{'V'}$
The right transformation $Z$ is accumulated on the array Z.
${\mathbf{COMPZ}}=\text{'I'}$
The array Z is initialized to the identity matrix before the right transformation $Z$ is accumulated in Z.
Constraint: ${\mathbf{COMPZ}}=\text{'N'}$, $\text{'V'}$ or $\text{'I'}$.
4: N – INTEGERInput
On entry: $n$, the order of the matrices $A$, $B$, $Q$ and $Z$.
Constraint: ${\mathbf{N}}\ge 0$.
5: ILO – INTEGERInput
6: IHI – INTEGERInput
On entry: the indices ${i}_{\mathrm{lo}}$ and ${i}_{\mathrm{hi}}$, respectively which define the upper triangular parts of $A$. The submatrices $A\left(1:{i}_{\mathrm{lo}}-1,1:{i}_{\mathrm{lo}}-1\right)$ and $A\left({i}_{\mathrm{hi}}+1:n,{i}_{\mathrm{hi}}+1:n\right)$ are then upper triangular. These parameters are provided by F08WVF (ZGGBAL) if the matrix pair was previously balanced; otherwise, ${\mathbf{ILO}}=1$ and ${\mathbf{IHI}}={\mathbf{N}}$.
Constraints:
• if ${\mathbf{N}}>0$, $1\le {\mathbf{ILO}}\le {\mathbf{IHI}}\le {\mathbf{N}}$;
• if ${\mathbf{N}}=0$, ${\mathbf{ILO}}=1$ and ${\mathbf{IHI}}=0$.
7: A(LDA,$*$) – COMPLEX (KIND=nag_wp) arrayInput/Output
Note: the second dimension of the array A must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
On entry: the $n$ by $n$ upper Hessenberg matrix $A$. The elements below the first subdiagonal must be set to zero.
On exit: if ${\mathbf{JOB}}=\text{'S'}$, the matrix pair $\left(A,B\right)$ will be simultaneously reduced to generalized Schur form.
If ${\mathbf{JOB}}=\text{'E'}$, the $1$ by $1$ and $2$ by $2$ diagonal blocks of the matrix pair $\left(A,B\right)$ will give generalized eigenvalues but the remaining elements will be irrelevant.
8: LDA – INTEGERInput
On entry: the first dimension of the array A as declared in the (sub)program from which F08XSF (ZHGEQZ) is called.
Constraint: ${\mathbf{LDA}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
9: B(LDB,$*$) – COMPLEX (KIND=nag_wp) arrayInput/Output
Note: the second dimension of the array B must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
On entry: the $n$ by $n$ upper triangular matrix $B$. The elements below the diagonal must be zero.
On exit: if ${\mathbf{JOB}}=\text{'S'}$, the matrix pair $\left(A,B\right)$ will be simultaneously reduced to generalized Schur form.
If ${\mathbf{JOB}}=\text{'E'}$, the $1$ by $1$ and $2$ by $2$ diagonal blocks of the matrix pair $\left(A,B\right)$ will give generalized eigenvalues but the remaining elements will be irrelevant.
10: LDB – INTEGERInput
On entry: the first dimension of the array B as declared in the (sub)program from which F08XSF (ZHGEQZ) is called.
Constraint: ${\mathbf{LDB}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$.
11: ALPHA(N) – COMPLEX (KIND=nag_wp) arrayOutput
On exit: ${\alpha }_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$.
12: BETA(N) – COMPLEX (KIND=nag_wp) arrayOutput
On exit: ${\beta }_{\mathit{j}}$, for $\mathit{j}=1,2,\dots ,n$.
13: Q(LDQ,$*$) – COMPLEX (KIND=nag_wp) arrayInput/Output
Note: the second dimension of the array Q must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$ if ${\mathbf{COMPQ}}=\text{'V'}$ or $\text{'I'}$ and at least $1$ if ${\mathbf{COMPQ}}=\text{'N'}$.
On entry: if ${\mathbf{COMPQ}}=\text{'V'}$, the matrix ${Q}_{0}$. The matrix ${Q}_{0}$ is usually the matrix $Q$ returned by F08WSF (ZGGHRD).
If ${\mathbf{COMPQ}}=\text{'N'}$, Q is not referenced.
On exit: if ${\mathbf{COMPQ}}=\text{'V'}$, Q contains the matrix product $Q{Q}_{0}$.
If ${\mathbf{COMPQ}}=\text{'I'}$, Q contains the transformation matrix $Q$.
14: LDQ – INTEGERInput
On entry: the first dimension of the array Q as declared in the (sub)program from which F08XSF (ZHGEQZ) is called.
Constraints:
• if ${\mathbf{COMPQ}}=\text{'V'}$ or $\text{'I'}$, ${\mathbf{LDQ}}\ge {\mathbf{N}}$;
• if ${\mathbf{COMPQ}}=\text{'N'}$, ${\mathbf{LDQ}}\ge 1$.
15: Z(LDZ,$*$) – COMPLEX (KIND=nag_wp) arrayInput/Output
Note: the second dimension of the array Z must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$ if ${\mathbf{COMPZ}}=\text{'V'}$ or $\text{'I'}$ and at least $1$ if ${\mathbf{COMPZ}}=\text{'N'}$.
On entry: if ${\mathbf{COMPZ}}=\text{'V'}$, the matrix ${Z}_{0}$. The matrix ${Z}_{0}$ is usually the matrix $Z$ returned by F08WSF (ZGGHRD).
If ${\mathbf{COMPZ}}=\text{'N'}$, Z is not referenced.
On exit: if ${\mathbf{COMPZ}}=\text{'V'}$, Z contains the matrix product $Z{Z}_{0}$.
If ${\mathbf{COMPZ}}=\text{'I'}$, Z contains the transformation matrix $Z$.
16: LDZ – INTEGERInput
On entry: the first dimension of the array Z as declared in the (sub)program from which F08XSF (ZHGEQZ) is called.
Constraints:
• if ${\mathbf{COMPZ}}=\text{'V'}$ or $\text{'I'}$, ${\mathbf{LDZ}}\ge {\mathbf{N}}$;
• if ${\mathbf{COMPZ}}=\text{'N'}$, ${\mathbf{LDZ}}\ge 1$.
17: WORK($\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{LWORK}}\right)$) – COMPLEX (KIND=nag_wp) arrayWorkspace
On exit: if ${\mathbf{INFO}}={\mathbf{0}}$, the real part of ${\mathbf{WORK}}\left(1\right)$ contains the minimum value of LWORK required for optimal performance.
18: LWORK – INTEGERInput
On entry: the dimension of the array WORK as declared in the (sub)program from which F08XSF (ZHGEQZ) is called.
If ${\mathbf{LWORK}}=-1$, a workspace query is assumed; the routine only calculates the minimum size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued.
Constraint: ${\mathbf{LWORK}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{N}}\right)$ or ${\mathbf{LWORK}}=-1$.
19: RWORK(N) – REAL (KIND=nag_wp) arrayWorkspace
20: INFO – INTEGEROutput
On exit: ${\mathbf{INFO}}=0$ unless the routine detects an error (see Section 6).
## 6 Error Indicators and Warnings
Errors or warnings detected by the routine:
${\mathbf{INFO}}<0$
If ${\mathbf{INFO}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated.
${\mathbf{INFO}}>0$
If $1\le {\mathbf{INFO}}\le {\mathbf{N}}$, the $QZ$ iteration did not converge and the matrix pair $\left(A,B\right)$ is not in the generalized Schur form at exit. However, if ${\mathbf{INFO}}<{\mathbf{N}}$, then the computed ${\alpha }_{i}$ and ${\beta }_{i}$ should be correct for $i={\mathbf{INFO}}+1,\dots ,{\mathbf{N}}$.
If ${\mathbf{N}}+1\le {\mathbf{INFO}}\le 2×{\mathbf{N}}$, the computation of shifts failed and the matrix pair $\left(A,B\right)$ is not in the generalized Schur form at exit. However, if ${\mathbf{INFO}}<2×{\mathbf{N}}$, then the computed ${\alpha }_{i}$ and ${\beta }_{i}$ should be correct for $i={\mathbf{INFO}}-{\mathbf{N}}+1,\dots ,{\mathbf{N}}$.
If ${\mathbf{INFO}}>2×{\mathbf{N}}$, then an unexpected Library error has occurred. Please contact NAG with details of your program.
## 7 Accuracy
Please consult Section 4.11 of the LAPACK Users' Guide (see Anderson et al. (1999)) and Chapter 6 of Stewart and Sun (1990), for more information.
F08XSF (ZHGEQZ) is the fifth step in the solution of the complex generalized eigenvalue problem and is called after F08WSF (ZGGHRD).
The number of floating point operations taken by this routine is proportional to ${n}^{3}$.
The real analogue of this routine is F08XEF (DHGEQZ).
## 9 Example
This example computes the $\alpha$ and $\beta$ parameters, which defines the generalized eigenvalues, of the matrix pair $\left(A,B\right)$ given by
$A = \begin{pmatrix} 1.0+3.0i & 1.0+4.0i & 1.0+5.0i & 1.0+6.0i \\ 2.0+2.0i & 4.0+3.0i & 8.0+4.0i & 16.0+5.0i \\ 3.0+1.0i & 9.0+2.0i & 27.0+3.0i & 81.0+4.0i \\ 4.0+0.0i & 16.0+1.0i & 64.0+2.0i & 256.0+3.0i \end{pmatrix}$
and
$B = \begin{pmatrix} 1.0+0.0i & 2.0+1.0i & 3.0+2.0i & 4.0+3.0i \\ 1.0+1.0i & 4.0+2.0i & 9.0+3.0i & 16.0+4.0i \\ 1.0+2.0i & 8.0+3.0i & 27.0+4.0i & 64.0+5.0i \\ 1.0+3.0i & 16.0+4.0i & 81.0+5.0i & 256.0+6.0i \end{pmatrix}.$
This requires calls to five routines: F08WVF (ZGGBAL) to balance the matrix, F08ASF (ZGEQRF) to perform the $QR$ factorization of $B$, F08AUF (ZUNMQR) to apply $Q$ to $A$, F08WSF (ZGGHRD) to reduce the matrix pair to the generalized Hessenberg form and F08XSF (ZHGEQZ) to compute the eigenvalues using the $QZ$ algorithm.
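As an illustration outside the NAG library (assumed equivalence: SciPy's `qz` drives the LAPACK generalized Schur routines, which use this QZ step internally), the example's matrix pair can be factored and the generalized Schur reconstruction checked in a few lines:

```python
import numpy as np
from scipy.linalg import qz

A = np.array([[1 + 3j, 1 + 4j, 1 + 5j, 1 + 6j],
              [2 + 2j, 4 + 3j, 8 + 4j, 16 + 5j],
              [3 + 1j, 9 + 2j, 27 + 3j, 81 + 4j],
              [4 + 0j, 16 + 1j, 64 + 2j, 256 + 3j]])
B = np.array([[1 + 0j, 2 + 1j, 3 + 2j, 4 + 3j],
              [1 + 1j, 4 + 2j, 9 + 3j, 16 + 4j],
              [1 + 2j, 8 + 3j, 27 + 4j, 64 + 5j],
              [1 + 3j, 16 + 4j, 81 + 5j, 256 + 6j]])

AA, BB, Q, Z = qz(A, B, output='complex')   # generalized Schur form
alpha, beta = np.diag(AA), np.diag(BB)

# A = Q * AA * Z^H and B = Q * BB * Z^H, with AA, BB upper triangular
assert np.allclose(Q @ AA @ Z.conj().T, A)
assert np.allclose(Q @ BB @ Z.conj().T, B)
print(alpha / beta)   # the generalized eigenvalues lambda_j
```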
### 9.1 Program Text
Program Text (f08xsfe.f90)
### 9.2 Program Data
Program Data (f08xsfe.d)
### 9.3 Program Results
Program Results (f08xsfe.r) | 2016-10-27 19:14:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 196, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9981630444526672, "perplexity": 3406.244363304276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721387.11/warc/CC-MAIN-20161020183841-00050-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://aas.org/archives/BAAS/v25n2/aas182/abshtml/S3411.html | Measurements of the Far-Infrared [N III] and [O III] Lines in the Outer-Galaxy H II Regions S 206, S 209, and S 212
Session 34 -- Airborne Astronomy
Display presentation, Tuesday, 9:30-6:30, Pauley Room
## [34.11] Measurements of the Far-Infrared [N III] and [O III] Lines in the Outer-Galaxy H II Regions S 206, S 209, and S 212
H.L. Dinerstein (U.Texas/Austin), M.R. Haas, E.F. Erickson (NASA/ARC), M.W. Werner (JPL)
We report measurements of the far-infrared, fine-structure lines [O III] 52, 88 $\mu$m, and [N III] 57 $\mu$m in three H II regions in the outer Milky Way. The observations were made with a cooled grating spectrometer from NASA's Kuiper Airborne Observatory. This line trio allows one to determine both the gas density and the N/O abundance as traced by the ratio N$^{++}$/O$^{++}$. We measured all three lines from the regions S 206, S 212, and S 209, located at galactocentric distances of 11.5, 14, and 16 kpc, respectively, assuming a solar galactocentric distance of 8.5 kpc. The [O III] electron densities in these H II regions range from log n$_{e}$ = 1.8 to 2.5. For the recently revised collision strength for the [N III] 57 $\mu$m line (Blum and Pradhan 1992, Ap.J.Suppl., 80, 425), the mean value for the ionic N/O ratio in these three regions is N$^{++}$/O$^{++}$ = 0.13 $\pm$ 0.03. Our results for these outer-galaxy regions will be compared with N/O abundances derived from the far-infrared lines for H II regions in the inner part of the Galaxy.
This research was supported by NASA Airborne Astronomy grant NAG2-372. | 2014-11-25 22:16:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7459331750869751, "perplexity": 11398.251028982408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931004237.15/warc/CC-MAIN-20141125155644-00165-ip-10-235-23-156.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/point-charges-and-static-equilibrium.208407/ | # Point charges and static equilibrium
1. Jan 12, 2008
### elecstorm
1. The problem statement, all variables and given/known data
Two point charges q and 4q are distance L apart and free to move. A third charge is placed so that the entire three-charge system is in static equilibrium.
2. Relevant equations
$\Sigma F = 0$ This is the equation for static equilibrium
$E = \frac{q}{4\pi\epsilon_{0}r^{2}}$
3. The attempt at a solution
If the three point charges are in static equilibrium, the net electric field is zero. So, $$\vec{E}$$=0, which means that the individual electric fields of the point charges must add up to zero. Thus, the third charge must be -5q, as q + 4q = 5q.
Is any of this right?
2. Jan 12, 2008
### Shooting Star
No. There is an r^2 in the denominator. What happened to that? You have to take distance into consideration.
3. Jan 12, 2008
### mda
Don't forget that the force on each of the three particles must be zero. Due to symmetries, and the fact that you need to find two quantities (charge and distance) you actually only need to consider any two particles, so that you get two simultaneous equations.
4. Jan 12, 2008
### elecstorm
Sorry, forgot to include the actual question. The question asks for the magnitude and sign of the third charge. I don't really understand the wording of this question. That is, should I assign another variable for the distance from the third charge? As well, is the magnitude of the third charge related to q, I don't think it is, because it does not ask for inclusion of the q variable in the answer...but I don't know how to calculate it otherwise. Thank you!
5. Jan 12, 2008
### Shooting Star
Yes, take the dist of the 3rd charge from the 1st as x and then the dist q3-q2 will be L-x, if q3 is in between them. | 2017-12-15 05:30:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.637751579284668, "perplexity": 554.8608029056321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948563629.48/warc/CC-MAIN-20171215040629-20171215060629-00326.warc.gz"} |
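Completing the setup from the last two posts: balancing the forces on the third charge gives q/x^2 = 4q/(L-x)^2, hence (L-x)^2 = 4x^2 and x = L/3; balancing the forces on q then gives q3 = -4q/9, the standard result for this classic problem. A numerical check (illustrative units k = q = L = 1, not values from the thread):

```python
# Verify the three-charge equilibrium: q at 0, 4q at L, and
# q3 = -4q/9 placed at x = L/3 from q (illustrative units k = q = L = 1).
k, q, L = 1.0, 1.0, 1.0
x, q3 = L / 3, -4 * q / 9

def coulomb_1d(qa, qb, ra, rb):
    """Force on charge qa at position ra due to charge qb at rb (1-D)."""
    d = ra - rb
    return k * qa * qb * d / abs(d) ** 3

charges = [q, 4 * q, q3]
positions = [0.0, L, x]

for i in range(3):
    net = sum(coulomb_1d(charges[i], charges[j], positions[i], positions[j])
              for j in range(3) if j != i)
    assert abs(net) < 1e-12   # every charge is in static equilibrium
```

Note that the net *force* on each charge vanishes even though the electric *field* is not zero everywhere, which is where the first attempt in the thread went wrong.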
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-3-section-3-3-mass-and-weight-exercise-page-143/19 | ## Elementary Technical Mathematics
$g$.
Since a pencil is fairly small, we can measure its mass in units of grams or $g$. | 2022-10-03 01:07:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4101891815662384, "perplexity": 1304.5965533300357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337371.9/warc/CC-MAIN-20221003003804-20221003033804-00623.warc.gz"} |
https://unm.org.ua/rckeyz3/klipsch-reference-r-620f-review-701453 | Certain words can be pinpointed in the question to highlight the problem. It is true that, in some situations, the indeterminate form 10\frac1001 can be interpreted as ∞: \infty:∞: for instance, when taking limits of a quotient of functions. The meaning of the expression {\displaystyle \mathbb {C} \cup \{\infty \}} = ∞ You cannot define a solution. This is the operation that becomes ? π lim _\square There are some common responses to this logic, but they all have various flaws. Indeterminate maning it can literally approach different values depending on the context. Also, the fraction 1/0 is left undefined in the extended real line, therefore it and. Starting with the set of ordered pairs of integers, {(a, b)} with b ≠ 0, define a binary relation on this set by (a, b) ≃ (c, d) if and only if ad = bc. {\displaystyle \textstyle {\frac {a}{b}}} ∞ In the modern approach to constructing the field of real numbers, the rational numbers appear as an intermediate step in the development that is founded on set theory. For other uses, see, The result yielded by a real number when divided by zero, Division as the inverse of multiplication, Learn how and when to remove this template message, "Desperately Needed Remedies for the Undebuggability of Large Floating-Point Computations in Science and Engineering", On Cantorian spacetime over number systems with division by zero, "Maths Professor Divides By Zero, Says BBC", https://en.wikipedia.org/w/index.php?title=Division_by_zero&oldid=998042635, Articles lacking in-text citations from April 2016, Articles needing additional references from October 2018, All articles needing additional references, Wikipedia articles needing clarification from November 2019, Creative Commons Attribution-ShareAlike License, On September 21, 1997, a division by zero error in the "Remote Data Base Manager" aboard, This page was last edited on 3 January 2021, at 14:42. Log in here. 
Therefore as the denominator becomes smaller, the result of the equation becomes greater. / b 1 So we say that division by zero is undefined, for it is not consistent with division by other numbers. Log in. As an example, consider having ten cookies, and these cookies are to be distributed equally to five people at a table. + Why some people say it's false: 10=∞.\frac10 = \infty.01=∞. As the realm of numbers to which these operations can be applied expands there are also changes in how the operations are viewed. What is 1 divided by 0.2? If we multiply 1/0 by zero we could get 0 or 1. Again, any number multiplied by 0 is 0 and so this time every number solves the equation instead of there being a single number that can be taken as the value of 0/0. Understand the mathematics of continuous change. But in the ring Z/6Z, 2 is a zero divisor. However, it is possible to disguise a division by zero in an algebraic argument,[3] leading to invalid proofs that, for instance, 1 = 2 such as the following:[10]. 1 divided by 0 is not 0, nor 0.1/0 or 0.01/0 etc. The infinity signs change when dividing by −0 instead. Hence, by dividing a number by 0, the result becomes infinite. × End of long division (Remainder is 0 and next digit after decimal is 0). 11 Answers. The problem with this question is the "when". These values all tend to positive infinity as the denominator approaches 0. } 2 a For instance, suppose a,b,c,da,b,c,da,b,c,d are complex numbers such that ad−bc≠0. 0 It can be proven that if b−1 exists, then b+ = b−1. Well, that also equals one. This equation has two distinct solutions, x = 1 and x = 4, so the expression The set Each person would receive 10/5 = 2 cookies. Some processors generate an exception when an attempt is made to divide an integer by zero, although others will simply continue and generate an incorrect result for the division. 2 Well once again, that also equals one. 
Reveal the correct answer The expression is undefined \color{#D61F06}{\textbf{undefined}} undefined. Students are often taught that the inverse cotangent function, arccotangent, should be calculated by taking the arctangent of the reciprocal, and so a calculator may allow arctangent(1/0), giving the output 1 = 0*x ---> 0*x equals 0 for any x you choose . In two's complement arithmetic, attempts to divide the smallest signed integer by −1 are attended by similar problems, and are handled with the same range of solutions, from explicit error conditions to undefined behavior. This is part of a series on common misconceptions. There is no way to distribute 10 cookies to nobody. , which is necessary in this context. I … { This article is about the concept in mathematics and exception in computing. The fallacy here is the assumption that dividing 0 by 0 is a legitimate operation with the same properties as dividing by any other number. 2 = x In computing, a program error may result from an attempt to divide by zero. Or, the problem with 5 cookies and 2 people can be solved by cutting one cookie in half, which introduces the idea of fractions (5/2 = 21/2). Forgot password? 2 In 830, Mahāvīra unsuccessfully tried to correct Brahmagupta's mistake in his book in Ganita Sara Samgraha: "A number remains unchanged when divided by zero."[3]. 2 Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to a/0 is contained in George Berkeley's criticism of infinitesimal calculus in 1734 in The Analyst ("ghosts of departed quantities").[1]. De très nombreux exemples de phrases traduites contenant "1 divided by 1" – Dictionnaire français-anglais et moteur de recherche de traductions françaises. 
{\displaystyle -\infty =\infty } {\displaystyle {\tfrac {\pi }{2}}} If you're seeing this message, it means we're having trouble loading external resources on … a Conclusion: By substituting in a=b=1, a = b = 1,a=b=1, we have 1+1=1 ⟹ 2=1.1+1 = 1 \implies 2 = 1.1+1=1⟹2=1. So there are situations where 10\frac1001 is defined, but they are defined in a tightly controlled way. 0 = 1. {\displaystyle a/\infty =0} So if 1 divided by zero is infinite. The thing is something divided by 0 is always … Ask Question Log in. The standard supports signed zero, as well as infinity and NaN (not a number). For example,[9], since 2 is the value for which the unknown quantity in, requires a value to be found for the unknown quantity in. Réponse préférée 1 ⁄ 0 = infinity = ∞ ... it is NOT undefined.... so infinity is obviously too big a value for any fixed display. Answering this revised question precisely requires close examination of the definition of rational numbers. If there are, say, 5 cookies and 2 people, the problem is in "evenly distribute". Sign up to read all wikis and quizzes in math, science, and engineering topics. Thus, the answer to "1 divided by what equals 11?" and ∞ Well that's gonna be one. While this makes division defined in more cases than usual, subtraction is instead left undefined in many cases, because there are no negative numbers. 0 Favourite answer. In this structure, 2 Let's get super close to zero: 0.000001 divided by 0.000001. {\displaystyle \infty } lol! Rebuttal: What about on the Riemann sphere? and However, the single number c would then have to be determined by the equation 0 = 0 × c, but every number satisfies this equation, so we cannot assign a numerical value to 0/0. In order for 10 \frac{1}{0} 01 to be consistent, the limits from both directions should be equal, which is clearly not the case here. / 2 For example, And it didn't even matter whether these were positive or negative. 
π → One, you could start taking numbers closer and closer to zero and dividing them by themselves. to a distribution on the whole space of real numbers (in effect by using Cauchy principal values). If 10=r \frac10 = r01=r were a real number, then r⋅0=1, r\cdot 0 = 1,r⋅0=1, but this is impossible for any r. r.r. Why some people say it's true: Dividing by 0 00 is not allowed. {\displaystyle {\tfrac {\pi }{2}}} {\displaystyle 0\times \infty } can be defined for nonzero a, and In the Riemann sphere, If you are not, it is good. [3] The author could not explain division by zero in his texts: his definition can be easily proven to lead to algebraic absurdities. are undefined. 0 How do you divide rational numbers? Most calculators will either return an error or state that 1/0 is undefined; however, some TI and HP graphing calculators will evaluate (1/0)2 to ∞. It is in the formal proof that this relation is an equivalence relation that the requirement that the second coordinate is not zero is needed (for verifying transitivity).[5][6][7]. Since any number multiplied by zero is zero, the expression 0/0 is also undefined; when it is the form of a limit, it is an indeterminate form. 0 If you have 1/x and x=0 then it is indeterminate. What . ∞ The IEEE floating-point standard, supported by almost all modern floating-point units, specifies that every floating point arithmetic operation, including division by zero, has a well-defined result. Algebra Properties of Real Numbers Division of Rational Numbers. = 1 In normal numbers, you cannot find one. R axioms are unquestionable truths that are the foundation for all math knowledge. During this gradual expansion of the number system, care is taken to ensure that the "extended operations", when applied to the older numbers, do not produce different results. is 0.25. Well once … Sep 13, 2015. 9 years ago. Answer Save. 
floating point, integer) being divided by zero, it may generate positive or negative infinity by the IEEE 754 floating point standard, generate an exception, generate an error message, cause the program to terminate, result in a special not-a-number value,[2] or a crash. math. It is the natural way to view the range of the tangent function and cotangent functions of trigonometry: tan(x) approaches the single point at infinity as x approaches either we know, 0.81 = 0.9 × 0.9 = (0.9)² . The next step is to define the rational numbers keeping in mind that this must be done using only the sets and operations that have already been established, namely, addition, multiplication and the integers. Already have an account? one of … This impossibility was first noted in philosopher George Berkeley's [4] … The above explanation may be too abstract and technical for many purposes, but if one assumes the existence and properties of the rational numbers, as is commonly done in elementary mathematics, the "reason" that division by zero is not allowed is hidden from view. / {\displaystyle \infty +\infty } should be the solution x of the equation {\displaystyle \mathbb {R} \cup \{\infty \}} Write the remainder after subtracting the bottom number from the top number. Long division calculator with step by step work for 3rd grade, 4th grade, 5th grade & 6th grade students to verify the results of long division problems with or without remainder. Claude. Divided By What Equals Calculator Please enter another problem for us to solve below: = In mathematics, division by zero is division where the divisor (denominator) is zero. In the zero ring, division by zero is possible, which shows that the other field axioms are not sufficient to exclude division by zero in a field. in which both ƒ(x) and g(x) approach 0 as x approaches 0, may equal any real or infinite value, or may not exist at all, depending on the particular functions ƒ and g. 
These and other similar facts show that the expression 0/0 cannot be well-defined as a limit. 205 ÷ 2 = 102.5 Well, that also equals one. = At first glance it seems possible to define a/0 by considering the limit of a/b as b approaches 0. x→0−limx1=−∞. If we play around, we can find that: 1 0 = 0. Il y a 9 années. is the Riemann sphere, which is of major importance in complex analysis. It is still the case that 10\frac1001 can never be a real (or complex) number, so—strictly speaking—it is undefined. Depending on the programming environment and the type of number (e.g. when a is not Furthermore, there is no obvious definition of 0/0 that can be derived from considering the limit of a ratio. One, you could start taking numbers closer and closer to zero and dividing them by themselves. Each person would receive 10/5 = 2 cookies. { The concepts applied to standard arithmetic are similar to those in more general algebraic structures, such as rings and fields. , which is the correct value of arccotangent 0. 1 divided by 0. What is 1 divided by 0? In Mathematics. This infinity can be either positive, negative, or unsigned, depending on context. ∞ . + The answer to that one, of course, is no number, for we know that zero times any real number is zero not 6. A compelling reason for not allowing division by zero is that, if it were allowed, many absurd results (i.e., fallacies) would arise. In general, a single value can't be assigned to a fraction where the denominator is 0 so the value remains undefined. . The operation that you lears as 15 divided by 5 is really the multiplication : 5 * ? What is 1.0 divided by 8? so i made this. {\displaystyle 1/\infty =0} [clarification needed]. from either direction. But any number multiplied by 0 is 0 and so there is no number that solves the equation. So for example, you take 0.1 divided by 0.1. {\displaystyle +\pi /2} Lv 5. . Some calculators, the online Desmos calculator is one example, allow arctangent(1/0). 
It is good to 'make sense' out of the choices so that you don't have to rely on memory. Bring down next digit 0. ∞ Approaching from the left, limx→0−1x=−∞. means an unsigned infinity, an infinite quantity that is neither positive nor negative. https://www.youtube.com/HaxHatcherFollow me on twitter! See division by zero for more details. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. answers something/0:. In math with real numbers [2], values that represent quantities along a continuous line, division by zero is an undefined operation [3], meaning it is impossible to have a real number answer to the equation. [11] For example, in the single-precision computation 1/(x/2), where x = ±2−149, the computation x/2 underflows and produces ±0 with sign matching x, and the result will be ±∞ with sign matching x. 0 So, for dividing by zero, what is the number of cookies that each person receives when 10 cookies are evenly distributed amongst 0 people at a table? :P maybe? The negative real numbers can be discarded, and infinity introduced, leading to the set [0, ∞], where division by zero can be naturally defined as a/0 = ∞ for positive a. ∞ Similarly, if there are ten cookies, and only one person at the table, that person would receive 10/1 = 10 cookies. According to Brahmagupta. {\displaystyle -\pi /2} / So 10/0, at least in elementary arithmetic, is said to be either meaningless, or undefined. Math and Arithmetic. 1 There are 10mm in 1cm, so 124 divided by 10 will give you your answer of 12.4cm {\displaystyle \infty } For instance, to make it possible to subtract any whole number from another, the realm of numbers must be expanded to the entire set of integers in order to incorporate the negative integers. Let's get even closer to zero: 0.001 divided by 0.001. {\displaystyle a/0=\infty } Can you see which of these is the correct explanation? 
Divided By What Equals Calculator Please enter another problem for us to solve below: 0 The sign will match that of the exact result ±2150, but the magnitude of the exact result is too large to represent, so infinity is used to indicate overflow. Approaching from the right, limx→0+1x=+∞. The statement is true \color{#3D99F6}{\textbf{true}}true. There are mathematical structures in which a/0 is defined for some a such as in the Riemann sphere and the projectively extended real line; however, such structures do not satisfy every ordinary rule of arithmetic (the field axioms). Also 0 times by infinite would be 0 and 1 at the same time . The Brāhmasphuṭasiddhānta of Brahmagupta (c. 598–668) is the earliest text to treat zero as a number in its own right and to define operations involving zero. If 1 0 = r \frac10 = r 0 1 = r were a real number, then r ⋅ 0 = 1, r\cdot 0 = 1, r ⋅ 0 = 1, but this is impossible for any r. r. r. See division by zero for more details. 1 divided by 0.1= 10 1 divided by 0.01=100 1 divided by 0.001=1000. ∞ is only shorthand for the formal expression ab−1, where b−1 is the multiplicative inverse of b. Wouldn't it? Here too = 1 what is ? (Careful! 1 In matrix algebra (or linear algebra in general), one can define a pseudo-division, by setting a/b = ab+, in which b+ represents the pseudoinverse of b. Consider the questions: 1 x ? The set Arrggh! However, the resulting algebraic structure is not a field, and should not be expected to behave like one. For example, we could say that 1/0 = 5. ∞ See the consequences of assuming that 10\frac{1}{0}01 is defined for yourself in the following problem: What is wrong with the following "proof"? Similarly, to support division of any integer by any other, the realm of numbers must expand to the rational numbers. You might be wondering after seeing these answers. 
A logically rigorous (as opposed to formal) computation would assert only that, Since the one-sided limits are different, the two-sided limit does not exist in the standard framework of the real numbers. Divide 1 by 0.091 to check that 1 divided by 0 got the right series on common misconceptions is really multiplication! Are, say, 5 cookies and 2 people, the result of the equation becomes greater 10, all! Très nombreux exemples de phrases traduites contenant 1 divided by 8 is.. X = 1 and 2,154,378,549,215,044.32158 / 2,154,378,549,215,044.32158 = 1 line, except that it even... And only one person at the table, that person would receive 10/1 = 10 to. Into equal parts × 1 = 0 * x -- - > 0 x! By to give the result of the dividend as 15 divided by 0.1= 10 1 divided by 0.000001 number... Sgi Scheme move all the digits one place to the nearest thousandth if.. Choosing the other explanations can lead to serious contradictions answer the expression is undefined is because it two... Rings and fields −Infinity depending on context operations can be proven that b−1... = + \infty or 1 the standard supports signed zero, as being ∞ { \infty. By 0.2 no sensible way to define it how the operations are viewed = 10 cookies nobody... 1/X and x=0 then it is good to 'make sense ' out of number. True } } undefined Z/6Z of integers mod 6 # 3D99F6 } x! Structures, such as rings and fields to question, if there are some common responses to this,. Sometimes useful to think of a/0, where a ≠ 0, answer. So the value remains undefined 's [ 4 ] … dividing by −0.. # 3D99F6 } { x } = - \infty 3.00 and so there is no to... The digits one place to the answers Post ; Steve the multiplication: 5 * person... Any x you choose and these cookies are to be distributed equally to five at! Highlight the problem by 10 will give you your answer of 12.4cm What is ). Languages, an attempt to divide by zero is undefined is because it makes two math axioms clash them. 
As b approaches 0 a tightly controlled way = b−1 hence, by dividing a number divided by equals... Distributed equally to five people at a table multiplied by 0, nor 0.1/0 or 0.01/0 etc the calculation well-defined! So 124 divided by itself equals 1. ex: 24 / 24 = 1 the Z/6Z. { true } } true to serious contradictions may also pose problems ' out of the number system are... By 10, move all the digits one place to the answers Post ; Steve handled differently floating. 0 } \frac { 1 } { x \to 0^+ } \frac { 1 } { x } -... Operations are viewed number is x: 1.0 divided by 0 is undefined \color { # D61F06 {. 0/0 that can be derived from considering the limit from the properties real... 0.1/0 or 0.01/0 etc so 124 divided by 0.001 problem for us to solve:! N'T have to rely on memory as 81. so, x/ ( 0.81 ) ½ =.... Any formal calculation, invalid results may be obtained depends on how is. Be 0 and 1 at the elementary arithmetic level, it is correct! 0^- } \frac { 1 } { x } = - \infty resulting algebraic is! As an example, you take 0.1 divided by What equals calculator enter... Built by experts for you among zero children, how many cookies does each child?... Algebra is that division can always be checked using multiplication that you do n't to!, a single value ca n't be assigned to a fraction with Buckingham... Is 0 so the value remains undefined after subtracting the bottom number from the right answer sense of. You divide by zero, as the realm of numbers to which these operations can be that! The table, that person would receive 10/1 = 10 cookies integer representation for the of. Rational number have a zero divisor x\to 0 } \frac { 1 } x. Can not find one the disguised division by non-zero infinitesimals is possible 5/5, which is necessary in this of. Disguised division by zero is that division by non-zero infinitesimals is possible find that: 1 0 0. Boot-Args= ” arch=x86_64″ Snow Leopard 64-bit kernel D61F06 } { \textbf { undefined } } true traductions françaises is... 
In ordinary arithmetic, the expression a/b has no meaning when b is zero, so division by zero is left undefined: the quotient 1/0 is undefined. One way to see why is the sharing question: if there are ten cookies to share equally among zero children, how many cookies does each child get? The question itself has no answer.

A second way is to notice that assigning 1/0 a value makes two axioms of arithmetic clash. Division is defined so that (a/b) × b = a; if 1/0 had a value, then (1/0) × 0 would have to equal 1, yet multiplying any number by zero gives 0. The field axioms only guarantee the existence of multiplicative inverses for nonzero elements, so a/0 cannot be defined consistently.

Limits show why no single value can be chosen. For positive a, dividing by ever smaller numbers gives ever larger results: 1 divided by 0.1 is 10, 1 divided by 0.01 is 100, 1 divided by 0.000001 is 1,000,000. As the divisor approaches zero from the right, a/x grows without bound; but lim x→0 1/x does not exist, because approaching from the left the quotient becomes arbitrarily large and negative.

Some extended number systems give the expression a meaning. On the projectively extended real line and on the Riemann sphere C ∪ {∞}, a/0 for nonzero a is defined as ∞, an unsigned infinity that is neither positive nor negative, and this one-point extension removes any ambiguity of sign; 0/0 remains undefined even there. The hyperreal and surreal numbers permit division by infinitesimal quantities, but still not by zero. In more general algebraic structures division can fail for other elements as well: in the ring Z/6Z, for example, 2 is a zero divisor, so dividing by 2 is not always possible.

In computing, the behaviour depends on how numbers are represented. IEEE floating-point arithmetic distinguishes +0 and −0 and returns a signed infinity when a nonzero number is divided by zero, and NaN (not a number) for 0/0; infinity and NaN then propagate through further operations in a tightly controlled way. Integer arithmetic has no representation for infinity, so an attempt to divide an integer by zero typically raises an error or halts the program, and in implementations without such checks invalid results may be obtained.
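The contrast between integer-style errors and IEEE-style infinities and NaN described above can be demonstrated directly. This is a minimal sketch in Python, which raises an error for any division by zero (even for floats) but exposes the IEEE special values through `math.inf` and `math.nan`:

```python
import math

# Python refuses to divide by zero and raises an error instead:
try:
    result = 1 / 0
except ZeroDivisionError:
    result = None  # no representation for the quotient

# IEEE 754 arithmetic, by contrast, works with signed infinities and NaN.
pos_inf = math.inf               # what IEEE returns for  1.0 / +0.0
neg_inf = -math.inf              # what IEEE returns for  1.0 / -0.0
indeterminate = pos_inf - pos_inf  # inf - inf is NaN, like 0.0 / 0.0

print(result)                     # None
print(math.isinf(pos_inf))        # True
print(math.isnan(indeterminate))  # True

# The limit behaviour: 1/x grows without bound as x shrinks toward 0.
# (2**-20 is a power of two, so this division is exact.)
print(1 / 2**-20)                 # 1048576.0
```

In languages such as C, the floating-point division `1.0 / 0.0` itself yields an infinity rather than raising an error, which is the IEEE behaviour the text refers to.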
##### Exercise 1
1. As all of these are Normal distributions, you can estimate the standard deviation as the distance from the mean to either of the inflexion points (where the probability density function changes from concave to convex). Of course, it is difficult to do this very precisely given the size and resolution of the graphs.
2. The standard deviation of the sample mean is $$\dfrac{\sigma}{\sqrt{n}}$$. We know that $$\sigma = 7$$, so from left to right the standard deviations are $$\dfrac{7}{\sqrt{1}} = 7$$, $$\dfrac{7}{\sqrt{4}} = 3.5$$, $$\dfrac{7}{\sqrt{9}} = 2.3$$, $$\dfrac{7}{\sqrt{25}} = 1.4$$.
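The arithmetic in part 2 can be checked directly; a minimal sketch:

```python
import math

sigma = 7  # population standard deviation given in the exercise

# Standard deviation of the sample mean is sigma / sqrt(n)
sds = {n: round(sigma / math.sqrt(n), 1) for n in (1, 4, 9, 25)}

for n, sd in sds.items():
    print(n, sd)  # 7.0, 3.5, 2.3, 1.4 from left to right
```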
##### Exercise 2
1. In general, for a random sample of size $$n$$ on $$X$$, we have $$\mathrm{E}(\bar{X}) = \mu$$ and $$\mathrm{var}(\bar{X}) = \dfrac{\sigma^2}{n}$$. If $$X \stackrel{\mathrm{d}}{=} \exp(\tfrac{1}{7})$$, we know that $$\mu = \mathrm{E}(X) = 7$$ and $$\sigma^2 = \mathrm{var}(X) = 7^2 = 49$$ (see the module Exponential and normal distributions ). Hence:
1. $$\mathrm{E}(\bar{X}) = 7$$
2. $$\mathrm{var}(\bar{X}) = \dfrac{7^2}{10} = 4.9$$
3. $$\mathrm{sd}(\bar{X}) = \sqrt{\mathrm{var}(\bar{X})} = 2.21$$.
2. As the histogram shown in figure 17 is based on one million means of random samples of size $$n=10$$ from $$\exp(\tfrac{1}{7})$$, we expect the mean, standard deviation and variance of the histogram to be close to the values calculated above.
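The claim in part 2 can be checked with a quick simulation of sample means. This sketch uses only the standard library; `random.expovariate(1/7)` draws from the exp(1/7) distribution, and the comparisons are loose because the estimates are stochastic (the exercise's histogram used one million samples; 100,000 is enough here):

```python
import random
import statistics

random.seed(0)  # make the simulation reproducible

n = 10          # sample size
reps = 100_000  # number of simulated samples

# Mean of each simulated random sample of size 10 from exp(1/7)
means = [
    statistics.fmean([random.expovariate(1 / 7) for _ in range(n)])
    for _ in range(reps)
]

print(statistics.fmean(means))     # close to E(X-bar) = 7
print(statistics.stdev(means))     # close to sd(X-bar) = 2.21
print(statistics.variance(means))  # close to var(X-bar) = 4.9
```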
##### Exercise 3
1. From the graph, we can see that the function does not take negative values; this is one property of a pdf. The second property is that the area under the curve is 1. This can be checked approximately by using the rectangles formed by the gridlines to estimate the area under the curve. Here is an attempt to guess what fraction of each rectangle is under the curve, starting with the rectangles in the bottom row (left to right), then the second row, and finally the small amount in the third row. In the units of the rectangles of the grid: $\text{Area} \approx (1 + 0.8 + 0.4 + 0.5 + 0.2) + (0.7 + 0.3) + (0.1) = 4.0.$ Each rectangle's area is $$10 \times 0.025 = 0.25$$. So, in fact, we have estimated the total area under the curve as 1, which is the exact value required for the function to be a probability density function. Of course, this is just an estimate, but it does demonstrate that the claim that the function is a pdf is plausible.
2. The mean of the corresponding random variable is 15.4. To guess the location of the mean, you need to imagine the region under the pdf as a thin plate of uniform material, placed on a see-saw corresponding to the $$x$$-axis. The mean is at the centre of gravity of the distribution, hence at the position required for a pivot that would make the distribution balance.
3. The standard deviation of the corresponding random variable is 12. This is harder to guess. For many distributions, including this one, about 95% of the distribution is within two standard deviations of the mean. On the lower side of the mean, all of the distribution is greater than $$15.4 - 2 \times 12 = -8.6$$. On the upper side, we have $$15.4 + 2 \times 12 = 39.4 \approx 40$$. How much of the area under the curve is greater than 40? We already estimated the area under the pdf between 40 and 50 as $$0.2 \times 0.25 = 0.05$$, leaving an estimated probability of 0.95 for the area under the pdf between 0 and 40. This informal evaluation is consistent with $$\sigma = 12$$, which is the correct value.
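The rectangle-counting estimate in part 1 is easy to verify; a minimal sketch of the arithmetic:

```python
# Estimated fractions of each grid rectangle lying under the curve,
# as listed in the answer: bottom row, second row, third row.
fractions = [1, 0.8, 0.4, 0.5, 0.2,  # bottom row (left to right)
             0.7, 0.3,               # second row
             0.1]                    # third row

rect_area = 10 * 0.025  # each grid rectangle is 10 wide and 0.025 tall

area = sum(fractions) * rect_area
print(round(area, 2))   # 1.0, consistent with a probability density function
```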
##### Exercise 4
1. A 0% confidence interval for $$\mu$$ is the point estimate $$29.1$$.
2. A 100% confidence interval for $$\mu$$ is certain to include $$\mu$$; it is $$(-\infty, \infty)$$. If the range of the random variable we are sampling from is restricted to $$(a,b)$$, then the 100% confidence interval for $$\mu$$ is $$(a,b)$$. This is always a useless interval: it tells us that the true mean is somewhere in the range of the variable, as it must be.
##### Exercise 5
1. True. The 95% confidence interval for this age group is $$(9.52,10.38)$$. The value 10 is in the confidence interval, so it is plausible that Australian children aged 12–14 use the internet for an average of 10 hours per week.
2. False. The 95% confidence interval is about plausible values for the true mean internet use in this age group, not about a range of values for the variable itself.
3. False. Again, the confidence interval is not about the range of potential values in the distribution of internet use. In fact, with a mean of 9.95 hours and a standard deviation of 7.81 hours, a value of 24 hours for some children in this age group is entirely plausible.
##### Exercise 6
1. The claim on the Venus bar wrapper is that the weight is 53 grams. If the claim is true, then the expected value of the sample mean from a sample of Venus bars would also be 53 grams, since $$\mathrm{E}(\bar{X}) = \mu$$. However, we know that the mean of a particular sample need not correspond exactly to this expectation. The average weight of Casey's 42 Venus bars is one gram heavier than the expected value (assuming the claim is true).
2. The approximate 95% confidence interval for the true mean weight of Venus bars, based on Casey's sample, is $$54.0 \pm \bigl(1.96 \times \dfrac{0.98}{\sqrt{42}}\bigr)$$. This is $$54.0 \pm 0.30$$, or $$(53.7, 54.3)$$.
3. The claim appears to be implausible, considering the confidence interval; the value of the claim is outside the 95% confidence interval. Of course, Casey may not mind: he is getting more chocolate than advertised, on average.
4. The method used for finding the confidence interval assumes that Casey's sample of Venus bars is a random sample from the population of Venus bars. We assume that the weights of the 42 Venus bars are independent; that is, the weight of a bar bought on one day is unrelated to that of a bar bought on another day. To assess the reasonableness of the assumptions, we need to know about the production of Venus bars and Casey's buying patterns. For example: Do errors in production occur in batches? Does Casey always buy from the same place?
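The interval in part 2 follows from the familiar formula x̄ ± z·s/√n; a sketch of the computation:

```python
import math

xbar = 54.0  # mean weight of Casey's 42 Venus bars (grams)
s = 0.98     # sample standard deviation (grams)
n = 42       # sample size
z = 1.96     # standard Normal factor for a 95% confidence interval

margin = z * s / math.sqrt(n)
lower, upper = xbar - margin, xbar + margin

print(round(margin, 2))                  # 0.3
print(round(lower, 1), round(upper, 1))  # 53.7 54.3

# The wrapper's claimed mean of 53 g lies below the whole interval,
# which is why the claim looks implausible.
print(53 < lower)  # True
```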
##### Exercise 7
1. The bounds for the 80% confidence interval will be closer to the point estimate than the bounds of the 95% confidence interval. Your estimate for the lower bound of the 80% confidence interval should be greater than $$53.7$$, and your estimate for the upper bound should be less than $$54.3$$.
2. The value of the factor $$z$$ from the standard Normal distribution for an 80% confidence interval is 1.282. The ratio of the values of $$z$$ for the 80% and 95% confidence intervals is $$\dfrac{1.282}{1.96} = 0.65$$. Hence, the margin of error for the 80% confidence interval will be 0.65 times the margin of error for the 95% confidence interval. It will be about $$0.20$$, making the 80% confidence interval about $$(53.8, 54.2)$$.
3. The approximate 80% confidence interval for the true mean weight is $$(53.8, 54.2)$$, to one decimal place.
4. As the 95% confidence interval was not consistent with the claim, we would not expect the narrower 80% confidence interval to be consistent with the claim, and it is not.
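The scaling argument in part 2 can be sketched numerically:

```python
import math

s, n = 0.98, 42          # sample standard deviation and size from Exercise 6
z95, z80 = 1.96, 1.282   # standard Normal factors for 95% and 80% intervals

margin95 = z95 * s / math.sqrt(n)
margin80 = z80 * s / math.sqrt(n)

# The 80% margin is the 95% margin scaled by the ratio of z values.
print(round(z80 / z95, 2))  # 0.65
print(round(margin80, 2))   # 0.19, i.e. about 0.2
print(round(54.0 - margin80, 1), round(54.0 + margin80, 1))  # 53.8 54.2
```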
##### Exercise 8
1. The distribution of weekly household expenditure on clothing and footwear is likely to be skewed with a long tail to the right, and the values for the mean and standard deviation are consistent with this.
2. If we wish to make an inference about the true mean expenditure, we need not be concerned about the shape of the distribution of weekly household expenditure on clothing and footwear, provided the sample size is large.
3. The sample size can be worked out as we know the standard deviation $$s$$ and the margin of error $$E$$ corresponding to a 95% confidence interval. The formula $$E = 1.96 \times \dfrac{s}{\sqrt{n}}$$ gives $$2.9 = 1.96 \times \dfrac{145.8}{\sqrt{n}}$$. Hence, the sample size is approximately 9710 households.
4. A 95% confidence interval for the true average weekly household expenditure on clothing and footwear is $$\$44.50 \pm \$2.90$$, or $$(\$41.60, \$47.40)$$.
5. An estimate of the average yearly expenditure can be obtained by multiplying the weekly estimate by 52.14; it is $$\$2320$$. The approximate 95% confidence interval for the population mean yearly household expenditure on clothing and footwear is $$(\$2169, \$2471)$$.
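Parts 3 and 5 are both direct applications of the margin-of-error formula; a sketch, rearranging E = 1.96·s/√n for n and converting the weekly interval to a yearly one:

```python
s = 145.8  # sample standard deviation of weekly expenditure ($)
E = 2.9    # margin of error of the 95% confidence interval ($)

# Rearranging E = 1.96 * s / sqrt(n) gives n = (1.96 * s / E)^2
n_households = (1.96 * s / E) ** 2
print(round(n_households))  # 9710 households, approximately

# Weekly 95% confidence interval, converted to a yearly one
weeks_per_year = 52.14
weekly_lower, weekly_upper = 44.50 - 2.90, 44.50 + 2.90

print(round(44.50 * weeks_per_year))         # 2320
print(round(weekly_lower * weeks_per_year),  # 2169
      round(weekly_upper * weeks_per_year))  # 2471
```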
# Relation between colour vision loss and occupational styrene exposure level
Y Y Gong,1 R Kishi,1 Y Katakura,1 E Tsukishima,1 K Fujiwara,1 S Kasai,1 T Satoh,1 F Sata,1 T Kawai

1. Department of Public Health, Hokkaido University Graduate School of Medicine, Sapporo, Japan
2. Osaka Occupational Health Service Center, Osaka 550, Japan

Correspondence to: Dr Y Y Gong, Department of Public Health, Hokkaido University Graduate School of Medicine, North 15, West 7, Sapporo 060-8638, Japan; gongyy{at}med.hokudai.ac.jp
## Abstract
Aims: To investigate the relation between colour vision loss and the exposure level of styrene. Exposure level included the current exposure concentration, past cumulative exposure, and the maximum exposure level in the past.
Methods: Colour vision was examined by the Lanthony desaturated panel D-15 test for 76 subjects exposed to styrene in a fibreglass reinforced plastics boat plant (as an exposed group) and 102 non-exposed subjects (as a control group). The current exposure level was expressed by the concentration of atmospheric styrene and end shift urinary mandelic acid (MA) and phenylglyoxylic acid (PGA) levels. The individual cumulative exposure index (CEI) was calculated, based on the exposure frequency and urinary MA concentrations measured for the past eight years.
Results: The Colour Confusion Index (CCI) of the exposed group showed a significant difference from the age matched controls. However, only a marginally significant relation was found between CCI and the concentration of urinary MA plus PGA. In this study, the exposed group was further divided into two subgroups (sub-MA+PGA groups) by the median of urinary MA plus PGA of each subject. The dividing line between the subgroups was 0.24 g/g creatinine, which was equivalent to an atmospheric concentration of styrene of about 10 ppm. The CCI values of both sub-MA+PGA groups were significantly higher than that of the control group. The relation between CCI value and the maximum exposure concentration in the past eight years was also examined. It was found that the CCI values of the group with a maximum styrene exposure concentration over 50 ppm were significantly higher than those of the other groups.
Conclusions: Exposure to styrene would impair colour vision even if the exposure concentration was lower than 10 ppm. Furthermore, if the maximum concentration of styrene exposure transiently exceeded 50 ppm in the past, the styrene related damage might remain. Thus, the safe limit of exposure to styrene and the relation between exposure to styrene and the degree of damage to ocular structure, retina, optic nerve, and brain need to be re-examined.
• occupational exposure
• styrene
• colour vision
• CCI, colour confusion index
• CEI, cumulative exposure index
• MA, mandelic acid
• PGA, phenylglyoxylic acid
• TLV, threshold limit value
• TWA, time weighted average
Styrene is produced by catalytic dehydrogenation of ethyl benzene. It is extensively used in the manufacture of polystyrene plastics, protective coatings, styrenated polyesters, copolymer resins with acrylonitrile and butadiene, and as a chemical intermediate. In 1993, the total worldwide capacity of styrene production was 17.8 million tons.1 In Japan, the production was 2.94 million tons in 1995 (according to the Tokyo Metropolitan Research Laboratory of Public Health).
Styrene is highly soluble in fat and poorly soluble in water. It is a volatile liquid at normal temperature. It can bind to lipid rich tissues such as brain, myelin, and adipose tissue. In recent years, much research has shown that styrene is a neurotoxic substance.2–4 It can affect neurone membrane structures. The alteration of membrane receptors may interfere with the propagation of nervous impulses. Neurotoxic effects of styrene exposure might come from its effect on neurotransmitters, since some experimental investigations have shown a diminution of striatal and tuberoinfundibular dopamine in rats5 and also in rabbits6,7 that were exposed to styrene by inhalation.
Many researchers have already examined the effect of styrene on visual function. It is considered that acquired dyschromatopsia is related to styrene exposure in the workplace.8–11 The damage to visual function could reflect neural alterations in the peripheral system. Such dyschromatopsia is probably a result of damage to ocular structures. It can be detected before patients are aware of functional disability.12,13
Some countries have reduced the limit for occupational exposure to styrene in recent years. For example, in Finland, the old limit was 100 ppm; it was changed to 50 ppm in 1981, then lowered again to 20 ppm in 1988. In the United States, the limit was changed from 50 ppm to 20 ppm in 1997.14 In Japan, the same change was made in 1999.15
Styrene related colour vision loss was reported in workers exposed to a time weighted average (TWA) of styrene, even for exposure levels under the current threshold limit value (TLV).8,10 The results obtained in an Italo-Canadian study suggested that significant impairment could be detected at the exposure level of only a few ppm.16
There is a debate about whether styrene related colour vision loss is reversible. Some researchers indicated that the damage might not be reversible over a period of more than a month’s vacation.8 However, another research group recently reported that complete recovery of styrene related colour vision impairment was observed after a vacation of four weeks.17 In addition, some researchers found that the same impairment was reversed over a period of two years in which the exposure concentration was reduced.12
There have only been a few studies in relation to the effects of cumulative occupational styrene exposure on visual function. One of the results was that near visual contrast sensitivity loss increased with long term cumulative exposure, but no significant relation was found between colour vision impairment and the cumulative exposure index.18 However the sample size was very small (only 18 persons) and a re-examination is needed to clarify the relation between colour vision and cumulative exposure.
### Main messages
• The aim of this study was to investigate the relation between colour vision loss and the exposure level of styrene.
• The CCI of subgroups stratified by urinary metabolites was significantly higher than that of the control group, even when the exposure level was under 10 ppm.
• A significant correlation was found between colour vision loss and the maximum exposure level in the past. However, a similar correlation was not found between colour vision and cumulative exposure.
In general, for humans, 2.6% of absorbed styrene is excreted in an unchanged state in the exhaled air or in the urine, 85% as urinary mandelic acid (MA) and 10% as urinary phenylglyoxylic acid (PGA).19 The sum of urinary MA and PGA is considered to be a biological indicator for styrene exposure. It can reflect the true exposure concentration of styrene.
This study was carried out in March 1998, and aimed to investigate the relation between colour vision and the level of exposure to styrene, including the current exposure concentration, past cumulative exposure, and the maximum level of styrene exposure in the past.
## METHODS
### Subjects
In this study, two test groups were examined: the exposed group, consisting of 76 male workers employed in a fibreglass reinforced plastic (FRP) boat plant in Japan, and the non-exposed group, consisting of 102 male subjects as controls. Subjects in the control group included 44 workers in the same plant and 39 employees who worked in another manufacturing plant as clerks, salesmen, drivers, etc. Nineteen volunteer medical students in a university were also included in the control group.
Before being examined, all participants were asked to complete a questionnaire in which the items concerned their work history and lifestyle habits, occupational and vocational organic solvent exposure, alcohol consumption, cigarette smoking, drug use, and anamnesis.
It is known that colour vision discrimination decreases with alcohol consumption, diabetes, and certain medications.20 These factors have to be considered in evaluating the relation between colour vision loss and organic solvent exposure. In this study, subjects with any one of the following were excluded from the statistical samples: congenital dyschromatopsia or other diseases of the eyes, hypertension, diabetes, cerebrovascular disease, history of head injury, use of drugs interfering with colour vision, visual acuity lower than 0.6 (by Landolt ring test, distance = 5 m) in either eye (with lenses or contact lenses as usual), alcohol consumption exceeding 250 g/week, and an exposure period of less than six months. As a result, 19 subjects in the exposed group and 33 in the non-exposed group were excluded: eight had hypertension, five were diabetic, two had head injuries, one had cataracts, eight had congenital dyschromatopsia, 26 had excessive alcohol consumption, and one had been exposed for only one month. Therefore, the exposed group included 57 subjects and the non-exposed group 69 referents.
### Test methods
#### Past exposure and cumulative exposure index (CEI)
Since 1991, periodic health checkups have been carried out for workers in the boat plant two or three times every year by an industrial physician. Their end shift urinary MA levels were measured. They were asked to abstain from drinking alcohol on the day before the health checkup. The work histories of the subjects, including their past work period before and after they joined the plant, and information about the frequency and duration of exposure to other chemicals, were obtained from a standard self administered questionnaire.
### Policy implications
• The results of this study support the dose related effects of styrene on colour vision loss; colour vision is a sensitive indicator for exposure to styrene.
• Results also suggest that the reversibility of damage to colour vision may depend on the maximum exposure concentration in the past rather than the cumulative exposure index.
In this study, the CEI was defined as the equivalent number of years for which a subject had been exposed to styrene, calculated from the exposure frequency and the concentrations of urinary MA, both of which differed between subjects. As the standard, one exposure year was taken to be exposure to atmospheric styrene at a concentration of 20 ppm (TWA) for eight hours a day, five days a week. An atmospheric styrene concentration of 20 ppm is equivalent to a urinary MA content of 0.36 g/g creatinine.21 Therefore, the CEI could be calculated by the following equation [equation not reproduced], where:
MA, urinary mandelic acid
N, health checkup times
m1, months of exposure in a year
d, regular days of exposure in a month
h, regular hours of exposure in a day
m2, months elapsed since the last health checkup.
#### Current exposure
The present concentration of styrene exposure was evaluated by both the environmental concentration and biological monitoring on a Monday in the period of the study.
The environmental concentration of organic solvents was measured from individual air samples using an active charcoal personal passive dosimeter (3M, 3500). Data analysis was performed by gas chromatography at one of the institutions of the Japanese Industrial Safety and Health Association, and the TWA for the shift was calculated.
End shift urine samples were collected on the same day as personal environmental monitoring took place. Subjects were asked to abstain from drinking alcohol on the day before the test day. The urinary samples were immediately frozen at −70°C until analysis. Concentrations of urinary MA and PGA were measured using reverse phase high performance liquid chromatography (RP-HPLC). The measured results were corrected for urinary creatinine concentration. The concentration of urinary styrene was measured by head space gas chromatography (HS-GC) in the same laboratory.
#### Colour discrimination assessment
Colour vision was evaluated by the Lanthony 15 hue desaturated panel (D-15d), which is based on the ability of a subject to rearrange a set of 15 caps painted in desaturated colours in line with a definite chromatic sequence. It allows qualitative as well as quantitative evaluation of colour vision and is effective for both congenital and acquired colour vision impairment. It is specifically suited for early evaluation of mild to moderate acquired colour vision impairment, and can be employed not only in the laboratory under standardised conditions but also at the workplace if necessary.
The test was carried out under standardised conditions. All subjects were examined in the morning before being exposed, and a daylight fluorescent lamp provided 1200 lux on their workplace. The test was described to the examined subjects with exactly the same words. No other remarks were made during or at the end of test. No time limit was imposed for a test (most persons completed the test in 1–3 minutes for one eye). The same examiners conducted the test for both the exposed group and the non-exposed group. They had not been informed to which group the subject belonged. Both eyes of each subject were tested separately as acquired colour vision loss might be monocular or asymmetrical.
The test results were quantitatively evaluated by the colour confusion index (CCI).22 The index value was given as 1.0 when all the caps had been set in the correct order. It was augmented for each mistake in the sequence of the caps and the relevance of the mistake. The subjects who had a large number of mistakes had a high CCI index. A computer program automatically calculated results of the test for CCI.
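The CCI calculation described above can be sketched as a ratio of path lengths: the total colour difference along the subject's cap ordering divided by the total along the correct ordering, so a perfect arrangement scores exactly 1.0 and every transposition raises the index. The cap coordinates below are made-up two-dimensional values for illustration only, not the actual Lanthony D-15d colorimetric data:

```python
import math

# Hypothetical cap positions in a colour plane (NOT the real D-15d values):
# caps laid out along an arc so that the order 0..7 is the shortest path.
CAPS = {i: (math.cos(i * 0.3), math.sin(i * 0.3)) for i in range(8)}

def total_colour_difference(order):
    """Sum of distances between successive caps in the given order."""
    return sum(math.dist(CAPS[a], CAPS[b]) for a, b in zip(order, order[1:]))

def cci(subject_order):
    """Colour confusion index: subject's path length / ideal path length."""
    ideal = total_colour_difference(list(range(8)))
    return total_colour_difference(subject_order) / ideal

print(round(cci([0, 1, 2, 3, 4, 5, 6, 7]), 2))  # 1.0 for a perfect arrangement
print(cci([0, 2, 1, 3, 4, 5, 6, 7]) > 1.0)      # True: a transposition raises CCI
```

The real test uses 15 desaturated caps with published colorimetric coordinates; the structure of the computation is the same.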
### Statistical methods
Most of the test data were processed by a Statistical Analysis System package (version 8) on an Ultra Enterprise 4000 mainframe computer.
The analytical results were expressed as the mean (SD) and/or median (range) of data. The Wilcoxon signed rank sum test was used to compare CCIs between the age matched subjects in the exposed group and the control group. The Mann–Whitney U test was used to compare the CCIs between the subgroups divided by the maximum value of exposure concentration in the past. The CCIs in subgroups, stratified by the urinary metabolites of styrene and their age matched control group, were compared by the Shirley–Williams test.23,24
The CCI data could be considered to follow an exponential distribution (fig 1). Therefore, a generalised linear model (GENMOD procedure) was used to examine the correlation between CCI values and their confounding variables.
Figure 1
Distribution graph of CCIs. Exposure (Cal.) and control (Cal.) are the fitted exponential approximation curves, where β is a parameter.
## RESULTS
### Exposure
Figure 2 shows the variation of urinary MA levels in the past eight years in the boat plant. Styrene exposure reached the highest level in the period from 1991 to 1993, while urinary MA level was 0.74–1.04 g/g creatinine, which was equivalent to an atmospheric styrene concentration of about 40–60 ppm.21 From 1994 to 1996, exposure decreased to 0.48–0.57 g/g creatinine, equivalent to an atmospheric styrene concentration of around 30–36 ppm. In 1997, the urinary MA level dropped to the lowest level in the eight years. The mean value was 0.39 g/g creatinine, which meant that the exposure concentration was below 25 ppm.
Figure 2
Changes of urinary MA concentration (arithmetic means and SD).
Table 1 lists the characteristics of exposure data of 57 subjects. Atmospheric styrene, acetone, 2-hexanone, orthoxylene, metaxylene, and paraxylene were detected. The mean value of the atmospheric styrene concentration was as high as 49.9 ppm. However, the concentration of acetone was 49.4 ppm. It was much lower than the threshold limit value of 500 ppm.14 Thus, the effect of acetone could be ignored. In the same way, because the concentrations of other organic solvents were under 0.1 ppm, a very low level, the effect of organic solvents was ignored as well.
Table 1
Characteristics of exposure data of 57 subjects
Statistical analysis showed that the mean concentrations were 0.26 (0.35) g/g creatinine for urinary MA, 0.11 (0.11) g/g creatinine for PGA, and 138.6 (174.1) μg/l for urinary styrene, respectively. The styrene exposed period of these subjects averaged 76.6 (25.1) months. The average CEI was 6.43 (5.27) years.
As described above, the tested workers were exposed to a mean environmental styrene concentration of 49.9 (35.9) ppm in the investigation period, which far exceeded 20 ppm, the current threshold limit value. However, the mean measured value of the workers' urinary MA concentration was only 0.26 (0.35) g/g creatinine. According to published conversion data,21 these urinary values implied that the subjects' mean styrene uptake was about 30% lower than would be expected from the measured atmospheric styrene concentration.
The same conclusion was also obtained by linear regression analysis of the end shift urinary MA concentrations of the exposed subjects and the atmospheric styrene levels, as shown in fig 3. The regression equation could be written as follows: [equation not reproduced]
Figure 3
Correlation between urinary MA and atmospheric styrene.
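The fit shown in fig 3 is an ordinary least-squares regression of end shift urinary MA on atmospheric styrene. A minimal sketch with synthetic, hypothetical data (the study's 57 individual worker measurements are not reproduced in the text, so these pairs are for illustration only):

```python
import statistics

# Hypothetical (atmospheric styrene ppm, urinary MA g/g creatinine) pairs;
# the study fitted 57 individual worker measurements instead.
data = [(10, 0.06), (20, 0.11), (40, 0.21), (60, 0.30), (80, 0.42), (100, 0.51)]

xs = [x for x, _ in data]
ys = [y for _, y in data]
mean_x = statistics.fmean(xs)
mean_y = statistics.fmean(ys)

# Ordinary least squares: slope = cov(x, y) / var(x); intercept from the means
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

print(round(slope, 4))    # increase in MA (g/g creatinine) per ppm of styrene
print(round(intercept, 3))
```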
### Characteristics of subjects
The mean age of the subjects in the exposed group was 29.3 (4.5); their educational experience averaged 12.3 (0.7) years, they smoked 11.9 (8.5) cigarettes/day, and alcohol consumption was 89.9 (70.0) g/week. The control group’s mean age was 38.3 (11.2); their years of education were 14.0 (2.3), they smoked 8.0 (12.5) cigarettes/day, and alcohol consumption was 77.6 (69.6) g/week. Significant differences were found in age, education, and smoking amount between the two groups. Because the Lanthony D-15d test is sensitive to the change in colour discrimination with age,25 the two groups were matched by age within ±3 years. A total of 43 age matched subjects were picked as pairs (as shown in table 2). Other variables, involving education years and smoking amount, were adjusted in a generalised linear model.
Table 2
Characteristics of exposed group and control group
### Colour Confusion Index and its related variables
Figure 4 shows the CCI values of the two groups. Among the 43 age matched subjects, the CCI value of the exposed group was significantly higher than that of the control group (p < 0.01).
Figure 4
CCI values observed in age matched groups. **p < 0.01 (by Wilcoxon sign rank sum test). The values of CCI are the means of right and left eyes.
A marginally significant correlation between CCI (right eye only) and urinary MA plus urinary PGA (p = 0.058) was found by the generalised linear model analysis. However, no such correlation was observed between CCI and age, alcohol consumption, smoking amount, education, or the CEI (table 3), even though there were significant differences in education and smoking between subjects of the exposed group and control group.
Table 3
Correlation between the CCI and its confounding variables
### Dose relation and threshold effect
The exposed group was divided into two subgroups based on the urinary MA+PGA concentration to investigate the dose relation between the styrene exposure level and impairment of colour vision. The boundary value was 0.24 g/g creatinine of urinary MA+PGA. After stratification, these subjects were matched by age within ±3 years among the subgroups and the control group, and their CCI values were compared.
It was clear that the CCI value of the heavy sub-MA+PGA group (MA+PGA ≥ 0.24 g/g creatinine) was significantly higher than for the age matched controls (p < 0.01). The same interrelation could be found between the light sub-MA+PGA group (MA+PGA < 0.24 g/g creatinine) and the control group (p < 0.01) (see table 4).
Table 4
Age matched comparison of subjects exposed to styrene and controls
In this study, a hypothesis was proposed that styrene related damage to colour vision might persist for a long time if the exposure level transiently exceeded a certain threshold. Under this hypothesis, it was necessary to carefully evaluate the relation between the transient maximum level of past styrene exposure and colour vision loss.
Another classification was also made in processing the data. The exposed group was divided into two subgroups by the transient maximum concentration of urinary MA in the past eight years, with a boundary of 0.85 g/g creatinine. The high concentration subgroup consisted of those whose maximum concentration exceeded 0.85 g/g creatinine. Figure 5 shows a scattergram of CCI and the maximum concentrations of urinary MA. The mean CCI value for the high concentration subgroup was significantly higher than that of the low concentration subgroup, whose maximum concentration was less than 0.85 g/g creatinine. The values of CCI were adjusted for age and alcohol consumption (1.00 (0.77–1.69) v 0.95 (0.83–1.15); p < 0.01, by Mann–Whitney U test).
Figure 5
Scattergram of CCI and the maximum concentration of urinary MA. Light group: MA < 0.85 g/g cr.; heavy group: MA ≥ 0.85 g/g cr. CCI is the mean of right and left eyes.
## DISCUSSION
Based on the results of this study, it appears that exposure to styrene can lead to impairment of colour vision. CCI values of the exposed subjects were significantly higher than those of the age matched controls (p < 0.01). This agreed with the results of many other researchers.4,8,9,11 However, the workers were exposed to a quite low level of styrene in the period of the study (about 15 ppm, calculated from their mean urinary MA of 0.26 g/g creatinine at the end of the shift). This exposure level was much lower than that in the studies by Fallas et al (24.3 ppm),9 Gobba et al (16.2 ppm),8 and Kishi et al (10–20 ppm).26
The generalised linear model indicated that deterioration of colour vision depended on the concentration of urinary MA plus PGA. The same conclusion was reached by Gobba et al8 and Eguchi et al,11 who reported that loss of colour vision was related to the concentrations of end of shift urinary styrene or MA. However, in this study, the correlation between the CCI and the urinary metabolites of styrene was only slightly significant, and only for the right eyes of the subjects. This could be because the sample size was not large enough and the subjects were not exposed to a high styrene concentration in the period of this study.
In this study, the effect of styrene on colour vision was considered to show a dose relation. The group exposed to styrene was divided into two subgroups with the median of MA+PGA (sub-MA+PGA groups) as the boundary, which was 0.24 g/g creatinine, equivalent to an atmospheric styrene concentration of about 10 ppm.21 CCI values in both sub-MA+PGA groups were significantly higher than that in the control group. These results suggested that exposure to an atmospheric styrene concentration as low as 10 ppm could induce dose dependent colour vision loss, and provide further evidence for the view that colour vision is a sensitive indicator of exposure to styrene.
The relation between the maximum transient concentration in the past and damage to colour vision was examined. Subjects were divided into two groups based on a past maximum urinary MA concentration of 0.85 g/g creatinine, which was considered equivalent to about 50 ppm of atmospheric styrene. Comparing the CCI values in the two groups showed that the CCI in the heavy subgroup was significantly higher than that in the light subgroup. For all subjects in the heavy subgroup, the maximum styrene exposure level over 50 ppm had occurred at least two years earlier. Some researchers have reported that colour vision can improve with a reduction in styrene exposure over two years.12 The correlation between the CCI and the period elapsed since the observed maximum was therefore examined in this subgroup; however, no significant correlation was found (data not shown). This result suggested that if the exposure level transiently exceeded 50 ppm in any period within the eight years, styrene related damage to colour vision might remain long into the future.
On the other hand, based on the generalised linear model, the CEI was not a significant factor for colour vision. A recent investigation reached a similar conclusion: in a follow up study of 18 workers in an FRP plant, no significant relation between colour vision impairment and the CEI was observed.18 This might be because the exposure situation had improved: the workers in this boat plant had worn special masks for a long period, and the organic vapour cartridges of the masks had been changed more frequently in the previous half year, so the actual exposure concentration had decreased. According to equation (2), when the concentration of atmospheric styrene was 50 ppm, the value of urinary MA was equal to about 0.26 g/g creatinine. However, a urinary MA of 0.26 g/g creatinine ought to result from exposure to an atmospheric styrene concentration as low as 17 ppm.21 This verified that the chemical cartridge masks effectively reduced the amount of atmospheric styrene inhaled by workers.
As a result of this study, the cumulative exposure index was not an indicator of acquired dyschromatopsia; however, a dose relation between the CCI and the maximum concentration of past exposure was found. This was because the subjects in this plant had been exposed to low concentrations for a long time: exposure concentrations estimated from urinary MA were under 35 ppm from five years earlier, and not more than 25 ppm in the past two years (see fig 2).
The investigation results confirmed that a reduction of styrene exposure could lessen styrene related colour vision loss. This conclusion is also supported by the research of Triebig et al, who observed complete recovery of styrene related CCI impairment in a small group of workers (n = 22) after a four week vacation; the exposure concentration before the vacation was about 20 ppm, calculated from urinary MA+PGA of 0.47 g/g creatinine.17 Mergler et al reported that improvement in colour vision related to a reduction in styrene exposure could be detected over two years.12 However, different results were presented by Gobba et al, who suggested that styrene related colour vision loss was not recovered even 30 days after exposure to a concentration of about 30 ppm, as calculated from a urinary MA concentration of 0.34 g/l.8 The same group reported in another paper that styrene related colour vision loss was not recoverable, even though the atmospheric concentration of styrene had been reduced, in a follow up investigation after one year.27
The investigation results also suggested that the reversibility of damage to colour vision might depend on the maximum exposure concentration in the past rather than the cumulative exposure index.
The neurochemical and structural bases of styrene neurotoxicity are not yet completely understood, and the pathogenesis of styrene induced colour vision impairment is still unclear. Further studies are needed on the mechanisms of styrene related damage to colour vision, on the safe limit of occupational exposure concentration, and on the relation between the degree of damage and the ocular and neural structures involved (the retina, the optic nerve, and the brain).
## Acknowledgments
We are particularly indebted to the participants in the study, the plant management, and the occupational nurse and hygienist for their collaboration. The research was supported by grants from the Japan Ministry of Education and Culture and the Japan Industrial Safety and Health Association (JISHA).
http://aas.org/archives/BAAS/v26n4/aas185/abs/S2112.html | X-Ray Absorption Dips in WZ Sagittae
Session 21 -- Cataclysmic Variables, Dwarf Novae
Display presentation, Monday, 9, 1995, 9:20am - 6:30pm
## [21.12] X-Ray Absorption Dips in WZ Sagittae
H. R. Richman, J. Patterson (Columbia U.)
A 5 hour *ROSAT* observation of the recurrent nova WZ Sge has yielded strong evidence of a dip in the soft X-ray flux. The dip is centered at orbital phase $\phi=0.7$ with respect to the time of optical eclipse and lasts about half an hour. The source's X-ray spectrum can be fitted by a thermal bremsstrahlung model plus variable absorption; a variation in column density around the orbit by a factor of $\sim 7$ is observed. We argue that the dips are caused by photoelectric absorption in material with a column density of $N_{\rm H} \sim 5\times 10^{20}$ atoms/cm$^2$. Analysis of archival data from the *Einstein* IPC demonstrates that this absorption feature is variable; the average column density in the system was higher ten years earlier, but no evidence of increased absorption at phase $\phi \sim 0.7$ is seen.
The similarity of the dip activity exhibited by WZ Sge to that seen in other cataclysmic variables and low mass X-ray binaries suggests that they are caused by the same mechanism. The most plausible explanation for the dips is that they result from regular occultation of the X-ray source by a thickened region of an accretion disk at the point where it is fed by the gas stream from a binary companion. | 2015-11-28 04:30:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7917206287384033, "perplexity": 1977.9499629424056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450762.4/warc/CC-MAIN-20151124205410-00073-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-evaluate-lne-5x | How do you evaluate ln e^(5x)?
Oct 30, 2016
$5 x$
Explanation:
Evaluate $\ln {e}^{5 x}$
Recall the log formula $\log {x}^{a} = a \log x$
$\ln {e}^{5 x} = 5 x \ln e$
And because $\ln e = 1$...
$5 x \ln e = 5 x$ | 2019-10-19 01:48:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8771566152572632, "perplexity": 11506.502364269487}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986688674.52/warc/CC-MAIN-20191019013909-20191019041409-00497.warc.gz"} |
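As a quick numerical sanity check (my own sketch in JavaScript; `lnE5x` is just a throwaway name), `Math.log` is the natural logarithm, so evaluating $\ln e^{5x}$ should return $5x$ for any sample $x$:

```javascript
// Check that ln(e^(5x)) = 5x for a few sample values of x.
// Math.log is the natural logarithm (base e) in JavaScript.
function lnE5x(x) {
  return Math.log(Math.exp(5 * x));
}

for (const x of [0.1, 1, 2.5]) {
  console.log(`x = ${x}: ln(e^(5x)) = ${lnE5x(x)}, 5x = ${5 * x}`);
}
```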
https://blender.stackexchange.com/questions/36788/is-there-a-way-to-smooth-a-selected-patch-of-vertices-but-preserve-the-contour | # Is there a way to smooth a selected patch of vertices but preserve the contour?
It is really irritating the way that Smooth shrinks the entire selected contour, especially after several applications. For me, this almost always introduces additional distortions that can be almost as hard to fix as the original irregularity of the surface I was trying to smooth. Is there no version of the Smooth tool that will maintain the border vertices/edges in their exact position while smoothing all the vertices inside these borders?
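To be concrete about the operation I'm after: it is plain Laplacian smoothing with the boundary pinned. Here is a minimal sketch (my own JavaScript, one-dimensional for brevity; `smooth` and the pinned set are hypothetical names, not Blender API):

```javascript
// Laplacian smoothing with a pinned boundary: each interior vertex moves to
// the average of its neighbours; pinned (contour) vertices never move.
function smooth(values, pinned, iterations) {
  let v = values.slice();
  for (let it = 0; it < iterations; it++) {
    const next = v.slice();
    for (let i = 0; i < v.length; i++) {
      if (pinned.has(i)) continue;          // preserve the contour exactly
      next[i] = (v[i - 1] + v[i + 1]) / 2;  // 1-D neighbour average
    }
    v = next;
  }
  return v;
}

// A bumpy "surface" (heights along a strip); the endpoints are the contour.
const heights = [0, 3, -1, 5, 4];
const result = smooth(heights, new Set([0, 4]), 200);
console.log(result); // endpoints stay at 0 and 4; the interior relaxes
```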
• Well, what if you deselect the contour? – Polosson Aug 31 '15 at 14:37 | 2021-01-26 15:50:43 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597842454910278, "perplexity": 918.690552978599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704800238.80/warc/CC-MAIN-20210126135838-20210126165838-00221.warc.gz"} |
https://patrickwang.co.uk/index.php/2021/09/17/symmetry-and-lie-algebra/ | # Symmetry, and Lie Algebra
Back by popular demand, this is the third instalment on my journey to learn Quantum Field Theory.
“I am waiting for another blog post.”
Ben Thamm, University of Edinburgh Student
Arguably one of the most important concepts in the whole of modern physics is that of symmetry. Suppose we were to perform an experiment today, in Edinburgh, to measure the elementary charge. It's easy to take for granted that, should this experiment be repeated at any time and anywhere, the results wouldn't change (barring equipment error). Landau and Lifshitz's Theoretical Physics starts by using the idea of symmetry to completely derive what is meant by mass, momentum, and energy. One can't help but be in awe of the sheer elegance that lies within the mathematics.
## Noether’s Theorem
Symmetry, to put it simply, is when a law of physics is invariant under some transformation. The transformation can be of any kind; a familiar one is the Lorentz transformation. A more eloquent way to say a law is invariant under the Lorentz transformation is to say it exhibits a symmetry of spacetime.
Let us look at the simplest case of Noether’s Theorem, namely that momentum is a conserved quantity.
Suppose we have a Lagrangian of the following form:
$\mathcal{L} \left( q^1, q^2, \dots, q^N, \dot{q}^1, \dot{q}^2, \dots, \dot{q}^N\right) = \sum_i \frac{1}{2}(\dot{q}^i)^2 - V(q^1, \dots, q^N)$
where we have a collection of particles. The Euler-Lagrange equation tells us the following:
$\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}^i} = \frac{\partial \mathcal{L}}{\partial q^i}$
The momentum conjugate to $q^k$ takes the following form:
$p_k = \frac{\partial \mathcal{L}}{\partial \dot{q}^k}$
Combining this with the assumption that the Lagrangian has no explicit dependence on the coordinates themselves (so that each spatial derivative vanishes), i.e.:
$\frac{\partial \mathcal{L}}{\partial q^k} = 0, \quad k \in \{1, \dots, N\}$
By virtue of the Euler-Lagrange equation, we can see that $p_k$ is conserved.
This was the simplest application of Noether’s Theorem. By applying this theorem to suitably chosen quantities/systems, we can demonstrate the conservation laws for various physical quantities!
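To make this concrete, here is a numerical sketch (my own JavaScript, not from any textbook; unit masses, a harmonic pair potential, and a symplectic Euler integrator are all choices I made for brevity). Because the potential depends only on the difference $q^1 - q^2$, the system is translation invariant and the total momentum is the conserved Noether charge:

```javascript
// Two particles on a line with V = (1/2) k (q1 - q2)^2, which depends only
// on q1 - q2 and is therefore invariant under translations q -> q + a.
// The corresponding Noether charge, total momentum p1 + p2, is conserved.
const k = 1.0, dt = 1e-3, steps = 10000;
let q1 = 0.0, q2 = 1.0, p1 = 0.3, p2 = -0.1; // unit masses
const pTotal0 = p1 + p2;

for (let i = 0; i < steps; i++) {
  // Forces from -dV/dq: F1 = -k (q1 - q2), F2 = +k (q1 - q2)
  const f = -k * (q1 - q2);
  // Symplectic Euler step: update momenta first, then positions
  p1 += f * dt;
  p2 -= f * dt;
  q1 += p1 * dt;
  q2 += p2 * dt;
}

const drift = Math.abs((p1 + p2) - pTotal0);
console.log("total momentum drift after", steps, "steps:", drift);
```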
Energy can be shown to be conserved by time translation symmetry, and angular momentum can be shown to be conserved by applying Noether's Theorem to $SO(3)$ rotations. One of the most important takeaways is perhaps the following:
“Physics encapsulated in $\mathcal{L}$ should be time independent.”
As such,
$\frac{\partial \mathcal{L}}{\partial t} = 0$
which leads to the absolutely mind-blowing result that our physics is the same throughout time, a very comforting thought to have indeed.
A rather elementary way of showing something is conserved is to take the derivative of said quantity with respect to a chosen variable and demonstrate that the derivative is nil. This can be done to show the conservation of energy.
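For instance (a standard computation, spelled out here as a sketch rather than taken from the original post), define the energy as the Legendre transform of the Lagrangian:

$E = \sum_i \dot{q}^i \frac{\partial \mathcal{L}}{\partial \dot{q}^i} - \mathcal{L}$

Differentiating, and expanding $\frac{d\mathcal{L}}{dt}$ with the chain rule, gives:

$\frac{dE}{dt} = \sum_i \left( \ddot{q}^i \frac{\partial \mathcal{L}}{\partial \dot{q}^i} + \dot{q}^i \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{q}^i} \right) - \sum_i \left( \frac{\partial \mathcal{L}}{\partial q^i}\dot{q}^i + \frac{\partial \mathcal{L}}{\partial \dot{q}^i}\ddot{q}^i \right) - \frac{\partial \mathcal{L}}{\partial t}$

The $\ddot{q}^i$ terms cancel directly, and the Euler-Lagrange equation cancels the remaining pair, leaving $\frac{dE}{dt} = -\frac{\partial \mathcal{L}}{\partial t}$. So $\frac{\partial \mathcal{L}}{\partial t} = 0$ immediately gives $\frac{dE}{dt} = 0$.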
In effect, we want to write down an expression, such that when it’s hit with a time derivative, it becomes zero!
## Dipping Toes in Group Theory
Another area I encountered that I find quite fascinating is group theory. Never having been formally introduced to group theory, I thought I'd document my learning of it.
A rather familiar group for me personally would be the SO(3) group:
$SO(3) := \left\{ M \in \mathbb{R}^{3\times 3} \,\middle|\, M^T M = \mathbb{1}_{3\times 3},\ \det M = 1 \right\}$
where SO(3) contains all proper rotations in 3D.
A Lie group, as I understand it, is a group that is also a manifold, with a smooth composition map:

$G \times G \rightarrow G$

$(a,b) \mapsto a\cdot b$
In that regard, SO(3) is a three dimensional Lie group. Each Lie group has an associated Lie algebra: a vector space equipped with an antisymmetric product. $(\mathbb{R}^3, \times)$ with the cross product would be an example. We can find the Lie algebra of SO(2) rather simply:
We know a finite SO(2) rotation to take the matrix form:
$M = \begin{pmatrix} \cos \alpha & -\sin \alpha \\ \sin \alpha & \cos \alpha \end{pmatrix}$
Suppose we want to apply an infinitesimal nudge of rotation in the counterclockwise direction. For a small angle $\varepsilon$, cosines can be approximated by 1, and sines by the angle itself, i.e.:
$M \approx \mathbb{1} + \begin{pmatrix} 0 & -\varepsilon \\ \varepsilon & 0 \end{pmatrix}, \quad 0 < \varepsilon \ll 1$
The Lie algebra of SO(2), written $\mathfrak{so}(2)$, is then:

$\mathfrak{so}(2) = \left\{ \begin{pmatrix} 0 & -\varepsilon \\ \varepsilon & 0 \end{pmatrix} \text{ for } \varepsilon \in \mathbb{R} \right\}$
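As a quick numerical check (my own JavaScript sketch; the function names are made up), composing many small nudges $\mathbb{1} + \varepsilon J$ recovers the finite rotation matrix, which is exactly the statement that the generator exponentiates to the group element:

```javascript
// Compose n small nudges (I + (alpha/n) J), where J = [[0,-1],[1,0]] is the
// so(2) generator; as n grows this converges to the rotation by angle alpha.
function mul(a, b) { // 2x2 matrix product
  return [
    [a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
    [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]],
  ];
}

function nudgeRotation(alpha, n) {
  const e = alpha / n;
  const step = [[1, -e], [e, 1]]; // I + e*J
  let m = [[1, 0], [0, 1]];
  for (let i = 0; i < n; i++) m = mul(step, m);
  return m;
}

const alpha = Math.PI / 3;
const approx = nudgeRotation(alpha, 100000);
const exact = [[Math.cos(alpha), -Math.sin(alpha)],
               [Math.sin(alpha),  Math.cos(alpha)]];
console.log(approx, exact); // the two matrices agree to a few parts per million
```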
By following a similar process, we can write down the Lie algebra of SO(3), written $\mathfrak{so}(3)$:

$\mathfrak{so}(3) = \left\{ \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & -\epsilon \\ 0 & \epsilon & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 & -\delta\\ 0 & 0 & 0\\ \delta & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -\rho & 0 \\ \rho & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \right\}$
where the three matrices correspond to small nudges around the $x$, $y$, and $z$ axes respectively. The nudges are parameterised by $\epsilon$, $\delta$, and $\rho$.
This is the set of all antisymmetric $3 \times 3$ matrices, and its elements are the generators of SO(3), a topic of crucial importance in field theories.
## A Hiatus… of Sorts
Unfortunately, with the start of a new semester, my brief escapade to a world beyond our intuition will need to be temporarily suspended. I am taking some interesting courses this year which I am very excited about. I hope to maintain a regular writing schedule of interesting physical phenomena on some of the materials I will soon be learning. | 2021-10-26 01:25:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8028236627578735, "perplexity": 450.9144907943554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587794.19/warc/CC-MAIN-20211026011138-20211026041138-00577.warc.gz"} |
https://mathematics.huji.ac.il/eventss/events-seminars?page=33 | 2018 Jan 31
Analysis Seminar: Balazs Rath "Self-organized criticality in dynamic random graph models"
12:00pm to 1:00pm
2018 Jun 21
Colloquium: Erdos lecture - Canceled
2:30pm to 3:30pm
Location:
Manchester Building (Hall 2), Hebrew University Jerusalem
Given a convex polytope P, what is the number of integer points in P? This problem is of great interest in combinatorics and discrete geometry, with many important applications ranging from integer programming to statistics. From a computational point of view it is hopeless in any dimensions, as the knapsack problem is a special case. Perhaps surprisingly, in bounded dimension the problem becomes tractable. How far can one go? Can one count points in projections of P, finite intersections of such projections, etc.?
2016 Jun 09
Joint Amitsur Algebra&NT Seminar: Shai Haran (Technion), "New foundations for geometry"
12:00pm to 1:15pm
Location:
Manchester Building (Ross 63), Jerusalem, Israel
*** Please note the LOCATION *** We shall give a simple generalization of commutative rings. The category GR of such generalized rings contains ordinary commutative rings (fully, faithfully), but also the "integers" and the "residue field" at a real or complex place of a number field ; the "field with one element" F1 (the initial object of GR) ; the "Arithmetical Surface" (the categorical sum of the integers Z with them self). We shall show this geometry sees the real and complex places of a number field K : the valuation sub GR of K correspond to the finite and
2018 Jun 28
Colloquium: Barry Simon (Caltech) - "More Tales of our Forefathers"
2:30pm to 3:30pm
Location:
Manchester Building (Hall 2), Hebrew University Jerusalem
This is not a mathematics talk but it is a talk for mathematicians. Too often, we think of historical mathematicians as only names assigned to theorems. With vignettes and anecdotes, I'll convince you they were also human beings and that, as the Chinese say, "May you live in interesting times" really is a curse. Among the mathematicians with vignettes are Riemann, Newton, Poincare, von Neumann, Kato, Loewner, Krein and Noether. This talk is in two parts. The second part will be given from 4:00 to 5:00 (not 5:30) in the Basic Notions seminar.
2018 Jun 14
Colloquium - Zuchovitzky lecture: Lior Yanovski (HUJI) "Homotopy cardinality and the l-adic analyticity of Morava-Euler characteristic"
2:30pm to 3:30pm
Location:
Manchester Building (Hall 2), Hebrew University Jerusalem
A finite set has an interesting numerical invariant - its cardinality. There are two natural generalizations of "cardinality" to a (homotopy) invariant for (suitably finite) spaces. One is the classical Euler characteristic. The other is the Baez-Dolan "homotopy cardinality". These two invariants, both natural from a certain perspective, seem to be very different from each other yet mysteriously connected. The question of the precise relation between them was popularized by John Baez as one of the "mysteries of counting".
2016 Apr 21
Amitsur Algebra: Konstantin Golubev (HU)
12:00pm to 1:15pm
Location:
Manchester Building (room 209), Jerusalem, Israel
Title: Spectral approach to the chromatic number of a simplicial complex Abstract: In this talk, we'll summarize results obtained in recent years in a pursuit for spectral bounds for the chromatic number of a simplicial complex. As the principal application, we'll show that Ramanujan complexes serve as family of explicitly constructed complexes with large girth and large chromatic number. We'll also present other results, such as a bound on the expansion and a bound on the mixing of a complex, and refer to open questions. Based on joint work with S.Evra, A.Lubotzky and O.Parzanchevski.
2016 Jan 07
Amitsur Algebra: Gili Schul (Hebrew U): Rapid expansion in finite simple groups
12:00pm to 1:15pm
Location:
Manchester Building (room 209), Jerusalem, Israel
Title: Rapid expansion in finite simple groups Abstract: We show that small normal subsets $A$ of finite simple groups expand very rapidly -- namely, $|A^2| \ge |A|^{2-\epsilon}$, where $\epsilon >0$ is arbitrarily small. Joint work with M. W. Liebeck and A. Shalev
2018 Jan 25
Special seminar: Sylvian Cappell (NYU) "What can be the fixed point sets of a given finite group acting on a non-simply-connected compact space?"
10:30am to 11:30am
Location:
Ross 70
We will report on a joint work with Shmuel Weinberger of U. of Chicago & Min Yan of Hong Kong U. of Sci. & Tech.
2018 Apr 22
Geometric, Topological and Computational Aspects of High-Dimensional Combinatorics
Sun, 22/04/2018 (All day) to Thu, 26/04/2018 (All day)
Location:
Israel Institute for Advanced Studies, The Hebrew University of Jerusalem
2016 Jun 16
Amitsur Algebra: Gili Golan, "The generation problem in Thompson group F"
12:00pm to 1:15pm
Location:
Manchester Building (room 209), Jerusalem, Israel
We show that the generation problem in Thompson group F is decidable, i.e., there is an algorithm which decides if a finite set of elements of F generates the whole F. The algorithm makes use of the Stallings 2-core of subgroups of F, which can be defined in an analogue way to the Stallings core of subgroups of a free group. An application of the algorithm shows that F is a cyclic extension of a group K which has a maximal elementary amenable subgroup B. The group B is a copy of a subgroup of F constructed by Brin.
2016 Apr 07
Amitsur Algebra: Ayala Byron (HUJI), "Definable fields in the free group"
12:00pm to 1:15pm
Location:
Manchester Building (room 209), Jerusalem, Israel
Abstract: In the early 2000s Sela proved that all non-abelian free groups share a common first-order theory. Together with R. Sklinos, we use tools developed in his work to show that no infinite field is definable in this theory. In this talk we will survey the line of proof for a formal solution theorem for a simple sort of definable sets, that have a structure of a hyperbolic tower, and use it to characterize definable sets that do not carry a definable structure of an abelian group.
2018 Apr 25
Analysis Seminar: Latif Eliaz "The Essential Spectrum of Schroedinger Operators on Graphs"
12:00pm to 1:00pm
Location:
Room 70, Ross Building
It is known that the essential spectrum of a Schrödinger operator $H$ on $\ell^2(\mathbb{N})$ is equal to the union of the spectra of right limits of $H$. The natural generalization of this relation to $\mathbb{Z}^n$ is known to hold as well. In this talk we study the possibility of generalizing this characterization of $\sigma_{ess}(H)$ to graphs. We show that the general statement fails, while presenting natural families of models where it still holds.
2018 Jan 30
Action Now Seminar: Weikun He (HUJI), "Discretized sum-product estimates in higher dimension"
2:30pm to 3:30pm
Location:
IIAS, HUJI, Feldman Building, Room 130
2018 Jan 30
Action Now Seminar: Oren Becker (HUJI), "Stability and Invariant Random Subgroups"
11:30am to 12:30pm
Location:
IIAS, HUJI, Feldman Building, Room 130
2018 Apr 12
Colloquium: Ron Peretz (Bar Ilan) - "Repeated Games with Bounded Memory - the Entropy Method"
2:30pm to 3:30pm
Location:
Manchester Building (Hall 2), Hebrew University Jerusalem | 2019-03-26 08:10:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.717859148979187, "perplexity": 2658.367898954946}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204885.27/warc/CC-MAIN-20190326075019-20190326101019-00278.warc.gz"} |
https://gamedev.stackexchange.com/questions/135854/delta-compressed-world-state-when-networking-in-javascript | # Delta compressed world state when networking in JavaScript
Let's say we send a snapshot from the server to the clients every second, containing world details such as object positions and velocities. We should send only delta data based on the previous snapshot that was sent.
For example, if the player moved from (x:50, y:0) to (x:100, y:0), then the snapshot will only contain (move_x:50, move_y:0), and the clients will add move_x to the previous position. We should send delta values, as this is less data to send.
How do I achieve this in JavaScript?
My objects already have data stored in Float32 (by TypedArray) and there is no Float16 data type in JavaScript.
For example:
delta = currentPos - prevPos
20 = 100 - 80
Okay, now I have 20 instead of 100, but this isn't less data to send; both will still be sent as Float32.
data.writeFloat32(delta); // for simplicity
There is no Float16, and I am afraid that even if JS did have Float16 TypedArrays, it still would not be enough, because converting, for example, 20.3454356 from Float32 to Float16 would lose precision, leaving something like 20.34. Or am I wrong?
How do I send delta data in JS and gain the benefits of this technique?
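For illustration, here is the kind of thing I mean (my own sketch; `encodeDelta`, `decodeDelta`, and `SCALE` are hypothetical names, and a real protocol would need an escape to a full Float32 for deltas that overflow Int16). Quantizing deltas into Int16 with a fixed-point scale is where the byte savings would come from:

```javascript
// Sketch: quantize position deltas into Int16 at a fixed precision.
// SCALE = 100 keeps two decimal places; deltas outside roughly
// [-327.68, 327.67] would need an escape to a full Float32 (not shown).
const SCALE = 100;

function encodeDelta(prev, curr) {
  return Math.round((curr - prev) * SCALE); // fits in an Int16Array slot
}

function decodeDelta(prev, encoded) {
  return prev + encoded / SCALE;
}

const prev = 80.0, curr = 100.3454356;
const wire = new Int16Array([encodeDelta(prev, curr)]); // 2 bytes instead of 4
const reconstructed = decodeDelta(prev, wire[0]);
console.log(wire[0], reconstructed); // precision is capped at 1/SCALE = 0.01
```

One caveat with this design: if the client keeps chaining deltas on top of reconstructed values, the quantization error accumulates. A common fix is for the server to compute each delta against the previously quantized state, or to resend absolute values periodically.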
• Please note that I have removed the component requesting additional explanation of the floating precision of float32. You can ask multiple questions as multiple questions, or in direct relation to the core question. I see no relation when you specifically say you dont have access to float16, regardless of the additional understanding. Jan 16 '17 at 3:00
• It also occurs to me that, as its own question, you should ask about floating precision at StackOverflow. Jan 16 '17 at 3:06
• What exactly are you having trouble with? This question seems kind of broad at the moment. Jan 16 '17 at 10:55 | 2021-09-17 19:45:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3857288360595703, "perplexity": 1790.152186115097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00366.warc.gz"} |
http://gmatclub.com/forum/i-know-why-now-but-thought-i-d-share-70076.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 07 Jul 2015, 13:18
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# I know why now, but thought I'd share: 0! = (1-1)! = 1!/1 =
I know why now, but thought I'd share: 0! = (1-1)! = 1!/1 = [#permalink] 10 Sep 2008, 18:04
I know why now, but thought I'd share:
0! = (1-1)! = 1!/1 = 1/1 = 1
Re: Why is zero factorial equal to one? [#permalink] 10 Sep 2008, 19:35
sarzan wrote:
I know why now, but thought I'd share:
0! = (1-1)! = 1!/1 = 1/1 = 1
i think its more logical than mathmatical.
suppose there are 5 people. how many group(s) of 5 can be formed from 5 people.
the solution = 5c5 = 5!/[5!(5-5)!] = 5!/(5!0!)
lets use the logic: how many groups of 5 people can be formed from 5 people? 1 group. if we use 0! as 0, then the formula gives infinite groups, which is not correct. so here 0! has to be 1, then only the formula works for all scenarios. therefore, 0! = 1.
we can raise similar question: why $$x^0 = 1$$?
we can apply same logic.
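A quick numeric sanity check of both arguments, sketched in Python (illustrative only, not part of the original thread):

```python
# Numeric check of the identity n!/n = (n-1)!, which motivates 0! = 1.
import math

# math.factorial already takes 0! to be 1:
assert math.factorial(0) == 1

# n!/n = (n-1)! for n >= 1; at n = 1 this reads 1!/1 = 0!, forcing 0! = 1.
for n in range(1, 6):
    assert math.factorial(n) // n == math.factorial(n - 1)

# The counting argument: C(5,5) = 5!/(5! * 0!) must be 1, so 0! must be 1.
assert math.comb(5, 5) == 1
```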
Re: Why is zero factorial equal to one? [#permalink] 10 Sep 2008, 19:39
GMAT TIGER wrote:
sarzan wrote:
I know why now, but thought I'd share:
0! = (1-1)! = 1!/1 = 1/1 = 1
i think its more logical than mathmatical.
suppose there are 5 people. how many group(s) of 5 can be formed from 5 people.
the solution = 5c5 = 5!/[5!(5-5)!] = 5!/(5!0!)
lets use the logic: how many groups of 5 people can be formed from 5 people? 1 group. if we use 0! as 0, then the formula gives infinite groups, which is not correct. so here 0! has to be 1, then only the formula works for all scenarios. therefore, 0! = 1.
we can raise similar question: why $$x^0 = 1$$?
we can apply same logic.
great grasp tiger. thanks for the explanation.
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®. | 2015-07-07 21:18:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49851149320602417, "perplexity": 3919.32737874934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375100481.40/warc/CC-MAIN-20150627031820-00207-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://stats.stackexchange.com/questions/190492/pretest-scores-differ-amongst-groups | # Pretest scores differ amongst groups
In my analysis I compare 3 treatment groups with a pre- and posttest (emotionality scores 0 - 100), so I want to use a repeated-measures analysis. Unfortunately, the pretest scores differ significantly amongst groups, which makes me think that simply running the test would not be valid. When I transfer the scores to relative ones, I get uninterpretable data due to the method of measurement (0 = not emotional, 100 = very emotional): a baseline score of 40 and postscore of 80 should not be interpreted as equal to a baseline score of 2 and a postscore of 4.
Any ideas on how to tackle this problem?
• Was this a randomized experiment? If no, e.g. stratifying the analysis by propensity score may be an option. – Björn Jan 13 '16 at 12:27
• It was a randomized experiment indeed. – Guest with a question Jan 13 '16 at 12:45
$Y = X \beta + \epsilon$
$Y = X \beta + Z \gamma + \epsilon$ | 2019-10-23 03:32:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5795483589172363, "perplexity": 1570.3989176908017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987828425.99/warc/CC-MAIN-20191023015841-20191023043341-00483.warc.gz"} |
http://www.abstractmath.org/Word%20Press/?tag=function | # Definition of function
Note: This is a revision of the article on specification and definition of functions from abstractmath.org. Many of the links in this article take you to other articles in abstractmath.org.
A function is a mathematical object.
To deal with functions as a math object, you need a precise definition of “function”. That is what this article gives you.
• The article starts by giving a specification of “function”.
• After that, we get into the technicalities of the
definitions of the general concept of function.
• Things get complicated because there are several inequivalent definitions of “function” in common use.
## Specification of “function”
A function $f$ is a mathematical object which determines and is completely determined by the following data:
• (DOM) $f$ has a domain, which is a set. The domain may be denoted by $\text{dom} f$.
• (COD) $f$ has a codomain, which is also a set and may be denoted by $\text{cod} f$.
• (VAL) For each element $a$ of the domain of $f$, $f$ has a value at $a$.
• (FP) The value of $f$ at $a$ is
completely determined by $a$ and $f$.
• (VIC) The value of $f$ at $a$ must be an element of the codomain of $f$.
• The value of $f$ at $a$ is most commonly written $f(a)$, but see Functions: Notation and Terminology.
• To evaluate $f$ at $a$ means to determine $f(a)$. The two examples of functions below show that different functions may have different strategies for evaluating them.
• In the expression “$f(a)$”, $a$ is called the input or (old-fashioned) argument of $f$.
• “FP” means functional property.
• “VIC” means “value in codomain”.
### Examples
I give two examples here. The examples of functions chapter contains many other examples.
#### A finite function
Let $F$ be the function defined on the set $\left\{\text{a},\text{b},\text{c},\text{d}\right\}$ as follows: $F(\text{a})=\text{a},\,\,\,F(\text{b})=\text{c},\,\,\,F(\text{c})=\text{c},\,\,\,F(\text{d})=\text{b}$In this definition, $\text{a},\text{b},\text{c},\text{d}$ are letters of the alphabet, not variables. This is the function called “Finite” in the chapter on examples of functions.
• The definition of $F$ says “$F$ is defined on the set $\left\{\text{a},\,\text{b},\,\text{c},\,\text{d} \right\}$”. The phrase “is defined on”
means that the domain is that set. That is standard terminology.
• The value of $F$ at each element of the domain is given explicitly. The value at
$\text{b}$, for example, is $\text{c}$, because the definition says that $F(\text{b}) = \text{c}$. No other reason needs to be given. Mathematical definitions can be arbitrary.
• The codomain of $F$ is not specified, but must include the set $\{\text{a},\text{b},\text{c}\}$. The codomain of a function is often not specified when it is not important, which is most of the time in freshman calculus (for example).
• The diagram below shows how $F$ obeys the rule that the value of an element $x$ in the domain is completely determined by $x$ and $F$.
• If two arrows had started from the same element of the domain, then $F$ would not be a function. (It would be a multivalued function).
• If there were an element of the domain that no arrow started from, then $F$ would not be a function. (It would be a partial function.)
• In this example, to evaluate $F$ at $b$ (to determine the value of $F$ at $b$) means to look at the definition of $F$, which says among other things that the value is $c$ (or alternatively, look at the diagram above and see what letter the arrow starting at $b$ points to). In this case, “evaluation” does not imply calculating a formula.
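Since mathematical definitions can be arbitrary, a finite function is nothing more than a lookup. Here is a minimal sketch in Python (an illustration, not part of the abstractmath text) modeling $F$ as a dictionary:

```python
# The finite function F above, modeled as a Python dict:
# the keys are the domain, and evaluation is lookup, not calculation.
F = {"a": "a", "b": "c", "c": "c", "d": "b"}

assert F["b"] == "c"   # the value of F at b is c, by definition
assert F["d"] == "b"

# The domain is exactly the key set.
assert set(F) == {"a", "b", "c", "d"}

# A dict has the functional property automatically:
# each key maps to exactly one value.
```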
#### A real-valued function
Let $G$ be the real-valued function defined by the formula $G(x)={{x}^{2}}+2x+5$.
• The definition of $G$ gives the value at each element of the domain by a formula. The value at $3$, for example, is obtained by calculating $G(3)=3^2+2\cdot3+5=20$
• The definition of $G$
does not specify the domain. The convention in the case of functions defined on the real numbers by a formula is to take the domain to be all real numbers at which the formula is defined. In this case, that is every real number, so the domain is $\mathbb{R}$.
• The definition of $G$ does not specify the codomain, either. However, the codomain must include all real numbers greater than or equal to $4$. (Why?)
• So if an author wrote, “Let $H(x)=\frac{1}{x}$”, the domain would be the set of all real numbers except $0$. But a careful author would write, “Let $H(x)=\frac{1}{x}$ ($x\neq0$).”
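For contrast with $F$, evaluating $G$ really is a calculation. A minimal Python sketch (illustrative only):

```python
# The real-valued function G(x) = x^2 + 2x + 5.
def G(x):
    return x**2 + 2*x + 5

# Evaluation means calculating the formula:
assert G(3) == 20

# G(x) = (x + 1)^2 + 4, so no value is below 4; this is why the codomain
# must contain every real number >= 4.
assert G(-1) == 4
assert all(G(x / 10) >= 4 for x in range(-100, 100))
```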
### What the specification means
• The specification guarantees that a function satisfies all five of the properties listed.
• The specification does not define a mathematical structure in the way mathematical structures have been defined in the past: In particular, it does not require a function to be one or more sets with structure.
• Even so, it is useful to have the specification, because:
Many mathematical definitions introduce extraneous technical elements which clutter up your thinking about the object they define.
## History
The discussion below is an oversimplification of the history of mathematics, which many people have written thick books about. A book relevant to these ideas is Plato’s Ghost, by Jeremy Gray.
Until late in the nineteenth century, functions were usually thought of as defined by formulas (including infinite series). Problems arose in the theory of harmonic analysis which made mathematicians require a more general notion of function. They came up with the concept of function as a set of ordered pairs with the functional property (discussed below), and that understanding revolutionized our understanding of math.
In particular, this definition, along with the use of set theory, enabled abstract math (ahem) to become a common tool for understanding math and proving theorems. It is conceivable that some readers may wish it hadn’t. Well, tough.
The modern definition of function given here (which builds on the ordered pairs with functional property definition) came into use beginning in the 1950’s. The modern definition became necessary in algebraic topology and is widely used in many fields today.
The concept of function as a formula never disappeared entirely, but was studied mostly by logicians who generalized it to the study of function-as-algorithm. Of course, the study of algorithms is one of the central topics of modern computing science, so the notion of function-as-formula (updated to function-as-algorithm) has achieved a new importance in recent years.
To state the definitions, we need a preliminary idea.
## The functional property
A set $P$ of ordered pairs has the functional property if two pairs in $P$ with the same first coordinate have to have the same second coordinate (which means they are the same pair). In other words, if $(x,a)$ and $(x,b)$ are both in $P$, then $a=b$.
#### How to think about the functional property
The point of the functional property is that for any pair in the set of ordered pairs, the first coordinate determines what the second one is (which is just what requirement FP says in the specification). That’s why you can write “$G(x)$” for any $x$ in the domain of $G$ and not be ambiguous.
#### Examples
• The set $\{(1,2), (2,4), (3,2), (5,8)\}$ has the functional property, since no two different pairs have the same first coordinate. Note that there are two different pairs with the same second coordinate. This is irrelevant to the functional property.
• The set $\{(1,2), (2,4), (3,2), (2,8)\}$ does not have the functional property. There are two different pairs with first coordinate 2.
• The empty set $\emptyset$ has the functional property vacuously.
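The functional property is easy to state as executable code. A short sketch (the function name is mine, not standard):

```python
# A set of ordered pairs has the functional property when no two pairs
# share a first coordinate.
def has_functional_property(pairs):
    firsts = [x for (x, _) in pairs]
    return len(firsts) == len(set(firsts))

assert has_functional_property({(1, 2), (2, 4), (3, 2), (5, 8)})
# Two different pairs with first coordinate 2:
assert not has_functional_property({(1, 2), (2, 4), (3, 2), (2, 8)})
# The empty set has the property vacuously:
assert has_functional_property(set())
```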
#### Example: graph of a function defined by a formula
In calculus books, a picture like this one (of part of $y=x^2+2x+5$) is called a graph. Here I use the word “graph” to denote the set of ordered pairs
$\left\{ (x,{{x}^{2}}+2x+5)\,\mathsf{|}\,x\in \mathbb{R } \right\}$
which is a mathematical object rather than some ink on a page or pixels on a screen.
The graph of any function studied in beginning calculus has the functional property. For example, the set of ordered pairs above has the functional property because if $x$ is any real number, the formula ${{x}^{2}}+2x+5$ defines a specific real number.
• if $x = 0$, then ${{x}^{2}}+2x+5=5$, so the pair $(0, 5)$ is an element of the graph of $G$. Each time you plug in $0$ in the formula you get 5.
• if $x = 1$, then ${{x}^{2}}+2x+5=8$.
• if $x = -2$, then ${{x}^{2}}+2x+5=5$.
You can measure where the point $(-2,5)$ is on the (picture of) the graph and see that it is on the blue curve as it should be. No other pair whose first coordinate is $-2$ is in the graph of $G$, only $(-2, 5)$. That is because when you plug $-2$ into the formula ${{x}^{2}}+2x+5$, you get $5$ and nothing else. Of course, $(0, 5)$ is in the graph, but that does not contradict the functional property. $(0, 5)$ and $(-2, 5)$ have the same second coordinate, but that is OK.
## Mathematical definition of function
A function $f$ is a
mathematical structure consisting of the following objects:
• A set called the domain of $f$, denoted by $\text{dom} f$.
• A set called the codomain of $f$, denoted by $\text{cod} f$.
• A set of ordered pairs called the graph of $f$, with the following properties:
• $\text{dom} f$ is the set of all first coordinates of pairs in the graph of $f$.
• Every second coordinate of a pair in the graph of $f$ is in $\text{cod} f$ (but $\text{cod} f$ may contain other elements).
• The graph of $f$ has the functional property.
Using arrow notation, this implies that $f:\text{dom}f\to\text{cod} f$.
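The three conditions on the graph can be checked mechanically. A Python sketch of the definition (the names are mine):

```python
# A function, per the modern definition, is (domain, codomain, graph)
# where the graph satisfies the three conditions above.
def is_function(dom, cod, graph):
    firsts = [x for (x, _) in graph]
    return (set(firsts) == set(dom)                # first coords are exactly dom
            and all(y in cod for (_, y) in graph)  # second coords land in cod
            and len(firsts) == len(set(firsts)))   # functional property

graph = {(1, 2), (2, 4), (3, 2), (5, 8)}
assert is_function({1, 2, 3, 5}, {2, 4, 8}, graph)
# A larger codomain gives a different, but still legitimate, function:
assert is_function({1, 2, 3, 5}, {2, 4, 8, 9, 11}, graph)
# A domain that misses a first coordinate of the graph fails:
assert not is_function({1, 2, 5}, {2, 4, 8}, graph)
```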
#### Remark
The main difference between the specification of function given previously and this definition is that the definition replaces the statement “$f$ has a value at $a$” by introducing a set of ordered pairs (the graph) with the functional property.
• This set of ordered pairs is extra structure introduced by the definition mainly in order to make the definition a classical sets-with-structure.
• This makes the graph, which should be a concept derived from the concept of function, appear to be a necessary part of the function.
• That suggests incorrectly that the graph is more of a primary intuition that other intuitions such as function as map, function as transformer, and other points of view discussed in the article Images and metaphors for functions.
• The concept of graph of a function is indeed an important intuition, and is discussed with examples in the articles Graphs of continuous functions and Graphs of finite functions.
• Nevertheless, the fact that the concept of graph appears in the definition of function does not make it the most important intuition.
### Examples
• Let $F$ have graph $\{(1,2), (2,4), (3,2), (5,8)\}$ and define $A = \{1, 2, 3, 5\}$ and $B = \{2, 4, 8\}$. Then $F:A\to B$ is a function. In speaking, we would usually say, “$F$ is a function from $A$ to $B$.”
• Let $G$ have graph $\{(1,2), (2,4), (3,2), (5,8)\}$ (same as above), and define $A = \{1, 2, 3, 5\}$ and $C = \{2, 4, 8, 9, 11, \pi, 3/2\}$. Then $G:A\to C$ is a (admittedly ridiculous) function. Note that all the second coordinates of the graph are in the codomain $C$, along with a bunch of miscellaneous suspicious characters that are not second coordinates of pairs in the graph.
• Let $H$ have graph $\{(1,2), (2,4), (3,2), (5,8)\}$. Then $H:A\to \mathbb{R}$ is a function, since $2$, $4$ and $8$ are all real numbers.
• Let $D = \{1, 2, 5\}$ and $E = \{1, 2, 3, 4, 5\}$. Then there is no function $D\to A$ and no function $E\to A$ with graph $\{(1,2), (2,4), (3,2), (5,8)\}$. Neither $D$ nor $E$ has exactly the same elements as the first coordinates of the graph.
### Identity and inclusion
Suppose we have two sets A and B with $A\subseteq B$.
• The identity function on A is the function ${{\operatorname{id}}_{A}}:A\to A$ defined by ${{\operatorname{id}}_{A}}(x)=x$ for all $x\in A$. (Many authors call it ${{1}_{A}}$).
• When $A\subseteq B$, the inclusion function from $A$ to $B$ is the function $i:A\to B$ defined by $i(x)=x$ for all $x\in A$. Note that there is a different function for each pair of sets $A$ and $B$ for which $A\subseteq B$. Some authors call it ${{i}_{A,\,B}}$ or $\text{in}{{\text{c}}_{A,\,B}}$.
The identity function and an inclusion function for the same set $A$ have exactly the same graph, namely $\left\{ (a,a)|a\in A \right\}$. More about this below.
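Under the modern definition these are different functions even though their graphs coincide, because the codomain is part of the data. A minimal sketch (the triple representation is mine):

```python
# The identity on A and the inclusion of A into B, represented as
# (domain, codomain, graph) triples.
A = {1, 2}
B = {1, 2, 3}
graph = {(a, a) for a in A}

id_A = (A, A, graph)
inc_A_B = (A, B, graph)

assert id_A[2] == inc_A_B[2]   # exactly the same graph...
assert id_A != inc_A_B         # ...but different functions: codomains differ
```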
## Other definitions of function
### Original abstract definition of function
#### Definition
• A function $f$ is a set of ordered pairs with the functional property.
• If $f$ is a function according to this definition, the domain of $f$ is the set of first coordinates of all the pairs in $f$.
• If $x\in \text{dom} f$, then we define the value of $f$ at $x$, denoted by $f(x)$, to be the second coordinate of the only ordered pair in $f$ whose first coordinate is $x$.
#### Remarks
• This definition is still widely used in mathematical writing.
• Many authors do not tell you which definition they are using.
• For many purposes (including freshman calculus for the most part) it does not matter which definition is used.
• In some branches of math, the modern definition adds great clarity to many complicated situations; using the older definition can even make it difficult to describe some important constructions. There is more about this in New Approaches below.
#### Possible confusion
Some confusion can result because of the presence of these two different definitions.
• For example, since the identity function ${{\operatorname{id}}_{A}}:A\to A$ and the inclusion function ${{i}_{A,\,B}}:A\to B$ have the same graph, users of the older definition are required in theory to say they are the same function.
• Also it requires you to say that the graph of a function is the same thing as the function.
• In my observation, this does not make a problem in practice, unless there is a very picky person in the room.
• It also appears to me that the modern definition is (quite rightly) winning and the original abstract definition is disappearing.
### Multivalued function
The phrase multivalued function refers to an object that is like a function $f:S\to T$ except that for $s\in S$, $f(s)$ may denote more than one value.
#### Examples
• Multivalued functions arose in considering complex functions. In common practice, the symbol $\sqrt{4}$ denoted $2$, although $-2$ is also a square root of $4$. But in complex function theory, the square root function takes on both the values $2$ and $-2$. This is discussed in detail in Wikipedia.
• The antiderivative is an example of a multivalued operator. For any constant $C$, $\frac{x^3}{3}+C$ is an antiderivative of $x^2$, so that $\frac{x^3}{3}$, $\frac{x^3}{3}+42$, $\frac{x^3}{3}-1$ and $\frac{x^3}{3}+2\pi$ are among the infinitely many antiderivatives of $x^2$.
A multivalued function $f:S\to T$ can be modeled as a function with domain $S$ and codomain the set of all subsets of $T$. The two meanings are equivalent in a strong sense (naturally equivalent). Even so, it seems to me that they represent two different ways of thinking about
multivalued functions. (“The value may be any of these things…” as opposed to “The value is this whole set of things.”)
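The set-of-subsets model is easy to make concrete. A sketch for the square root (illustrative; genuine complex function theory works over all of $\mathbb{C}$):

```python
# The multivalued square root, modeled as a function whose value at x is
# the *set* of square roots of x.
def square_roots(x):
    if x == 0:
        return {0.0}
    r = abs(x) ** 0.5
    return {r, -r} if x > 0 else {complex(0, r), complex(0, -r)}

assert square_roots(4) == {2.0, -2.0}   # both roots, not just the principal one
assert square_roots(0) == {0.0}
assert square_roots(-4) == {2j, -2j}
```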
Some older mathematical papers in complex function theory do not tell you that their functions are multivalued. There was a time when complex function theory was such a Big Deal in research mathematics that the phrase “function theory” meant complex function theory and every mathematician with a Ph. D. knew that complex functions were multivalued.
### Partial function
A partial function $f:S\to T$ is just like a function except that its input may be defined on only a subset of $S$. For example, the function $f(x):=\frac{1}{x}$ is a partial function from the real numbers to the real numbers.
This models the behavior of computer programs (algorithms): if you consider a program with one input and one output as a function, it may not be defined on some inputs because for them it runs forever (or gives an error message).
In some texts in computing science and mathematical logic, a function is by
convention a partial function, and this fact may not be mentioned explicitly, especially in research papers.
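A sketch of the same partial function in Python (returning None off the domain of definition is one convention; raising an exception is another):

```python
# The partial function f(x) = 1/x: defined everywhere except 0.
def reciprocal(x):
    if x == 0:
        return None   # undefined at 0; a program might instead loop or error
    return 1 / x

assert reciprocal(4) == 0.25
assert reciprocal(0) is None
```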
The phrases “multivalued function” and “partial function” upset some picky types who say things like, “But a multivalued function is not a function!”. A hot dog is not a dog, either. I once had a Russian teacher who was Polish and a German teacher who was Hungarian. So what? See the Handbook.
## New approaches to functions
All the definitions of function given here produce mathematical structures, using the traditional way to define mathematical objects in terms of sets. Such definitions have disadvantages.
Mathematicians have many ways to think about functions. That a function is a set of ordered pairs with a certain property (functional) and possibly some ancillary ideas (domain, codomain, and others) is not the way we usually think about them$\ldots$Except when we need to reduce the thing we are studying to its absolutely most abstract form to make sure our proofs are correct.
That most abstract form is what I have called the rigorous view or the dry bones and it is when that reasoning is needed that the sets-with-structure approach has succeeded.
Our practice of abstraction has led us to new approaches to talking about functions. The most important one currently is category theory. Roughly, a category is a bunch of objects together with some arrows going between them that can be composed head to tail. Functions between sets are examples of this: the sets are the objects and the functions the arrows. But arrows in a category do not have to be functions; in that way category theory is an abstraction of functions.
This abstracts the idea of function in a way that brings out common ideas in various branches of math. Research papers in many branches of mathematics now routinely use the language of category theory. Categories now appear in some undergraduate math courses, meaning that Someone needs to write a chapter on category theory for abstractmath.org.
Besides category theory, computing scientists have come up with other abstract ways of dealing with functions, for example type theory. It has not come as far along as category theory, but has shown recent signs of major progress.
Both category theory and type theory define math objects in terms of their effect on and relationship with other math objects. This makes it possible to do abstract math entirely without using sets-with-structure as a means of defining concepts.
## References
• Functions in Wikipedia. This is an extensive and mostly well-done description of the use of functions in mathematics.
Send to Kindle
# Introducing abstract topics
I have been busy for the past several years revising abstractmath.org (abmath). Now I believe, perhaps foolishly, that most of the articles in abmath have reached beta, so now it is time for something new.
For some time I have been considering writing introductions to topics in abstract math, some typically studied by undergraduates and some taken by scientists and engineers. The topics I have in mind to do first include group theory and category theory.
The point of these introductions is to get the student started at the very beginning of the topic, when some students give up in total confusion. They meet and fall off of what I have called the abstraction cliff, which is discussed here and also in my blog posts Very early difficulties and Very early difficulties II.
I may have stolen the phrase “abstraction cliff” from someone else.
## Group theory
Group theory sets several traps for beginning students.
### Multiplication table
• A student may balk when a small finite group is defined using a set of letters in a multiplication table.
“But you didn’t say what the letters are or what the multiplication is?”
• Such a definition is an abstract definition, in contrast to the definition of “prime”, for example, which is stated in terms of already known entities, namely the integers.
• The multiplication table of a group tells you exactly what the binary operation is and any set with an operation that makes such a table correct is an example of the group being defined.
• A student who has no understanding of abstraction is going to be totally lost in this situation. It is quite possible that the professor has never even mentioned the concept of abstract definition. The professor is probably like most successful mathematicians: when they were students, they understood abstraction without having to have it explained, and possibly without even noticing they did so.
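To see that a multiplication table really does pin a group down, here is the Klein four-group given purely by its table (a sketch; the letters have no meaning beyond what the table assigns them):

```python
# The Klein four-group, defined abstractly by its multiplication table.
table = {
    ("e", "e"): "e", ("e", "a"): "a", ("e", "b"): "b", ("e", "c"): "c",
    ("a", "e"): "a", ("a", "a"): "e", ("a", "b"): "c", ("a", "c"): "b",
    ("b", "e"): "b", ("b", "a"): "c", ("b", "b"): "e", ("b", "c"): "a",
    ("c", "e"): "c", ("c", "a"): "b", ("c", "b"): "a", ("c", "c"): "e",
}
G = {"e", "a", "b", "c"}

def mul(x, y):
    return table[(x, y)]

# The table alone determines everything; the group axioms can be verified
# from it directly.
assert all(mul("e", x) == x == mul(x, "e") for x in G)      # identity
assert all(any(mul(x, y) == "e" for y in G) for x in G)     # inverses
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in G for y in G for z in G)                # associativity
```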
### Cosets
• Cosets are a real killer. Some students at this stage are nowhere near thinking of a set as an object or a thing. The concept of applying a binary operation on a pair of sets (or any other mathematical objects with internal structure) is completely foreign to them. Did anyone ever talk to them about mathematical objects?
• The consequence of this early difficulty is that such a student will find it hard to understand what a quotient group is, and that is one of the major concepts you get early in a group theory course.
• The conceptual problems with multiplication of cosets are similar to those with pointwise addition of functions. Given two functions $f,g:\mathbb{R}\to\mathbb{R}$, you define $f+g$ to be the function $(f+g)(x):=f(x)+g(x)$. Along with pointwise multiplication, this makes the space of functions $\mathbb{R}\to\mathbb{R}$ a ring with nice properties.
• But you have to understand that each element of the ring is a function thought of as a single math object. The values of the function are properties of the function, but they are not elements of the ring. (You can include the real numbers in the ring as constant functions, but don’t confuse me with facts.)
• Similarly the elements of the quotient group are math objects called cosets. They are not elements of the original group. (To add to the confusion, they are also blocks of a congruence.)
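The pointwise addition just described can be sketched in a few lines (illustrative):

```python
# Pointwise addition: (f + g)(x) = f(x) + g(x).
# The sum is itself a single function-object, built from f and g;
# the values of f and g are properties of those objects, not ring elements.
def pointwise_add(f, g):
    return lambda x: f(x) + g(x)

f = lambda x: x**2
g = lambda x: 3*x + 1

h = pointwise_add(f, g)
assert h(2) == f(2) + g(2) == 11   # 4 + 7
assert h(0) == 1
```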
### Isomorphic groups
• Many books, and many professors (including me) regard two isomorphic groups as the same. I remember getting anguished questions: “But the elements of $\mathbb{Z}_2$ are equivalence classes and the elements of the group of permutations of $\{1,2\}$ are functions.”
• I admit that regarding two isomorphic groups as the same needs to be treated carefully when, unlike $\mathbb{Z}_2$, the group has a nontrivial automorphism group. ($\mathbb{Z}_3$ is “the same as itself” in two different ways.) But you don’t have to bring that up the first time you attack that subject, any more than you have to bring up the fact that the category of sets does not have a set of objects on the first day you define categories.
## Category theory
Category theory causes similar troubles. Beginning college math majors don’t usually meet it early. But category theory has begun to be used in other fields, so plenty of computer science students, people dealing with databases, and so on are suddenly trying to understand categories and failing to do so at the very start.
The G&G post A new kind of introduction to category theory constitutes an alpha draft of the first part of an article introducing category theory following the ideas of this post.
### Objects and arrows are abstract
• Every once in a while someone asks a question on Math StackExchange that shows they have no idea that an object of a category need not have elements and that morphisms need not be functions that take elements to elements.
• One questioner understood that the claim that a morphism need not be a function meant that it might be a multivalued function.
### Duality
• That misunderstanding comes up with duality. The definition of dual category requires turning the arrows around. Even if the original morphism takes elements to elements, the opposite morphism does not have to take elements to elements. In the case of the category of sets, an arrow in $\text{Set}^{op}$ cannot take elements to elements — for example, the opposite of the function $\emptyset\to\{1,2\}$.
• The fact that there is a concrete category equivalent to $\text{Set}^{op}$ is a red herring. It involves different sets: the function corresponding to the function just mentioned goes from a four-element set to a singleton. But in the category $\text{Set}^{op}$ as defined it is simply an arrow, not a function.
### Not understanding how to use definitions
• Some of the questioners on Math Stack Exchange ask how to prove a statement that is quite simple to prove directly from the definitions of the terms involved, but what they are obviously trying to do is to gain an intuition in order to understand why the statement is true. This is backward — the first thing you should do is use the definition (at least in the first few days of a math class; after that you have to use theorems as well!).
• I have discussed this in the blog post Insights into mathematical definitions (which gives references to other longer discussions by math ed people). See also the abmath section Rewrite according to the definitions.
## How an introduction to a math topic needs to be written
The following list shows some of the tactics I am thinking of using in the math topic introductions. It is quite likely that I will conclude that some tactics won’t work, and I am sure that tactics I haven’t mentioned here will be used.
• The introductions should not go very far into the subject. Instead, they should provide an exhaustive and explicit discussion of how to get into the very earliest part of the topic, perhaps the definition, some examples, and a few simple theorems. I doubt that a group theory student who hasn’t mastered abstraction and what proofs are about will ever be ready to learn the Sylow theorems.
• You can’t do examples and definitions simultaneously, but you can come close by going through an example step by step, checking each part of the definition.
• There is a real split between students who want the definitions first (most of whom don’t have the abstraction problems I am trying to overcome) and those who really really think they need examples first (the majority) because they don’t understand abstraction.
• When you introduce an axiom, give an example of how you would prove that some binary operation satisfies the axiom. For example, if the axiom is that every element of a group must have an inverse, right then and there prove that addition on the integers satisfies the axiom and disprove that multiplication on the integers satisfies it.
• When the definition uses some undefined math objects, point out immediately with examples that you can’t have any intuition about them except what the axioms give you. (In contrast to definition of division of integers, where you and the student already have intuitions about the objects.)
• Make explicit the possible problems students have with abstraction. Students who need the kind of help given in abstractmath.org and Gyre&Gimble will indeed find it difficult to become mathematical researchers — but not impossible!
• But that is not the point. All college math professors will get people who will go into theoretical computing science, and therefore need to understand category theory, or into particle physics, and need to understand groups, and so on.
• By being clear at the earliest stages of how mathematicians actually do math, they will produce more people in other fields who actually have some grasp of what is going on with the topics they have studied in math classes, and hopefully will be willing to go back and learn some more math if some type of math rears its head in the theories of their field.
• Besides, why do you want to alienate huge numbers of people from math, as our way of teaching in the past has done?
• “Our” means grammar school teachers, high school teachers and college professors.
### Acknowledgment
Thanks to Kevin Clift for corrections.
# Representations of functions III
## Introduction to this post
I am writing a new abstractmath chapter called Representations of Functions. It will replace some of the material in the chapter Functions: Images, Metaphors and Representations. This post is a draft of the sections on representations of finite functions.
The diagrams in this post were created using the Mathematica Notebook Constructions for cographs and endographs of finite functions.nb.
You can access this notebook if you have Mathematica, which can be bought, but is available for free for faculty and students at many universities, or with Mathematica CDF Player, which is free for anyone and runs on Windows, Mac and Linux.
Like everything in abstractmath.org, the notebooks are covered by a Creative Commons ShareAlike 3.0 License.
## Graphs of finite functions
When a function is continuous, its graph shows up as a curve in the plane or as a curve or surface in 3D space. When a function is defined on a set without any notion of continuity (for example a finite set), the graph is just a set of ordered pairs and does not tell you much.
A finite function $f:S\to T$ may be represented in these ways:
• Its graph $\{(s,f(s))|s\in S\}$. This is graph as a mathematical object, not as a drawing or as a directed graph — see graph (two meanings).
• A table, rule or two-line notation. (All three of these are based on the same idea, but differ in presentation and are used in different mathematical specialties.)
• By using labels with arrows between them, arranged in one of two ways:
• A cograph, in which the domain and the codomain are listed separately.
• An endograph, in which the elements of the domain and the codomain are all listed together without repetition.
All these techniques can also be used to show finite portions of infinite discrete functions, but that possibility will not be discussed here.
### Introductory Example
Let $\text{f}:\{a,b,c,d,e\}\to\{a,b,c,d\}$ be the function defined by requiring that $f(a)=c$, $f(b)=a$, $f(c)=c$, $f(d)=b$, and $f(e)=d$.
#### Graph
The graph of $f$ is the set
$\{(a,c),(b,a),(c,c),(d,b),(e,d)\}$
As with any set, the order in which the pairs are listed is irrelevant. Also, the letters $a$, $b$, $c$, $d$ and $e$ are merely letters. They are not variables.
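To make the representation concrete, here is a tiny Python sketch of my own (not from the original Mathematica notebooks) encoding $\text{f}$ as a dict and recovering its graph as a set of pairs:

```python
# The finite function f represented as a Python dict.
f = {"a": "c", "b": "a", "c": "c", "d": "b", "e": "d"}

# The graph is the set {(s, f(s)) | s in the domain}.
graph = {(s, t) for s, t in f.items()}

# Sets are unordered, so the listing order of the pairs is irrelevant.
assert graph == {("a", "c"), ("b", "a"), ("c", "c"), ("d", "b"), ("e", "d")}
```
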
#### Table
$\text{f}$ is given by this table:

$\begin{array}{c|c} n&f(n)\\ \hline a&c\\ b&a\\ c&c\\ d&b\\ e&d \end{array}$
This sort of table is the format used in databases. For example, a table in a database might show the department each employee of a company works in:
#### Rule
The rule determined by the finite function $f$ has the form
$(a\mapsto c,b\mapsto a,c\mapsto c,d\mapsto b,e\mapsto d)$
Rules are built into Mathematica and are useful in many situations. In particular, the endographs in this article are created using rules. In Mathematica, however, rules are written like this:
$(a\to c,b\to a,c\to c,d\to b,e\to d)$
This is inconsistent with the usual math usage (see barred arrow notation) but on the other hand is easier to enter in Mathematica.
In fact, Mathematica uses very short arrows in its notation for rules, shorter than the ones used for the arrow notation for functions. Those extra short arrows don’t seem to exist in TeX.
#### Two-line notation
Two-line notation is a kind of horizontal table.
$\begin{pmatrix} a&b&c&d&e\\c&a&c&b&d\end{pmatrix}$
The three notations table, rule and two-line do the same thing: If $n$ is in the domain, $f(n)$ is shown adjacent to $n$ — to its right for the table and the rule and below it for the two-line.
Note that in contrast to the table, rule and two-line notation, in a cograph each element of the codomain is shown only once, even if the function is not injective.
#### Cograph
To make the cograph of a finite function, you list the domain and codomain in separate parallel rows or columns (even if the domain and codomain are the same set), and draw an arrow from each $n$ in the domain to $f(n)$ in the codomain.
This is the cograph for $\text{f}$, represented in columns
and in rows (note that $c$ occurs only once in the codomain)
Pretty ugly, but the cograph for finite functions does have its uses, as for example in the Wikipedia article composition of functions.
In both the two-line notation and in cographs displayed vertically, the function goes down from the domain to the codomain. I guess functions obey the law of gravity.
##### Rearrange the cograph
There is no expectation that in the cograph $f(n)$ will be adjacent to $n$. But in most cases you can rearrange both the domain and the codomain so that some of the structure of the function is made clearer; for example:
The domain and codomain of a finite function can be rearranged in any way you want because finite functions are not continuous functions. This means that the locations of points $x_1$ and $x_2$ have nothing to do with the locations of $f(x_1)$ and $f(x_2)$: The domain and codomain are discrete.
#### Endograph
The endograph of a function $f:S\to T$ contains one node labeled $s$ for each $s\in S\cup T$, and an arrow from $s$ to $s'$ if $f(s)=s'$. Below is the endograph for $\text{f}$.
The endograph shows you immediately that $\text{f}$ is not a permutation. You can also see that whichever letter you start with, you will end up at $c$ and continue looping at $c$ forever. You could have figured this out from the cograph (especially the rearranged cograph above), but it is not immediately obvious in the cograph the way it is in the endograph.
There are more examples of endographs below and in the blog post A tiny step towards killing string-based math. Calculus-type functions can also be shown using endographs and cographs: See Mapping Diagrams from A(lgebra) B(asics) to C(alculus) and D(ifferential) E(quation)s, by Martin Flashman, and my blog posts Endographs and cographs of real functions and Demos for graph and cograph of calculus functions.
### Example: A permutation
Suppose $p$ is the permutation of the set $\{0,1,2,3,4,5,6,7,8,9\}$ given in two-line form by
$\begin{pmatrix} 0&1&2&3&4&5&6&7&8&9\\0&2&1&4&5&3&7&8&9&6\end{pmatrix}$
#### Endograph
Again, the endograph shows the structure of the function much more clearly than the cograph does.
The endograph consists of four separate parts (called components) not connected with each other. Each part shows that repeated application of the function runs around a kind of loop; such a thing is called a cycle. Every permutation of a finite set consists of disjoint cycles as in this example.
#### Disjoint cycle notation
Any permutation of a finite set can be represented in disjoint cycle notation: The function $p$ is represented by:
$(0)(1,2)(3,4,5)(6,7,8,9)$
Given the disjoint cycle notation, the function can be determined as follows: For a given entry $n$, $p(n)$ is the next entry in the notation, if there is a next entry (instead of a parenthesis). If there is not a next entry, $p(n)$ is the first entry in the cycle that $n$ is in. For example, $p(7)=8$ because $8$ is the next entry after $7$, but $p(5)=3$ because the next symbol after $5$ is a parenthesis and $3$ is the first entry in the same cycle.
The disjoint cycle notation is not unique for a given permutation. All the following notations determine the same function $p$:
$(0)(1,2)(4,5,3)(6,7,8,9)$
$(0)(1,2)(8,9,6,7)(3,4,5)$
$(1,2)(3,4,5)(0)(6,7,8,9)$
$(2,1)(5,3,4)(9,6,7,8)$
$(5,3,4)(1,2)(6,7,8,9)$
Cycles such as $(0)$ that contain only one element are usually omitted in this notation.
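The evaluation rule for disjoint cycle notation can be sketched in a few lines of Python (my illustration, not part of the post; the notebook versions are in Mathematica):

```python
def cycles_to_function(cycles):
    """Convert disjoint cycle notation, given as a list of tuples,
    to a dict mapping each element to its image."""
    p = {}
    for cycle in cycles:
        for i, n in enumerate(cycle):
            # p(n) is the next entry in the cycle, wrapping around
            # to the first entry when n is the last entry.
            p[n] = cycle[(i + 1) % len(cycle)]
    return p

p = cycles_to_function([(0,), (1, 2), (3, 4, 5), (6, 7, 8, 9)])
assert p[7] == 8   # 8 is the next entry after 7
assert p[5] == 3   # wrap around: 3 is the first entry of 5's cycle

# The notation is not unique: rotated and reordered cycles
# determine the same function.
assert p == cycles_to_function([(2, 1), (5, 3, 4), (9, 6, 7, 8), (0,)])
```
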
### Example: A tree
Below is the endograph of a function $t:\{0,1,2,3,4,5,6,7,8,9\}\to\{0,1,2,3,4,5,6,7,8,9\}$
This endograph is a tree. The endograph of a function $f$ is a tree if the domain has a particular element $r$, called the root, with the properties that
• $f(r)=r$, and
• starting at any element of the domain, repeatedly applying $f$ eventually produces $r$.
In the case of $t$, the root is $4$. Note that $t(4)=4$, $t(t(7))=4$, $t(t(t(9)))=4$, $t(1)=4$, and so on.
The endograph
shown here is also a tree.
See the Wikipedia article on trees for the usual definition of tree as a special kind of graph. For reading this article, the definition given in the previous paragraph is sufficient.
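The tree condition above is easy to check mechanically. Here is a Python sketch of mine; the dict `t` is a made-up function consistent with the values of $t$ stated above, since the actual endograph is given only as a picture:

```python
def is_tree(f):
    """Check whether the endograph of f (a dict on a finite set) is a
    tree: there is a unique root r with f(r) == r, and iterating f from
    any element eventually reaches r."""
    roots = [r for r in f if f[r] == r]
    if len(roots) != 1:
        return False
    root = roots[0]
    for s in f:
        seen = set()
        while s != root:
            if s in seen:        # entered a cycle that misses the root
                return False
            seen.add(s)
            s = f[s]
    return True

# A hypothetical tree with root 4, matching the stated values of t.
t = {0: 1, 1: 4, 2: 4, 3: 2, 4: 4, 5: 4, 6: 5, 7: 4, 8: 7, 9: 7}
assert is_tree(t)
assert t[4] == 4 and t[t[7]] == 4 and t[t[t[9]]] == 4 and t[1] == 4
```
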
### The general form of a finite function
This is the endograph of a function $t$ on a $17$-element set:
It has two components. The upper one contains one $2$-cycle, and no matter where you start in that component, when you apply $t$ over and over you wind up flipping back and forth in the $2$-cycle forever. The lower component has a $3$-cycle with a similar property.
This illustrates a general fact about finite functions:
• The endograph of any finite function contains one or more components $C_1$ through $C_k$.
• Each component $C_i$ contains exactly one $n_i$-cycle, for some integer $n_i\geq 1$, to which are attached zero or more trees.
• Each tree in $C_i$ is attached in such a way that its root is on the unique cycle contained in $C_i$.
In the example above, the top component has three trees attached to it, two to $3$ and one to $4$. (This example does not illustrate the fact that an element of one of the cycles does not have to have any trees attached to it.)
You can check your understanding of finite functions by thinking about the following two theorems:
• A permutation is a finite function with the property that its cycles have no trees attached to them.
• A tree is a finite function that has exactly one component whose cycle is a $1$-cycle.
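You can check these two theorems computationally on small examples. The following Python sketch is mine; it uses the permutation $p$ from the earlier example and a hypothetical tree `t` (the post's tree is shown only as a picture):

```python
def on_cycle(f, s):
    """True if s lies on a cycle of f: iterating f from s returns to s."""
    x = f[s]
    for _ in range(len(f)):
        if x == s:
            return True
        x = f[x]
    return False

def is_permutation(f):
    # A finite function is a permutation iff it is injective.
    return len(set(f.values())) == len(f)

# The permutation p from the two-line example above.
p = {0: 0, 1: 2, 2: 1, 3: 4, 4: 5, 5: 3, 6: 7, 7: 8, 8: 9, 9: 6}
assert is_permutation(p)
# In a permutation every element lies on a cycle (no trees attached).
assert all(on_cycle(p, s) for s in p)

# In a tree (hypothetical example with root 4), only the root,
# whose cycle is a 1-cycle, lies on a cycle.
t = {0: 1, 1: 4, 2: 4, 3: 2, 4: 4, 5: 4, 6: 5, 7: 4, 8: 7, 9: 7}
assert not is_permutation(t)
assert {s for s in t if on_cycle(t, s)} == {4}
```
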
# Representations of functions II
## Introduction to this post
I am writing a new abstractmath chapter called Representations of Functions. It will replace some of the material in the chapter Functions: Images, Metaphors and Representations.
This post includes a draft of the introduction to the entire new chapter (immediately below) and of the sections on graphs of continuous functions of one variable with values in the plane and in 3-space. Later posts will concern multivariable continuous functions and finite discrete functions.
## Introduction to the new Chapter
Functions can be represented visually in many different ways. There is a sharp difference between representing continuous functions and representing discrete functions.
For a continuous function $f$, $f(x)$ and $f(x')$ tend to be close together when $x$ and $x'$ are close together. That means you can represent the values at an infinite number of points by exhibiting them for a bunch of close-together points. Your brain will automatically fill in the nearby points that are not represented.
Nothing like this works for discrete functions. Many different arrangements of the inputs and outputs can be made. Different arrangements may be useful for representing different properties of the function.
### Illustrations
The illustrations were created using these Mathematica Notebooks:
These notebooks contain many more examples of the ways functions can be represented than are given in this article. The notebooks also contain some manipulable diagrams which may help you understand the diagrams. In addition, all the 3D diagrams can be rotated using the cursor to get different viewpoints. You can access these tools if you have Mathematica, which is available for free for faculty and students at many universities, or with Mathematica CDF Player, which runs on Windows, Mac and Linux.
Like everything in abstractmath.org, the notebooks are covered by a Creative Commons ShareAlike 3.0 License.
## Functions from a subset of $\mathbb{R}$ to $\mathbb{R}\times\mathbb{R}$
Suppose $F:\mathbb{R}\to\mathbb{R}\times\mathbb{R}$. That means you put in one number and get out a pair of numbers.
### The unit circle
An example is the unit circle, which is the graph of the function $t\mapsto(\cos t,\sin t)$. That has this parametric plot:
Because $\cos^2 t+\sin^2 t=1$, every real number $t$ produces a point on the unit circle. Four points are shown. For example, $(\cos\pi,\,\sin\pi)=(-1,0)$ and
$(\cos(5\pi/3),\,\sin(5\pi/3))=(\frac{1}{2},-\frac{\sqrt3}{2})\approx(.5,-.866)$
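These values can be spot-checked numerically; here is a quick Python check of mine using the standard `math` module:

```python
import math

# Numeric spot-check of the points named in the text.
t = math.pi
assert math.isclose(math.cos(t), -1.0)
assert math.isclose(math.sin(t), 0.0, abs_tol=1e-12)

t = 5 * math.pi / 3
assert math.isclose(math.cos(t), 0.5)
assert math.isclose(math.sin(t), -math.sqrt(3) / 2)

# Every t lands on the unit circle, since cos^2 t + sin^2 t = 1.
for t in [0.0, 1.0, 2.5, -7.3]:
    assert math.isclose(math.cos(t) ** 2 + math.sin(t) ** 2, 1.0)
```
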
#### $t$ as time
In graphing functions $f:\mathbb{R}\to\mathbb{R}$, the plot is in two dimensions and consists of the points $(x,f(x))$: the input and the output. The parametric plot shown above for $t\mapsto(\cos t,\sin t)$ shows only the output points $(\cos t,\sin t)$; $t$ is not plotted on the graph at all. So the graph is in the plane instead of in three-dimensional space.
An alternative is to use time as the third dimension: If you start at some number $t$ on the real line and continually increase it, the value $f(t)$ moves around the circle counterclockwise, repeating with period $2\pi$. If you decrease $t$, the value moves clockwise. The animated gif circlemovie.gif shows how the location of a point on the circle moves around the circle as $t$ changes from $0$ to $2\pi$. Every point is traversed an infinite number of times as $t$ runs through all the real numbers.
#### The unit circle with $t$ made explicit
Since we have access to three dimensions, we can show the input $t$ explicitly by using a three-dimensional graph, shown below. The blue circle is the function $t\mapsto(\cos t,\sin t,0)$ and the gold helix is the function $t\mapsto(\cos t,\sin t,.2t)$.
The introduction of $t$ as the value in the vertical direction changes the circle into a helix. The animated .gif covermovie.gif shows both the travel of a point on the circle and the corresponding point on the helix.
As $t$ changes, the circle is drawn over and over with a period of $2\pi$. Every point on the circle is traversed an infinite number of times as $t$ runs through all the real numbers. But each point on the helix is traversed exactly once. For a given value of $t$, the point on the helix is always directly above or below the point on the circle.
The helix is called the universal covering space of the circle, and the set of points on the helix over (and under) a particular point $p$ on the circle is called the fiber over $p$. The universal cover of a space is a big deal in topology.
### Figure-8 graph
This is the parametric graph of the function $t\mapsto(\cos t,\sin 2t)$.
Notice that it crosses itself at the origin, when $t$ is any odd multiple of $\frac{\pi}{2}$.
Below is the universal cover of the Figure-8 graph. As you can see, the different instances of crossing at $(0,0)$ are separated. The animated .gif Fig8movie shows the paths taken as $t$ changes on the Figure-8 graph and on its universal cover.
## Functions from a subset of $\mathbb{R}$ to $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$
The graph of a function from a subset of $\mathbb{R}$ to $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$ can also be drawn as a parametric graph in three-dimensional space, giving a three-dimensional curve. The trick that I used in the previous section of showing the input parameter so that you can see the universal cover won’t work in this case because it would require four dimensions.
### Universal covers
The gold curves in the figures for the universal covers of the circle and the figure 8 are examples of functions from $\mathbb{R}$ to $\mathbb{R}\times\mathbb{R}\times\mathbb{R}$.
### The seven-pointed crown
Here are views from three different angles of the graph of the function $t\mapsto(\cos t, \sin t, \sin 7t)$:
The animated gif crownmovie.gif represents the parameter $t$ in time.
### Another curve in space
Below are two views of the curve defined by $t\mapsto((-4t^2+53t)/18,\,t,\,.4(-t^2+1-10t))$.
The following plots the $x$-curve $(-4t^2+53t)/18$ in gold in the $yz$ plane and the $z$-curve $.4(-t^2+1-10t)$ in the $xy$ plane. The first and third views are arranged so that you see the curve just behind one of those two planes.
# Very early difficulties in studying abstract math
## Introduction
There are some difficulties that students have at the very beginning of studying abstract math that are overwhelmingly important, not because they are difficult to explain but because too many teachers don’t even know the difficulties exist, or if they do, they think they are trivial and that students should know better without being told. These difficulties cause too many students to give up on abstract math and drop out of STEM courses altogether.
I spent my entire career in math at Case Western Reserve University. I taught many calculus sections, some courses taken by math majors, and discrete math courses taken mostly by computing science majors. I became aware that some students who may have been A students in calculus essentially fell off a cliff when they had to do the more abstract reasoning involved in discrete math, and in the initial courses in abstract algebra, linear algebra, advanced calculus and logic.
That experience led me to write the Handbook of Mathematical Discourse and to create the website abstractmath.org. Abstractmath.org in particular grew quite large. It does describe some of the major difficulties that caused good students to fall off the abstraction cliff, but it also describes many, many minor difficulties. The latter are mostly about the peculiarities of the languages of math.
I have observed people’s use of language since I was like four or five years old. Not because I consciously wanted to — I just did. When I was a teenager I would have wanted to be a linguist if I had known what linguistics is.
I will describe one of the major difficulties here (failure to rewrite according to the definition) with an example. I am planning future posts concerning other difficulties that occur specifically at the very beginning of studying abstract math.
## Rewrite according to the definition
To prove that a statement
involving some concepts is true,
start by rewriting the statement
using the definitions of the concepts.
### Example
#### Definition
A function $f:S\to T$ is surjective if for any $t\in T$ there is an $s\in S$ for which $f(s)=t$.
#### Definition
For a function $f:S\to T$, the image of $f$ is the set $\{t\in T\,|\,\text{there is an }s\in S\text{ for which }f(s)=t\}$
#### Theorem
Let $f:S\to T$ be a function between sets. Then $f$ is surjective if and only if the image of $f$ is $T$.
#### Proof
If $f$ is surjective, then the statement “there is an $s\in S$ for which $f(s)=t$” is true for any $t\in T$ by definition of surjectivity. Therefore, by definition of image, the image of $f$ is $T$.
If the image of $f$ is $T$, then by the definition of image, for every $t\in T$ there is an $s\in S$ for which $f(s)=t$. So by definition of surjective, $f$ is surjective.
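For finite functions the theorem can be checked mechanically. Here is an illustrative Python sketch of mine that transcribes the two definitions directly; the particular function is made up for the example:

```python
def image(f):
    """The image of f: the set of all t with t = f(s) for some s."""
    return set(f.values())

def is_surjective(f, codomain):
    """f is surjective onto codomain iff for every t in the codomain
    there is an s in the domain with f(s) == t."""
    return all(any(f[s] == t for s in f) for t in codomain)

# A hypothetical finite function f: {a,b,c,d,e} -> {a,b,c,d}.
f = {"a": "c", "b": "a", "c": "c", "d": "b", "e": "d"}
codomain = {"a", "b", "c", "d"}

# The theorem, checked on this example: surjective iff image == codomain.
assert is_surjective(f, codomain) == (image(f) == codomain)
assert is_surjective(f, codomain)  # this particular f is surjective
```
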
### “This proof is trivial”
The response of many mathematicians I know is that this proof is trivial and a student who can’t come up with it doesn’t belong in a university math course. I agree that the proof is trivial. I even agree that such a student is not a likely candidate for getting a Ph.D. in math. But:
• Most math students in an American university are not going to get a Ph.D. in math. They may be going on in some STEM field or to teach high school math.
• Some students who are not math majors take courses in which simple proofs are required (particularly discrete math and linear algebra). Some of these students may simply be interested in math for its own sake!
A sizeable minority of students who are taking a math course requiring proofs need to be told the most elementary facts about how to do proofs. To refuse to explain these facts is a disservice to the mathematics community and adds to the fear and dislike of math that too many people already have.
These remarks may not apply to students in many countries other than the USA. See When these problems occur.
### “This proof does not describe how mathematicians think”
The proof I wrote out above does not describe how I would come up with a proof of the statement, which would go something like this: I do math largely in pictures. I envision the image of $f$ as a kind of highlighted area of the codomain of $f$. If $f$ is surjective, the highlighting covers the whole codomain. That’s what the theorem says. I wouldn’t dream of writing out the proof I gave above just to verify that it is true.
### More examples
Abstractmath.org and Gyre&Gimble contain several spelled-out proofs that start by rewriting according to the definitions. In these examples one then goes on to use algebraic manipulation or to quote known theorems to put the proof together.
### This post contains testable claims
Herein, I claim that some things are true of students just beginning abstract math. The claims are based largely on my teaching experience and some statements in the math ed literature. These claims are testable.
### When these problems occur
In the United States, the problems I describe here occur in the student’s first or second year, in university courses aimed at math majors and other STEM majors. Students typically start university at age 18, and they may not choose their major until their second year.
In much of the rest of the world, students are more likely to have one more year in a secondary school (sixth form in England lasts two years) or go to a “college” for a year or two before entering a university, and then they get their bachelor’s degree in three years instead of four as in the USA. Not only that, when they do go to university they enter a particular program immediately — math, computing science, etc.
These differences may mean that the abstract math cliff occurs early in a student’s university career in the USA and before the student enters university elsewhere.
In my experience at CWRU, some math majors fell off the cliff, but the percentage of computing science students having trouble was considerably greater. On the other hand, more of them survived the discrete math course when I taught it, because it contained less abstraction and more computation than the math major courses (except linear algebra, which had a balance similar to the discrete math course — and was taken by a sizeable number of non-math majors).
# Functions: Metaphors, Images and Representations
Please read this post at abstractmath.org. I originally posted the document here but some of the diagrams would not render, and I haven’t been able to figure out why. Sorry for having to redirect.
# Demos for graph and cograph of calculus functions
The interactive examples in this post require installing Wolfram CDF player, which is free and works on most desktop computers using Firefox, Safari and Internet Explorer, but not Chrome. The source code is the Mathematica Notebook GraphCograph.nb, which is available for free use under a Creative Commons Attribution-ShareAlike 2.5 License. The notebook can be read by CDF Player if you cannot make the embedded versions in this post work properly.
This post provides interactive examples of the endograph and cograph of real functions. Those two concepts were defined and discussed in the previous post Endograph and cograph of real functions.
Such representations of functions, put side by side with the conventional graph, may help students understand how to interpret the usual graph representation. For example: What does it mean when the arrows slant to the left? Spread apart? Squeeze together? Flip over? Going back and forth between the conventional graph and the cograph or endograph for a particular function should make you much more attuned to the possibilities when you see only the conventional graph of another function.
This is not a major advance for calculus teachers, but it may be a useful tool.
## Line segment
$y=a x+b$
## Cubic
$y=a x^3-b x$
## Sine
$y=\sin a x$.
## Sine and its derivative
$y=\sin a x$ (blue) and $y=a\cos a x$ (red)
## Quintic with three parameters
$y=a x^5-b x^4-0.21 x^3+0.2 x^2+0.5 x-c$
Thanks to Martin Flashman for corrections.
# Presenting binary operations
This is the first of a set of notes I am writing to help me develop my thoughts about how particular topics in my book Abstracting algebra should be organized. This article describes my plan for the book in some detail. The present post has some thoughts about presenting binary operations.
## Before binary operations are introduced
Traditionally, an abstract algebra book assumes that the student is familiar with high school algebra and will then proceed with the observation that such operations as $+$ and $\times$ can be thought of as functions of two variables that take a pair of numbers to another number. So the first abstract idea is typically the concept of binary operation, although in another post I will consider whether that really should be the first abstract concept.
The Abstracting Algebra book will have a chapter that presents concrete examples of algebraic operations and expressions on numbers as in elementary school and as in high school algebra. This section of the post outlines what should be presented there. Each subsection needs to be expanded with lots of examples.
### In elementary school
In elementary school you see expressions such as
• $3+4$
• $3\times 4$
• $3-4$
The student invariably thinks of these expressions as commands to calculate the value given by the expression.
They will also see expressions such as
$\begin{array}[b]{r} 23\\ 355\\ + 96\\ \hline \end{array}$
which they will take as a command to calculate the sum of the whole list:
$\begin{array}[b]{r} 23\\ 355\\ + 96\\ \hline 474 \end{array}$
That uses the fact that addition is associative, and the format suggests using the standard school algorithm for adding up lists. You don’t usually see the same format with more than two numbers for multiplication, even though it is associative as well. In some elementary schools in recent years students are learning other ways of doing arithmetic and in particular are encouraged to figure out short cuts for problems that allow them. But the context is always “do it”, not “this represents a number”.
### Algebra
In algebra you start using letters for numbers. In algebra, “$a\times b$” and “$a+b$” are expressions in the symbolic language of math, which means they are like noun phrases in English such as “My friend” and “The car I bought last week and immediately totaled” in that both are used semantically as names of objects. English and the symbolic language are both languages, but the symbolic language is not a natural language, nor is it a formal language.
#### Example
In beginning algebra, we say “$3+5=8$”, which is a (true) statement.
The expressions “$3+5$” and “$8$”
• are not the same expression
• but in the standard semantics of algebra they have the same meaning
• and therefore the equation communicates information that neither “$3+5$” nor “$8$” communicate.
Another example is “$3+5=6+2$”.
Facts like this example need to be communicated explicitly before binary operations are introduced formally. The students in a college abstract algebra class probably know the meaning of an equation operationally (subconsciously) but they have never seen it made explicit. See Algebra is a difficult foreign language.
#### Note
The equation “$3+5=6+2$” is an expression just as much as “$3+5$” and “$6+2$” are. It denotes an object of type “equation”, which is a mathematical object in the same way as numbers are. Most mathematicians do not talk this way, but they should.
## Binary operations
### Early examples
Consciousness-expanding examples should appear early and often after binary operations are introduced.
#### Common operations
• The GCD is a binary operation on the natural numbers. This disturbs some students because it is not written in infix form. It is associative. The GCD can be defined conceptually, but for computation purposes needs (Euclid’s) algorithm. This gives you an early example of conceptual definitions and algorithms.
• The maximum function is another example of this sort. This is a good place to point out that a binary operation with the “same” definition can be defined on different sets. The max function on the natural numbers does not have quite the same conceptual definition as the max on the integers.
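As a concrete aside (mine, not the book's text), here is Euclid's algorithm in Python together with a spot-check that GCD and max are associative:

```python
from math import gcd

# Euclid's algorithm: the computational counterpart of the
# conceptual definition of GCD.
def euclid(a, b):
    while b != 0:
        a, b = b, a % b
    return a

assert euclid(12, 18) == gcd(12, 18) == 6

# GCD and max are associative binary operations on the natural
# numbers; spot-check associativity on a few triples.
for a, b, c in [(4, 6, 10), (9, 12, 30), (7, 5, 3)]:
    assert gcd(gcd(a, b), c) == gcd(a, gcd(b, c))
    assert max(max(a, b), c) == max(a, max(b, c))
```
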
#### Extensional definitions
In order to emphasize the arbitrariness of definitions, some random operations on small finite sets should be given by a multiplication table, on sets of numbers and on sets represented by letters of the alphabet. This will elicit the common reaction, “What operation is it?” Hidden behind this question is the fact that you are giving an extensional definition instead of a formula — an algorithm or a combination of familiar operations.
#### Properties
The associative and commutative properties should be introduced early just for consciousness-raising. Subtraction is not associative or commutative. Rock paper scissors is commutative but not associative. Groups of symmetries are associative but not commutative.
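These properties are easy to test on finite examples. Here is a Python sketch of mine; the encoding of rock-paper-scissors as a binary operation (the winner of the two moves) is my own choice of representation:

```python
# Rock-paper-scissors as a binary operation: rps(x, y) is the winner.
def rps(x, y):
    beats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    return x if (x, y) in beats or x == y else y

moves = ["rock", "paper", "scissors"]

# Commutative: the winner doesn't depend on the order of the arguments.
assert all(rps(x, y) == rps(y, x) for x in moves for y in moves)

# Not associative: there is a triple where the two groupings disagree.
assert any(rps(rps(x, y), z) != rps(x, rps(y, z))
           for x in moves for y in moves for z in moves)

# Subtraction is neither associative nor commutative.
assert (1 - 2) - 3 != 1 - (2 - 3)
assert 1 - 2 != 2 - 1
```
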
### Binary operation as function
The first definition of binary operation should be as a function. For example, “$+$” is a function that takes pairs of numbers to numbers. In other words, $+:\mathbb{Z}\times\mathbb{Z}\to\mathbb{Z}$ is a function.
We then abstract from that example and others like it from specific operations to arbitrary functions $\Delta:S\times S\to S$ for arbitrary sets $S$.
This is abstraction twice.
• First we replace the example operations, such as multiplication, subtraction, GCD and MAX on $\mathbb{Z}$, or something complicated such as $(x,y)\mapsto 3(xy-1)^2(x^2+xy^3)^3$, by an arbitrary operation.
• Then we replace sets of numbers by arbitrary sets. An example would be the random multiplication on the set $\{1,2,5\}$ given by the table
$\begin{array}{c|ccc} \Delta& 1&2&5\\ \hline 1&2&2&1\\ 2&5&2&1\\ 5&2&1&5 \end{array}$
This defines a function $\Delta:\{1,2,5\}\times\{1,2,5\}\to\{1,2,5\}$ for which, for example, $\Delta(2,1)=5$, or $2\Delta 1=5$ in infix notation. This example uses numbers as elements of the set and is good for eliciting the “What operation is it?” question.
• I will use examples where the elements are letters of the alphabet, as well. That sort of example makes the students think the letters are variables they can substitute for, another confusion to be banished by the wise professor who knows the right thing to say to make it clear. (Don’t ask me; I taught algebra for 35 years and I still don’t know the right thing to say.)
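An extensional definition translates directly into a lookup table in code. A sketch (mine) of the $\Delta$ table above as a Python dict:

```python
# The random multiplication table on {1, 2, 5}, stored extensionally:
# there is no formula behind it, only the table itself.
delta = {
    (1, 1): 2, (1, 2): 2, (1, 5): 1,
    (2, 1): 5, (2, 2): 2, (2, 5): 1,
    (5, 1): 2, (5, 2): 1, (5, 5): 5,
}

# Prefix notation: delta[(2, 1)]; in infix this would be written 2 Δ 1.
assert delta[(2, 1)] == 5

# It is a genuine binary operation: defined for every pair and closed on {1, 2, 5}.
S = {1, 2, 5}
assert all((a, b) in delta and delta[(a, b)] in S for a in S for b in S)
```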
It is important to define prefix notation and infix notation right away and to use both of them in examples.
### Other representations of binary operations
The main way of representing binary operations in Abstracting Algebra will be as trees, which I will cover in later posts. Those posts will be much more interesting than this one.
#### Binary operations in high school and college algebra
• Some binops are represented in infix notation: “$a+b$”, “$a-b$”, and “$a\times b$”.
• “$a\times b$” is usually written “$ab$” for letters and with the “$\times$” symbol for numbers.
• Some binops have idiosyncratic representation: “$a^b$”, “$\binom{a}{b}$”.
• A lot of binops such as GCD and MAX are given as functions of two variables (prefix notation) and their status as binary operations usually goes unmentioned. (That is not necessarily wrong.)
• The symbol “$(a,b)$” is used to denote the GCD (a binop) and is also used to denote a point in the plane or an open interval, both of which are not strictly binops. They are binary operations in a multisorted algebra (a concept I expect to introduce later in the book.)
• Some apparent binops are in infix notation but have flaws: In “$a/b$”, the second entry can’t be $0$, and the expression when $a$ and $b$ are integers is often treated as having good forms ($3/4$) and bad forms ($6/8$).
#### Trees
The chaotic nature of algebraic notation I just described is a stumbling block, but it is not the primary reason high school algebra is hard for many students. The big reason it is hard is that the notation requires students to create and hold complicated abstract structures in their heads.
##### Example
This example is a teaser for future posts on using trees to represent binary operations. The tree below shows much more of the structure of a calculation of the area of a rectangle surmounted by a semicircle than the expression
$A=wh+\frac{1}{2}\left(\pi(\frac{1}{2}w)^2\right)$
does.
The tree explicitly embodies the thought process that leads to the formula:
• You need to add the area of the rectangle and the area of the semicircle.
• The area of the rectangle is width times height.
• The area of the semicircle is $\frac{1}{2}(\pi r^2)$.
• In this case, $r=\frac{1}{2}w$.
Any mathematician will extract the same abstract structure from the formula $A=wh+\frac{1}{2}\left(\pi(\frac{1}{2}w)^2\right)$. This is difficult for students beginning algebra.
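As a preview of the tree representation, the formula can be encoded as nested nodes and evaluated by recursion. A minimal sketch (my own; it assumes a simple (operator, left, right) tuple encoding):

```python
import math

# A = w*h + (1/2) * (pi * ((1/2)*w)**2), encoded as an expression tree:
# each node is a number, a variable name, or a (operator, left, right) tuple.
tree = ("+",
        ("*", "w", "h"),                                        # rectangle area
        ("*", 0.5, ("*", math.pi, ("^", ("*", 0.5, "w"), 2))))  # semicircle area

def evaluate(node, env):
    if isinstance(node, str):
        return env[node]
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    a, b = evaluate(left, env), evaluate(right, env)
    return {"+": a + b, "*": a * b, "^": a ** b}[op]

# Rectangle 4 wide and 3 high, surmounted by a semicircle of radius 2.
area = evaluate(tree, {"w": 4, "h": 3})
assert abs(area - (12 + 0.5 * math.pi * 4)) < 1e-12
```

The nesting of the tuples is exactly the structure the tree diagram makes visible.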
# Inverse image demo revisited
This post is an update of the post Demonstrating the inverse image of a function.
To manipulate the demos in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. CDF Player works on most desktop computers using Firefox, Safari and Internet Explorer, but not Chrome.
The code for the demos, with some explanatory remarks, is in the file InverseImage.nb on my Mathematica website. That website also includes some other examples as .cdf files.
If the diagrams don’t appear, or appear but show pink diagrams, or if the formulas in the text are too high or too low, refresh the screen.
• The vertical red interval has the horizontal green interval(s) as inverse image.
• You can move the sliders back and forth to move to different points on the curve. The sliders control the vertical red interval. $a$ is the lower point of the vertical red line and $b$ is the upper point.
• As you move the sliders back and forth you will see the inverse image breaking up into a disjoint union of intervals, merging into a single interval, or disappearing entirely.
• The arrow at the upper right makes it run automatically.
• If you are using Mathematica, you can enter values into the boxes, but if you are using CDF Player, you can only change the number using the slider or the plus and minus incrementers.
This is the graph of $y=x^2-1$.
The graph of $-.5 + .5 x + .2 x^2 - .19 x^3 - .015 x^4 + .01 x^5$
The graph of the rational function $0.5 x+\frac{1.5 \left(x^4-1\right)}{x^4+1}$
The graph of a straight line whose slope can be changed. You can design demos of other functions with variable parameters.
The graph of the sine function. The other demos were coded using the Mathematica Reduce function to get the inverse image. This one had to be done in an ad hoc way as explained in the InverseImage.nb file.
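For readers without CDF Player, the behaviour the demos show can be approximated with a few lines of standard-library Python (a rough sketch of mine, not the notebook's Mathematica code), by sampling the curve and collecting the runs where $f(x)$ lies in $[a,b]$:

```python
def inverse_image(f, a, b, lo=-3.0, hi=3.0, n=600):
    """Approximate the inverse image f^{-1}([a, b]) on [lo, hi] as a list of intervals."""
    xs = [lo + i * (hi - lo) / n for i in range(n + 1)]
    inside = [a <= f(x) <= b for x in xs]
    intervals, start = [], None
    for x, ok in zip(xs, inside):
        if ok and start is None:
            start = x
        elif not ok and start is not None:
            intervals.append((start, x))
            start = None
    if start is not None:
        intervals.append((start, xs[-1]))
    return intervals

# For y = x^2 - 1, the red interval [0, 3] pulls back to two green intervals,
# approximately [-2, -1] and [1, 2]; changing [a, b] can merge them or make them vanish.
print(inverse_image(lambda x: x * x - 1, 0, 3))
```

Moving the interval endpoints here plays the same role as moving the sliders in the demo.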
Send to Kindle | 2017-05-28 22:10:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7932513356208801, "perplexity": 381.4463406227397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463611569.86/warc/CC-MAIN-20170528220125-20170529000125-00156.warc.gz"} |
https://math.stackexchange.com/questions/1087479/game-between-2014-card-players-where-everyone-with-at-least-2-cards-passes-a-car/1087486 | # Game between 2014 card players where everyone with at least 2 cards passes a card to each of his neighbors
2014 card players sit around a big table. One of the players begins with 2014 cards on his hand, and the other have none. The rules for the game are:
• Every minute, every player who has 2 cards or more gives one card to each of the persons next to him.
• The game is finished when everybody has exactly one card.
I imagine that the table has the form of a rectangle. At each of the two short ends of the table there sits one person; one of them is the person with 2014 cards. On each of the two long sides there are 1006 persons.
I discovered that at $t=1$, $t=3$, $t=6$, $t=10$, $t=15$, i.e. $t=\frac{n(n+1)}{2}$, there will be $n$ persons on each side of the table with one card, and the persons with the most cards would have $2014-2n$ cards. Because there are 1006 persons on each of the long sides of the table, they would each have 1 card after $t=\frac{1006(1006+1)}{2}=506521$ minutes. Then we have
But if that is true then they will never have one card each, because there always will be one person with 2 cards. Is that correct?
• 2014 card players sit around a big table. In the beginning one has cards, the others - no. So why 1006, not 2013? – Przemysław Scherwentke Jan 1 '15 at 12:27
• Sorry, I didn't get your point. What do you mean? – Nick Podowalski Jan 1 '15 at 12:31
• From the first sentence of your question we have 2014 players. So why you are considering only $1+1006$ of them? – Przemysław Scherwentke Jan 1 '15 at 12:33
• I said that there will be one person on each of the short sides of the tables. There wil also be 1006 persons on each sides of the long sides of the table. Then we have $1+1+1006+1006 = 2014$ persons. – Nick Podowalski Jan 1 '15 at 12:39
• Oh, I see. But then 2 from your second picture "moves" through the upper part to the place before 0, and then the game ends... – Przemysław Scherwentke Jan 1 '15 at 12:45
Yes, the game will never finish.
Consider the number of cards the even-numbered players have. At the start it's zero, and in the end it ought to be 1007. But we always pass an even number of cards between the even- and odd-numbered players.
Hence it can never finish.
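The invariant in this answer is easy to confirm by brute-force simulation; a small Python sketch (my own) for $n=6$ players:

```python
def step(cards):
    """One minute: every player holding >= 2 cards gives one to each neighbour."""
    n = len(cards)
    givers = [c >= 2 for c in cards]
    nxt = list(cards)
    for i, g in enumerate(givers):
        if g:
            nxt[i] -= 2
            nxt[(i - 1) % n] += 1
            nxt[(i + 1) % n] += 1
    return nxt

n = 6
cards = [n] + [0] * (n - 1)       # one player starts with all n cards
for _ in range(200):
    # Invariant: the even-numbered seats always hold an even total, so the
    # all-ones position (even-seat total n/2 = 3, which is odd) is unreachable.
    assert sum(cards[0::2]) % 2 == 0
    assert cards != [1] * n
    cards = step(cards)
```

Every pass moves cards only between even- and odd-numbered seats in pairs, which is exactly why the parity of the even-seat total never changes.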
Lord_Farin's answer can be extended to all cases where there are $n$ players and cards, and $n$ is even but not a multiple of $4$.
Try this for all even $n$:
• Every time players who start with all the cards pass cards, they pass an even number (the same left and right)
• By the reflective symmetry of a round table and the passing rules, every time players who start with all the cards receive cards, they receive an even number (the same left and right)
• So when the numbers of people and cards are even, players who start with all the cards (an even number) always have an even number of cards, and so can never have exactly $1$
So it is impossible for all even $n$, including $2014$ | 2019-08-25 12:12:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5334244966506958, "perplexity": 589.9986727711927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323328.16/warc/CC-MAIN-20190825105643-20190825131643-00315.warc.gz"} |
https://www.jobilize.com/online/course/7-5-sample-test-rational-expressions-by-openstax?qcr=www.quizover.com | 7.5 Sample test: rational expressions
Page 1 / 1
A sample test covering rational expressions.
$\frac{x-3}{x^2+9x+20}-\frac{x-4}{x^2+8x+15}$
• A Simplify
• B What values of $x$ are not allowed in the original expression?
• C What values of $x$ are not allowed in your simplified expression?
$\frac{2}{x^2-1}+\frac{x}{x^2-2x+1}$
• A Simplify
• B What values of $x$ are not allowed in the original expression?
• C What values of $x$ are not allowed in your simplified expression?
$\frac{4x^3-9x}{x^2-3x-10}\times\frac{2x^2-20x+50}{6x^2-9x}$
• A Simplify
• B What values of $x$ are not allowed in the original expression?
• C What values of $x$ are not allowed in your simplified expression?
$\frac{\frac{1}{x}}{\frac{x-1}{x^2}}$
• A Simplify
• B What values of $x$ are not allowed in the original expression?
• C What values of $x$ are not allowed in your simplified expression?
$\frac{6x^3-5x^2-5x+34}{2x+3}$
• A Solve by long division.
• B If $f\left(x\right)={x}^{2}$, find $\frac{f\left(x+h\right)-f\left(x\right)}{h}$. Simplify as much as possible.
$\frac{x-1}{2x-1}=\frac{x+7}{7x+4}$
• A Solve for $x$.
• B Test one of your answers and show that it works in the original expression. (No credit unless you show your work!)
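For the first problem, the difference simplifies to $\frac{7}{(x+3)(x+4)(x+5)}$, since $x^2+9x+20=(x+4)(x+5)$, $x^2+8x+15=(x+3)(x+5)$, and $(x-3)(x+3)-(x-4)(x+4)=7$. A quick numerical spot-check (my addition, standard library only):

```python
def original(x):
    return (x - 3) / (x**2 + 9*x + 20) - (x - 4) / (x**2 + 8*x + 15)

def simplified(x):
    # x^2+9x+20 = (x+4)(x+5) and x^2+8x+15 = (x+3)(x+5), so over the common
    # denominator the numerator is (x-3)(x+3) - (x-4)(x+4) = 7.
    return 7 / ((x + 3) * (x + 4) * (x + 5))

# The two agree everywhere both are defined; x = -3, -4, -5 are excluded in both.
for x in [-2.5, 0, 1.7, 10]:
    assert abs(original(x) - simplified(x)) < 1e-12
```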
Extra credit:
I am thinking of two numbers, $x$ and $y$, that have this curious property: their sum is the same as their product. (Sum means “add them”; product means “multiply them.”)
• A Can you find any such pairs?
• B To generalize: if one of my numbers is $x$, can you find a general formula that will always give me the other one?
• C Is there any number $x$ that has no possible $y$ to work with?
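For the extra-credit problem, solving $x+y=xy$ for $y$ gives $y=\frac{x}{x-1}$, which answers part B and shows that $x=1$ is the exceptional value in part C. A short check (my addition):

```python
# Part B: if x + y = x*y, then y*(x - 1) = x, so y = x/(x - 1).
def partner(x):
    return x / (x - 1)

for x in [2, 3, 0.5, -1]:
    y = partner(x)
    assert abs((x + y) - (x * y)) < 1e-12

# Part A: x = y = 2 works (2 + 2 = 2 * 2); so does x = y = 0.
# Part C: x = 1 has no partner, since 1 + y = y is impossible.
```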
Got questions? Join the online conversation and get instant answers! | 2020-08-13 05:59:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4933963716030121, "perplexity": 3426.8479512534177}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00521.warc.gz"} |
https://puzzling.stackexchange.com/questions/93717/multiple-solutions-for-rubiks-cube | # Multiple solutions for Rubik's cube?
I got a novelty cube as swag at a convention once; it was like a normal cube, but with slogans written on the squares of the cube. If you solved the cube, you could read the slogan.
This got me to wondering if it's possible to solve this cube such that some of the words faced the wrong way.
If you have a standard Rubik's cube, and draw arrows on each label (all arrows on each face point in the same direction), is it possible to solve the cube such that some of the arrows face different directions from the original orientation? If so, how many distinct solved cubes are there?
Note: obviously, for all the non-center faces, there is only one possible orientation. For example, if the red-blue edge has "up" on the blue face pointing at red, then the red-blue-yellow and red-blue-green corners must be next to that edge, and must also have "up" on their blue faces pointing toward red. Thus, the orientations of the edges and corners are fixed, but it might be possible for the centers to end up rotated.
If you have a standard Rubik's cube, and draw arrows on each label (all arrows on each face point in the same direction), is it possible to solve the cube such that some of the arrows face different directions from the original orientation?
Yes, this is possible. The centres can be rotated. To rotate the U-face centre by 180 degrees, you can do the move sequence RL U2 R'L' U twice. To rotate the U-face centre a clockwise quarter turn and the F-face centre an anti-clockwise quarter turn, do FB' LR' UD' F' U'D L'R F'B U. A simpler but much longer sequence for this is to repeat U F' 63 times.
If so, how many distinct solved cubes are there?
Each of the 6 centres has 4 possible orientations. However, not all $$4^6=4096$$ orientations are achievable. It is not possible to turn a single centre 90 degrees without permuting the corner and edge pieces. In fact, the total amount of twist applied to the centres must be a multiple of 180 degrees, i.e. an even number of quarter turns. This is because an odd number of quarter turns of the centres would require an odd number of quarter turn moves of the faces, and that would apply an odd permutation to the corners and to the edges, so they cannot return to their initial locations.
So the centres only have at most $$\frac{4^6}{2}=2048$$ achievable states. With the move sequences mentioned above, every such state can be solved, so all those $$2048$$ states can indeed be achieved.
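The $2048$ count can be confirmed by enumerating all orientation vectors directly; a small Python check (my own, encoding each centre's orientation as a number of quarter turns):

```python
from itertools import product

# Each of the 6 centres has an orientation in {0, 1, 2, 3} quarter turns.
# Achievability requires the total twist to be an even number of quarter turns.
achievable = sum(1 for twists in product(range(4), repeat=6)
                 if sum(twists) % 2 == 0)
assert achievable == 4**6 // 2 == 2048
```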
This page tells you which algorithms to look into. Bear in mind that you will have to rotate at least two centers at a time.
• ... or rotate one centre 180 degrees. – Jaap Scherphuis Feb 13 at 20:00 | 2020-02-25 11:38:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7463384866714478, "perplexity": 585.6944144048102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146066.89/warc/CC-MAIN-20200225110721-20200225140721-00302.warc.gz"} |
http://stackoverflow.com/questions/21582131/python-simple-http-server-batch-file-error | # Python Simple HTTP Server Batch File Error
I'm currently playing around with home networking and getting into servers, and I found a Python command that will make the directories of your Python folder available over your LAN. To execute it in Windows, I've just been entering the following into Command Prompt:
cd..
cd..
cd Python33
python -m http.server 8000
This creates the server that can be accessed via x.x.x.x:8000. Instead of manually entering these commands every time, however, I tried to make a batch script that would do this automatically. Below is the script for the batch file. The problem I'm having is that when I run the batch file, Command Prompt says 'python' is not recognized as an internal or external command. I was wondering if someone could tell me why this is happening and offer a solution or reason as to why this couldn't work. Thanks.
@ECHO OFF
cd..
cd..
cd Python33
python -m http.server 8000
pause
EDIT: Below is the code that works; this will also set the directory to the C drive.
@ECHO OFF
cd C:\
C:\Python33\python -m http.server 8000
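As an aside not in the original thread: on Python 3.7+ you can avoid the cd entirely with `python -m http.server 8000 --directory C:\`, and the same server can be started from a script. A hedged standard-library sketch (the function name `serve` is mine):

```python
import functools
import threading
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def serve(directory, port=8000, host=""):
    """Serve `directory` over HTTP in a background thread.

    Equivalent to `python -m http.server PORT --directory DIR` (Python 3.7+).
    """
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    server = ThreadingHTTPServer((host, port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Example: server = serve("C:\\")  # then browse to http://x.x.x.x:8000
```

Because the directory is passed to the handler, the script works no matter where the batch file or script itself lives.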
-
Maybe it's because the batch file is located in a directory that is not 2 levels deep (C:\path\to\batch.bat).
Specifying the path of the python executable will solve your problem.
@ECHO OFF
C:\python33\python -m http.server 8000
pause
-
Try this:
@ECHO OFF
cd..
cd..
cd Python33
python -m SimpleHTTPServer 8000
pause
-
In Python 3.x, SimpleHTTPServer is renamed to http.server. – falsetru Feb 5 '14 at 16:08
You are right, thanks. Upvoted your answer. :-) – gravetii Feb 5 '14 at 16:08
Although the SimpleHTTPServer would work with native consoles on UNIX based systems. – John Smith Feb 5 '14 at 16:11 | 2016-06-25 21:44:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43175268173217773, "perplexity": 4073.97087353953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00094-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://crypto.stackexchange.com/questions?tab=frequent&page=5 | All Questions
3,515 questions
64k views
Why can't we reverse hashes?
First off, I know hashes are 1 way. There are an infinite number of inputs that can result in the same hash output. Why can't we take a hash and convert it to an equivalent string that can be hashed ...
15k views
In end-to-end encryption, doesn't the server need to be trusted?
Applications like WhatsApp use end to end encryption. WhatsApp says that only the users share a specific key and no third party can view the messages. But I do not understand how the two users agree ...
26k views
How is CipherCloud doing homomorphic encryption?
Much of the literature and latest papers suggest that homomorphic encryption is still not practical yet. How is CipherCloud able to achieve this? Does anyone have an idea? Their website does not ...
108k views
Should I use ECB or CBC encryption mode for my block cipher?
Can someone tell me which mode out of ECB and CBC is better, and how to decide which mode to use? Are there any other modes which are better?
132k views
Are there two known strings which have the same MD5 hash value?
Is there an example of two known strings which have the same MD5 hash value (representing a so-called "MD5 collision")?
26k views
Recommended skills for a job in cryptology [closed]
First let me apologize if this is an ill-posed question. Let me also note that I do not in any way seek a comprehensive answer, simply your thoughts on what makes for a valuable asset to a company ...
9k views
Is modern encryption needlessly complicated?
RSA, DES, AES, etc., all use (relatively) complicated mathematics to encrypt some message with some key. For each of these methods, there have been several documented vulnerabilities found over the ...
36k views
Is AES-256 weaker than 192 and 128 bit versions?
From a paper via Schneier on Security's Another AES Attack (emphasis mine): In the case of AES-128, there is no known attack which is faster than the 2128 complexity of exhaustive search. However, ...
13k views
How does recovering the public key from an ECDSA signature work?
Is it possible to recover the public key from the ECDSA signature values $(r,s)$? Please explain how this works.
23k views
Practical disadvantages of GCM mode encryption
It seems that GCM mode encryption has a clear advantage over CBC + HMAC in the sense that it only requires a single key. But it seems that there are some experts here that do not trust it enough to ...
6k views
How is the Swiss post e-voting system supposed to work, and how was it wrong?
I read that the Swiss post had an e-voting solution developed, made it possible to obtain the source code for review, and that vulnerabilities were found. Apparently we are not talking about the ...
45k views
SHA512 faster than SHA256?
I'm getting this strange result that SHA512 is around 50% faster than SHA256. I'm using .net's SHA512Managed and SHA256Managed ...
9k views
Is sharing the modulus for multiple RSA key pairs secure?
In the RSA public-key scheme, each user holds, beyond a public modulus $m$, a public exponent $e$ and a private exponent $d$. Suppose that Bob's private exponent is learned by other users. ...
7k views
Does “Shattered” actually show SHA-1-signed certificates are “unsafe”?
Note: I am not advocating anyone continues using SHA1-signed certificates: they are dead as far as security is concerned and should no longer be used. I'm just trying to clarify my understanding of ...
50k views
How to solve MixColumns
I can't really understand MixColumns in Advanced Encryption Standard, can anyone help me how to do this? I found some topic in the internet about MixColumns, but I still have a lot of question to ask....
3k views
Is every output of a hash function possible?
Is every output of a hash function (e.g. SHA1, MD5, etc) guaranteed to be possible, or, conversely, are there any output values that cannot possibly be created from any input? In other words, are hash ...
2k views
Why choose an authenticated encryption mode instead of a separate MAC?
What are cryptographic reasons to choose an authenticated-encryption mode of operation (such as GCM) over a traditional encryption mode plus an independent MAC, or vice versa? Assume there is no ...
2k views
Why is PuTTYgen-created RSA public exponent(e) not in {3,17,65537}?
As far as I know, RSA public exponent(e) should be one of {3,17,65537}. However, I found PuTTYgen-created RSA public exponent(e) is 0x25(37) by default,as follows, (PuTTYgen version: 0.66) I am ...
17k views
RSA with small exponents?
Just to establish notation with respect to the RSA protocol, let $n = pq$ be the product of two large primes and let $e$ and $d$ be the public and private exponents, respectively ($e$ is the inverse ...
1k views
Security of RSA for paranoids with padding?
RSA for Paranoids (RSAP) (in CryptoBytes v1n3), also known as Unbalanced RSA, is a variant of RSA proposed in 1995 by Adi Shamir, as a mean to increase the RSA public modulus size while keeping ...
11k views
Predicting values from a Linear Congruential Generator
I have learnt that Linear Congruential Random Number Generators are not cryptographically secure - my understanding is that given an LCG of the form: ...
8k views
Is TrueCrypt's multiple/cascading encryption safe?
Is TrueCrypt use of cascading encryption safe? Is it useful? TrueCrypt is arguably one of the most popular and widely used encryption applications in use today, yet it seems to use a very ...
13k views
Why is the IV passed in the clear when it can be easily encrypted?
The initialization vector (IV) is exclusive or'd against the plain text before encryption for the first block sent in order to prevent an attacker from learning that duplicate message blocks are being ...
4k views
Is a second preimage attack on MD5 feasible?
What's the practical status of MD5 w.r.t. second-preimage? Integrity of a piece of data is protected by an MD5 hash, itself assumed genuine. The data (and thus the hash) is known to the adversary. ...
4k views
Why can't the IV be predictable when its said it doesn't need to be a secret?
I heard multiple times not to reuse the same IV and IV should be random but doesn't need to be secret. I also heard if the IV is something like sequential numbers or something predictable I should ...
7k views
How were the number of rounds for different key sizes of AES selected?
The number of AES rounds increases with the key length. Why increase the number of rounds at all, and how were these round counts chosen?
660 views
Collision or second preimage for the ChaCha core?
Daniel J. Bernstein's ChaCha core is an evolution of the Salsa20 core. Both are functions over the set of 512-bit bitstrings, partitioned as sixteen 32-bit words. Can we exhibit collisions, or second-...
6k views
Turning a cipher into a hashing function
This is a theoretical question. I'd like to know if it's possible (and what the eventual consequences would be), not that I'm going to do it in one of my projects. ;) The first hashing functions created ...
15k views
Can CBC ciphertext be decrypted if the key is known, but the IV not?
Let's say that there is a binary file encrypted with AES in CBC mode (i.e. using a key and initialization vector). If key is known, but IV is not, is it easy to fully decrypt the file? How hard is it?...
7k views
What is the maximum size of the plaintext message for RSA OAEP?
OAEP is an important technique used to strengthen RSA. However, using OAEP (or any technique that adds randomness) reduces the size of plaintexts that can be encrypted. Assume for instance that OAEP ...
3k views
Software implementation of a commutative cipher?
I've got an application (detailed below) that calls for the use of a cipher that is commutative. I've been doing some googling & reading, and there are two algorithms that seem to get mentioned ...
1k views
Which risks are associated with deriving multiple keys from the same DH secret Z?
NIST recommends Krawczyk's HMAC-based key derivation function (HKDF) in SP-800-56C (PDF). HKDF shall e.g. be used to create keys from shared secrets after Diffie Hellman key establishment. NIST ...
3k views
PBKDF vs HKDF for pretty long key
I'm developing a messenger application with encrypted chats. In the first version of the app I've used PBKDF2 (10000 iterations, SHA1, random salt) to extend a short user password and generate keys ...
6k views
Is there a standard for OpenSSL-interoperable AES encryption?
Many AES-encrypted things (files, strings, database entries, etc.) start with "Salted__" ("U2FsdGVkX1" in base64). I hear it's ...
3k views
Is quantum key distribution safe against MITM attacks too?
i read this recently: http://www.newscientist.com/article/dn12786-quantum-cryptography-to-protect-swiss-election.html and some parts of this: http://en.wikipedia.org/wiki/Quantum_key_distribution ...
3k views
RSA enc/decryption with multiple prime modulus using CRT
Every information I found on internet about RSA-CRT encryption/decryption uses only two primes. I'm interested in my project in doing that using multiple (up to 8) primes. The general idea is to ...
3k views
Is there a length-preserving encryption scheme?
Is there a length-preserving encryption scheme, i.e., one where the length of the output ciphertext is the same as the length of the input plaintext?
657 views
Formal description of KDF1 and KDF2
I've seen many descriptions of KDF1 and KDF2 by now, but most documents simply point to specifications that are behind a pay wall. These standards are not specific to the KDF's; they just use these ...
4k views
RSA with composite numbers
It has been said that RSA uses a modulus product of two prime numbers for security reasons. But does RSA even work correctly if we allow composite integers instead? I think that the answer is "NO". ...
https://inordinatum.wordpress.com/2011/09/03/tricks-for-inverting-a-laplace-transform-part-ii/ | # inordinatum
Physics and Mathematics of Disordered Systems
## Tricks for inverting a Laplace Transform, part II: Products and Convolutions
EDIT: In the meanwhile, I have continued the series of posts on Laplace Transform inversion. You can find the subsequent articles here: part I (guesses based on series expansions), part III, part IV (substitutions), part V (pole decomposition). Enjoy!
Following the previous post on inverting Laplace transforms, here is another trick along the same lines. This one treats a generalization of the previous case:
Find $P(x)$ such that $\int_0^\infty P(x)e^{\lambda x}dx=\left(1-q\lambda\right)^a \left(1-\lambda\right)^b$.
As usual, the built-in InverseLaplaceTransform function from Mathematica 8 fails to give a result. To obtain a closed formula manually, note that each of the factors can be easily inverted:
$\int_0^\infty e^{\lambda x}R_{a,q}(x)\,dx=(1-q\lambda)^a$
has the solution
$R_{a,q}(x)=\frac{e^{-\frac{x}{q}}\left(\frac{x}{q}\right)^{-1-a}}{q \Gamma(-a)}$.
Hence, using the fact that Laplace transforms of convolutions give products, the solution for $P(x)$ can be written as a convolution:
$P(x) = \int_0^x dx' R_{a,q}(x')R_{b,1}(x-x')$.
Computing the integral gives the following expression for $P(x)$ in terms of the hypergeometric function $\,_1 F_1$:
$P(x) = \frac{e^{-x}q^a x^{-1-a-b}}{\Gamma(-a-b)} \,_1 F_1\left(-a,-a-b,\frac{q-1}{q}x\right)$
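As a quick numerical sanity check of this closed form, one can sum the $\,_1 F_1$ series directly and verify that the transform of $P(x)$ reproduces $(1-q\lambda)^a(1-\lambda)^b$. The sketch below is in Python rather than Mathematica, uses only the standard library, and picks illustrative parameters $a=-0.5$, $b=-0.7$, $q=2$, $\lambda=0.3$ (the integral converges for $a,b<0$ and $\lambda<\min(1,1/q)$):

```python
import math

def hyp1f1(a, b, z):
    # Power series for Kummer's confluent hypergeometric function 1F1(a; b; z).
    s = t = 1.0
    for k in range(500):
        t *= (a + k) / (b + k) * z / (k + 1)
        s += t
        if abs(t) < 1e-16 * abs(s):
            break
    return s

def P(x, a, b, q):
    # Closed form obtained above from the convolution of R_{a,q} and R_{b,1}.
    return (math.exp(-x) * q**a * x**(-1 - a - b) / math.gamma(-a - b)
            * hyp1f1(-a, -a - b, (q - 1) / q * x))

def transform(f, lam, upper=80.0, n=20000):
    # Composite Simpson approximation of the integral of f(x) e^{lam x} on (0, upper].
    h = upper / n
    total = f(0.0) + f(upper) * math.exp(lam * upper)
    for i in range(1, n):
        x = i * h
        total += (4 if i % 2 else 2) * f(x) * math.exp(lam * x)
    return total * h / 3

a, b, q, lam = -0.5, -0.7, 2.0, 0.3
lhs = transform(lambda x: P(x, a, b, q), lam)
rhs = (1 - q * lam)**a * (1 - lam)**b
print(lhs, rhs)  # the two values should agree to roughly three decimal places
```

Simpson's rule on a truncated interval is crude, so agreement to a few decimal places is all one should expect; it is enough to confirm the convolution result.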
Enjoy!!
Written by inordinatum
September 3, 2011 at 10:34 am
https://jan.ucc.nau.edu/mg2745/pedagogic_modules/courses/hpcdataintensive/range_query_2/ | # Range Queries: R-tree Resource Allocation
## Introduction: R-Trees on Multiple Nodes
In this exercise, we will not program anything new. Instead, we will experiment with resource allocation.
## Splitting Ranks Across Two Nodes
Run the program using the same experimental settings as in Programming Assignment #2, except with the constraints below.
• Use 2 nodes.
• Evenly split the ranks between the nodes.
When running your algorithm, to force the scheduler to split the ranks evenly between nodes you can use the --ntasks-per-node flag. As an example, the line below runs the program for $p=20$, where 10 ranks are run on each node.
srun --ntasks-per-node=10 -n20 range_query 5000000 50000 ZTF_ra_dec_5m.csv
Table 4: Total response time, index construction time, and search time (two nodes).
# of Ranks ($p$) Total Time (s) R-Tree Construction (s) Search Time (s) Global Sum Job Script Name (*.sh)
1
4
8
12
16
20
Table 5: Speedup and parallel efficiency of the search phase from Table 4.
# of Ranks ($p$) Speedup Parallel Efficiency
1
4
8
12
16
20
Using the data you collected in your tables, answer the following questions. Be sure to refer to specific values in your table when responding to these questions.
• Q6: Compare the speedup and parallel efficiency between Table 3 (one node) and Table 5 (two nodes). All $p=20$ ranks can be run on a single node (Table 3), or they can be evenly distributed between two nodes (Table 5). How does the speedup and parallel efficiency compare? Is there anything interesting about performance?
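When filling in the speedup and parallel efficiency tables, recall the usual definitions $S(p)=T(1)/T(p)$ and $E(p)=S(p)/p$, where $T(p)$ is the search time on $p$ ranks. A small helper like the one below does the arithmetic (the timings shown are placeholders, not measured values from this assignment):

```python
def speedup_table(search_times):
    """search_times maps rank count p -> search time in seconds; p = 1 is the baseline."""
    t1 = search_times[1]
    rows = []
    for p in sorted(search_times):
        s = t1 / search_times[p]    # speedup S(p) = T(1) / T(p)
        rows.append((p, s, s / p))  # parallel efficiency E(p) = S(p) / p
    return rows

# Placeholder timings for illustration only.
for p, s, e in speedup_table({1: 120.0, 4: 33.0, 8: 18.5, 12: 13.0, 16: 11.0, 20: 9.5}):
    print(f"p={p:2d}  speedup={s:6.2f}  efficiency={e:5.2f}")
```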
Based on your observations on 1 and 2 nodes, devise a new experiment that changes resource allocation (e.g., number of nodes, allocation of tasks to nodes, dedicated, non-dedicated, etc.) that may further improve performance beyond using 2 nodes. Complete the tables as follows.
• Q7: Explain what parameters you used in your job script. Report the results in the Tables below.
Table 6: Total response time, index construction time, and search time (your new experiment).
# of Ranks ($p$) Total Time (s) R-Tree Construction (s) Search Time (s) Global Sum Job Script Name (*.sh)
1
4
8
12
16
20
Table 7: Speedup and parallel efficiency of the search phase from Table 6.
# of Ranks ($p$) Speedup Parallel Efficiency
1
4
8
12
16
20
• Q8: How does the performance of your new experiment (Tables 6 and 7) compare to Tables 3 and 5? Did your new experiment outperform the 2 node experiment?
• Q9: Consider the case where you want to run the range query algorithm with the R-tree, and you need to run the algorithm on a cluster that is shared with one other user. Would you rather the other user be running a memory-bound algorithm or compute-bound algorithm? Explain.
https://dsp.stackexchange.com/questions/1843/dft-with-geometrically-spaced-bins/1845 | # DFT with geometrically-spaced bins?
The traditional Discrete Fourier Transform (DFT) and its cousin, the FFT, produce bins that are spaced equally. In other words, you get something like the first 10 hertz in the first bin, 10.1 through 20 in the second, etc. However, I need something a little different. I want the range of frequency covered by each bin to increase geometrically. Suppose I select a multiplier of 1.5. Then I want 0 through 10 in the first bin, 11 through 25 in the second, 26 through 48 in the third, etc. Is it possible to modify the DFT algorithm to behave in this fashion?
• You can always calculate DFT at the points of interest. Also, Discrete Wavelet Transform and Filter Banks come to my mind. Might be worth looking at them. – Alex Mar 24 '12 at 6:46
• You are looking for the Constant Q Transform (CQT). – Paul R Mar 24 '12 at 6:49
• Poorly worded. What you want is neither new, nor an improvement in many cases. – hotpaw2 Mar 24 '12 at 20:42
• – endolith Mar 26 '12 at 17:40
• DFT and FFT are not cousins. They give identical results. – Phonon Mar 26 '12 at 20:38
To quote my dissertation:
A collection of transforms are given the name constant Q and are similar to the Fourier transform.
Computation of the discrete Fourier transform can be very efficient when employing the use of the fast Fourier transform. However we notice that energy of a signal is divided into uniformly sized frequency buckets across the spectrum. While in many cases this is useful, we notice situations where this uniform distribution is sub-optimal. An important example of such a case is observed with the analysis of musical frequencies. In Western music, the frequencies that make up the musical scales are geometrically spaced. We therefore see that the map between frequency bins of the discrete Fourier transform and the frequencies of musical scales is insufficient in the sense that the bins match poorly. The constant Q transform addresses this issue.
The aim of the constant Q is to produce a set of logarithmically spaced frequency bins in which the width of the frequency bin is a product of the previous. As a result we may produce an identical number of bins per musical note across the audible spectrum, thus maintaining a constant level of accuracy for each musical note. The frequency bins become wider towards the higher frequencies and narrower towards the lower frequencies. This spread in the accuracy of frequency detection closely imitates the manner in which the human-auditory system responds to frequencies.
Additionally, the close matching of notes in western scales renders the constant-Q particularly useful in note detection; identifying a musical note value rather than an explicit frequency value. Furthermore the constant Q simplifies the process of timbre analysis. The frequencies of a musical note played by an instrument are often comprised of harmonically related partials. The timbre of the instrument can be characterised by the ratios of the harmonics. With the constant Q transform, the harmonics are equally spaced across the bins regardless of the fundamental frequency. This greatly simplifies the process of identifying an instrument playing a note anywhere in the scale simply by shifting the characterisation across the bins. A potential downside to using the constant Q transform is that it demands more computation than the Fourier transform.
An efficient algorithm for transforming a discrete Fourier transform (which may be computed with the FFT) into a Constant Q transform is detailed in Brown and Puckette (1992).
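For intuition, the direct (unoptimized) form of such a transform evaluates one DFT-style coefficient per geometrically spaced center frequency, with the window length inversely proportional to frequency so that every bin spans the same number of cycles. The sketch below is a naive Python illustration; the parameter choices are mine, not from Brown and Puckette:

```python
import cmath, math

def constant_q_bins(x, fs, f0, ratio, q):
    """Naive constant-Q evaluation: one DFT-style coefficient per geometrically
    spaced center frequency f_k = f0 * ratio**k. The window length N_k is
    inversely proportional to f_k, so every bin spans the same number of
    cycles (constant Q)."""
    bins = []
    fk = f0
    while fk < fs / 2:
        n_k = min(len(x), max(1, round(q * fs / fk)))
        w = sum(x[n] * cmath.exp(-2j * math.pi * fk * n / fs)
                for n in range(n_k)) / n_k
        bins.append((fk, abs(w)))
        fk *= ratio
    return bins

# A 112.5 Hz tone should peak in the bin centered at 112.5 Hz (50 * 1.5^2).
fs = 1000.0
tone = [math.sin(2 * math.pi * 112.5 * n / fs) for n in range(1000)]
spectrum = constant_q_bins(tone, fs, f0=50.0, ratio=1.5, q=2.0)
print(max(spectrum, key=lambda b: b[1])[0])
```

Note the cost: every bin is an explicit inner product, which is exactly why the kernel-based trick of Brown and Puckette (starting from an FFT) is preferred in practice.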
There are significant mathematical assumptions in the DFT (FFT). The most significant in this case is that you are performing a truncated infinite-time sinusoid transformation. The second is that the truncated time and truncated frequency signals are assumed to be modulo-wrapped (circular). The bins as spaced in a normal FFT form an orthonormal set only because of these assumptions (and the even arithmetical-sense spacing). The time <-> frequency pair are therefore perfectly reversible.
The constant-Q transform doesn't truncate so nicely, therefore any practical implementation doesn't yield perfect ortho-normal pairing. The kernel is an infinitely long exponentially decaying sinusoid and therefore cannot have the circular advantage indicated above. If you don't truncate, they do form an orthonormal set.
The wavelet transforms are typically power-of-2 spaced, which isn't very useful for fine-grained frequency estimation.
The suggestion to unevenly space a standard sinusoid DFT will miss information in the widely spaced region while it will duplicate information in the densely spaced region. Unless, a different apodization function is used for each frequency ... very costly.
One practical solution is to do a half-spectrum->decimate-by-2 repeated procedure to get octave based sub-sections to satisfy some minimax estimation error per octave. The portion-spectrum->decimate-by-ratio can be set to any ratio to achieve any granularity needed. Still pretty compute intensive, though.
https://byjus.com/cartesian-coordinates-calculator/ | Cartesian Coordinates Calculator
The Cartesian Coordinates Calculator is an online tool that shows the Cartesian coordinates for a given input. BYJU'S Cartesian Coordinates Calculator makes calculations simple and quick: given an input, it displays the result right away.
https://mathematica.stackexchange.com/questions/121291/ndsolve-step-size-effectively-zero | # NDSolve Step Size Effectively Zero
I'm trying to solve a PDE for a function of 2 variables. The most accurate parameterizations of this equation are very unwieldy and involve numerous piecewise elements, and so right now I'm trying to solve an approximate form. I'm getting step size effectively 0 errors, unless I increase the starting step size, in which case the solution I get is clearly unphysical. Since this is an astrophysics problem, I'm working over many orders of magnitude, which I suspect might be part of the issue.
Code:
z[x_] := 2458.31 - 100.087 x + 1.23213 x^2 - 0.0046743 x^3
lamavg[t_] := Min[1, 0.01 + 0.07 z[t]];
Ufit2[M_, t_] := 0.50519 + 3.127*10^10/M^2 - 274337./M + 2.12127*10^-10 M -
   1.92858*10^-20 M^2 - 6.20762*10^-11 t;
fixedM[x_] :=
 0.00006192808740866853/591.5967816994163 (32.59434080693661 -
    2.0297457454952188*^-7 x + 1.2887918124478197*^-15 x^2 -
    2.5861699533219344*^-25 x^3 + 1.941139394441828*^-35 x^4 -
    5.134757798851362*^-46 x^5)
DifEq = D[P[M, t], t] == -M l/sol^2 D[(1 - e)/e lamavg[t]*3.93242*Ufit2[M, t] P[M, t],
     M] //. {e -> 0.1, l -> 1.26*10^31, sol -> 3*^8};
soln = NDSolve[{DifEq, P[M, 8.15240949872944*^8] == 1.629652976233948/M,
    P[1*10^5.263, t] == fixedM[t]}, P[M, t],
   {t, 1000, 8.15240949872944*^8}, {M, 1*10^5.263, 10000000000},
   Method -> {"StiffnessSwitching", "NonstiffTest" -> False}]
In terms of what I expect, this is a black hole mass function over the evolution of the universe, so P shouldn't fall off as time decreases, and there should be a difference of several orders of magnitude (especially at large t) between the low and high mass end.
Also I shortened the interval because the part here is the most important part, but ideally I would be able to continue the integration up to t of about 10 billion (not important at the moment though).
Thanks!
• "I'm working over many orders of magnitude" - is there really no choice of units that will give quantities of comparable size? – J. M. will be back soon Jul 23 '16 at 1:57
• What is the definition for fixedM – Young Jul 23 '16 at 2:00
• Do you have two accounts? You should not need the review of others to edit your own question. – bbgodfrey Jul 23 '16 at 2:17
• Can the time boundary be moved to one of the extremes of t – Young Jul 23 '16 at 2:23
• Please look here to have your accounts merged. – J. M. will be back soon Jul 23 '16 at 2:23
This evaluates without errors:
z[x_] := 2458.31 - 100.087 x + 1.23213 x^2 - 0.0046743 x^3
lamavg[t_] := Min[1, 0.01 + 0.07 z[t]]
Ufit2[M_, t_] := 0.50519 + 3.127*10^10/M^2 - 274337/M + 2.12127*10^-10 M -
1.92858*10^-20 M^2 - 6.20762*10^-11 t;
e = 1/100;
l = (126/100)*10^(31);
sol = 3*^8;
DifEq = D[P[M, t],t] == -M l/ sol^2 D[(1 - e)/e lamavg[t]*3.93242*Ufit2[M, t] P[M, t], M];
fixedM[x_] := 0.00006192808740866853/591.5967816994163 (32.59434080693661 -
    2.0297457454952188*^-7 x + 1.2887918124478197*^-15 x^2 -
    2.5861699533219344*^-25 x^3 + 1.941139394441828*^-35 x^4 -
    5.134757798851362*^-46 x^5)
soln = NDSolve[{
DifEq,
P[M, 8.15240949872944*^8] == 1.629652976233948/M,
P[0.263*^5, t] == fixedM[t]},
P[M, t], {M, 0.263*^5, 10000000000}, {t, 1000, 8.15240949872944*^8},
Method -> "StiffnessSwitching"]
Plot3D[P[M, t] /. soln, {M, 0.263*^5, 10000000000}, {t, 1000, 8.15240949872944*^8}]
• Maybe you meant to try Plot3D[P[M, t] /. soln,...] – Michael E2 Jul 23 '16 at 4:25
• @MichaelE2 ... oops ... you're right – Young Jul 23 '16 at 4:27
• What about the rest of the interval? (I can get it to work with some trickiness, but your approach would be simpler.) – Michael E2 Jul 23 '16 at 4:47
• @MichaelE2 The OP said that I could set max t to equal the boundary condition of 8.15240949872944*^8 – Young Jul 23 '16 at 4:51
• This is about the right scale but not really the behavior I'm expecting... black holes shouldn't just disappear! – basementDweller Jul 23 '16 at 5:03
http://www.ams.org/mathscinet-getitem?mr=0099990 | MathSciNet bibliographic data MR99990 14.00 Zariski, Oscar On Castelnuovo's criterion of rationality \$p\sb{a}=P\sb{2}=0\$$p\sb{a}=P\sb{2}=0$ of an algebraic surface. Illinois J. Math. 2 1958 303–315. Article
https://en.wikipedia.org/wiki/8255 | # Intel 8255
Intel D8255
Pinout of i8255
The Intel 8255 (or i8255) programmable peripheral interface (PPI) chip was developed and manufactured by Intel in the first half of the 1970s for the Intel 8080 microprocessor and is a member of the MCS-85 Family of chips. The i8255 was also used with the Intel 8085 and Intel 8086[1] and their descendants and found wide applicability in digital processing systems. It was later cloned by other manufacturers. It is available in 40-pin DIP and 44-pin PLCC packages.[2] The 82C55 is a higher-speed CMOS version.
The i8255 provides 24 parallel input/output lines, but the functionality of the i8255 is now mostly embedded in larger VLSI processing chips as a sub-function. The i8255 is still made and is often used to expand micro controller input/output.
## Similar chips
The i8255 has a similar function to the MOS Technology 6522 (Versatile Interface Adapter) and the MOS Technology CIA (Complex Interface Adapter), all developed for the 6502 family. All of these chips were originally available in a 40-pin DIL package, but the i8255 provides 24 usable I/O pins whereas the MOS chips only provide 16 I/O pins plus 4 control pins. However, the MOS chips contain other functions besides pin I/O, such as two programmable reload timers and a simple I/O shift register function. Furthermore, the MOS devices allow the direction (input or output) of all their I/O pins to be individually programmed. The i8255 I/O pins have only four programmable direction bits: one for all of Port A(7:0), one for Port B(7:0), one for Port C(3:0) and one for Port C(7:4).
Other comparable I/O chips are the 2655 Programmable Peripheral Interface from the Signetics 2650 family, the Motorola 6820 PIA (Peripheral Interface Adapter) from the Motorola 6800 family and the Western Design Center WDC 65C21.
## Application
The 8255 is widely used in many microcomputer/microcontroller systems and home computers such as the SV-328 and all MSX models. The 8255 is used in the original IBM-PC,[3] PC/XT, PC/jr and clones, along with numerous homebuilt computers such as the N8VEM. The 8255 is also directly compatible with the Z-80, as well as many Intel processors.
## Function
The i8255 gives a CPU or digital system access to programmable parallel I/O.[4] The 8255 has 24 input/output pins.[5] These are divided into three 8-bit ports (A, B, C).[6] Port A and port B can be used as 8-bit input/output ports. Port C can be used as an 8-bit input/output port or as two 4-bit input/output ports or to produce handshake signals for ports A and B.
The three ports are further grouped as follows:
1. Group A consisting of port A and upper part of port C.
2. Group B consisting of port B and lower part of port C.
Eight data lines (D0–D7) are available (with an 8-bit data buffer) to read/write data into the ports or control register under the control of ¬RD (pin 5) and ¬WR (pin 36), which are active-low signals for read and write operations respectively. Address lines A1 and A0 select a data register for each port or the control register, as listed below:
A1 A0 Port selected
0 0 port A
0 1 port B
1 0 port C
1 1 control register
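In software, the A1/A0 selection simply picks one of four consecutive I/O addresses. A sketch (Python here purely for illustration; a real driver would do this in C or assembly):

```python
def ppi_register(base, a1, a0):
    # The 8255 decodes A1, A0 into one of four consecutive registers:
    # base+0 = port A, base+1 = port B, base+2 = port C, base+3 = control.
    return base + ((a1 << 1) | a0)

# On the original IBM PC the PPI is decoded at base 0x60, so the
# control register sits at 0x63.
print(hex(ppi_register(0x60, 1, 1)))  # 0x63
```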
The control signal chip select ¬CS (pin 6) is used to enable the 8255 chip. It is an active-low signal, i.e., when ¬CS = 0, the 8255 is enabled. The RESET input (pin 35) is connected to the RESET line of the system (8085, 8086, etc.), so that when the system is reset, all the ports are initialized as input lines. This is done to prevent the 8255 and/or any peripheral connected to it from being damaged due to a mismatch of port directions. As an example, consider an input device connected to the 8255 at port A. If, from a previous operation, port A is initialized as an output port and the 8255 is not reset before using the current configuration, then there is a possibility of damage to either the input device, the 8255, or both, since both the 8255 and the device connected to it will be driving data.
The control register (or the control logic, or the command word register) is an 8-bit register used to select the modes of operation and input/output designation of the ports.[7]
## Operational modes of 8255
There are two basic operational modes of 8255:
• Bit set/reset mode (BSR mode).
• Input/Output mode (I/O mode).
The two modes are selected on the basis of the value present at the D7 bit of the control word register. When D7 = 1, 8255 operates in I/O mode, and when D7 = 0, it operates in the BSR mode.
## Bit set/reset (BSR) mode
The Bit Set/Reset (BSR) mode is applicable to port C only. Each line of port C (PC0 - PC7) can be set/reset by suitably loading the control word register. BSR mode and I/O mode are independent and selection of BSR mode does not affect the operation of other ports in I/O mode.[8]
8255 BSR mode
• D7 bit is always 0 for BSR mode.
• Bits D6, D5 and D4 are don't care bits.
• Bits D3, D2 and D1 are used to select the pin of Port C.
• Bit D0 is used to set/reset the selected pin of Port C.
Selection of port C pin is determined as follows:
D3 D2 D1 Bit/pin of port C selected
0 0 0 PC0
0 0 1 PC1
0 1 0 PC2
0 1 1 PC3
1 0 0 PC4
1 0 1 PC5
1 1 0 PC6
1 1 1 PC7
As an example, if it is needed that PC5 be set, then in the control word,
1. Since it is BSR mode, D7 = '0'.
2. Since D4, D5, D6 are not used, assume them to be '0'.
3. PC5 has to be selected, hence, D3 = '1', D2 = '0', D1 = '1'.
4. PC5 has to be set, hence, D0 = '1'.
Thus, as per the above values, 0B (Hex) will be loaded into the Control Word Register (CWR).
D7 D6 D5 D4 D3 D2 D1 D0
0 0 0 0 1 0 1 1
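The same derivation works for any pin; a small helper (a hypothetical utility, not part of any 8255 driver) that builds BSR-mode control words:

```python
def bsr_control_word(pin, set_bit):
    """BSR-mode control word: D7 = 0, D3..D1 select PC0..PC7, D0 = set/reset.
    D6..D4 are don't-care bits and left as 0."""
    if not 0 <= pin <= 7:
        raise ValueError("port C has pins PC0..PC7")
    return (pin << 1) | (1 if set_bit else 0)

print(hex(bsr_control_word(5, True)))   # setting PC5 -> 0xb
print(hex(bsr_control_word(7, False)))  # resetting PC7 -> 0xe
```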
## Input/Output mode
This mode is selected when D7 bit of the Control Word Register is 1. There are three I/O modes:[9]
1. Mode 0 - Simple I/O
2. Mode 1 - Strobed I/O
3. Mode 2 - Strobed Bi-directional I/O
### Control Word Format
I/O Control Word Format
• D0, D1, D3, D4 are assigned to lower port C, port B, upper port C and port A respectively. When one of these bits is 1, the corresponding port acts as an input port. For example, if D0 = D4 = 1, then lower port C and port A act as input ports. When a bit is 0, the corresponding port acts as an output port. For example, if D1 = D3 = 0, then port B and upper port C act as output ports.
• D2 is used for mode selection of Group B (port B and lower port C). When D2 = 0, mode 0 is selected and when D2 = 1, mode 1 is selected.
• D5 and D6 are used for mode selection of Group A (port A and upper port C). The selection is done as follows:
D6 D5 Mode
0 0 0
0 1 1
1 X 2
• As it is I/O mode, D7 = 1.
For example, if port B and upper port C have to be initialized as input ports and lower port C and port A as output ports (all in mode 0):
1. Since it is an I/O mode, D7 = 1.
2. Mode selection bits, D2, D5, D6 are all 0 for mode 0 operation.
3. Port B and upper port C should operate as Input ports, hence, D1 = D3 = 1.
4. Port A and lower port C should operate as Output ports, hence, D4 = D0 = 0.
Hence, for the desired operation, the control word register will have to be loaded with "10001010" = 8A (hex).
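The I/O-mode control word can be built the same way (again a hypothetical sketch, with bit positions taken from the control word format described above):

```python
def io_control_word(a_in, b_in, c_upper_in, c_lower_in, mode_a=0, mode_b=0):
    """I/O-mode control word: D7 = 1; D6,D5 = group A mode; D4 = port A
    direction; D3 = upper port C; D2 = group B mode; D1 = port B;
    D0 = lower port C. A 1 bit means input, 0 means output."""
    return (0x80
            | (mode_a & 0b11) << 5
            | a_in << 4
            | c_upper_in << 3
            | (mode_b & 0b1) << 2
            | b_in << 1
            | c_lower_in)

# Port B and upper port C as inputs, port A and lower port C as outputs,
# everything in mode 0.
print(hex(io_control_word(a_in=0, b_in=1, c_upper_in=1, c_lower_in=0)))  # 0x8a
```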
### Mode 0 - simple I/O
In this mode, the ports can be used for simple I/O operations without handshaking signals. Port A, port B provide simple I/O operation. The two halves of port C can be either used together as an additional 8-bit port, or they can be used as individual 4-bit ports. Since the two halves of port C are independent, they may be used such that one-half is initialized as an input port while the other half is initialized as an output port.
The input/output features in mode 0 are as follows:
1. Output ports are latched.
2. Input ports are buffered, not latched.
3. Ports do not have handshake or interrupt capability.
4. With 4 ports, 16 different combinations of I/O are possible.
'Latched' means the bits are put into a storage register (array of flip-flops) which holds its output constant even if the inputs change after being latched.
The 8255's outputs are latched to hold the last data written to them. This is required because the data only stays on the bus for one cycle, so without latching the outputs would become invalid as soon as the write cycle finished.
The inputs are not latched because the CPU only has to read their current values, then store the data in a CPU register or memory if it needs to be referenced at a later time. If an input changes while the port is being read then the result may be indeterminate.
#### Mode 0 – input mode
• In the input mode, the 8255 gets data from the external peripheral ports and the CPU reads the received data via its data bus.
• The CPU first selects the 8255 chip by making ¬CS low. Then it selects the desired port using the A0 and A1 lines.
• The CPU then issues a ¬RD signal to read the data from the external peripheral device via the system data bus.
#### Mode 0 - output mode
• In the output mode, the CPU sends data to 8255 via system data bus and then the external peripheral ports receive this data via 8255 port.
• The CPU first selects the 8255 chip by making $\overline{\text{CS}}$ low. It then selects the desired port using the A0 and A1 lines.
• The CPU then issues a $\overline{\text{WR}}$ signal to write data to the selected port via the system data bus. This data is then received by the external peripheral device connected to the selected port.
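The A1/A0 register selection used in both sequences can be modelled as a small lookup. This is an illustrative sketch rather than driver code; the 00/01/10/11 assignment is the standard 8255 mapping:

```python
# Register selection of the 8255 via the A1, A0 address lines:
# 00 -> port A, 01 -> port B, 10 -> port C, 11 -> control register.
PORT_MAP = {
    (0, 0): "port A",
    (0, 1): "port B",
    (1, 0): "port C",
    (1, 1): "control register",
}

def select_register(a1, a0):
    """Return the register addressed by the A1/A0 lines."""
    return PORT_MAP[(a1, a0)]

print(select_register(0, 0))  # port A
print(select_register(1, 1))  # control register
```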
### Mode 1 - Strobed Input/output mode
When we wish to use port A or port B for handshake (strobed) input or output operation, we initialise that port in mode 1. Port A and port B can be initialised to operate in different modes; for example, port A can operate in mode 0 while port B operates in mode 1. Some of the pins of port C function as handshake lines.
For port B in this mode (irrespective of whether it is acting as an input port or an output port), the PC0, PC1 and PC2 pins function as handshake lines.
If port A is initialised as mode 1 input port, then, PC3, PC4 and PC5 function as handshake signals. Pins PC6 and PC7 are available for use as input/output lines.
Mode 1, which supports handshaking, has the following features:
1. Two ports, port A and port B, can be used as 8-bit I/O ports.
2. Each port uses three lines of port C as handshake signals; the remaining two lines of port C can be used as I/O lines.
3. Interrupt logic is supported.
4. Input and output data are latched.
Input Handshaking signals
1. IBF (Input Buffer Full) - It is an output indicating that the input latch contains information.
2. STB (Strobed Input) - The strobe input loads data into the port latch, which holds the information until it is input to the microprocessor via the IN instruction.
3. INTR (Interrupt request) - It is an output that requests an interrupt. The INTR pin becomes a logic 1 when the STB input returns to a logic 1, and is cleared when the data are input from the port by the microprocessor.
4. INTE (Interrupt enable) - It is neither an input nor an output; it is an internal bit programmed via the PC4 (port A) or PC2 (port B) bit position.
Output Handshaking signals
1. OBF (Output Buffer Full) - It is an output that goes low whenever data are output (OUT) to the port A or port B latch. This signal is set to a logic 1 whenever the ACK pulse returns from the external device.
2. ACK (Acknowledge) - It causes the OBF pin to return to a logic 1 level. The ACK signal is a response from an external device, indicating that it has received the data from the 82C55A port.
3. INTR (Interrupt request) - It is a signal that often interrupts the microprocessor when the external device receives the data. This pin is qualified by the internal INTE (interrupt enable) bit.
4. INTE (Interrupt enable) - It is neither an input nor an output; it is an internal bit programmed to enable or disable the INTR pin. The INTE A bit is programmed using the PC6 bit and INTE B is programmed using the PC2 bit.
### Mode 2 - Strobed Bidirectional Input/Output mode
Only port A can be initialized in this mode. Port A can be used for bidirectional handshake data transfer. This means that data can be input or output on the same eight lines (PA0 - PA7). Pins PC3 - PC7 are used as handshake lines for port A. The remaining pins of port C (PC0 - PC2) can be used as input/output lines if group B is initialized in mode 0 or as handshaking for port B if group B is initialized in mode 1. In this mode, the 8255 may be used to extend the system bus to a slave microprocessor or to transfer data bytes to and from a floppy disk controller. Acknowledgement and handshaking signals are provided to maintain proper data flow and synchronisation between the data transmitter and receiver.
http://tex.stackexchange.com/questions/137364/how-to-automatically-open-pdf-file-after-dvi2ps2pdf-in-emacs

# how to automatically open pdf file after dvi2ps2pdf in emacs
i want to let emacs automatically open pdf file after i type C-c C-c dvi2ps2pdf RET, so i add the following code in my .emacs file
;; dvi2ps2pdf
(add-to-list
 'TeX-command-list
 (list "dvi2ps2pdf" "dvips %s.dvi | ps2pdf %s.ps %s.pdf | SumatraPDF %o"
       'TeX-run-command nil t))
it works fine, well, until i find that it fails if the tex file, say, foo.tex, has \usepackage{psfrag} in it as follows,
% foo.tex
\documentclass{article}
\usepackage{psfrag}
\begin{document}
$e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$
\end{document}
SumatraPDF would open foo.dvi instead of foo.pdf.
if i type C-c C-n and start all over, it works fine again, as long as i do not compile and produce a new foo.dvi.
after compiling and foo.dvi is created,
if i replace %o with %f, SumatraPDF would open foo.ps
if i replace %o with %s.pdf, SumatraPDF would open two broken files foo and .pdf
if i replace SumatraPDF %o with start \"\" %s.pdf, Adobe Reader will correctly open foo.pdf
this is frustrating. how can i edit the code to let SumatraPDF take over everything? and how do i use the substitutions %o %f %s properly? excuse me that i'm really inexperienced, but it seems that they belong to different conventions? thanks very much!
## UPDATE
with @giordano's help, i tend to believe that this is an inherent problem of SumatraPDF (Adobe Reader too, i think).
i think there's no problem on linux or mac. on windows, however, in DOS commands, i find that the following are the same, say,
• copy foo.pdf bar.pdf
• copy "foo".pdf bar.pdf
• copy "foo.pdf" bar.pdf
• ...
other external commands such as dvips and ps2pdf do not distinguish the cases, either. SumatraPDF, however, treats foo.pdf and "foo.pdf" as the same file, but "foo".pdf as two separate files foo and .pdf, and even opens them.
i think that's why SumatraPDF always fails to open %s.pdf. it is a file name parsing flaw; if i'm right, SumatraPDF is not intended to open files like .pdf on windows, and it should be able to detect that automatically.
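To make the suspected flaw concrete, here is a small Python sketch (our illustration, not SumatraPDF's actual code) contrasting a POSIX-style tokenizer with a naive parser that breaks an argument at every quote boundary:

```python
import shlex

arg = '"foo".pdf'

# A POSIX-style tokenizer joins adjacent quoted and unquoted pieces,
# so "foo".pdf is a single argument: foo.pdf.
posix = shlex.split(arg)
print(posix)  # ['foo.pdf']

# A naive parser that splits at every quote boundary reproduces the
# reported behaviour: two bogus names.
naive = [piece for piece in arg.split('"') if piece]
print(naive)  # ['foo', '.pdf']
```

The naive split yields exactly the two names reported above, foo and .pdf.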
p.s.
i also tried to manipulate the string %s.pdf(="foo".pdf) and tried to strip the double quotes in ways of lisp or DOS command. the quotes won't go away and i failed miserably due to inexperience.
do you guys have any ideas? thanks!
## UPDATE
thanks to @giordano, i gave cygwin a shot. (i had cygwin installed for doc-view anyway, no trouble at all.) it turns out that SumatraPDF works completely fine on cygwin, i.e., it accepts file names such as "foo".pdf. so i decided to redirect all command line calls to cygwin. i added the following to my .emacs file (see emacswiki),
;; Sets your shell to use cygwin's bash, if Emacs finds it's running
;; under Windows and c:\cygwin exists. Assumes that C:\cygwin\bin is
;; not already in your Windows Path (it generally should not be).
;;
(let* ((cygwin-root "c:/cygwin")
(cygwin-bin (concat cygwin-root "/bin")))
(when (and (eq 'windows-nt system-type)
           (file-readable-p cygwin-root))
(setq exec-path (cons cygwin-bin exec-path))
(setenv "PATH" (concat cygwin-bin ";" (getenv "PATH")))
;; By default use the Windows HOME.
;; Otherwise, uncomment below to set a HOME
;; (setenv "HOME" (concat cygwin-root "/home/eric"))
;; NT-emacs assumes a Windows shell. Change to bash.
(setq shell-file-name "bash")
(setenv "SHELL" shell-file-name)
(setq explicit-shell-file-name shell-file-name)))
after that, SumatraPDF recognizes %s.pdf just fine.
there is one problem, though. dvi2ps runs differently on cygwin and cannot proceed; i put the question somewhere else.
p.s.
the code of SumatraPDF has been improved to accommodate cases like "foo".pdf. hopefully we'll see how it works very soon.
If it's a bug you should really report it to Emacs (M-x report-emacs-bug RET) or AUCTeX (M-x TeX-submit-bug-report RET) instead of trying to work around. If it will be fixed upstream, everyone will benefit, not just you ;-). – giordano Oct 13 '13 at 9:20
i think i'll try to report this to SumatraPDF first :) – davyjones Oct 13 '13 at 10:15
One option would be to become more comfortable with command-line and set up your own function similar to my first example, which obviates the need to rely upon AUCTeX. Another option would be for you to open up tex.el and take a look at (defcustom TeX-expand-list . . . and perhaps create your own variables that format file names in a manner that SumatraPDF likes. You can customize your own variable in whatever format you like -- e.g., by using split-string, and concat. – lawlist Oct 13 '13 at 21:41
In my first example, I used: (setq base-file (car (split-string (buffer-file-name) "\\."))) and (setq pdf-file (concat base-file ".pdf")). If your buffer-file-name is c:/davy/jones/foo.tex, then base-file is c:/davy/jones/foo. Then, pdf-file concatenates a .pdf onto the end of base-file. This gives you a *.pdf file name that SumatraPDF likes. My line of code (start-process "displayline" nil sumatra "-forward-search" tex-file line pdf-file) is shorthand for C:/SumatraPDF/SumatraPDF.exe -forward-search c:/davy/jones/foo.tex [line-number] c:/davy/jones/foo.pdf. – lawlist Oct 13 '13 at 22:09
The solution is much simpler than what you've done so far: define a single command that runs latex, dvips and ps2pdf. In this way, when TeX-PDF-mode is active the PDF will be opened by your default PDF viewer without any further change. In your init file you can add something like this:
(add-to-list
'TeX-command-list
'("dvi2ps2pdf" "%(latex) %s && dvips %s.dvi && ps2pdf %s.ps %s.pdf" TeX-run-command nil t))
I'm not familiar with DOS batch scripting; in UNIX shells && is the logical AND: the following command is run only if the previous one exits successfully. I don't know the batch analogue.
@giordano, great! thanks very much! the only thing i need to do is to stay in the pdflatex mode. btw, what if i want to perform this in the dvi mode and i do not want to switch to the pdf mode to open the pdf file? otherwise i would get the dvi file to open, or in the presence of the psfrag package, the ps file will open in gsview? – davyjones Oct 13 '13 at 15:48
@davyjones it wouldn't be a clean solution, but you can append SumatraPDF %s.pdf to the list of programs to be run by the dvi2ps2pdf command. – giordano Oct 11 '13 at 17:09
@giordano, sadly simply appending does not work... as i stated in the original question, that would cause SumatraPDF to open two broken files, say, foo and .pdf. i think we could define a function viewpdf first as found in link, but i would not know how to use it inside the double quotes again, do you have any suggestions? many thanks again! – davyjones Oct 11 '13 at 17:25
@davyjones has the file spaces in its name? Appending && okular %s.pdf (Okular is my PDF viewer) works for me. Try protecting the file name with quotes: \"%s.pdf\" instead of simply %s.pdf. – giordano Oct 11 '13 at 17:31
@giordano, sadly, it failed again... SumatraPDF will open two broken files, one with no name, the other with name bar".pdf". if i use SumatraPDF .\\%s.pdf, SumatraPDF will open one broken file named "bar".pdf. it still fails if i change SumatraPDF to AcroRd32. is it because i'm using emacs+auctex on a windows system? – davyjones Oct 11 '13 at 20:04
https://memorize.be/maths/r/syntax/types/matrix.md

Matrix
This is the same as a matrix in Math. This is a vector of vectors. The function is matrix(data, nrow, ncol).
matrix(1:6, nrow = 2, ncol = 3)
# [,1] [,2] [,3]
# [1,] 1 3 5
# [2,] 2 4 6
matrix(1:6, nrow = 2, ncol = 3, byrow = T)
# [,1] [,2] [,3]
# [1,] 1 2 3
# [2,] 4 5 6
Indexes
This is the same as for vectors, but you have two indexes separated by a comma ,. The format is [row, column]. If you omit one, the whole row/column will be returned (or the whole matrix if both are omitted).
m[,] # whole matrix
m[1,] # first row
m[,1] # first column
m[1,1] # value at 1,1 (first row, first column)
Sometimes, you will have to use m[i,j,drop=FALSE], because by default R drops a dimension of length one and returns a vector instead of a matrix.
Functions
• t(m): transpose
• solve(m): matrix inverse
• solve(A,b): solve $Ax=b$
• diag(...): create a diagonal matrix
• m1 %*% m2: matrix product
• eigen(m): eigenvalues and eigenvectors
Utilities
• dim(m): dimension
• ncol(m): number of columns
• nrow(m): number of rows
• cbind(vector1, matrix1, matrix2, vector2, ...): merge into one matrix (left to right)
• rbind(matrix1, matrix2, vector1, vector2, ...): merge into one matrix (top to bottom)
• dimnames(m): row/column names
• colnames(m): column names
• rownames(m): row names
As you should remember, you can overwrite the values by placing these accessors on the left side of the operator <-.
y <- 1:9
# a vector
# [1] 1 2 3 4 5 6 7 8 9
# a vector with a dimension is a matrix
dim(y) <- c(3,3)
y
# [,1] [,2] [,3]
# [1,] 1 4 7
# [2,] 2 5 8
# [3,] 3 6 9
https://testbook.com/question-answer/the-pole-zero-diagram-of-a-causal-and-stab--57fe2fd7e9f1316ef91e9d1d

# The pole-zero diagram of a causal and stable discrete-time system is shown in the figure. The zero at the origin has multiplicity 4. The impulse response of the system is h[n]. If h[0] = 1, we can conclude:
This question was previously asked in
GATE EC 2015 Official Paper: Shift 1
1. h[n] is real for all n
2. h[n] is purely imaginary for all n
3. h[n] is real for only even n
4. h[n] is purely imaginary for only odd n
Option 1 : h[n] is real for all n
## Detailed Solution
Concept:
Causality and Stability:
For a system with rational transfer function H(z) to be causal, the ROC should lie outside the outermost pole, and for BIBO stability, the ROC should include the unit circle |z| = 1.
∴ An LTI discrete-time causal system with the rational system function H(z) is said to be stable if all the poles of H(z) lie inside the unit circle.
Calculation:
The transfer function can be written as:
$$H(z) = \frac{kz^4}{(z-0.5-j0.5)(z-0.5+j0.5)(z+0.5+j0.5)(z+0.5-j0.5)}$$
$$H(z) = \frac{kz^4}{\left(z^2 - z + \frac{1}{2}\right)\left(z^2 + z + \frac{1}{2}\right)}$$
Now, since the system is causal and stable we have,
$$h[0] = \lim_{z \to \infty} H(z) = k = 1$$
Thus, the transfer function will be:
$$H(z) = \frac{z^4}{\left(z^2 - z + \frac{1}{2}\right)\left(z^2 + z + \frac{1}{2}\right)} = \frac{z^4}{z^4 + \frac{1}{4}} = \frac{1}{1 + 0.25z^{-4}}$$
$$H(z) = 1 - 0.25z^{-4} + 0.0625z^{-8} - \cdots$$
h[n] = [1, 0, 0, 0, -0.25, 0, 0, 0, 0.0625, ...]
Now,
h[0] = 1, and every sample of the expansion is real,
so h[n] is real for all n.
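These closed-form results are easy to verify numerically. The sketch below (not part of the official solution) expands the denominator by convolving coefficients and runs the difference equation h[n] = x[n] - 0.25 h[n-4] implied by H(z) = 1/(1 + 0.25 z^-4):

```python
# Expand (z^2 - z + 1/2)(z^2 + z + 1/2) by convolving coefficients;
# the cross terms cancel and the product collapses to z^4 + 1/4.
a, b = [1, -1, 0.5], [1, 1, 0.5]
den = [0.0] * (len(a) + len(b) - 1)
for i, ai in enumerate(a):
    for j, bj in enumerate(b):
        den[i + j] += ai * bj
print(den)  # [1.0, 0.0, 0.0, 0.0, 0.25]

# H(z) = 1 / (1 + 0.25 z^-4)  =>  h[n] = x[n] - 0.25 h[n-4]
N = 16
h = [0.0] * N
for n in range(N):
    x = 1.0 if n == 0 else 0.0
    h[n] = x - 0.25 * (h[n - 4] if n >= 4 else 0.0)
print(h[:9])  # [1.0, 0.0, 0.0, 0.0, -0.25, 0.0, 0.0, 0.0, 0.0625]
```

Every sample is real, confirming option 1, and the absolute sum converges to the geometric limit 4/3, consistent with BIBO stability.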
Note:
Causality:
A linear time-invariant discrete-time system is said to be causal if the impulse response h[n] = 0, for n < 0 and it is therefore right-sided.
The ROC of such a system H(z) is the exterior of a circle. If H(z) is rational, then the system is said to be causal if:
1) The ROC is the exterior of a circle outside the outermost pole; and
2) The degree of the numerator polynomial of H(z) should be less than or equal to the degree of the denominator polynomial.
Stability:
A discrete-time LTI system is said to be BIBO stable if the impulse response h[n] is summable, i.e.
$$\sum_{n = -\infty}^{\infty} \left| h[n] \right| < \infty$$
z-transform of h[n] is given as:
$$H(z) = \sum_{n = -\infty}^{\infty} h[n] z^{-n}$$
Let $z = e^{j\Omega}$ (which describes the unit circle in the z-plane), then
$$\left| H\left( e^{j\Omega} \right) \right| = \left| \sum_{n = -\infty}^{\infty} h[n] e^{-j\Omega n} \right|$$
By the triangle inequality, this is bounded as:
$$\left| H\left( e^{j\Omega} \right) \right| \le \sum_{n = -\infty}^{\infty} \left| h[n] e^{-j\Omega n} \right| = \sum_{n = -\infty}^{\infty} \left| h[n] \right| < \infty$$
This is the condition for stability. Thus we can conclude that an LTI system is stable if the ROC of the system function H(z) contains the unit circle |z| = 1.
http://jsnv.pensieroliberomgo.it/write-a-program-to-compute-simple-interest-in-python.html

This is the program you use to write all of your Python code down. Write a programme to compute the monthly charge from a given number of minutes. Logic to find simple interest in C program. Pretext Search. The lesson explains history and features of Python. Since that'll give you more insights on using the OOP concepts and defining and using of methods etc. The assignment calls for how to utilize loop statements for compound interest however I cannot figure out the statement for it. Exercise 3: Write a program to prompt the user for hours and rate per hour to compute gross pay. "Great overview of python for a beginner with programming." While in simple cases you can "roundtrip" a notebook to Python, edit the Python file, and then import it back without loss of main content, this is in general not guaranteed to work. I've divided the steps into 3 sections: understanding the algorithm that will be used to make the program, coding. Now go back to the text editor and modify your program so that it asks the end-user to input the principal. Make such a program in Python. What are the future values if 8 percent is the simple interest and compounded interest rate? The formula for. 3: Derive and compute a formula Can a newborn baby in Norway expect to live for one billion (10^9) seconds?

# We can change values here for different inputs
P = 5000
R = 7
T = 4
# Calculates simple interest
SI = (P * R * T) / 100

7 This chapter from our course is available in a version for Python3: While Loops Classroom Training Courses. If you make no withdrawals or deposits in the next 10 years, how much is the account worth? Use the simple interest rate formula. I think you don't need comment code where the same information is displayed in the message: "Weight cannot be less than 0 or greater than 500".
It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions. In the beginning, this choice can be confusing both when you read a program and when you write your own programs. Your program should only contain a single loop. You need to know the principal amount owed, the term of the loan, and the interest rate. If/Else Statements Programs 1. 0 # starting principal rate =. __graph_dict" for storing the vertices and their corresponding adjacent vertices. When you're happy, we're ready to move on to actually making the thing work! Calculator Functions- Writing the JavaScript We know how to attach event handlers to each button. Write a recursive Python function, given a non-negative integer N, to calculate and return the sum of its digits. write a program that prompts the user to input a number of telephone calls monthly and calculate the monthly bills. In this post, we will discuss how to write a python program to solve the quadratic equation. This is different from compound interest, where interest is calculated on on the initial amount and on any interest earned. Key Concepts¶. However, this article took me aback; it made it sound as though one needs to be an expert to find a minimum-wage, entry-level Python job. Here are some simple exercises to tackle based on the example above for which you should be able to compute the expected value readily with pen and paper: Exercise : Write a short program using a while loop to compute the sum of the integers from $$m$$ through $$n$$. To take a trivial example – say we’ve got 5 MPI ranks, and we want to write a program to compute the square root of 4 numbers. 
The program shall calculate and display a bank statement that shows the name of the account holder and the account number, all the information entered by the user and the predicted monthly balance for the next 12 months taking in consideration the monthly deposits and withdrawals as well as the interest. To keep the calculations fairly simple, only calculate the interest at the end of each year (having added $1200). Because Python is one of the most popular languages and has a syntax that is quite simple to understand, even kids are eager to use it as a stepping stone to learning programming languages. Now that you've detected and described your features, the next step is to write code to match them, i. In the following example, we will use multiple linear regression to predict the stock index price (i. Write a Python program that assigns the principal amount of 10000 to variable P, assign to n the value 12, and assign to r the interest rate of 8% (0. In this clip, you'll learn how to create a simple calculator program with Java. Exercise 1. Java Program To Find The Gross Salary Of An Employee 11. The assignment calls for how to utilize loop statements for compound interest however I cannot figure out the statement for it. Write a program that replaces these letters with something a bit more human-readable, and calculate the interest for some varying amounts of money at realistic interest rates such as 1%, and -0. Hide The Data Elements Of The Class Using Private Keyword. The formula for calculating a loan payment is useful for the borrower to double-check his monthly payment, or even to figure what the monthly payment will be for a future loan. Thank you, I'm a complete newbie at python. Calculate Simple Interest (SI) and Compound Interest using the formula SI=(P*N*R)/100. At the same time, Python is robust and performant, making it a viable choice also for core systems and larger applications. 
Write a Python program to convert seconds to day, hour, minutes and seconds. Lets see it in action for the simple task of taking in a series of numbers and returning the number of items and their sum. The formula of simple interest is : So, to find the simple interest we need to input the value of Principal (P), Rate (R) and Time (T). Script-3 echo "Enter the Principle Amount: " read p echo "Enter the rate of interest: " read r echo "Enter the num Find Basic Salary,Hra,DA, And Calculate the Net Salary. The program shall calculate and display a bank statement that shows the name of the account holder and the account number, all the information entered by the user and the predicted monthly balance for the next 12 months taking in consideration the monthly deposits and withdrawals as well as the interest. Non-blocking GUI This article is intended to give a simple overview (initially anyway) about how to write a program with a non-blocking GUI in wxPython. script for calculating simple interest a script which can be used to calculate the amount of interest using the formula I=Pit. How to create a 3D Terrain with Google Maps and height maps in Photoshop - 3D Map Generator Terrain - Duration: 20:32. Write a program to calculate Simple interest 7. Java Program For Calculate A Simple Interest 10. Python has a design philosophy which emphasizes code readability. com April 16, 2018 Abstract The Fibonacci numbers are a sequence of integers in which every. For instance, we borrow$1,000 today for 10 years with an annual interest of 8 percent per year. 06 x 2)…although this will get you close. Now take a look at this another simple program. 
# What is happening in this video of solvated electrons donated from sodium in ammonia?
I just saw the Periodic Videos episode "Liquid Electrons - Periodic Table of Videos", in which sodium is added to liquid ammonia. The demonstration shows that even when electrons are solvated, at a high enough electron density the solution turns shiny, reflective, and metallic. The sodium dissolves in the liquid ammonia and donates its electrons to the solution.
There are a number of things happening in the demonstration that I don't understand.
A separation is supposed to occur: once a high enough electron density is reached, the solution separates into a high-electron-density layer and a low-electron-density layer. Why a separation instead of a uniform concentration? And why would the high-electron-concentration layer float on top?
Also, at concentrations below those that give the metallic appearance, the sodium-ammonia solution starts out strongly blue. Why? What is it about adding sodium to liquid ammonia that instantly produces such a deep blue, almost black color?
It seems that the demonstration did not go quite as planned, so isopropyl alcohol was added, producing sodium isopropoxide. Why?
• The isopropanol is added to quench the highly reactive solution and would be required whatever the result of the reaction was. This is a common way to (gently) destroy sodium metal that you don't want to have to throw down the sink. – matt_black Apr 10 '17 at 0:22
• @matt_black perhaps "destroy sodium metal" is not the best way to describe what is happening? Watching the video, it seems to have been added early on in the demonstration, and the reason is explained. I just don't understand their explanation. – uhoh Apr 10 '17 at 0:28
• They say in the video they add the isopropanol to "quench" the reaction: this means to destroy any remnants of unreacted sodium metal as you can't dispose of the metal easily (unlike sodium isopropoxide). This has the beneficial effect of destroying the dark blue low-concentration sodium/ammonia solution making it easier to see the metallic sheen of the high concentration solution. It wasn't that the experiment didn't work just that it was hard to see the result on video. – matt_black Apr 10 '17 at 7:55
• @matt_black ok I see, to consume the remaining unreacted sodium metal so it can be disposed of later more safely, and to help viewers focus on the metallic color and not the strikingly blue layer. Thanks! – uhoh Apr 10 '17 at 8:45
I'll try an answer to this question because I watched this video a while back and did a bit of reading on it at the time, and I think I understand the big picture. The problem is that these solvated electrons are very complicated things and do not lend themselves to the traditional ways that chemists would like to think about things. For that reason, there is quite a lot of literature coming from physicists which is very complicated but nonetheless valuable. Additionally, solvated electrons show up in more than just this rather unusual metal-liquid-ammonia solution. For instance, a recent paper in Nature Chemistry by Siefermann et al. [1] showed that there are transient solvated electrons at the surface of water, which can have major implications for natural electron-transfer reactions. So, understanding these systems is truly quite important.
## The Blue Color:
In order to get at the deep blue color which is seen in this sodium-ammonia solution, we must understand what it is that is absorbing (reddish) visible light and hence leading to the transmission of a lot of blue light.
As it happens, at low concentrations of sodium (I'll address this in a bit), the solvated electrons exist in a bound state. This is not too dissimilar to ordinary electronic energy levels, except that there is not really a molecule to be found. Rather, the electron exists in a bound state which it creates for itself. That is, the electron polarizes the surrounding solvent such that it is contained in a bound state. Sometimes people refer to this electron as being a "polaron". One important difference between this and energy levels in a molecule is that electronic excitations generally take place from one bound state to another bound state. In the case of a solvated electron, however, the transitions are from a bound state to a continuum state.
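To get a feel for the physical scale of this self-trapped state, one can ask how big an infinite spherical well would have to be for its 1s → 1p gap to match the ammoniated electron's observed absorption. The sketch below assumes a transition energy of roughly 0.8 eV (quoted from memory for the near-IR band; treat it, and the infinite-well model itself, as illustrative assumptions):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
eV   = 1.602176634e-19   # J per eV

# Infinite spherical well: E_nl = hbar^2 * x_nl^2 / (2 m R^2),
# where x_nl is a zero of the spherical Bessel function j_l.
x_1s = math.pi       # first zero of j_0
x_1p = 4.493409458   # first zero of j_1

gap_eV = 0.8  # assumed 1s -> 1p absorption energy (illustrative)

# Solve the gap formula for the well radius R
R = math.sqrt(hbar**2 * (x_1p**2 - x_1s**2) / (2 * m_e * gap_eV * eV))
print(f"implied cavity radius: {R * 1e10:.1f} angstrom")
```

The answer comes out at several ångströms. The infinite well is crude (the real trap is a finite, solvent-polarization well, and estimates of the actual cavity are smaller), but it makes the point that the bound state is solvent-cavity sized, not atom sized.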
Obviously one question that would be nice to answer is why is it such a deep blue solution? Well, this sort of question is always hard to address because the color of things is more or less an accident. The fact that it is blue just tells you that the electron is relatively weakly bound because it is excited by reddish light.
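The "weakly bound" statement can be made quantitative with nothing more than E = hc/λ. In the sketch below, the ~1500 nm value used for the ammoniated electron's absorption maximum is an assumption quoted from memory; the visible-light wavelengths are standard:

```python
h  = 6.62607015e-34   # Planck constant, J*s
c  = 2.99792458e8     # speed of light, m/s
eV = 1.602176634e-19  # J per eV

def photon_eV(wavelength_m):
    """Photon energy in eV for a given vacuum wavelength."""
    return h * c / wavelength_m / eV

E_red  = photon_eV(700e-9)    # red edge of the visible spectrum
E_blue = photon_eV(450e-9)    # blue light
E_ir   = photon_eV(1500e-9)   # assumed absorption maximum in ammonia

print(f"700 nm: {E_red:.2f} eV, 450 nm: {E_blue:.2f} eV, 1500 nm: {E_ir:.2f} eV")
```

An absorption band peaking below 1 eV whose high-energy tail reaches ~1.7-2 eV eats red and orange light but lets blue through, which is exactly the deep blue transmission seen in the video.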
For more details on the bound-continuum transition, see Aulich's paper [2] which uses photo-electron emission to study solvated electrons as a function of concentration and photon energy. For a theoretical treatment of the bound states at dilute concentrations of metals, see Jortner's paper [3].
## The Transition to a Bronze Color:
One reason that people have been very interested in this system is because of this transition that takes place as the concentration of metal increases. Let me first try to give some intuition as to why this transition takes place at all. First, going off of our model that the electrons polarize the solvent and exist in a low-energy, bound state, we should expect that as more electrons are added to the solution (i.e. as more sodium is added), these cavities which the electrons occupy will become closer and closer to each other. This means that electron-electron interactions become quite important. This means two things: first, we have to begin worrying about satisfying Pauli exclusion, and second, we have to be concerned that at some point the electrostatic repulsions will become larger than the binding energy of this electron.
These two points have been studied by some physicists. Alavi and Frenkel's paper [4] develops a model, performs simulations on ideal fermions (I honestly don't know what makes a fermion ideal and don't understand this paper), and demonstrates that the low-concentration bound states are very unstable as the concentration of electrons increases. Thus, some transition must take place.
The best paper on the theory of this transition can be found here [5]. The reason this transition is so interesting is that it is a "metal to non-metal" transition. Remember when you were first introduced to metals and the characteristic description is that they are a "sea of electrons"? That is, the electrons are free to move around, and this leads to high conductivity and other observable properties, one of which is that they absorb and reflect nearly all wavelengths of light, which leads to the silvery color of most metals.
Well, the exact same phenomenon is being observed here, but it's very peculiar because this is all happening in a liquid. For instance, Schroeder et al. wrote a paper [6] which measures the conductivity of several metal-ammonia solutions as a function of concentration of the metal. I unfortunately couldn't add the relevant figure, but the conductivity basically increases linearly with concentration of the metal until it hits a plateau and does not increase any further.
This gradual increase explains what is going on when we see a separation into two phases. The explanation goes like this. There is only so much space which allows for the existence of the bound states we described above, because these bound states occupy a cavity of relatively large volume in the solvent. As more metal is added, more electrons are free in the solution, but the solution is already saturated with these bound electrons. Thus, the electrostatic and exclusion effects become such that any additional electrons added can only exist in a metallic state. This is peculiar because this metallic state is in the liquid phase and is actually fairly dense (I read this somewhere but can't remember the reference... Will update later). If one continues adding electrons, they always become incorporated into the metallic state because the bound states are saturated. Eventually, enough electrons are present that the destabilizing effects due to the presence of other electrons are large enough that no possible bound state can exist and the whole system becomes metallic. Hence the plateau in the conductivity plot I described above. At this point, I believe the sodium ceases to dissolve and just stays as plain old metal sodium.
I do not have a good explanation for why the metallic phase is specifically bronze colored as opposed to silvery. I suspect this is a very subtle feature and would certainly be temperature dependent. I haven't described the temperature dependence of all of this, but there are quite a few papers that address only this point and it seems to be quite complicated.
I also don't know why the metallic phase forms on the top. I suspect this may just be an artefact of the experiment in that video. That is, it is entirely feasible that solid would sink to the bottom if it didn't just freeze to the side of the test tube. I could be wrong about that though.
## References:
1. Siefermann, K. R., Liu, Y., Lugovoy, E., Link, O., Faubel, M., Buck, U., ... & Abel, B. (2010). Binding energies, lifetimes and implications of bulk and interface solvated electrons in water. Nature Chemistry, 2(4), 274-279.
2. Aulich, H., Baron, B., Delahay, P., & Lugo, R. (1973). Photoelectron emission by solvated electrons in liquid ammonia. The Journal of Chemical Physics, 58(10), 4439-4443.
3. Jortner, J. (1959). Energy levels of bound electrons in liquid ammonia. The Journal of Chemical Physics, 30(3), 839-846.
4. Alavi, A., & Frenkel, D. (1992). Grand-canonical simulations of solvated ideal fermions. Evidence for phase separation. The Journal of Chemical Physics, 97(12), 9249-9257.
5. Jortner, J., & Cohen, M. H. (1976). Metal-nonmetal transition in metal-ammonia solutions. Physical Review B, 13(4), 1548-1568.
6. Schroeder, R. L., Thompson, J. C., & Oertel, P. L. (1969). Conduction in Concentrated Solutions of Several Metals in Liquid Ammonia. Physical Review, 178(1), 298.
7. Thompson, J. C. (1968). Metal-nonmetal transition in metal-ammonia solutions. Reviews of Modern Physics, 40(4), 704.
• This is stackexchange at its finest - thank you so much for the substantial, thorough, and thoughtful answer! You've sent me to the library today which is always a very good thing. I'm marking this as accepted because you've addressed the optical properties with my main focus here. I will break out the question of the separation of layers as a linked but separate, stand-alone question in a few hours, since it is probably not strongly related. – uhoh Apr 10 '17 at 0:42
• First off, awesome +1 answer. I do still have one point of confusion regarding the phase of the bronze colored material. At one point you said "this is all happening in a liquid" and shortly thereafter "a solid is formed (I believe) and the electrons move more or less freely around". I'm probably mixing up just what you are referring to, but my question about this is simply, is this bronze colored sea of solvated electrons precipitating out as a solid or is it a liquid (as it appears to be), or is it something more complex and kind of in between the two? – airhuff Apr 10 '17 at 1:35
• Good point @airhuff . This is a detail which was unclear to me but I updated that sentence because indeed it is still a liquid from what I can tell. This would definitely be relevant to why it stays on top. – jheindel Apr 10 '17 at 6:23
• I'm not sure a full answer is 100% ironed out here, but I'm going to click accept because there is enough insight and linked material to help me on my way to understand what's happening. Once I'm finished reading I'll ask some additional follow-up questions. Thanks for the very helpful answer! – uhoh Apr 17 '17 at 1:16
• Just to let you know, I've just asked the follow-up question How to think of solvated electrons? – uhoh Jul 26 '17 at 8:54
https://www.shaalaa.com/question-bank-solutions/solve-following-pairs-equations-reducing-them-pair-linear-equations-equations-reducible-pair-linear-equations-two-variables_6648
# Solve the Following Pairs of Equations by Reducing Them to a Pair of Linear Equations - Mathematics
Solve the following pairs of equations by reducing them to a pair of linear equations
10/(x+y) + 2/(x-y) = 4
15/(x+y) - 5/(x-y) = -2
#### Solution
10/(x+y) + 2/(x-y) = 4
15/(x+y) - 5/(x-y) = -2
Putting 1/(x+y) = p and 1/(x-y) = q in the given equations, we get:
10p + 2q = 4
⇒ 10p + 2q - 4 = 0 ... (i)
15p - 5q = -2
⇒ 15p - 5q + 2 = 0 ... (ii)
Using cross multiplication, we get
p/(4-20) = q/(-60-20) = 1/(-50-30)
p/-16 = q/-80 = 1/-80
p/-16 = 1/-80 and q/-80 = 1/-80
p = 1/5 and q = 1
p = 1/(x+y) = 1/5 and q = 1/(x-y) = 1
x + y = 5 ... (iii)
and x - y = 1 ... (iv)
Adding equation (iii) and (iv), we get
2x = 6
x = 3 .... (v)
Putting value of x in equation (iii), we get
y = 2
Hence, x = 3 and y = 2
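As a quick sanity check (not part of the textbook solution), one can substitute x = 3 and y = 2 back into the original pair of equations:

```python
x, y = 3, 2

# The original pair of equations:
#   10/(x+y) + 2/(x-y) = 4
#   15/(x+y) - 5/(x-y) = -2
lhs1 = 10 / (x + y) + 2 / (x - y)
lhs2 = 15 / (x + y) - 5 / (x - y)

print(lhs1, lhs2)  # -> 4.0 -2.0
```

Both left-hand sides match the right-hand sides, confirming the solution.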
#### APPEARS IN
NCERT Class 10 Maths
Chapter 3 Pair of Linear Equations in Two Variables
Exercise 3.6 | Q 1.7 | Page 67
https://itectec.com/superuser/how-to-get-excel-to-interpret-the-comma-as-a-default-delimiter-in-csv-files/

# Excel – How to get Excel to interpret the comma as a default delimiter in CSV files
csvmicrosoft excel
I have a number of .csv files. Some of them are comma delimited, some are tab delimited (maybe they should be called .tsv …)
The csv extension gets associated with Excel when Excel is installed. However, if I open one of these files with excel, everything gets dumped into the same column and the comma is not interpreted as a delimiter.
Instead, I can use File -> Import…, select the file, and choose the precise way to interpret the contents (delimiters, data types, etc.). But most of the time I just want to look at the file through a clear table view. I do not want to process it with Excel.
Is there a way to get Excel to auto-interpret the delimiter and show the CSV file as a proper table as soon as it's opened? I need this so I can use Excel as a quick viewer for such files.
I suspect there must be a way, otherwise Excel wouldn't associate itself with CSV files.
While opening CSV files, Excel will use a system regional setting called List separator to determine which default delimiter to use.
Microsoft Excel will open .csv files, but depending on the system's regional settings, it may expect a semicolon as a separator instead of a comma, since in some languages the comma is used as the decimal separator. (from Wikipedia)
On Windows, you can change the List separator setting in the Regional and Language Options as specified on the Office support website :
Change the separator in a CSV text file
1. Click the Windows Start menu.
2. Click Control Panel.
3. Open the Regional and Language Options dialog box.
4. Click the Regional Options Tab.
5. Click Customize / Additional settings (Win10).
6. Type a new separator in the List separator box.
7. Click OK twice.
Note: this only works if the Decimal symbol is not also designated as comma (in line with the Wikipedia citation above). If it is, Excel will not use comma as the List separator, even if chosen. For many non-United States regions, comma is the default Decimal symbol.
On Mac OS X, this setting seems to be deduced from the decimal separator setting (in the Language & Region pane of System Preferences, go to Advanced). If the Decimal Separator is a point then the default CSV separator will be a comma, but if the Decimal Separator is a comma, then the default CSV separator will be a semicolon.
As you said yourself in the comment, there is an alternative for Mac users to quickly look at those CSV files. It's a plugin for Quick Look called quicklook-csv that handles separator detection.
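As an aside, if you only need to inspect such mixed files programmatically rather than configure Excel, Python's standard library can guess the delimiter. This is my own sketch, not something from the answer above:

```python
import csv

def sniff_delimiter(sample_text, candidates=",;\t"):
    """Guess the field delimiter of a delimited-text sample with csv.Sniffer."""
    dialect = csv.Sniffer().sniff(sample_text, delimiters=candidates)
    return dialect.delimiter

# Two small samples: one comma-delimited, one tab-delimited
comma_sample = "name,age,city\nAlice,30,Paris\nBob,25,Rome\n"
tab_sample = "name\tage\tcity\nAlice\t30\tParis\nBob\t25\tRome\n"

print(sniff_delimiter(comma_sample))        # -> ,
print(sniff_delimiter(tab_sample) == "\t")  # -> True
```

Restricting `delimiters` to a few plausible candidates makes the sniffer much more reliable on short samples.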
https://www.quantumcalculus.org/jones-calculus/

# Jones Calculus
A quaternion valued wave equation $\Psi_{tt} = D^2 \Psi$ can be solved as usual with a d'Alembert solution $\Psi(t) = \cos(D t) \Psi(0) + \sin(D t) D^{-1} \Psi'(0)$. We can write this more generally as $e^{\beta D t} (u(0) - \beta v(0))$, where $\beta$ is a unit space quaternion and $\Psi(0)=u(0) - \beta v(0)$ is the initial wave. Now, $\exp(\beta x) = \cos(x) + \beta \sin(x)$ holds for any space unit quaternion $\beta$. Unlike in the complex case, we now have an entire $2$-sphere of possible choices for $\beta$. If $u(0)$ and $v(0)$ are real, then we stay in the plane spanned by $1$ and $\beta$. If $u(0)$ and $v(0)$ lie in different planes, then the wave will evolve inside a larger part of the quaternion algebra.
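The identity $\exp(\beta x) = \cos(x) + \beta \sin(x)$ for a unit space quaternion $\beta$ is easy to verify numerically. A minimal sketch, with quaternions stored as tuples $(a,b,c,d) = a+bi+cj+dk$ and helper names of my own choosing:

```python
import math

def qmul(p, q):
    """Hamilton product of two quaternions stored as (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qexp_series(q, terms=40):
    """exp(q) summed from its power series sum_n q^n / n!."""
    result = (1.0, 0.0, 0.0, 0.0)
    power = (1.0, 0.0, 0.0, 0.0)
    fact = 1.0
    for n in range(1, terms):
        power = qmul(power, q)
        fact *= n
        result = tuple(r + p / fact for r, p in zip(result, power))
    return result

# An arbitrary unit "space" quaternion beta (zero real part, norm 1)
beta = (0.0, 1/math.sqrt(3), 1/math.sqrt(3), 1/math.sqrt(3))
x = 0.7

series = qexp_series(tuple(x*c for c in beta))
closed = tuple(math.cos(x)*e + math.sin(x)*c
               for e, c in zip((1, 0, 0, 0), beta))
assert all(abs(s - t) < 1e-12 for s, t in zip(series, closed))
```

The agreement reflects that $\beta^2 = -1$ for any unit space quaternion, so the power series collapses exactly as in the complex case.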
Also as before, the wave equation has not been put in artificially. It appears when letting the system move freely in its symmetry. In the limit of deformation we are given an anti-symmetric matrix $B= \beta (b+b^*)$ and get a unitary evolution $\exp(i B t)$. As we have used Pauli matrices to represent the quaternion algebra on $C^2$, a wave is now given as a pair $(\psi(t),\phi(t))$ of complex waves. Using pairs of complex vectors is nothing new in physics. It is the Jones calculus, named after Robert Clark Jones (1916-2004), who developed this picture in 1941. Jones was a Harvard graduate who obtained his PhD in 1941 and, after some postdoc time at Bell Labs, worked until 1982 at the Polaroid Corporation.
Why would a photography company employ a physicist dealing with quaternion valued waves? The Jones calculus deals with polarization of light. It applies if the electromagnetic waves F=(E,B) have a particular form where E and B are both in a plane and perpendicular to each other. Remember that light is described by a 2-form F=dA which has in 4 dimensions B(4,2)=6 components, three electric and three magnetic components. The Maxwell equations dF=0, d*F=0 are then, in a Lorentz gauge d^*A=0, equivalent to a wave equation L A=0, where L is the Laplacian in the Lorentz space. Now, if light has a polarized form, one can describe it with a complex two vector $\Psi=(u,v)$ rather than by giving the 6 components (E,B) of the electromagnetic field. How is this applied? Sun light arrives unpolarized, but when scattering at a surface, it catches an amount of polarization. Polarized sunglasses filter out part of this light, reducing the glare of reflected light. The effect is also used in LCD technology or for glasses worn in 3D movies. It can not only be used for light: in radio wave technology, polarization can be used to "double book" frequency channels. And for radar waves, using polarized radar waves can help to avoid seeing rain drops. Even nature has made use of it. Octopi or cuttlefish are able to see polarization patterns. See the encyclopedia entry for more. Mathematically, the relation with quaternions is no surprise because the linear fibre of a 1-form A(x) at a point is 4-dimensional. Describing the motion of the electromagnetic field potential A (which satisfies the wave equation) is therefore equivalent to a quaternion valued field.
We have to stress however that the connection between a quaternion valued quantum mechanics and wave motion of the electromagnetic field is mostly a mathematical one. First of all, we work in a discrete setup over an arbitrary finite simplicial complex. We don't even have to take the de Rham complex: any elliptic complex $D=d+d^*$ as described in a discrete Atiyah-Singer setup will do. The Maxwell equations don't even need to be about 1-forms. If $E \oplus F=\oplus E_k \oplus F_k$ is the arena of vector spaces on which $D: E \to F, F \to E$ acts, then one can see, for a given $j \in D_k$, the equations $dF=0, d^*F=j$ as the Maxwell equations in that space. For $F=dA$ and gauge $d^*A=0$, the Maxwell equations reduce to the Poisson equation $D^2 A=j$, which in the case of an absence of "current" $j$ gives the wave equation $D^2 A=0$, meaning that $A$ is a harmonic k-form. Now, in a classical de Rham setup on a simplicial complex G, A is just an anti-symmetric function on k-dimensional simplices of the complex. Still, in this setup, when describing light on a space of k-forms, it is given by real valued functions. If we Lax deform the elliptic complex, then the exterior derivatives become complex, but still, the harmonic forms do not change because the Laplacian does not change. Also note that we don't incorporate time into the simplicial complex (yet). Time evolution is given by an external real quantity leading to a differential equation. The wave equation $u_{tt}=Lu$ can be described as a Schrödinger equation $u_t = i Du$. We have seen that when placing three complex evolutions together we can get a quaternion valued evolution. But the waves in that evolution have little to do with the just described Maxwell equations in vacuum, which just describe harmonic functions in the elliptic complex.
We will deal with the problematic of time elsewhere. Just to state now that describing a space time with a finite simplicial complex does not seem to work. It might be beautiful and interesting to describe finite discrete space times, but one can hardly solve the Kepler problem with it. Mathematically close to the Einstein equations is to describe simplicial complexes with a fixed number of simplices which have maximal or minimal Euler characteristic among all complexes. Anyway, describing physics with waves evolving on finite geometries is appealing because the mathematics of its quantum mechanics is identical to the mathematics of the quantum mechanics in the continuum, just that everything is finite dimensional. Yes, there are certain parts of quantum mechanics which appear to need infinite dimensions, but if one is interested in the PDEs (the Schrödinger, respectively the wave, equation on such a space), there are many interesting problems already in finite dimensions. The question of how fast waves travel is also interesting in the nonlinear Lax set-up. See this HCRP project from 2016 of Annie Rak. In principle, the mathematics of PDEs on simplicial complexes (which are actually ordinary differential equations) has more resemblance with the real thing, because if one numerically computes any PDE using a finite element method, one essentially does this.
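To make the finite setup concrete, here is a small sketch of the d'Alembert solution on the simplest possible complex, a path graph with three vertices: the exterior derivative d is the 2x3 incidence matrix, the Dirac operator D = d + d* acts on 0-forms plus 1-forms, and the wave u(t) = cos(tD) u(0) is checked against the equation numerically. I use the sign convention u_tt = -D^2 u, since D^2 is positive semi-definite here; all helper names are my own:

```python
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][k]*v[k] for k in range(len(v))) for i in range(len(A))]

def cos_matrix(A, t, terms=25):
    """cos(tA) summed from its Taylor series sum_m (-1)^m (tA)^(2m)/(2m)!."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]        # (tA)^0 = identity
    tA = [[t*x for x in row] for row in A]
    fact = 1.0
    for k in range(1, 2*terms):
        power = matmul(power, tA)             # now (tA)^k
        fact *= k
        if k % 2 == 0:
            sign = -1.0 if (k // 2) % 2 else 1.0
            for i in range(n):
                for j in range(n):
                    result[i][j] += sign * power[i][j] / fact
    return result

# Path graph 1-2-3: exterior derivative d maps 0-forms to 1-forms
d = [[-1, 1, 0],
     [0, -1, 1]]
dT = [[d[j][i] for j in range(2)] for i in range(3)]
# Dirac operator D = d + d* as a block matrix on 0-forms plus 1-forms
D = [[0, 0, 0] + dT[i] for i in range(3)] + [d[i] + [0, 0] for i in range(2)]

u0 = [1.0, 0.0, 0.0, 0.0, 0.0]                # initial wave, zero velocity
t, h = 0.6, 1e-3
u  = matvec(cos_matrix(D, t), u0)
up = matvec(cos_matrix(D, t + h), u0)
um = matvec(cos_matrix(D, t - h), u0)

# Finite-difference second time derivative matches -D^2 u
utt = [(up[i] - 2*u[i] + um[i]) / h**2 for i in range(5)]
Lu = matvec(matmul(D, D), u)
assert max(abs(utt[i] + Lu[i]) for i in range(5)) < 1e-4
```

The same check works on any finite simplicial complex once d is replaced by the full incidence matrices; here D^2 is block diagonal, with the graph Laplacian on vertices and the edge Laplacian on edges.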
There are other places in physics where complex vector-valued fields appear. In quantum mechanics it appears from SU(2) symmetries, two level systems, isospin or weak isospin. Essentially everywhere where two quantities can be exchanged, the SU(2) symmetry appears. A quaternion valued field is also an example of a non-abelian gauge field. In that case, one is interested (without matter) in the Lagrangian $|F|^2/2$ with $F=dA+A \wedge A$, where $A$ is the connection 1-form. Summing the Lagrangian over space gives the functional. One is interested then in critical points. They satisfy $d_A^* F=0, d_A F=0$, meaning that they are "harmonic", similarly as in the abelian case, where harmonic functions are critical points of the quadratic Lagrangian. There are differences however. In the Yang-Mills case, one looks at SU(2), meaning that the fields are quaternions of length 1. When we look at the Lax (or asymptotically for large t, the Schrödinger) evolution of quaternion valued fields $\psi(t)$, then for each fixed simplex x, the field value $\psi(t,x)$ is a quaternion, not necessarily a unit quaternion.
[Remark. A naive idea put forward in the "particles and primes allegory" is to see a particle realized if it has an integer value. The particles and primes allegory draws a striking similarity between structures in the standard model and combinatorics of primes in associative complete division algebras. The latter is pure mathematics. As there are symmetry groups acting on the primes, it is natural to look at the equivalence classes. The symmetry groups in the division algebras are U(1) and SU(2), but there is also a natural SU(3) action due to the exchange of the space generators i,j,k in the quaternion algebra. This symmetry does not act linearly on the space, but it produces an other (naturally called strong) equivalence relation. The weak (SU(2)) and strong equivalence relations combined lead to pictures of Mesons and Baryons among the Hadrons, while the U(1) symmetry naturally leads to pictures of Electron-Positron pairs and Neutrini in the Lepton case. The nomenclature essentially pairs the particle structure seen in the standard model with the prime structure in the division algebras. As expected, the analogy does not go very far. The fundamental theorem of algebra for quaternions leads to some particle processes like pair creation and annihilation and recombination, but not all. It does not explain for example a transition from a Hadron to a Lepton. The set-up also leads naturally to charges with values 1/3 or 2/3, but not all. Also, while number theory has entered physics in many places, it is not clear why "integers" should appear at all in a quantum field theory. What was mentioned in the particles and primes allegory is the possibility to see particles realized at a simplex x only if the field value is an integer there.
As in a non-linear integrable Hamiltonian system like the Lax evolution, soliton solutions are likely to appear, and so if the wave takes some integer value p at some time t and position x, it will at a later time have that value p at a different position. The particle has traveled. But as during that time it has jumped from one vertex to another, it can have changed to a gauge equivalent particle. If the integer value is not prime, it decomposes as a product of primes. Taking a situation where space is a product of other spaces allows to model particle interactions. One can then ask why a particle like an electron, modeled by some non-real prime, is so stable, and why, if we model an electron-positron pair by a 4k+1 prime, the positions of the electron and positron are different. A Fock space analogy is to view space as an element in the strong ring, where every part is a particle. Still, the mathematics is the same: we have a geometric space G with a Dirac operator D. Time evolution is obtained by letting D go in its symmetry group.]
http://lists.gnu.org/archive/html/lilypond-user/2009-02/msg00294.html

lilypond-user
## Re: slashed six in chordmode
From: James E. Bailey
Subject: Re: slashed six in chordmode
Date: Mon, 9 Feb 2009 21:38:11 +0100
I've decided to forego the \slashed-digit, since it's the wrong font, and I just made up a slashed 6 for this purpose. I'm sure when someone who knows how to use Scheme needs this sort of thing, it can be better formatted, but as someone in need, this works!
```
\version "2.12.2"

myChordDefinitions = {
  <es g ais c>-\markup \super {
    \combine
      6
      \raise #0.3 \rotate #-75 \bold /
  }
}

myChordExceptions = #(append
  (sequential-music-to-chord-exceptions myChordDefinitions #t)
  ignatzekExceptions)

myChordInit = {
  \set chordNameExceptions = #myChordExceptions
}

music = \relative c' {
  c2 f
}

chord = \chords {
  \myChordInit
  c2 f:m6+
}

\score {
  <<
    \chord
    \new Staff \music
  >>
}
```
https://stats.stackexchange.com/questions/171139/hierarchical-multi-label-classification

# Hierarchical Multi-label Classification
I would like to build a classifier that, on the one hand, classifies individuals and, on the other hand, helps me understand the data better, meaning figuring out which features contribute the most.
I have two datasets, which are comparable; however, the labels for these datasets are somewhat different. Both datasets have samples with No (healthy) and Yes (cancer) labels, but one of the main factors that makes them different is the intermediate labels: one dataset has a single class in between called Medium, while the other dataset has two intermediate labels (Small Risk, High Risk).
Of course, Small Risk is just a risk: it can lead to cancer but can also stay healthy, and High Risk has a greater chance of becoming cancer but might stay just a risk. Last but not least, Medium in the first dataset is basically a combination of Small and High Risk!
One can arbitrarily group High Risk together with Cancer, and Small Risk with Healthy, or some other way, and exclude samples from the other dataset ...
My question is: can I construct a hierarchical model on the response variable and let the classifier share this information among the groups WITHOUT any additional grouping?
I assume this is an example where a Bayesian approach can give some real help!
• Would you comment more on the real, underlying disease process and how the risk categories were assessed in each of the two datasets? Is an element of time involved, and is it available in your data? – David C. Norris Sep 13 '15 at 21:22
• @DavidC.Norris I have updated the question; hope now is more clear – user4581 Sep 14 '15 at 20:38
• What seems weird about your data is that they combine what sound like 'hard outcomes' (yes, this person has cancer; no, this person does not have cancer) together with modeled risk levels. Do you have access to the model used to assign the risk categories? Are you trying to update that same model using some new 'hard outcomes' data acquired subsequently to the original model estimation? How would you allocate credibility to the hard outcomes vs the assigned risk categories? Finally, how would you know--even in theory--whether or not you have made a good classifier? – David C. Norris Sep 14 '15 at 21:15
• @DavidC.Norris good point; well basically, we have the size of tumor (not the real size something like less than 1 cm [small risk], larger than 1cm and less than 5cm![high risk]); in the other dataset less than 6cm e.g (I don't know the exact number); The size of a tumour, only says about the risk, meaning not all tumours are cancerous but if they get bigger it is more likely the become carcinoma; – user4581 Sep 14 '15 at 21:56
• In that case, perhaps you would benefit from seeing this as an interval censoring problem of the kind I discussed in this reply stats.stackexchange.com/a/123321/41404 to another question. You seem rightly to have intuited that Bayesianism can help here. I'd wager that the best advantage of a Bayesian treatment is that it will empower you to posit latent variables (e.g., tumor size) that facilitate thinking about your problem in more concrete, realistic terms. Maybe a 'cancerous potential' variable drives size and also a time-to-event process in which the 'event' is conversion? – David C. Norris Sep 14 '15 at 23:33
At this point, I think I can answer your final question in the affirmative. Yes, a hierarchical Bayesian model would be highly efficient at sharing whatever information exists in these heavily interval-censored data. (It is through the latent variables in these models that this sharing would be accomplished.) A Bayesian approach would be especially fruitful if your priors embody a substantial amount of additional information about the disease process, or even about the vagaries of the data collection process.
It's altogether possible that such a modeling exercise will demonstrate that, even with your best efforts to provide strongly informative priors, your data are too heavily censored to tell you much. That would be a very useful finding, however, as it would allow you to abandon other, less efficient modeling methods in favor of devoting resources to searching for new data of higher quality--or perhaps recovering some of the underlying detail lost in your current data by going back to the pathologists' original notes. I find it hard to believe a pathologist would ever set eyes on an excised tumor sample without producing (somewhere!) an exquisite description of it. (The same holds for radiologists, if your data come from MRI or CT imaging.)
• Have you seen a similar paper? Basically, where should I start? I am familiar with graphical regression / Bayesian regression ... I would appreciate a slightly more technical explanation of where I can practically start to model/code ... – user4581 Sep 17 '15 at 17:15
• Any introduction to Bayesian computing with JAGS or BUGS should suffice. You might try Kruschke's Doing Bayesian Data Analysis, 2nd ed for a start. – David C. Norris Sep 23 '15 at 19:16
First of all, you are converting the multi-class classification problem into a binary classification problem by labelling them as 0 and 1 (i.e. No risk and Sick). So one possible approach might be labelling them as 1, 2, 3, 4 (where 1 = No, 2 = SmallRisk, etc.); then you have 4 classes and you can tackle a 4-class classification problem.
Another approach can be regression. So labelling your 4 classes as [0, 0.33, 0.66, 1] then from the features, you estimate a number between 0 and 1. And you decide the class which is closest to one of those 4 numbers.
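That decode step can be sketched in a few lines (a minimal sketch; the class names and the targets 0, 1/3, 2/3, 1 are just illustrative stand-ins for the 0.33/0.66 values above):

```python
# Map the four ordered classes onto evenly spaced targets in [0, 1];
# a regression model predicts a number, and we decode it to the
# class whose target is nearest.
LABELS = {"No": 0.0, "SmallRisk": 1 / 3, "LargeRisk": 2 / 3, "Sick": 1.0}

def decode(prediction):
    """Return the class whose target value is closest to the prediction."""
    return min(LABELS, key=lambda name: abs(LABELS[name] - prediction))

print(decode(0.05))  # nearest target is 0.0, i.e. "No"
print(decode(0.70))  # nearest target is 2/3, i.e. "LargeRisk"
```

Note that this encoding bakes in the assumption that the classes are equally spaced on the risk scale, which may or may not be justified.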
I would like to do feature selection and find those parameters that return me the best result under this setup.
I suggest linear discriminant analysis for this. Simply give your labels as the first example (i.e. 1 to 4) and it will give you 4-1=3 features per sample, that have the minimum intra-class variance and the most inter-class variance. With those features, you can try to do 4-class classification.
• Thanks; I should have added this to my question; that data is made from two datasets; In dataset 1, labels are like labelling 1; and in dataset 2, labels are like labelling 2; Of course, I can do a modelling on each dataset separately, but it would be nice to do it under one model. – user4581 Sep 13 '15 at 14:41
• If changing the labeling is not an option, then you will need to continue with binary classification. I would try support vector machines for that. – jeff Sep 13 '15 at 21:15
• The whole idea of the question, is about how to make a hierarchical model on the labels in order to use both datasets in the same time and share information between some of the groups. – user4581 Sep 13 '15 at 21:18
• I don't know much about hierarchical models but I doubt it's relevant in the binary case. Anyway, one obvious way of using both datasets is using the common part (i.e. only the samples from classes 1, 3 and 4). I cannot think of any other solution, mainly because I think both data sets are reducing the information that you could have by labeling them into 4 classes. They answer different questions; the first one answers "can they be sick?" whereas the second one answers "do they have a large risk?" so I don't see why you want to combine them. – jeff Sep 14 '15 at 3:38
• The multi-class {1,2,3,4} idea is not appealing because there is a natural ordering to these classes, and treating them nominally is not ideal. – jlimahaverford Sep 16 '15 at 20:31 | 2019-12-12 05:16:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5255026817321777, "perplexity": 1001.0123023418582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540537212.96/warc/CC-MAIN-20191212051311-20191212075311-00294.warc.gz"} |
https://learn.careers360.com/ncert/question-describe-the-change-in-hybridisation-if-any-of-the-al-atom-in-the-following-reaction/ | Q
# Describe the change in hybridisation (if any) of the Al atom in the following reaction.
4.25 Describe the change in hybridisation (if any) of the $Al$ atom in the following reaction.
$AlCl_{3}+Cl^{-}\rightarrow AlCl_{4}^{-}$
In the ground state, the valence-shell configuration of the $Al$ atom is $3s^{2}3p^{1}$.
On excitation, one $3s$ electron is promoted to a $3p$ orbital, giving the configuration $3s^{1}3p_{x}^{1}3p_{y}^{1}$, with the $3p_{z}$ orbital left empty.
So, initially, the aluminium in $AlCl_{3}$ is $sp^2$ hybridised and hence the molecule has a trigonal planar shape.
It then reacts with a chloride ion to form $AlCl_{4}^{-}$: the empty $3p_{z}$ orbital gets involved and the hybridisation changes from $sp^2 \rightarrow sp^3$.
Hence there is a shape change from trigonal planar to tetrahedral.
Questions | 2020-01-24 16:21:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5840075016021729, "perplexity": 1634.6370574685748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250624328.55/warc/CC-MAIN-20200124161014-20200124190014-00280.warc.gz"} |
http://nrich.maths.org/7291/index | ### Ab Surd Ity
Find the value of $\sqrt{2+\sqrt{3}}-\sqrt{2-\sqrt{3}}$ and then of $\sqrt[3]{2+\sqrt{5}}+\sqrt[3]{2-\sqrt{5}}$.
### Em'power'ed
Find the smallest numbers $a$, $b$, and $c$ such that $a^2 = 2b^3 = 3c^5$. What can you say about other solutions to this problem?
### Route to Root
A sequence of numbers $x_1, x_2, x_3, \ldots$ starts with $x_1 = 2$, and, if you know any term $x_n$, you can find the next term $x_{n+1}$ using the formula $x_{n+1} = \frac{1}{2}\left(x_n + \frac{3}{x_n}\right)$. Calculate the first six terms of this sequence. What do you notice? Calculate a few more terms and find the squares of the terms. Can you prove that the special property you notice about this sequence will apply to all the later terms of the sequence? Write down a formula to give an approximation to the cube root of a number and test it for the cube root of 3 and the cube root of 8. How many terms of the sequence do you have to take before you get the cube root of 8 correct to as many decimal places as your calculator will give? What happens when you try this method for fourth roots or fifth roots etc.?
# Weekly Challenge 16: Archimedes Numerical Roots
##### Stage: 5 Challenge Level:
Archimedes estimated the value of $\pi$ by finding the perimeters of regular polygons inscribed in a circle and circumscribed around the circle. He managed to establish that $3\frac{10}{71} < \pi < 3\frac{1}{7}$.
Before he could find the perimeters of polygons he needed to be able to calculate square roots. How did he calculate square roots? He didn't have a calculator but needed to work to an appropriate degree of accuracy. To do this he used what we now call numerical roots.
How might he have calculated $\sqrt{3}$?
This must be somewhere between $1$ and $2$. How do I know this? Take $2$ as a first approximation; then $\frac{3}{2}$ is also an approximation, lying on the other side of $\sqrt 3$. Now calculate the average of $\frac{3}{2}$ and $2$ (which is 1.75) - this is a second approximation to $\sqrt 3$. i.e. we are saying that a better approximation to $\sqrt 3$ is $$x_{n+1} = \frac{1}{2}\left(\frac{3}{x_n} + x_n\right)$$ where $x_n$ is an approximation to $\sqrt 3$ .
We then repeat the process to find the new (third) approximation to $\sqrt{3}$ $$\sqrt{3} \approx {(3 / 1.75 + 1.75) \over {2}} = 1.73214...$$ to find a fourth approximation repeat this process using 1.73214 and so on...
How many approximations do I have to make before I can find $\sqrt{3}$ correct to five decimal places?
Why do you think it works?
Will it always work no matter what I take as my first approximation and does the same apply to finding other roots?
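A quick way to experiment with these questions (a sketch; the starting value and the number of decimal places are arbitrary choices):

```python
def approx_root(a, x=2.0, places=5):
    """Iterate x -> (a/x + x)/2 until two successive approximations
    agree to `places` decimal places; returns (root, iterations)."""
    steps = 0
    while True:
        nxt = (a / x + x) / 2
        steps += 1
        if round(nxt, places) == round(x, places):
            return nxt, steps
        x = nxt

root, steps = approx_root(3)
print(root, steps)  # about 1.73205, reached in only a few iterations
```

Convergence is quadratic (the number of correct digits roughly doubles each step), which is why so few iterations are needed even for five decimal places.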
Did you know ... ?
BBC News on 6 January 2010 reported that a computer scientist Fabrice Bellard claimed to have computed the mathematical constant pi to nearly 2.7 trillion digits, some 123 billion more than the previous record. He used a desktop computer to perform the calculation, taking a total of 131 days to complete and check the result. This version of pi takes over a terabyte of hard disk space to store.
Previous records were established using supercomputers, but Mr Bellard claims his method is 20 times more efficient. The prior record of about 2.6 trillion digits, set in August 2009 by Daisuke Takahashi at the University of Tsukuba in Japan, took just 29 hours. | 2014-09-02 17:28:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6829187870025635, "perplexity": 358.8784905918671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535922089.6/warc/CC-MAIN-20140901014522-00195-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3994126/prove-that-left-int-01xf-leftx-right-rm-dx-right-leq-frac112-ma/3994145 | # Prove that $\left|\int_{0}^{1}xf\left(x\right){\rm d}x\right|\leq\frac{1}{12}\max_{x\in\left [0,1\right]}\left|{f}'\left(x\right)\right|$
Given a continuously differentiable function $$f\left ( x \right )$$ on the interval $$\left [ 0, 1 \right ]$$ such that $$\int_{0}^{1}f\left ( x \right ){\rm d}x= 0,$$ prove that $$\left | \int_{0}^{1}xf\left ( x \right ){\rm d}x \right |\leq \frac{1}{12}\max_{x\in \left [ 0, 1 \right ]}\left | {f}'\left ( x \right ) \right |$$
I think I should transform the constant $$1/12$$ into an integral like $$k\int_{0}^{1}x^{2}{\rm d}x,$$ but $$k$$ is very unusual. I need your help, even just an example of $$f\left ( x \right )$$ with $$\int_{0}^{1}f\left ( x \right ){\rm d}x= 0$$ so that I know what to do with the constant $$1/12.$$ Thanks a lot.
• Fun fact: Till today, I still don't find such a example that used the following inequality with strictly decreasing $f\left ( x \right )$ for all $x\in \left [ 0, 1 \right ]$ by Cauchy-Schwarz: $$\int_{0}^{1}f\left ( x \right ){\rm d}x+ \left ( \int_{0}^{1}xf\left ( x \right ) \right )^{2}+ 1> 0$$ – haidangel Jan 21 at 13:24
$$\int_0^1 xf(x)\,dx=\int_0^1\left(x-\frac12\right)f(x)\,dx=\underbrace{-\frac12 x(1-x)f(x)\Bigg|_0^1}_{=0}+\frac12\int_0^1 x(1-x)f'(x)\,dx.$$ Hence, if $$\displaystyle M=\sup_{x\in(0,1)}|f'(x)|$$, then $$\displaystyle\left|\int_0^1 xf(x)\,dx\right|\leqslant\frac{M}{2}\int_0^1 x(1-x)\,dx=\frac{M}{12}$$.
Since $$\int_0^1f(x)\mathrm{d}x=0$$, we may add any constant multiple of $$f$$ inside the integral; by integration by parts, \begin{align*} \int_0^1xf(x)\mathrm{d}x&=\int_0^1(x+C)f(x)\mathrm{d}x\\ &=\Big(\frac{1}{2}x^2+Cx\Big)f(x)\bigg|_0^1-\int_0^1\Big(\frac{1}{2}x^2+Cx\Big)f'(x)\mathrm{d}x\\ &=\Big(\frac{1}{2}+C\Big)f(1)-\int_0^1\Big(\frac{1}{2}x^2+Cx\Big)f'(x)\mathrm{d}x \end{align*} Since $$f(1)$$ is unknown, we let $$C=-\frac{1}{2}$$ and get $$\int_0^1xf(x)\mathrm{d}x=-\int_0^1\Big(\frac{1}{2}x^2-\frac{1}{2}x\Big)f'(x)\mathrm{d}x$$ Therefore, by the mean value theorem for integrals (applicable because $$\frac{1}{2}x^2-\frac{1}{2}x$$ does not change sign on $$[0,1]$$), there is some $$\xi\in(0,1)$$ with \begin{align*} \left|\int_0^1xf(x)\mathrm{d}x\right| &=\left|-\int_0^1\Big(\frac{1}{2}x^2-\frac{1}{2}x\Big)f'(x)\mathrm{d}x\right|\\ &=\left|-f'(\xi)\int_0^1\Big(\frac{1}{2}x^2-\frac{1}{2}x\Big)\mathrm{d}x\right|\\ &\leqslant\frac{1}{12}\max_{x\in[0,1]}|f'(x)| \end{align*}
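As a quick sanity check (not part of the proof), $f(x) = x - \tfrac{1}{2}$ satisfies $\int_0^1 f = 0$ and makes the inequality an equality: $\int_0^1 x f(x)\,dx = \tfrac{1}{3} - \tfrac{1}{4} = \tfrac{1}{12}$ while $\max|f'| = 1$. A small numerical sketch:

```python
# Verify the bound numerically for f(x) = x - 1/2, where it is tight.
def trapezoid(g, a=0.0, b=1.0, n=100_000):
    """Composite trapezoid rule for the integral of g over [a, b]."""
    h = (b - a) / n
    total = (g(a) + g(b)) / 2 + sum(g(a + i * h) for i in range(1, n))
    return total * h

f = lambda x: x - 0.5
max_fprime = 1.0                         # f'(x) = 1 everywhere
assert abs(trapezoid(f)) < 1e-9          # the hypothesis: integral of f is 0
lhs = abs(trapezoid(lambda x: x * f(x)))
print(lhs)  # approximately 1/12 = max|f'| / 12, so the bound is attained
```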
lingdanda is a new contributor to this site. Take care in asking for clarification, commenting, and answering. Check out our Code of Conduct. | 2021-03-02 02:19:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9763790369033813, "perplexity": 548.1323297865594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363211.17/warc/CC-MAIN-20210302003534-20210302033534-00036.warc.gz"} |
https://aryamccarthy.github.io/hamilton2016diachronic/ | # Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change
Today’s paper looks at how meanings evolve. The authors claim to have uncovered two laws of semantic change. The first is the “law of conformity”: the rate of semantic change scales inversely with frequency. The second is the “law of innovation”: meanings change faster in words with more senses.
For background, we know that some phenomena occur more in high-frequency words than in others, and we know that more frequent words have more senses. But we don't know how either of these relates to semantic drift.
To analyze this, the authors construct embeddings (fixed-length vectors representing each word) by three different methods, then align those of the same type from decade to decade.
Their methods are:
• PPMI, or positive point-wise mutual information. They smooth it and rectify the PMI, such that only positive correlations are emphasized. (This makes sense, because your corpus might just not be big enough to show the two words together.) It also assumes a pre-built list of context words. Each row in the matrix M is the PPMI between the given word and the context word.
• SVD. SVD is the math behind principal component analysis. They apply this as a dimensionality reduction of the PPMI matrix: M = USV*. (They keep the scores on each dimension US, rather than the directions V.) Because they only keep a fraction of the dimensions, it acts as a regularizer.
• SGNS (i.e. word2vec). Word2vec is its own conceptual disaster, but very popular and quite successful. It's useful in this case because you can initialize the embeddings for a time period t with the embeddings from the previous time period t - 1, helping you better model the transitions. (They borrow this idea from Kim et al. (2014), who first used word2vec (a "neural model") to study diachronic meaning change.)
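A minimal version of the smoothed, rectified PPMI computation might look like this (the context-distribution smoothing exponent 0.75 is a common choice in this literature, not necessarily the paper's exact recipe):

```python
import numpy as np

def ppmi(counts, alpha=0.75):
    """Positive PMI from a (word x context) co-occurrence count matrix.

    Context probabilities are smoothed by raising context counts to
    `alpha`, and negative PMI values are rectified (clipped) to zero.
    """
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_word = counts.sum(axis=1, keepdims=True) / total
    ctx = counts.sum(axis=0) ** alpha
    p_ctx = ctx / ctx.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts / total) / (p_word * p_ctx))
    pmi[~np.isfinite(pmi)] = 0.0   # zero counts gave log(0) = -inf
    return np.maximum(pmi, 0.0)    # the "positive" in PPMI

M = ppmi([[10, 0], [2, 5]])
print(M)  # only positively associated (word, context) cells stay nonzero
```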
The SVD and SGNS methods both produce a vector space representing all words as vectors. But these vector spaces can be oriented arbitrarily: If you perform one of these methods for two consecutive decades, you will get different vector spaces even though most words’ meanings are similar. If you want to compare the embeddings across decades, then, you have to rotate one vector space so that they can be compared properly. (PPMI is inherently aligned because you use the same context words in all years to define the columns.) The technique to align the vector spaces is called Procrustes (jokingly named after this maniac). It finds the orthogonal rotation Q of W_t that minimizes its difference from W_{t+1}.
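The alignment step can be written directly with an SVD: the orthogonal matrix minimizing $\|W_t Q - W_{t+1}\|_F$ is $Q = UV^\top$, where $U\Sigma V^\top$ is the SVD of $W_t^\top W_{t+1}$. A sketch (real pipelines typically restrict the fit to common, frequent words; the random data here is just to show the recovery):

```python
import numpy as np

def procrustes_align(W_old, W_new):
    """Rotate W_old onto W_new: find the orthogonal Q minimizing
    ||W_old @ Q - W_new||_F via the SVD of W_old.T @ W_new."""
    U, _, Vt = np.linalg.svd(W_old.T @ W_new)
    return W_old @ (U @ Vt)

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 8))                  # 50 "words", 8 dimensions
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # a random orthogonal map
aligned = procrustes_align(W @ Q, W)          # disorient, then re-align
print(np.allclose(aligned, W))  # the original orientation is recovered
```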
I admire how the authors gamed the Spearman correlation coefficient to detect pairwise meaning shifts. If the similarity values s1, s2, …, sn correlate (positively or negatively) with the time values t1, t2, …, tn, then there’s been a meaning shift.
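Computed on ranks, Spearman correlation is just Pearson correlation, so the drift test needs only a few lines (the similarity series below are made up for illustration, and ties are not handled):

```python
import numpy as np

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks.
    (No tie correction; fine for distinct values.)"""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(rank(x), rank(y))[0, 1]

decades = np.arange(1900, 2000, 10)
# Self-similarity of a word to its previous-decade vector:
drifting = np.linspace(0.9, 0.3, 10)   # steadily falling similarity
stable = np.array([0.90, 0.87, 0.92, 0.89, 0.94,
                   0.86, 0.93, 0.88, 0.91, 0.85])  # no trend, just noise

print(spearman(decades, drifting))  # close to -1: a clear monotone shift
print(spearman(decades, stable))    # small in magnitude: no real drift
```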
The authors worked on both the Google N-Grams corpus and the COHA corpus, which is supposed to be extremely well-balanced and representative of American English. They evaluate diachronic embeddings in two ways:
1. Detection: when mining shifting words from a corpus, what recall are they able to achieve on a list of words with known directional shifts?
2. Discovery: self-judging the top 10 words each method shows as having shifted, according to that Spearman sorcery. (That fact makes me uncomfortable. Also, reporting 70% is not cool when it’s 7 out of 10.) SGNS wins, then SVD, then PPMI.
Also, good to know: for (synchronic) word similarity, SVD performs best. For word analogy, SGNS performs best.
These results suggest that both these methods are reasonable choices for studies of semantic change but that they each have their own tradeoffs: SVD is more sensitive, as it performs well on detection tasks even when using a small dataset, but this sensitivity also results in false discoveries due to corpus artifacts. In contrast, SGNS is robust to corpus artifacts in the discovery task, but it is not sensitive enough to perform well on the detection task with a small dataset. Qualitatively, we found SGNS to be most useful for discovering new shifts and visualizing changes (e.g., Figure 1), while SVD was most effective for detecting subtle shifts in usage.
Now we get to the unveiling of their “laws”. They examine these by looking at the rate of change of the word’s meaning; that is, what is its similarity to the same word in the previous decade? From here on, they stick to word2vec (SGNS) vectors. They ask how a word’s polysemy and frequency predict its future position. (One thing word2vec is bad at is polysemous words. You get a triangle inequality problem: the word pulls each of its senses closer together, even though they’re unrelated.)
With SVD embeddings the effect of frequency is confounded by the fact that high frequency words have less finite-sample variance in their co-occurrence estimates, which makes the word vectors of high frequency words appear more stable between corpora, regardless of any real semantic change. The SGNS embeddings do not suffer from this issue because they are initialized with the embeddings of the previous decade.
With this in place, they create a linear model to compute an adjusted score, based on the frequency and polysemy of each word along with effects from the decade itself, including random intercepts for each word and random noise at each time point. Interpreting this is dicey—they claim to find power laws out of an equation that only allows for power laws. Without credible intervals on the weights, this could have been anything.
More interesting is how they look at polysemy. They compute it as the negative of the local clustering coefficient (hooray network science!), which is essentially the triangle density of a node: how many triangles is it a part of, out of the total number that could exist given its incident edge set? So more clustering means fewer senses. Evidently this has been shown to correlate with WordNet senses. Unfortunately, it also awards points to function words.
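The local clustering coefficient they negate is easy to compute from adjacency sets (a pure-Python sketch; real co-occurrence networks are weighted and thresholded, which this ignores):

```python
def local_clustering(adj, node):
    """Fraction of pairs of `node`'s neighbors that are themselves
    connected, i.e. triangles through `node` over possible triangles."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    triangles = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return triangles / (k * (k - 1) / 2)

# Contexts that all co-occur with each other suggest one tight sense...
tight = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
# ...while a hub whose contexts never co-occur suggests several senses.
hub = {"x": {"p", "q", "r"}, "p": {"x"}, "q": {"x"}, "r": {"x"}}

print(local_clustering(tight, "a"))  # 1.0 -> low polysemy score
print(local_clustering(hub, "x"))    # 0.0 -> high polysemy score
```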
At the end of their pipeline, some hand-waving gives you the laws of conformity and innovation, which I gave away in the introduction.
These empirical statistical laws also lend themselves to various causal mechanisms. The law of conformity might be a consequence of learning: perhaps people are more likely to use rare words mistakenly in novel ways, a mechanism formalizable by Bayesian models of word learning and corresponding to the biological notion of genetic drift (Reali and Griffiths, 2010). Or perhaps a sociocultural conformity bias makes people less likely to accept novel innovations of common words, a mechanism analogous to the biological process of purifying selection (Boyd and Richerson, 1988; Pagel et al., 2007). Moreover, such mechanisms may also be partially responsible for the law of innovation. Highly polysemous words tend to have more rare senses (Kilgarriff, 2004), and rare senses may be unstable by the law of conformity.
Also interesting to me about this paper is their visualization technique. It builds on the t-SNE nonlinear embedding method to show how a word’s meaning has changed:
1. Find the union of the word's $k$ nearest neighbors over all necessary time points.
2. Compute the t-SNE embedding of the words on the modern time-point.
3. For the previous time points, hold embeddings fixed, except the target word's. Optimize a new embedding for the target word.
The authors note that the procedure shows all words in their modern positions. I accept that it’s more useful to us who know these words’ modern senses, but only if a large number of reference words haven’t changed in meaning. Also, the last step is pretty unclear.
The useful tricks:
• Spearman correlation of a metric with time to show whether there’s a drift
• Procrustes projections for aligning two vector spaces
• SVD for capturing analogical relationships
• Counting the number of triangles a node takes part in using the local clustering coefficient.
EDIT 2018-02-22: There’s a post from The Morning Paper, out today, that talks about a response to this work. Instead of aligning word embeddings using Procrustes, they jointly optimize both the embeddings and the alignment. They include a loss term that computes the difference between the embeddings for consecutive time periods, without rotation, when learning the optimal embeddings. This lets them perform diachronic analogy tasks: the nearest neighbor to the 2016 value of obama is always whoever is the President that year.
Written on February 19, 2018 | 2019-07-18 09:12:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7106931805610657, "perplexity": 1815.364468124073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525587.2/warc/CC-MAIN-20190718083839-20190718105839-00138.warc.gz"} |
http://www.demolitionnyc.com/zsf4g/0r1n2p.php?id=4394e0-mumford-procedure-success-rate | Chemical engineering design - GAVIN TOWLER, RAY SINNOTT.pdf Trying to conceive after an early pregnancy loss: an assessment on how long couples should wait. During this procedure, we will disclose some unexpected representation properties of ReLU DNNs and show some exponential approximation results for both smooth and non-smooth functions. 431 Likes, 4 Comments - George Mason University (@georgemasonu) on Instagram: “"As a freshman at Mason, I had difficulties being on my own for the first time. 2016;127(2):204–212. Research in this area holds great promise for the development of intriguing theory and impactful policy implications, but only if empirical studies are conducted rigorously. It is possible for the Mytek to play 128Fs using an ASIO device driver, but ASIO is the driver used on Windows machines. Visitez le site pour consulter notre gamme de Véhicules Utilitaires! DAYTON – Urbana and Champaign County will celebrate January National and Ohio Blood Donor Awareness Month by hosting community blood drives Jan. 18 and Jan. 20. Download free books in PDF format. Indeed even if Bacon had produced nothing else but his masterful Essayes (first published in 1597 and then revised and expanded in 1612 and 1625), he would still rate among the top echelon of 17th-century English authors. 27 Epidural resite rate in the women with class III obesity (greater than 136 kg in weight) was 17% in a cohort study compared with 3% in the control group (less than 113 kg in weight). Read online books for free new release and bestseller The conclusion is that I have achieved 80% success in relieving the pain. My father, who was diagnosed with Parkinsons Disease approximately fifteen years ago, yet had improved after a new Deep Brain Stimulation (DBS) procedure in 2009, has also experienced increased problems with balance and speech. 
Undergoing an Arthroscopic Procedure Prior to Shoulder Arthroplasty is Associated with Greater Risk of Prosthetic Joint Infection Azeem Tariq Malik, Jesse Morris, Julie Y. Bishop, Andrew S. Neviaser, Safdar N. Khan, Gregory L. Cvetanovich Then, we will present a constrained linear model that provides a different explanation for the feature extraction steps in ResNet type models. Malaria is a threat to life, health and wellbeing. 13 talking about this. It is a parasitic disease transmitted exclusively by mosquito species from the Anopheles genus. During my senior…” @alwaysclau: “It’s quite an experience hearing the sound of your voice carrying out to a over 100 first year…” Angie's List selects their best experts in Home, Health, and Auto to answer your questions. Obstet Gynecol . I agreed to 6 months treatment which lasted from February till July 2010. In the past year my mother has lost much of her ability to speak clearly and walk without assistance. In 2008, two West Coast orthopedic surgeons offered to rebuild my spine but my friend Jean Lusk told me about her comparable medical situation resolved from PRP prolotherapy by Dr. Peterson. In addition, our framework utilizes a simple and efficient procedure for updating differentiation matrices on moving domains tiled by … The Urbana Church of Christ in Christian Union blood drive is Monday, Jan. 18 from 3 p.m. to 7 p.m. at 1121 North Main St., Urbana. Leadership is a key predictor of employee, team, and organizational creativity and innovation. Schliep KC, Mitchell EM, Mumford SL, et al. DSD DACs such as Playback Designs will play 128Fs DSD with Pure Music, because they support a 352.8 kHz sample rate. Of this number, 75% of the women were obese. New content on important topics shared daily. Découvrez les véhicules utilitaires Piaggio: Porter, Ape, Quargo et Electric Power. 
To play 128Fs using an ASIO device driver, but ASIO is the driver used on Windows machines achieved! Your questions the conclusion is that i have achieved 80 % success in relieving the pain the Mytek play... The feature extraction steps in ResNet type models to answer your questions Pure Music, because support... 80 % success in relieving the pain year my mother has lost much her!
| 2022-05-27 04:02:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26810598373413086, "perplexity": 8885.271351618443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662631064.64/warc/CC-MAIN-20220527015812-20220527045812-00480.warc.gz"}
http://math.stackexchange.com/questions/234779/if-a-function-has-a-finite-limit-at-infinity-are-there-any-additional-condition | # If a function has a finite limit at infinity, are there any additional conditions that could imply that its derivative converges to zero?
Let f be a function that has a finite limit at infinity. It is true that this alone is not enough to show that its derivative converges to zero at infinity. So I was wondering whether there were any additional conditions for f that could give the desired outcome. I am also aware of Barbalat's Lemma, but this requires uniform continuity, a property which in many occasions is not easy to verify. Thank you
-
If its derivative's limit exists, then it should be zero. – Eastsun Nov 11 '12 at 10:40
What Eastsun says sounds right. For an example of a function with limit 0, but the derivative lacks a limit, see $\sin(x^2)/x.$ – Per Alexandersson Nov 11 '12 at 10:41
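For concreteness (a worked line added here, not from the original thread), differentiating Per Alexandersson's example gives

```latex
f(x)=\frac{\sin(x^2)}{x}
\qquad\Longrightarrow\qquad
f'(x)=2\cos(x^2)-\frac{\sin(x^2)}{x^2},
```

so $f(x) \to 0$ as $x \to \infty$, while $f'(x)$ keeps oscillating between values near $-2$ and $2$ and has no limit.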
Without loss of generality, we have $f(x) \to 0$ as $x \to \infty$.
1. If $\lim_{x \to \infty} f'$ exists, then it's clear that $f' \to 0$.
2. If $f''$ is bounded, then $f' \to 0$. Morally, this is because $f$ must oscillate more and more tightly as $x \to \infty$ in order for $f' \not \to 0$, and $f''$ being bounded prevents that oscillation.
In addition to the points covered in mixedmath's answer we can add the hypothesis that $f'$ is monotone. Monotonicity implies that $f'(x)$ either tends to $\infty$ or to $-\infty$ or to a finite limit $L$ as $x \to \infty$. Since $f(x + 1) - f(x) = f'(c) \to 0$ for some $c$ with $x < c < x + 1$ (by the mean value theorem), $f'$ cannot tend to $\pm\infty$. Hence $f'(x) \to 0$ as $x \to \infty$.
The monotonicity of $f'$ can be guaranteed by assuming that $f''$ is of constant sign for all $x$ after a certain value. | 2016-04-29 10:34:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9770601987838745, "perplexity": 99.44873972569287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111313.83/warc/CC-MAIN-20160428161511-00026-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://codeczz.com/148197090.html | # abc227 Question
I was out of form during the contest.
#### A
Observe the periodicity.

#### B

Enumeration.
#### C
Notice that $$A\le \sqrt[3] N$$ and $$B \le \sqrt N$$, so direct enumeration over both runs in time. I can't certify the complexity, but it seems to follow from a harmonic-sum bound.

During the contest I overcomplicated it and enumerated $$B$$ with integer division. The asymptotic complexity is the same, but the constant factor got it TLE'd.
#### D
A natural idea is to simulate the deletion process: always take from the $$k$$ currently largest. That greedy choice is fine, but the time complexity blows up.

There is another common trick: instead of simulating the choices, count how much each element contributes to the answer. If $$t$$ groups are formed in total, then $$a_i$$ contributes at most $$\min(t,a_i)$$, so we binary search on $$t$$.
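A minimal sketch of this binary search (the helper name is made up here; the feasibility test $$\sum_i \min(t,a_i)\ge kt$$ is assumed from the contribution argument above):

```python
def max_groups(k, a):
    # t groups are feasible iff the total usable contribution covers them,
    # i.e. sum(min(t, a_i)) >= k * t; binary search the largest such t.
    lo, hi = 0, sum(a) // k
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if sum(min(mid, x) for x in a) >= k * mid:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(max_groups(2, [2, 3, 4]))  # -> 4
```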
#### E
Don’t throw it easily
Obviously directly simulate this switching process is not. Consider a sequence will only exchange$$n$$
That can be naturally introduced to this DP status: DP [Teachment 1 Number] [There are several K] [there are several E] [There are several Y] [exchanged G times] can construct the last Number of rows of numbers. There is actually a very common trick —- direct constructive answer sequence. at mePrevious blog
Then think about it carefully and find that you can forward (ie, interchangeably). Then just push it.
(I won’t be played like this.
#### F
The first feeling is DP, but the "$$k$$-th largest" requirement (which is also what the problem asks for) is hard to maintain in a DP state.

trick: set a threshold $$p$$ for the $$k$$-th largest value; while walking the path, a cell contributes iff its value is at least $$p$$. But how do we guarantee that exactly $$k$$ cells are counted?

Objection: just require $$k$$ cells $$\ge p$$? No: if the path contains fewer than $$k$$ values reaching $$p$$, or if several cells are exactly equal to $$p$$, the count (and hence the sum) can be off.

Solution: treat cells whose value equals the threshold $$p$$ specially: each such cell may be counted or not, as we choose.

So we run one DP per candidate threshold (every cell value is tried); total time complexity $$\mathcal {O}(H^2W^2k)$$.
#include <cstdio>
#include <algorithm>
#include <cmath>
#include <cstring>
#define LL long long
#define uint unsigned int
using namespace std;
const int MAXN = 35;
const LL lof = 4e18;
int n, m, k, a[MAXN][MAXN];
LL ans = lof, dp[MAXN][MAXN][MAXN << 1];
// Best path sum when a[x][y] is taken as the threshold (the k-th largest on the path).
LL Calc(int x, int y) {
for(int i = 1; i <= max(n, m); i ++) for(int j = 0; j <= k; j ++) dp[i][0][j] = dp[0][i][j] = lof;
dp[0][1][0] = 0;
for(int i = 1; i <= n; i ++) {
for(int j = 1; j <= m; j ++) {
if(a[i][j] == a[x][y]) {
for(int u = 0; u <= k; u ++) {
dp[i][j][u] = lof;
if(u) dp[i][j][u] = min(dp[i - 1][j][u - 1], dp[i][j - 1][u - 1]) + a[i][j];
dp[i][j][u] = min(dp[i][j][u], min(dp[i - 1][j][u], dp[i][j - 1][u]));
if(dp[i][j][u] > lof) dp[i][j][u] = lof;
}
}
else if(a[i][j] > a[x][y]) {
dp[i][j][0] = lof;
for(int u = 1; u <= k; u ++) {
dp[i][j][u] = min(dp[i - 1][j][u - 1], dp[i][j - 1][u - 1]) + (a[i][j] >= a[x][y] ? a[i][j] : 0);
if(dp[i][j][u] > lof) dp[i][j][u] = lof;
}
}
else {
for(int u = 0; u <= k; u ++) {
dp[i][j][u] = min(dp[i - 1][j][u], dp[i][j - 1][u]) + (a[i][j] >= a[x][y] ? a[i][j] : 0);
if(dp[i][j][u] > lof) dp[i][j][u] = lof;
}
}
// for(int u = 0; u <= k; u ++) printf("%d %d %d %lld\n", i, j, u, dp[i][j][u]);
}
}
// printf("|%d %d %d %lld|\n", x, y, k, dp[n][m][k]);
return dp[n][m][k];
}
int main() {
scanf("%d%d%d", &n, &m, &k);
for(int i = 1; i <= n; i ++) for(int j = 1; j <= m; j ++) scanf("%d", &a[i][j]);
for(int i = 1; i <= n; i ++) for(int j = 1; j <= m; j ++) ans = min(ans, Calc(i, j));
printf("%lld", ans);
return 0;
}
#### G
Definition: $$a\sim b=a\times(a+1)\times (a+2)\times\dots\times b$$.

$$k$$ is small. So we can write the quantity as $$\frac {(n-k+1)\sim n}{1\sim k}$$.

By the unique factorization theorem, the number of divisors is the product of (prime exponent $$+\,1$$), so we naturally do a case split on the size of each prime (a square-root-style split at $$\max(k,\sqrt n)$$).

1. $$p>\max(k,\sqrt n)$$ ($$p$$ a large prime). Such a $$p$$ divides each numerator factor at most once and never divides the denominator, so its exponent is $$0$$ or $$1$$. We only need to know which numerator factors are still $$>1$$ after all primes $$\le \max(k,\sqrt n)$$ have been divided out.

2. $$p\le\max(k,\sqrt n)$$ ($$p$$ a small prime). There are only about $$N/\ln N$$ such primes (with $$N=\max(k,\sqrt n)$$). For each one, the exponent it contributes over an interval $$[l,r]$$ is $$\sum_{i\ge 1}\left(\lfloor r/p^i\rfloor-\lfloor (l-1)/p^i\rfloor\right)$$, Legendre-style, so take the exponent over $$[n-k+1,n]$$ minus the exponent over $$[1,k]$$. Then sieve these primes out of the numerator factors in $$[n-k+1,n]$$, in about $$\mathcal O(k\log)$$ total, which reduces everything to the large-prime case above; the rest is basic sieving.
#include <cstdio>
#include <algorithm>
#include <cmath>
#include <cstring>
#define LL long long
using namespace std;
const int Mod = 998244353, MAXK = 1e6 + 5;
LL n, a[MAXK], ans = 1;
int k, pr[MAXK], tot, N;
bool vis[MAXK];
void Prime() {
for(int i = 2; i <= N; i ++) {
if(!vis[i]) pr[++ tot] = i;
for(int j = 1; j <= tot && i * pr[j] <= N; j ++) {
vis[i * pr[j]] = 1;
if(i % pr[j] == 0) break;
}
}
}
// Exponent of prime p in the product (l ~ r), counted Legendre-style.
LL Calc(int p, LL l, LL r) {
LL res = 0;
for(LL i = p; i <= r; i *= p) {
res += r / i - (l - 1) / i;
}
return res;
}
int main() {
scanf("%lld%d", &n, &k);
N = max((int)sqrt(n) + 1, k);
Prime();
for(int i = 1; i <= k; i ++) a[i] = n - k + i;
for(int i = 1; i <= tot; i ++) {
LL t = Calc(pr[i], n - k + 1, n) - Calc(pr[i], 1LL, (LL)k);
ans = (ans * ((t + 1) % Mod)) % Mod;
}
for(int i = 1; i <= tot; i ++) {
for(LL j = (n - k + 1) / pr[i] * pr[i]; j <= n; j += pr[i]) {
if(j < n - k + 1) continue;
while(a[j - n + k] % pr[i] == 0) a[j - n + k] /= pr[i];
}
}
for(int i = 1; i <= k; i ++) {
if(a[i] > 1) ans = ans * 2 % Mod;
}
printf("%lld", ans);
return 0;
}
| 2022-01-27 04:42:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6068698763847351, "perplexity": 7842.719322369441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00354.warc.gz"}
https://www.physicsforums.com/threads/hall-effect.713565/ | Hall Effect
1. Sep 30, 2013
Turion
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
$$Δ{ V }_{ H }=\frac { IB }{ nqt } \\ 9.6*{ 10 }^{ -6 }=\frac { 50*1.3 }{ n*e*3.3*{ 10 }^{ -4 } } \\ n=1.28*{ 10 }^{ 29 }\quad electrons/{ m }^{ 3 }$$
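A quick numerical re-check of that arithmetic (added here; the elementary charge $e \approx 1.602\times10^{-19}$ C is assumed):

```python
e = 1.602e-19                        # elementary charge (C)
I, B = 50.0, 1.3                     # current (A), magnetic field (T)
t, V_H = 3.3e-4, 9.6e-6              # strip thickness (m), Hall voltage (V)

# Rearranging V_H = I*B / (n*q*t) for the carrier density n:
n = I * B / (V_H * e * t)
print(f"n = {n:.3g} electrons/m^3")  # -> n = 1.28e+29 electrons/m^3
```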
2. Sep 30, 2013
Staff: Mentor
You can use WolframAlpha to check calculations like that.
3. Oct 1, 2013
Turion
I thought the method was wrong because it gives us d=1mm but we don't use it in our calculations.
4. Oct 1, 2013
Staff: Mentor
I'm not sure where the Hall voltage is measured, either 0.330mm or 1mm is not necessary.
5. Oct 1, 2013
Staff: Mentor
Take a look at the Hyperphysics web page on the Hall Effect. | 2017-08-19 09:34:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46084678173065186, "perplexity": 4507.072395619301}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105334.20/warc/CC-MAIN-20170819085604-20170819105604-00427.warc.gz"} |
https://physics.stackexchange.com/questions/672687/long-range-correlations-in-transverse-field-ising-model/672704 | Long-range correlations in transverse field Ising model
The transverse field Ising model in 1+1d has two phases: a symmetric "disordered" phase and a symmetry-breaking "ordered" phase. Both of these phases have a finite excitation gap. In the symmetry-breaking phase, there is long-range order, that is $$\langle Z_i Z_j\rangle \neq 0$$ for $$|i - j| \rightarrow \infty$$. ($$Z_i$$ is the Pauli $$z$$ operator on site $$i$$.)
However, I thought that gapped states should have short-range, i.e., exponentially decaying, correlations. In this answer, Dominic Else says
A ground state of gapped Hamiltonian must have correlations which decay exponentially with distance (this has been proved rigorously by Hastings and Koma).
My question is: why are these facts (long-range correlations in gapped, symmetry-breaking phase and short-range correlations in gapped phase) not in contradiction with each other? I think I must be missing something simple, since both sides of this are well-understood.
• I want to say that it's really the fluctuations $\tilde{Z}_i = Z_i - \left < Z_i \right >$ which have exponentially decaying correlations in the broken phase. Oct 20, 2021 at 20:37
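The point in the comment above can be checked numerically on a small system (a sketch added here; the chain length, couplings, and the power-iteration ground-state solver are all illustration choices): deep in the gapped disordered phase ($h > J$) the connected correlator falls off rapidly with distance.

```python
# Small open transverse-field Ising chain H = -J*sum_i Z_i Z_{i+1} - h*sum_i X_i,
# solved exactly in pure Python, to show that in the gapped disordered phase
# (h > J) the connected correlator C(0,j) = <Z_0 Z_j> - <Z_0><Z_j> decays fast.
N, J, h = 8, 1.0, 2.0
dim = 1 << N

def z(bit, s):
    """Eigenvalue of Z_bit on computational-basis state s."""
    return 1.0 if (s >> bit) & 1 == 0 else -1.0

def apply_H(v):
    out = [0.0] * dim
    for s in range(dim):
        amp = v[s]
        # diagonal ZZ part (open chain)
        out[s] += -J * sum(z(i, s) * z(i + 1, s) for i in range(N - 1)) * amp
        # off-diagonal transverse-field part: X_i flips bit i
        for i in range(N):
            out[s ^ (1 << i)] -= h * amp
    return out

# Ground state of H = top eigenvector of (c*I - H) for c above the spectrum,
# found by plain power iteration.
c = J * (N - 1) + h * N + 1.0
v = [1.0 / dim ** 0.5] * dim
for _ in range(800):
    Hv = apply_H(v)
    w = [c * v[s] - Hv[s] for s in range(dim)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

def expect(bits):
    """<v| prod_{b in bits} Z_b |v>."""
    total = 0.0
    for s in range(dim):
        p = v[s] * v[s]
        for b in bits:
            p *= z(b, s)
        total += p
    return total

conn = [expect([0, j]) - expect([0]) * expect([j]) for j in range(1, N)]
print(conn)  # magnitudes fall off roughly geometrically with j
```

Here $\langle Z_i\rangle = 0$ by symmetry on the finite chain, so the full and connected correlators coincide and both decay.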
The key word here is "connected." For the $$Z_i$$'s (which trivially commute with each other so the above sentence applies), this implies that it is $$C(i,j) = \langle Z_i Z_j \rangle - \langle Z_i \rangle \langle Z_j \rangle$$ which decays exponentially at large |i - j|, not the correlator $$\langle Z_i Z_j \rangle$$. It is surely these connected correlations which Dominic meant in his answer on characterizing gapped ground states. | 2022-05-17 18:25:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958176851272583, "perplexity": 747.4199243981599}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00716.warc.gz"} |
https://math.stackexchange.com/questions/1855024/generalized-harmonic-number-summation-sum-n-1-infty-2-nh-n2 | # Generalized Harmonic Number Summation $\sum_{n=1}^{\infty} {2^{-n}}{(H_{n}^{(2)})^2}$
Prove That $$\sum_{n=1}^{\infty} \dfrac{(H_{n}^{(2)})^2}{2^n} = \tfrac{1}{360}\pi^4 - \tfrac16\pi^2\ln^22 + \tfrac16\ln^42 + 2\mathrm{Li}_4(\tfrac12) + \zeta(3)\ln2$$
Notation : $$\displaystyle H_{n}^{(2)} = \sum_{r=1}^{n} \dfrac{1}{r^2}$$
We can solve the above problem using the generating function $$\displaystyle \sum_{n=1}^{\infty} (H_{n}^{(2)})^2 x^n$$, but it gets rather tedious especially taking into account the indefinite polylogarithm integrals involved. Can we solve it using other methods like Euler Series Transform or properties of summation?
• As for Problem 24, it follows directly from the formula $$(\sin^{-1} x)^4 = \frac32 \sum_{n=1}^\infty \frac{2^{2n} H_{n-1}^{(2)}}{n^2 \binom{2n}{n}} \,x^{2n}.$$ (See here for example.) – nospoon Jul 10 '16 at 15:12
• @nospoon Innovative! Can you post it as a solution on Brilliant Summation Contest? Thanks! – MathGod Jul 10 '16 at 15:26
• Hi Ishan, it would be difficult for me to typeset math in the following days as I will be away from my PC. Feel free to post it there with details, if you want to. – nospoon Jul 10 '16 at 15:45
• @nospoon Oh. Do participate whenever you get time though. I'll post my own method in the meantime (which involves using Beta Function and integrating). – MathGod Jul 10 '16 at 15:47
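nospoon's series formula can be checked numerically at $x=1/2$, where the left side is $(\pi/6)^4$ (a quick verification added here, not from the original thread):

```python
from math import asin, comb

# Check (asin x)^4 = (3/2) * sum_{n>=1} (2x)^{2n} * H_{n-1}^{(2)} / (n^2 * C(2n,n))
x = 0.5
H2 = 0.0          # running H_{n-1}^{(2)}, starting from H_0^{(2)} = 0
rhs = 0.0
for n in range(1, 40):
    rhs += 1.5 * (2 * x) ** (2 * n) * H2 / (n * n * comb(2 * n, n))
    H2 += 1.0 / n**2
lhs = asin(x) ** 4
print(lhs, rhs)   # both ~ 0.0751613, i.e. (pi/6)**4
```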
starting with the integral representation of $$\displaystyle H_n^{(2)}=\zeta(2)+\int_0^1\frac{t^n\ln t}{1-t}\ dt\tag1$$ we can write our sum: \begin{align} S&=\sum_{n=1}^\infty\frac{\left(H_n^{(2)}\right)^2}{2^n}=\sum_{n=1}^\infty\frac1{2^n}\left(\zeta(2)+\int_0^1\frac{x^n\ln x}{1-x}\ dx\right)\left(\zeta(2)+\int_0^1\frac{y^n\ln y}{1-y}\ dy\right)\\ &=\small{\sum_{n=1}^\infty\frac1{2^n}\left(\zeta^2(2)+\zeta(2)\int_0^1\frac{y^n\ln y}{1-y}\ dy+\zeta(2)\int_0^1\frac{x^n\ln x}{1-x}\ dx+\int_0^1\int_0^1\frac{(xy)^n\ln x\ln y}{(1-x)(1-y)}\ dx\ dy\right)} \end{align} note that the second and the third term have the same value and using the geometric series, we have
\begin{align} S&=\zeta^2(2)+2\zeta(2)\int_0^1\frac{\ln x}{1-x}\sum_{n=1}^\infty\left(\frac{x}{2}\right)^n\ dx+\int_0^1\int_0^1\frac{\ln x\ln y}{(1-x)(1-y)}\sum_{n=1}^\infty\left(\frac{xy}{2}\right)^n\ dx\ dy\\ &=\zeta^2(2)+2\zeta(2)\int_0^1\frac{x\ln x}{(1-x)(2-x)}\ dx+\int_0^1\int_0^1\frac{xy\ln x\ln y}{(1-x)(1-y)(2-xy)}\ dx\ dy\\ &=\zeta^2(2)+2\zeta(2)(-\ln^22)+\int_0^1\frac{\ln x}{1-x}\left(\int_0^1\frac{xy\ln y}{(1-y)(2-xy)}\ dy\right)\ dx\\ &=\zeta^2(2)-2\zeta(2)\ln^22+\int_0^1\frac{\ln x}{(1-x)(2-x)}\left(\int_0^1\frac{x\ln y}{1-y}\ dy-\int_0^1\frac{2x\ln y}{2-xy}\ dy\right)\ dx\\ &=\zeta^2(2)-2\zeta(2)\ln^22+\int_0^1\frac{\ln x}{(1-x)(2-x)}\left(-\zeta(2)x+2\operatorname{Li_2}\left(\frac{x}{2}\right)\right)\ dx\\ &=\zeta^2(2)-2\zeta(2)\ln^22+(-\ln^22)(-\zeta(2))+2\int_0^1\frac{\ln x\operatorname{Li_2}\left(\frac{x}{2}\right)}{(1-x)(2-x)}\ dx\\ &=\zeta^2(2)-\zeta(2)\ln^22+2\color{blue}{\int_0^1\frac{\ln x\operatorname{Li_2}\left(\frac{x}{2}\right)}{(1-x)(2-x)}\ dx}\\ &=\frac52\zeta(4)-\zeta(2)\ln^22+2\left(\color{blue}{\operatorname{Li_4}\left(\frac{1}{2}\right)-\frac98\zeta(4)+\frac12\ln2\zeta(3)+\frac1{12}\ln^42}\right)\\ &=2\operatorname{Li_4}\left(\frac{1}{2}\right)+\frac14\zeta(4)+\ln2\zeta(3)-\ln^22\zeta(2)+\frac16\ln^42 \end{align}
Evaluation of the blue integral:
$$\int_0^1\frac{\ln x\operatorname{Li_2}\left(\frac{x}{2}\right)}{(1-x)(2-x)}\ dx=\int_0^1\frac{\ln x\operatorname{Li_2}\left(\frac{x}{2}\right)}{1-x}\ dx-\frac12\int_0^1\frac{\ln x\operatorname{Li_2}\left(\frac{x}{2}\right)}{1-\frac x2}\ dx$$
expand $$\text{Li}_2(x/2)$$ in series in the first integral and use that $$\sum_{n=1}^\infty H_n^{(2)}x^n=\frac{\text{Li}_2(x)}{1-x}$$ for the second integral with replacing $$x$$ by $$x/2$$
$$=\sum_{n=1}^\infty\frac{1}{n^22^n}\int_0^1\frac{x^n \ln x}{1-x}dx-\frac12\sum_{n=1}^\infty\frac{H_n^{(2)}}{2^n}\int_0^1 x^n\ln xdx$$
use $$(1)$$ for the first integral
$$=\sum_{n=1}^\infty\frac{1}{n^22^n}\left(-\zeta(2)+H_n^{(2)}\right)-\frac12\sum_{n=1}^\infty\frac{H_n^{(2)}}{2^n}\left(-\frac1{(n+1)^2}\right)$$
reindex the second sum
$$=-\zeta(2)\text{Li}_2(1/2)+\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^22^n}+\sum_{n=1}^\infty\frac{H_{n-1}^{(2)}}{n^22^n}$$
write $$H_{n-1}^{(2)}=H_n^{(2)}-\frac1{n^2}$$ and simplify
$$=-\zeta(2)\text{Li}_2(1/2)+2\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^22^n}-\sum_{n=1}^\infty\frac{1}{n^42^n}$$
$$=-\text{Li}_4(1/2)-\zeta(2)\text{Li}_2(1/2)+2\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^22^n}$$
substitute
\begin{align*} \sum_{n=1}^{\infty}\frac{H_n^{(2)}}{{n^22^n}}=\operatorname{Li_4}\left(\frac12\right)+\frac1{16}\zeta(4)+\frac14\ln2\zeta(3)-\frac14\ln^22\zeta(2)+\frac1{24}\ln^42 \end{align*}
and $$\text{Li}_2(1/2)=\frac12\zeta(2)-\frac12\ln^2(2)$$ , the blue closed form follows.
second approach, suggested by Cornel Ioan Valean, using summation by parts; let's start with the following sum:
with $${N \in \mathbb{N}_{\ \geq\ 1}}$$ \begin{align} \sum_{n=1}^N\frac{\left(H_{n-1}^{(2)}\right)^2}{2^n}=\sum_{n=1}^N\frac{\left(H_n^{(2)}\right)^2}{2^n}-2\sum_{n=1}^N\frac{H_n^{(2)}}{n^22^n}+\sum_{n=1}^N\frac1{n^42^n}\tag{1} \end{align} on the other hand: \begin{align} \sum_{n=1}^N\frac{\left(H_{n-1}^{(2)}\right)^2}{2^n}=\sum_{n=1}^{N-1}\frac{\left(H_{n}^{(2)}\right)^2}{2^{n+1}}=\sum_{n=1}^{N}\frac{\left(H_{n}^{(2)}\right)^2}{2^{n+1}}-\frac{\left(H_{N}^{(2)}\right)^2}{2^{N+1}}\tag{2} \end{align} from $$(1)$$ and $$(2)$$ we reach $$\sum_{n=1}^N\frac{\left(H_{n}^{(2)}\right)^2}{2^n}=4\sum_{n=1}^N\frac{H_n^{(2)}}{n^22^n}-2\sum_{n=1}^N\frac{1}{n^42^n}-2\frac{\left(H_{N}^{(2)}\right)^2}{2^{N+1}}$$ letting $$N$$ approach $$\infty$$ we get $$\sum_{n=1}^\infty\frac{\left(H_{n}^{(2)}\right)^2}{2^n}=4\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^22^n}-2\sum_{n=1}^\infty\frac{1}{n^42^n}-0$$ I was able here to prove \begin{align*} \sum_{n=1}^{\infty}\frac{H_n^{(2)}}{{n^22^n}}=\operatorname{Li_4}\left(\frac12\right)+\frac1{16}\zeta(4)+\frac14\ln2\zeta(3)-\frac14\ln^22\zeta(2)+\frac1{24}\ln^42 \end{align*} which follows $$\sum_{n=1}^\infty\frac{\left(H_{n}^{(2)}\right)^2}{2^n}=2\operatorname{Li_4}\left(\frac{1}{2}\right)+\frac14\zeta(4)+\ln2\zeta(3)-\ln^22\zeta(2)+\frac16\ln^42$$
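As a numerical cross-check of the closed form (added here; plain partial sums in double precision, with $\zeta(3)$ hard-coded to its known value):

```python
from math import pi, log

ln2 = log(2.0)
zeta3 = 1.2020569031595943                                    # zeta(3)
li4_half = sum(1.0 / (n**4 * 2.0**n) for n in range(1, 60))   # Li_4(1/2)

lhs = 0.0
H2 = 0.0
for n in range(1, 200):
    H2 += 1.0 / n**2                 # H_n^{(2)}
    lhs += H2 * H2 / 2.0**n          # sum of (H_n^{(2)})^2 / 2^n

rhs = (2 * li4_half + pi**4 / 360 + zeta3 * ln2
       - pi**2 * ln2**2 / 6 + ln2**4 / 6)
print(lhs, rhs)                      # both ~ 1.38690
```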
• This can be generalized to arbitrary $\frac{\text{harmonic}}{2^n}$ sums, which is a critical case of Xu's geometric Euler sums. I'm working on such geometric, binomial and quadratic Euler sums now. – Iridescent Jun 14 '20 at 14:53
• @Iridescent yes. Also Cornel suggested the generalization but I was too lazy to include it. – Ali Shadhar Jun 14 '20 at 23:38
http://taoofmac.com/space/blog/2006/07/12 | # HSDPA Support In Mac OS X 10.4.7
I've been poking under the hood of the 10.4.7 update, and as it turns out Apple is doing more than a few things right where it regards WWAN support - something I wrote about a couple of months ago, trying to unravel the intricacies of driver models and wondering whether there would be some sort of unified support for Mac OS X.
Well, my curiosity was piqued by this bullet in the 10.4.7 changelog:
• Adds support for Sierra wireless cards AirCard 580 (AC580), PC5220, and for the Novatel V620, S620, and U730 wireless cards for PowerBook computers.
The U730 is the HSDPA card currently sold by some US operators, and is a variation of the U740. As it happens, that particular card (and its UMTS predecessor, the U630) is reasonably popular here in Portugal.
So even lacking access to a Mac with a PCMCIA slot, I did a little digging under /System/Library/Extensions on my iMac and found that IOSerialFamily.kext has a new set of plugins:
IOSerialFamily.kext
+ Contents
+ Plugins --+-- AppleRS574Serial.kext
.
.
+-- AppleVerizonSupport.kext
+-- AppleVerizonSupportKicker.kext
.
Precisely what these two .kexts do, I don't know - especially the "Kicker". I do know that the first one has USB vendor/product IDs for the Novatel product range so I assume that sets up the correct pseudo-serial port mappings and specifies the use of new "WWAN Support" modem script (it too a novelty in 10.4.7).
The "WWAN Support" modem script is located in /Library/Modem Scripts, and has only basic initialization commands - it does not specify an APN for the WWAN connection, therefore assuming that the connection will use the default APN on your SIM card. For more on what this means, check my HOWTO.
Well, as it happens, the U730's USB id is the same as the U740, which means that you can plug in such a card and Mac OS X will recognize it. And I'm told it works fine with the right kind of tweaking, too.
Bear in mind, though, that -
• This is not necessarily a generic configuration, or (most importantly) one that works with your mobile operator's network.
• Operator-supplied drivers and software exist for a reason - i.e., to make your life easier.
So what little information I unearthed tonight (time to reach for the Disclaimer) is only for the geeks in the audience that happen to really like fiddling with this kind of thing.
### Thanks, Apple
But the main point of this is that it is very encouraging to see Apple adding support for WWAN connections in the OS, even if timidly and in a US-centric fashion.
I can, of course, mention the trifling little detail that there are probably more 3G users in Europe - by an order of magnitude - but some support is better than no support.
Furthermore, I'm told that the drivers above display a nice AirPort-like signal indicator in the menu bar (again, not having a Mac with a PCMCIA slot, I can't confirm), which is very interesting.
For starters, it raises the question of whether that signal indicator is the driver's doing or whether it is a new standard Mac OS X feature that happens to be available for these drivers to use (improving awareness of radio conditions is fundamental for WWAN support at both the UI and driver levels).
At this point, I do not know - I can find no bitmaps in the files above, although that does not mean they aren't there - so any more information is most welcome.
Update: Here it is, courtesy of a colleague of mine:
From what we looked at, it seems like the indicator is managed by the driver itself and is not a generic Mac OS X feature. Too bad, I guess...
We'll see how this pans out over the next few months - 2006 promises to be an amazing year in terms of mobile data (especially here in Europe), so Apple is sure to improve upon this.
Hey Steve, if you need a hand, just let me know. :) | 2016-10-21 21:56:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28920477628707886, "perplexity": 1730.1143689515695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718309.70/warc/CC-MAIN-20161020183838-00558-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://bitbucket.org/arigo/cpython-withatomic/src/44a82ac654a4/Doc/liballos.tex?at=legacy-trunk | # cpython-withatomic / Doc / liballos.tex
The branch 'legacy-trunk' does not exist.
\chapter{Generic Operating System Services}

The modules described in this chapter provide interfaces to operating system features that are available on (almost) all operating systems, such as files and a clock. The interfaces are generally modelled after the \UNIX{} or C interfaces but they are available on most other systems as well. Here's an overview:

\begin{description}
\item[os] --- Miscellaneous OS interfaces.
\item[time] --- Time access and conversions.
\item[getopt] --- Parser for command line options.
\item[tempfile] --- Generate temporary file names.
\end{description} | 2015-05-07 02:44:40
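The four modules listed in the overview can be exercised together in a few lines. This sketch is an illustration added here, not part of the original Doc source; it shows one representative call from each module:

```python
# Illustrative sketch: one representative call from each module listed
# in the chapter overview (os, time, getopt, tempfile).
import getopt
import os
import tempfile
import time

# os: miscellaneous OS interfaces, e.g. the current working directory.
cwd = os.getcwd()

# time: time access and conversions, e.g. seconds since the epoch.
now = time.time()

# getopt: parse a command line in the traditional Unix style.
opts, args = getopt.getopt(["-o", "out.txt", "input.txt"], "o:")

# tempfile: create a temporary file that is removed automatically.
with tempfile.NamedTemporaryFile(mode="w+", suffix=".txt") as tmp:
    tmp.write("hello")
    tmp.flush()
    size = os.path.getsize(tmp.name)

print(cwd, now, opts, args, size)
```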
https://cs.stackexchange.com/questions/140825/why-does-clrs-refer-to-the-disk-parts-as-pages-rather-than-blocks | # Why does CLRS refer to the disk parts as pages rather than blocks?
I recently decided to review the B-tree chapter (chapter 18, p 486 in 3ed) in Introduction to Algorithms, and found that they call "pages" what I always referred to as blocks or clusters:
In order to amortize the time spent waiting for mechanical movements, disks access not just one item but several at a time. Information is divided into a number of equal-sized pages of bits that appear consecutively within tracks, and each disk read or write is of one or more entire pages. For a typical disk, a page might be $2^{11}$ to $2^{14}$ bytes in length. Once the read/write head is positioned correctly and the disk has rotated to the beginning of the desired page, reading or writing a magnetic disk is entirely electronic (aside from the rotation of the disk), and the disk can quickly read or write large amounts of data.
I always thought that pages are related to the virtual memory, although including swapping with the disk access. When the talk goes into the discussion of the filesystems in general, then I thought the information is divided into blocks rather than pages. Is he talking about virtual memory here? If I restate the whole paragraph in terms of blocks, would it be still correct, except for the provided sizes of the blocks?
• The book has four different authors. May 28, 2021 at 22:21
• I don't think that such distinctions are of interest to the authors of Introduction to Algorithms, who are theoreticians. May 28, 2021 at 22:23
• It is common in database-related terminology to call page a contiguous set of disk blocks that is treated as the unit of transfer between main memory and permanent memory (e.g. disk). So the estimates of the access costs of persistent data structures, like trees or relations are usually in number of pages read and written. May 29, 2021 at 12:54
• @Renzo, interesting, just opened Garcia-Molina et al Database Systems and saw that they use disk blocks and pages interchangeably. Makes sense, thank you! May 29, 2021 at 16:43
If their "pages" are sectors (= disk blocks), then they're right to say that pages "appear consecutively within tracks, and each disk read or write is of one or more entire pages", but their size range for pages makes no sense. The overwhelmingly most common sector size in 2009, when this textbook was published, was $2^9$ bytes. Also, I've never heard the term "page" used for a disk sector before. | 2022-07-05 18:20:53
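The sizes under discussion are easy to pin down numerically, and on POSIX systems the filesystem's preferred I/O block size can be inspected directly (note this is a filesystem block, not a hardware sector). A small illustrative snippet, added here for concreteness:

```python
import os

# The textbook's "page" size range, in bytes.
page_min, page_max = 2 ** 11, 2 ** 14     # 2 KiB .. 16 KiB

# The sector size the answer calls overwhelmingly common in 2009.
sector = 2 ** 9                           # 512 bytes

# On POSIX systems, the filesystem's preferred I/O block size is
# available via statvfs (a filesystem block, not a hardware sector).
block = os.statvfs("/").f_bsize

print(page_min, page_max, sector, block)
```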
https://easychair.org/smart-program/FLoC2022/2022-08-01.html | FLOC 2022: FEDERATED LOGIC CONFERENCE 2022
PROGRAM FOR MONDAY, AUGUST 1ST
08:30-09:00 Coffee & Refreshments
09:00-10:30 Session 26A (FoMLAS)
FoMLAS Session 5
Location: Ullmann 201
09:00
Scalable Verification of GNN-based Job Schedulers
PRESENTER: Haoze Wu
ABSTRACT. Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users' expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler compared to previous methods.
09:30
VPN: Verification of Poisoning in Neural Networks
PRESENTER: Youcheng Sun
ABSTRACT. Neural networks are successfully used in many domains, including safety- and security-critical applications. As a result, researchers have proposed formal verification techniques for verifying neural network properties. A large majority of previous efforts have focused on checking local robustness in neural networks. We instead focus on another neural network security issue, namely data poisoning, whereby an attacker inserts a trigger into a subset of the training data, in such a way that at test time, this trigger causes the classifier to predict some target class. In this paper, we show how to formulate absence of data poisoning as a property that can be checked with off-the-shelf verification tools, such as Marabou and nnenum. Counterexamples of failed checks constitute potential triggers that we validate through testing. We further show that the discovered triggers are ‘transferable’ from a small model to a larger, better-trained model, allowing us to analyze state-of-the-art performant models trained for image classification tasks.
10:00
Verification-Aided Deep Ensemble Selection
PRESENTER: Guy Amir
ABSTRACT. Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks. However, as highlighted by many recent studies, even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN. This renders DNNs vulnerable to strategic input manipulations by attackers, and also prone to oversensitivity to environmental noise.
To mitigate this phenomenon, practitioners apply joint classification by an ensemble of DNNs. By aggregating the classification outputs of different individual DNNs for the same input, ensemble-based classification reduces the risk of misclassifications due to the specific realization of the stochastic training process of any single DNN. However, the effectiveness of a DNN ensemble is highly dependent on its members not simultaneously erring on many different inputs.
In this case study, we harness recent advances in DNN verification to devise a methodology for identifying ensemble compositions that are less prone to simultaneous errors, even when the input is adversarially perturbed --- resulting in more robustly-accurate ensemble-based classification.
Our proposed framework uses a DNN verifier as a backend, and includes heuristics that help reduce the high complexity of directly verifying ensembles. More broadly, our work puts forth a novel universal objective for formal verification that can potentially improve the robustness of real-world, deep-learning-based systems across a variety of application domains.
09:00-10:30 Session 26B: Foundations (GDE)
Welcome ceremony to the 2nd workshop on Goal-directed Execution of Answer Set Programs (GDE 2022) followed by a tutorial (40 minutes) and two talks, each roughly 20 minutes plus 5 minutes for discussion and questions.
Location: Ullmann 310
09:00
Tutorial: Automating Commonsense Reasoning
ABSTRACT. Automating commonsense reasoning, i.e., automating the human thought process, has been considered fiendishly difficult. It is widely believed that automation of commonsense reasoning is needed to build intelligent systems that can rival humans. We argue that answer set programming (ASP) along with its goal-directed implementation allows us to reach this automation goal. We discuss essential elements needed for automating the human thought process, and show how they are realized in ASP and the s(CASP) goal-directed ASP engine.
09:40
A Query Evaluation Method for ASP with Abduction
ABSTRACT. In this paper, we present a goal-directed proof procedure for ASP with abduction. Our proposed procedure is correct for any consistent abductive framework proposed by Kakas et al. In other words, if the procedure succeeds, there is a set of hypotheses which satisfies a query, and if the procedure finitely fails, there is no such set. If we do not consider abducibles, this procedure is a goal-directed proof procedure for ASP as well. NOTE: This paper is an extended abstract of a paper with the title "A Query Evaluation Method for Abductive Logic Programming" that appeared in the Proceedings of the Joint International Conference and Symposium on Logic Programming (JICSLP'92), pp. 671-685.
10:05
First order logic and commonsense reasoning: a path less travelled
PRESENTER: Tanel Tammet
ABSTRACT. The context of the paper is developing logic-based components for hybrid -- machine learning plus logic -- commonsense question answering systems. The paper presents the main principles and several lessons learned from implementing an automated reasoner able to handle both undecidable exceptions and numerical confidences for full first order logic. Although the described reasoner is based on the resolution method, some of these lessons may be useful for the further development of ASP systems as well.
09:00-10:30 Session 26C: Confluence criteria and completeness (IWC)
Location: Ullmann 306
09:00
Development Closed Critical Pairs: Towards a Formalized Proof
PRESENTER: Aart Middeldorp
ABSTRACT. Having development closed critical pairs is a well-known sufficient condition for confluence of left-linear term rewrite systems. We present formalized results involving proof terms and unification that play an important role in the proof.
09:30
On Confluence of Parallel-Innermost Term Rewriting
PRESENTER: Carsten Fuhs
ABSTRACT. We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures. We propose a simple sufficient criterion for confluence of parallel-innermost rewriting based on non-overlappingness. Our experiments on a large benchmark set indicate the practical usefulness of our criterion. We close with a challenge to the community to develop more powerful dedicated techniques for this problem.
10:00
Uniform Completeness
ABSTRACT. We introduce uniform completeness and give a local characterisation of it. We show it yields a complete method for showing completeness of rewrite systems.
09:00-10:30 Session 26D (LogTeach)
Location: Ullmann 301
09:00
Teaching Logic
ABSTRACT. We follow Ludwig Wittgenstein: "How can the all-embracing logic which mirrors the world use such special catches and manipulations? Only because all these are connected into an infinitely fine network, the great mirror." (Tractatus Logico-Philosophicus 5.511) Teaching logic means therefore to teach how surprisingly straightforward insights generate the most abstract structures known. In this lecture we intend to develop tools to motivate this generation for beginners.
10:00
Logic for computer science: starting earlier, at school
ABSTRACT. While it is true that the place of logic in undergraduate computer science curricula is either in danger or has already been lost, we suggest that solutions should look not only at the immediate term, by way of redesigning logic teaching at the undergraduate level, but also consider the long term. We propose that this requires bringing in some logic at the high school level, and re-examining logic in undergraduate mathematics curricula.
09:00-10:30 Session 26E: Joint QBF Session (PC and QBF)
Location: Ullmann 309
09:00
QBF Solvers and their Proof Systems
ABSTRACT. We give an overview of the main paradigms in QBF solving, as well as techniques for preprocessing. In each case, we present corresponding proof systems and discuss lower bound results. We point to open problems, and consider current trends in solver development.
10:00
QCDCL with Cube Learning or Pure Literal Elimination – What is best?
PRESENTER: Benjamin Böhm
ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL.
09:00-10:30 Session 26F (PCCR)
Location: Ullmann 203
09:00
Parameterized algorithmics in access control: linking theory and practice
ABSTRACT. Access control is an important area of information security. The first paper in application of parameterized algorithmics in access control was published in 2010 by Qihua Wang and Ninghui Li. In the paper, the authors suggested a parameterization of the Workflow Satisfiability Problem (WSP). In its basic form, WSP is simply the Constraint Satisfaction Problem, which Wang and Li parameterized by the number of steps (= variables in CSP terms). Wang and Li proved that WSP is W[1]-hard and identified a practically important class of constraints for which WSP is FPT. They also conducted computational experiments using an off-the-shelf solver. Unfortunately, in their paper, the theoretical results and practical computation were not linked.
In the last decade, Jason Crampton (Information Security Group, RHUL), the speaker and their co-authors studied theoretical and practical computing aspects initially for WSP with user-independent (UI) non-unary constraints and its variations and then for other access control problems. Several FPT algorithms and lower bounds were obtained, FPT algorithms were implemented and off-the-shelf solvers were used in computational experiments. Among practical computing highlights are FPT algorithms for WSP with UI non-unary constraints performing in practice much better than off-the-shelf solvers, and the possibility for off-the-shelf solvers to emulate FPT algorithms for WSP and other access control problems (approximately, the runtime exponential growth is wrt the parameter only). This theoretical and practical computing research was highly acknowledged by the access control community: most of the papers were published in top information security journals and some papers received the best paper awards in the annual symposium on access control (SACMAT).
10:00
MaxSAT: Parameterized, Parallel, Absolute
ABSTRACT. In this talk I will discuss two promising applications of parameterized complexity for the natural generalization of Boolean satisfiability to optimization problems: MaxSAT, where the task is to find an assignment that maximizes the total weight of all satisfied clauses. As for most intractable optimization problems, it is rather hard to obtain parallel algorithms for MaxSAT with meaningful theoretical guarantees in the classical complexity-theoretic sense. The situation is fundamentally different, however, if we study the parallel parameterized complexity of the problem. In the first part of the talk I will give a gentle introduction to the underlying framework of parallel parameterized algorithms and parameterized circuit complexity. Along the way we will motivate the definitions by pointing out on a high level how to achieve various levels of parallelization for MaxSAT for different parameters within this model; in contrast, in a sequential world all these results would simply collapse into the observation that MaxSAT is fixed-parameter tractable parameterized by any of these parameters.
The second application is an observation about the target function that we wish to maximize. The literature almost solely considers the case of positive weights. While the general case of the problem is only restricted slightly by this constraint, many special cases become trivial in the absence of negative weights. I present various versions of the problem that are parameterized intractable if negative weights are present. We will observe that negative weights lead to a new interesting version: Instead of maximizing the sum of weights of satisfied clauses, we can maximize the absolute value of that sum. This turns out to be surprisingly expressive even restricted to monotone formulas in disjunctive normal form with at most two literals per clause. In contrast to the versions without the absolute value, however, we will discuss that these variants are parallel fixed-parameter tractable by providing an easy to compute kernel.
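The "absolute value" variant described above can be stated concretely with a toy brute-force solver. This is an illustration of the problem definition only, not the talk's algorithm or kernel:

```python
from itertools import product

def max_abs_sat(clauses, weights, n_vars):
    """Brute-force the absolute-value MaxSAT variant: over all
    assignments, maximise |sum of weights of satisfied clauses|.
    Clauses use DIMACS-style signed ints; weights may be negative."""
    best = 0
    for bits in product([False, True], repeat=n_vars):
        def sat(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        total = sum(w for c, w in zip(clauses, weights)
                    if any(sat(l) for l in c))
        best = max(best, abs(total))
    return best

# With weights 3 and -5 on unit clauses x and not-x, setting x false
# satisfies only the -5 clause, giving |-5| = 5 > |3| = 3.
print(max_abs_sat([[1], [-1]], [3, -5], 1))
```

Note how negative weights make the absolute-value objective genuinely different from plain weighted MaxSAT, which would prefer the assignment with total 3.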
09:00-10:30 Session 26G: Applications (POS)
Location: Ullmann 311
09:00
Towards an Efficient CNF Encoding of Block Ciphers
ABSTRACT. SAT solvers are one of the primary tools for automatically assessing the security of block ciphers. Common CNF encodings of s-boxes are based on algebraic representations (finding low-degree polynomials) or on symbolic execution of the considered function's implementation. However, those methods are not strictly connected with the algorithms used in efficient SAT solvers. Therefore, we propose an application of minimal propagate-complete encodings, which by definition are tuned for modern satisfiability-checking algorithms.
During the construction of the Boolean formula, there is often the problem of encoding linear XOR equations. The standard procedure includes a greedy shortening algorithm to decrease the size of the resulting encoding. Recently, the problem of a straight-line program has been successfully applied in obtaining efficient implementations of MDS matrices. In this paper, we propose to use the algorithm for finding a short straight-line program as a shortening procedure for a system of linear XOR equations.
As a result, we achieved a 20x speed-up of algebraic cryptanalysis of Small Scale AES block cipher to widely used algebraic representations by combining two approaches.
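To make the XOR-encoding issue concrete: a direct CNF translation of a k-ary XOR constraint needs 2^(k-1) clauses, which is why shortening the system of XOR equations pays off. The following is a generic textbook encoding sketched for illustration, not the paper's construction:

```python
from itertools import product

def xor_to_cnf(variables, parity):
    """Encode XOR(variables) == parity directly as CNF.
    Literals are DIMACS-style signed ints. A k-ary XOR yields
    2**(k-1) clauses: one blocking clause per falsifying assignment."""
    clauses = []
    for assignment in product([0, 1], repeat=len(variables)):
        if sum(assignment) % 2 != parity:   # falsifying assignment
            # Block it with the clause falsified exactly by this pattern.
            clauses.append([v if a == 0 else -v
                            for v, a in zip(variables, assignment)])
    return clauses

print(xor_to_cnf([1, 2], 1))   # x1 XOR x2: [[1, 2], [-1, -2]]
```

The exponential clause count is why practical encodings first cut long XOR chains into short pieces with fresh variables before translating.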
09:30
Calculating Sufficient Reasons for Random Forest Classifiers
ABSTRACT. In this paper, we formalize the implementations of decision tree and random forest classifiers of the Python Scikit-learn package, and we present a CNF encoding which can be used to calculate sufficient reasons a.k.a. prime implicants for them. Our decision tree encoding resembles a monotonic combinational circuit with pure input variables, of which we take advantage in our method for incremental enumeration of its prime implicants. Encoding the combination of several decision trees in a random forest would add a non-monotonic evaluation function to the root of the encoded circuit. In our approach, we solve an auxiliary SAT problem for enumerating valid leaf-node combinations of the random forest. Each valid leaf-node combination is used to incrementally update the original monotonic circuit encoding by extending the DNF at its root with a new term. Preliminary results show that enumeration of prime implicants by incrementally updating the encoding is by order of magnitudes faster than one-shot solving of the monolithic formula. We present preliminary runtime data, and some initial data about the size and number of samples found when translating the prime implicants back into database queries.
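For a monotone function, the abstract's observation can be illustrated in miniature: prime implicants are exactly the minimal sets of inputs whose activation forces the output true. A brute-force sketch for illustration only; the paper works on a CNF encoding of the actual Scikit-learn classifiers:

```python
from itertools import combinations

def prime_implicants_monotone(f, n):
    """Enumerate prime implicants of a monotone Boolean function f
    over n inputs: minimal variable sets that force f to True when
    set, regardless of the remaining inputs (all-False is the worst
    case by monotonicity)."""
    found = []
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            point = [i in subset for i in range(n)]
            # Keep only minimal witnesses: skip supersets of known ones.
            if f(point) and not any(set(p) <= set(subset) for p in found):
                found.append(subset)
    return found

# A tiny monotone "forest": (x0 AND x1) OR x2.
print(prime_implicants_monotone(lambda x: (x[0] and x[1]) or x[2], 3))
```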
10:00
Adding Dual Variables to Algebraic Reasoning for Gate-Level Multiplier Verification
PRESENTER: Jakob Nordstrom
ABSTRACT. Algebraic reasoning has proven to be one of the most effective approaches for verifying gate-level integer multipliers, but it struggles with certain components, necessitating the complementary use of SAT solvers. For this reason validation certificates require proofs in two different formats. Approaches to unify the certificates are not scalable, meaning that the validation results can only be trusted up to the correctness of compositional reasoning. We show in this work that using dual variables in the algebraic encoding, together with a novel tail substitution and carry rewriting method, removes the need for SAT solvers in the verification flow and yields a single, uniform proof certificate.
This is a presentation-only submission for a paper previously published at DATE '22.
09:00-10:30 Session 26H: Joint PC/QBF Session (QBF)
This session will be held jointly with the Proof Complexity (PC) Workshop.
Location: Ullmann 309
09:00
QBF Solvers and their Proof Systems
ABSTRACT. We give an overview of the main paradigms in QBF solving, as well as techniques for preprocessing. In each case, we present corresponding proof systems and discuss lower bound results. We point to open problems, and consider current trends in solver development.
09:00-10:30 Session 26I: Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part I) (REAI)
Location: Ullmann 102
09:00
Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22 https://cs.uwaterloo.ca/~david/kr22/)
ABSTRACT. The tutorial introduces the audience to the concept of referring expressions, formulae that can be used to communicate identities of otherwise abstract objects. The formalism provides foundations for a successful and unambiguous exchange of information about individuals between agents sharing common knowledge about such individuals, a task that is indispensable in most modern applications of knowledge representation and semantic technologies.
09:00-10:30 Session 26J (TERMGRAPH)
Invited Talk by Delia Kesner
Location: Ullmann 101
09:00
A Computational Interpretation of Girard’s Intuitionistic Proof-Nets
ABSTRACT. In this talk we will present a computational interpretation of Girard's proof nets for intuitionistic linear logic. More precisely, proof nets are a graphical formalism representing bureaucracy-free proofs, i.e., the order in which independent logical rules are applied in a derivation is abstracted. Despite the obvious advantage of the graphical notation, the essence of their corresponding operational semantics is not always clear from a programming point of view, and a term syntax often provides a complementary understanding of a (bureaucracy-free) graph framework.
Our goal is to define a new term language that establishes a faithful and fine-grained Curry-Howard correspondence with Girard's intuitionistic proof-nets, both from a static (objects) and a dynamic (reductions) point of view. On the static side, we identify an equivalence relation on terms which is sound and complete with respect to the classical notion of structural equivalence for proof-nets. On the dynamic side, we show that every single (exponential) step in the term calculus translates to a different single (exponential) step in the graphical formalism, thus capturing the granularity of Girard's original proof-nets but at the level of terms.
09:00-09:50 Session 26K (VardiFest)
Location: Taub 1
09:00
On the Effectiveness of Logic in Robotics
ABSTRACT. The research of Moshe Vardi has influenced many researchers in the field of robotics. During this talk I will give a bird's-eye view of how logic and synthesis are used in robotics, where a central question is how to produce robot motion from high-level specifications. The specifications declare what the robot must do, rather than how the task is to be done. I will focus on describing how Moshe Vardi's work has led to efficient task and motion planning solutions and synthesis methodologies for human-robot collaboration.
09:00-10:30 Session 26L: Doctoral Program (CP)
Location: Taub 4
09:00
Opening
09:05
Industrial Research Career Path as a Sequence of Constraint Satisfaction Problems
10:00
Solving the Non-Crossing MAPF for non point-sized robots
PRESENTER: Xiao Peng
ABSTRACT. This paper deals with the multi-agent path finding (MAPF) problem for a team of tethered robots. When considering point-sized robots, paths may share the same subpath provided that they do not cross, and we have shown in a previous work that this case basically involves solving an assignment problem: the objective is to find a set of non-crossing paths, and the makespan is equal to the length of the longest path. In this work, we extend it to the case of non-point-sized robots, where robot paths must be synchronized when they share the same subpath and waiting times are considered when computing the makespan. We prove that the upper bound can be computed by solving the linear sum assignment problem. We introduce a new variable neighborhood search method to improve the upper bound and show that it is robust across different instances. We also introduce a Constraint Programming model for solving the problem to optimality.
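The linear sum assignment subproblem mentioned in the abstract has the following shape. This is a toy exhaustive solver for illustration; practical solvers use the Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment), and it is not the paper's method:

```python
from itertools import permutations

def assignment_brute(cost):
    """Exhaustive linear sum assignment: map each agent (row) to a
    distinct target (column) minimising total cost. O(n!) time, so
    only suitable for tiny instances."""
    n = len(cost)
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

# Three robots, three targets; cost[i][j] is path length of robot i
# to target j (hypothetical numbers).
print(assignment_brute([[4, 1, 3], [2, 0, 5], [3, 2, 2]]))
```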
10:20
Extended Abstract: Scheduling the Equipment Maintenance of an Electric Power Transmission Network using Constraint Programming
PRESENTER: Louis Popovic
ABSTRACT. Modern electrical power utilities must maintain their electrical equipment and replace it as it reaches the end of its useful life. The Transmission Maintenance Scheduling (TMS) problem consists in generating an annual maintenance plan for electric power transportation equipment while maintaining the stability of the network and ensuring a continuous power flow for customers. Each year, a list of equipment that needs to be maintained or replaced is available, and the goal is to generate an optimal maintenance plan. This paper proposes a constraint-based scheduling approach to solve the TMS problem. The model considers two types of constraints: (1) constraints that can be naturally formalized inside a constraint programming model, and (2) complex constraints that do not have a proper formalization from the field specialists. The latter cannot be integrated inside the model due to their complexity. Their satisfaction is thus verified by a black-box tool, a simulator mimicking the impact of a maintenance schedule on the real power network. The simulator is based on complex differential power-flow equations. Experiments are carried out at five strategic points of the Hydro-Québec power grid infrastructure, involving more than 200 pieces of electrical equipment and 300 withdrawal requests. Results show that our model is able to comply with most of the formalized and unformalized constraints. It also generates maintenance schedules within an execution time of a few minutes. The generated schedules are similar to the ones proposed by a field specialist and can be used to simulate maintenance programs for the next 10 years.
10:23
Extended Abstract: Sequence Variables for Routing Problems
ABSTRACT. Constraint Programming (CP) is one of the most flexible approaches for modeling and solving vehicle routing problems (VRP). This paper proposes the sequence variable domain, that is inspired by the insertion graph and the subset bound domain for set variables. This domain representation, which targets VRP applications, allows for an efficient insertion-based search on a partial tour and the implementation of simple, yet efficient filtering algorithms for constraints that enforce time-windows on the visits and capacities on the vehicles. Experiment results demonstrate the efficiency and flexibility of this CP domain for solving some hard VRP problems, including the Dial-A-Ride, the Patient Transportation, and the asymmetric TSP with time windows.
10:26
Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints - Extended Abstract
PRESENTER: Daphné Lafleur
ABSTRACT. In this work, we use Reinforcement Learning to combine Machine Learning (ML) and Constraint Programming (CP). We show that combining ML and CP allows the agent to reflect a pretrained network while taking into account constraints, leading to melodic lines that respect both the corpus' style and the music theory constraints.
09:15-10:30 Session 27 (Mentoring Workshop)
Location: Taub 7
09:15
Welcome
09:30
Advancing Science with Platforms and Driving Scenarios: a perspective from a researcher at Microsoft Research
ABSTRACT. What does a research career in industry look like? While there is no single answer to this question, I will approximate one by describing a perspective grounded in personal experience from a career spanning work in a research institute, a startup, the core file systems group at Microsoft, and Microsoft Research. In each of these settings, a background with a graduate degree, specifically with an emphasis on logic and program logics, proved useful, but in very different ways. The perspective of Microsoft Research is distinctive, as it is based on the pursuit of advancing science in a way that aims much wider than business objectives and at the same time benefits, and is of benefit to, business and product groups. I will describe an example of a platform as a research tool (in this case the SMT solver Z3) for driving both foundational research and user scenarios. Platforms represent a researcher's aim for long-term, general results that establish and test methodologies. I will describe experiences with combining platforms with driving scenarios based on business needs.
09:30-10:30 Session 28A (LFMTP)
Invited talk
Location: Taub 3
09:30
Metatheory of Proto-Quipper in Hybrid: Context Relations Revisited
ABSTRACT. We revisit our formalization of the metatheory of Proto-Quipper in the Hybrid logical framework. Proto-Quipper contains the core of Quipper, which is a programming language designed to express quantum circuits and allow them to be treated as data. Hybrid is a system that is designed to support the use of higher-order abstract syntax (also called lambda-tree syntax) for representing and reasoning about formal systems, implemented in the Coq Proof Assistant. Hybrid follows a two-level approach, where a specification logic (SL) is defined as an inductive type in Coq, and reasoning about an object logic (OL) such as Proto-Quipper is carried out using the SL. In this work, we adopt a linear SL, which provides infrastructure for reasoning directly about the linear type system of Proto-Quipper. In two-level systems, statements of theorems often relate two or more OL judgments, and a "context relation" is often needed to specify the constraints between them. In this work, we carry out a more careful study of context relations in a linear setting. Other goals of revisiting our formalization include: extending the use of higher-order syntax to encode the notion of bound variables in circuits, and improving the representation of subtyping so that it is closer to the original formulation of Proto-Quipper. The latter allowed us to find an error in one of the on-paper proofs.
09:30-10:30 Session 28B (PLP)
Invited talk: Alexander Artikis & Periklis Mantenogloy (University of Piraeus, Greece) -
Online Reasoning under Uncertainty with the Event Calculus
Location: Ullmann 305
09:30
Online Reasoning under Uncertainty with the Event Calculus
ABSTRACT. Activity recognition systems detect temporal combinations of 'low-level' or 'short-term' activities on sensor data streams. Such streams exhibit various types of uncertainty, often leading to erroneous recognition. We will present an extension of an interval-based activity recognition system which operates on top of a probabilistic Event Calculus implementation. Our proposed system performs online recognition, as opposed to batch processing, thus supporting streaming applications. Our empirical analysis demonstrates the efficacy of our system, comparing it to interval-based batch recognition, point-based recognition, as well as structure and weight learning models.
09:30-10:30 Session 28C (TLLA-LINEARITY)
Location: Ullmann 302
09:30
A coherent differential PCF
ABSTRACT. The categorical models of the differential lambda-calculus are additive categories because of the Leibniz rule which requires the summation of two expressions. This means that, as far as the differential lambda-calculus and differential linear logic are concerned, these models feature finite non-determinism and indeed these languages are essentially non-deterministic. Based on a recently introduced categorical framework for differentiation which does not require additivity and is compatible with deterministic models such as coherence spaces and probabilistic models such as probabilistic coherence spaces, this talk will present a deterministic version of the differential lambda-calculus. One nice feature of this new approach to differentiation is that it is compatible with general fixpoints of terms, so our language is actually a differential extension of PCF.
09:50-10:30 Session 29 (VardiFest)
Location: Taub 1
09:50
The Safety Fragment of LTL
PRESENTER: Nicola Gigante
ABSTRACT. The Safety Fragment of LTL
10:00
An Automata-Theoretic Approach to Model-Free Reinforcement Learning
PRESENTER: Mateo Perez
ABSTRACT. A significant challenge to widespread adoption of reinforcement learning (RL) is the faithful translation of designer’s intent to the scalar reward signal required by RL algorithms. Logic-based specifications help in two ways: by precisely capturing the intended objective, and by allowing its automatic translation to a reward function. Omega-regular objectives, such as those expressed in LTL and by omega-automata, have recently been proposed to specify learning objectives in RL. In this talk, we will discuss the impact of Vardi's contributions to automata-theoretic reinforcement learning.
10:10
Strategy Logic: Origin, Results, and Open Questions
ABSTRACT. Fifteen years have passed since the introduction of the original turn-based variant of Strategy Logic, and twelve since its full-fledged concurrent extension. Several interesting results have been obtained and, for sure, many more are still to come. The ones that I consider more meaningful concern, in particular, the enlightening connections with other subfields of theoretical computer science, most notably algorithmic game theory and finite model theory, which have enriched and broadened the scientific literature. In this talk, I will overview some of these results, starting with anecdotes on the original work done in 2008, while I was a visiting Ph.D. student at Rice University under Moshe’s supervision, and terminating with a few open questions.
10:20
Rewriting of Regular Path Queries: The first paper of the four Italians
ABSTRACT. The paper discusses a collaboration with Moshe Vardi that started with a work presenting an algorithm that takes one regular expression E and n regular expressions S as input, and returns an automaton which is shown to be the maximal rewriting of E with respect to S.
10:30-11:00 Coffee Break
11:00-12:30 Session 31A (FoMLAS)
FoMLAS Session 6
Chair:
Location: Ullmann 201
11:00
Formal Specification for Learning-Enabled Autonomous Systems (Extended Abstract)
PRESENTER: Doron Peled
ABSTRACT. Formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and sometimes also automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, which involve components based on neural networks. The formalism applies temporal logic with predicates for characterizing events and uses universal quantification to allow enumeration of objects. While we have applied the formalism successfully to two complex use cases, several limitations, such as monitorability and the lack of quantitative satisfaction, reveal potential for further improvement.
11:30
Vehicle: A High-Level Language for Embedding Logical Specifications in Neural Networks
PRESENTER: Luca Arnaboldi
ABSTRACT. Verification of neural network specifications is currently an active field of research in automated theorem proving. However, the actual act of verification is merely one step in the process of constructing a verified network. Prior to verification the specification should influence the training of the network, and afterwards users may want to export the verified specification to other verification environments in order to prove a specification about a larger system that uses the network. Currently there is little consensus on how best to connect these different stages.
In this talk we will describe our proposed solution to this problem: the Vehicle specification system. Vehicle allows the user to write a single specification in a high-level human readable form, and the Vehicle compiler then compiles it down to different targets, including training frameworks, verifiers and interactive theorem provers. In this talk we will discuss the various design decisions involved in the specification language itself and hope to solicit feedback from the verification community.
(Submitted as Extended Abstract)
12:00
Differentiable Logics for Neural Network Verification
PRESENTER: Natalia Ślusarz
ABSTRACT. The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications has drawn attention to the importance of their verification. While verification is known to be computationally difficult theoretically, many techniques have been proposed for solving it in practice.
It has been observed in the literature that by default neural networks rarely satisfy the logical constraints that we want to verify. A good course of action is to train the given NN to satisfy said constraints prior to verification. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification.
Usually training with constraints is implemented by specifying a translation for a given formal logic language into loss functions. These loss functions are then used to train neural networks. Because for training purposes these functions need to be differentiable, these translations are called differentiable logics (DL).
This raises several research questions on the technical side of "training with constraints". What kind of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL viewed from the point of view of the resulting loss function? In this extended abstract we will discuss and answer these questions.
(Submitted as Extended Abstract)
11:00-12:30 Session 31B: Modelling with s(CASP) (GDE)
Session focused on the use of s(CASP) for modeling: three regular talks (20-minute presentations) and one short talk (10-minute presentation), each followed by 5 minutes of Q&A.
Location: Ullmann 310
11:00
Integration of Logical English and s(CASP)
PRESENTER: Galileo Sartor
ABSTRACT. This paper showcases the use of Logical English, a logic programming language that allows for expressing rules and explanations in a controlled form of natural language, which can be interpreted by the s(CASP) reasoner. It demonstrates the possibility of representing and reasoning with legal values, unknown facts and time with carefully selected expressions of English that can be easily understood without technical training. This research has been developed in the context of the CrossJustice Project.
11:25
Embedding s(CASP) in Prolog
PRESENTER: Jan Wielemaker
ABSTRACT. The original s(CASP) implementation is a stand-alone program implemented in Ciao Prolog. It reads the s(CASP) source using a dedicated parser, resolves the query embedded in the source and emits the results in a format dictated by command-line options. Typical applications require composing an s(CASP) program, solving a query and reasoning about the bindings, model and/or justification. This is often done in some external language, e.g., Python. In this paper we propose a closer integration with Prolog. The s(CASP) program is simply a Prolog program that has to respect certain constraints. The scasp library can be used to solve a query using s(CASP) semantics, making the bindings available in the normal Prolog way and providing access to the model and justification as Prolog terms. This way, we can exploit Prolog's power for manipulating (Prolog) terms when constructing the s(CASP) program and interpreting the results.
11:50
PRESENTER: Joaquin Arias
ABSTRACT. The formal representation of legal texts to automate reasoning about them is well known in the literature and has recently gained much attention thanks to the interest in so-called smart contracts and in autonomous decisions by public administrations [8,4,11]. For deterministic rules, there are several proposals, often based on logic-based programming languages [9,10]. However, none of the existing proposals are able to represent the ambiguity and/or administrative discretion present in contracts and/or applicable legislation, e.g., force majeure. This paper is an extended abstract of [3], where we present a framework, called s(LAW), that allows for modeling legal rules involving ambiguity, and supports reasoning and inferring conclusions based on them.
12:15
An s(CASP) In-Browser Playground based on Ciao Prolog
PRESENTER: Jose F. Morales
ABSTRACT. In recent years Web browsers are becoming closer and closer to full-fledged computing platforms. Ciao Prolog currently includes a browser-based playground which allows editing and running programs locally in the browser with no need for server-side interaction. The playground is built from reusable components, and allows easily embedding runnable Prolog programs in web pages and documents. These components can also be easily used for the development of specific, fully browser-based applications. The purpose of this paper is to present a browser-based environment for developing s(CASP) programs, based on the Ciao Prolog playground and its components. This s(CASP) playground thus runs locally on the browser with no need for interaction with a server beyond code download. After briefly introducing s(CASP) and Ciao Prolog, we provide an overview of the architecture of the Ciao playground, based on a compilation of the Ciao engine to the WebAssembly platform, describe the steps involved in its adaptation to create the s(CASP) playground, and present some of its capabilities. These include editing and running s(CASP) programs, sharing them, obtaining (sets of) answers, and visualizing and exploring explanations.
11:00-12:30 Session 31C (HoTT/UF)
Location: Ullmann 303
11:00
Towards Normalization of Simplicial Type Theory via Synthetic Tait Computability
11:30
Towards Directed Higher Observational Type Theory
12:00
The Compatibility of MF with HoTT
PRESENTER: Michele Contente
11:00-12:30 Session 31D: Invited talk and equivalence (IWC)
Location: Ullmann 306
11:00
Seven Confluence Criteria for Solving COPS #20
ABSTRACT. COPS #20 is a thought-provoking confluence problem for term rewriting, posed by Gramlich and Lucas (2006). Although the term rewrite system of the problem is confluent, it is beyond the realm of classical confluence criteria such as Knuth and Bendix' criterion (1970) and Huet's parallel closedness (1980). In this talk we will discuss various solutions to the problem, recalling powerful confluence methods developed in the last decade and a half.
12:00
Formalized Signature Extension Results for Equivalence
ABSTRACT. Conversion equivalence and normalization equivalence are important properties of two rewrite systems. We investigate how many constants are needed to reduce these properties to their ground versions for linear variable-separated rewrite systems. Our results are implemented in the decision tool FORT-h and formalized in Isabelle/HOL. The latter enables the validation of the proofs produced by the former in the certifier FORTify.
11:00-12:30 Session 31E (LFMTP)
Workshop papers
Location: Taub 3
11:00
A positive perspective on term representation: work in progress
PRESENTER: Jui-Hsuan Wu
ABSTRACT. We use the focused proof system LJF as a framework for describing term structures and substitution. Since the proof theory of LJF does not pick a canonical polarization for primitive types, two different approaches to term representation arise. When primitive types are given the negative bias, LJF proofs encode term structures as tree-like structures in a familiar fashion. In this situation, cut elimination also yields the familiar notion of substitution. On the other hand, when primitive types are given the positive bias, LJF proofs yield a structure in which explicit sharing of term structures is possible. In this situation, cut elimination yields a different notion of substitution. We illustrate these two approaches to term representation by applying them to the encoding of untyped λ-terms.
More details can be found at http://www.lix.polytechnique.fr/Labo/Dale.Miller/papers/lfmtp22-positive-perspective.pdf.
11:45
An Implementation of Set Theory with Pointed Graphs in Dedukti
ABSTRACT. Dedukti is a type-checker for the lambda-pi calculus modulo theory, a logical framework that allows the extension of conversion with user-defined rewrite rules. In this paper, we present the implementation of a version of Dowek-Miquel’s intuitionistic set theory in Dedukti. To do so, we adapt this theory - based on the concept of pointed graphs - from Deduction modulo theory to lambda-pi calculus modulo theory, and we formally write the proofs in Dedukti. In particular, this implementation requires the definition of a deep embedding of a certain class of formulas, as well as its interpretation in the theory.
More details can be found at https://hal.archives-ouvertes.fr/hal-03740004.
11:00-12:30 Session 31F (LogTeach)
Location: Ullmann 301
11:00
What Should Students of a Course in Logic Know at Its End?
ABSTRACT. See enclosed file
12:00
My Experience Teaching Logic in Undergraduate AI at NYU
ABSTRACT. I have taught a section on logic in my introductory undergraduate and graduate courses in AI for many years. Here, I discuss what I teach; how students react; where else mathematical logic is taught at NYU; and what topics I would teach in a one- or two-semester advanced undergraduate course.
11:00-12:30 Session 31G (Mentoring Workshop)
Location: Taub 7
11:00
Managing a life that also includes research
ABSTRACT. The life of a young researcher in the current academic environment can be highly demanding and stressful. The competition for job opportunities often causes early-stage researchers to prioritize productivity over well-being, work long hours, on weekends, on and off campus, and usually alone. In this very personal talk, I will offer advice on how to (try to) avoid the all-consuming nature of research, drawing from my personal experience and lessons learned from colleagues.
11:45
How logic-based approaches can be used in data management
11:00-12:30 Session 31H: Joint QBF Session (PC)
Location: Ullmann 309
11:00
Strategy Extraction and Proof
12:00
QCDCL with Cube Learning or Pure Literal Elimination – What is best? (II)
11:00-12:30 Session 31I (PCCR)
Location: Ullmann 203
11:00
The Parameterized Complexity of SAT
ABSTRACT. This talk will survey parameterized complexity results of the propositional satisfiability problem (SAT). We will cover parameters based on backdoors, decompositions, and hybrid parameters that combine both. After a general overview of results obtained over the last 20 years, we will discuss some recent results based on backdoor depth (ESA 2022) and twin-width (SAT 2022).
12:00
Parameterized Algorithmics and Counting: Treewidth in Practice
ABSTRACT. We discuss parameterized algorithms and an application to exact propositional model counting (MC). (W)MC asks for counting the (weighted) number of satisfying assignments of a propositional formula. The problem is known to be beyond NP and a prototypical representative of the class #P.
Various implementations that solve MC employ a SAT-based solving engine. We instead tackle the problem from a theoretical perspective, exploiting bounded treewidth of the input instances, for which an algorithm whose running time is singly exponential in the treewidth is known. The treewidth of an instance is at most the number of its variables, but often much smaller. While our approach is subject to these theoretical worst-case bounds, we illustrate how it can still fruitfully be used in combination with modern hardware that takes advantage of parallelism. Our results are encouraging in the sense that complex reasoning problems can be tackled by parameterized algorithms executed on the GPU or within database systems if instances have small treewidth (<40).
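As a concrete reference point for the (W)MC problem statement above, a brute-force weighted model counter fits in a few lines; this is only the problem definition, not the talk's treewidth-based algorithm, and the DIMACS-style literal encoding and uniform default weights are illustrative assumptions:

```python
# Brute-force weighted model counting: enumerate all assignments and sum the
# weights of satisfying ones. Exponential in the number of variables; the
# talk's treewidth-based DP is exponential only in the treewidth.
from itertools import product

def wmc(clauses, n_vars, weight=lambda var, val: 1.0):
    """Clauses are lists of nonzero ints (DIMACS style): 3 = x3, -3 = NOT x3."""
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assign = {v + 1: bits[v] for v in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            w = 1.0
            for v, val in assign.items():
                w *= weight(v, val)
            total += w
    return total

# (x1 OR x2) has 3 satisfying assignments over 2 variables.
print(wmc([[1, 2]], 2))  # 3.0
```

With unit weights this computes plain MC; a non-trivial `weight` function turns it into WMC as described in the abstract.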
11:00-12:30 Session 31J (PLP)
Location: Ullmann 305
11:00
On Projectivity in Markov Logic Networks
PRESENTER: Sagar Malhotra
ABSTRACT. Markov Logic Networks (MLNs) define a probability distribution on relational structures over varying domain sizes. Like most relational models, MLNs do not admit consistent marginal inference over varying domain sizes, i.e., marginal probabilities depend on the domain size. Furthermore, MLNs learned on a fixed domain do not generalize to domains of different sizes. In recent works, connections have emerged between domain size dependence, lifted inference, and learning from a sub-sampled domain. The central idea of these works is the notion of projectivity. The probability distributions ascribed by projective models render the marginal probabilities of sub-structures independent of the domain cardinality. Hence, projective models admit efficient marginal inference. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special class of models, namely Relational Block Models (RBMs). In terms of data likelihood, RBMs allow us to learn the best possible projective MLN in the two-variable fragment. Furthermore, RBMs also admit consistent parameter learning over sub-sampled domains.
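The projectivity property described above can be stated compactly; the notation here is a generic formulation of the standard definition, not necessarily the paper's:

```latex
% A family of distributions (P_n) on relational structures over domains of
% size n is projective if marginals over a fixed substructure do not depend
% on the domain size: for every pattern \tau over a subdomain S,
P_n\bigl(\omega\restriction_S = \tau\bigr)
  = P_m\bigl(\omega\restriction_S = \tau\bigr)
  \qquad \text{for all } n, m \ge |S|.
```

This is the sense in which projective models make marginal probabilities of sub-structures independent of the domain cardinality.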
11:30
Exploiting the Full Power of Pearl's Causality in Probabilistic Logic Programming
ABSTRACT. We introduce new semantics for acyclic probabilistic logic programs in terms of Pearl's functional causal models. Further, we show that our semantics generalize the classical distribution semantics and CP-logic. This enables us to establish all query types of functional causal models, namely probability calculus, predicting the effect of external interventions and counterfactual reasoning, within probabilistic logic programming. Finally, we briefly discuss the problems regarding knowledge representation and the structure learning task which result from the different semantics and query types.
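To illustrate the intervention query type mentioned above, here is a tiny Pearl-style functional causal model worked out by exact enumeration; the sprinkler example, variables, and probabilities are generic textbook assumptions, not the paper's semantics:

```python
# Tiny functional causal model: rain is exogenous (P = 0.3); the sprinkler's
# mechanism turns it on exactly when it does not rain; grass is wet if either
# holds. An intervention do(sprinkler = s) overrides the sprinkler mechanism.

def p_wet(do_sprinkler=None):
    """Exact P(wet), observationally or under do(sprinkler = do_sprinkler)."""
    total = 0.0
    for rain, p_rain in ((True, 0.3), (False, 0.7)):
        sprinkler = (not rain) if do_sprinkler is None else do_sprinkler
        wet = rain or sprinkler
        total += p_rain * wet
    return total

print(p_wet())       # 1.0  (the sprinkler runs whenever it does not rain)
print(p_wet(False))  # 0.3  (with the sprinkler forced off, only rain wets the grass)
```

The gap between the two numbers is exactly the effect of the external intervention, the second of the three query types (probability calculus, interventions, counterfactuals) that the new semantics supports.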
11:00-12:30 Session 31K: SAT and Parallel Solving (POS)
Location: Ullmann 311
11:00
Dinosat: A SAT Solver with Native DNF Support
PRESENTER: Markus Iser
ABSTRACT. In this paper we report our preliminary results with a new kind of SAT solver called Dinosat. Dinosat's input is a conjunction of clauses, at-most-one constraints and disjunctive normal form (DNF) formulas. The native support for DNF formulas is motivated by the application domain of SAT-based product configuration. A DNF formula can also be viewed as a generalization of a clause, i.e., a clause (disjunction of literals) is a special case of a DNF formula where each term (conjunction of literals) has exactly one literal. Similarly, we can generalize the classical resolution rule and use it to resolve two DNF formulas. Based on that, the CDCL algorithm can be modified to work with DNF formulas instead of just clauses. Using randomly generated formulas (with DNFs), we experimentally show that, in certain relevant scenarios, it is more efficient to solve these formulas with Dinosat than to translate them to CNF and use a state-of-the-art SAT solver. Another contribution of this paper is identifying the phase transition points for such formulas.
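The clause-as-DNF observation from the abstract can be made concrete in a few lines; the encoding and function names below are illustrative, not Dinosat's actual interface, and only the classical clause-level rule is shown (the paper's generalization operates on full DNF formulas):

```python
# Illustrative sketch of the clause/DNF relationship from the abstract.
# Literals are nonzero ints (DIMACS style): 3 means x3, -3 means NOT x3.

def clause_to_dnf(clause):
    """A clause (disjunction of literals) is the DNF whose terms are the
    single-literal conjunctions."""
    return [[lit] for lit in clause]

def resolve(c1, c2, pivot):
    """Classical resolution on `pivot`: (A or x) and (B or not x) give (A or B).
    The paper generalizes this rule from clauses to DNF formulas."""
    assert pivot in c1 and -pivot in c2
    return sorted({l for l in c1 if l != pivot} | {l for l in c2 if l != -pivot})

print(clause_to_dnf([1, -2, 3]))    # [[1], [-2], [3]]
print(resolve([1, 2], [-1, 3], 1))  # [2, 3]
```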
11:30
DPS: A Framework for Deterministic Parallel SAT Solvers
ABSTRACT. In this study, we propose a new framework for easily constructing efficient deterministic parallel SAT solvers, providing the delayed clause exchange technique introduced in ManyGlucose. This framework allows existing sequential SAT solvers to be parallelized with just as little effort as in the non-deterministic parallel solver framework such as PaInleSS. We show experimentally that parallel solvers built using this framework have performance comparable to state-of-the-art non-deterministic parallel SAT solvers while ensuring reproducible behavior.
12:00
Scalable Proof Producing Multi-Threaded SAT Solving with Gimsatul through Sharing instead of Copying Clauses
PRESENTER: Armin Biere
ABSTRACT. We give a first account of our new parallel SAT solver Gimsatul. Its key feature is to share clauses physically in memory instead of copying them, which is the method other state-of-the-art multi-threaded SAT solvers use to share clauses logically. Our approach keeps information about which literals are watched in a clause local to a solving thread, but shares the actual immutable literals of a clause globally among all solving threads. This design gives quite remarkable parallel scalability, allows aggressive clause sharing while keeping memory usage low, and produces more compact proofs.
11:00-12:30 Session 31L: Joint PC/QBF Session (QBF)
This session will be held jointly with the Proof Complexity (PC) Workshop.
Location: Ullmann 309
11:00
Strategy Extraction and Proof
ABSTRACT. Invited talk.
12:00
QCDCL with Cube Learning or Pure Literal Elimination - Part 2
ABSTRACT. Quantified conflict-driven clause learning (QCDCL) is one of the main approaches for solving quantified Boolean formulas (QBF). We formalise and investigate several versions of QCDCL that include cube learning and/or pure-literal elimination, and formally compare the resulting solving models via proof complexity techniques. Our results show that almost all of the QCDCL models are exponentially incomparable with respect to proof size (and hence solver running time), pointing towards different orthogonal ways how to practically implement QCDCL. This is a continuation of a previous talk with the same title. While in the previous talk we focused on the general techniques of showing QCDCL lower bounds, here we focus on the specifics pertaining to pure literal elimination and cube learning.
11:00-12:30 Session 31M: Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part II) (REAI)
Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22, Part II)
Location: Ullmann 102
11:00
Referring Expressions in Artificial Intelligence and Knowledge Representation Systems (Tutorial at KR-22)
ABSTRACT. The tutorial introduces the audience to the concept of referring expressions, formulae that can be used to communicate identities of otherwise abstract objects. The formalism provides foundations for a successful and unambiguous exchange of information about individuals between agents sharing common knowledge about such individuals, a task that is indispensable in most modern applications of knowledge representation and semantic technologies (tutorial web page).
11:00-12:30 Session 31N (TERMGRAPH)
Regular submissions, morning session
Location: Ullmann 101
11:00
Transformation of DPO Grammars into Hypergraph Lambek Grammars With The Conjunctive Kleene Star
ABSTRACT. We study how to embed well-known hypergraph grammars based on the double pushout (DPO) approach in the hypergraph Lambek calculus HL. It turns out that DPO rules can be naturally encoded by types of HL. However, this encoding is not enough to convert a DPO grammar into an equivalent grammar based on HL: we additionally need a logical operation that would allow making arbitrarily many copies of types. We develop such an operation called the conjunctive Kleene star and show that any DPO grammar can be converted into an equivalent HL-grammar enriched with this operation.
11:30
Greedily Decomposing Proof Terms for String Rewriting into Multistep Derivations by Topological Multisorting
ABSTRACT. We show a proof term in a string rewrite system can be mapped to its causal graph and that the latter is a unique representative of the permutation equivalence class of the former. We then map the causal graph back to a proof term of a special shape, a so-called greedy multistep reduction. Composing both transformations yields a simple and effective way of constructing the greedy multistep reduction of a proof term, and thereby of deciding permutation equivalence of proof terms in general, and of (multistep) reductions in particular.
12:00
A PBPO+ Graph Rewriting Tutorial
PRESENTER: Roy Overbeek
ABSTRACT. We provide a tutorial introduction to the algebraic graph rewriting formalism PBPO+. We show how PBPO+ can be obtained by composing a few simple building blocks. Along the way, we comment on how alternative design decisions lead to related formalisms in the literature, such as DPO.
11:00-12:30 Session 31P (TLLA-LINEARITY)
Location: Ullmann 302
11:00
Dissymetrical Linear Logic
ABSTRACT. This paper is devoted to designing computational systems of linear logic (i.e. systems in which, notably, the non-linear and structural phenomena which arise during the cut-elimination process are handled by specific modalities, the exponentials ! and ?). The systems designed are "intermediate" between Intuitionistic LL and Classical LL. Methodologically, the focus is put on how to break the symmetrical interdependency between ! and ? which prevails in Classical LL, without losing the computational properties (closure under cut-elimination, atomizability of axioms). Three main systems are designed (Dissymetrical LL, semi-functorial Dissymetrical LL, semi-specialized Dissymetrical LL), in each of which ! and ? play well-differentiated roles.
11:30
Parametric Chu Translation
ABSTRACT. Bellin translates multiplicative-additive linear logic to its intuitionistic fragment via the Chu construction, seemingly posing an alternative to negative translation. However, his translation is not sound in the sense that not all valid intuitionistic sequents in its image correspond to valid classical ones. By directly analyzing two-sided classical sequents, we develop a sound generalization thereof inspired by parametric negative translation that also handles the exponentials.
12:00
Peano Arithmetic and muMALL: An extended abstract
PRESENTER: Dale Miller
ABSTRACT. We propose to examine some of the proof theory of arithmetic by using two proof systems. A linearized version of arithmetic is available using muMALL, which is MALL plus the following logical connectives to treat first-order term structures: equality and inequality, first-order universal and existential quantification, and the least and greatest fixed point operators. The proof system muLK is an extension of muMALL in which contraction and weakening are permitted. It is known that muMALL has a cut-elimination result and is therefore consistent. We will show that muLK is consistent by embedding it into second-order linear logic. We also show that muLK contains Peano arithmetic and that, in a couple of different situations, muLK is conservative over muMALL. Finally, we show that a proof that a relation represents a total function can be turned into a proof-search-based algorithm to compute that function.
11:00-11:30 Session 31Q (VardiFest)
Location: Taub 1
11:00
Rewriting, Answering, and Losslessness: A Clarification by the “Four Italians”
ABSTRACT. The paper itself is an extended abstract.
11:10
Bisimulation Games Played in Fibered Categories
ABSTRACT. I propose to present our recent results [LICS'19, LICS'21] on capturing various bisimilarity notions (covering not only the conventional relational notion but also its quantitative extensions, such as probabilistic bisimulation and bisimulation metrics) in the language of category theory. The theory combines abstract category theory with concrete games, so-called codensity games played in categories, which I believe will be of interest to Moshe and the audience.
11:20
Capturing abscondings
ABSTRACT. In order to improve the effectiveness of law enforcement agencies, we address the problem of capturing absconders using graph theory, formal methods and game theory. We define a set of problems and outline an approach to solve them.
11:00-12:30 Session 31R: Doctoral Program (CP)
Location: Taub 4
11:00
Interdisciplinary Research -- Cost Function Network for Life Sciences
11:55
Automated SAT Problem Feature Extraction using Convolutional Autoencoders
PRESENTER: Marco Dalla
ABSTRACT. The Boolean Satisfiability Problem (SAT) was the first known NP-complete problem and has a very broad literature focusing on it. It has been applied successfully to various real-world problems, such as scheduling, planning and cryptography. SAT problem feature extraction plays an essential role in this field. SAT solvers are complex, fine-tuned systems that exploit problem structure. The ability to represent/encode a large SAT problem using a compact set of features has broad practical use in instance classification, algorithm portfolios, and solver configuration. The performance of these techniques relies on the ability of feature extraction to convey helpful information. Researchers often craft these features "by hand" to capture particular structures of the problem. Instead, in this paper, we extract features using semi-supervised deep learning. We train a convolutional autoencoder (AE) to compress the SAT problem into a limited latent space and reconstruct it, minimizing the reconstruction error. The latent space projection should preserve much of the structural features of the problem. We compare our approach to a set of features commonly used for algorithm selection. Firstly, we train classifiers on the projection to predict if the problems are satisfiable or not. If the compression conveys valuable information, a classifier should be able to make correct decisions. In the second experiment, we check if the classifiers can identify the original problem that was encoded as SAT. The empirical analysis shows that the autoencoder is able to represent problem features in a limited latent space efficiently, as well as convey more information than current feature extraction methods.
12:15
Selecting SAT Encodings for Pseudo-Boolean and Linear Integer Constraints
ABSTRACT. In this extended abstract, we summarise the work from our main conference paper. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however, we obtain better performance with a new set of features designed specifically for pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes.
12:18
Peel-and-Bound: Generating Stronger Relaxed Bounds with Multivalued Decision Diagrams
PRESENTER: Isaac Rudich
ABSTRACT. Decision diagrams are an increasingly important tool in cutting-edge solvers for discrete optimization. However, the field of decision diagrams is relatively new, and is still incorporating the library of techniques that conventional solvers have had decades to build. We drew inspiration from the warm-start technique used in conventional solvers to address one of the major challenges faced by decision diagram based methods. Decision diagrams become more useful the wider they are allowed to be, but also become more costly to generate, especially with large numbers of variables. We present a method of peeling off a sub-graph of previously constructed diagrams and using it as the initial diagram for subsequent iterations that we call peel-and-bound. We test the method on the sequence ordering problem, and our results indicate that our peel-and-bound scheme generates stronger bounds than a branch-and-bound scheme using the same propagators, and at significantly less computational cost.
12:21
Solving the Constrained Single-Row Facility Layout Problem with Decision Diagrams (Extended Abstract)
PRESENTER: Vianney Coppé
ABSTRACT. This paper presents two exact optimization models for the Constrained Single-Row Facility Layout Problem. It is a linear arrangement problem considering departments in a facility with given lengths and traffic intensities. The first approach is an extension of the state-of-the-art mixed-integer programming model for the unconstrained problem with the additional constraints. The second one is a decision diagram-based branch-and-bound that takes advantage of the recursive nature of the problem through a dynamic programming model. The computational experiments show that both approaches significantly outperform the only mixed-integer programming model in the literature.
12:24
CNF Encodings of Binary Constraint Trees (Extended Abstract)
PRESENTER: Ruiwei Wang
ABSTRACT. Ordered Multi-valued Decision Diagrams (MDDs) have been shown to be useful to represent finite domain functions/relations. For example, various constraints can be modelled with MDD constraints. Recently, a new representation called Binary Constraint Tree (BCT), which is a (special) tree structure binary Constraint Satisfaction Problem, has been proposed to encode MDDs and shown to outperform existing MDD constraint propagators in Constraint Programming solvers. BCT is a compact representation, and it can be exponentially smaller than MDD for representing some constraints. Here, we also show that BCT is compact for representing non-deterministic finite state automaton (NFA) constraints. In this paper, we investigate how to encode BCT into CNF form, making it suitable for SAT solvers. We present and investigate five BCT CNF encodings. We compare the propagation strength of the BCT CNF encodings and experimentally evaluate the encodings on a range of existing benchmarks. We also compare with seven existing CNF encodings of MDD constraints. Experimental results show that the CNF encodings of BCT constraints can outperform those of MDD constraints on various benchmarks.
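For readers unfamiliar with CNF encodings of compiled constraint representations, here is a minimal sketch of the classic tuple-selector ("direct") encoding of a table constraint. It is not one of the five BCT encodings studied in the paper; it only illustrates the general idea of turning an extensional constraint into clauses via one fresh selector variable per allowed tuple.

```python
# Classic direct encoding of a table constraint into CNF (illustrative;
# NOT one of the paper's BCT encodings). Each allowed tuple gets a fresh
# selector variable s_t; s_t implies the tuple's values, and at least
# one selector must hold.

from itertools import count

def encode_table(allowed, var_ids, fresh):
    """allowed: tuples of 0/1 values; var_ids: SAT variable per CSP
    variable; fresh: generator of fresh SAT variable ids.
    Returns a list of clauses (lists of DIMACS-style literals)."""
    clauses, selectors = [], []
    for t in allowed:
        s = next(fresh)
        selectors.append(s)
        for x, val in zip(var_ids, t):
            clauses.append([-s, x if val == 1 else -x])  # s_t -> value
    clauses.append(selectors[:])  # at least one tuple is selected
    return clauses

fresh = count(3)  # variables 1 and 2 are the CSP variables
cnf = encode_table([(0, 1), (1, 0)], [1, 2], fresh)
```

Compact representations such as MDDs and BCTs avoid the tuple explosion this naive encoding suffers from, which is exactly what the encodings compared in the paper are designed to exploit.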
11:35-12:05 Session 32 (VardiFest)
Location: Taub 1
11:35
From Kochen-Specker to Feder-Vardi
ABSTRACT. We point out the surprising connections which have recently been observed between the famous Kochen-Specker theorem in the foundations of quantum mechanics, and algorithmic questions relating to constraint satisfaction and the celebrated Feder-Vardi Dichotomy Conjecture (recently proved by Bulatov and Zhuk), and to the Weisfeiler-Leman approximations to structure isomorphism.
11:45
Data Complexity and Expressive Power of Ontological Reasoning Formalisms
ABSTRACT. In his 1982 landmark paper “The Complexity of Relational Query Languages” [7], Moshe Vardi defined and studied the concept of data complexity of relational query languages, which is the complexity of evaluating a fixed query in the language as a function of the size of the database. Vardi also defined the notion of expression complexity (fixed database, query as input), now often called “program complexity”, and the combined complexity (where both the database and the query constitute the input). Data complexity has become the standard method for assessing the complexity of query languages. Vardi [7] analysed various query languages and showed that fixed-point queries are complete for PTIME in data complexity, by a proof from which the same result for Datalog follows. He noticed that “the expression complexity of the investigated languages is usually one exponential higher than the data complexity” [7]. Vardi also notes that over ordered structures fixed-point queries (and implicitly Datalog queries) capture PTIME, which was shown independently by Immerman [6], and more explicitly for the second-order Horn fragment corresponding to Datalog by Grädel [5]. The first part of the talk will give a short overview of these results.
In the second part, I briefly address the complexity and expressive power of ontological reasoning formalisms such as ontology-based data access via description logics [3] or via variants of Datalog [4,1,2]. I will essentially illustrate two points. First, rather than a typical single-exponential jump from data to expression (or combined) complexity for classical logical query languages addressed by Vardi, we now typically have a double-exponential jump. Second, to better understand the expressive power of ontological reasoning formalisms where, in addition to a database (or ABox) and a query, there is an ontology (or TBox or Datalog± program), it is convenient to consider a refined concept of expressive power studied in [1,2], which is defined by the set of Boolean queries that can be expressed when keeping ontologies fixed.
References
[1] Marcelo Arenas, Georg Gottlob, and Andreas Pieris. Expressive languages for querying the semantic web. ACM Trans. Database Syst., 43(3):13:1–13:45, 2018.
[2] Gerald Berger, Georg Gottlob, Andreas Pieris, and Emanuel Sallinger. The space-efficient core of Vadalog. ACM Trans. Database Syst., 47(1), April 2022.
[3] Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, Antonella Poggi, Mariano Rodriguez-Muro, Riccardo Rosati, Marco Ruzzi, and Domenico Fabio Savo. The MASTRO system for ontology-based data access. Semantic Web, 2(1):43–53, 2011.
[4] Georg Gottlob, Thomas Lukasiewicz, and Andreas Pieris. Datalog+/-: Questions and answers. In 14th Intl. Conf. on the Principles of Knowledge Representation and Reasoning (KR'14), 2014.
[5] Erich Grädel. Capturing complexity classes by fragments of second-order logic. Theoretical Computer Science, 101(1):35–57, 1992.
[6] Neil Immerman. Relational queries computable in polynomial time (extended abstract). In Proceedings of STOC'82, May 5-7, 1982, San Francisco, California, USA.
[7] Moshe Y. Vardi. The complexity of relational query languages. In Proc. of STOC'82, 1982.
11:55
Logic-driven approaches for smart, safe and energy-efficient aviation
ABSTRACT. In this talk we present some of the many examples where Prof. Moshe Vardi’s research has enabled progress in solving some of the most challenging technical problems present in aviation. We touch upon a subset of use-cases from knowledge acquisition, advanced reasoning, planning, environment health and safety, and energy efficiency.
12:10-12:40 Session 33 (VardiFest)
Location: Taub 1
12:10
Divide-and-Conquer Determinization for Büchi Automata
ABSTRACT. The determinization of a nondeterministic Büchi automaton (NBA) is a fundamental construction of automata theory, with applications to probabilistic verification and reactive synthesis. The standard determinization constructions, such as the ones based on the Safra-Piterman approach, work on the whole NBA. In this work we propose a divide-and-conquer determinization approach. To this end, we first classify the strongly connected components (SCCs) of the given NBA as inherently weak, deterministic accepting, and nondeterministic accepting. We then present how to determinize each type of SCC *independently* from the others; this results in an easier handling of the determinization algorithm that takes advantage of the structure of that SCC. Once all SCCs have been determinized, we show how to compose them so as to obtain the final equivalent deterministic Emerson-Lei automaton, which can be converted into a deterministic Rabin automaton without blow-up of states and transitions. We implement our algorithm in a prototype tool named ourDC and empirically evaluate it against state-of-the-art tools on a large set of benchmarks from the literature. The experimental results show that our prototype ourDC outperforms Spot and Owl regarding the number of states and transitions.
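The first step of the divide-and-conquer scheme, decomposing the automaton's transition graph into SCCs, is standard graph machinery. The sketch below is a plain Tarjan SCC computation on a toy transition graph, for intuition only; it is not code from the ourDC tool.

```python
# Sketch of the decomposition step: strongly connected components of an
# automaton's transition graph via Tarjan's algorithm (illustrative;
# not the ourDC implementation).

def tarjan_sccs(graph):
    index, low, on_stack, stack = {}, {}, set(), []
    sccs, counter = [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop(); on_stack.discard(w); scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# Tiny automaton graph: states 0 and 1 form a cycle; 2 is a self-loop sink.
print(tarjan_sccs({0: [1], 1: [0, 2], 2: [2]}))
```

In the paper each SCC found this way is then classified (inherently weak, deterministic accepting, or nondeterministic accepting) and determinized with a construction specialized to its class.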
12:20
Little Tricky Logic: Misconceptions in the Understanding of LTL
PRESENTER: Ben Greenman
ABSTRACT. We have been studying LTL misconceptions with multiple populations to determine *in what ways* LTL is tricky and to decide *what we can do* to address the issues. We propose an interactive remote talk that aims to demonstrate the LTL misconceptions and expert blind spots that we have found.
12:30
Comments from Giuseppe, Kuldeep, and Kristin
12:30-14:00 Lunch Break
Lunches will be held in Taub hall and in The Grand Water Research Institute.
14:00-15:30 Session 34A (ABR)
Tutorial: Assumption-Based Nonmonotonic Reasoning - Alexander Bochman
Location: Ullmann 104
Tutorial: Assumption-Based Nonmonotonic Reasoning (Part 1)
ABSTRACT. The tutorial provides an introduction to Assumption-Based Reasoning (ABR) which is argued to constitute the core of nonmonotonic reasoning in AI. Webpage: https://sites.google.com/view/kr2022-abr/home
14:00-15:30 Session 34B (FoMLAS)
FoMLAS Session 7
Location: Ullmann 201
14:00
Neural Network Verification with Proof Production
PRESENTER: Omri Isac
ABSTRACT. Deep neural networks (DNNs) are increasingly being employed in safety-critical systems, and there is an urgent need to guarantee their correctness. Consequently, the verification community has devised multiple techniques and tools for verifying DNNs. When DNN verifiers discover an input that triggers an error, that is easy to confirm; but when they report that no error exists, there is no way to ensure that the verification tool itself is not flawed. As multiple errors have already been observed in DNN verification tools, this calls the applicability of DNN verification into question. In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors. Our proof production is based on an efficient adaptation of the well-known Farkas' lemma, combined with mechanisms for handling piecewise-linear functions and numerical precision errors. As a proof of concept, we implemented our technique on top of the Marabou DNN verifier. Our evaluation on a safety-critical system for airborne collision avoidance shows that proof production succeeds in almost all cases, and entails only a small overhead.
14:30
Efficient Neural Network Verification using Branch and Bound
ABSTRACT. In this talk, I will describe two recent Branch and Bound (BaB) verifiers developed by our group to ensure different safety properties of neural networks. The BaB verifiers involve two main steps: (1) recursively splitting the original verification problem into easier independent subproblems by splitting input or hidden neurons; and (2) for each split subproblem, using fast but incomplete bound propagation techniques to compute sound estimated bounds for the outputs of the target neural network. One of the key limitations of existing BaB verifiers is computing tight relaxations of activation-function nonlinearities (e.g., ReLU). Our recent works (α-CROWN and β-CROWN) introduce a primal-dual approach and jointly optimize the corresponding Lagrangian multipliers for each ReLU with gradient ascent. Such an approach is highly parallelizable and avoids calls to expensive LP solvers. Our verifiers not only provide tighter output estimations than existing bound propagation methods but also can fully leverage GPUs with massive parallelization. Our verifier, α,β-CROWN (alpha-beta-CROWN), won the second International Verification of Neural Networks Competition (VNN-COMP 2021) with the highest total score.
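To make the "fast but incomplete bound propagation" step concrete, here is a minimal interval bound propagation (IBP) sketch for a tiny ReLU network in pure Python. IBP is the loosest member of this family; the CROWN-style methods in the talk compute much tighter linear relaxations, so this is only a simplified stand-in, and the network weights below are made up for illustration.

```python
# Minimal interval bound propagation through a 2-2-1 ReLU network
# (illustrative; much looser than the CROWN-family bounds in the talk).

def affine_bounds(lo, hi, W, b):
    """Propagate per-neuron intervals through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            l += w * (xl if w >= 0 else xh)   # pick the worst-case endpoint
            h += w * (xh if w >= 0 else xl)
        out_lo.append(l); out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Input box x in [-1, 1]^2, then affine -> ReLU -> affine.
lo, hi = [-1.0, -1.0], [1.0, 1.0]
lo, hi = affine_bounds(lo, hi, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0])
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, [[1.0, 1.0]], [0.0])
print(lo, hi)   # sound (possibly loose) bounds on the network output
```

A BaB verifier would now split an input dimension or an unstable ReLU, recompute bounds on each subproblem, and recurse until the property is decided.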
Bio: Suman Jana is an associate professor in the department of computer science and the data science institute at Columbia University. His primary research interest is at the intersections of computer security and machine learning. His research has received six best paper awards, a CACM research highlight, a Google faculty fellowship, a JPMorgan Chase Faculty Research Award, an NSF CAREER award, and an ARO young investigator award.
14:00-15:30 Session 34C: s(CASP) extensions and applications I (GDE)
Session focused on the most recent applications of s(CASP) and the description of the functionalities incorporated in s(CASP) that have made them possible: 3 regular talks (20-minute presentations) and a short talk (10-minute presentation), each followed by 5 minutes of Q&A.
Location: Ullmann 310
14:00
Automating Defeasible Reasoning in Law with Answer Set Programming
PRESENTER: Avishkar Mahajan
ABSTRACT. The paper studies defeasible reasoning in rule-based systems, in particular about legal norms and contracts. We identify rule modifiers that specify how rules interact and how they can be overridden. We then define rule transformations that eliminate these modifiers, leading in the end to a translation of rules to formulas. For reasoning with and about rules, we contrast two approaches, one in a classical logic with SMT solvers, which is only briefly sketched, and one using non-monotonic logic with Answer Set Programming solvers, described in more detail.
14:25
Unmanned Aerial Vehicle compliance checking using Goal-Directed Answer Set Programming
ABSTRACT. We present a novel application of Goal-Directed Answer Set Programming that digitizes the model aircraft operator's compliance verification against the Academy of Model Aeronautics (AMA) safety code. The AMA safety code regulates how AMA flyers operate Unmanned Aerial Vehicles (UAVs) for limited recreational purposes. Drones and their operators are subject to various rules before and after the operation of the aircraft to ensure safe flights. In this paper, we leverage Goal-Directed Answer Set Programming to encode the AMA safety code and automate compliance checks. To check compliance, we use s(CASP), a goal-directed ASP engine. By using s(CASP), operators can easily check for violations and obtain a justification tree explaining the cause of the violations in human-readable natural language. We develop a front-end questionnaire interface that accepts various conditions and use the backend s(CASP) engine to evaluate whether the conditions adhere to the regulations. We also leverage the s(CASP) implementation in SWI-Prolog, where SWI-Prolog exposes the reasoning capabilities of s(CASP) as a REST service. To the best of our knowledge, this is the first application of ASP in the AMA and Avionics Compliance and Certification space.
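The shape of the compliance check can be conveyed with a deliberately tiny Python stand-in: rules paired with human-readable explanations, and a checker that returns every violated rule. The rule names and thresholds below are hypothetical; the real system encodes the actual AMA safety code in s(CASP) and gets full justification trees, not flat explanation strings.

```python
# Toy rule-based compliance check (hypothetical rules and thresholds;
# the actual system encodes the AMA safety code in s(CASP)).

RULES = [
    ("max_altitude", lambda f: f["altitude_ft"] <= 400,
     "UAV must fly at or below 400 ft"),
    ("daylight_only", lambda f: f["daylight"],
     "Recreational flights must occur in daylight"),
]

def check_compliance(flight):
    """Return (rule, explanation) for every violated rule."""
    return [(name, why) for name, ok, why in RULES if not ok(flight)]

violations = check_compliance({"altitude_ft": 550, "daylight": True})
print(violations)
```

What s(CASP) adds over such a hand-rolled checker is that the rules are declarative, defaults and exceptions compose, and the engine derives the explanation tree automatically instead of requiring one canned string per rule.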
14:50
Symbolic Reinforcement Learning Framework with Incremental Learning of Rule-based Policy
PRESENTER: Elmer Salazar
ABSTRACT. In AI research, Relational Reinforcement Learning (RRL) is a widely discussed domain that combines reinforcement learning with relational or inductive learning. One of the key challenges of inductive learning through rewards and actions is to learn relations incrementally; in other words, to have an agent closely mimic the human learning process, in which we humans start with a very naive belief about a concept and gradually update it over time to a more concrete hypothesis. In this paper, we address this challenge and show that an automatic theory revision component can be developed efficiently that updates the existing hypothesis based on the rewards the agent collects by applying it. We present a symbolic reinforcement learning framework with the automatic theory revision component for incremental learning. This theory revision component would not be possible to build without the help of a goal-directed execution engine of answer set programming (ASP): s(CASP). The current work demonstrates a proof of concept of the RL framework, and work on it is ongoing.
15:15
LTL Model Checking using Coinductive Answer Set Programming
ABSTRACT. We present a model checker for Linear Temporal Logic using Goal-Directed Answer Set Programming under Costable model semantics (CoASP). Costable model semantics allows for positive loops to succeed, unlike Stable model semantics where positive loops fail. Therefore, by using the Costable model semantics, LTL formulas involving the G and R operator can be proved coinductively.
14:00-15:30 Session 34D (HoTT/UF)
Location: Ullmann 303
14:00
TBA
15:00
Semantics for two-dimensional type theory
PRESENTER: Benedikt Ahrens
14:00-15:30 Session 34E: Conditional rewriting (IWC)
Location: Ullmann 306
14:00
On local confluence of conditional rewrite systems
ABSTRACT. We characterize local confluence of *conditional rewrite systems* à la Huet, i.e., as the joinability of a set of conditional pairs including the usual conditional critical pairs and a new kind of pairs we call *conditional variable pairs*.
14:30
A Critical Pair Criterion for Level-Commutation of Conditional Term Rewriting Systems
PRESENTER: Takahito Aoto
ABSTRACT. We introduce level-commutation of conditional term rewriting systems (CTRSs) that extends the notion of level-confluence, in a way similar to extending confluence to commutation. We show a criterion for level-commutation of oriented CTRSs, which generalizes the one for commutation of term rewriting systems in (Toyama, 1987). As a corollary, we obtain a criterion of level-confluence of oriented CTRSs which extends the one in (Suzuki et al., 1995).
15:00
Proving Confluence with CONFident
PRESENTER: Raúl Gutiérrez
ABSTRACT. This paper describes the proof framework used in CONFident, a tool that handles different types of systems (term rewriting systems, context-sensitive term rewriting systems, and conditional term rewriting systems), different types of tasks (checking joinability of critical pairs, termination of systems, feasibility of reachability problems, or deducibility), and different techniques for proving confluence (including modular decompositions, transformations, etc.).
14:00-15:00 Session 34F (LFMTP)
Frank Pfenning special session: invited talk
Location: Taub 3
14:00
A modal analysis of dependently typed metaprogramming
ABSTRACT. Metaprogramming is the art of writing programs that produce or manipulate other programs. This opens the possibility to eliminate boilerplate code and exploit domain-specific knowledge to build high-performance programs. While this widely used concept is almost as old as programming itself, it has been surprisingly challenging to extend to logical frameworks and type-theoretic proof assistants.
In this talk, we present MINTS, a modal intuitionistic type theory which supports dependently typed multi-staged programming in the spirit of Scheme or Racket's quote-unquote style. As MINTS is dependently typed, we can not only specify, generate and share code across multiple stages, but also reason about multi-staged programs and prove them correct in MINTS. Theoretically, MINTS extends the Kripke-style modal lambda-calculus by Pfenning and Davies which serves as a logical foundation for simply-typed multi-staged programming to a full Martin-Loef type theory with a cumulative hierarchy of universes. This will allow us to exploit the full potential of metaprogramming without sacrificing reliability of and trust in the code we are producing and running. In addition, it provides a fresh perspective towards internalizing macros or tactics in proof assistants based on type theories.
This is joint work with Jason Z. Hu and Junyoung Jang.
14:00-15:30 Session 34G (LogTeach)
Location: Ullmann 301
14:00
Possible Desiderata for Logic in the CS Curriculum
ABSTRACT. Mathematical (or formal) logic is extremely important in computer science (yet another example, along with number theory, of an already extant mathematical field’s having extreme applicability to the "newer" field of Computer Science).
As the Workshop participants are well aware, to incorporate logic into the CS curriculum has been very challenging; there have been sporadic efforts for decades. But, as noted in the Call for Participation, if anything, the situation has worsened over the years. Some of the following will be suggested attempts to lessen this logjam; others will be oriented toward logic content.
Caveat: virtually everything following is based on (likely incomplete) knowledge of academia in the United States. Hence, specific items may or may not have relevance in other countries.
15:00
Teaching Logic for Computer Science Students: Proof Assistants and Related Tools
ABSTRACT. In the last decade we have focused our main logic courses on proof assistants and related tools. We find that the modern computer science curriculum requires a focus on applications instead of just pen-and-paper proofs. Notably, we teach the metatheory of logic using tools with formalizations in proof assistants like Isabelle such that we have both implementations and theorems about them.
14:00-15:30 Session 34H (Mentoring Workshop)
Location: Taub 7
14:00
ABSTRACT. When pursuing an academic career, one is often confronted with contradictory expectations and a lot of uncertainty. Understanding some of the processes in academia and different views different people may take can help a lot with finding one's own way. This talk is targeted at PhD students and early-career post-docs. It will focus on some key stages in an academic career, and what the important decisions are that one should make or at least consider for oneself at a certain stage.
14:45
From PhD to industry: A recent graduate’s perspective
ABSTRACT. To a PhD student who deeply focuses on academic research in their day-to-day work, moving to industry may seem unattainable. As a recent graduate, I will share my personal experience of applying for internships, doing coding interviews, working in a startup, as well as working in a big company. I will also give an overview of the project I am currently working on as an applied scientist in the Prime Video Automated Reasoning team.
14:00-15:30 Session 34I (PC)
Location: Ullmann 309
14:00
Hard Inputs from Kolmogorov Randomness---Can Merlin Tame Open Problems?
ABSTRACT. We consolidate two widely believed conjectures about tautologies---no optimal proof system exists, and most require superpolynomial size proofs in any system---into a $p$-isomorphism-invariant condition satisfied by all paddable $\textbf{coNP}$-complete languages or none. The condition is: for any Turing machine (TM) $M$ accepting the language, $\textbf{P}$-uniform input families requiring superpolynomial time by $M$ exist (equivalent to the first conjecture) and appear with positive upper density in an enumeration of input families (implies the second). In that case, no such language is easy on average (in $\textbf{AvgP}$) for a distribution applying non-negligible weight to the hard families.
The hardness of proving tautologies and theorems is likely related. Motivated by the fact that arithmetic sentences encoding ``string $x$ is Kolmogorov random'' are true but unprovable with positive density in a finitely axiomatized theory $\mathcal{T}$ (Calude and Jürgensen), we conjecture that any propositional proof system requires superpolynomial size proofs for a dense set of $\textbf{P}$-uniform families of tautologies encoding ``there is no $\mathcal{T}$ proof of size $\leq t$ showing that string $x$ is Kolmogorov random''. This implies the above condition.
The conjecture suggests that there is no optimal proof system because undecidable theories help prove tautologies and do so more efficiently as axioms are added, and that constructing hard tautologies seems difficult because it is impossible to construct Kolmogorov random strings. Similar conjectures that computational blind spots are manifestations of noncomputability would resolve other open problems.
14:30
Simulations between proof systems
ABSTRACT. The problem of the existence of a p-optimal propositional proof system is one of the central open problems in proof complexity.
The goal of this work is to study the restricted case of this problem, namely, the case of strong reductions. We also introduce the notion of bounded proof system and study the connection between optimality and automatizability for bounded proof systems.
15:00
Are Hitting Formulas Hard for Resolution?
PRESENTER: Tomáš Peitl
ABSTRACT. Hitting formulas are a peculiar class of propositional CNF formulas. Not only is their satisfiability decidable in polynomial time, even their models can be counted in closed form. In contrast, other tractable classes, like 2-SAT and Horn-SAT, usually have algorithms based on resolution, and model counting remains hard for them. On the flip side, those resolution-based algorithms usually easily imply an upper bound on resolution complexity, which is missing for hitting formulas. Are hitting formulas hard for resolution? In this paper we take the first steps towards answering this question. We show that the resolution complexity of hitting formulas is dominated by so-called irreducible hitting formulas. However, constructing large irreducible formulas is non-trivial, and it is not even known whether infinitely many exist. Building upon our theoretical results, we implement an efficient algorithm on top of the Nauty software package to enumerate all irreducible unsatisfiable hitting formulas with up to 14 clauses, and we determine the exact resolution complexity of those with up to 13 clauses by extending an existing SAT encoding.
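The closed-form model count mentioned in the abstract follows because in a hitting formula any two clauses clash on some variable, so the sets of assignments falsified by each clause are pairwise disjoint. A short sketch (clauses as sets of DIMACS-style literals) checks the hitting property and applies the count 2^n − Σ_C 2^(n−|C|):

```python
# Hitting formulas: every pair of clauses clashes on some variable, so
# each clause falsifies a disjoint set of assignments and the model
# count is 2^n - sum over clauses of 2^(n - |C|). Illustrative sketch.

from itertools import combinations

def is_hitting(clauses):
    return all(any(-lit in d for lit in c)
               for c, d in combinations(clauses, 2))

def hitting_model_count(clauses, n_vars):
    assert is_hitting(clauses)
    return 2 ** n_vars - sum(2 ** (n_vars - len(c)) for c in clauses)

# Over variables 1, 2: (x1 v x2) & (~x1 v x2) & (~x2) -- unsatisfiable.
f = [{1, 2}, {-1, 2}, {-2}]
print(is_hitting(f), hitting_model_count(f, 2))
```

An unsatisfiable hitting formula is exactly one whose falsified-assignment sets tile the whole cube, i.e. the count above comes out to zero; those are the instances whose resolution complexity the paper studies.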
14:00-15:30 Session 34J (PCCR)
Location: Ullmann 203
14:00
Decompositions and algorithms for interpretations of sparse graphs
ABSTRACT. The first-order model checking problem for finite graphs asks, given a graph G and a first-order sentence phi as input, to decide whether phi holds on G. While we do not expect that there is an efficient (fpt with respect to the formula size) algorithm which works on the class of all finite graphs, such algorithms have been proven to exist for various structurally well-behaved classes of graphs (graphs of bounded degree, planar graphs, unit interval graphs, etc.). Identifying graph classes for which an fpt model checking algorithm exists has been an active area of research for the past 25 years.
After the existence of an efficient model checking algorithm was shown for classes of sparse graphs in 2014 by Grohe, Kreutzer and Siebertz, the attention gradually turned to the more general setting of graph classes which can be obtained from sparse graphs via interpretations/transductions. This program has been initiated in 2016, when the existence of an fpt algorithm for the first-order model checking problem was shown for graph classes interpretable in graphs of bounded degree. After this, there followed several results about the structure of graphs interpretable in sparse graphs, but despite the efforts of several groups of researchers, no positive algorithmic result has been achieved until very recently. In the talk we will review the current status and recent developments regarding this problem, and in particular we will present a fixed-parameter tractable algorithm for the first-order model checking on interpretations of graph classes with bounded local treewidth (notably, this includes interpretations of planar graphs).
15:00
Tractable Abstract Argumentation via Backdoor-Treewidth
PRESENTER: Matthias König
ABSTRACT. Argumentation frameworks (AFs) are a core formalism in the field of formal argumentation. As most standard computational tasks regarding AFs are hard for the first or second level of the Polynomial Hierarchy, a variety of algorithmic approaches to achieve manageable runtimes have been considered in the past. Among them, the backdoor-approach and the treewidth-approach turned out to yield fixed-parameter tractable fragments. However, many applications yield high parameter values for these methods, often rendering them infeasible in practice. We introduce the backdoor-treewidth approach for abstract argumentation, combining the best of both worlds with a guaranteed parameter value that does not exceed the minimum of the backdoor- and treewidth-parameter. In particular, we formally define backdoor-treewidth and establish fixed-parameter tractability for standard reasoning tasks of abstract argumentation. Moreover, we provide systems to find and exploit backdoors of small width, and conduct systematic experiments evaluating the new parameter.
14:00-15:30 Session 34K (PLP)
Location: Ullmann 305
14:00
Semantics for Hybrid Probabilistic Logic Programs with Function Symbols: Technical Summary
PRESENTER: Fabrizio Riguzzi
ABSTRACT. Hybrid probabilistic logic programs extend probabilistic logic programs by adding the possibility to manage continuous random variables. Despite the maturity of the field, a semantics that unifies discrete and continuous random variables and function symbols was still missing. In this paper, we summarize the main concepts behind a newly proposed semantics for hybrid probabilistic logic programs with function symbols.
14:30
Statistical Statements in Probabilistic Logic Programming
PRESENTER: Fabrizio Riguzzi
ABSTRACT. Probabilistic Logic Programs under the distribution semantics (PLPDS) do not allow statistical probabilistic statements of the form "90% of birds fly", which Halpern defined as Type 1 statements. In this paper, we add this kind of statement to PLPDS and introduce the PASTA (Probabilistic Answer set programming for STAtistical probabilities) language. We translate programs in our new formalism into probabilistic answer set programs under the credal semantics. This approach differs from previous proposals, such as the one based on probabilistic conditionals: instead of choosing a single model by making the maximum entropy assumption, we take into consideration all models and assign probability intervals to queries. In this way we refrain from making assumptions and obtain a more neutral framework. We also propose an inference algorithm and compare it with an existing solver for probabilistic answer set programs on a number of programs of increasing size, showing that our solution is faster and can deal with larger instances. --- NOTE: this is the original paper submitted and accepted at the 16th International Conference on Logic Programming and Non-monotonic Reasoning (LPNMR). It will be published in the proceedings of LPNMR so it must not be published on CEUR. ---
14:00-15:30 Session 34L: Proofs I (POS)
Location: Ullmann 311
14:00
TBUDDY: A Proof-Generating BDD Package
ABSTRACT. The TBUDDY library enables the construction and manipulation of reduced, ordered binary decision diagrams (BDDs). It extends the capabilities of the BUDDY BDD package to support trusted BDDs, where the generated BDDs are accompanied by proofs of their logical properties. These proofs are expressed in a standard clausal framework, for which a variety of proof checkers are available.
Building on TBUDDY via its application-program interface (API) enables developers to implement automated reasoning tools that generate correctness proofs for their outcomes. In some cases, BDDs serve as the core reasoning mechanism for the tool, while in other cases they provide a bridge from the core reasoner to proof generation. A Boolean satisfiability (SAT) solver based on TBUDDY can achieve polynomial scaling when generating unsatisfiability proofs for a number of problems that yield exponentially-sized proofs with standard solvers. It performs particularly well for formulas containing parity constraints, where it can employ Gaussian elimination to systematically simplify the constraints.
14:30
Combining CDCL, Gauss-Jordan Elimination, and Proof Generation
PRESENTER: Mate Soos
ABSTRACT. Traditional Boolean satisfiability (SAT) solvers based on the conflict-driven clause-learning (CDCL) framework fare poorly on formulas involving large numbers of parity constraints. The CryptoMiniSat solver augments CDCL with Gauss-Jordan elimination to greatly improve performance on these formulas. Integrating the TBUDDY proof-generating BDD library into CryptoMiniSat enables it to generate unsatisfiability proofs when using Gauss-Jordan elimination. These proofs are compatible with standard, clausal proof frameworks.
15:00
Towards the shortest DRAT proof of the Pigeonhole Principle
PRESENTER: Marijn Heule
ABSTRACT. The Pigeonhole Principle (PHP) has been heavily studied in automated reasoning, both theoretically and in practice. Most solvers have exponential runtime and proof length, while some specialized techniques achieve polynomial runtime and proof length. Several decades ago, Cook manually constructed O(n^4) extended resolution proofs, where n denotes the number of pigeons. Existing automated techniques only surpass Cook's proofs in similar proof systems for large n. We construct the shortest known proofs of PHP in the standard proof format of modern SAT solving, DRAT. Using auxiliary variables and recursively decomposing the original problem into smaller instances, we manually obtain proofs of length O(n^3) with leading coefficient 5/2.
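As a minimal illustrative sketch (not from the paper), the following checks a one-step DRAT-style proof by reverse unit propagation (RUP) on the smallest pigeonhole instance, two pigeons and one hole; the helper names and the encoding are choices made here for illustration.

```python
def propagate(clauses, assignment):
    """Unit-propagate; return 'conflict' if a clause is falsified, else the closure."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return "conflict"  # clause falsified under the assignment
            if len(unassigned) == 1:
                assignment.add(unassigned[0])  # unit clause: force the literal
                changed = True
    return assignment

def is_rup(clauses, lemma):
    """A lemma has the RUP property if asserting its negation yields a conflict."""
    return propagate(clauses, {-l for l in lemma}) == "conflict"

# PHP with 2 pigeons, 1 hole: var 1 = "pigeon 1 in hole 1", var 2 = "pigeon 2 in hole 1".
php = [[1], [2], [-1, -2]]
# The entire DRAT proof is a single line: the empty clause, checkable by RUP.
assert is_rup(php, [])
```

Real DRAT proofs also contain deletion lines and RAT (not just RUP) additions; this sketch only shows the propagation check at the core of proof validation.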
14:00-15:30 Session 34M (QBF)
Location: Ullmann 205
14:00
Moving Definition Variables in Quantified Boolean Formulas (Extended Abstract)
PRESENTER: Joseph Reeves
ABSTRACT. Augmenting problem variables in a quantified Boolean formula with definition variables enables a compact representation in clausal form. Generally these definition variables are placed in the innermost quantifier level. To restore some structural information, we introduced a preprocessing technique that moves definition variables to the quantifier level closest to the variables that define them. We express the movement in the QRAT proof system to allow verification by independent proof checkers. We evaluated definition variable movement on the QBFEVAL'20 competition benchmarks. Movement significantly improved performance for the competition's top solvers. Combining variable movement with the preprocessor bloqqer improves solver performance compared to using bloqqer alone.
This work appeared in a TACAS'22 paper under the same title.
14:30
ABSTRACT. Quantified Integer Programs (QIP) are integer linear programs where variables are either existentially or universally quantified. Similar to the link from SAT to Integer Programming, a connection between QBF and QIP can be drawn: QIP extends QBF by allowing general integer variables and linear constraints in addition to a linear objective function. We review solution methods, recent developments, and extensions in order to display future research directions and opportunities.
15:00
A Data-Driven Approach for Boolean Functional Synthesis
ABSTRACT. Given a relational specification between Boolean inputs and outputs, the problem of Boolean functional synthesis is to construct each output as a function of the inputs such that the specification is met. Synthesizing Boolean functions is one of the challenging problems in Computer Science. Over the decades, the problem has found applications in a wide variety of domains such as certified QBF solving, automated program repair, program synthesis, and cryptography.
In this talk, we will discuss Manthan, a novel data-driven approach for Boolean functional synthesis. Manthan views the problem of functional synthesis as a classification problem, relying on advances in constrained sampling for data generation, and advances in automated reasoning for a novel proof-guided refinement and provable verification.
14:00-15:30 Session 34N (TERMGRAPH)
Invited talk by Jörg Endrullis
Location: Ullmann 101
14:00
PBPO+ Graph Rewriting in Context
ABSTRACT. We will give an overview of various results on graph rewriting with PBPO+. PBPO+ is a versatile graph rewriting formalism based on category theory that subsumes many well-known approaches such as DPO, SqPO and AGREE (at least in the setting of quasitopoi).
14:00-15:30 Session 34P (TLLA-LINEARITY)
Location: Ullmann 302
14:00
The Call-by-Value Lambda-Calculus from a Linear Logic Perspective
ABSTRACT. Plotkin's call-by-value (CbV, for short) lambda-calculus is a variant of the lambda-calculus that models the evaluation mechanism in most functional programming languages. The theory of the CbV lambda-calculus is not as well studied as the call-by-name one, because of some technical issues due to the "weakness" of Plotkin's CbV beta-rule, which leads to a mismatch between syntax and semantics. By adopting a Curry-Howard perspective, we show how linear logic inspires several ways to solve the mismatch between syntax and semantics in CbV, paving the way for restoring a theory of the CbV lambda-calculus as elegant as the one for call-by-name. We test our approach on a type-theoretic/semantic characterization of normalizability and on the notion of solvability.
15:00
Cloning and Deleting Quantum Information from a Linear Logical Point of View
ABSTRACT. This paper presents a linear sequent calculus in accordance with the no-cloning and no-deleting theorems of quantum computing. The calculus represents operations on matrices in terms of linear sequent rules, satisfying admissibility of cut. It is possible to define a non-trivial categorical semantics for it using categories intrinsic to vector spaces extended with a Kronecker product, which can be viewed as a partial answer to a problem posed by Abramsky.
14:00-14:55 Session 34Q (VardiFest)
Location: Taub 1
14:00
Descriptive complexity and inexpressibility proofs
ABSTRACT. I will discuss ways to simplify inexpressibility proofs. In particular, I will discuss an approach by Fagin, Stockmeyer and Vardi that greatly simplifies my earlier proof (from my Ph.D. thesis) that monadic NP is not closed under complement, where monadic NP consists of properties defined by existential second-order sentences, where the existential second-order quantifiers range only over subsets of the domain.
14:25
A Brief Glimpse of the Skolem Landscape
ABSTRACT. The Skolem Problem asks how to determine algorithmically whether a given linear recurrence sequence (such as the Fibonacci numbers) has a zero. It is a central question in dynamical systems and number theory, and has many connections to other branches of mathematics and computer science, such as program analysis and automated verification. Unfortunately, its decidability has been open for nearly a century! In this talk, I will present a brief survey of what is known on the Skolem Problem and related questions, including recent and ongoing developments.
14:35
Approximations of Certain Answers in First-Order Logic
ABSTRACT. I will explain how old papers by Moshe Vardi and Ray Reiter on query answering over incomplete databases led to new recently discovered schemes of efficient approximation of certain answers, and present a previously unknown and exceptionally simple formulation of such schemes for first-order queries over relational databases.
14:45
Towards Algebraic Techniques for Descriptive Complexity
ABSTRACT. I describe my work in connection with several papers by Moshe Vardi.
14:00-15:30 Session 34R: Doctoral Program (CP)
Location: Taub 4
14:00
Aggressive Bound Descent for Constraint Optimization
PRESENTER: Thibault Falque
ABSTRACT. Backtrack search is a classical complete approach for exploring the search space of a constraint optimization problem. Each time a new solution is found during search, its associated bound is used to further constrain the problem, and so the remaining search. An extreme (bad) scenario is when solutions are found in sequence with very small differences between successive bounds. In this paper, we propose an aggressive bound descent (ABD) approach to remedy this problem: new bounds are modified exponentially as long as the search algorithm is successful. We show that this approach can render the solver more robust, especially at the beginning of search. Our experiments confirm this behavior for constraint optimization.
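The exponential bound updates can be sketched as follows; this is an assumption-laden illustration of the general idea (double the step on success, which is not necessarily the paper's exact rule), not the authors' implementation.

```python
def next_bound(incumbent, delta, minimize=True):
    """One ABD-style step (illustrative sketch): instead of asking for an
    objective just below the incumbent, jump aggressively by `delta`, which the
    caller doubles after each success and would reset to 1 after a failure."""
    return incumbent - delta if minimize else incumbent + delta

# Simulated minimization run in which every tightened bound succeeds:
incumbent, delta, bounds = 100, 1, []
for _ in range(5):
    target = next_bound(incumbent, delta)
    bounds.append(target)
    incumbent, delta = target, delta * 2  # pretend the solver found a solution at `target`
print(bounds)  # -> [99, 97, 93, 85, 69]
```

The successive gaps (1, 2, 4, 8, 16) illustrate how ABD races through weak bounds early in search, where a classical descent would improve by only one unit per solution.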
14:20
Exploiting Model Entropy to Make Branching Decisions in Constraint Programming
PRESENTER: Auguste Burlats
ABSTRACT. Branching decisions have a strong impact on performance in Constraint Programming (CP). Therefore, robust generic variable ordering heuristics are an important area of research in the CP community. CP-based Belief Propagation computes marginal probability distributions over the domain of each variable in the model, which also allows us to compute the entropy of each variable. We present two new branching heuristics exploiting entropy: one chooses the variable with the smallest entropy; the other is inspired by impact-based search and chooses the variable whose fixing is believed to have the strongest impact on the entropy of the model. We also propose a dynamic stopping criterion for iterated belief propagation based on variations in model entropy. We test our heuristics on various constraint satisfaction problems from the XCSP and MiniZinc benchmarks.
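The first heuristic can be sketched in a few lines. This is an illustrative sketch only: the marginals below are hypothetical stand-ins for what belief propagation would produce, and the function names are choices made here, not the paper's API.

```python
import math

def entropy(marginals):
    """Shannon entropy (in bits) of a variable's marginal distribution."""
    return -sum(p * math.log2(p) for p in marginals if p > 0)

def min_entropy_variable(var_marginals):
    """Smallest-entropy heuristic: branch on the unfixed variable whose
    marginal distribution is most 'decided' (lowest entropy)."""
    unfixed = {v: m for v, m in var_marginals.items() if max(m) < 1.0}
    return min(unfixed, key=lambda v: entropy(unfixed[v]))

# Hypothetical marginals over two-value domains:
marginals = {
    "x": [0.5, 0.5],   # maximally uncertain: entropy 1.0
    "y": [0.9, 0.1],   # nearly decided: entropy ~0.47
    "z": [1.0, 0.0],   # already fixed -> skipped
}
print(min_entropy_variable(marginals))  # -> "y"
```

The intuition is the mirror image of first-fail on domain size: a low-entropy variable is one belief propagation already considers close to determined, so fixing it is likely to succeed cheaply.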
14:40
Finding Counterfactual Explanations through Constraint Relaxations
PRESENTER: Sharmi Dev Gupta
ABSTRACT. Interactive constraint systems often suffer from infeasibility (no solution) due to conflicting user constraints. A common approach to recovering feasibility is to eliminate the constraints that cause the conflicts in the system. This approach allows the system to provide an explanation of the form: "if the user is willing to drop some of their constraints, there exists a solution". However, one can criticise this form of explanation as not being very informative. A counterfactual explanation is a type of explanation that can provide a basis for the user to recover feasibility by helping them understand which changes can be applied to their existing constraints rather than removing them. This approach has been extensively studied in the machine learning field, but requires a more thorough investigation in the context of constraint satisfaction. We propose an iterative method based on conflict detection and maximal relaxations in over-constrained constraint satisfaction problems to help compute a counterfactual explanation.
15:00
Symmetry breaking and Knowledge Compilation
PRESENTER: Andrea Balogh
ABSTRACT. Constraint Programming is a powerful method to solve combinatorial problems, but due to a large search space, solving can be very time consuming. Diagnosis, planning, and product configuration are example use cases. These systems are often used in an online setting, answering queries. Compilation methods were developed to deal with the complexity of solving the problems offline and to create a representation that is able to answer queries in polynomial time. Symmetry breaking is the addition of constraints to eliminate symmetries, thus in general speeding up search and reducing the number of solutions. Knowledge compilation looks at finding succinct representations that are also tractable, that is, they support queries and transformations in polytime. Finding the smallest representation is often the bottleneck of compilation methods. Compiled representations are Directed Acyclic Graphs representing the set of all solutions. In this paper we investigate whether breaking symmetries, that is, representing fewer solutions, always leads to a smaller compiled representation. We considered four compilers and three highly symmetrical problems. When we break symmetries, a reduction in compilation size is observed in all the problems, with top-down compilers obtaining more reduction.
15:20
Improved Sample Complexity Bounds for Branch-and-Cut
ABSTRACT. The branch-and-cut algorithm for integer programming has a wide variety of tunable parameters that have a huge impact on its performance, but which are challenging to tune by hand. An increasingly popular approach is to use machine learning to configure these parameters based on a training set of integer programs from the application domain. We bound how large the training set should be to ensure that for any configuration, its average performance over the training set is close to its expected future performance. Our guarantees apply to parameters that control the most important aspects of branch-and-cut: node selection, branching constraint selection, and cut selection, and are sharper and more general than those from prior research.
15:23
Exploiting Functional Constraints in Automatic Dominance Breaking for Constraint Optimization (Extended Abstract)
PRESENTER: Allen Z. Zhong
ABSTRACT. Dominance breaking is an effective technique to reduce the time for solving constraint optimization problems. Lee and Zhong propose an automatic dominance breaking framework for a class of constraint optimization problems based on specific forms of objectives and constraints. In this paper, we propose to enhance the framework for problems with nested function calls which can be flattened to functional constraints. In particular, we focus on aggregation functions and exploit such properties as monotonicity, commutativity and associativity to give an efficient procedure for generating effective dominance breaking nogoods. Experimentation also shows orders-of-magnitude runtime speedup using the generated dominance breaking nogoods and demonstrates the ability of our proposal to reveal dominance relations in the literature and discover new dominance relations on problems with ineffective or no known dominance breaking constraints.
15:26
Large Neighborhood Search for Robust Solutions for Constraint Satisfaction Problems with Ordered Domains
PRESENTER: Jheisson López
ABSTRACT. Real-world Constraint Satisfaction Problems (CSPs) are subject to uncertainty/dynamism that cannot be known in advance. Some techniques in the literature offer robust solutions for CSPs, but they have low performance in large-scale CSPs. We propose a Large Neighbourhood Search (LNS) algorithm and a value selection heuristic that searches for robust solutions in large-scale CSPs.
15:00-15:30 Session 35A (LFMTP)
Frank Pfenning special session: contributed talks
Location: Taub 3
15:00
Associativity or Non-Associativity, Local or Global!
PRESENTER: Eben Blaisdell
ABSTRACT. Lambek's two calculi, the associative one and the non-associative one, have their advantages and disadvantages for the analysis of natural language syntax by means of categorial grammar. In some cases, associativity leads to over-generation, i.e., validation of grammatically incorrect sentences. In other situations, associativity is useful. We will discuss two approaches. One approach, developed by Morrill and Moortgat, begins with the associative calculus and reconstructs local non-associativity by means of the so-called bracket modalities, ultimately leading to Morrill's CatLog parser. Bracket modalities interact in a subtle way with the subexponential modalities originating in linear logic. Another approach, developed by Moot and Retoré, begins with the non-associative calculus and utilizes multi-modalities, ultimately leading to the Grail parser. We enhance the latter approach in our IJCAR 2022 paper, showing that local associativity may be expressed by means of subexponentials. Some aspects of the two approaches touch on Frank Pfenning's work on ordered logic and on adjoint logic. We discuss decidability and undecidability results in both approaches.
15:00-15:40 Session 35B (VardiFest)
Location: Taub 1
15:00
Fixpoint Logics, Relational Machines, and Computational Complexity
ABSTRACT. In this talk I will recall a fruitful collaboration between Serge Abiteboul and myself with Moshe, that resulted in the 1997 JACM article with the above title. Under Moshe's impetus, the article completed in a very elegant way previous results, providing a comprehensive and compelling picture, and remains one of my favorite papers.
The results establish a general connection between fixpoint logic and complexity. On one side, we have fixpoint logic, parameterized by the choices of 1st-order operators (inflationary or noninflationary) and iteration constructs (deterministic, nondeterministic, or alternating). On the other side, we have the complexity classes between P and EXPTIME. The parameterized fixpoint logics express the complexity classes P, NP, PSPACE, and EXPTIME, but only over ordered structures.
The order requirement highlights an inherent mismatch between complexity and logic -- while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures. To overcome this mismatch, we used a theory of relational complexity based on the relational machine, a computational device that operates directly on structures. Relational complexity bridges the gap between standard complexity and fixpoint logic. On one hand, questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes. On the other hand, the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight, three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE, and EXPTIME. The logical formulation shows that some of the most tantalizing questions in complexity theory boil down to a single question: the relative power of inflationary vs. noninflationary 1st-order operators.
15:10
To Count or Not to Count: A Personal Perspective
ABSTRACT. Although I have worked closely with Moshe on counting problems, it will be a tall order to count all the things I've learnt from him -- through his articles, lectures, one-on-one conversations and gems of advice over the years. So I'd rather not try to count but briefly talk of a few (among many) occasions at different stages of my career, when I kept running into Moshe's profound contributions and insights. Starting from my grad school days, Moshe's result with Pierre Wolper on automata theoretic LTL model checking was among my first few introductions to formal verification. Much later, while working with my Ph.D. student on logic and some aspects of finite model theory, we kept running into beautiful results due to Moshe and his collaborators. More recently, while working with Moshe on PAC counting, I recall some very interesting discussions on how 2-, 3- or even 2.5-universal hashing might just be the sweet spot to help achieve a balance between scalability and strong formal guarantees. Almost a decade later, we know how spot-on his prediction was.
15:20
Moshe Vardi and Intel Corporation: Long and Fruitful Collaboration
PRESENTER: Eli Singerman
ABSTRACT. In this short talk, we will give a retrospective of the collaboration Moshe had with Intel over the past 25 years! This long-lasting collaboration was key in bringing formal verification to industry. It is an excellent example of Vardi’s unique contribution and ability to bridge theory and practice.
15:30
A Toast for Moshe at the FLoC VardiFest
ABSTRACT. This summer, we gather to celebrate Moshe Vardi's many pioneering contributions to the theory and practice of computer science, and his leadership qualities and activities. My toast will focus on some less-known ingenious traits he exhibited starting very early in his career. In particular, I will discuss how Moshe acted in the role of a grand translator who applied advanced archery strategies in database research, and will comment on his role as a member of the Gang-of-Four.
15:30-16:00 Coffee Break
16:00-17:30 Session 37A (ABR)
Tutorial: Assumption-Based Nonmonotonic Reasoning
Location: Ullmann 104
Tutorial: Assumption-Based Nonmonotonic Reasoning (Part 2)
ABSTRACT. The tutorial is intended to provide a gentle introduction to Assumption-Based Reasoning (ABR), which will be argued to constitute the core of nonmonotonic reasoning in AI. Webpage: https://sites.google.com/view/kr2022-abr/home
16:00-17:00 Session 37B (FoMLAS)
FoMLAS Session 8
Location: Ullmann 201
16:00
Minimal Multi-Layer Modifications of Deep Neural Networks
PRESENTER: Idan Refaeli
ABSTRACT. Deep neural networks (DNNs) have become increasingly popular in recent years. However, despite their many successes, DNNs may also err and produce incorrect and potentially fatal outputs in safety-critical settings, such as autonomous driving, medical diagnosis, and airborne collision avoidance systems. Much work has been put into detecting such erroneous behavior in DNNs, e.g., via testing or verification, but removing these errors after their detection has received less attention. We present here a new framework, called 3M-DNN, for repairing a given DNN, which is known to err on some set of inputs. The novel repair procedure employed by 3M-DNN computes a modification to the network's weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine. To the best of our knowledge, our method is the first one that allows repairing the network by simultaneously modifying the weights of multiple layers. This is achieved by splitting the network into sub-networks, and applying a single-layer repairing technique to each component. We evaluated 3M-DNN on an extensive set of benchmarks, obtaining promising results.
16:30
Self-Correcting Neural Networks For Safe Classification
PRESENTER: Ravi Mangal
ABSTRACT. Classifiers learnt from data are increasingly being used as components in systems where safety is a critical concern. In this work, we present a formal notion of safety for classifiers via constraints called safe-ordering constraints. These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature. For classifiers implemented using neural networks, we also present a run-time mechanism for the enforcement of safe-ordering constraints. Our approach is based on a self-correcting layer, which provably yields safe outputs regardless of the characteristics of the classifier input. We compose this layer with an existing neural network classifier to construct a self-correcting network (SC-Net), and show that in addition to providing safe outputs, the SC-Net is guaranteed to preserve the classification accuracy of the original network whenever possible. Our approach is independent of the size and architecture of the neural network used for classification, depending only on the specified property and the dimension of the network's output; thus it is scalable to large state-of-the-art networks. We show that our approach can be optimized for a GPU, introducing run-time overhead of less than 1 ms on current hardware, even on large, widely-used networks containing hundreds of thousands of neurons and millions of parameters.
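To make the idea of ordering enforcement concrete, here is a toy sketch, not the paper's actual self-correcting layer: given output scores and pairwise constraints "class i must rank above class j" (assumed acyclic so repeated swapping terminates), it swaps the offending scores, which restores the required order without changing the multiset of output values.

```python
def self_correct(logits, constraints):
    """Toy ordering correction: `constraints` is a list of (i, j) pairs
    requiring class i to be ranked strictly above class j. Violated pairs are
    swapped until all constraints hold (assumes the constraint set is acyclic)."""
    out = list(logits)
    changed = True
    while changed:
        changed = False
        for i, j in constraints:
            if out[i] <= out[j]:
                out[i], out[j] = out[j], out[i]  # enforce out[i] > out[j]
                changed = True
    return out

# Require class 0 to rank above class 2; the raw scores violate this.
corrected = self_correct([0.1, 0.5, 0.9], [(0, 2)])
print(corrected)  # -> [0.9, 0.5, 0.1]
```

Because the output values are only permuted, any class not mentioned in a constraint keeps its score, which hints at why such a layer can preserve the original accuracy whenever the constraints allow it.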
16:00-17:30 Session 37C: s(CASP) extensions and applications II (GDE)
Session focused on the latest applications of s(CASP): 2 regular talks (20 minutes plus 5 minutes of Q&A), followed by a panel to discuss the present and future of goal-directed execution of answer set programs (40 minutes).
Location: Ullmann 310
16:00
Summary on "Hybrid Neuro-Symbolic Approach for Text-Based Games using Inductive Logic Programming"
ABSTRACT. In this paper, I briefly summarize my work titled "Hybrid Neuro-Symbolic Approach for Text-Based Games using Inductive Logic Programming". Text-based games (TBGs) have emerged as an important test-bed, requiring reinforcement learning (RL) agents to combine natural language understanding with reasoning. A key challenge for agents solving this task is to generalize across multiple games and to show good results on both seen and unseen objects. To tackle these issues, we have designed a hybrid neuro-symbolic framework for TBGs that uses symbolic reasoning along with the neural RL model. We also use WordNet as an external commonsense knowledge source to help generalize the learned hypotheses. We have tested our work in different settings of TWC games and showed that agents incorporating the neuro-symbolic hybrid approach with the generalized rules outperform the baseline agents.
16:25
Blawx: Web-based user-friendly Rules as Code
ABSTRACT. This paper describes Blawx, a prototype web-based user-friendly Rules as Code tool, powered by goal-directed answer set programming. The paper briefly describes Rules as Code and introduces desirable qualities for Rules as Code tools. It provides justifications for Blawx's use of the Google Blockly library and the s(CASP) reasoning system. It then provides a step-by-step tour of how Blawx allows a user to generate an answer set program representing their understanding of a statute, and to use that encoding to power an application. The paper concludes with a brief discussion of the current short-term and anticipated long-term development objectives for Blawx.
16:50
GDE of ASP: applications, potential and future directions
ABSTRACT. Panel discussion with TBA
16:00-17:30 Session 37D (HoTT/UF)
Location: Ullmann 303
16:00
Models of homotopy type theory from the Yoneda embedding
16:30
A type-theoretic model structure over cubes with one connection presenting spaces
PRESENTER: Evan Cavallo
17:00
Groupoidal Realizability: Formalizing the Topological BHK Interpretation
16:00-18:00 Session 37E: Higher-order rewriting and CoCo (IWC)
Location: Ullmann 306
16:00
Confluence by Higher-Order Multi-One Critical Pairs with an Application to the Functional Machine Calculus
ABSTRACT. We show that all multi-one critical peaks being many-multi joinable entails confluence of positional pattern rewrite systems (PRSs). To apply this result to the functional machine calculus (FMC), we embed the FMC in a third-order PRS and show its multi-one critical peaks to be one-multi joinable, regaining its (known) confluence via higher-order rewriting.
16:30
Checking Confluence of Rewrite Rules in Haskell
ABSTRACT. We present GSOL, a confluence checker for GHC. It checks the confluence property of rewrite rules in a Haskell program by using the confluence checker SOL (Second-Order Laboratory). The Glasgow Haskell Compiler (GHC) allows the user to use rewrite rules to optimize Haskell programs in the compilation pipeline. Currently, GHC does not check the confluence of the user-defined rewrite rules. If the rewrite rules are not confluent, then optimization using these rules may produce unexpected results. Therefore, checking the confluence of rewrite rules is important. We implement GSOL using the plugin mechanism of GHC.
17:00
Confluence Competition
16:00-17:00 Session 37F (LFMTP)
Frank Pfenning special session: invited talk
Location: Taub 3
16:00
ABSTRACT. The dependently typed lambda calculus LF provides a convenient means for encoding specifications of object systems; dependent types in this calculus constitute a natural representation of rule-based relational specifications and lambda terms provide for a higher-order abstract syntax based treatment of binding structure. An important consideration from the perspective of formalizing object systems in this context is how reasoning about such specifications can be supported. One approach towards addressing this issue, pioneered by Frank Pfenning and Carsten Schuermann, is to use LF types with specified modes to represent properties of LF specifications; the validity of these properties is then demonstrated by describing inhabitants for such types and showing their totality relative to the chosen modes. We will describe a different approach in this talk. This approach is based on describing a logic that uses LF typing judgements as atomic formulas and that allows more complex formulas to be constructed using logical connectives and quantification over contexts and term variables. The validity of quantifier-free formulas in the logic is determined by LF derivability and a classical interpretation of the connectives, and quantifiers are interpreted by a substitution semantics. Mechanization of reasoning is realized by describing proof rules that are sound with respect to the semantics. We will discuss a collection of proof rules that encode LF meta-theorems and provide for case analysis and induction based reasoning over LF derivations, in addition to reasoning based on the usual interpretation of logical symbols. This approach shares some aspects with the one explored by Mary Southern and Kaustuv Chaudhuri using the Abella system but differs from it in that LF derivability is treated explicitly in the logic rather than indirectly via a translation to a predicate logic form. The talk will also touch upon the Adelfa proof assistant that implements the logic.
[The talk will be based on collaborative work with Mary Southern and Chase Johnson.]
16:00-17:30 Session 37I (PC)
Location: Ullmann 309
16:00
On vanishing sums of roots of unity in polynomial calculus and sum-of-squares
PRESENTER: Ilario Bonacina
ABSTRACT. Vanishing sums of roots of unity can be seen as a natural generalization of knapsack from Boolean variables to variables taking values over the roots of unity. We show that these sums are hard to prove for polynomial calculus and for sum-of-squares, both in terms of degree and size.
17:00
Exponential separations using guarded extension variables
PRESENTER: Emre Yolcu
ABSTRACT. We study the complexity of proof systems augmenting resolution with inference rules that allow, given a formula F in conjunctive normal form, deriving clauses that are not necessarily logically implied by F but whose addition to F preserves satisfiability. When the derived clauses are allowed to introduce variables not occurring in F, the systems we consider become equivalent to extended resolution. We are concerned with the versions of these systems "without new variables". They are called BC-, RAT-, SBC- and GER-, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution. Except for SBC-, these systems are known to be exponentially weaker than extended resolution. They are, however, all equivalent to it under a relaxed notion of simulation that allows the translation of the formula along with the proof when moving between proof systems. By taking advantage of this fact, we construct formulas that exponentially separate RAT- from GER- and vice versa. With the same strategy, we also separate SBC- from RAT-. Additionally, we give polynomial-size SBC- proofs of the pigeonhole principle, which separates SBC- from GER- by a previously known lower bound. These results also separate the three systems from BC- since they all simulate it. We thus give an almost complete picture of their relative strengths. Along the way, we prove a partial simulation of RAT- by BC- that is to our knowledge the only example of a nontrivial simulation in proof complexity that cannot necessarily be carried out in time polynomial in the size of the produced proof, which highlights the semantic nature of these systems.
16:00-17:00 Session 37J (PLP)
Location: Ullmann 305
16:00
Explainability, causality and computational and-or graphs
ABSTRACT. In recent years, there has been an increasing interest in studying causality-related properties in machine learning models generally, and in generative models in particular. While that is well-motivated, it inherits the fundamental computational hardness of probabilistic inference, making exact reasoning intractable. Tractable probabilistic models have also recently emerged, which guarantee that conditional marginals can be computed in time linear in the size of the model, where the model is usually learned from data. In the talk, we will discuss a number of new results in this area. We will discuss what kind of causal queries can be answered on trained tractable models, what kind of domain constraints can be posed, and what methods are available to extract (counterfactual) explanations from them.
16:00-17:00 Session 37K: Proofs II (POS)
Location: Ullmann 311
16:00
SATViz: Real-Time Visualization of Clausal Proofs
PRESENTER: Tim Holzenkamp
ABSTRACT. Visual layouts of graphs representing SAT instances can highlight the community structure of SAT instances. The community structure of SAT instances has been associated with both instance hardness and known clause quality heuristics. Our tool SATViz visualizes CNF formulas using the variable interaction graph and a force-directed layout algorithm. With SATViz, clause proofs can be animated to continuously highlight variables that occur in a moving window of recently learned clauses. If needed, SATViz can also create new layouts of the variable interaction graph with the adjusted edge weights. In this paper, we describe the structure and feature set of SATViz. We also present some interesting visualizations created with SATViz.
16:30
Certified Symmetry and Dominance Breaking for Combinatorial Optimisation
PRESENTER: Jakob Nordstrom
ABSTRACT. Symmetry and dominance breaking can be crucial for solving hard combinatorial search and optimisation problems, but the correctness of these techniques sometimes relies on subtle arguments. For this reason, it is desirable to produce efficient, machine-verifiable certificates that solutions have been computed correctly. Building on the cutting planes proof system, we develop a certification method for optimisation problems in which symmetry and dominance breaking are easily expressible. Our experimental evaluation demonstrates that we can efficiently verify fully general symmetry breaking in Boolean satisfiability (SAT) solving, thus providing, for the first time, a unified method to certify a range of advanced SAT techniques that also includes XOR and cardinality reasoning. In addition, we apply our method to maximum clique solving and constraint programming as a proof of concept that the approach applies to a wider range of combinatorial problems.
This is a presentation-only submission of a paper that appeared at the 36th AAAI Conference on Artificial Intelligence (AAAI '22).
16:00-17:30 Session 37L (QBF)
Location: Ullmann 205
16:00
From QBF to the Dynamic Logic of Propositional Assignments and back: a computational perspective
ABSTRACT. The Dynamic Logic of Propositional Assignments (DL-PA) is a formalism that combines logic, programming, and non-determinism constructs. Starting with DL-PA, we build reductions to and from Quantified Boolean Formulas. This prompts us to unexpectedly revisit prenexing and clausification procedures for quantified propositional formulas. To the best of our knowledge, this latter task was not completely settled when the equivalence operator is involved. (Joint work with Andreas Herzig.)
16:30
Lower Bounds for QBFs of Bounded Treewidth
ABSTRACT. The problem of deciding the validity (QSat) of quantified Boolean formulas (QBF) is a vivid research area in both theory and practice. In the field of parameterized algorithmics, the well-studied graph measure treewidth turned out to be a successful parameter. A well-known result by Chen [9] is that QSat, when parameterized by the treewidth of the primal graph and the quantifier rank of the input formula, is fixed-parameter tractable. More precisely, the runtime of such an algorithm is polynomial in the formula size and exponential in the treewidth, where the exponential function in the treewidth is a tower whose height is the quantifier rank. A natural question is whether one can significantly improve these results and decrease the tower while assuming the Exponential Time Hypothesis (ETH). In recent years, there has been a growing interest in the quest of establishing lower bounds under ETH, showing mostly problem-specific lower bounds up to the third level of the polynomial hierarchy. Still, an important question is to settle this as generally as possible and to cover the whole polynomial hierarchy. In this work, we show lower bounds based on the ETH for arbitrary QBFs parameterized by treewidth and quantifier rank. More formally, we establish lower bounds for QSat and treewidth, namely, that under ETH there cannot be an algorithm that solves QSat of quantifier rank i in runtime significantly better than i-fold exponential in the treewidth and polynomial in the input size. In doing so, we provide a reduction technique to compress treewidth that encodes dynamic programming on arbitrary tree decompositions. Further, we describe a general methodology for a more fine-grained analysis of problems parameterized by treewidth that are at higher levels of the polynomial hierarchy.
Finally, we illustrate the usefulness of our results by discussing applications to problems located higher on the polynomial hierarchy, in particular problems from the literature such as projected model counting.
16:00-17:30 Session 37M (TERMGRAPH)
Regular papers, afternoon session, including discussion
Location: Ullmann 101
16:00
Formalization and analysis of BPMN using graph transformation systems
PRESENTER: Tim Kräuter
ABSTRACT. BPMN is a widely used standard notation for defining intra- and inter-organizational workflows. However, the informal description of the BPMN execution semantics leads to different interpretations of BPMN constructs and difficulties in checking behavioral properties. Other approaches to formalizing BPMN’s execution semantics only partially cover BPMN. To this end, we propose a formalization that, compared to other approaches, covers most of the BPMN constructs. Our approach is based on a model transformation from BPMN models to graph grammars. As a proof of concept, we have implemented our approach in an open-source web-based tool.
16:30
Ideograph: A Language for Expressing and Manipulating Structured Data
PRESENTER: Stephen Mell
ABSTRACT. We introduce Ideograph, a language for expressing and manipulating structured data. Its types describe kinds of structures, such as natural numbers, lists, multisets, binary trees, syntax trees with variable binding, directed acyclic graphs, and relational databases. Fully normalized terms of a type correspond exactly to members of the corresponding structure, analogous to a Church encoding. Non-normal terms encode alternate representations of their fully normalized forms. In this paper, we first illustrate the correspondence between terms in our language and standard Church encodings, and then we exhibit the type of closed terms in untyped lambda calculus.
16:00-16:40 Session 37Q (VardiFest)
Location: Taub 1
16:00
Understandable Proofs of Unsatisfiability
ABSTRACT. Proofs of unsatisfiability facilitate the validation of SAT solver results. Practically all top-tier solvers support proof logging and these proofs can efficiently be checked using formally-verified tools. However, the size of these proofs is typically large and sometimes gigantic, thereby making them impossible to understand. On the other hand, one can extract useful information out of proofs, such as unsatisfiable cores or interpolants.
We present some results on extracting some understanding from proofs of unsatisfiability. This work started after a question by Moshe Vardi about the effect of using a large interval for the Pythagorean Triples problem on the size of the proof. Increasing the size of the interval turned out to reduce the size of the proof substantially. It might even be possible to produce a humanly-understandable proof for this problem if the interval is large enough.
We also show some other results in this direction. For example, short proofs of unsatisfiability have been crucial to constructing small unit-distance graphs with chromatic number 5. These graphs are important building blocks to solving the Hadwiger-Nelson problem. Also, compact proofs of mutilated chessboard problems provided an alternative short, humanly-understandable argument of unsatisfiability.
16:10
Moshe Y. Vardi's First Love
ABSTRACT. In 2008, Moshe Y. Vardi received the ACM SIGMOD Edgar F. Codd Innovations Award for "fundamental contributions to the foundations of relational databases". In his acceptance speech, Moshe referred to database theory as his "first love". The purpose of this talk is to give a bird's eye view of Moshe's contributions to database theory and of his enduring legacy by highlighting some of Moshe's most influential papers in this area.
16:35
A Comment to Moshe by Ron Fagin
16:00-17:30 Session 37R: Doctoral Program (CP)
Location: Taub 4
16:00
Optimized Code Generation against Power Side Channels
ABSTRACT. Software masking, a software mitigation against power side-channel attacks, aims at removing secret dependencies from the power traces that may leak cryptographic keys. However, high-level software mitigations often depend on general-purpose compilers, which do not preserve non-functional properties. What is more, microarchitectural features, such as the memory bus and register reuse, may also reveal secret information. These abstractions are not visible in the high-level implementation of the program; instead, they are decided at compile time. To remedy these problems, security engineers often turn off compiler optimizations and/or perform local, post-compilation transformations. However, these solutions lead to inefficient code. To deal with these issues, we propose Secure by Construction Code Generation (SecConCG), a constraint-based compiler backend that generates secure code. SecConCG can control the quality of the mitigated program by efficiently searching for the best possible low-level implementation according to a processor cost model. In our experiments with ten masked implementations on MIPS and ARM Cortex M0, SecConCG improves the generated code from 10% to 9x compared to non-optimized secure code, at a small overhead of up to 8% compared to non-secure optimized code.
16:20
Boolean Functional Synthesis and its Applications
PRESENTER: Priyanka Golia
ABSTRACT. Given a Boolean specification between a set of inputs and outputs, the problem of Boolean functional synthesis is to synthesize each output as a function of the inputs such that the specification is met. In the report, we first discussed a state-of-the-art data-driven approach for Boolean functional synthesis, called Manthan [1]. Motivated by the progress in machine learning, Manthan views functional synthesis as a classification problem and relies on advances in constrained sampling for data generation, and advances in automated reasoning for a novel proof-guided repair and provable verification. We then discussed challenges faced by Manthan during data-driven synthesis and the remedies to overcome those challenges [2]. Finally, to discuss the applications, we showed the reduction of program synthesis to a special variant of Boolean functional synthesis in which we have explicit dependencies on universally quantified variables [3]. We hope that program synthesis will be the killer application that will motivate further research into functional synthesis.
[1] Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel. 2020. Manthan: A Data-Driven Approach for Boolean Function Synthesis. In Proceedings of International Conference on Computer-Aided Verification (CAV), 2020. [2] Priyanka Golia, Friedrich Slivovsky, Subhajit Roy, and Kuldeep S. Meel. 2021. Engineering an Efficient Boolean Functional Synthesis Engine. In Proceedings of International Conference On Computer Aided Design (ICCAD), 2021. [3] Priyanka Golia, Subhajit Roy, and Kuldeep S. Meel. Program synthesis as dependency quantified formula modulo theory. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2021.
16:40
On PB Encodings for Constraint Problems
PRESENTER: Thibault Falque
ABSTRACT. One of the possible approaches for solving a CSP is to encode the input problem into a CNF formula, and then use a SAT solver to solve it. The main advantage of this technique is that it allows one to benefit from the practical efficiency of modern SAT solvers, based on the CDCL architecture. However, the reasoning power of SAT solvers is somewhat "weak", as it is limited by that of the resolution proof system they use internally. This observation led to the development of so-called pseudo-Boolean (PB) solvers, which implement the stronger cutting planes proof system, along with many of the solving techniques inherited from SAT solvers. Additionally, PB solvers can natively reason on PB constraints, i.e., linear equalities or inequalities over Boolean variables. These constraints are more succinct than clauses, so that a single PB constraint can represent exponentially many clauses. In this paper, we leverage both this succinctness and the reasoning power of PB solvers to solve CSPs by designing PB encodings for different common constraints, and feeding them into PB solvers to compare their performance with that of existing CP solvers.
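The succinctness gap mentioned in the abstract can be made concrete with a small sketch (our own illustration, not from the paper; the function name is hypothetical): the single PB constraint x1 + ... + xn <= k is equivalent to one clause per (k+1)-subset of the variables, i.e. C(n, k+1) clauses, which grows exponentially in n when k is around n/2.

```rust
// Naive CNF expansion of the pseudo-Boolean constraint x1 + ... + xn <= k:
// for every subset of k+1 variables, at least one must be false, giving one
// clause of k+1 negated literals. The PB constraint stays one line, while
// the CNF has C(n, k+1) clauses.
fn at_most_k_clauses(n: usize, k: usize) -> Vec<Vec<i32>> {
    let mut clauses = Vec::new();
    // Enumerate all (k+1)-subsets of {1..n} via bitmasks (fine for small n).
    for mask in 0u32..(1u32 << n) {
        if mask.count_ones() as usize == k + 1 {
            let clause: Vec<i32> = (0..n)
                .filter(|&i| mask & (1 << i) != 0)
                .map(|i| -((i as i32) + 1)) // negative = negated literal
                .collect();
            clauses.push(clause);
        }
    }
    clauses
}
```

Already for n = 10 and k = 4 this expansion produces C(10, 5) = 252 clauses where the PB encoding uses a single constraint.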
17:00
A Boolean Formula Seeker in the Context of Acquiring Maps of Interrelated Conjectures on Sharp Bounds
PRESENTER: Ramiz Gindullin
ABSTRACT. A component of the Bound Seeker [1] is the Boolean formula seeker, a part devoted to the search for Boolean formulae. Here, a Boolean formula involves n arithmetic conditions linked by a single commutative logical operator or by a sum. Part of the originality of the Boolean formula seeker is that it was synthesised by a constraint program. This extended abstract includes (i) the type of Boolean formulae we consider, (ii) the importance of allowing Boolean formulae in the context of maps of conjectures, (iii) the components of the Boolean formula seeker, and (iv) a short description of the different steps of the acquisition process of Boolean formulae.
17:20
Explaining Propagation for Gini and Spread with Variable Mean (Extended Abstract)
PRESENTER: Alexander Ek
ABSTRACT. We introduce two log-linear-time dispersion propagators---(a) spread (variance, and indirectly standard deviation) and (b) the Gini coefficient---capable of explaining their propagations, thus allowing clause learning solvers to use the propagators. Propagators for (a) exist in the literature but do not explain themselves, while propagators for (b) have not been previously studied.
17:23
Extended Abstract: Constraint Acquisition Based on Solution Counting
ABSTRACT. We propose CABSC, a system that performs Constraint Acquisition Based on Solution Counting. In order to learn a Constraint Satisfaction Problem (CSP), the user provides positive examples and a Meta-CSP, i.e., a model of a combinatorial problem whose solution is a CSP. It allows listing the potential constraints that can be part of the CSP the user wants to learn, stating the parameters of these constraints, and imposing constraints over these parameters. CABSC reads the Meta-CSP using an augmented version of the MiniZinc language and returns the CSP that accepts the fewest solutions among the CSPs accepting all positive examples. This is done using a branch-and-bound algorithm whose bounding mechanism makes use of a model counter. Experiments show that CABSC is successful at learning constraints and their parameters from positive examples.
17:26
Extended Abstract: Acquiring Maps of Interrelated Conjectures on Sharp Bounds
ABSTRACT. To automate the discovery of conjectures on combinatorial objects, we introduce the concept of a map of sharp bounds on characteristics of combinatorial objects, which provides a set of interrelated sharp bounds for these combinatorial objects. We then describe a Bound Seeker, a CP-based system, that gradually acquires maps of conjectures. The system was tested for searching conjectures on bounds on characteristics of digraphs: it constructs sixteen maps involving 431 conjectures on sharp lower and upper bounds on eight digraph characteristics.
16:45-17:05 Session 38 (VardiFest)
Location: Taub 1
16:45
Verifying Accuracy Claims of Differential Privacy Algorithms
ABSTRACT. Differential privacy is a mathematical framework for developing statistical computations with provable guarantees of privacy and accuracy. In contrast to the privacy component of differential privacy, which has a clear mathematical and intuitive meaning, the accuracy component does not have a generally accepted definition; accuracy claims of differential privacy algorithms vary from algorithm to algorithm and are not instantiations of a general definition. In a recent paper [bcksv21], we identify program discontinuity as a common cause for ad hoc definitions and introduce an alternative notion of accuracy parametrized by what we call "distance to disagreement" --- the distance to disagreement of an input $x$ w.r.t. a deterministic computation $f$ and a distance $d$ is the minimal distance $d(x,y)$ over all $y$ such that $f(y)\neq f(x)$. The talk will discuss what this definition entails and identify circumstances under which verifying claims of accuracy is decidable.
16:55
Bridging Practice and Theory in SAT: Moshe Vardi the Catalyst
ABSTRACT. In this talk I will share the technical stepping stones that formed the path from my research world to Moshe’s. My background is in hardware verification, and this led to my interest in developing practical SAT solvers that could handle hardware verification problems at scale. This led to the two key contributions of the Chaff SAT solver from my group – the two-literal watching algorithm for unit propagation and the VSIDS (Variable State Independent Decaying Sum) decision heuristic. These techniques built on the earlier success of what is now known as CDCL (Conflict Driven Clause Learning). Collectively these techniques dramatically improved the capabilities of SAT solvers, enabling them to tackle problems at scale not just in hardware verification, but in system verification and even beyond instances from verification. The practical success of these and subsequent solvers seemed to fly in the face of the theoretical complexity of SAT. This piqued Moshe’s interest and led to his taking a major leadership role in trying to develop the theoretical foundations for what makes these solvers effective on the practical instances of interest – the relationship between the search algorithm and the search space characteristics of the practical instances. He was the driver of a series of workshops titled “Theory and Practice of Satisfiability Solving” held at BIRS Banff (2014), Dagstuhl (2015) and BIRS Oaxaca (2018). These workshops were remarkable in bringing together theoreticians and practitioners interested in SAT in an immersive setting to learn from each other and build bridges between theory and practice for this simultaneously simple and complex problem. Moshe was also instrumental in shepherding articles on the practical successes of SAT and SMT solvers in CACM – making sure these reached the broad CS community. My chance to collaborate directly with Moshe came when Kuldeep Meel visited me at Princeton while he was Moshe’s student.
We started working on the problem of finding the minimum/minimal set of independent variables for a given CNF SAT formula. This could significantly simplify the cost of the uniform sampling and model counting algorithms that Kuldeep and Moshe were working on. The collaboration expanded to include Alex Ivrii and led to a nice algorithm for this problem – making my Moshe number 1!
17:00-18:00 Session 39A (LFMTP)
Frank Pfenning special session: contributed talks
Location: Taub 3
17:00
A (Logical) Framework for Collaboration
ABSTRACT. A common interest in logical frameworks formed the basis for many years of fruitful and enjoyable collaboration with Frank and his students, both directly and indirectly. Frank’s development of the Twelf system was instrumental in enabling the use of LF for specifying a wide range of formalisms, ranging from “pure logics” to “practical programming languages.” Most importantly, the Twelf totality checker for proving All-Exists statements allowed for a human-readable and easily developed proof of type safety for the Standard ML language that not only verified what we already knew, but also fostered the development of a new language definition methodology well-suited to the demands of working at scale.
17:15
Type refinement as a unifying principle
ABSTRACT. I will discuss some personal perspectives on the "Type Refinement" project initiated by Frank several decades ago, and how it has influenced my research over the years since my PhD.
17:30
Language Minimalism and Logical Frameworks
PRESENTER: Chris Martens
ABSTRACT. In the design of tools, languages, and frameworks, Frank has been a consistent advocate for a particular flavor of aesthetic minimalism. A valuable trait of minimalist frameworks and tiny programming languages is that they facilitate implementation and reimplementation in multiple contexts with various interfaces. This principle has guided our (Chris and Rob’s) work at and beyond Carnegie Mellon, including for some recent projects we've pursued together. We'll talk about some of our Frank-adjacent projects we've reimplemented on the web, how language minimalism contributed to their success, lessons learned, and a path forward for extending the reach and application of logical frameworks.
17:45
Logics for Robotics
17:00-18:00 Session 39B: Joint Session with Vardi-Fest (LogTeach)
Location: Taub 1
17:00
How to be an ethical computer scientist (Joint with Vardi Fest)
ABSTRACT. Many of us got involved in computing because programming was fun. The advantages of computing seemed intuitive to us. We truly believed that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Recently, however, computer scientists realized that computing is not a game--it is real--and it brings with it not only societal benefits, but also significant societal costs, such as labor polarization, disinformation, and smart-phone addiction.
A common reaction to this crisis is to label it as an "ethics crisis". But corporations are driven by profits, not ethics, and machines are just machines. Only people can be expected to act ethically. In this talk, the speaker will discuss how computer scientists should behave ethically.
17:10-18:00 Session 40 (VardiFest)
Location: Taub 1
17:10
How to be an ethical computer scientist
ABSTRACT. Many of us got involved in computing because programming was fun. The advantages of computing seemed intuitive to us. We truly believed that computing yields tremendous societal benefits; for example, the life-saving potential of driverless cars is enormous! Recently, however, computer scientists realized that computing is not a game--it is real--and it brings with it not only societal benefits, but also significant societal costs, such as labor polarization, disinformation, and smart-phone addiction.
A common reaction to this crisis is to label it as an "ethics crisis". But corporations are driven by profits, not ethics, and machines are just machines. Only people can be expected to act ethically. In this talk, the speaker will discuss how computer scientists should behave ethically.
(The talk will be also broadcast live. For a webinar link to this talk, please register here: https://bit.ly/VardiFest )
Short Bio: Moshe Y. Vardi, the founding chair of FLoC, is a University Professor and the George Distinguished Service Professor in Computational Engineering at Rice University. He is the recipient of three IBM Outstanding Innovation Awards, the ACM SIGACT Goedel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, the EATCS Distinguished Achievements Award, the Southeastern Universities Research Association's Distinguished Scientist Award, the ACM SIGLOG Church Award, the Knuth Prize, the ACM Allen Newell Award, and the IEEE Norbert Wiener Award for Social and Professional Responsibility. He holds seven honorary doctorates.
18:30-20:00 Workshop Dinner (at the Technion, Taub Terrace Floor 2) - Paid event
http://nalgebra.org/wasm_and_embedded_programming/ | Web assembly and embedded programming
Linear algebra can of course be dramatically useful for browser applications or embedded applications. For example, vector, matrix, and geometric operations are must-haves for any game running on the browser. Also, you may need matrix decompositions to perform machine-learning related data analysis on an embedded program. Luckily nalgebra supports both compiling for wasm and deactivating its link to the Rust standard library so that it becomes compilable for targets without libstd.
For browser applications
All features of the nalgebra crate, including pure-rust implementations of matrix decompositions, will work out-of-the-box when compiled for the wasm32-unknown-unknown target. Recall that compiling to wasm only amounts to setting-up the target and passing the right parameters to cargo:
$ rustup target add wasm32-unknown-unknown # Must be done only once on your machine.
$ cargo build --target wasm32-unknown-unknown
On the other hand, the nalgebra-lapack crate will not compile for wasm at all since it relies on binding to implementations of BLAS and LAPACK.
For embedded development
When developing applications targeting embedded platforms, the Rust standard library libstd is not always available. In those situations you typically have to add the #![no_std] attribute to your project to prevent it from linking to libstd. Though this will be useless if nalgebra itself links to the standard library! This section explains how to compile nalgebra without needing libstd, and how this affects the features you can use.
Note that the nalgebra-lapack crate on the other hand does not support being compiled without the Rust standard library yet.
Compiling without standard library
nalgebra supports the std feature, which is enabled by default. Disabling this feature will prevent nalgebra itself from linking to libstd. To achieve this you have to compile with a nightly version of the Rust compiler and declare your nalgebra dependency with default-features = false, i.e., like the following:
[dependencies]
nalgebra = { version = "0.15", default-features = false }
This will affect which features can be used. In particular:
• All vector and matrix operations on statically-sized matrices and statically-sized vectors will continue to work out-of-the-box. Of course, this includes all the matrix decompositions!
• Geometric operations that depend on trigonometric functions (like the creation of a quaternion from an angle and axis) are available but will trigger link errors for the trigonometric functions being used. The next section shows how to fix this.
• Creating random matrices or vectors without a user-provided distribution will not be available. Therefore the ::new_random() constructor will not exist. You may still use ::from_distribution(...) instead.
• Dynamically sized vectors DVector and dynamically sized matrices DMatrix will not be available unless you activate the alloc feature as well and provide an allocator for your platform. This will let nalgebra link to the alloc crate and use the Vec data structure internally. You may do the following to enable this feature:
[dependencies]
nalgebra = { version = "0.15", default-features = false, features = [ "alloc" ] }
Trigonometry and power functions
All trigonometric, power, and rounding functions for floating point numbers depend on compiler intrinsics that are not available when a library is compiled without libstd. Therefore, whenever your nalgebra dependency is compiled without libstd, link errors will be generated whenever a feature relying on one of those functions is used. For example, compiling the following Rust code:
#![no_std]
extern crate nalgebra as na;
use na::UnitComplex;
fn foo() -> UnitComplex<f64> {
UnitComplex::new(2.0)
}
with the following dependency:
[dependencies]
nalgebra = { version = "0.15", default-features = false }
will cause the compiler to output a link error ending like this:
= note: Undefined symbols for architecture x86_64:
"_alga_sin_f64", referenced from:
_$LT$f64$u20$as$u20$alga..general..real..Real$GT$::sin_cos::h66a220f1c62a00d2 in project_name-c4e18277ea3b910c.147srt92n3an30z.rcgu.o
"_alga_cos_f64", referenced from:
_$LT$f64$u20$as$u20$alga..general..real..Real$GT$::sin_cos::h66a220f1c62a00d2 in project_name-c4e18277ea3b910c.147srt92n3an30z.rcgu.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
It means that to use this feature (here, initializing a unit complex number from an angle) nalgebra needs to know how to compute sin and cos. You can provide this information by defining the alga_sin_f64 and alga_cos_f64 functions (if 32-bit floats were used, you would need alga_sin_f32 and alga_cos_f32). Those functions must follow the standard “Rust” calling convention, be declared public, and have the #[no_mangle] attribute to avoid name mangling. Note that the link errors show function names with a leading underscore _ that must not appear at the start of your own function definitions. Taking all those restrictions into account looks like the following:
#![no_std]
extern crate nalgebra as na;
use na::UnitComplex;
#[no_mangle]
pub fn alga_sin_f64(x: f64) -> f64 {
    // e.g., delegate to an available implementation such as a libm binding
    libm::sin(x)
}
The actual implementation of trigonometric and power functions will depend on your specific target. If something like libm is available on the targeted platform then you can simply add the corresponding bindings as a dependency to your project and call them as part of your implementation of alga_sin_f64 and alga_cos_f64. Otherwise, you might want to take a look at the math.rs project which features pure-Rust implementations of some of the common trigonometric functions. Though keep in mind that, as far as we know, math.rs is not published to crates.io and might not be maintained anymore.
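As a last resort, a small polynomial approximation can stand in for the missing function. The sketch below is our own illustration (not part of nalgebra or math.rs): a truncated Taylor series with crude range reduction, adequate for demonstrations but not tuned for production accuracy.

```rust
// Illustrative only: a no_std-friendly sine built from a truncated Taylor
// series. Real projects should prefer libm or a vetted math library.
#[no_mangle]
pub fn alga_sin_f64(x: f64) -> f64 {
    const PI: f64 = core::f64::consts::PI;
    // Crude range reduction to [-pi, pi] so the series converges quickly.
    let mut x = x % (2.0 * PI);
    if x > PI {
        x -= 2.0 * PI;
    } else if x < -PI {
        x += 2.0 * PI;
    }
    // sin x = x - x^3/3! + x^5/5! - ...; each term follows from the
    // previous one by multiplying with -x^2 / ((2k)(2k+1)).
    let mut term = x;
    let mut sum = x;
    for k in 1..=12 {
        let k2 = (2 * k) as f64;
        term *= -x * x / (k2 * (k2 + 1.0));
        sum += term;
    }
    sum
}
```

The same recurrence (starting from a first term of 1 and using even powers) would give a matching alga_cos_f64.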
# Tangent to Circle in Three Steps
### Hint
How many points does it take to define a line?
How many circles does it take to define a point?
### Construction
Denote the given point $A.\,$
1. Choose an arbitrary point $B\,$ on the circle and draw circle $B(A),\,$ through $A\,$ centered at $B.\,$ Let $D\,$ be the other point of intersection of the two circles.
2. Draw the circle $A(D)\,$ through $D\,$ centered at $A.\,$ Mark point $E,\,$ the second intersection of $B(A)\,$ and $A(D).$
3. Draw the line $AE.$
### Proof 1 of the construction
Let $O\,$ be the center of the given circle.
Assume $\angle AOD=2\alpha.\,$ Then $\angle OAD=90^{\circ}-\alpha\,$ and $\angle BAD=\displaystyle \frac{\alpha}{2},\,$ as inscribed angle subtended by half the arc $\overset{\frown}{AD}.$
By the construction, $\Delta ABD=\Delta ABE\,$ such that $\angle BAE=\angle BAD=\displaystyle \frac{\alpha}{2}.\,$ It follows that
\displaystyle\begin{align}\angle OAE&=\angle OAD+\angle BAD+\angle BAE\\ &=(90^{\circ}-\alpha)+2\frac{\alpha}{2}=90^{\circ}. \end{align}
### Proof 2 of the construction
The proof is simply an observation that the angle between the line and a chord equals the inscribed angle subtending the chord, so the line is tangent to the circle.
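The construction can also be checked numerically. The following sketch (coordinates and helper names are my own, not part of the original article) places the given circle at the origin, carries out the three steps, and verifies that $AE$ is perpendicular to the radius $OA$:

```python
import math

def circle_intersections(c0, r0, c1, r1):
    """Intersection points of two circles (centers c0, c1; radii r0, r1)."""
    (x0, y0), (x1, y1) = c0, c1
    d = math.hypot(x1 - x0, y1 - y0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from c0 to chord midpoint
    h = math.sqrt(r0**2 - a**2)            # half length of the common chord
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    ox, oy = -(y1 - y0) * h / d, (x1 - x0) * h / d
    return ((xm + ox, ym + oy), (xm - ox, ym - oy))

# Given circle: unit circle centered at O; tangent requested at A = (1, 0).
O, A = (0.0, 0.0), (1.0, 0.0)
B = (math.cos(2.0), math.sin(2.0))         # arbitrary point B on the circle
rBA = math.hypot(A[0] - B[0], A[1] - B[1])

# Step 1: D = second intersection of B(A) with the given circle.
p, q = circle_intersections(B, rBA, O, 1.0)
D = p if math.hypot(p[0] - A[0], p[1] - A[1]) > 1e-6 else q

# Step 2: E = second intersection of A(D) with B(A).
rAD = math.hypot(D[0] - A[0], D[1] - A[1])
p, q = circle_intersections(A, rAD, B, rBA)
E = p if math.hypot(p[0] - D[0], p[1] - D[1]) > 1e-6 else q

# Step 3: line AE should be tangent, i.e. perpendicular to radius OA.
dot = (E[0] - A[0]) * (A[0] - O[0]) + (E[1] - A[1]) * (A[1] - O[1])
print(abs(dot) < 1e-9)
```

Changing the angle used for $B$ gives the same perpendicularity, as the proof predicts.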
### Acknowledgment
The above problem comes from Euclidea, a site devoted to problems of Euclidean construction. The site and the problem were brought to my attention by Konstantin Knop. Proof 2 is by Ted Courant.
# Robust Learning via Cause-Effect Models
We consider the problem of function estimation in the case where the data distribution may shift between training and test time, and additional information about it may be available at test time. This relates to popular scenarios such as covariate shift, concept drift, transfer learning and semi-supervised learning. This working paper discusses how these tasks could be tackled depending on the kind of changes of the distributions. It argues that knowledge of an underlying causal direction can facilitate several of these tasks.
## 1 Introduction
By and large, statistical machine learning exploits statistical associations or dependences between variables to make predictions about certain variables. This is a very powerful concept especially in situations where we have sizable training sets, but no detailed model of the underlying data generating process. This process is usually modelled as an unknown probability distribution, and machine learning excels whenever this distribution does not change. Most of the theoretical analysis assumes that the data are i.i.d. (independent and identically distributed) or at least exchangeable.
On the other hand, practical problems often do not have these favorable properties, forcing us to leave the comfort zone of i.i.d. data. Sometimes distributions shift over time, sometimes we might want to combine data recorded under different conditions or from different but related regularities. Researchers have developed a number of modifications of statistical learning methods to handle various scenarios of changing distributions, for an overview, see [1].
The present paper attempts to study these problems from the point of view of causal learning. As some other recent work in the field [2, 3], it will build on the assumption that in causal structures, the distribution of the cause and the mechanism relating cause and effect tend to be independent (in the mentioned references, independence is meant in the sense of algorithmic independence, but other notions of independence can also make sense). For instance, in the problem of predicting splicing patterns from genomic sequences, the basic splicing mechanism (driven by the spliceosome) may be assumed stable between different species [4], even though the genomic sequences and their statistical properties might differ in several respects. This is important information constraining causal models, and it can also be useful for robust predictive models, as we try to show in the present paper. Intuitively, if we learn a causal model of splicing, we could hope to be more robust with respect to changes of the input statistics, and we may be able to combine data collected from different species to get a more accurate statistical model of the splicing mechanism.
Causal graphical models as pioneered by [5, 6] are usually thought of as joint probability distributions over a set of variables X_1, …, X_n, along with directed graphs (for simplicity, we assume acyclicity) with vertices X_i and arrows indicating direct causal influences. The causal Markov assumption [5] states that each vertex X_i is independent of its non-descendants in the graph, given its parents. Here, independence is usually meant in a statistical sense, although alternative views have been developed, e.g., using algorithmic independence [3].
Crucially, the causal Markov assumption links the semantics of causality to something that has empirically measurable consequences (e.g., conditional statistical independence). Given a sufficient set of observations from a joint distribution, it allows us to test conditional independence statements and thus infer (subject to a genericity assumption referred to as “faithfulness”) which causal models are consistent with an observed distribution. However, this will typically not lead us to a unique causal model, and in the case of graphs with only two variables, there are no conditional independence statements to test and we cannot do anything.
There is an alternative view of causal models, which does not start from a joint distribution. Instead, it assumes a set of jointly independent noise variables, one at each vertex, and each vertex computes a deterministic function of its noise variables and its parents. This view, referred to as a functional causal model (or nonlinear structural equation model), entails a joint distribution which along with the graph satisfies the causal Markov assumption [5]. Vice versa, each causal graphical model can be expressed as a functional causal model [3, e.g.]. (As an aside, note that the functional point of view is more specific than the graphical model view [5]. To see this, consider binary X = N_X and the following two functional models that lead to the same joint distribution: (1) Y = N_Y with N_Y uniform and independent of N_X, and (2) Y = X ⊕ N_Y with N_Y uniform and independent of N_X. Suppose one observes the sample (x, y) = (0, 1). Then (1) and (2) give different answers to the counterfactual question "What would have happened if x had been one?". The causal graph and the joint distribution do not provide sufficient information to give any answer.)
The functional point of view is rather useful in that it allows us to come up with assumptions on causal models that would be harder to conceive in a purely probabilistic view. It has recently been shown [7] that an assumption of nonlinear functions with additive noise renders the two-variable case (and thus the multivariate case [8]) identifiable, i.e., we can distinguish between the causal structures X → Y and Y → X, given that one and only one of these two alternatives is true (which implicitly excludes a common cause of X and Y). Hence, we can tackle the case where conditional independence tests do not provide any information. This opens up the possibility to identify the causal direction for input-output learning problems. The present paper assays whether this can be helpful for machine learning, and it argues that in many situations, a causal model can be more robust under distribution shifts than a purely statistical model. Perhaps somewhat surprisingly, learning problems need not always predict effect from cause, and the direction of the prediction has consequences for which tasks are easy and which tasks are hard. In the remainder of the paper, we restrict ourselves to the simplest possible case, where we have two variables only and there are no unobserved confounders.
##### Notation.
We consider the causal structure shown in Fig. 1, with two observables, modeled by random variables. When using the notation C and E, the variable C stands for the cause and E for the effect. We denote their distributions by P(C) and P(E) (overloading the notation P). When using the notation X and Y, the variable X will always be the input and Y the output, from a machine learning point of view (but input and output can either be cause or effect; more below).
For simplicity, we assume that their distributions have a joint density with respect to some product measure. We write the values of this density as p(x, y) and the values of the marginal densities as p(x) and p(y), again keeping in mind that these are different functions; we can always tell from the argument which function is meant.
We identify a training set of size n with a uniform mixture of n Dirac measures and use an analogous notation for an additional data set of size m. E.g., such an additional set could be a set of test inputs sampled from a distribution P′(X) that need not be identical with P(X). The following assumptions are used throughout the paper. The subsections below only mention additional assumptions that are task specific.
##### Causal sufficiency.
We further assume that there are two independent noise variables N_C and N_E, modeled as random variables with distributions P(N_C) and P(N_E). In some places, we will use conditional densities, always implicitly assuming that they exist.
The function φ and the noise term N_E jointly determine the conditional P(E|C) via
E = φ(C, N_E).
We think of φ as the mechanism transforming cause C into effect E.
##### Independence of mechanism and input.
We finally assume that the mechanism φ is "independent" of the distribution of the cause (i.e., independent of P(C) in Fig. 1), in the sense that P(C) contains no information about φ and vice versa; in particular, if P(C) changes at some point in time, there is no reason to believe that φ changes at the same time. (A stronger condition, which we do not need in the present context, would be to require that P(C), P(N_E), and φ be jointly "independent.") This assumption has been used by [2, 3]. It encapsulates our belief that φ is a mechanism of nature that does not care what we feed into it. The assumption introduces an important asymmetry between cause and effect, since it will usually be violated in the backward direction, i.e., the distribution of the effect will inherit properties from both φ and P(C) [3, 9].
##### Richness of functional causal models
It turns out that the two-variable functional causal model is so rich that it cannot be identified. The causal Markov condition is trivially satisfied both by the forward model and the backward model, and thus both graphs allow a functional model.
To understand the richness of the class intuitively, consider the simple case where the noise can take only a finite number of values, say N_E ∈ {1, …, k}. This noise could affect φ for instance as follows: there is a set of functions φ_1, …, φ_k, and the noise randomly switches one of them on at any point, i.e.,
φ(c, n) = φ_n(c).
The functions φ_1, …, φ_k could implement arbitrarily different mechanisms, and it would thus be very hard to identify φ from empirical data sampled from such a complex model. (A similar construction, with the range of the noise having the cardinality of the function class, can be used [3] to argue that every causal graphical model can be expressed as a functional causal model.)
As an aside, recall that for acyclic causal graphs with more than two variables, the graph structure will typically imply conditional independence properties via the causal Markov condition. However, the above construction with noises randomly switching between mechanisms is still valid, and it is thus surprising that conditional independence alone does allow us to do some causal inference of practical significance, as implemented by the well-known PC and FCI algorithms [6, 5]. It should be clear that additional assumptions that prevent the noise-switching construction should significantly facilitate the task of identifying causal graphs from data. Intuitively, such assumptions need to control the complexity with which the noise can affect the mechanism.
One such assumption is referred to as ANM, standing for additive noise model [7]. This model assumes, for some function ϕ:
E = ϕ(C) + N_E, (1)
and it has been shown that the model is identifiable in the generic case, provided that N_E is assumed to have zero mean. This means that apart from some exceptions, such as the case where ϕ is linear and N_E is Gaussian, a given joint distribution of two real-valued random variables can be fit by an ANM model in at most one direction (which we then consider the causal one).
A similar statement has been shown for discrete data [10] and for the postnonlinear ANM model [11]
E = ψ(ϕ(C) + N_E),
where ψ is an invertible function.
In practice, an ANM model can be fit by regressing the effect on the cause while enforcing that the residual noise variable is independent of the cause [12]. If this is impossible, the model is incorrect (e.g., cause and effect are interchanged, the noise is not additive, or there are confounders).
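As an illustrative toy sketch of this fitting procedure (not the authors' implementation: polynomial regression stands in for a nonlinear regressor, and a crude score based on correlating squared residuals with features of the input stands in for a proper independence test such as HSIC):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.uniform(-2, 2, 1000)
E = C + C**3 + rng.uniform(-0.5, 0.5, 1000)   # ANM: E = phi(C) + N_E

def residual_dependence(x, y, degree=5):
    """Regress y on x (polynomial fit) and return a crude dependence score
    between x and the squared residuals; squared residuals pick up the
    input-dependent noise level that shows up in the wrong direction."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    r2 = resid**2
    r2 = (r2 - r2.mean()) / r2.std()
    feats = np.column_stack([x, x**2, x**3])
    feats = (feats - feats.mean(0)) / feats.std(0)
    return np.abs(feats.T @ r2 / len(x)).max()

forward = residual_dependence(C, E)    # causal direction: residuals ~ noise
backward = residual_dependence(E, C)   # anticausal: residual size depends on E
print(forward < backward)
```

In the causal direction the residuals behave like the independent noise, so the score is small; in the backward direction the residual magnitude varies with the regressor, so the ANM fit is rejected there.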
ANM plays an important role in this paper; first, because all the methods below will presuppose that we know what is cause and what is effect, and second, because we will generalize ANM to handle the case where we have several models of the form (1) that share the same ϕ.
## 2 Predicting Effect from Cause
Let us consider the case where we are trying to estimate a function or a conditional distribution P(Y|X) in the causal direction, i.e., where X is the cause and Y the effect. Intuitively, this situation of causal prediction should be the 'easy' case, since there exists a functional mechanism which the estimation should try to mimic. We are interested in the question how robust (or invariant) the estimation is with respect to changes in the noise variables of the underlying functional causal model.
#### 2.1.1 Robustness w.r.t. input changes (distribution shift)
##### Given:
training points sampled from P(X, Y) and an additional set of inputs sampled from P′(X), with P′(X) ≠ P(X).
##### Task:
estimate P′(Y|X).
##### Assumption:
none.
##### Solution:
by independence of mechanism and input, there is no reason to assume that the observed change in P(X) entails a change in P(Y|X), and we thus conclude P′(Y|X) = P(Y|X). This scenario is referred to as covariate shift [1].
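A quick numeric illustration of this claim (a made-up example, not from the paper): a regressor fit on data from P(X, Y) keeps predicting well when only the input distribution shifts, as long as the new inputs stay within the training support:

```python
import numpy as np

rng = np.random.default_rng(1)
x_tr = rng.uniform(-2, 2, 2000)
y_tr = np.sin(x_tr) + 0.1 * rng.normal(size=2000)   # fixed mechanism + noise
coeffs = np.polyfit(x_tr, y_tr, 7)                  # model of P(Y|X)

x_new = rng.uniform(0, 2, 2000)                     # shifted input distribution
y_new = np.sin(x_new) + 0.1 * rng.normal(size=2000) # same mechanism
resid = y_new - np.polyval(coeffs, x_new)
print(np.mean(resid**2))                            # stays near the noise variance
```

Because the mechanism P(Y|X) did not change, the test error under P′(X) stays near the noise floor of 0.01.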
#### 2.1.2 Semi-supervised learning
##### Given:
training points sampled from P(X, Y) and an additional set of inputs sampled from P(X).
##### Task:
estimate P(Y|X).
##### Note:
by independence of the mechanism, P(X) contains no information about P(Y|X). A more accurate estimate of P(X), as may be possible by the addition of the test inputs, does thus not influence an estimate of P(Y|X), and semi-supervised learning (SSL) is pointless for the scenario in Figure 2.
#### 2.2.1 Robustness w.r.t. output changes
##### Given:
training points sampled from P(X, Y) and an additional set of outputs sampled from P′(Y), with P′(Y) ≠ P(Y).
##### Task:
estimate P′(Y|X).
##### Assumption:
various options; e.g., an additive Gaussian noise model Y = ϕ(X) + N_Y where P(ϕ(X)) is indecomposable and P′(ϕ(X)) is also indecomposable, if it is different from P(ϕ(X)).
##### Solution:
first we need to decide whether P(X) or P(N_Y) has changed. This can be done using the method Localizing Distribution Change (Subsection 4.2) under appropriate assumptions (see above). If P(X) has changed, proceed as in Subsubsection 2.1.1. If P(N_Y) has changed, we can estimate P′(Y|X) via Estimating Causal Conditionals (Subsection 4.3). Here, additive noise is a sufficient assumption.
#### 2.2.2 Semi-supervised learning
##### Given:
training points sampled from P(X, Y) and an additional set of outputs sampled from P(Y).
##### Task:
estimate P(Y|X).
##### Assumption:
P(X, Y) admits an additive noise model from X to Y, and P(Y) has a unique decomposition as a convolution of two distributions, say P(Y) = P(ϕ(X)) ∗ P(N_Y). This is, for instance, satisfied if the noise is Gaussian and P(ϕ(X)) is indecomposable.
##### Solution:
The additional outputs help because the decomposition tells us the two factors P(ϕ(X)) and P(N_Y), up to which is which. The additive noise model learned from the (x, y)-pairs will probably tell us which of the alternatives is true. Knowing P(N_Y), learning the conditional P(Y|X) reduces to learning ϕ from the (x, y)-pairs, which is certainly a weaker problem than learning P(Y|X) would be in general.
#### 2.3.1 Transfer learning (only noise changes)
##### Given:
training points sampled from P(X, Y) and an additional set of points sampled from P′(X, Y), with P′(X, Y) ≠ P(X, Y).
##### Task:
estimate P′(Y|X).
##### Assumption:
additive noise Y = ϕ(X) + N_Y where ϕ is invariant, but the noises can change.
##### Solution:
run conditional ANM to output a single function ϕ, only enforcing independence of residuals separately for the two data sets (Section 4.4).
There is also a semi-supervised learning variant of this scenario: given a training set plus two unpaired sets from the two original marginals, the extra sets help to better estimate P(Y|X), because we have argued in Subsubsection 2.2.2 that additional y-values sampled from P(Y) already help.
#### 2.3.2 Concept drift (only mechanism changes)
##### Given:
training points sampled from P(X, Y) and an additional set of points sampled from P′(X, Y), with P′(X, Y) ≠ P(X, Y).
##### Task:
estimate P′(Y|X).
##### Assumption:
P(X) invariant, but ϕ has changed to ϕ′.
##### Solution:
Apply ANM to the points sampled from P′(X, Y) to obtain ϕ′. Then P′(Y|X) is given by
P′(Y|X) = P_{N_Y}(Y − ϕ′(X)).
## 3 Predicting Cause from Effect
We now turn to the opposite direction, where we consider the effect as observed and we try to predict the value of the cause variable that led to it. This situation of anticausal prediction may seem unnatural, but it is actually ubiquitous in machine learning. Consider, for instance, the task of predicting the class label Y of a handwritten digit from its image X. The underlying causal structure is as follows: a person intends to write the digit 7, say, and this intention causes a motor pattern producing an image of the digit 7; in that sense, it is justified to consider the class label Y the cause of the image X.
#### 3.1.1 Robustness w.r.t. input changes (distribution shift)
##### Given:
training points sampled from P(X, Y) and an additional set of inputs sampled from P′(X), with P′(X) ≠ P(X). (A related scenario is that we do not have additional data from P′(X), but we want to still use our knowledge of the causal direction to learn a model that is somewhat robust w.r.t. changes of P(X).)
##### Task:
estimate P′(Y|X).
##### Assumption:
additive Gaussian noise X = ϕ(Y) + N_X with invertible function ϕ and indecomposable P(ϕ(Y)) is sufficient. Other assumptions are also possible, but invertibility of the causal conditional P(X|Y) is necessary in any case.
##### Solution:
We apply Localizing Distribution Change (Subsection 4.2) to decide if P(Y) or P(N_X) has changed. In the first case, we can estimate P′(Y) via Inverting Conditionals (Subsection 4.1) if we assume that P(X|Y) is an injective conditional. (This term will be introduced in Subsection 4.1. Injectivity means that the input distribution can uniquely be computed from the output distribution. We will give examples of injective conditionals later.) From this we get P′(X, Y) = P(X|Y)P′(Y), and then
P′(Y|X) = P′(X, Y) / ∫ P′(X, Y) dY.
If, on the other hand, P(N_X) has changed, we can estimate P′(X|Y) via Estimating Causal Conditionals (Subsection 4.3).
#### 3.1.2 Semi-supervised learning
##### Given:
training points sampled from P(X, Y) and an additional set of inputs sampled from P(X).
##### Task:
estimate P(Y|X).
##### Assumption:
unclear.
##### Note:
by dependence of the mechanism and the input distribution, P(X) contains information about P(Y|X). The additional inputs thus may allow a more accurate estimate of P(Y|X). (Note that a weak form of SSL could roughly work as follows: after learning a generative model of P(X, Y) from the first part of the sample, we can use the additional samples from P(X) to double-check whether our model generates the right distribution for X.)
Known methods for semi-supervised learning can indeed be viewed in this way. For instance, the cluster assumption says that points that lie in the same cluster of P(X) should have the same label y; and the low density separation assumption says that the decision boundary of a classifier (i.e., the point where P(Y = 1|X) crosses 1/2) should lie in a region where P(X) is small. The semi-supervised smoothness assumption says that the estimated function (which we may think of as the expectation of P(Y|X)) should be smooth in regions where P(X) is large (for an overview of the common assumptions, see [13]). Some algorithms assume a model for the causal mechanism P(X|Y), which is usually a Gaussian distribution or a mixture of Gaussians, and learn it on both labeled and unlabeled data [14]. Note that all these assumptions translate properties of P(X) into properties of P(Y|X).
Using a more accurate estimate of P(X), we could also try to proceed as in Subsubsection 3.1.1. (However, in this case we do not have the two alternatives of whether P(Y) or P(N_X) has changed. The question now should be: given a better estimate of P(X), does that change our estimate of P(Y), or of P(N_X)?)
#### 3.2.1 Robustness w.r.t. output changes
##### Given:
training points sampled from P(X, Y) and an additional set of outputs sampled from P′(Y), with P′(Y) ≠ P(Y).
##### Task:
estimate P′(Y|X).
##### Assumption:
none.
##### Solution:
independence of mechanism implies P′(X|Y) = P(X|Y), hence P′(X, Y) = P(X|Y)P′(Y). From this, we compute
P′(Y|X) = P′(X|Y)P′(Y) / ∫ P′(X, Y) dY.
There may also be room for a semi-supervised learning variant: suppose we have additional output observations rather than additional inputs as in standard SSL — in which situations does this help?
#### 3.3.1 Robustness w.r.t. changes of input and output noise (transfer learning)
##### Given:
training points sampled from P(X, Y) and an additional set of points sampled from P′(X, Y), with P′(X, Y) ≠ P(X, Y).
##### Task:
estimate P′(Y|X).
##### Assumption:
additive noise X = ϕ(Y) + N_X where ϕ is invariant, but the noises can change.
##### Solution:
analogous to Subsection 2.3.1, but use the model backwards in the end.
#### 3.3.2 Concept drift (changes of the mechanism)
##### Given:
training points sampled from P(X, Y) and an additional set of points sampled from P′(X, Y), with P′(X, Y) ≠ P(X, Y).
##### Task:
estimate P′(Y|X).
##### Assumption:
P(Y) invariant, but ϕ has changed to ϕ′.
##### Solution:
We can learn ϕ′ from the points sampled from P′(X, Y) and then estimate the entire distribution P′(X, Y) using the estimate of P(Y) obtained from the pairs that were taken from P(X, Y).
## 4 Modules
### 4.1 Inverting Conditionals
We can think of a conditional P(Y|X) as a mechanism that transforms P(X) into P(Y). In some cases, we do not lose any information by this mechanism:
###### Definition 1 (injective conditionals)
a conditional distribution P(Y|X) is called injective if there are no two different distributions P(X) and P′(X) such that
∫ P(y|x)P(x) dx = ∫ P(y|x)P′(x) dx.
###### Example 1 (full rank stochastic matrix)
Let X and Y have finite ranges. Then P(Y|X) is given by a stochastic matrix M with entries M_{yx} = P(y|x), and it is injective if and only if M has full column rank. Note that this is only possible if the range of Y is at least as large as the range of X.
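A small numeric sketch of this example (the matrix values are made up): when the stochastic matrix has full column rank, the input distribution can be recovered from the output distribution by solving the linear system.

```python
import numpy as np

M = np.array([[0.7, 0.2],        # M[y, x] = P(y|x); columns sum to 1
              [0.2, 0.1],
              [0.1, 0.7]])
p_x = np.array([0.4, 0.6])       # true input distribution
p_y = M @ p_x                    # induced output distribution

# Recover p_x via least squares (exact here, since M has full column rank).
p_x_hat, *_ = np.linalg.lstsq(M, p_y, rcond=None)
print(np.allclose(p_x_hat, p_x))
```

If the columns of M were linearly dependent, several input distributions would map to the same p_y and the recovery would fail, which is exactly the non-injective case.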
###### Example 2 (Post-nonlinear model)
Let X and Y be real-valued and
Y = ψ(ϕ(X) + N_Y) with N_Y ⊥⊥ X,
be a post-nonlinear model where ϕ and ψ are injective. Then the distribution of Y uniquely determines the distribution of ϕ(X) + N_Y because ψ is invertible. This, in turn, uniquely determines the distribution of ϕ(X), provided that the convolution with P(N_Y) is invertible. Since ϕ is invertible, this determines the distribution of X uniquely.
Note that additive noise models with injective ϕ are a special case of a post-nonlinear model by setting ψ = id.
### 4.2 Localizing distribution change
Given data points sampled from P(E) and additional points sampled from P′(E), we wish to decide whether P(C) or P(N_E) has changed. Assume
E = ϕ(C) + N_E,
with the same ϕ for both distributions P and P′, but the distribution of the noise or the distribution of C changes. Let P(ϕ(C)) denote the distribution of ϕ(C) (explicitly, it is the push-forward of the distribution of C under ϕ).
Then the distributions of the effect are given by
P(E) = P(ϕ(C)) ∗ P(N_E)
P′(E) = P′(ϕ(C)) ∗ P′(N_E),
where either P′(ϕ(C)) = P(ϕ(C)) or P′(N_E) = P(N_E). To decide which of these cases is true, we first estimate P(ϕ(C)) and P(N_E) from the first data set, and then apply a deconvolution (denoted by ∗⁻¹) with P(ϕ(C)) or P(N_E) to P′(E) and check whether (1) P(ϕ(C)) ∗⁻¹ P′(E) or (2) P(N_E) ∗⁻¹ P′(E) is a probability distribution. Below we will discuss one possible set of assumptions that ensure that exactly one of the alternatives is true. In case (1), P(N_E) has changed. In case (2), P(C) has changed.
To show that there are (not too artificial) assumptions that render the problem solvable, assume that P(ϕ(C)) and P′(ϕ(C)) are indecomposable and N_E and N′_E are Gaussian with zero mean. Then the distribution of the effect uniquely determines both factors, by deconvolving with the Gaussian of maximal possible width that still yields a probability distribution.
We are aware that there exist situations where both cases are possible. For instance, one can construct an example in which ϕ(C) follows a uniform distribution and, when generating the new data, only the noise distribution was changed; yet applying the deconvolution with the old noise distribution P(N_E) to P′(E) still results in a valid probability distribution, so that a change of P(C) also remains consistent with the data. Consequently, we have to conclude that both cases are possible.
Despite such examples where the proposed method fails, it also works in, hopefully, many situations. For instance, switching the roles of P(ϕ(C)) and P(N_E) in the example above, the corresponding deconvolution yields something that is not a valid distribution, so in that case we can decide which distribution has changed. We are working on conditions that guarantee that only one of the two cases is possible.
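As a toy sketch of why ambiguity can arise (numbers assumed, not from the paper): if every factor is Gaussian, convolution adds variances, so "deconvolution" amounts to subtracting them, and a candidate factorization is invalid exactly when the remaining variance is negative. In the all-Gaussian case both subtractions can stay positive, so the change cannot be localized, which is why the indecomposability assumption on P(ϕ(C)) matters.

```python
# Training factorization: P(phi(C)) and P(N_E), both zero-mean Gaussians.
var_phiC, var_N = 4.0, 1.0

# New data: only the noise variance grew (1.0 -> 2.25), so
# Var(E') = var_phiC + 2.25.
var_E_new = 4.0 + 2.25

# Case (1): deconvolve P'(E) by P(phi(C)) -> candidate Var(N'_E).
cand_noise_var = var_E_new - var_phiC
# Case (2): deconvolve P'(E) by P(N_E) -> candidate Var(phi(C'))
cand_cause_var = var_E_new - var_N

# Both candidates are positive, so both factorizations look valid:
print(cand_noise_var > 0, cand_cause_var > 0)
```

With an indecomposable (e.g., uniform) P(ϕ(C)) instead, one of the two deconvolutions generically fails to be a distribution, and the change can be localized.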
### 4.3 Estimating causal conditionals
Given P′(E), estimate P′(E|C) under the assumption that P(C) remained constant. Assume that P(E) and P′(E) have been generated by the additive noise model
E = ϕ(C) + N_E,
with the same ϕ and P(C), while the distribution of N_E has changed. We have
P(E) = P(ϕ(C)) ∗ P(N_E),
P′(E) = P(ϕ(C)) ∗ P′(N_E).
Hence, P′(N_E) can be obtained by the deconvolution
P′(N_E) = P(ϕ(C)) ∗⁻¹ P′(E).
This way, we can compute the new conditional P′(E|C).
### 4.4 Conditional ANM
Given two data sets generated by
E=ϕ(C)+NE (2)
and
E′=ϕ(C′)+N′E, (3)
respectively. We apply the algorithm of [12] to obtain the shared function , enforcing separate independence and .
This can be interpreted as an ANM model enforcing conditional independence in
E|i = ϕ(C|i) + N_E|i, (4)
where i is an index over the two data sets, and the noise is required to be independent of the cause given i.
##### Acknowledgement
We thank Joris Mooij, Bob Williamson, Vladimir Vapnik, Jakob Zscheischler and Eleni Sgouritsa for helpful discussions.
## References
• [1] M. Sugiyama and M. Kawanabe. Machine Learning in Non-Stationary Environment. MIT Press, Cambridge, MA, 2012.
• [2] J. Lemeire and E. Dirkx. Causal models as minimal descriptions of multivariate systems, 2007.
• [3] D. Janzing and B. Schölkopf. Causal inference using the algorithmic Markov condition. IEEE Transactions on Information Theory, 56(10):5168–5194, 2010.
• [4] G. Schweikert, C. Widmer, B. Schölkopf, and G. Rätsch. An empirical analysis of domain adaptation algorithms for genomic sequence analysis. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 21, pages 1433–1440, 2009.
• [5] J. Pearl. Causality. Cambridge University Press, 2000.
• [6] P. Spirtes, C. Glymour, and R. Scheines. Causation, prediction, and search. Springer-Verlag. (2nd edition MIT Press 2000), 1993.
• [7] P. O. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 21, pages 689–696, 2009.
• [8] J. Peters, J. M. Mooij, D. Janzing, and B. Schölkopf. Identifiability of causal graphs using functional models. In Proceedings of the 27th Conference on UAI, pages 589–598, 2011.
• [9] P. Daniušis, D. Janzing, J. Mooij, J. Zscheischler, B. Steudel, K. Zhang, and B. Schölkopf. Inferring deterministic causal relations. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, Corvallis, OR, USA, 2010. AUAI Press.
• [10] J. Peters, D. Janzing, and B. Schölkopf. Causal inference on discrete data using additive noise models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33:2436–2450, 2011.
• [11] K. Zhang and A. Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, Montreal, Canada, 2009.
• [12] J. Mooij, D. Janzing, J. Peters, and B. Schölkopf. Regression by dependence minimization and its application to causal inference in additive noise models. In A. Danyluk, L. Bottou, and M. Littman, editors, Proceedings of the 26th International Conference on Machine Learning, New York, NY, USA, 06 2009. ACM Press.
• [13] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, USA, 09 2006.
• [14] X. Zhu and A. Goldberg. Introduction to semi-supervised learning. In Synthesis Lectures on Artificial Intelligence and Machine Learning, volume 3, pages 1–130. Morgan & Claypool Publishers, 2009.
https://groups.google.com/forum/?_escaped_fragment_=topic/comp.text.tex/tcF0t0cz5-8

## Problem using Biolinum font with Beamer in pdflatex
**liantze, 10/13/11:**

I'm using the Biolinum font (part of the libertine package) in a Beamer presentation. This used to work fine, until a recent slew of updates. Notice how, in the following, the heights of the headline and footline boxes are very small, almost as if they can't detect a font size. The same happens with the enumerated list markers.

```latex
\documentclass{beamer}
\usetheme{Warsaw}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{libertine}
\author{An Author}
\title{A Presentation}
\begin{document}
\begin{frame}
\maketitle
\end{frame}
\section{Intro}
\begin{frame}
\begin{enumerate}
\item Hey
\end{enumerate}
\end{frame}
\end{document}
```

The problem is only with the Biolinum font, i.e. the sans serif font supplied by the libertine package. If you add \usefonttheme{serif}, or even use a different sans serif font altogether such as \usepackage{helvet}, then the headline and footline boxes, as well as the enumerate list markers, will have the correct heights. This happens with both TeX Live and MiKTeX. While I know xelatex is one possible solution, does anyone know how to solve this problem if one absolutely has to use pdflatex? Many thanks in advance, - Lian Tze -

**Ulrike Fischer, 10/13/11:**

That's obviously a bug in the font. It doesn't set the "ex" value correctly. You should make a bug report.
You can reset the value, but I don't know how you can add this to the beamer font command.

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{libertine}
\begin{document}
\sffamily x \rule{1ex}{1ex}

\rmfamily x \rule{1ex}{1ex}

\sffamily x \fontdimen5\font=\fontcharht\font`\x \rule{1ex}{1ex}
\end{document}
```

-- Ulrike Fischer

**liantze, 10/13/11:**

Thank you Ulrike. I've just discovered that a bug about this has indeed been filed (and is still open). Funny how I didn't see it in the bug list earlier this evening... *shamefaced* Thanks again and best regards, - Lian Tze -

**Robin Fairbairns, 10/13/11:**

Put it in the last (6th) arg to the \DeclareFontShape command (that arg is normally empty in .fd files); it'll then get executed every time the font is selected. -- Robin Fairbairns, Cambridge

**liantze, 10/13/11:**

Thank you, Robin, modifying the .fd files did the trick! Can I also say a personal "thanks" for your hard and excellent work at maintaining the FAQs?
It remains my first reference whenever I'm wondering how to do something. Best regards, - Lian Tze -

**Bob T., 12/13/12:**

I've received a complaint about a Biolinum Type 1 font which is apparently similar to this. Please enlighten me where the ex value is set in a pfb font. The otf version doesn't have the same issue when xelatex is used. The pfb is generated by cfftot1 from the otf font without any modification (except the FontName). Bob T.

**Ulrike Fischer, 12/14/12:**

Well, if you mean there is a problem when the font is used with pdflatex: the value must be in the tfm file. And the tfm file normally gets it from the afm and the "XHeight" value. But there are no afm files in the package, so I can't check what their value is, and I don't have the time today to run autoinst.
But I can confirm that the ex value is wrong in the Type 1 case: it is about 1.38pt instead of 4.3pt:

```latex
\documentclass{book}
\usepackage[biolinum]{libertine}
\begin{document}
\showthe\fontdimen5\font %ok

\sffamily abc
\showthe\fontdimen5\font %wrong

\fontdimen5\font=\fontcharht\font`\x
\rule{1ex}{1ex}
\showthe\fontdimen5\font %now ok.
\end{document}
```

-- Ulrike Fischer, http://www.troubleshooting-tex.de/

**Bob Tennent, 12/14/12:**

Thanks. The afms aren't generated by otftotfm and I never see them using autoinst. But I think I've managed to fix the problem.
From the discussion at linuxlibertine.org, I learned that the original (2011) bugs were introduced by a buggy version of fontforge and that the current otfs didn't have those bugs; indeed, the Beamer problems don't arise with xelatex or lualatex. But I had unknowingly used fontforge-2011 to change the FontNames and re-introduced the original bugs! I've now installed fontforge-2012, re-generated and over-written the pfbs and support files for Biolinum, and the Beamer problems seem to be gone. The ex value is 4.6pt, which is perhaps more than it should be, but better than 1.38pt. Bob T.
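Robin's suggestion of using the sixth argument of \DeclareFontShape can be sketched as follows. This is only an illustration of the mechanism: the family name `fxb` and tfm name `fxbr-t1` are hypothetical placeholders, not the actual entries in the libertine .fd files.

```latex
% Hypothetical .fd entry. The 6th argument of \DeclareFontShape is executed
% every time this font shape is selected, so the ex correction is applied
% automatically without touching user documents.
\DeclareFontShape{T1}{fxb}{m}{n}{%
  <-> fxbr-t1%
}{\fontdimen5\font=\fontcharht\font`\x }
```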
http://merganser.math.gvsu.edu/david/linear.algebra/ula/ula/knowl/exercise-106.html

##### Exercise 6
Suppose that $$A$$ and $$B$$ are $$5\times5$$ matrices such that $$\det A = -2$$ and $$\det B = 5\text{.}$$ Find the following determinants:
1. $$\det(2A)\text{.}$$
2. $$\det(A^3)\text{.}$$
3. $$\det(AB)\text{.}$$
4. $$\det(-A)\text{.}$$
5. $$\det(AB^{-1})\text{.}$$
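Each part follows from standard determinant identities for an n×n matrix (here n = 5): det(cA) = cⁿ det A, det(Aᵏ) = (det A)ᵏ, det(AB) = det A · det B, and det(B⁻¹) = 1/det B. A quick script to evaluate the answers from these rules (the variable names are ours, not part of the exercise):

```python
from fractions import Fraction

n = 5
detA, detB = Fraction(-2), Fraction(5)

# Each identity reduces the requested determinant to arithmetic on det A, det B.
answers = {
    "det(2A)":    Fraction(2) ** n * detA,   # det(cA) = c^n det(A)
    "det(A^3)":   detA ** 3,                 # det(A^k) = det(A)^k
    "det(AB)":    detA * detB,               # det(AB) = det(A) det(B)
    "det(-A)":    Fraction(-1) ** n * detA,  # c = -1, and n = 5 is odd
    "det(AB^-1)": detA / detB,               # det(B^-1) = 1 / det(B)
}
for name, value in answers.items():
    print(name, "=", value)
```

which evaluates to -64, -8, -10, 2, and -2/5, respectively.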
https://gmatclub.com/forum/knewton-online-course-review-78282-120.html
# Knewton online course. Review.
**Intern, 22 Feb 2010:**

I'm planning on joining Knewton's course but I need someone to pair with to get the $150 off, so please let me know if anyone is interested and we can both get $150 back. Thanks!
**Founder, 22 Feb 2010:**

FYI, the current Knewton promotion (exclusive to GMAT Club): get $150 off plus the GMAT Club Tests when you sign up with Knewton before Feb 28th, 2010. You must use the promo code KNEWTONBEST-GMAT-CLUB.

**Senior Manager, 22 Feb 2010:**

PM your email id and name if you want me to refer you. -- SD

**Senior Manager, 28 Feb 2010:**

I took the Knewton diagnostic test today. The math section was pretty similar to what I have experienced on the real GMAT; the verbal section, however, was fairly easy, and I was disappointed by the easy questions. Overall, I got 680 on the Knewton diagnostic (Q44 V40), while on the real GMAT I had 660 (Q47 V34). Since I scored V40 here but only V34 on the real test, I assume Knewton's verbal section runs easy. So I will have to study verbal on my own besides doing the Knewton course; any suggestions? I already have all the Manhattan GMAT books. P.S. I will keep updating my progress as I take the course through the spring semester.

**Manager, 01 Apr 2010:**

Signed up today; will definitely update the group on progress. Also, if anyone wants to take this course, I can refer you and save us both an extra $100.
**Intern, 08 May 2010:**

Hi, if anyone wants to partner up with the referral, email me: ddanht @ yahoo dot com. Thanks, ~dan

**Intern, 28 May 2010:**
I thought I'd add my impressions of the Knewton course up to this point. I've attended 4 sessions so far, and I have mixed feelings right now. First, a little background on me: I'm hoping to take the GMAT for the first time this fall, and I'm interested in a master's in accounting rather than an MBA. I'm an older student, returning to school after about 15 years. I've been taking grad school prerequisites for the past 18 months, so while I'd been away from an academic environment for quite a while, I've been back in the saddle for a little bit. I have done only limited GMAT prep so far. I started studying last summer, and it became clear I needed more review, particularly for math. I did some algebra review last summer and took stats this fall. From the limited GMAT prep I've done so far, I need the most work in math (particularly DS) and RC for verbal. I am planning to do additional self-study after the Knewton course is over, using the OG and MGMAT books. I scored 570 on the Knewton diagnostic CAT, and I'm shooting for 700, so clearly I have my work cut out for me. I chose Knewton because the PR classroom course I originally signed up for was canceled, and I needed an option I could fit around my work and school schedules.
Here's what I think of Knewton so far...
The good:
- The instructors are very good. Our verbal instructor is especially good; funny and very engaging. Our math instructor is good, although we had a substitute the very first class, which kind of threw things off.
- Having access to a TA is very helpful. I can ask questions privately and not hold up the class. Unfortunately, while I'm interacting with the TA, the instructor is continuing on with the rest of the class, so it's easy to get behind and miss things.
The bad:
- You don't have access to any of the pre-tests, concept queue, homework problems, etc. until you either take the diagnostic test or enter a baseline score from the official GMAT. I registered for the course less than a week before the first class, and I didn't have a block of time to take the diagnostic before it started. I was completely lost at the first class, and I'm still trying to catch up.
- It's difficult to get individual help. The TA has been great, but it's not like a classroom setting where I could ask repeated questions and not get behind, or catch the teacher after class for extra help. I have asked repeatedly for help with DS questions, because they're a particular weakness of mine, and the only answer I've received is to explain (again) the format of DS. I get the format, and that's not the problem.
- The user interface is decent, but parts of it are confusing. I still don't understand exactly what I'm supposed to do with the concept queue, and when I emailed to ask, they didn't really give me a clear answer, other than that the queue is constantly updating. I get that, but what do I do with it in order to learn?
- There is no printed material to accompany the course. They encourage you to buy the OG, which I already did. But I can't work on Knewton homework problems on my lunch hour, for example, without having access to a computer.
I'll try to update as I go through more sessions, because I'm guessing my opinion may change once I'm able to spend more time using the site's features.
**Knewton GMAT Representative, 28 May 2010:**
Thanks for sharing your thoughts on our course. Glad you like the teachers; they are really fantastic.
I can answer your question about the concept queue. Our academics team has combed the test hundreds of times in order to isolate the 1000+ atomic concepts that the GMAT tests. As you go through your sessions in the Knewton system, the queue is able to pull the concepts you need to focus on the most -- based on your performance.
Please let me know if you need anything else, including help on DS.
Cheers,
Josh
-- Josh Anish, Senior Editor, Knewton, Inc.

**Intern, 08 Jul 2010:**

Recently took the Knewton diagnostic GMAT CAT and scored 680 (Q45 V39), although my quant score is understated, as I got only three questions wrong. Researched a few reviews and joined the Knewton course (the 50-point-increase money-back guarantee was very impressive). My official GMAT is in August, so I'm planning to finish the Knewton course by the end of this month. Will keep posting.

**Founder, 09 Jul 2010:**

Cool! Congrats. Don't forget about the free offer to get the GMAT Club Tests if you signed up with our code.
**Intern, 10 Jul 2010:**

If anyone needs a referral, send me an email at alla@xytopok.net.

**Manager, 14 Jul 2010:**

Please use the below link to get $100 off the Knewton GMAT course:
http://knewton.tellapal.com/a/clk/nbCzG
**Intern, 01 Sep 2010:**
Hello all: I have been reading different blogs trying to decide if I should take the Knewton GMAT prep course. I took the GMAT last year and scored terribly, and this time around I would prefer to exhaust every option in preparing before taking the test again. I would really appreciate it if someone (no Knewton employees!) could kindly share their experience with the Knewton GMAT prep course. Any suggestion would help. Thanks
**Senior Manager, 01 Sep 2010:**
Knewton is good for you if you are scoring less than 650 now. If you are planning to score 700+ then you should not rely only on Knewton material, though Knewton will make sure your base is strong. The Knewton teachers are great, and the online course is very good and adaptive; there is no doubt about it. Money-wise it is the best deal in the market right now. The GMAT tests are very similar to the real test, but the math scoring is very weird and low-scoring. More or less, Knewton is good. In my opinion:

- Value for money: 5/5
- Course: 5/5
- GMAT CAT: 3.5/5
- Difficulty level: 4/5

Cheers, SD
**Intern, 02 Sep 2010:**
Thanks SD, I really appreciate sharing your thoughts.
**Intern, 02 Sep 2010:**
if anyone needs a referral - you can email me at rk0091@hotmail.com
**Senior Manager, 06 Sep 2010:**
Hi, I have also joined Knewton, and I find the course really good. The class teachers are very nice and helpful in every way. There are pre-assessments before the class and assessment tests after the class, with concept videos and concept queues to explain the parts you got wrong. There is an on-demand video archive, and we can change the class according to our timings and needs: if I want to attend any class again, I can join another batch with the click of a button. There are separate PDFs for class reviews, so whatever has been taught in class one can revise from the PDF. Support is also provided for any queries or doubts. Overall, I find Knewton interesting and good enough for preparation; the 50-point guarantee is great for attracting people. Thanks, sandy
**Director, 06 Sep 2010:**
I finished the Knewton course. For the price I think it is a great course. It is significantly cheaper than the other courses out there (with discounts). Their problems are very similar to OG, but the explanations are way better. The classes were great. I was very impressed with the instructors and they kept me engaged and focused the entire time. I am still a little skeptical about their verbal algorithm on the CAT scoring though.
**Manager, 15 Sep 2010:**
I finished reading the whole Knewton section here. I just wonder, after all these complaints about its CAT exams, why they do not try to fix and change them. My skeptical mind keeps poking me that maybe it is a marketing strategy: they keep you down and reduce your expectations before the real GMAT, and then when you get a good score on the GMAT you are grateful to Knewton. I really want to try Knewton, but I can't trust them just because of this behavior. So confused; tomorrow I will read the Manhattan GMAT section and decide.
**Manager, 15 Oct 2010:**
fatihaysu wrote: (quoting the post above)
I've completed my Knewton classroom course and am about to take my last CAT. My test scores vary a lot and I am pretty nervous about it! Hoping that I'll get better by the time of the test.
Knewton1 - Q40/V37 - 620
Knewton2 - Q49/V31 - 670
Knewton3 - Q45/V31 - 620 again
Knewton4 - Q45/V38 - 680
In my first, zero-preparation GMATPrep I got 650 (Q44/V36), so it appears I am making no progress.

Anyway, I am thoroughly impressed by the Knewton instructors and their course format. There are a lot of practice problems in the form of pretests, retests, homework, and additional practice. But I found the CATs to be a little weird: they are not too high on difficulty, but the scoring is very tough. I missed 2 questions in quant once and got 49; I missed 4 questions the other day and got 45. I found their verbal materials pretty impressive, and Knewton's strategy for RC is very good too. All in all, I am pretty impressed, but I am a little perturbed to read comments that Knewton's materials aren't sufficient for anyone targeting 700+. Is that really true? With less than 50 days to go to my G-Day, I would like to switch gears and move on to tougher sources if it is so. P.S. If anyone wants a referral/discount, please PM me.
http://en.wikipedia.org/wiki/Term_symbol

# Term symbol
In quantum mechanics, the Russell–Saunders [1] term symbol is an abbreviated description of the angular momentum quantum numbers in a multi-electron atom. Each energy level of a given electron configuration is described by its own term symbol, assuming LS coupling. The ground state term symbol is predicted by Hund's rules. Tables of atomic energy levels identified by their term symbols have been compiled by NIST.[2]
## Symbol
The term symbol has the form
$^{2S+1}L_J$
where
S is the total spin quantum number. 2S + 1 is the spin multiplicity: the maximum number of different possible states of J for a given (L, S) combination.
J is the total angular momentum quantum number.
L is the total orbital quantum number in spectroscopic notation. The first 17 symbols of L are:
| L | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | ... |
|---|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|----|-----|
| Symbol | S | P | D | F | G | H | I | K | L | M | N | O | Q | R | T | U | V | (continued alphabetically)[note 1] |
The nomenclature (S, P, D, F) is derived from the characteristics of the spectroscopic lines corresponding to (s, p, d, f) orbitals: sharp, principal, diffuse, and fundamental; the rest are named in alphabetical order. When used to describe electron states in an atom, the term symbol usually follows the electron configuration. For example, one low-lying energy level of the carbon atom is written as 1s22s22p2 3P2. The superscript 3 indicates that the spin state is a triplet, and therefore S = 1 (2S + 1 = 3), the P is spectroscopic notation for L = 1, and the subscript 2 is the value of J. Using the same notation, the ground state of carbon is 1s22s22p2 3P0.[2]
## Others
The term symbol is also used to describe compound systems such as mesons or atomic nuclei, or even molecules (see molecular term symbol). In that last case, Greek letters are used to designate the (molecular) orbital angular momenta.
For a given electron configuration
• The combination of an S value and an L value is called a term, and has a statistical weight (i.e., number of possible microstates) of (2S+1)(2L+1);
• A combination of S, L and J is called a level. A given level has a statistical weight of (2J+1), which is the number of possible microstates associated with this level in the corresponding term;
• A combination of L, S, J and MJ determines a single state.
As an example, for S = 1, L = 2, there are (2×1+1)(2×2+1) = 15 different microstates corresponding to the 3D term, of which (2×3+1) = 7 belong to the 3D3 (J = 3) level. The sum of (2J+1) for all levels in the same term equals (2S+1)(2L+1). In this case, J can be 1, 2, or 3, so 3 + 5 + 7 = 15.
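The counting in this example is mechanical enough to script. A small sketch (the function name is ours, not a standard API) that maps each allowed J of a term to its statistical weight:

```python
def levels(S, L):
    """Map each allowed J of the term (S, L) to its statistical weight 2J+1.

    J runs in integer steps from |L - S| up to L + S.
    """
    out = {}
    J = abs(L - S)
    while J <= L + S:
        out[J] = int(2 * J + 1)
        J += 1
    return out

w = levels(1, 2)          # the 3D term from the text
print(w)                  # {1: 3, 2: 5, 3: 7}
print(sum(w.values()))    # 15 == (2S+1)(2L+1)
```

Passing half-integer spins as `fractions.Fraction` values works too, since the loop only adds 1 at each step.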
## Term symbol parity
The parity of a term symbol is calculated as
$P=(-1)^{\sum_i l_i}$
where li is the orbital quantum number for each electron. In fact, only electrons in odd orbitals contribute to the total parity: an odd number of electrons in odd orbitals (those with an odd l such as in p, f,...) will make an odd term symbol, while an even number of electrons in odd orbitals will make an even term symbol, irrespective of the number of electrons in even orbitals.
When it is odd, the parity of the term symbol is indicated by a superscript letter "o", otherwise it is omitted: ²Pᵒ½ has odd parity, but ³P₀ has even parity.
Alternatively, parity may be indicated with a subscript letter "g" or "u", standing for gerade (German for "even") or ungerade ("odd"): ²P½,u for odd parity, and ³P₀,g for even.
## Ground state term symbol
It is relatively easy to calculate the term symbol for the ground state of an atom using Hund's rules. It corresponds to the state with maximum S and L.
1. Start with the most stable electron configuration. Full shells and subshells do not contribute to the overall angular momentum, so they are discarded.
• If all shells and subshells are full then the term symbol is ¹S₀.
2. Distribute the electrons in the available orbitals, following the Pauli exclusion principle. First, fill the orbitals with highest ml value with one electron each, and assign a maximal ms to them (i.e. +½). Once all orbitals in a subshell have one electron, add a second one (following the same order), assigning ms = −½ to them.
3. The overall S is calculated by adding the ms values for each electron. That is the same as multiplying ½ times the number of unpaired electrons. The overall L is calculated by adding the ml values for each electron (so if there are two electrons in the same orbital, add twice that orbital's ml).
4. Calculate J as
• if less than half of the subshell is occupied, take the minimum value J = |L − S|;
• if more than half-filled, take the maximum value J = L + S;
• if the subshell is half-filled, then L will be 0, so J = S.
As an example, in the case of fluorine, the electronic configuration is 1s²2s²2p⁵.
1. Discard the full subshells and keep the 2p⁵ part. So there are five electrons to place in subshell p (l = 1).
2. There are three orbitals (ml = 1, 0, −1) that can hold up to 2(2l + 1) = 6 electrons. The first three electrons can take ms = ½ (↑) but the Pauli exclusion principle forces the next two to have ms = −½ (↓) because they go to already occupied orbitals.
| ml | +1 | 0 | −1 |
| --- | --- | --- | --- |
| ms | ↑↓ | ↑↓ | ↑ |
3. S = ½ + ½ + ½ − ½ − ½ = ½; and L = 1 + 0 − 1 + 1 + 0 = 1, which is "P" in spectroscopic notation.
4. As the fluorine 2p subshell is more than half filled, J = L + S = 3/2. Its ground state term symbol is then $^{2S+1}L_J = {}^{2}P_{3/2}$.
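The four steps above can be sketched in code for a single partially filled subshell (a simplification: configurations with several open subshells need more care, and `ground_term` is an invented helper name):

```python
from fractions import Fraction

def ground_term(l, n_electrons):
    """Hund's-rules sketch for one partially filled subshell:
    return (S, L, J) for the ground state."""
    orbitals = list(range(l, -l - 1, -1))    # ml from +l down to -l
    n_orb = len(orbitals)                    # = 2l + 1
    up = orbitals[:min(n_electrons, n_orb)]  # fill spin-up first (max S)
    down = orbitals[:max(0, n_electrons - n_orb)]
    S = Fraction(len(up) - len(down), 2)     # half the unpaired count
    L = abs(sum(up) + sum(down))             # |sum of ml values|
    if n_electrons < n_orb:      # less than half filled
        J = abs(L - S)
    elif n_electrons > n_orb:    # more than half filled
        J = L + S
    else:                        # exactly half filled (L is 0 here)
        J = S
    return S, L, J

print(ground_term(1, 5))  # fluorine 2p^5: S=1/2, L=1, J=3/2, i.e. 2P_{3/2}
print(ground_term(1, 2))  # carbon 2p^2:   S=1,   L=1, J=0,   i.e. 3P_0
```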
## Term symbols for an electron configuration
The process to calculate all possible term symbols for a given electron configuration is a bit longer.
• First, calculate the total number of possible microstates N for a given electron configuration. As before, we discard the filled (sub)shells, and keep only the partially filled ones. For a given orbital quantum number l, t is the maximum allowed number of electrons, t = 2(2l+1). If there are e electrons in a given subshell, the number of possible microstates is
$N= {t \choose e} = {t! \over {e!\,(t-e)!}}.$
As an example, let's take the carbon electron structure: 1s²2s²2p². After removing full subshells, there are 2 electrons in a p-level (l = 1), so we have
$N = {6! \over {2!\,4!}}=15$
different microstates.
• Second, draw all possible microstates. Calculate ML and MS for each microstate, with $M=\sum_{i=1}^e m_i$ where mi is either ml or ms for the i-th electron, and M represents the resulting ML or MS respectively:
|  | ml = +1 | ml = 0 | ml = −1 | ML | MS |
| --- | --- | --- | --- | --- | --- |
| all up | ↑ | ↑ |  | 1 | 1 |
|  | ↑ |  | ↑ | 0 | 1 |
|  |  | ↑ | ↑ | −1 | 1 |
| all down | ↓ | ↓ |  | 1 | −1 |
|  | ↓ |  | ↓ | 0 | −1 |
|  |  | ↓ | ↓ | −1 | −1 |
| one up, one down | ↑↓ |  |  | 2 | 0 |
|  | ↑ | ↓ |  | 1 | 0 |
|  | ↑ |  | ↓ | 0 | 0 |
|  | ↓ | ↑ |  | 1 | 0 |
|  |  | ↑↓ |  | 0 | 0 |
|  |  | ↑ | ↓ | −1 | 0 |
|  | ↓ |  | ↑ | 0 | 0 |
|  |  | ↓ | ↑ | −1 | 0 |
|  |  |  | ↑↓ | −2 | 0 |
• Third, count the number of microstates for each possible (ML, MS) combination:

| ML \ MS | +1 | 0 | −1 |
| --- | --- | --- | --- |
| +2 |  | 1 |  |
| +1 | 1 | 2 | 1 |
| 0 | 1 | 3 | 1 |
| −1 | 1 | 2 | 1 |
| −2 |  | 1 |  |
• Fourth, extract smaller tables representing each possible term. Each table will have the size (2L+1) by (2S+1), and will contain only "1"s as entries. The first table extracted corresponds to ML ranging from −2 to +2 (so L = 2), with a single value for MS (implying S = 0). This corresponds to a ¹D term. The remaining table is 3×3. Then we extract a second table, removing the entries for ML and MS both ranging from −1 to +1 (and so S = L = 1, a ³P term). The remaining table is a 1×1 table, with L = S = 0, i.e., a ¹S term.
¹D (L = 2, S = 0):

| ML \ MS | 0 |
| --- | --- |
| +2 | 1 |
| +1 | 1 |
| 0 | 1 |
| −1 | 1 |
| −2 | 1 |

³P (L = 1, S = 1):

| ML \ MS | +1 | 0 | −1 |
| --- | --- | --- | --- |
| +1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 1 |
| −1 | 1 | 1 | 1 |

¹S (L = 0, S = 0):

| ML \ MS | 0 |
| --- | --- |
| 0 | 1 |
• Fifth, applying Hund's rules, deduce which is the ground state (or the lowest state for the configuration of interest). Hund's rules should not be used to predict the order of states other than the lowest for a given configuration. (See examples at Hund's rules#Excited states.)
• If only two equivalent electrons are involved, there is an "Even Rule" which states
For two equivalent electrons the only states that are allowed are those for which the sum (L + S) is even.
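The whole procedure (count, enumerate, tally, extract) can be automated. A minimal Python sketch for the carbon 2p² example, using a simple "peel off the largest remaining (ML, MS)" strategy (an implementation choice, not spelled out in the article):

```python
from itertools import combinations
from collections import Counter
from math import comb

l, n = 1, 2                      # carbon: two electrons in a p subshell
# A spin-orbital is (ml, 2*ms); doubling ms keeps everything an integer.
spin_orbitals = [(ml, ms2) for ml in range(-l, l + 1) for ms2 in (1, -1)]
micro = list(combinations(spin_orbitals, n))   # Pauli: distinct orbitals
print(len(micro) == comb(2 * (2 * l + 1), n))  # True: 15 microstates

tally = Counter()
for state in micro:
    ML = sum(ml for ml, _ in state)
    MS2 = sum(ms2 for _, ms2 in state)
    tally[(ML, MS2)] += 1

terms = []                       # peel tables off, largest L first
while tally:
    Lmax = max(ML for ML, _ in tally)
    S2max = max(MS2 for ML, MS2 in tally if ML == Lmax)
    for ML in range(-Lmax, Lmax + 1):          # remove one (2L+1)x(2S+1)
        for MS2 in range(-S2max, S2max + 1, 2):  # table of "1"s
            tally[(ML, MS2)] -= 1
    tally = +tally               # drop exhausted entries
    terms.append("%d%s" % (S2max + 1, "SPDFGH"[Lmax]))
print(terms)   # ['1D', '3P', '1S']
```

This reproduces the 15 microstates and the three terms found by hand above.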
### Case of three equivalent electrons
• For three equivalent electrons (with the same orbital quantum number l), there is also a general formula (denoted by X(L,S,l) below) to count the number of allowed terms with total orbital quantum number L and total spin quantum number S.
$X(L,S,l)= \begin{cases} L-\left\lfloor\dfrac{L}{3}\right\rfloor, & \text{if } S=\tfrac{1}{2} \text{ and } 0\leq L\,\ldots \\ \;\;\vdots \end{cases}$
where the floor function $\lfloor x \rfloor$ denotes the greatest integer not exceeding x.
The detailed proof can be found in Renjun Xu's original paper.[3]
• For a general electronic configuration of lk, namely k equivalent electrons occupying one subshell, the general treatment and computer code can also be found in this paper.[3]
### Alternative method using group theory
For configurations with at most two electrons (or holes) per subshell, an alternative and much quicker method of arriving at the same result can be obtained from group theory. The configuration 2p² has the symmetry of the following direct product in the full rotation group:
Γ(1) × Γ(1) = Γ(0) + [Γ(1)] + Γ(2),
which, using the familiar labels Γ(0) = S, Γ(1) = P and Γ(2) = D, can be written as
P × P = S + [P] + D.
The square brackets enclose the anti-symmetric square. Hence the 2p² configuration has components with the following symmetries:
S + D (from the symmetric square and hence having symmetric spatial wavefunctions);
P (from the anti-symmetric square and hence having an anti-symmetric spatial wavefunction).
The Pauli principle and the requirement for electrons to be described by anti-symmetric wavefunctions imply that only the following combinations of spatial and spin symmetry are allowed:
¹S + ¹D (spatially symmetric, spin anti-symmetric)
³P (spatially anti-symmetric, spin symmetric).
Then one can move to step five in the procedure above, applying Hund's rules.
The group theory method can be carried out for other such configurations, like 3d², using the general formula
Γ(j) × Γ(j) = Γ(2j) + Γ(2j−2) + ... + Γ(0) + [Γ(2j−1) + ... + Γ(1)].
The symmetric square will give rise to singlets (such as ¹S, ¹D, and ¹G), while the anti-symmetric square gives rise to triplets (such as ³P and ³F).
More generally, one can use
Γ(j) × Γ(k) = Γ(j+k) + Γ(j+k−1) + ... + Γ(|j−k|)
where, since the product is not a square, it is not split into symmetric and anti-symmetric parts. Where two electrons come from inequivalent orbitals, both a singlet and a triplet are allowed in each case.[4]
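The two product formulas can be written down directly; a sketch (not from the source) in which each Γ(j) is represented by its integer label:

```python
def product(j, k):
    """Clebsch-Gordan series: labels in Γ(j) x Γ(k) = Γ(j+k) + ... + Γ(|j-k|)."""
    return list(range(j + k, abs(j - k) - 1, -1))

def square(j):
    """Split Γ(j) x Γ(j) into (symmetric, antisymmetric) label lists."""
    symmetric = list(range(2 * j, -1, -2))         # Γ(2j), Γ(2j-2), ..., Γ(0)
    antisymmetric = list(range(2 * j - 1, 0, -2))  # Γ(2j-1), ..., Γ(1)
    return symmetric, antisymmetric

print(square(1))      # ([2, 0], [1]): P x P = S + [P] + D
print(square(2))      # ([4, 2, 0], [3, 1]): 3d^2 singlets G, D, S; triplets F, P
print(product(1, 2))  # [3, 2, 1]: P x D = F + D + P
```

As a sanity check, the dimensions (2j+1) of the pieces of a square always add up to (2j+1)² for the full product.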
1. ^ There is no official convention for naming angular momentum values greater than 20 (symbol Z). Many authors begin using Greek letters at this point ($\alpha, \beta, \gamma,$ ...). The occasions for which such notation is necessary are few and far between, however.
http://mathhelpforum.com/calculus/200035-find-dirivitive-t-t-print.html

# find derivative of T(t)
• June 14th 2012, 07:56 PM
icelated
find derivative of T(t)
Let $\vec T(t) = \frac{1}{\sqrt{10 + 4t^2}} \left( 3 \hat i - \hat j + 2t \hat k \right)$. Find $\frac{d \vec T}{dt}$.
I am not sure how to do this.
• June 15th 2012, 12:28 AM
richard1234
Re: find derivative of T(t)
Note that
$\frac{d}{dt} \frac{1}{\sqrt{10 + 4t^2}} = -4t (10+4t^2)^{-\frac{3}{2}}$, and
$\frac{d}{dt} \frac{2t}{\sqrt{10 + 4t^2}} = \frac{2 \sqrt{10 + 4t^2} - 8t^2(10 + 4t^2)^{-\frac{1}{2}}}{10 + 4t^2} = \frac{20}{(10 + 4t^2)^{\frac{3}{2}}}$ (applying the chain and quotient rules)
$\frac{d \vec{T}}{dt}$ is computed by taking the derivatives of each of vector T's components, i.e.
$\frac{d \vec{T}}{dt} = \frac{d}{dt} \frac{3}{\sqrt{10 + 4t^2}}\vec{i} - \frac{d}{dt} \frac{1}{\sqrt{10 + 4t^2}} \vec{j} + \frac{d}{dt} \frac{2t}{\sqrt{10 + 4t^2}} \vec{k}$
Use the given derivatives and substitute.
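As a quick numerical sanity check of the result (not part of the original thread; the closed forms below come from simplifying the component derivatives over the common denominator):

```python
# T(t) = (10 + 4t^2)^(-1/2) * (3, -1, 2t); dT/dt componentwise, each
# derivative simplified over the common denominator (10 + 4t^2)^(3/2).
def T(t):
    n = (10 + 4 * t * t) ** 0.5
    return (3 / n, -1 / n, 2 * t / n)

def dT(t):
    m = (10 + 4 * t * t) ** 1.5
    return (-12 * t / m, 4 * t / m, 20 / m)

# Compare against a central difference at a sample point.
t0, h = 1.3, 1e-6
numeric = [(a - b) / (2 * h) for a, b in zip(T(t0 + h), T(t0 - h))]
print(all(abs(x - y) < 1e-8 for x, y in zip(numeric, dT(t0))))  # True
```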
http://starlink.eao.hawaii.edu/docs/sun104.htx/sun104ss4.html

### MSG_FMTx
Assign a value to a message token (formatted)
#### Description:
A given value is encoded using the supplied Fortran 77 format field and the result assigned to the named message token. If the token is already defined, the result is appended to the existing token value. The given value may be one of the following Fortran 77 data types and there is one routine provided for each data type:
| Subroutine | Fortran 77 Type |
| --- | --- |
| MSG_FMTD | DOUBLE PRECISION |
| MSG_FMTR | REAL |
| MSG_FMTI | INTEGER |
| MSG_FMTL | LOGICAL |
| MSG_FMTC | CHARACTER |
If these subroutines fail, it will usually be apparent in any messages which refer to this token.
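The append-if-defined behaviour can be illustrated with a tiny Python analogue (purely illustrative: the real routines are Fortran 77, and `msg_fmt`/`tokens` here are invented names, with Python format specs standing in for Fortran FORMAT fields):

```python
# Token table sketch: encoding a value appends to any existing token
# value, mirroring the MSG_FMTx description above.
tokens = {}

def msg_fmt(token, fmt, value):
    tokens[token] = tokens.get(token, "") + format(value, fmt)

msg_fmt("TEMP", ".2f", 36.6)   # first call defines the token
msg_fmt("TEMP", "03d", 7)      # second call appends to it
print(tokens["TEMP"])          # 36.60007
```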
#### Invocation
CALL MSG_FMTx( TOKEN, FORMAT, VALUE )
#### Arguments
##### TOKEN = CHARACTER * ( * ) (Given)
The message token name.
##### FORMAT = CHARACTER * ( * ) (Given)
The Fortran 77 FORMAT field used to encode the supplied value.
##### VALUE = Fortran 77 type (Given)
The value to be assigned to the message token.
#### System-specific
The precise effect of failures will depend upon the computer system being used.
https://mathoverflow.net/questions/312399/is-the-stack-of-stable-curves-with-no-rational-component-algebraic

# Is the stack of stable curves with no rational component algebraic?
Let $$g\geq 2$$ be an integer and let $$\overline{\mathcal{M}}_g$$ be the (smooth proper Deligne-Mumford) algebraic stack of stable curves of genus $$g$$.
Let $$\mathcal{M}_g^{nr}$$ be the substack of stable curves $$C\to S$$ such that, for every geometric point $$\overline{s}$$ of $$S$$, the fibres $$C_{\overline{s}}$$ does not admit a morphism from $$\mathbb{P}^1_{\overline{s}}$$. (In other words, none of the irreducible components of $$C_{\overline{s}}$$ are rational.)
Is $$\mathcal{M}_g^{nr}$$ an algebraic substack of $$\overline{\mathcal{M}}_g$$?
I feel like $$\mathcal{M}_g^{nr}$$ is the complement of a some boundary divisor, but I'm not sure how to make this precise.
• Welcome new contributor. For every irreducible component of the boundary divisor of $\overline{\mathcal{M}}_g$, the generic point of that component parameterizes a stable curve for which each irreducible component has genus $\geq 1$. Thus, the open subset $\mathcal{M}_g^{\text{nr}}$ is not equal to the complement of a boundary divisor. – Jason Starr Oct 9 '18 at 16:52
I am adding a few details to my comment above. For notational convenience, in addition to the usual notation $$\overline{\mathcal{M}}_{g,n}$$ for genus-$$g$$, $$n$$-pointed curves, also for every finite set $$N$$ with $$n$$ elements, use the notation $$\overline{\mathcal{M}}_{g,N}$$ when the $$n$$ marked sections are explicitly indexed by $$N$$. Thus, there are boundary morphisms, $$\Delta_{(g',N'),(g'',N'')}:\overline{\mathcal{M}}_{g',N'\sqcup\{\ell'\}}\times \overline{\mathcal{M}}_{g'',N''\sqcup\{\ell''\}} \to \overline{\mathcal{M}}_{g'+g'',N'\sqcup N''},$$ $$\Delta_{g,N}:\overline{\mathcal{M}}_{g,N\sqcup\{\ell',\ell''\}}\to \overline{\mathcal{M}}_{g+1,N},$$ defined in the usual way.
Definition. An assignment to every integer $$g$$ and finite set $$N$$ satisfying $$2g-2+|N|>0$$ of a reduced substack $$Z_{g,N}\subset \overline{\mathcal{M}}_{g,N}$$ is $$\Delta$$-compatible if it is compatible with permutations of $$N$$, and for every boundary morphism $$\Delta_{(g',N'),(g'',N'')}$$, resp. $$\Delta_{g,N}$$, the inverse image of $$Z_{g'+g'',N'\sqcup N''}$$, resp. of $$Z_{g+1,N}$$, equals the union of the pullbacks of $$Z_{g',N'\sqcup\{\ell'\}}$$ and $$Z_{g'',N''\sqcup\{\ell''\}}$$, resp. the pullback of $$Z_{g,N\sqcup\{\ell',\ell''\}}$$. It is combinatorial if for every $$(g,N)$$ such that $$Z_{g,N}$$ does not equal $$\overline{\mathcal{M}}_{g,N}$$, then $$Z_{g,N}$$ is contained in the boundary divisor of $$\overline{\mathcal{M}}_{g,N}$$.
Proposition. Every combinatorial, $$\Delta$$-compatible assignment $$(Z_{g,N})_{g,N}$$ is a system of closed substacks $$Z_{g,N}$$ of $$\overline{\mathcal{M}}_{g,N}$$.
Proof. By hypothesis, for every $$(g,N)$$ such that the boundary of $$\overline{\mathcal{M}}_{g,N}$$ is empty, then $$Z_{g,N}$$ is either empty or else equal to all of $$\overline{\mathcal{M}}_{g,N}$$, both of which are closed substacks of $$\overline{\mathcal{M}}_{g,N}$$. Thus, by way of induction (on the integer $$2g-2+|N|$$, for example), assume that the boundary of $$\overline{\mathcal{M}}_{g,N}$$ is nonempty.
If $$Z_{g,N}$$ equals all of $$\overline{\mathcal{M}}_{g,N}$$, then it is a closed substack of $$\overline{\mathcal{M}}_{g,N}$$. Thus, assume that it does not equal all of $$\overline{\mathcal{M}}_{g,N}$$. Since the system is combinatorial, the substack $$Z_{g,N}$$ is contained in the boundary. Since the boundary is a union of finitely many irreducible closed substacks of $$\overline{\mathcal{M}}_{g,N}$$, it suffices to check that the intersection with each of these irreducible closed substacks is closed.
Each of these irreducible closed substacks is the image of a boundary morphism. Since the system is $$\Delta$$-compatible, the inverse image of $$Z_{g,N}$$ under the boundary morphism is obtained from $$Z_{h,P}$$ with $$2h-2+|P| < 2g-2+|N|$$. Thus, by the induction hypothesis, the inverse image of $$Z_{g,N}$$ under each boundary morphism is a closed substack of the domain of the boundary morphism. Since the boundary morphism is finite, hence proper, the image of this closed substack of the domain is also a closed substack of $$\overline{\mathcal{M}}_{g,N}$$. Thus, $$Z_{g,N}$$ is a closed substack of $$\overline{\mathcal{M}}_{g,N}$$. By induction on $$2g-2+|N|$$, every $$Z_{g,N}$$ is a closed substack of $$\overline{\mathcal{M}}_{g,N}$$. QED
Notation. For every nonnegative integer $$h$$, denote by $$Z^{h}_{g,N} \subset \overline{\mathcal{M}}_{g,N}$$ the reduced substack parameterizing those genus-$$g$$, $$N$$-marked curves such that there exists a subcurve that is the image of a nonconstant morphism from a proper, connected, reduced, at-worst-nodal curve of arithmetic genus $$\leq h$$.
Corollary. The system $$(Z^{h}_{g,N})$$ is $$\Delta$$-compatible and combinatorial. Thus, it is a system of closed substacks.
Proof. For a curve that is a cofiber coproduct of connected subcurves, the coproduct curve contains an irreducible component of geometric genus $$\leq h$$ if and only if one of the connected subcurves contains an irreducible component of geometric genus $$\leq h$$. Similarly, for a curve with a non-disconnecting node, the normalizations of the irreducible components all factor through the partial normalization of the non-disconnecting node. Thus, the nodal curve has an irreducible component of geometric genus $$\leq h$$ if and only if the partial normalization has an irreducible component of geometric genus $$\leq h$$. Thus, the system is $$\Delta$$-compatible.
If $$g\leq h$$, then $$Z^h_{g,N}$$ equals all of $$\overline{\mathcal{M}}_{g,N}$$. If $$h < g$$, then every curve parameterized by the complement of the boundary is irreducible of genus $$g>h$$. Thus, there is no irreducible component of this curve that has geometric genus $$\leq h$$. So, in this case, $$Z^h_{g,N}$$ is contained in the boundary divisor. In all cases, the substack $$Z^h_{g,N}$$ is combinatorial. QED
Finally, when $$h$$ equals $$0$$, the substack $$Z^0_{g,n}$$ is, by definition, the complement of the substack $$\mathcal{M}^{\text{nr}}_{g,n}$$. As the complement of a closed substack, the substack $$\mathcal{M}^{\text{nr}}_{g,n}$$ is an open substack. For every integer $$g>2$$, the closed substack $$Z^0_{g,0}$$ of $$\overline{\mathcal{M}}_{g,0}$$ contains no irreducible component of the boundary divisor of $$\overline{\mathcal{M}}_{g,0}$$.
https://highschooltestprep.com/ap/calculus-ab/derivatives-2/

# AP Calculus AB — Derivatives Part 2
This is Part 2 of our free AP Calculus AB unit test on derivatives. These questions cover differentiability, implicit differentiation, the first derivative test, higher order derivatives, and the second derivative test. Understanding how to analyze a function's behavior using its derivatives will be necessary to do well on the AP Calculus AB exam.
Question 1
Identify $\frac{dy}{dx}$ for the following using implicit differentiation:
$2y^2 + 2x^2 = 4$
A. $\dfrac{dy}{dx} = ±\sqrt{2 − x^2}$
B. $\dfrac{dy}{dx} = x \cdot (2 − x^2)^{−\frac{1}{2}}$
C. $\dfrac{dy}{dx} = −\dfrac{x}{y}$
D. $\text{DNE}$
Question 1 Explanation:
The correct answer is (C). This is where implicit techniques are necessary. Recall that when taking the derivative of any variable, $u$, with respect to a variable, $x$, the derivative is equal to the expression below:
$\dfrac{d(u)}{dx}$ $= (1)\dfrac{du}{dx}$
This is taken for granted when differentiating equations written explicitly in terms of $x$, because the derivative of $x$ is $(1)\frac{dx}{dx}$, or $1$. Implicit differentiation is simply an extra consideration of the chain rule.
Let’s apply this to the example. The derivative is taken in the normal way, just with the extra consideration for which variable is being derived. Be sure to take the derivative of both sides of the equation:
$2(2y)\left(\dfrac{dy}{dx}\right) +$ $2(2x)\left(\dfrac{dx}{dx}\right) = 0$
This simplifies to the form below. We must then isolate for $\frac{dy}{dx}$:
$\dfrac{dy}{dx}$ $= \dfrac{−4x}{4y}$ $= −\dfrac{x}{y}$
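As a quick numerical check of answer (C), outside the quiz: on the upper branch of the curve, $y = \sqrt{2 − x^2}$, and a central difference should match $−x/y$:

```python
# Verify dy/dx = -x/y numerically on the circle 2y^2 + 2x^2 = 4.
def y(x):
    return (2 - x * x) ** 0.5   # upper branch

x0, h = 0.6, 1e-6
slope = (y(x0 + h) - y(x0 - h)) / (2 * h)   # numerical dy/dx
print(abs(slope - (-x0 / y(x0))) < 1e-8)    # True
```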
Question 2
Identify $\frac{dy}{dt}$ for the following:
$2y = 3x^3 + \cos(x)$
A. $\dfrac{dy}{dt} = \left(\dfrac{9}{2}x^2 − \dfrac{\sin(x)}{2}\right)\left(\dfrac{dx}{dt}\right)$
B. $\dfrac{dy}{dt} = (9x^2 − \sin(x))\left(\dfrac{dx}{dt}\right)$
C. $\dfrac{dy}{dt} = 9x^2 − \sin(x)$
D. $\dfrac{dy}{dt} = \dfrac{9}{2}x^2 − \dfrac{\sin(x)}{2}$
Question 2 Explanation:
The correct answer is (A). In this problem, we differentiate not with respect to $x$ but with respect to $t$. Implicit differentiation allows the derivative to be taken with respect to any variable. Be sure to recall the chain rule and the trigonometric derivatives.
$2\left(\dfrac{dy}{dt}\right) =$ $9x^2 \left(\dfrac{dx}{dt}\right) − \sin(x) \cdot \dfrac{dx}{dt}$
Then, we must isolate $\frac{dy}{dt}$:
$\dfrac{dy}{dt} =$ $\left(\dfrac{9}{2}x^2 − \dfrac{\sin(x)}{2}\right)\left(\dfrac{dx}{dt}\right)$
Question 3
Identify the critical points of the following. (Only identify the $x$-values):
$f(x) = 3x^2 + 2x$
A. $x = 0$, $x = −\frac{2}{3}$
B. $x = 0$
C. $x = −\frac{1}{3}$
D. $x = 0$, $x = −\frac{1}{3}$, $x = −\frac{2}{3}$
Question 3 Explanation:
The correct answer is (C). We must first take the derivative of the function and set it equal to zero to identify the function's critical points.
$f'(x) = 6x + 2 = 0$
Then, solving for the roots of the derivative, we find that $x = −\frac{1}{3}$ is the only solution. This implies that this is the only critical point for the function.
Question 4
Identify the critical points of the following. (Only identify the $x$-values of the points):
$f(x) = \ln(x^2) + 3$
A. There are no critical points.
B. $x = 0$
C. $x = 0$, $x = \sqrt{e^{–3}}$
D. $x = ±\sqrt{e^{–3}}$
Question 4 Explanation:
The correct answer is (A). We must first take the derivative of the function and set it equal to zero to identify its critical points.
$f'(x) = \dfrac{1}{x^2} \cdot 2x$ $= \dfrac{2}{x} = 0$
Then, we can identify that there are no values of $x$ for which the derivative is equal to zero. Therefore, there are no critical points for the function.
Question 5
Identify all extrema for the function below. Note that relative extrema exclude any absolute extrema:
$f(x) = x^3 − 3x^2 + 2$
A. Absolute maximum: $(0, 0)$; Absolute minimum: $(0, 2)$; Relative maxima: None; Relative minima: None
B. Absolute maximum: $(0, 2)$; Absolute minimum: $(2, -2)$; Relative maxima: None; Relative minima: None
C. Absolute maximum: None; Absolute minimum: None; Relative maxima: $(0, 0)$; Relative minima: $(2, 0)$
D. Absolute maximum: None; Absolute minimum: None; Relative maxima: $(0, 2)$; Relative minima: $(2, -2)$
Question 5 Explanation:
The correct answer is (D). We must first take the derivative of the function and set it equal to zero to identify the critical points of the function.
$f'(x)$ $= 3x^2 − 6x = 0$
Then, by factoring, we can identify the roots of the equation.
$f'(x)$ $= 3x(x − 2) = 0$
$x = 0$, $\; x = 2$
Now that we know the critical points, we must identify what type of extrema, if any, results from the critical values. We can do this by using the first derivative test. There are many ways to identify how $f'(x)$ changes across a critical value, including sign patterns, graphical analysis, or even just inspection. We will use the graph to identify how $f'(x)$ changes about the critical points.
It is clear from the graph that $f'(x)$ changes from positive to negative about $x = 0$, and changes from negative to positive about $x = 2$. This implies that $x = 0$ is a relative maximum for $f(x)$ and $x = 2$ is a relative minimum for $f(x)$. We can then identify the function’s absolute maximum and minimum by identifying the values of the function at the ends of its domain, or its end behavior. We identify a functions end behavior using limits. We can evaluate these limits by direct substitution.
$\lim\limits_{x\to -\infty} f(x) = -\infty$
$\lim\limits_{x\to \infty} f(x) = \infty$
It is clear that $f(x)$ has no absolute maximum or minimum. Therefore, $f(x)$ only has the relative extrema resulting from the first derivative test. It is important not to forget to plug the critical values resulting in relative extrema back into the original function, as the question asks for the points of extrema, not just $x$-coordinates.
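A quick numerical check of this analysis, outside the quiz:

```python
# f(x) = x^3 - 3x^2 + 2: critical points, their values, and the sign
# changes of f' used in the first derivative test.
def f(x):
    return x ** 3 - 3 * x ** 2 + 2

def fp(x):
    return 3 * x ** 2 - 6 * x

print([fp(x) for x in (0, 2)])      # [0, 0]: both are critical points
print([(x, f(x)) for x in (0, 2)])  # [(0, 2), (2, -2)]
print(fp(-0.1) > 0 > fp(0.1))       # True: + to -, relative max at x = 0
print(fp(1.9) < 0 < fp(2.1))        # True: - to +, relative min at x = 2
```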
Question 6
Identify all extrema for the function below. Note that relative extrema exclude any absolute extrema.
$f(x) = \ln(x^2)$
A. Absolute maximum: None; Absolute minimum: None; Relative maxima: $(−\sqrt{e}, 0)$; Relative minima: $(\sqrt{e}, 0)$
B. Absolute maximum: $(−\sqrt{e}, 0)$; Absolute minimum: $(\sqrt{e}, 0)$; Relative maxima: None; Relative minima: None
C. Absolute maximum: None; Absolute minimum: None; Relative maxima: None; Relative minima: None
D. Absolute maximum: None; Absolute minimum: $(0,0)$; Relative maxima: $(−\sqrt{e}, 0)$; Relative minima: $(\sqrt{e}, 0)$
Question 6 Explanation:
The correct answer is (C). We must first identify the derivative of the function and set it equal to zero to identify the critical points of the function.
$f'(x) = \dfrac{1}{x^2} \cdot 2x$ $= \dfrac{2}{x} = 0$
Now, it is clear at this point that there are no values of $x$ that are critical points for the function. Therefore, no relative extrema exist. However, we must also identify whether there exist absolute extrema. We can identify absolute extrema through the function's end behavior. The limits below can be evaluated by direct substitution.
$\lim\limits_{x\to -\infty} f(x) = \infty$
$\lim\limits_{x\to \infty} f(x) = \infty$
However, it is also important to check for the values at discontinuities for the function. It is clear that $f(x)$ is not defined at $x = 0$, meaning that there exists a vertical asymptote at $x = 0$.
$\lim\limits_{x\to 0} f(x) = –\infty$
Therefore, there do not exist absolute extrema for the function either. This would also be clear if the graph of the function was analyzed.
Question 7
Let the graph below represent the first derivative of the function $f(x)$. Identify the $x$-values for all extrema of $f(x)$. Note that relative extrema exclude any absolute extrema.
A. Absolute maximum: None; Absolute minimum: None; Relative maxima: None; Relative minima: $x = 1.5$
B. Absolute maximum: None; Absolute minimum: $x = 1.5$; Relative maxima: None; Relative minima: None
C. Absolute maximum: None; Absolute minimum: None; Relative maxima: None; Relative minima: None
D. Absolute maximum: None; Absolute minimum: None; Relative maxima: $x = 0$; Relative minima: $x = 1.5$
Question 7 Explanation:
The correct answer is (B). We can use the given graph of the derivative of the function to identify critical points. It is clear that the derivative is equal to zero at $x = 0$ and $x = 1.5$. We can then apply the first derivative test to identify local extrema for $f(x)$. It is clear that $f'(x)$ changes from negative to positive about $x = 1.5$, but $f'(x)$ does not change signs about $x = 0$. Therefore, only $x = 1.5$ is a relative minimum for $f(x)$.
Now, identifying the absolute extrema for $f(x)$ is a bit trickier. It is very important to identify that the given derivative function is "cubic" in shape, so the derivative is a polynomial of odd degree. This then implies that its function, $f(x)$, must be of even degree. It is important to recall the general shape of higher-order polynomials. We then know that $f(x)$ has a "U" shape as it is a polynomial of even degree, which means that there does not exist an absolute maximum, but there does exist an absolute minimum, which is the vertex of the "U." This vertex is also a local minimum for the graph, and we have already identified that there exists a local minimum for the function at $x = 1.5$. Therefore, it is clear that the absolute minimum for the function is also at $x = 1.5$.
Question 8
Identify the 2nd derivative of the function below:
$f(x) = \sin^3(3x)$
A. $f''(x) = 54\sin(3x)$
B. $f''(x) = 9\sin^2(3x) \cos(3x)$
C. $f''(x) = 27\sin^3(3x) + 54\cos^2(3x) \sin(3x)$
D. $f''(x) = −27\sin^3(3x) + 54\cos^2(3x) \sin(3x)$
Question 8 Explanation:
The correct answer is (D). Be careful with chain rule and power rule.
$f'(x) =$ $3(\sin^2(3x)) \cdot (\cos(3x) \cdot 3)$ $= 9\sin^2(3x) \cdot \cos(3x)$
Don’t forget to use product rule for the second derivative.
$f''(x) =$ $9\sin^2(3x)(−3\sin(3x))$
$+ \cos(3x) (18\sin(3x) \cdot 3\cos (3x))$
$= −27\sin^3(3x) + 54\cos^2(3x) \sin(3x)$
Question 9
Identify the 5th derivative of the function below:
$f(x) = \ln(2x)$
A. $f^{(5)} = \dfrac{24}{x^5}$
B. $f^{(5)} = \dfrac{−24}{x^5}$
C. $f^{(5)} = \dfrac{6}{x^4}$
D. $f^{(5)} = \dfrac{–6}{x^4}$
Question 9 Explanation:
The correct answer is (A). Be careful with the power rule and recall that the derivative of the natural log function is:
$\frac{d}{dx}\ln(u) = \frac{1}{u} \cdot \frac{du}{dx}$
$f'(x) = \dfrac{1}{2x} \cdot 2$ $= \dfrac{1}{x}$
The following derivatives may be more clear if the first derivative is written $f'(x) = x^{−1}$.
$f''(x) = −\frac{1}{x^2}$
This idea of rewriting these rational functions with negative exponents can be used to help identify the remaining higher order derivatives.
$f'''(x) = \dfrac{2}{x^3}$
$f^{(4)}(x) = −\dfrac{6}{x^4}$
$f^{(5)}(x) = \dfrac{24}{x^5}$
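The rewriting-with-negative-exponents idea can be automated as a tiny sketch: each application of the power rule sends $c\,x^{-n}$ to $(-nc)\,x^{-(n+1)}$, so iterating from $f'(x) = x^{-1}$ reproduces every coefficient above.

```python
# Iterate the power rule on f'(x) = 1 * x**(-1):
#   d/dx [c * x**(-n)] = (-n * c) * x**(-(n+1))
c, n = 1, 1              # f'(x) = 1 / x
coeffs = {1: c}          # coefficient of x**(-order) for each derivative order
for order in range(2, 6):        # build f'', f''', f^(4), f^(5)
    c, n = -n * c, n + 1
    coeffs[order] = c

# matches f' = 1/x, f'' = -1/x^2, f''' = 2/x^3, f^(4) = -6/x^4, f^(5) = 24/x^5
assert coeffs == {1: 1, 2: -1, 3: 2, 4: -6, 5: 24}
```

The final coefficient 24 with a positive sign confirms answer (A).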
Question 10
Identify any points of inflection for the function below:
$f(x) = 2x^3 + 3x^2$
A $\text{No}$ $\text{points}$ $\text{of}$ $\text{inflection}$ B $(0, 0)$ $\text{and}$ $(–1, 1)$ C $\left(–\dfrac{1}{2}, \dfrac{1}{2}\right)$ D $\left(–\dfrac{1}{2}, 0\right)$
Question 10 Explanation:
The correct answer is (C). We must identify the second derivative for the function and set it equal to zero to identify points of inflection.
$f'(x) = 6x^2 + 6x$
$f''(x) = 12x + 6$ $=0$
Then, it is clear that the root of the second derivative is $x = −\frac{1}{2}$.
Therefore, $f(x)$ has a point of inflection about $x = −\frac{1}{2}$. Don’t forget to plug in this $x$-value back into the original function to identify the point of inflection, as the question does not just ask for the $x$-coordinate.
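A quick check of both the root and the $y$-coordinate (the sign change of $f''$ is what actually certifies the inflection point):

```python
def f(x):
    return 2 * x**3 + 3 * x**2

def fpp(x):
    return 12 * x + 6

root = -6 / 12                        # solve 12x + 6 = 0
assert root == -0.5
assert fpp(root) == 0.0
# f'' changes sign across the root, so concavity really flips there
assert fpp(root - 0.1) < 0 < fpp(root + 0.1)
# plug back into the original function for the y-coordinate
assert f(root) == 0.5                 # point of inflection: (-1/2, 1/2)
```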
Question 11
Identify the points of inflection for the function below:
$f(x) = e^{3x^2 + 2}$
A $(\frac{\sqrt{6}}{6}, e^{\frac{5}{2}})$ B $(−\frac{\sqrt{6}}{6}, e^{\frac{5}{2}})$ C $(\frac{\sqrt{6}}{6}, e^{\frac{5}{2}})$, $(−\frac{\sqrt{6}}{6}, e^{\frac{5}{2}})$ D $\text{No}$ $\text{points}$ $\text{of}$ $\text{inflection.}$
Question 11 Explanation:
The correct answer is (D). We must identify the second derivative for the function and set it equal to zero to identify points of inflection.
$f'(x) = (6x) \cdot e^{3x^2 + 2}$
$f''(x) =$ $6x(6x) \cdot e^{3x^2 + 2} + 6e^{3x^2 + 2}$
$= e^{3x^2 + 2}(36x^2 + 6) = 0$
It is clear at this point that there is no point of inflection for $f(x)$, as the only candidate roots would satisfy the equation below, which has no real solution because the square of a real number cannot be negative.
$x^2 = −\frac{1}{6}$
Similarly, $e^{3x^2 + 2} ≠ 0$ for all real values of $x$.
Therefore, there are no points of inflection.
Question 12
Identify where the function below is concave up:
$f(x) = 6x^3 + 3x^2 + 2x$
A $\text{The}$ $\text{function}$ $\text{is}$ $\text{not}$ $\text{concave}$ $\text{up}$ $\text{anywhere}$ $\text{on}$ $\text{its}$ $\text{domain.}$ B $(−\frac{1}{6}, \infty)$ C $[−\frac{1}{6}, \infty)$ D $(−\infty, \infty)$
Question 12 Explanation:
The correct answer is (B). We must first identify the second derivative and any points of inflection for the function before identifying overall concavity.
$f'(x) = 18x^2 + 6x + 2$
$f''(x) = 36x + 6$
Then, after setting the second derivative equal to zero, it is clear that there is a point of inflection at $x = −\frac{1}{6}$. We can identify the concavity of the function based on the graph of the second derivative.
It is clear that the second derivative is negative at all values of $x$ before $x = −\frac{1}{6}$ and positive at all values after. Therefore, $f(x)$ is concave up on the domain $x > −\frac{1}{6}$, or $(−\frac{1}{6}, \infty)$.
Question 13
Identify where the following function is concave down on the given domain:
$f(x) = \sin(3x), −\frac{π}{3} ≤ x ≤ \frac{π}{3}$
A $(0, \frac{π}{3})$ B $[0, \frac{π}{3}]$ C $(−\frac{π}{3}, 0)$, $(0, \frac{π}{3})$ D $\text{The}$ $\text{function}$ $\text{is}$ $\text{not}$ $\text{concave}$ $\text{down}$ $\text{anywhere}$ $\text{on}$ $\text{the}$ $\text{domain.}$
Question 13 Explanation:
The correct answer is (A). We must first identify the second derivative and any points of inflection to identify the function’s concavity.
$f'(x) = 3 \cos(3x)$
$f''(x) = −9\sin(3x)$
Now, it is clear that the second derivative has roots at $x = −\frac{π}{3}$, $x = 0$, and $x = \frac{π}{3}$ on the given domain, which implies that $f(x)$ has a point of inflection at $x = 0$. Notice that the other two roots are not considered points of inflection, as a function cannot change concavity about an endpoint of its domain. Then, we can observe the concavity of the function from the graph of the second derivative.
It is clear from the graph that the second derivative is positive on $(−\frac{π}{3}, 0)$ and negative on $(0, \frac{π}{3})$. Therefore, $f(x)$ is concave down on $(0, \frac{π}{3})$.
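The sign claims can be spot-checked numerically by sampling $f''(x) = −9\sin(3x)$ on each open interval (the 50-point grid is an arbitrary choice):

```python
import math

def fpp(x):
    # second derivative of sin(3x)
    return -9 * math.sin(3 * x)

third_pi = math.pi / 3
left  = [-third_pi + k * third_pi / 50 for k in range(1, 50)]   # points in (-pi/3, 0)
right = [k * third_pi / 50 for k in range(1, 50)]               # points in (0, pi/3)

assert all(fpp(x) > 0 for x in left)    # concave up on (-pi/3, 0)
assert all(fpp(x) < 0 for x in right)   # concave down on (0, pi/3)
```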
Question 14
Identify where the function $f(x)$ is concave up and concave down given the graph of the second derivative below.
A $\text{Concave Down:}$ $\text{Nowhere}$ $\text{on}$ $\text{the}$ $\text{domain.}$ $\text{Concave}$ $\text{Up:}$ $(−\infty, \infty)$ B $\text{Concave Down:}$ $(−1, \infty)$ $\text{Concave}$ $\text{Up:}$ $(–\infty, –1)$ C $\text{Concave Down:}$ $(−\infty, 0)$ $\text{Concave}$ $\text{Up:}$ $(0, \infty)$ D $\text{Concave Down:}$ $(−\infty, −1)$ $\text{Concave}$ $\text{Up:}$ $(−1, \infty)$
Question 14 Explanation:
The correct answer is (D). It is clear that there exists a point of inflection for the function at $x = −1$, as $f''(−1) = 0$ by the given graph.
Similarly, it can be identified that $f''(x)$ is negative for $(−\infty, −1)$ and is positive for $(−1, \infty)$. Therefore, $f(x)$ must be concave down for $(−\infty, −1)$ and concave up for $(−1, \infty)$.
Question 15
Identify the concavity and the points of inflection for the function $f(x)$ given graph of the first derivative of the function, $f'(x)$, below.
A $\text{Point(s)}$ $\text{of}$ $\text{inflection}$ $\text{at}$ $\text{no}$ $\text{values}$ $\text{of}$ $x.$ $\text{Concave Down:}$ $\text{Nowhere}$ $\text{on}$ $\text{the}$ $\text{domain.}$ $\text{Concave}$ $\text{Up:}$ $(−\infty, \infty)$ B $\text{Point(s)}$ $\text{of}$ $\text{inflection}$ $\text{at}$ $x = −1$, $x = 2$ $\text{Concave Down:}$ $(−1, 2)$ $\text{Concave}$ $\text{Up:}$ $(−\infty, −1) \cup (2, \infty)$ C $\text{Point(s)}$ $\text{of}$ $\text{inflection}$ $\text{at}$ $x = 0.5$ $\text{Concave Down:}$ $(−\infty, 0.5)$ $\text{Concave}$ $\text{Up:}$ $(0.5, \infty)$ D $\text{Point(s)}$ $\text{of}$ $\text{inflection}$ $\text{at}$ $x = 0.5$ $\text{Concave Down:}$ $(0.5, \infty)$ $\text{Concave}$ $\text{Up:}$ $(−\infty, 0.5)$
Question 15 Explanation:
The correct answer is (C). It is important to recognize that the concavity and points of inflection cannot be observed directly from the given graph, as it is not the graph of the second derivative. We must visualize the graph of the second derivative using the given graph. Recall that the derivative is simply a function whose output measures the slope of the tangent line to the curve at the given point. Thus, we can visualize the second derivative graph by visualizing tangent lines to the given graph.
We know that there exists a point of inflection for the function at $x = 0.5$, as the tangent line to the first derivative graph at $x = 0.5$ would have a slope of $0$, which implies that $f''(0.5) = 0$.
Then, we can similarly identify that all the tangent lines to the given graph prior to $x = 0.5$ would be negative or “downward-sloping,” which implies that the derivative of the given graph would be negative at these values. Similarly, all the tangent lines to the given graph after $x = 0.5$ would be positive or “upward-sloping,” which implies that the derivative of the given graph would be positive at these values.
Thus, $f''(x) > 0$ for all $x > 0.5$ and $f''(x) < 0$ for all $x < 0.5$. Therefore, $f(x)$ is concave down on $(−\infty, 0.5)$, is concave up on $(0.5, \infty)$, and has a point of inflection at $x = 0.5$.
https://stats.stackexchange.com/questions/270412/are-there-any-implementations-examples-of-hierarchical-classifiers/326866 | # Are there any implementations/examples of hierarchical classifiers?
Currently, I'm using Scikit-learn in Python 3.6 to classify data with 7-8 classes (e.g. [C, A.1, A.2, B.3, B.1.1, B.1.2, B.2.1, B.2.2], represented by dark borders below), but I started realizing that there is an inherent hierarchy in these groups that could be used during classification. I was going to write my own algorithm, but I don't want to reinvent the wheel if one exists.
Does an algorithm that can predict class-labels in hierarchical manner like this exist (preferably in Python)? If not, are there any examples of an approach like this being used? It reminds me of layers in a neural network but I do not have nearly enough samples for a neural net.
For example, A.1 and A.2 in Level-1 are subgroups of Level-0_A. Level-0_C has no subgroups.
I couldn't find an implementation of Hierarchical Classification on scikit-learn official documentation. But I found this repository recently. This module is based on scikit-learn's interfaces and conventions. I hope this will be useful.
https://github.com/globality-corp/sklearn-hierarchical-classification
It's possible to install it with pip:
pip install sklearn-hierarchical-classification
A thorough usage example is provided in the repo.
• Hi! If anyone was able to figure out how to use this package for deep hierarchy - please post a link here if you could? For me it works for depth as in provided example, but I am not certain how to properly pass a deeper hierarchy – Maksim Khaitovich Aug 29 '18 at 17:40
• This link does not work anymore – Snow Sep 28 at 9:46 | 2020-10-25 02:49:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4529441297054291, "perplexity": 1058.8673346006544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885126.36/warc/CC-MAIN-20201025012538-20201025042538-00382.warc.gz"} |
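Since the package link above may rot, here is a dependency-free sketch of the underlying idea — a “local classifier per parent node” scheme matching the hierarchy in the question. The single numeric feature and the per-node threshold rules are invented purely for illustration; in practice each internal node would hold a trained scikit-learn estimator instead of a lambda.

```python
# Child lists per internal node; leaves (A.1, C, B.1.1, ...) have no entry.
HIERARCHY = {
    "<ROOT>": ["A", "B", "C"],
    "A": ["A.1", "A.2"],
    "B": ["B.1", "B.2", "B.3"],
    "B.1": ["B.1.1", "B.1.2"],
    "B.2": ["B.2.1", "B.2.2"],
}

# One "classifier" per internal node: given a sample, pick a child.
# Toy rules on one numeric feature stand in for fitted models.
NODE_CLASSIFIERS = {
    "<ROOT>": lambda x: "A" if x < 3 else ("B" if x < 7 else "C"),
    "A":      lambda x: "A.1" if x < 1 else "A.2",
    "B":      lambda x: "B.1" if x < 4 else ("B.2" if x < 6 else "B.3"),
    "B.1":    lambda x: "B.1.1" if x < 3.5 else "B.1.2",
    "B.2":    lambda x: "B.2.1" if x < 5 else "B.2.2",
}

def predict(x):
    """Route a sample down the tree until a leaf label is reached."""
    node = "<ROOT>"
    while node in HIERARCHY:
        node = NODE_CLASSIFIERS[node](x)
    return node

assert predict(0.5) == "A.1"
assert predict(4.5) == "B.2.1"
assert predict(9.0) == "C"
```

Each node's classifier only ever sees a small, related subset of labels, which is exactly the benefit the question is after when samples per leaf class are scarce.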
http://mathhelpforum.com/statistics/121043-expected-value-draws-get-all-52-cards-deck.html | # Math Help - Expected value of draws to get all 52 cards in a deck?
1. ## Expected value of draws to get all 52 cards in a deck?
Say I have a deck of 52 cards. I draw a card and then return it to the deck, and the probability of drawing each card is 1/52 each time.
What is the expected value of the number of draws needed to draw every card in the deck at least once?
2. Originally Posted by cubrikal
Say I have a deck of 52 cards. I draw a card and then return it to the deck, and the probability of drawing each card is 1/52 each time.
What is the expected value of the number of draws needed to draw every card in the deck at least once?
Cubrikal,
This is called the "coupon collector's problem" in reference to someone who wants to collect a complete set of coupons. In your case, you have 52 "coupons", i.e. cards.
You can find a discussion of the problem and a formula for the expected number of draws needed to get a complete set here:
Coupon collector's problem - Wikipedia, the free encyclopedia
3. In probability theory, the coupon collector's problem describes the "collect all coupons and win" contests. It asks the following question: Suppose that there are n coupons, from which coupons are being collected with replacement. What is the probability that more than t sample trials are needed to collect all n coupons? The mathematical analysis of the problem reveals that the expected number of trials needed grows as O(nlog(n)). For example, when n = 50 it takes about 225 samples to collect all 50 coupons.
I am a bit confused by this
50log(50) =
84.9485002
how do i get 225?
4. Originally Posted by Joel
In probability theory, the coupon collector's problem describes the "collect all coupons and win" contests. It asks the following question: Suppose that there are n coupons, from which coupons are being collected with replacement. What is the probability that more than t sample trials are needed to collect all n coupons? The mathematical analysis of the problem reveals that the expected number of trials needed grows as O(nlog(n)). For example, when n = 50 it takes about 225 samples to collect all 50 coupons.
I am a bit confused by this
50log(50) =
84.9485002
how do i get 225?
The "big Oh" formula is only meant to give an idea of the rate of growth. It is not meant for precise computation. Probably your best formula is
$E(T) \approx n \ln (n) + \gamma n + 1/2$
where $\gamma$ is the "Euler gamma constant", about 0.5772.
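To double-check the numbers in this thread, the exact expectation $E(T) = n H_n$ (with $H_n$ the $n$th harmonic number) can be computed directly and compared with the asymptotic formula. Note the natural log: the 84.95 in the earlier post came from using $\log_{10}$.

```python
import math

def expected_draws(n):
    # exact expectation: n * H_n, H_n = 1 + 1/2 + ... + 1/n
    return n * sum(1 / k for k in range(1, n + 1))

def approx(n):
    # n ln(n) + gamma*n + 1/2, gamma = Euler-Mascheroni constant
    return n * math.log(n) + 0.5772156649 * n + 0.5

assert round(expected_draws(50)) == 225   # the "about 225 samples" from the quote
assert round(expected_draws(52)) == 236   # a 52-card deck: ~236 draws on average
assert abs(expected_draws(52) - approx(52)) < 0.01
```

So for the original question, the expected number of draws to see all 52 cards is about 236.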
5. Originally Posted by awkward
Cubrikal,
This is called the "coupon collector's problem" in reference to someone who wants to collect a complete set of coupons. In your case, you have 52 "coupons", i.e. cards.
You can find a discussion of the problem and a formula for the expected number of draws needed to get a complete set here:
Coupon collector's problem - Wikipedia, the free encyclopedia
Thanks for the name of the problem. That wiki article is pretty informative too. | 2015-10-04 16:26:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.812573254108429, "perplexity": 215.9789238915383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736675218.1/warc/CC-MAIN-20151001215755-00069-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://bsic.it/articles/ | ## Negative interest rate policy – friend or foe?
Introduction Interest rates have been persistently low across several advanced economies since the Global Financial Crisis. As short- and medium-term rates approached zero, central banks were compelled to turn to increasingly forceful measures in order to provide an adequate monetary stimulus. In 2012, the Danish National Bank was the first one to lower its key interest rate into negative territory. In subsequent years, the ECB and the central banks of Sweden, Read more…
## Market Recap 08/03/2020
US The US market ended another wild trading week with a slight gain after last Friday’s substantial losses. S&P500 closed at 2,972.37, with a weekly gain of 0.61%. The Dow-Jones Industrial Average Index (DJIA) was up 1.79%, reaching 25,864.78. The demand for haven assets is rising and bets are increasing for the Federal Reserve to cut interest rates again. The US Federal Reserve has cut half a percentage point this week. The policy of Read more…
## The Accuracy of Relative Valuation: Focus on the Banking Industry
Introduction Relative valuation is a widespread approach among practitioners and is, broadly speaking, a double-check methodology used to back-test more sophisticated valuation techniques. Relative valuation relies on the main assumption that stock markets are on average efficient and fundamentals are correctly priced in. The aim of this article is to examine the effectiveness and the accuracy of relative valuation through an empirical analysis. The first paragraph highlights some of the main features Read more…
## Deal Recap 8/03/2020
BSIC has brought you some of the most interesting deals over the last week; we hope you’ll enjoy! Elliott Management builds a 4% stake in Twitter Deal Value: $1bn | Deal Type: Shareholder Activism | Date: 01-Mar-2020 | Nationality: US | Sector: Technology Paul Singer has succeeded in building a stake in Twitter worth more than $1bn, obtaining a share of about 4% in the social media company. The intention is clear: Singer wants Jack Dorsey, founder and current CEO, Read more…
## Democratic Primaries: A markets perspective Part 2 – The Super Tuesday Follow-Up
Introduction Back in December we published an article called “Democratic Primaries: A markets perspective” in which we provided an overview of the most significant Democratic candidates, namely Bernie Sanders and Elizabeth Warren from a more liberal, left-wing side and Joe Biden and Michael Bloomberg from a more moderate centrist side, and their proposed legislation. We continued to form an opinion on how markets would react if said candidates were to become the democratic Read more…
## Market Recap 01/03/2020
United States The US markets experienced the worst week since 2008 due to a massive sell-off caused by coronavirus uncertainty. S&P500 closed at 2954.22 with a weekly loss of a staggering 11.50%, whilst the Dow-Jones Industrial Average (DJIA) Index finished the week at 25,409, down 12.00% over the prior week. If financial markets continue to plummet and the virus continues to spread, the Fed might need to intervene in the form Read more…
## Franklin Templeton Acquires Legg Mason to Survive the New Threats of the Asset Management Industry
Franklin Templeton Investments [BEN] – Mkt Cap as of February 25th, 2020: $11.79bn Legg Mason [LM] – Mkt Cap as of February 25th, 2020: $4.35bn Introduction As Invesco’s chief executive Marty Flanagan correctly pointed out some months ago, the Asset Management industry is going through dramatic changes right now; in his words: “Winners and losers are being created today like never before. The strong are getting stronger and the big are going Read more…
## Overhaul of Common Practices in Finance with Network Models
Introduction In many financial studies the default assumption is that changes in prices are random and independent between different financial instruments. Opposing studies, thus, are often concerned with discovering dependencies and whether they are driven by common economic factors. One approach to the problem is network graphical models. With respect to finance theory, network graphical models appeared in the literature in 1999 and have been advanced since. Here, we want to delineate Read more…
## Consolidation in the European Financial Services Sector
European Integration As an introduction, we provide a brief overview of the pan-European regulation on which Financial Services is largely based and which helps explain the convergence and the subsequent attractiveness of the industry towards consolidation. Its objective is the further integration of European countries by the creation of a single market for services, in particular Financial Services. Banking Union is the first pillar in the system and it is finalized Read more…
## ESG Investing: How to make your returns go green
Introduction We have touched upon the new trend of ESG factors shaping investor preferences in the “Green Bonds Boom” article published last semester. This time around we will try to analyze how environmental, social and governance responsibility may be driving equities, in order to see what the future of asset allocation will look like as many institutional and retail investors seem to change their lenses. ESG in 2019 First, let’s Read more…
## Deal Recap 01/03/2020
The spiking diffusion of COVID-19 outside of China, particularly in South Korea, Italy and Iran, has negatively affected financial markets, amid concerns about its impact on business activity: in the US, the Dow Jones Industrial Average, S&P 500 and Nasdaq Composite were all in correction territory this week. Due to this climate of uncertainty, the weak worldwide deal activity, observed already at the beginning of the year, seems on track to continue. Read more…
## The Death of FX Volatility, What’s Next?
Introduction Over the past few years, volatility in the foreign exchange markets has entered a marked downward trend, which ultimately culminated with the signing of the US-China Phase One trade deal in January 2020. This descending movement has characterised both Developed and Emerging Markets currency pairs and was largely driven by a macroeconomic environment in which central banks’ easier monetary policies play the leading role. Indeed, as a result of the excess Read more…
## Weekly Deal Recap
BSIC has brought you some of the most interesting deals over the last week; we hope you’ll enjoy! Franklin Templeton acquires Legg Mason for $4.5bn Deal Value: $4.5bn | Deal Type: Acquisition | Date: 18-Feb-2020 | Nationality: US | Subsector: Finance Franklin Templeton, the US global investment firm with $740bn of AUM, is set to acquire its competitor Legg Mason for $4.5bn in an all-cash transaction, paying a 23% premium. The acquisition of Read more…
## Private Equity Snaps Up European Telecom Assets: KKR Sells Deutsche Glasfaser to EQT and OMERS for €2.8bn
Introduction KKR, the massive American private equity (PE) fund, has agreed on the 10th of February, 2020, to sell the ultrafast German fiber internet business Deutsche Glasfaser in a €2.8bn deal to EQT and Omers, respectively a private equity firm and a Canadian pension fund. According to what has been revealed so far on the deal, EQT plans to merge the company with Inexio, the German fiber optic network operator which the Read more…
http://clay6.com/qa/30248/a-flask-containing-12g-of-a-gas-of-relative-molecular-mass-120-at-a-pressur | Want to ask us a question? Click here
Browse Questions
Ad
0 votes
# A flask containing 12g of a gas of relative molecular mass 120 at a pressure of 100 atm, was evacuated by means of a pump until the pressure was 0.01 atm at the same T. Which of the following is the best estimate of the number of molecules left in the flask? ($N_0 = 6\times10^{23}\ mol^{-1}$)
$(a)\;6\times10^{19}\qquad(b)\;6\times10^{18}\qquad(c)\;6\times10^{17}\qquad(d)\;6\times10^{13}$
## 1 Answer
At constant V and T
$\large\frac{P_1}{n_1} = \large\frac{P_2}{n_2}$
$\large\frac{100}{\large\frac{12}{120}} = \large\frac{0.01}{n_2}$
$(n_1 = \large\frac{12}{120})$
$n_2 = \large\frac{0.01}{1000}$
$\;\;\;\;=1\times10^{-5}$
No. of molecules left = $6.02\times10^{23}\times10^{-5}$
$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;=6.02\times10^{18}$
Hence answer is (b)
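Restating the solution's arithmetic as a short script (values taken straight from the problem statement):

```python
n1 = 12 / 120            # initial moles of gas = mass / molar mass = 0.1 mol
p1, p2 = 100, 0.01       # pressures in atm, at constant V and T
n2 = n1 * p2 / p1        # P/n is constant at fixed V and T
molecules = 6.02e23 * n2 # Avogadro's number times remaining moles

assert abs(n2 - 1e-5) < 1e-12
assert abs(molecules - 6.02e18) / 6.02e18 < 1e-9   # ~6.02 x 10^18, option (b)
```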
answered Mar 5, 2014
https://www.sawaal.com/races-and-games-questions-and-answers/a-and-b-take-part-in-100-m-race-a-runs-at-5-kmph-a-gives-b-a-start-of-8-m-and-still-beats-him-by-8-s_3287 | 51
Q:
# A and B take part in 100 m race. A runs at 5 kmph. A gives B a start of 8 m and still beats him by 8 seconds. The speed of B is:
A) 5.15 kmph B) 4.14 kmph C) 4.25 kmph D) 4.4 kmph
Explanation:
A's speed = (5*5/18) m/sec = (25/18) m/sec
Time taken by A to cover 100 m = (100*18/25) sec = 72 sec
Time taken by B to cover 92 m = (72 + 8) = 80 sec.
B's speed =(92/80*18/5)kmph =4.14kmph
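The arithmetic above can be double-checked with a short script (unit conversions only, no new assumptions):

```python
a_speed = 5 * 5 / 18            # 5 km/h in m/s = 25/18
t_a = 100 / a_speed             # A's time over 100 m
assert abs(t_a - 72) < 1e-9     # 72 seconds

t_b = t_a + 8                   # B takes 8 s longer to cover (100 - 8) m
b_speed = (92 / t_b) * 18 / 5   # m/s back to km/h
assert abs(b_speed - 4.14) < 1e-9   # answer (B): 4.14 km/h
```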
Q:
In a race of 1000 m, A can beat B by 100 m; in a race of 800 m, B can beat C by 100 m. By how many meters will A beat C in a race of 600 m?
A) 127.5 m B) 254 m C) 184 m D) 212 m
Explanation:
When A runs 1000 m, B runs 900 m and when B runs 800 m, C runs 700 m.
When B runs 900 m, distance that C runs = (900 x 700)/800 = 6300/8 = 787.5 m.
In a race of 1000 m, A beats C by (1000 - 787.5) = 212.5 m to C.
In a race of 600 m, the number of meters by which A beats C
= (600 x 212.5)/1000 = 127.5 m.
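The chain of ratios can be verified numerically:

```python
# When A runs 1000 m, B runs 900 m; when B runs 800 m, C runs 700 m.
c_when_b_900 = 900 * 700 / 800          # C's distance while B covers 900 m
assert c_when_b_900 == 787.5

margin_1000 = 1000 - c_when_b_900       # A beats C by 212.5 m over 1000 m
margin_600 = 600 * margin_1000 / 1000   # scale down to a 600 m race
assert margin_600 == 127.5              # answer: 127.5 m
```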
Q:
In a race of 1000 meters, A can beat B by 100 meters, in a race of 800 meters, B can beat C by 100 meters. By how many meters will A beat C in a race of 600 meters?
A) 125.5 meters B) 126.5 meters C) 127.5 meters D) 128.5 meters
Explanation:
When A runs 1000 meters, B runs 900 meters and when B runs 800 meters, C runs 700 meters.
Therefore, when B runs 900 meters, the distance that C runs = (900 x 700)/800 = 6300/8 = 787.5 meters.
So, in a race of 1000 meters, A beats C by (1000 - 787.5) = 212.5 meters to C.
So, in a race of 600 meters, the number of meters by Which A beats C = (600 x 212.5)/1000 = 127.5 meters.
Q:
A can give B 100 meters start and C 200 meters start in a kilometer race. How much start can B give C in a kilometer race?
A) 110.12 meters B) 111.12 meters C) 112.12 meters D) 113.12 meters
Explanation:
A runs 1000 meters while B runs 900 meters and C runs 800 meters.
Therefore, B runs 900 meters while C runs 800 meters.
So, the number of meters that C runs when B runs 1000 meters = (1000 x 800)/900 = 8000/9 = 888.88 meters
Thus, B can give C (1000 - 888.88) = 111.12 meters start
Q:
In a 100 m race, A can beat B by 25 m and B can beat C by 4 m. In the same race, A can beat C by:
A) 21 m B) 26 m C) 28m D) 29m
Explanation:
A : B = 100 : 75
B : C = 100 : 96.
A : C = (A/B * B/C) = (100/75 * 100/96) = 100/72 = 100 : 72
A beats C by (100 - 72) m = 28 m.
Q:
A runs $1\frac{2}{3}$ times as fast as B. If A gives B a start of 80 m, how far must the winning post be so that A and B might reach it at the same time?
A) 200 m B) 300m C) 270m D) 160m
Explanation:
Ratio of the speeds of A and B $= \frac{5}{3} : 1 = 5 : 3$
Thus, in a race of 5 m, A gains 2 m over B.
2 m are gained by A in a race of 5 m.
80 m will be gained by A in a race of $\frac{5}{2} \times 80$ m $= 200$ m
=> Winning post is 200 m away from the starting point.
Q:
A can run 22.5 m while B runs 25 m. In a kilometre race B beats A by:
A) 100 m B) 75 m C) 25 m D) 50m
Explanation:
When B runs 25 m, A runs 45/2 m
When B runs 1000 m, A runs (45/2*1/25*1000)m = 900 m.
B beats A by 100 m.
Q:
In a 200 metres race A beats B by 35 m or 7 seconds. A's time over the course is:
A) 40 sec B) 47sec C) 33sec D) none of these
Explanation:
B runs 35 m in 7 sec.
B covers 200 m in (7/35*200) = 40 sec.
B's time over the course = 40 sec.
A's time over the course (40 - 7) sec = 33 sec.
Q:
In a game of 100 points, A can give B 20 points and C 28 points. Then, B can give C:
A) 8 points B) 10 points C) 14points D) 40points
Explanation:
A : B = 100 : 80.
A : C = 100 : 72.
B : C = (B/A)*(A/C) = (80/100)*(100/72) = 10/9 = 100/90 = 100:90
B can give C 10 points | 2021-09-27 02:07:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3933517038822174, "perplexity": 3168.255226046881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058222.43/warc/CC-MAIN-20210926235727-20210927025727-00521.warc.gz"} |
https://icml.cc/virtual/2022/poster/16369 | ## Biased Gradient Estimate with Drastic Variance Reduction for Meta Reinforcement Learning
### Yunhao Tang
##### Hall E #829
Keywords: [ RL: Policy Search ] [ MISC: Transfer, Multitask and Meta-learning ] [ Reinforcement Learning ]
Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT
Spotlight presentation: Reinforcement Learning
Wed 20 Jul 10:15 a.m. PDT — 11:45 a.m. PDT
Abstract: Despite the empirical success of meta reinforcement learning (meta-RL), there are still a number of poorly-understood discrepancies between theory and practice. Critically, biased gradient estimates are almost always implemented in practice, whereas prior theory on meta-RL only establishes convergence under unbiased gradient estimates. In this work, we investigate such a discrepancy. In particular, (1) We show that unbiased gradient estimates have variance $\Theta(N)$ which linearly depends on the sample size $N$ of the inner loop updates; (2) We propose linearized score function (LSF) gradient estimates, which have bias $\mathcal{O}(1/\sqrt{N})$ and variance $\mathcal{O}(1/N)$; (3) We show that most empirical prior work in fact implements variants of the LSF gradient estimates. This implies that practical algorithms "accidentally" introduce bias to achieve better performance; (4) We establish theoretical guarantees for the LSF gradient estimates in meta-RL regarding its convergence to stationary points, showing better dependency on $N$ than prior work when $N$ is large.
Chat is not available. | 2023-03-21 07:58:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.536193311214447, "perplexity": 2657.935789247098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943637.3/warc/CC-MAIN-20230321064400-20230321094400-00178.warc.gz"} |
http://math-faq.com/wp/question/how-do-you-calculate-a-limit-algebraically/ | # How Do You Calculate a Limit Algebraically?
You can recognize what to do with a limit by what happens when you substitute the value x approaches into the expression. If substitution gives the indeterminate form 0/0, there is algebra you can do to find the exact value of the limit.
In the first two examples, the expression may be factored and simplified; then you can substitute the value for x.
$\underset{x\to 3}{\mathop{lim }}\frac{{{x}^{2}}-5x+6}{x-3}$
$\underset{x\to -1}{\mathop{lim }}\frac{{{x}^{2}}-x-2}{x+1}$
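As a sketch of the factoring approach, the first example works out as follows (using $x^2-5x+6=(x-3)(x-2)$):

```latex
\underset{x\to 3}{\mathop{lim }}\frac{{{x}^{2}}-5x+6}{x-3}
  = \underset{x\to 3}{\mathop{lim }}\frac{(x-3)(x-2)}{x-3}
  = \underset{x\to 3}{\mathop{lim }}\,(x-2) = 1
```

The second example factors the same way, with $x^2-x-2=(x-2)(x+1)$.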
In the next two examples, the fractions in the numerator must be combined before the fraction may be simplified.
$\underset{x\to 0}{\mathop{lim }}\frac{\frac{1}{x-6}+\frac{1}{6}}{x}$
$\underset{x\to 0}{\mathop{lim }}\frac{\frac{1}{4}-\frac{1}{x+4}}{x}$
The next two examples are designed to throw you off. When you substitute the value into the expression, you do not get 0/0 — you get a nonzero number divided by 0, so the factoring algebra does not apply. This means you need to use a table or graph to investigate the limit.
$\underset{x\to -5}{\mathop{lim }}\frac{1}{{{\left( x+5 \right)}^{2}}}$
$\underset{x\to 1}{\mathop{lim }}\frac{x}{{{\left( x-1 \right)}^{2}}}$
The next two examples show how to rationalize the numerator to do a limit.
$\underset{x\to 1}{\mathop{lim }}\frac{\sqrt{x}-1}{x-1}$
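As a sketch of how rationalizing resolves this one, note that $x-1=(\sqrt{x}-1)(\sqrt{x}+1)$:

```latex
\underset{x\to 1}{\mathop{lim }}\frac{\sqrt{x}-1}{x-1}
  = \underset{x\to 1}{\mathop{lim }}\frac{1}{\sqrt{x}+1}
  = \frac{1}{2}
```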
This example may be done two different ways as the next two boards demonstrate.
$\underset{x\to 4}{\mathop{lim }}\frac{\sqrt{x}-2}{x-4}$ | 2018-09-22 04:13:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7700822353363037, "perplexity": 251.27586843316055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00112.warc.gz"} |
https://homework.cpm.org/category/MN/textbook/cc3mn/chapter/3/lesson/3.2.4/problem/3-102 | ### Home > CC3MN > Chapter 3 > Lesson 3.2.4 > Problem3-102
3-102.
Copy and complete each of the Diamond Problems below. The pattern used in the Diamond Problems is shown at right.
1. Divide $1$ by $0.5$ to find $y$.
Add $x$ and $y$ together to get the bottom of the diamond.
• What are the factors of $10$?
Which of those add to $7$?
1. Follow the steps in part (a).
1. Use the pattern above to find $xy$ and $x+y$. | 2022-05-19 02:20:21 | {"extraction_info": {"found_math": true, "script_math_tex": 9, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2713806927204132, "perplexity": 1038.2342927344655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522741.25/warc/CC-MAIN-20220519010618-20220519040618-00750.warc.gz"} |
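The Diamond Problem pattern (top = the product xy, bottom = the sum x + y) can be automated with a short brute-force search over factor pairs. A sketch for the integer case only — the function name is our own, and part (a)'s non-integer values (like 0.5) would need the division approach from the hint instead:

```python
def diamond(product, total):
    """Find an integer pair (x, y) with x * y == product and x + y == total."""
    for x in range(-abs(product), abs(product) + 1):
        if x != 0 and product % x == 0:
            y = product // x
            if x + y == total:
                return x, y
    return None

# Which factors of 10 add to 7?
print(diamond(10, 7))  # (2, 5)
```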
https://quant.stackexchange.com/questions/40491/programmatically-detect-rsi-divergence/41335 | # Programmatically detect RSI divergence
How can I programmatically detect bullish and bearish RSI divergences?
A bullish divergence occurs when the underlying security makes a lower low and RSI forms a higher low. RSI does not confirm the lower low and this shows strengthening momentum.
A bearish divergence forms when the security records a higher high and RSI forms a lower high. RSI does not confirm the new high and this shows weakening momentum.
I was searching for answers to the same question and came across your question.
After some thought and research, here is the plan I have developed. I will be working in Python.
1. Calculate relative maxima and minima with SciPy.
2. Calculate RSI at those points using TA-Lib.
3. For each pair of lows and highs, compare the change in price with the difference in RSI.
I'm completely new to technical analysis, so in case I have made any oversights, feedback would be greatly appreciated. I wanted to ask about your programming language and data format, but don't have enough reputation to comment.
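Step 3 reduces to a simple pairwise comparison once you have each pair of extrema and the RSI values at those points. A minimal sketch of that check (the function name and the sample values are our own, not from any library):

```python
def classify_divergence(price_a, price_b, rsi_a, rsi_b):
    """Compare an earlier price extreme (a) with a later one (b).

    Bullish divergence: price makes a lower low while RSI makes a higher low.
    Bearish divergence: price makes a higher high while RSI makes a lower high.
    Returns None when RSI confirms the price move (no divergence).
    """
    if price_b < price_a and rsi_b > rsi_a:
        return "bullish"
    if price_b > price_a and rsi_b < rsi_a:
        return "bearish"
    return None

# Lower low in price (100 -> 95) but higher low in RSI (28 -> 33):
print(classify_divergence(100, 95, 28, 33))  # bullish
```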
• Cool, thanks. If my answer was helpful for you, I'd appreciate an upvote and/or accept. IMHO my answer is more specific and complete than any of the answers on Reddit so far. – Casey Jones Aug 20 '18 at 2:13
• Awesome, thanks. :) I actually just started developing my own bot a couple of days ago. So far it just does trailing stop losses for the sell strategy. I'm planning to use RSI divergences at first for the buy strategy, so I'll be curious to compare my algorithm with yours later. – Casey Jones Aug 20 '18 at 6:08
• I am attempting to do the same thing with finding divergences and was able to complete steps 1 and 2 before hitting a roadblock and stumbling upon this. Could you explain how you computed step 3? I have the index points of these highs and lows and all the RSI/close info. Just trying to figure out a way to automate the checking, using those specific index points. – hkml Mar 12 '19 at 2:57
• I'm not sure if I understand your question. As explained in the original question: "A bullish divergence occurs when the underlying security makes a lower low and RSI forms a higher low... A bearish divergence forms when the security records a higher high and RSI forms a lower high." If you have identified a pair of lower lows in price (priceA and price B), then priceA - priceB should be positive, right? If RSIa - RSIb (RSI at the same times) is then negative, that means the RSI has risen, which signals a bullish divergence. Does that help? – Casey Jones Mar 18 '19 at 2:48
• scipy.argrelmin is dangerous in quant trading because it has a forward-looking component. In other words, the relative maxima at time t depends on data from time t+i for positive i, which invalidates your simulation. – Chris Conlan Jul 25 at 23:27
I want to implement exactly the same principle in C#, and I realized that I should start from the opposite direction: first find the Higher High or Lower Low, and then check RSI. After finding an HH or LL, checking RSI is a trivial task. To find an HH or LL you could use the ZigZag indicator. Investopedia explains how to calculate it in more detail. You can also find a Python version of it on the QuantConnect forum, and more versions elsewhere on the internet.
• The problem with ZigZag indicator is it draws peaks when they are quite in the past (it retrospectively updates itself). A way to detect turning points as they happen is quite difficult – Dr. Andrew Burnett-Thompson Jul 2 at 17:15 | 2020-09-18 13:35:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35957014560699463, "perplexity": 1691.2798906236828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187899.11/warc/CC-MAIN-20200918124116-20200918154116-00570.warc.gz"} |
http://www.learn-math.top/are-coordinate-systems-just-rank-1-covariant-tensor-fields/ | # Are coordinate systems just rank 1 covariant tensor fields?
Question: Given the Euclidean plane $\mathcal{P}$, with the point $O \in \mathcal{P}$ (called "the origin").
1. Is a Cartesian coordinate system for this plane centered at $O$ just an orthonormal basis for the cotangent space of the Euclidean plane at $O$, i.e. an orthonormal basis for $(T_O \mathcal{P})^*$?
2. Are vectors with tail at $O$ in the Euclidean plane $\mathcal{P}$ just elements of the tangent space at $O$, i.e. $v \in T_O \mathcal{P}$? And is a vector $w$ with tail at point $A \in \mathcal{P}$ just an element $w \in T_A \mathcal{P}$?
3. That any point can be given a unique coordinate is just a reflection of the fact that $\mathcal{P} \simeq (T_O\mathcal{P})^*$ as vector spaces, and the fact that every point can be associated with a position vector is just a way of expressing the fact that $\mathcal{P} \simeq T_O\mathcal{P}$ as vector spaces? And the fact that position vectors can be associated with coordinates is just a reflection of the fact that, being finite-dimensional vector spaces, $T_O\mathcal{P} \simeq (T_O\mathcal{P})^*$?
Incredibly long-winded context:
I was thinking recently that coordinate systems and “physical vectors” are naturally dual — the coordinates of a space don’t change under a linear transformation, whereas “physical vectors” do change under linear transformations. In contrast, coordinates change under (aptly named) changes of coordinates, whereas “physical vectors” do not change under changes of coordinates.
In other words, it seems like it makes the most sense to define coordinate systems to be covariant tensor fields of order 1, i.e. one-forms. Then the purpose of always having and considering cotangent and tangent spaces makes sense, because the former is the coordinate system with origin at a given point, and the latter is the space of “physical vectors” anchored at the point.
Moreover, the notions of coordinate system for a vector space only starts to get messy exactly when the notion of dual space gets messy — in infinite dimensions. The inner product at a point on a Riemannian manifold, which generates associated notions of length and angle, is defined on the tangent space, whose objects probably not coincidentally are “physical vectors”, the objects for which notions of length and angle make the most sense (compared to points in a coordinate space). Likewise, notions of integration and differentiation seem to correspond most naturally to cotangent spaces, which seems to correspond to how calculus is defined in the simplest possible setting — from a space of points to another space of points. And vector fields live in tangent spaces, and vector fields are most easily thought of in terms of “physical vectors”, again suggesting that tangent spaces are “physical vectors” and that cotangent systems are just coordinate systems.
At least one other person seems to have come to this conclusion independently (or maybe I read that answer before and my subconscious just understood it now). This seems to be how physicists think of everything, and implicitly the viewpoint behind much of differential geometry, or at least this mental model seems to make what these experts say seem much clearer now. For instance, these youtube videos about tensor calculus suddenly make much more sense when one thinks of “the covariant basis” and the “coordinate system” as literally the exact same thing. Watching this video again, it also makes a lot more sense if one thinks “coordinates = covectors”.
The only reason why one might not want to identify coordinate systems with cotangent spaces, is that one is often taught, when first introduced to “physical vectors”, to identify them with the coordinates of the heads of their arrows. But that shouldn’t discourage us, because obviously any finite-dimensional vector space is isomorphic to its dual vector space (as vector spaces). The isomorphism is so simple that we can completely ignore that it exists in elementary situations, but then suddenly one moves to differential geometry and gets confused because the distinction was pushed under the rug previously.
This answer also seems to suggest the same thing — a vector space has no “intrinsic coordinate system” — because one needs to specify a basis for the dual space in order to do so, i.e. a vector space has no “intrinsic coordinate system” for the exact same reason that it doesn’t have an “intrinsic basis” — one needs to specify both a basis for the vector space, and one for the dual vector space, the latter which we call a “coordinate system”. And the manner in which coordinates for points are found — via orthogonal projections, i.e. inner products — is just application of covectors to vectors, or at least so it would seem to me.
Related questions:
Computationnal geometry: vector, basis, point and coordinate system?
How can vector functions define coordinate systems?
What is (fundamentally) a coordinate system ?
Linear algebra revisited: What do we do when we set a coordinate system?
Not entirely, but you can find a direct connection. A covector field inhabits the cotangent space, a vector field inhabits the tangent space, and a coordinate system is a set of $n$ scalar fields, where $n$ is the dimension of the manifold. This statement is valid for any differentiable manifold, without the need to identify an origin.
If one has specifically a flat manifold and designates an origin, one can canonically identify (points of) the manifold with (vectors of) the tangent space at the origin. A covector assigns to each vector a scalar value. Through the mentioned identification, $n$ linearly independent covectors become equivalent to assigning to each point a set of $n$ coordinates, and transformations necessarily behave equivalently. This picture only holds for "straight" coordinate systems though: it cannot produce curvilinear coordinate systems, but it applies regardless of the angles involved. To keep one's implicit presumptions to a minimum, it is best to avoid assuming metric concepts such as angle and distance ("Euclidean").
It may be noted that all this depends on a list of restrictions: the manifold must be flat, an origin must be selected, the (co)tangent space at only the origin is considered, not a general (co)vector field; the coordinates must be linear, parallel and uniformly spaced. But if one thinks in terms of differential behaviour, all of these are valid “locally”, and the insight is a useful one. So I’d say ‘no’ to all your questions, but in the limit of the tangent space at any given point, they become ‘yes’.
Thank you for double-checking my sanity — I appreciate it! This is sort of what I thought — given a point $p$ on a manifold $M$, both $T_pM$ and $(T_pM)^*$ "sit" at $p$, so a basis for $(T_pM)^*$ can be used to define a coordinate system for $T_pM$ via the "scalar product". Of course in general the only point of $(T_pM)^*$ or $T_pM$ that one can identify with a point of $M$ is $p$, but as you pointed out, the further identifications I mentioned depend crucially on the space being flat — obviously of course the isomorphisms with the tangent and cotangent spaces would not otherwise hold.
– William
2 days ago
Also the definition of a coordinate system as a set of $n$ scalar fields is a useful one and one I hadn't thought of/known before. And with that definition, if one defines the coordinate system via the action of a basis of covectors on the tangent space, it is still obvious why the covectors are covariant and the vectors are contravariant. I still have to think about this some more, but this definitely helps a lot to clarify my thoughts — again I appreciate it.
– William
2 days ago
I guess what I found confusing is that no one ever bothers to say that the coordinate system is generated by the action of the dual frame on the original frame. For instance, while this excerpts on Wikipedia both mention the original frame and the coordinate system for them, they don’t mention that the coordinate system is generated by a dual frame, thus leaving no intuition as to why the coordinates should not change when a linear transformation is applied to the frame. en.wikipedia.org/wiki/Orthonormal_frame, en.wikipedia.org/wiki/Affine_space#Affine_coordinates
– William
2 days ago
1
I'm forever confused by things because "no one ever bothers to say". Even mathematicians tend to produce texts that induce one to assume things that are not universally true (which is essentially your point), and physicists are much worse. A basis need not even be a holonomic basis; it can be any set of $n$ linearly independent vector fields, just as an example that catches many (a slight diversion from what you're saying). In general, of course, to think of coordinates as the action of a dual basis does not work.
– qman
2 days ago
When I say coordinate system I always have in mind “linear” coordinate systems (i.e. affine coordinate systems centered at the origin) i.e. something which is compatible with vector space structure. I know that the definition of coordinate system can be generalized to curvilinear coordinate systems and others, but I don’t know a word that applies only to the specific situation I have in mind. (Maybe “frame” comes close? I honestly don’t know.) Anyway the specific example I mentioned is the Cartesian coordinate system of the plane, which is certainly not curvilinear.
– William
yesterday | 2018-06-23 16:01:10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9222899079322815, "perplexity": 427.39027478643607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865098.25/warc/CC-MAIN-20180623152108-20180623172108-00456.warc.gz"} |
https://math.stackexchange.com/questions/514118/mysterious-failure-to-generate-independent-set-of-random-variables | # Mysterious failure to generate independent set of random variables
After getting my answer in this:
Making a well conditioned orthonormal basis
I am running into a problem which I do not understand. I have n dependent gaussian random variables that are related by a known n x n covariance matrix C. The solution I was given was to take the positive definite square root of the inverse of C - which means I put the n independent random variables (call them the vector x) through a linear transformation (D^-.5*Q)*x, where D is a square diagonal matrix of the eigenvalues (of C) and Q's rows are the corresponding eigenvectors.
The trouble is, the covariance matrix isn't necessarily positive definite under certain values of the parameters of the problem. I don't want a complex matrix, taking the negative eigenvalues to the -.5 power, so my idea was, I keep just the positive eigenvalues. So I look at D, and if it's a 100 x 100 matrix but only 67 of the eigenvalues are positive, I throw out the negative ones, and use only a 67 x 67 matrix for D, which I can take to the -.5 power, and then Q is no longer a square matrix but a 67 x 100 matrix, and the 100-length vectors are the normalized eigenvectors of C corresponding to the eigenvalues that I kept. And lo and behold, when I find this matrix A=(D^-.5*Q), and I put x under the linear transformation Ax, so that the covariance matrix of the result SHOULD be a 67 x 67 matrix, for 67 random variables that are actually variance-1 and independent and constructed out of the original 100 random variables that were too highly correlated for their own good, and I calculate what SHOULD be the new covariance matrix ACA', I get that it is very nearly the 67 x 67 entry identity matrix. GREAT.
Unfortunately....
it does not hold up to actual observation. When I actually try this transformed set of random variables, they are not even CLOSE to being independent. But apparently, only when I throw out some negative eigenvalues, or when it's really close to not being positive definite. In other words, if the parameters of the problem are such that all 100 eigenvalues are quite positive, it works - Ax produces 100 random variables that appear, by monte carlo simulations, to truly be independent. But as soon as I throw away any eigenvalues - and I don't even have to do that, as soon as the parameters of the problem put it CLOSE to being at the point where I would be getting negative eigenvalues, suddenly different entries in Ax start exhibiting absurdly high correlations. 0.6, 0.7, as high as 0.9. Even though ACA' is very nearly an identity matrix, albeit of fewer dimensions than the original C. I thought at first that maybe I wasn't doing enough tests, but that thought is gone - I try it with tens of thousands of trials and the correlation of something that was 0.709 before maybe changes to 0.706 with the completely new data. They ARE highly correlated. I thought for a while that it was a result of small numerical errors in dealing with large matrices. But that doesn't explain why it suddenly happens when the parameters of the problems contrive to make the lowest eigenvalue close to crossing below 0.
Am I missing something? Isn't ACA' ALWAYS supposed to be the covariance matrix of Ax if C is the covariance of x? Is it ever not? Because ACA' here is very nearly an identity matrix and Ax's elements are flagrantly dependent. And if not, is there another method of producing as many independent equal variance random variables as possible besides throwing out the negative eigenvalues as I was doing?
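For what it's worth, the algebra described above can be checked numerically. A sketch with synthetic data — the rank-deficient C below is our own construction, not the poster's actual covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for C: symmetric, positive *semi*-definite,
# rank 3 embedded in 5 dimensions (so two eigenvalues are ~0).
B = rng.standard_normal((5, 3))
C = B @ B.T

# Keep only the clearly positive eigenpairs, as described above.
w, Q = np.linalg.eigh(C)          # columns of Q are eigenvectors
keep = w > 1e-10                  # discard the (near-)zero ones
A = np.diag(w[keep] ** -0.5) @ Q[:, keep].T   # k x 5 transform

# The reduced covariance A C A' is the k x k identity, as claimed.
ACA = A @ C @ A.T
print(np.allclose(ACA, np.eye(keep.sum())))   # True
```

So the identity A C A' = I does hold in exact arithmetic for the truncated transform; the observed sample correlations are a separate issue, addressed in the answer below.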
Firstly, let $n$ be the number of rows and $p$ be the number of columns (variables) of your data matrix $\mathbf{X}$. You never said how many records $n$ you were using, and only stated, for example, the size of your $100 \times 100$ covariance matrix $\mathbf{C}$.
You need to look at the Marcenko-Pastur (MP) law on the density of eigenvalues of $\mathbf{C}$ for a random White Wishart matrix, which is essentially an $n \times p$ data matrix, $\mathbf{X}$, with independent columns. In other words, a White Wishart matrix is filled with i.i.d. standard normal variates.
By definition, if $p>n$, then there will be $p-n$ zero eigenvalues of $\mathbf{C}$. In addition, as $n\rightarrow p$, there will be near-zero eigenvalues of $\mathbf{C}$.
Now with regard to the MP Law, the value $\gamma=p/n$ determines the ratio of the dimensions to the size of the $\mathbf{X}$ matrix. The MP law provides the lower $(\lambda^-)$ and upper bound $(\lambda^+)$ of the eigenvalues for $\mathbf{C}$ derived from an $n \times p$ White Wishart $\mathbf{X}$ matrix. Therefore, knowing $n$ and $p$ of any data matrix (i.e., your data matrix, $\mathbf{X}$), you can first use the MP law to calculate $\lambda^-$ and $\lambda^+$, and any of your eigenvalues from $\mathbf{C}$ which exceed $\lambda^+$ are considered to be the signal eigenvalues and those below $\lambda^+$ are believed to be in the noise region. (eigenvalues in the interval $(\lambda^-, \lambda^+)$ correspond to eigenvalues of a White Wishart matrix having the same dimensions as your data $\mathbf{X}$ matrix. | 2019-06-25 12:15:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8127026557922363, "perplexity": 184.73598035088605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999838.23/warc/CC-MAIN-20190625112522-20190625134522-00067.warc.gz"} |
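The MP support edges are simple to compute from $n$ and $p$. A sketch assuming unit-variance data (the function name is our own):

```python
import math

def mp_bounds(n, p):
    """Marcenko-Pastur support edges for an n x p white Wishart matrix,
    assuming unit variance: lambda_pm = (1 +/- sqrt(p/n))^2."""
    gamma = p / n
    return (1 - math.sqrt(gamma)) ** 2, (1 + math.sqrt(gamma)) ** 2

lo, hi = mp_bounds(400, 100)      # gamma = 0.25
print(lo, hi)                     # 0.25 2.25
```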
https://gamedev.stackexchange.com/questions/22750/how-do-i-calc-what-side-the-ball-is-of-the-rect | # How do i calc what side the ball is of the rect?
I have this nice drawing:
http://bildr.no/view/1085283
I know the ball's center and the rectangle's center, but how do I know (programming C#) what side of the rectangle the ball is?
By my drawing, the answer in this case should be the right side.
• Is the rectangle always axis-aligned, or does it have an orientation? – sam hocevar Jan 22 '12 at 14:00
What side of A is B on? Using this info (see Solution 3) you can determine which side of a line A point (or points) B, is on.
For your particular scenario, you need to check which of four sides of the rectangle, the circle is in. In this case, you can treat your circle as just it's centre/origin point. Now you need to look at the rectangle like this:
\ /
\ q1 /<--one of the two lines bordering quadrant 4
*---*
|\ /|<---rectangle edge for quadrant 4
q4| x |q2
|/ \|
*---*
/ q3 \<--one of the two lines bordering quadrant 4
/ \
The q's stand for quadrant or side of the rectangle you're on; the numberings are abritrary, but just show you that the centre point can lie in any of these 4 distinct quadrants. You need to determine that the circle centre is:
• to the correct side of each of the two lines bordering that quadrant (the diagonal ones in the drawing above), i.e. it must lie between them;
AND
• to the correct side of the rectangle-edge that lies within that quadrant (between the aforementioned diagonal lines.
This will tell you if the point falls in to the bowl-shaped area on each of the four sides of the rectangle shown above.
The "correct" side is determined by you, when you construct the formula in the link given above. It depends on whether you specify a line as PQ or it's reverse, QP. This is known as winding order (see "Winding Order of Vertices").
• Seems like a lot more work than just doing a dot product. Additionally, your solution is limited to rectangle only. – Nic Foster Jan 27 '12 at 15:44
• @NicFoster Not getting any upvotes, eh? Hmm. – Engineer Jan 27 '12 at 17:16
• You've avoided the context of the original comment. You don't agree that this is a lot of work to find the side of the shape a position lies? – Nic Foster Jan 27 '12 at 20:54
• I'd say this answer is slightly more intuitive, albeit not by much. It certainly doesn't take any credibility from your own. If you've ever tried to explain what the dot or cross product are, visually, you'll see that it becomes less trivial to follow the logic in solving the problem as a whole. Many take the mathematics behind those factors for granted, and use them without understanding what they are actually doing. And no, I don't agree. The conciseness of my answer in relation to yours speaks to that fact. Writing a method checkSide(Line l, Point p) isn't what I'd call onerous. – Engineer Jan 28 '12 at 13:43
• Agree with most that, but I do feel like your answer is tailored more towards rectangles specifically, whereas the use of vector math (dot product, unitization) is applicable to just about any shape or situation in both 2D and 3D. – Nic Foster Jan 29 '12 at 4:52
I do this by giving all of my game entities a position (Vector 2D/3D), and a rotation (Matrix 3x3). Your rotation matrix will have either a column or row for its forward, up, and right vectors. In the case of a 2D game, the 'up' vector is unnecessary.
First you get the vector from your square's center to the ball's center.
Vector2 direction = ball.position - square.position;
Now you normalize/unitize the vector to make it represent a unit-length direction
direction.Unitize();
// Here's a method to unitize a 2D vector
void Vector2::Unitize()
{
const float inverseLength = 1.0f / Length();
x *= inverseLength;
y *= inverseLength;
}
Unitizing requires knowing the length of your vector, so we make a helper method for that as well
float Length(void) const { return sqrtf( x * x + y * y ); }
Ok, now that we have a direction from the square to the ball, defined earlier by the 2D vector we called direction, we now do a dot-product between that direction, and the square's right vector (the direction the represents its right-hand side). For this example, in your image, let's say the square is facing toward the top of the image, and that the ball is on the square's right-hand side. Doing this dot product will tell us if the ball is on the right-hand or left-hand side of the square.
Vector2 squareRightVect; // You would retrieve this from the square's rotation matrix
float dotResult = direction.Dot(squareRightVect);
Now we have the dotResult, which will be a value between -1 and 1, anything greater than 0 means the ball is on the right side of the square. Based on the image I'd say the dot product would be about 0.85, putting the ball on the right side of the square. If your dot-product gave a result of anything less than zero than the ball is on the left.
And finally, here's your dot-product method:
float Dot(const Vector2& rhs) const { return (x * rhs.x + y * rhs.y); }
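The same unitize-then-dot test is easy to sketch end to end in Python (2D only; the square's right vector is an input, e.g. (1, 0) for a square facing the top of the image as in the example):

```python
import math

def side_of(square_pos, square_right, ball_pos):
    # Direction from the square's centre to the ball, unitized.
    dx = ball_pos[0] - square_pos[0]
    dy = ball_pos[1] - square_pos[1]
    length = math.hypot(dx, dy)
    dx, dy = dx / length, dy / length
    # Dot with the square's right vector: > 0 means right-hand side.
    dot = dx * square_right[0] + dy * square_right[1]
    return "right" if dot > 0 else "left"

print(side_of((0, 0), (1, 0), (5, 2)))   # right
print(side_of((0, 0), (1, 0), (-3, 1)))  # left
```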
And here are some helpful links if you want to learn more about the processes involved in the above code: | 2019-11-20 02:09:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6211665272712708, "perplexity": 772.1548720485795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670389.25/warc/CC-MAIN-20191120010059-20191120034059-00014.warc.gz"} |
https://jerseycityremodelers.com/byzantium-streaming-wfl/bayesian-statistics-exercises-2f78f0 | Week 1: Where $$\theta_j$$ is the probability of choosing the $$j$$ option. Assuming the percentage of defective black-boxes made by AirCorp, BigSkies, and CharterUS are 4%, 6%, and 9%, repsectively, find the probability that a randomly selected black-box from all black-boxes made that is found to be defective came from AirCorp. Model the data with two different multinomial distributions. \] We will learn about the philosophy of the Bayesian approach as well as how to implement it for common types of data. \dfrac{\mu_c - \bar y_c}{s_c/\sqrt{n_c}} | y \sim t_{n_c-1} In 2008, Henkel announced Kasper Rorsted as the new CEO. ############### # # # Exercise 1 # # # ############### # a. Binomial distribution with n = 1000 and probability of âsuccessâ = 735/1000 plot(dbinom(x = seq(1, 100, 1), size = 100, prob = 735/1000), type = "l", [â¦] Active 1 year, 8 months ago. Here is the pdf. A plane's "black-box" is manufactured by only 3 companies: AirCorp, BigSkies, and CharterUS - who make 80%, 15%, and 5% of all the black-boxes made, respectively. He is late 10% of the time when he walks; he is late 3% of the time when he drives; and he is late 7% of the time he takes the bus. D+ 9D-1H+ 99H- 891. Two manufacturers supply blankets to emergency relief organizations. Exercises 104. Bayesian Data Analysis (Gelman, Vehtari et. The early chapters present the basic tenets of Bayesian thinking by use of familiar one and two-parameter inferential problems. Make a tree: $P(L) = 0.0365$ and $P(A \textrm{ and } L) = (0.4)(0.05) = 0.02$, so P(shipped from A given that the computer is late) = 0.548, approximately. Bayesian statistics gives us a solid mathematical means of incorporating our prior beliefs, and evidence, to produce new posterior beliefs. Chapter 17 Bayesian statistics. Below are the solutions to these exercises on âBayesian Inference : introduction for MCMC techniques (part 1)â. 
p(\mu_c, \sigma_c^2) \propto (\sigma_c^2)^{-1} I didn't think so. 2. Manufacturer B supplies 2400 blankets and 7% are found to be irregular. Active 1 year, 8 months ago. Here is the pdf. A plane's "black-box" is manufactured by only 3 companies: AirCorp, BigSkies, and CharterUS - who make 80%, 15%, and 5% of all the black-boxes made, respectively. He is late 10% of the time when he walks; he is late 3% of the time when he drives; and he is late 7% of the time he takes the bus. D+ 9D-1H+ 99H- 891. Two manufacturers supply blankets to emergency relief organizations. Exercises 104. Bayesian Data Analysis (Gelman, Vehtari et al.). The early chapters present the basic tenets of Bayesian thinking by use of familiar one- and two-parameter inferential problems. Make a tree: $P(L) = 0.0365$ and $P(A \textrm{ and } L) = (0.4)(0.05) = 0.02$, so P(shipped from A given that the computer is late) = 0.548, approximately. Bayesian statistics gives us a solid mathematical means of incorporating our prior beliefs, and evidence, to produce new posterior beliefs. Chapter 17 Bayesian statistics. Below are the solutions to these exercises on "Bayesian Inference: introduction for MCMC techniques (part 1)".
\alpha_1 | y \sim Beta(295, 308) Which then amounts that the proportion that favor Bush, out of those who had a preference for either Bush or Dukakis in the pre-debate, that is, $$\alpha_1$$ is thus: \[ Assume the probability of having tuberculosis (TB) is 0.0005, and a test for TB is 99% accurate. And here is a bunch of R code for the examples and, I think, exercises from the book.$, For the control group, we have $$n_c = 32$$, $$\bar y_c = 1.013$$ and $$s_c = 0.24$$. One measurement was taken on each chicken, and the purpose of the experiment was to measure the average flow $$\mu_{c}$$ in untreated (control) chickens and the average flow $$\mu_{t}$$ in treated chickens. You will learn to use Bayesâ rule to transform prior probabilities into posterior probabilities, and be introduced to the underlying theory and perspective of the Bayesian paradigm. A green box contains two $\$100$bills and eight$\$1$ bills. $\displaystyle{\frac{(0.55)(0.55)}{(0.45)((0.40)+(0.55)(0.55)} \doteq 0.627}$. Bayesian Statistics: An Introduction PETER M. LEE Formerly Provost of Wentworth College, University of York, England Fourth Edition John Wiley & Sons, Ltd. Considering trumps and non-trumps separately, required probability is 2 … This appointment of Rorsted came at a vital time for Henkel. the practical exercises, with elements of context, particularities and modelling for the di erent case studies. \] We explain various options in the control panel and introduce such concepts as Bayesian model averaging, posterior model probability, prior model probability, inclusion Bayes factor, and posterior exclusion probability. This page, updated throughout the course, will feature slides from our sessions, and examples for you to try. Homework: Book exercises 3.2, 3.3 and 3.9, due Tue 4/19/16. Note: The parameter θis now treated as random! 
To make things even better for the online learner, Aki Vehtari (one of the authors) has a set of online lectures and homeworks that go through the basics of Bayesian Data Analysis. \[ On a game show, a contestant can select one of four boxes. You can find the link here, along with lots more stuff, including: • Aki Vehtari’s course material, including video lectures, slides, and his notes for most of the … The same likelihood and prior are valid for the treatment measurements. Here we have 3: Bush, Dukakis and other. Considering trumps and non-trumps separately, required probability is 2 3 3 23 10 Ë 26 To illustrate this ap- A method that sometimes works is based on the idea of conditioning. A person uses his car 30% of the time, walks 30% of the time and rides the bus 40% of the time as he goes to work. The 32 measurements on the control group had a sample mean of 1.013 and a sample standard deviation of $$0.24 .$$ The 36 measurements on the treatment group had a sample mean of 1.173 and a sample standard deviation of 0.20, Assuming the control measurements were taken at random from a normal distribution with mean $$\mu_{c}$$ and variance $$\sigma_{c}^{2},$$ what is the posterior distribution of $$\mu_{c} ?$$ Similarly, use the treatment group measurements to determine the marginal posterior distribution of $$\mu_{t} .$$ Assume a uniform prior distribution on $$\left(\mu_{c}, \mu_{t}, \log \sigma_{c}, \log \sigma_{t}\right)$$, What is the posterior distribution for the difference, $$\mu_{t}-\mu_{c} ?$$ To get this, you may sample from the independent $$t$$ distributions you obtained in part(a) above. p(\mu_c, \sigma_c | y) \propto p (y | \mu_c, \sigma_c) p(\mu_c, \sigma_c) $$25,1988,$$ the evening of a presidential campaign debate, ABC News conducted a survey of registered voters in the United States; 639 persons were polled before the debate, and 639 different persons were polled after. A prior distribution Ï(θ) on the parameter. 
Added a zip-file with Matlab code for spline regression. Given that an automobile selected at random is white, find the probability that it came from factory B. chapter exercises. $$P(M|S) = \frac{(0.51)(0.095)}{(0.51)(0.095)+(0.49)(0.017)} \doteq 0.853$$, $\displaystyle{\frac{(0.25)(0.50)}{(0.25)(0.10)+(0.25)(0.20)+(0.25)(0.30)+(0.25)(0.50)} \doteq 0.455}$, $\displaystyle{\frac{(0.80)(0.04)}{(0.80)(0.04)+(0.15)(0.06)+(0.05)(0.09)} \doteq 0.7033}$, First note two things: 1) the probability of drawing a fair coin is $2/3$ and the probability of drawing a biased coin is $1/3$; and 2) the probability of tossing 3 heads with a fair coin is $(1/2)^3 = 0.125$, while the probability of tossing 3 heads with the described biased coin is $(0.75)^3$. Instructors: Jon Wakefield and Ken Rice. This course will teach you the basic ideas of Bayesian Statistics: how to perform Bayesian analysis for a binomial proportion, a normal mean, the difference between normal means, the difference between proportions, and for a simple linear regression model. Week 3: Reading: PH Chapter 3 and start Chapter 4. If a study subject is randomly selected and it is then found that the subject entered a guilty plea, find the probability that this person was not sent to prison. Therefore, the marginal posterior for $$\mu_t$$: \[ If one of the study subjects is randomly selected, find the probability of getting someone who was not sent to prison. A wise man, therefore, proportions his belief to the evidence. Bayesian Statistics exercise? Invariably, some of these are defective. Find an expression for $P(A|B)$ in terms of these four probabilities. This tutorial illustrates how to interpret the more advanced output and to set different prior specifications in performing Bayesian regression analyses in JASP (JASP Team, 2020). 
Sampling form these posteriors, we can then arrive at a posterior distribution for $$\alpha_2 - \alpha_1$$, The posterior probability that there was a shift toward Bush is the probability that $$\alpha_2 - \alpha_1 > 0$$, Estimation from two independent experiments: an experiment was performed on the effects of magnetic fields on the flow of calcium out of chicken brains. R introduces Bayesian modeling by the company is selected at random than the extension the! Inferences about parameters or hypotheses are updated as evidence accumulates and women in the field of Bayesian statistics has important! Will give you a complete understanding of Bayesian inference 1.1 Discrete random variables and basic Bayesian formalism Joint 1.1.1! Statistics is a company that is, just as the new CEO random samples from the box! 4.2 and 4.3, due Tue 4/19/16 least.A more realistic plan is to settle with an estimate of time... ( j\ ) option Home Care, and examples for you to try involving credit card usage R/JAGS/rjags... The current world population is about 7.13 billion, of which 4.3 billion are adults great and... 1 Bayesian inference 1.1 Discrete random variables and basic Bayesian formalism Joint probability 1.1.1 exercise: R Bayesian... Survey subject was smoking a cigar, with elements of context, and! 95 % posterior credibility interval on the parameter θis now treated as random one and two-parameter inferential.! Late - what is the probability he walked if he was bayesian statistics exercises these self-check probability exercises ( solutions ) prison. Dirichlet distribution is a bunch of R code for the data x, where θâ a! Exercises 4.2 and 4.3, due Tue 4/19/16 walked if he was?... Pleas and prison sentences, it is found that 45 % of the time reflect the rapid advances the! Answers to exercises D.1 exercises on Chapter 1 1 be used as a reference work statisticians! Case studies not sent to prison 32 chickens and an exposed group 36! 
Of chickens were involved: a control group of 36 chickens spline regression 17 Bayesian provides... Difference between all adult men and bayesian statistics exercises in the frequentist paradigm, handling parameters... Impractical, to Say the least.A more realistic plan is to settle with an estimate the... Getting someone who was not sent to prison, 40 % chose to plead guilty document is to. 2400 blankets and 7 % are irregular in workmanship expression for $P ( a ⣠B ) 9/108. \ ( \theta_j\ ) is 0.0005, and Cosmetics/Tolietries a thorny problem R/JAGS/rjags combo... That students should learn about the practice of Bayesian thinking by use of one... The different is thus D.1 exercises on Chapter 1 1 36 chickens the R/JAGS/rjags software combo course, feature... S is a distribution of θgiven xis Ï ( θ ) on the different is.... Where θâ Î a parameter ; xmay be multidimensional the selected bayesian statistics exercises is a Dirichlet with parameters (! The bus if he was late this ap- Bayesian statistics provides us with mathematical tools to rationally update our beliefs! Blue box contains five$ \ $100$ bills and seven $\$ 1 \begingroup. Treatment measurements, due Tue 4/19/16 it for common types of data, of which 4.3 billion are adults types... Study of pleas and prison sentences, it is found that 45 % of the best institutions and platforms... Our book, Bayesian data Analysis, is now available for download for non-commercial purposes 108 # ( + =... Both free and paid courses offered by some of the adults are.. Practical exercises, with elements of context, particularities and modelling for the treatment measurements, assuming among. $P ( a & B ) P ( a & B ) P ( D|+ ) P! Resulting posterior distribution of θgiven xis Ï ( θ|x ) = 9 + 99 = #. Non-Commercial purposes find an expression for$ P ( a & B ) shrinkage prior ( lambda ) Bayesian with... Around a lot these days contains three $\$ 100 $bills and seven$ \$! 
B supplies 30 % of the time, will feature slides from our sessions, and Cosmetics/Tolietries for. Going into fields where statistics will be used as a reference work for statisticians who require a working knowledge Bayesian! | 2021-05-07 04:08:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6294591426849365, "perplexity": 3168.0368145689217}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.96/warc/CC-MAIN-20210507025943-20210507055943-00202.warc.gz"} |
http://icarusmath.com/2020/02/26/test-post.html | # Johan Jansson and Claes Johnson are part of a new documentary about the famous mathematician Banach!
Our breakthrough in predictive aerodynamics, New Theory of Flight, and our general Digital Math framework are highlighted in a new Polish documentary about the famous mathematician Banach!
As part of the release of the movie, we participated in a panel discussion at a pre-screening at the Östervåla cinema organized by famous Swedish mathematician Per Enflo, see the dynamic discussion below: | 2020-06-04 16:32:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8723885416984558, "perplexity": 4701.736657300454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347445880.79/warc/CC-MAIN-20200604161214-20200604191214-00557.warc.gz"}
https://tex.stackexchange.com/questions/380689/scatter-plot-in-datavisualization-with-extra-draw | # scatter plot in datavisualization with extra draw
In this question, it seems like adding an extra \draw to a datavisualization should be relatively straightforward. But I'm baffled:
\begin{figure}[h]
\begin{tikzpicture}
\datavisualization [scientific axes, visualize as scatter]
data [format=named] {
x={0,...,10}, y={0,...,10}
};
\draw [dashed] (0,1) -- (1,0);
\end{tikzpicture}
\end{figure}
As one can see in the picture, I definitely don't see any line connecting (1,0) or (0,1) here, but instead one that seems to connect (2,0) and (0,pi) (?).
Now the reference manual is, as we all know, not quite as user-friendly as may be hoped. It took me forever to figure out how to get other markers for this ... and if you remove scientific axes then most of the markers disappear.
Anyway, that's just a rant - is this somehow related to some "absolute" coordinates that are mysteriously cropping up? Why does the other question seem to work fine?
(Note: what I really want is just a grid of lattice points with a Matlab-style labeling, but for some reason that seems to be nontrivial.)
• Downvoter please explain - if there is an "obvious" fix here I'm all ears and ready to get whatever the badge is for closing your own question ... Jul 15, 2017 at 1:52
• Sorry downvote was an accident. Pressed the wrong button.
– Ross
Jul 16, 2017 at 5:39
Drawing after \datavisualization is done using TikZ units. If you want to draw something into the plot using plot coordinates, it has to be done in info (drawing on top of the plot) or info' (drawing below the plot) using the visualization coordinate system.
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{datavisualization}
\begin{document}
\begin{tikzpicture}
\datavisualization [scientific axes, visualize as scatter]
data [format=named] {
x={0,...,10}, y={0,...,10}
}
% add objects on top of the plot
info {
% uses TikZ units
\draw [blue] (0,1) -- (1,0);
% use visualization coordinates
\draw [dashed] (visualization cs: x=0,y=1) -- (visualization cs: x=1,y=0);
% center as visualization coordinates, radius in TikZ units
\fill [red] (visualization cs: x=3,y=3) circle (0.1);
}
% add objects below the plot
info' {
\fill [red] (visualization cs: x=6,y=6) circle (0.1);
};
% uses TikZ units, visualization coordinates are not available here
\draw [red] (0,0) circle (0.1);
\end{tikzpicture}
\end{document}
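For comparison, in the question linked above, school book axes are used; those axes are not rescaled, so one plot unit equals one TikZ unit and a plain \draw after the visualization lands where you expect. A small sketch:

```latex
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{datavisualization}
\begin{document}
\begin{tikzpicture}
\datavisualization [school book axes, visualize as scatter]
data [format=named] {
    x={0,...,5}, y={0,...,5}
};
% With school book axes, plot units and TikZ units coincide,
% so this really connects the plot points (0,1) and (1,0):
\draw [dashed] (0,1) -- (1,0);
\end{tikzpicture}
\end{document}
```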
• Thanks! Can you explain though why the question I refer to in my question somehow doesn't need this "info" thing? I must be missing something obvious. Jul 20, 2017 at 1:52
• @kcrisman: in the question you referred to, school book axes are used, which are not scaled automatically. So there one unit in the coordinate system equals one TikZ unit, which makes it easy to draw something into the plot. You are using scientific axes, which are scaled automatically, so TikZ units and plot units are not the same, and plot units in the x and y directions also differ. Therefore info and visualization cs must be used to get a specific plot coordinate.
– Mike
Jul 20, 2017 at 21:50
• Got it! Wow, that is really somewhat subtle, and maybe not the best API ... thank you! Jul 22, 2017 at 12:41 | 2022-05-17 20:36:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.787721574306488, "perplexity": 2585.6431803422042}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520817.27/warc/CC-MAIN-20220517194243-20220517224243-00273.warc.gz"} |
https://hurmanblirrikynmu.web.app/18039/26036.html | # E x + y =
Σ_y E[X|Y = y] P(Y = y): by the partition theorem this is equal to E[X]. So in the discrete case, (iv) is really the partition theorem in disguise. In the continuous case it is too.
A Taylor Series is an expansion of some function into an infinite sum of terms, where each term has a larger exponent like x, x^2, x^3, etc. Example: The Taylor Series for e^x is e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ···
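The partial sums of this expansion are easy to check numerically; a quick sketch (not from the original page, with hypothetical names):

```cpp
#include <cassert>
#include <cmath>

// Partial sum of the Taylor series e^x = 1 + x + x^2/2! + x^3/3! + ...
double ExpTaylor(double x, int terms) {
    double sum = 0.0;
    double term = 1.0;            // current term, starts at x^0/0! = 1
    for (int n = 0; n < terms; ++n) {
        sum += term;
        term *= x / (n + 1);      // x^n/n!  ->  x^(n+1)/(n+1)!
    }
    return sum;
}
```

Twenty terms already agree with exp(x) to well within double-precision rounding for small x.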
··· − (27/(0!·3!)) y^3 + E_3(x,y) = 3y + 6xy + 6x^2 y − (9/2) y^3 + E_3(x,y). A second way to get the same result exploits the single-variable Taylor expansions e^x = 1 + x + x^2/2! + x^3/3! + ··· and sin y = y − y^3/3! + ···. Replacing x by 2x in the first and y by 3y in the second and multiplying the two together, keeping track only of terms of degree at most
### 31 Mar 2018 We have, dy/dx + y = e^(−x)
The function y = e^x. The graph of y = e^x is the exponential curve, whose tangent at x = 0 makes a 45º angle with the x-axis. Properties. y'' + y' − 2y = e^x (0). I would first differentiate the given equation to obtain. Objectives: This ex vivo study compared the physico-chemical structural differences between primary carious teeth biannually treated with silver diamine fluoride. Ex: translation, pronunciation, transcription. 3 928.
SOLUTION 16: Begin with (x^2 + y^2)^2 = 2x. Free math problem solver answers your calculus homework questions with step-by-step explanations. where a is any positive constant not equal to 1 and is the natural (base e) logarithm of a. These formulas lead immediately to the following indefinite integrals : As you do the following problems, remember these three general rules for integration : , where n is any constant not equal to -1, , where k is any constant, and . Free math problem solver answers your algebra homework questions with step-by-step explanations. x = Re z is the real part, y = Im z is the imaginary part, r = |z| = √(x^2 + y^2) is the magnitude of z and φ = arg z = atan2(y, x). φ is the argument of z, i.e., the angle between the x axis and the vector z measured counterclockwise in radians, which is defined up to addition of 2π. y = log_b(x) is the inverse function of the exponential function, x = b^y.
e − Exponential regression: (1) mean: x̄ = (∑ x_i)/n, mean of ln y = (∑ ln y_i)/n; (2) trend line: y = A e^{Bx}, B = S_xy / S. E(XY) = E(X)E(Y). More generally, E[g(X)h(Y)] = E[g(X)]E[h(Y)] holds for any functions g and h. That is, the independence of two random variables implies that both the covariance and correlation are zero. But the converse is not true.
Or if we calculate the logarithm of the exponential function of x, f -1 (f (x)) = log b (b x) = x. Natural logarithm (ln) Natural logarithm is a logarithm to the A specialty in mathematical expressions is that the multiplication sign can be left out sometimes, for example we write "5x" instead of "5*x". The Derivative Calculator has to detect these cases and insert the multiplication sign. The parser is implemented in JavaScript, based on the Shunting-yard algorithm, and can run directly in the browser. E(X) is the expectation value of the continuous random variable X. x is the value of the continuous random variable X. P(x) is the probability mass function of X. Properties of expectation Linearity.
Interestingly, it turns out that this result helps us prove E[X^2 | Y = y] = (1/25)(y − 1)^2 + (4/25)(y − 1). Thus E[X^2 | Y] = (1/25)(Y − 1)^2 + (4/25)(Y − 1) = (1/25)(Y^2 + 2Y − 3). Once again, E[X^2 | Y] is a function of Y. Intuition: E[X | Y] is the function of Y that best approximates X. This is a vague statement since we have not said what "best" means.
18.440 Lecture 26
### INVERSE HYPERBOLIC FUNCTIONS. If x = sinh y, then y = sinh^{-1} x is called the inverse hyperbolic sine of x. Similarly we define the other inverse hyperbolic functions. The inverse hyperbolic functions are multiple-valued and, as in the case of inverse trigonometric functions, we restrict ourselves to principal values for which they can be considered as single-valued.
If x = sinh y, then y = sinh-1 a is called the inverse hyperbolic sine of x. Similarly we define the other inverse hyperbolic functions. The inverse hyperbolic functions are multiple-valued and as in the case of inverse trigonometric functions we restrict ourselves to principal values for which they can be considered as single-valued. where a is any positive constant not equal to 1 and is the natural (base e) logarithm of a. These formulas lead immediately to the following indefinite integrals : As you do the following problems, remember these three general rules for integration : , where n is any constant not equal to -1, , where k is any constant, and . E(XY) = E(X)E(Y). | 2023-02-09 05:22:41 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8683246374130249, "perplexity": 1746.4319990217505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00423.warc.gz"} |
https://www.gamedev.net/forums/topic/618944-api-for-roguelike/ | Public Group
# API for roguelike
## Recommended Posts
I was making a roguelike with SDL and got as far as making a random map and discovered that the API has no way to recolor a sprite.
I'd rather not work with the console window, but I still want the game to be Mac and Linux compatible. What API would you suggest that I switch to?
##### Share on other sites
libtcod is a very popular one. Its used in many many roguelikes. http://doryen.eptalys.net/libtcod/
##### Share on other sites
Not having a single function that does what you want is not the same as being unable to do what you want. Simple sprite recoloring can be done by creating a compatible bitmap, filling it with the desired color, and blitting one on top of the other with some version of alpha blending. The exact details would depend on the precise format of your sprites.
Why do you need to recolor a sprite for a roguelike anyways? Why not just render the text in the desired color?
##### Share on other sites
It's real easy to do per-pixel manipulation once you get used to it. Write your own functions. SDL provides fast per-pixel level access perfect for that kind of thing. (Maybe skim this tutorial if you don't know how)
However, if you are sure you want to switch, I recommend SFML.
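For what it's worth, the per-pixel part is plain arithmetic on the pixel buffer. Here is an SDL-free sketch that multiplicatively tints a raw RGBA8888 buffer; the 4-bytes-per-pixel layout is an assumption, and with real SDL surfaces you would lock the surface and respect surface->format instead:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Multiplicatively tint an RGBA8888 pixel buffer in place.
// Each channel is scaled by tint/255, so a white source pixel
// becomes exactly the tint color and black stays black.
void TintRGBA(uint8_t* pixels, size_t pixelCount,
              uint8_t tintR, uint8_t tintG, uint8_t tintB) {
    for (size_t i = 0; i < pixelCount; ++i) {
        uint8_t* p = pixels + i * 4;       // layout: R, G, B, A
        p[0] = static_cast<uint8_t>(p[0] * tintR / 255);
        p[1] = static_cast<uint8_t>(p[1] * tintG / 255);
        p[2] = static_cast<uint8_t>(p[2] * tintB / 255);
        // p[3] (alpha) is left untouched.
    }
}
```

Because white maps exactly to the tint color, bitmap fonts intended for this trick are usually drawn in white on a colorkeyed background.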
##### Share on other sites
It's real easy to do per-pixel manipulation once you get used to it. Write your own functions. SDL provides fast per-pixel level access perfect for that kind of thing. (Maybe skim this tutorial if you don't know how)
However, if you are sure you want to switch, I recommend SFML.
Thanks, that's what I was looking for; I'll read through it.
Why do you need to recolor a sprite for a roguelike anyways? Why not just render the text in the desired color?
Because I want to have support for tilesets, the game is not going to use the console; instead I'm using a bitmapped font. The reason I want to recolor instead of just having different sets of each color is because I want to be able to have characters of any color instead of just what colors I filled the sprite with, and that would be a lot of recoloring.
##### Share on other sites
Because I want to have support for tilesets, the game is not going to use the console; instead I'm using a bitmapped font. The reason I want to recolor instead of just having different sets of each color is because I want to be able to have characters of any color instead of just what colors I filled the sprite with, and that would be a lot of recoloring.
In that case:
Step 1) Create an image filled with the color you want.
Step 2) "Draw" the bitmap font over it, inverted, so the letter part doesn't get drawn, but the black font background gets drawn over the blank filled image. Use the black (or whatever) background as the colorkey for the resulting image.
Doable in SDL, definitely.
My SDL is a bit rusty, but something like: (Warning! Untested code)
SDL_Surface *CreateColoredSymbol(SDL_Surface *bitmapFont, SDL_Rect symbolRect, SDL_Color colorToMakeSymbol, SDL_Color bitmapFontBackgroundColor)
{
    SDL_Surface *screen = SDL_GetVideoSurface();

    //Forgot how to create a new surface, so used this code: http://lists.libsdl.org/pipermail/sdl-libsdl.org/2002-January/022521.html
    SDL_Surface *symbolImage = SDL_CreateRGBSurface(screen->flags, symbolRect.w, symbolRect.h,
                                                    screen->format->BitsPerPixel,
                                                    screen->format->Rmask, screen->format->Gmask,
                                                    screen->format->Bmask, screen->format->Amask);
    if(!symbolImage)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    //Fill the new image with the color passed in.
    if(SDL_FillRect(symbolImage, NULL, Map_RGB(symbolImage->format, colorToMakeSymbol.r, colorToMakeSymbol.g, colorToMakeSymbol.b)) == -1)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    //Draw the symbol over the new image. ( http://sdl.beuc.net/sdl.wiki/SDL_BlitSurface )
    if(SDL_BlitSurface(bitmapFont, &symbolRect, symbolImage, NULL) == -1)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    //Set the color key for the new image.
    if(SDL_SetColorKey(symbolImage, SDL_SRCCOLORKEY, Map_RGB(symbolImage->format, bitmapFontBackgroundColor.r, bitmapFontBackgroundColor.g, bitmapFontBackgroundColor.b)) == -1)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    return symbolImage;
}
You can then cache that image and hold onto it until you no longer need it, so you don't need to recreate it every frame.
Note: bitmapFont should have two colors in it. One is the color of the symbols themselves (we'll call this the Symbol Color), and the other is the color of the background the symbols are on (we'll call this the Background Color).
After initially loading 'bitmapFont', you should call SDL_SetColorKey() on it with the Symbol Color, making the symbols themselves invisible. The background should _not_ be color keyed for this to work. This function assumes you have already loaded bitmapFont and called SDL_SetColorKey() on it with the Symbol Color.
[Edit:] Oops: Of course, the smarter thing would be to color the entire bitmap font instead of each individual symbol, but the function can easily be adapted to do that (post if you need help). Then you can just have:
SDL_Surface *BlueSymbolSheet = CreateColoredBitmapFont(originalBitmapFont, blueColor, backgroundColorOfOriginalFont);
SDL_Surface *RedSymbolSheet = CreateColoredBitmapFont(originalBitmapFont, redColor, backgroundColorOfOriginalFont);
SDL_Surface *WhiteSymbolSheet = CreateColoredBitmapFont(originalBitmapFont, whiteColor, backgroundColorOfOriginalFont);
SDL_Surface *GreenSymbolSheet = CreateColoredBitmapFont(originalBitmapFont, greenColor, backgroundColorOfOriginalFont);
##### Share on other sites
It worked after I tweaked the code (it was SDL_MapRGB instead of Map_RGB)! Thanks for the help; you probably saved me a few days of work.
SDL_Surface *CreateColoredSymbol(SDL_Surface *bitmapFont, SDL_Color colorToMakeSymbol, SDL_Color bitmapFontBackgroundColor)
{
    SDL_Surface *screen = SDL_GetVideoSurface();

    //Forgot how to create a new surface, so used this code: http://lists.libsdl.org/pipermail/sdl-libsdl.org/2002-January/022521.html
    SDL_Surface *symbolImage = SDL_CreateRGBSurface(screen->flags, 256, 256,
                                                    screen->format->BitsPerPixel,
                                                    screen->format->Rmask, screen->format->Gmask,
                                                    screen->format->Bmask, screen->format->Amask);
    if(!symbolImage)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    //Fill the new image with the color passed in.
    if(SDL_FillRect(symbolImage, NULL, SDL_MapRGB(symbolImage->format, colorToMakeSymbol.r, colorToMakeSymbol.g, colorToMakeSymbol.b)) == -1)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    //Draw the symbol over the new image. ( http://sdl.beuc.net/sdl.wiki/SDL_BlitSurface )
    if(SDL_BlitSurface(bitmapFont, NULL, symbolImage, NULL) == -1)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    //Set the color key for the new image.
    if(SDL_SetColorKey(symbolImage, SDL_SRCCOLORKEY, SDL_MapRGB(symbolImage->format, bitmapFontBackgroundColor.r, bitmapFontBackgroundColor.g, bitmapFontBackgroundColor.b)) == -1)
    {
        //Error! Use SDL_GetError() for the reason why.
    }

    return symbolImage;
}