https://en.wikipedia.org/wiki/Quasispecies_model | # Quasispecies model
The quasispecies model is a description of the process of the Darwinian evolution of certain self-replicating entities within the framework of physical chemistry. Put simply, a quasispecies is a large group or cloud of related genotypes that exist in an environment of high mutation rate, where a large fraction of offspring are expected to contain one or more mutations relative to the parent. This is in contrast to a species, which from an evolutionary perspective is a more-or-less stable single genotype, most of the offspring of which will be genetically accurate copies.[1]
It is useful mainly in providing a qualitative understanding of the evolutionary processes of self-replicating macromolecules such as RNA or DNA or simple asexual organisms such as bacteria or viruses (see also viral quasispecies), and is helpful in explaining something of the early stages of the origin of life. Quantitative predictions based on this model are difficult because the parameters that serve as its input are impossible to obtain from actual biological systems. The quasispecies model was put forward by Manfred Eigen and Peter Schuster[2] based on initial work done by Eigen.[3]
## Simplified explanation
When evolutionary biologists describe competition between species, they generally assume that each species is a single genotype whose descendants are mostly accurate copies. (Such genotypes are said to have a high reproductive fidelity.) Evolutionarily, we are interested in the behavior and fitness of that one species or genotype over time.[citation needed]
Some organisms or genotypes, however, may exist in circumstances of low fidelity, where most descendants contain one or more mutations. A group of such genotypes is constantly changing, so discussions of which single genotype is the most fit become meaningless. Importantly, if many closely related genotypes are only one mutation away from each other, then genotypes in the group can mutate back and forth into each other. For example, with one mutation per generation, a child of the sequence AGGT could be AGTT, and a grandchild could be AGGT again. Thus we can envision a cloud of related genotypes that is rapidly mutating, with sequences going back and forth among different points in the cloud. Though the proper definition is mathematical, that cloud, roughly speaking, is a quasispecies.[citation needed]
Quasispecies behavior exists for large numbers of individuals existing at a certain (high) range of mutation rates.[4]
### Quasispecies, fitness, and evolutionary selection
In a species, though reproduction may be mostly accurate, periodic mutations will give rise to one or more competing genotypes. If a mutation results in greater replication and survival, the mutant genotype may out-compete the parent genotype and come to dominate the species. Thus, the individual genotypes (or species) may be seen as the units on which selection acts and biologists will often speak of a single genotype's fitness.[citation needed]
In a quasispecies, however, mutations are ubiquitous and so the fitness of an individual genotype becomes meaningless: if one particular mutation generates a boost in reproductive success, it can't amount to much because that genotype's offspring are unlikely to be accurate copies with the same properties. Instead, what matters is the connectedness of the cloud.[5] For example, the sequence AGGT has 12 (3+3+3+3) possible single point mutants AGGA, AGGG, and so on. If 10 of those mutants are viable genotypes that may reproduce (and some of whose offspring or grandchildren may mutate back into AGGT again), we would consider that sequence a well-connected node in the cloud. If instead only two of those mutants are viable, the rest being lethal mutations, then that sequence is poorly connected and most of its descendants will not reproduce. The analog of fitness for a quasispecies is the tendency of nearby relatives within the cloud to be well-connected, meaning that more of the mutant descendants will be viable and give rise to further descendants within the cloud.[citation needed]
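The neighborhood structure described above can be made concrete. The following sketch (purely illustrative; the viability set is an invented assumption, not data from the article) enumerates all single-point mutants of a nucleotide sequence and counts how many land in a hypothetical set of viable genotypes — a toy measure of how "well-connected" that sequence is in the cloud:

```python
BASES = "ACGT"

def single_point_mutants(seq):
    """Return all sequences differing from seq at exactly one position."""
    mutants = []
    for i, base in enumerate(seq):
        for alt in BASES:
            if alt != base:
                mutants.append(seq[:i] + alt + seq[i + 1:])
    return mutants

mutants = single_point_mutants("AGGT")
assert len(mutants) == 12  # 4 positions x 3 alternative bases each

# Hypothetical viability set -- an invented assumption, purely for illustration.
viable = {"AGTT", "AGGA", "CGGT", "AGGC"}
connectivity = sum(m in viable for m in mutants)
print("connectivity of AGGT in this toy cloud:", connectivity)
```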
When the fitness of a single genotype becomes meaningless because of the high rate of mutations, the cloud as a whole or quasispecies becomes the natural unit of selection.
### Application to biological research
The quasispecies model represents the evolution of high-mutation-rate viruses such as HIV, and sometimes of single genes or molecules within the genomes of other organisms.[6][7][8] Quasispecies models have also been proposed by Jose Fontanari and Emmanuel David Tannenbaum to model the evolution of sexual reproduction.[9] Quasispecies behaviour was also shown in compositional replicators (based on the GARD model for abiogenesis),[10] and the model has also been suggested to be applicable to describing a cell's replication, which among other things requires the maintenance and evolution of the internal composition of the parent and bud.
## Formal background
The model rests on four assumptions[citation needed]:
1. The self-replicating entities can be represented as sequences composed of a small number of building blocks—for example, sequences of RNA consisting of the four bases adenine, guanine, cytosine, and uracil.
2. New sequences enter the system solely as the result of a copy process, either correct or erroneous, of other sequences that are already present.
3. The substrates, or raw materials, necessary for ongoing replication are always present in sufficient quantity. Excess sequences are washed away in an outgoing flux.
4. Sequences may decay into their building blocks. The probability of decay does not depend on the sequences' age; old sequences are just as likely to decay as young sequences.
In the quasispecies model, mutations occur through errors made in the process of copying already existing sequences. Further, selection arises because different types of sequences tend to replicate at different rates, which leads to the suppression of sequences that replicate more slowly in favor of sequences that replicate faster. However, the quasispecies model does not predict the ultimate extinction of all but the fastest replicating sequence. Although the sequences that replicate more slowly cannot sustain their abundance level by themselves, they are constantly replenished as sequences that replicate faster mutate into them. At equilibrium, removal of slowly replicating sequences due to decay or outflow is balanced by replenishing, so that even relatively slowly replicating sequences can remain present in finite abundance.[citation needed]
Due to the ongoing production of mutant sequences, selection does not act on single sequences, but on mutational "clouds" of closely related sequences, referred to as quasispecies. In other words, the evolutionary success of a particular sequence depends not only on its own replication rate, but also on the replication rates of the mutant sequences it produces, and on the replication rates of the sequences of which it is a mutant. As a consequence, the sequence that replicates fastest may even disappear completely in selection-mutation equilibrium, in favor of more slowly replicating sequences that are part of a quasispecies with a higher average growth rate.[11] Mutational clouds as predicted by the quasispecies model have been observed in RNA viruses and in in vitro RNA replication.[12][13]
The mutation rate and the general fitness of the molecular sequences and their neighbors is crucial to the formation of a quasispecies. If the mutation rate is zero, there is no exchange by mutation, and each sequence is its own species. If the mutation rate is too high, exceeding what is known as the error threshold, the quasispecies will break down and be dispersed over the entire range of available sequences.[citation needed]
## Mathematical description
A simple mathematical model for a quasispecies is as follows[citation needed]: let there be ${\displaystyle S}$ possible sequences and let there be ${\displaystyle n_{i}}$ organisms with sequence i. Let's say that each of these organisms asexually gives rise to ${\displaystyle A_{i}}$ offspring. Some are duplicates of their parent, having sequence i, but some are mutant and have some other sequence. Let the mutation rate ${\displaystyle q_{ij}}$ correspond to the probability that a j type parent will produce an i type organism. Then the expected number of i type organisms produced by any j type parent is ${\displaystyle w_{ij}=A_{j}q_{ij}}$,
where ${\displaystyle \sum _{i}q_{ij}=1\,}$.
Then the total number of i-type organisms after the first round of reproduction, given as ${\displaystyle n'_{i}}$, is
${\displaystyle n'_{i}=\sum _{j}w_{ij}n_{j}\,}$
Sometimes a death rate term ${\displaystyle D_{i}}$ is included so that:
${\displaystyle w_{ij}=A_{j}q_{ij}-D_{i}\delta _{ij}\,}$
where ${\displaystyle \delta _{ij}}$ is equal to 1 when i=j and is zero otherwise. Note that the n-th generation can be found by taking the n-th power of W and substituting it in place of W in the above formula.
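The equivalence between repeated updates and a single matrix power is easy to check numerically. The sketch below uses a made-up 2-type matrix W (the numbers are illustrative assumptions, not from the article) and verifies that five applications of W equal one application of W⁵:

```python
def mat_vec(W, n):
    """Compute n'_i = sum_j W[i][j] * n[j] for one generation."""
    return [sum(W[i][j] * n[j] for j in range(len(n))) for i in range(len(W))]

def mat_mat(A, B):
    k = len(A)
    return [[sum(A[i][m] * B[m][j] for m in range(k)) for j in range(k)]
            for i in range(k)]

def mat_pow(W, g):
    """g-th matrix power, starting from the identity."""
    R = [[float(i == j) for j in range(len(W))] for i in range(len(W))]
    for _ in range(g):
        R = mat_mat(R, W)
    return R

# Illustrative W with entries w_ij = A_j * q_ij for a toy 2-type system.
W = [[1.8, 0.1],
     [0.2, 0.9]]
n = [1.0, 1.0]

stepped = n
for _ in range(5):          # five rounds of reproduction, one at a time
    stepped = mat_vec(W, stepped)

powered = mat_vec(mat_pow(W, 5), n)   # the same, via W^5
assert all(abs(a - b) < 1e-9 for a, b in zip(stepped, powered))
```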
This is just a system of linear equations. The usual way to solve such a system is to first diagonalize the W matrix. Its diagonal entries will be eigenvalues corresponding to certain linear combinations of certain subsets of sequences which will be eigenvectors of the W matrix. These subsets of sequences are the quasispecies. Assuming that the matrix W is a primitive matrix (irreducible and aperiodic), then after very many generations only the eigenvector with the largest eigenvalue will prevail, and it is this quasispecies that will eventually dominate. The components of this eigenvector give the relative abundance of each sequence at equilibrium.[citation needed]
W being primitive means that for some integer ${\displaystyle n>0}$, that the ${\displaystyle n^{th}}$ power of W is > 0, i.e. all the entries are positive. If W is primitive then each type can, through a sequence of mutations (i.e. powers of W) mutate into all the other types after some number of generations. W is not primitive if it is periodic, where the population can perpetually cycle through different disjoint sets of compositions, or if it is reducible, where the dominant species (or quasispecies) that develops can depend on the initial population, as is the case in the simple example given below.[citation needed]
### Alternative formulations
The quasispecies formulae may be expressed as a set of linear differential equations. If we consider the difference between the new state ${\displaystyle n'_{i}}$ and the old state ${\displaystyle n_{i}}$ to be the state change over one moment of time, so that the time derivative of ${\displaystyle n_{i}}$ is given by ${\displaystyle {\dot {n}}_{i}=n'_{i}-n_{i}}$, then we can write:
${\displaystyle {\dot {n}}_{i}=\sum _{j}w_{ij}n_{j}-n_{i}\,}$
The quasispecies equations are usually expressed in terms of concentrations ${\displaystyle x_{i}}$ where
${\displaystyle x_{i}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {n_{i}}{\sum _{j}n_{j}}}}$.
${\displaystyle x'_{i}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {n'_{i}}{\sum _{j}n'_{j}}}}$.
The above equations for the quasispecies then become for the discrete version:
${\displaystyle x'_{i}={\frac {\sum _{j}w_{ij}x_{j}}{\sum _{ij}w_{ij}x_{j}}}}$
or, for the continuum version:
${\displaystyle {\dot {x}}_{i}=\sum _{j}w_{ij}x_{j}-x_{i}\sum _{ij}w_{ij}x_{j}.}$
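The discrete version above can be iterated directly to equilibrium. The sketch below (with a made-up 3-sequence matrix W; all numbers are illustrative assumptions) normalises the concentrations at each step and runs to a fixed point — the stationary quasispecies distribution:

```python
def quasispecies_step(W, x):
    """One round of the discrete update x'_i = (Wx)_i / sum_j (Wx)_j."""
    y = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]
    total = sum(y)
    return [v / total for v in y]

# Made-up W = A_j * q_ij: a fast replicator (index 0) with leaky copying.
W = [[1.90, 0.10, 0.10],
     [0.05, 1.20, 0.10],
     [0.05, 0.10, 1.00]]

x = [1 / 3] * 3
for _ in range(200):
    x = quasispecies_step(W, x)

assert abs(sum(x) - 1.0) < 1e-9                 # concentrations stay normalised
x_next = quasispecies_step(W, x)
assert all(abs(a - b) < 1e-6 for a, b in zip(x, x_next))  # stationary cloud
print("stationary distribution:", x)
```

Because W is strictly positive, Perron–Frobenius theory guarantees that this iteration converges to the dominant eigenvector regardless of the starting concentrations.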
### Simple example
The quasispecies concept can be illustrated by a simple system consisting of 4 sequences. Sequences [0,0], [0,1], [1,0], and [1,1] are numbered 1, 2, 3, and 4, respectively. Let's say the [0,0] sequence never mutates and always produces a single offspring. Let's say the other 3 sequences all produce, on average, ${\displaystyle 1-k}$ replicas of themselves, and ${\displaystyle k}$ of each of the other two types, where ${\displaystyle 0\leq k\leq 1}$. The W matrix is then:
${\displaystyle \mathbf {W} ={\begin{bmatrix}1&0&0&0\\0&1-k&k&k\\0&k&1-k&k\\0&k&k&1-k\end{bmatrix}}}$.
The diagonalized matrix is:
${\displaystyle \mathbf {W'} ={\begin{bmatrix}1-2k&0&0&0\\0&1-2k&0&0\\0&0&1&0\\0&0&0&1+k\end{bmatrix}}}$.
And the eigenvectors corresponding to these eigenvalues are:
| Eigenvalue | Eigenvector |
|---|---|
| 1-2k | [0, -1, 0, 1] |
| 1-2k | [0, -1, 1, 0] |
| 1 | [1, 0, 0, 0] |
| 1+k | [0, 1, 1, 1] |
For ${\displaystyle k>0}$, only the eigenvalue ${\displaystyle 1+k}$ exceeds unity. For the n-th generation, the corresponding eigenvalue will be ${\displaystyle (1+k)^{n}}$ and so will increase without bound as time goes by. This eigenvalue corresponds to the eigenvector [0,1,1,1], which represents the quasispecies consisting of sequences 2, 3, and 4, which will be present in equal numbers after a very long time. Since all population numbers must be positive, the first two quasispecies are not legitimate. The third quasispecies consists of only the non-mutating sequence 1. It's seen that even though sequence 1 is the most fit in the sense that it reproduces more of itself than any other sequence, the quasispecies consisting of the other three sequences will eventually dominate (assuming that the initial population was not homogeneous of the sequence 1 type).[citation needed]
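The eigenpairs above can be verified numerically, here for the arbitrary choice k = 0.1 (any 0 < k < 1/2 gives the same qualitative picture). The sketch also runs power iteration from a mixed starting population and confirms that the [0,1,1,1] quasispecies wins:

```python
k = 0.1  # illustrative choice of mutation parameter
W = [[1.0, 0.0,   0.0,   0.0  ],
     [0.0, 1 - k, k,     k    ],
     [0.0, k,     1 - k, k    ],
     [0.0, k,     k,     1 - k]]

def mat_vec(W, v):
    return [sum(W[i][j] * v[j] for j in range(4)) for i in range(4)]

# Check each stated eigenpair directly: W v = lambda v.
for lam, v in [(1 + k,     [0, 1, 1, 1]),
               (1 - 2 * k, [0, -1, 0, 1]),
               (1 - 2 * k, [0, -1, 1, 0]),
               (1.0,       [1, 0, 0, 0])]:
    Wv = mat_vec(W, v)
    assert all(abs(a - lam * b) < 1e-9 for a, b in zip(Wv, v))

# Power iteration from a non-homogeneous start: the (1+k)-quasispecies dominates.
x = [0.25, 0.25, 0.25, 0.25]
for _ in range(500):
    y = mat_vec(W, x)
    s = sum(y)
    x = [v / s for v in y]

assert x[0] < 1e-6                                 # sequence 1 is outcompeted
assert all(abs(v - 1 / 3) < 1e-6 for v in x[1:])   # 2, 3, 4 in equal numbers
```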
## References
1. ^ Biebricher, C.K & Eigen, M. (2006). "What is a Quasispecies". In Esteban Domingo. Quasispecies: Concept and Implications for Virology. Springer. p. 1. ISBN 978-3-540-26395-1.
2. ^ Eigen M, Schuster P (1979). The Hypercycle: A Principle of Natural Self-Organization. Berlin: Springer-Verlag. ISBN 0-387-09293-5.
3. ^ Eigen, Manfred (October 1971). "Selforganization of matter and the evolution of biological macromolecules". Die Naturwissenschaften. 58 (10): 465–523. doi:10.1007/BF00623322. PMID 4942363.
4. ^ Martinez, MA, Martus G, Capel E, Parera M, Franco S, Nevot M (2012) Quasispecies Dynamics of RNA Viruses. In: Viruses: Essential Agents of Life, Springer, Dordrecht, pp. 21-42.
5. ^ Villarreal, L.P.; Witzany, G. (2013). "Rethinking quasispecies theory: From fittest type to cooperative consortia". World Journal of Biological Chemistry. 4: 70–79. doi:10.4331/wjbc.v4.i4.79. PMID 24340131.
6. ^ Holland; et al. "RNA virus populations as quasispecies". Genetic Diversity of RNA Viruses.
7. ^ Domingo, E. (2002). "Quasispecies theory in virology". Journal of Virology. 76: 463–465. doi:10.1128/jvi.76.1.463-465.2002.
8. ^ Wilke (2005). "Quasispecies theory in the context of population genetics". BMC Evolutionary Biology. 5: 44. doi:10.1186/1471-2148-5-44.
9. ^ Tannenbaum ED, Fontanari JF (2008). "A quasispecies approach to the evolution of sexual replication in unicellular organisms". Theory Bioscience. 127: 53–65. doi:10.1007/s12064-008-0023-2.
10. ^ Gross, R.; Fouxon, I.; Lancet, D.; Markovitch, O. (2014). "Quasispecies in population of compositional assemblies". BMC Evolutionary Biology. 14: 2623. doi:10.1186/s12862-014-0265-1. PMID 25547629. Archived from the original on January 2, 2015.
11. ^ Schuster P, Swetina J (November 1988). "Stationary mutant distributions and evolutionary optimization". Bulletin of Mathematical Biology. Dordrecht: Kluwer Academic Publishers. 50 (6): 635–660. doi:10.1007/BF02460094. ISSN 0092-8240. PMID 3219448.
12. ^ Domingo E, Holland JJ (October 1997). "RNA virus mutations and fitness for survival". Annual Review of Microbiology. 51: 151–178. doi:10.1146/annurev.micro.51.1.151. PMID 9343347.
13. ^ Burch CL, Chao L (2000). "Evolvability of an RNA virus is determined by its mutational neighbourhood". Nature. 406 (6796): 625–628. doi:10.1038/35020564. PMID 10949302.
https://txcorp.com/images/docs/usim/latest/reference_manual/updater_pressureDensityCorrector.html | # pressureDensityCorrector (1d, 2d, 3d)
Computes the pressure and density in a nodalArray and raises them to the basement values wherever they fall below. This is a simple way to prevent pressures and densities from becoming too small, but it is also non-conservative.
## Data
out (string vector)
Output 1 stores the nodalArray that will have its pressure and density corrected. The nodalArray must have the same number of components as is required by the chosen model.
## Parameters
model (string)
The model equation used to determine how pressure and density are computed. The model must be a fluid model such as eulerEqn; it will not work with maxwellEqn, since no pressure or density is defined there. When the model is initialized, it will request any additional variables required by that model, for example gasGamma and mu0 for MHD-type equations.
basementDensity (float)
basementDensity used in determining when to switch between accurate and positive solutions. Default is 0.0.
basementPressure (float)
basementPressure used in determining when to switch between accurate and positive solutions. Default is 0.0.
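The behaviour described by these parameters amounts to a floor on pressure and density. The Python sketch below is a hypothetical illustration of that clipping logic (the function name and array layout are assumptions; USim's actual implementation is not shown here), with a comment on why it is non-conservative:

```python
def apply_basement_floor(density, pressure,
                         basement_density=0.0, basement_pressure=0.0):
    """Clamp per-node density and pressure from below.

    Simple but non-conservative: raising a node's values injects mass
    and internal energy that came from nowhere.
    """
    fixed_d = [max(d, basement_density) for d in density]
    fixed_p = [max(p, basement_pressure) for p in pressure]
    return fixed_d, fixed_p

# A node with slightly negative density and one with negative pressure
# (as can happen after an aggressive hydro update) get floored.
d, p = apply_basement_floor([1.0, -1e-8, 0.5], [2.0, 0.0, -0.1],
                            basement_density=1e-6, basement_pressure=1e-6)
assert min(d) >= 1e-6 and min(p) >= 1e-6
```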
## Example¶
```
<Updater correct>
  kind = pressureDensityCorrector2d
  model = eulerEqn
  basementDensity = BASEMENT_DENSITY
  basementPressure = BASEMENT_PRESSURE
  gasGamma = GAS_GAMMA
  onGrid = domain
  out = [q]
</Updater>
```
https://www.zbmath.org/?q=ai%3Aplestenjak.bor+ai%3Adraisma.jan | # zbMATH — the first resource for mathematics
Uniform determinantal representations. (English) Zbl 1387.13062
Let $$K$$ be a field and $$d,n\in \mathbb Z_{\geq 0}$$. Let $$K[x_1,\ldots, x_n]$$ be the polynomial ring over $$K$$, and let $$p_{n,d} = \sum_{\alpha} c_{\alpha} x^{\alpha}$$ be a polynomial of degree at most $$d$$. Here $$\alpha = (\alpha_1,\ldots,\alpha_n) \in \mathbb Z^n_{\geq 0}$$ is a multi-index such that $$\sum_{i=1}^n \alpha_i\leq d$$, and $$x^{\alpha}$$ denotes the monomial $$\prod_{i=1}^n x_i^{\alpha_i}$$. A determinantal representation of $$p_{n,d}$$ is an $$N\times N$$-matrix $$M$$ with affine-linear entries in $$x_1,\ldots, x_n$$, such that $$\det(M) = p$$; the integer $$N$$ is the size of the representation. The minimal possible size is called determinantal complexity of $$p$$. Recently, this notion has become fundamental due to its connections to several fields such as complexity theory, optimization, and scientific computing. For instance, one of the deepest conjectures in algebraic complexity is Valiant’s conjecture, which states that the permanent of an $$m\times m$$-matrix does not admit a determinantal representation of size polynomial in $$m$$. (This conjecture can be also rephrased in the context of representation theory and orbit closures of permanents and determinants.) In this interesting paper, the authors study a variant of determinantal representations. Instead of looking at representations of a single polynomial, they seek for determinantal representations of subspaces of polynomials. A uniform determinantal representation of $$p_{n,d}$$ is an $$N\times N$$-matrix $$M$$ with entries in $$x_1,\ldots, x_n, c_{\alpha}$$, of degree at most one in each of these two sets of variables, such that $$\det(M) = p_{n,d}$$. Such a matrix $$M$$ gives a representation for all polynomials of degree at most $$d$$. For $$n,d\in \mathbb Z_{\geq 0}$$, the number $$N^{*}(n,d)\in \mathbb Z_{>0}$$ denotes the minimum size of uniform determinantal representations of $$p_{n,d}$$. Let $$M$$ be a uniform determinantal representation of $$p_{n,d}$$. 
Let $$M = M_0+M_1$$, where $$M_0$$ is the matrix not containing any variable $$c_{\alpha}$$. They show that for every point $$\overline{x}=(\overline{x_1},\ldots, \overline{x_n})\in K^n$$, $$M_0(\overline{x})$$ is singular (Lemma 2.5). Thus, they nicely connect the theory of uniform representations with the classical theory of singular matrix spaces; the connection is explained in Section 3.

The main result is the following. For every integer $$n\geq 2$$, there exist positive constants $$C_1,C_2$$, depending on $$n$$, such that $$C_1 d^{n/2} \leq N^{*}(n,d)\leq C_2 d^{n/2}$$ (Theorem 1.3). These bounds improve previous results by R. Quarez [Linear Algebra Appl. 436, No. 9, 3642–3660 (2012; Zbl 1238.47010)]. The authors also give explicit values of $$N^{*}(n,d)$$ for small $$n$$ and $$d$$: they prove that $$N^{*}(2,2) = 3$$ (Proposition 5.1) and $$N^{*}(3,2) = 4$$ (Proposition 5.2).

Section 7 is the computational part of this article. Here they apply the results of the paper to the problem of efficiently solving systems of bivariate polynomial equations, based on the work of B. Plestenjak and M. E. Hochstenbach [SIAM J. Sci. Comput. 38, No. 2, A765–A788 (2016; Zbl 1376.65056)]. The authors conclude by presenting several questions arising from this work. Finally, Appendix A describes an algorithm to compute a determinantal representation for a given polynomial, based on the proof of Theorem 1.3.
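To make the notion concrete, here is a small companion-matrix-style construction — a standard illustration, not one of the representations built in the paper: for the univariate case, the 3×3 matrix below is a uniform determinantal representation of all polynomials of degree at most 2, with every entry affine-linear in x and of degree at most one in the coefficients:

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def uniform_rep(x, c0, c1, c2):
    """Entries are affine-linear in x and degree <= 1 in the coefficients."""
    return [[x,  -1,  0],
            [0,   x, -1],
            [c0, c1, c2]]

# det M = c0 + c1*x + c2*x^2 for every choice of coefficients.
for (x, c0, c1, c2) in [(2.0, 1.0, -3.0, 5.0), (-1.5, 0.0, 2.0, 7.0)]:
    assert abs(det3(uniform_rep(x, c0, c1, c2))
               - (c0 + c1 * x + c2 * x ** 2)) < 1e-9
```

A single matrix thus represents the whole coefficient space at once; the paper's constructions achieve the analogous property in n variables with size Θ(d^{n/2}).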
##### MSC:
- 13P15 Solving polynomial systems; resultants
- 65H04 Numerical computation of roots of polynomial equations
- 65F15 Numerical computation of eigenvalues and eigenvectors of matrices
- 65F50 Computational methods for sparse matrices
##### Software:
Bertini; BertiniLab; BiRoots; Mathematica; MultiParEig; NACLab; PHClab; PHCpack; rootsb
##### References:
[1] F. V. Atkinson, Multiparameter eigenvalue problems: Matrices and compact operators, SIAM Rev., 15 (1973), pp. 678–679.
[2] D. J. Bates, J. H. Hauenstein, A. J. Sommese, and C. W. Wampler, Bertini: Software for Numerical Algebraic Geometry, available online.
[3] A. Beauville, Determinantal hypersurfaces, Michigan Math. J., 48 (2000), pp. 39–64. · Zbl 1076.14534
[4] A. Boralevi, D. Faenzi, and E. Mezzetti, Linear spaces of matrices of constant rank and instanton bundles, Adv. Math., 248 (2013), pp. 895–920. · Zbl 1291.14063
[5] N. Bourbaki, Éléments de mathématique. Fasc. XXXIV. Groupes et algèbres de Lie. Chapitre IV: Groupes de Coxeter et systèmes de Tits. Chapitre V: Groupes engendrés par des réflexions. Chapitre VI: systèmes de racines, Actualités Scientifiques et Industrielles 1337, Hermann, Paris, 1968. · Zbl 0186.33001
[6] P. Brändén, Obstructions to determinantal representability, Adv. Math., 226 (2011), pp. 1202–1212. · Zbl 1219.90121
[7] P. Bürgisser, C. Ikenmeyer, and G. Panova, No occurrence obstructions in geometric complexity theory, in 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, 2016.
[8] C. de Seguins Pazzis, Large affine spaces of matrices with rank bounded below, Linear Algebra Appl., 437 (2012), pp. 499–518. · Zbl 1242.15002
[9] L. E. Dickson, Determination of all general homogeneous polynomials expressible as determinants with linear elements, Trans. Amer. Math. Soc., 22 (1921), pp. 167–179. · JFM 48.0099.02
[10] J. Dieudonné, Sur une généralisation du groupe orthogonal à quatre variables, Arch. Math., 1 (1949), pp. 282–287. · Zbl 0032.10601
[11] A. C. Dixon, Note on the reduction of a ternary quartic to a symmetrical determinant, Proc. Cambridge Philos. Soc., 11 (1900–1902), pp. 350–351. · JFM 33.0140.04
[12] J. Draisma, Small maximal spaces of non-invertible matrices, Bull. London Math. Soc., 38 (2006), pp. 764–776. · Zbl 1112.15019
[13] D. Eisenbud and J. Harris, Vector spaces of matrices of low rank, Adv. in Math., 70 (1988), pp. 135–155. · Zbl 0657.15013
[14] P. Fillmore, C. Laurie, and H. Radjavi, On matrix spaces with zero determinant, Linear and Multilinear Algebra, 18 (1985), pp. 255–266. · Zbl 0592.15008
[15] H. Flanders, On spaces of linear transformations with bounded rank, J. London Math. Soc., 37 (1962), pp. 10–16. · Zbl 0101.25403
[16] Y. Guan and J. Verschelde, PHClab: A MATLAB/Octave interface to PHCpack, in Software for Algebraic Geometry, IMA Vol. Math. Appl. 148, Springer, New York, 2008, pp. 15–32. · Zbl 1148.68578
[17] J. W. Helton and V. Vinnikov, Linear matrix inequality representation of sets, Comm. Pure Appl. Math., 60 (2007), pp. 654–674. · Zbl 1116.15016
[18] M. E. Hochstenbach, T. Košir, and B. Plestenjak, A Jacobi–Davidson type method for the two-parameter eigenvalue problem, SIAM J. Matrix Anal. Appl., 26 (2004), pp. 477–497.
[19] J. Hüttenhain and P. Lairez, The Boundary of the Orbit of the 3 by 3 Determinant Polynomial, preprint, 2015.
[20] B. Ilic and J. M. Landsberg, On symmetric degeneracy loci, spaces of symmetric matrices of constant rank and dual varieties, Math. Ann., 314 (1999), pp. 159–174. · Zbl 0949.14028
[21] Y. Ishitsuka and T. Ito, On the symmetric determinantal representations of the Fermat curves of prime degree, Int. J. Number Theory, 12 (2016), pp. 955–967. · Zbl 1415.11062
[22] J.-B. Lasserre, M. Laurent, B. Mourrain, P. Rostalski, and P. Trébuchet, Moment matrices, border bases and real radical computation, J. Symbolic Comput., 51 (2013), pp. 63–85. · Zbl 1276.13021
[23] R. Lebreton, E. Mehrabi, and E. Schost, On the complexity of solving bivariate systems: The case of non-singular solutions, in Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, ISSAC ’13, ACM, New York, 2013, pp. 251–258. · Zbl 1360.68941
[24] A. S. Lewis, P. A. Parrilo, and M. V. Ramana, The Lax conjecture is true, Proc. Amer. Math. Soc., 133 (2005), pp. 2495–2499. · Zbl 1073.90029
[25] L. Manivel and E. Mezzetti, On linear spaces of skew-symmetric matrices of constant rank, Manuscripta Math., 117 (2005), pp. 319–331. · Zbl 1084.14050
[26] E. Mehrabi and É. Schost, A softly optimal Monte Carlo algorithm for solving bivariate polynomial systems over the integers, J. Complex., 34 (2016), pp. 78–128. · Zbl 1352.68299
[27] B. Mourrain and K. Schmüdgen, Flat extensions in $$*$$-algebras, Proc. Amer. Math. Soc., 144 (2016), pp. 4873–4885. · Zbl 1364.46047
[28] A. Muhič and B. Plestenjak, On the quadratic two-parameter eigenvalue problem and its linearization, Linear Algebra Appl., 432 (2010), pp. 2529–2542. · Zbl 1189.65070
[29] K. D. Mulmuley and M. Sohoni, Geometric complexity theory. I. An approach to the P vs. NP and related problems, SIAM J. Comput., 31 (2001), pp. 496–526. · Zbl 0992.03048
[30] K. D. Mulmuley and M. Sohoni, Geometric complexity theory. II. Towards explicit obstructions for embeddings among class varieties, SIAM J. Comput., 38 (2008), pp. 1175–1206. · Zbl 1168.03030
[31] Y. Nakatsukasa, V. Noferini, and A. Townsend, Computing the common zeros of two bivariate functions via Bézout resultants, Numer. Math., 129 (2015), pp. 181–209. · Zbl 1308.65076
[32] A. Newell, BertiniLab: Toolbox for solving polynomial systems, MATLAB Central File Exchange.
[33] B. Plestenjak, BiRoots, MATLAB Central File Exchange, 2015.
[34] B. Plestenjak and M. E. Hochstenbach, Roots of bivariate polynomial systems via determinantal representations, SIAM J. Sci. Comput., 38 (2016), pp. A765–A788. · Zbl 1376.65056
[35] R. Quarez, Symmetric determinantal representation of polynomials, Linear Algebra Appl., 436 (2012), pp. 3642–3660. · Zbl 1238.47010
[36] L. Robol, R. Vandebril, and P. van Dooren, A framework for structured linearizations of matrix polynomials in various bases, SIAM J. Matrix Anal. Appl., 38 (2017), pp. 188–216. · Zbl 1365.15028
[37] A. Sidorenko, What we know and what we do not know about Turán numbers, Graphs Combin., 11 (1995), pp. 179–199. · Zbl 0839.05050
[38] J. Sylvester, On the dimension of spaces of linear transformations satisfying rank conditions, Linear Algebra Appl., 78 (1986), pp. 1–10. · Zbl 0588.15002
[39] L. G. Valiant, The complexity of computing the permanent, Theoret. Comput. Sci., 8 (1979), pp. 189–201. · Zbl 0415.68008
[40] J. Verschelde, Algorithm 795: PHCpack: A general-purpose solver for polynomial systems by homotopy continuation, ACM Trans. Math. Softw., 25 (1999), pp. 251–276. · Zbl 0961.65047
[41] D. G. Wagner, Multivariate stable polynomials: Theory and applications, Bull. Amer. Math. Soc. (N.S.), 48 (2011), pp. 53–84. · Zbl 1207.32006
[42] R. Westwick, Spaces of matrices of fixed rank, Linear and Multilinear Algebra, 20 (1987), pp. 171–174. · Zbl 0611.15002
[43] Wolfram Research, Inc., Mathematica, Version 9.0, Champaign, IL, 2012.
[44] Z. Zeng and T.-Y. Li, NAClab: A Matlab toolbox for numerical algebraic computation, ACM Commun. Comput. Algebra, 47 (2014), pp. 170–173.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://gitlab.haskell.org/ghc/ghc/-/blame/3e1745aa272077c98254ce9b79e62b92c40948a9/compiler/specialise/Specialise.lhs | Specialise.lhs 76.6 KB
%
% (c) The GRASP/AQUA Project, Glasgow University, 1993-1998
%
\section[Specialise]{Stamping out overloading, and (optionally) polymorphism}

\begin{code}
module Specialise ( specProgram ) where

#include "HsVersions.h"

import Id
import TcType
import Type
import CoreMonad
import CoreSubst
import CoreUnfold
import VarSet
import VarEnv
import CoreSyn
import Rules
import CoreUtils        ( exprIsTrivial, applyTypeToArgs )
import CoreFVs          ( exprFreeVars, exprsFreeVars, idFreeVars )
import UniqSupply
import Name
import MkId             ( voidArgId, realWorldPrimId )
import Maybes           ( catMaybes, isJust )
import BasicTypes
import HscTypes
import Bag
import DynFlags
import Util
import Outputable
import FastString
import State

import Control.Monad
import Data.Map (Map)
import qualified Data.Map as Map
import qualified FiniteMap as Map
\end{code}
%************************************************************************
%*                                                                      *
\subsection[notes-Specialise]{Implementation notes [SLPJ, Aug 18 1993]}
%*                                                                      *
%************************************************************************

These notes describe how we implement specialisation to eliminate
overloading.

The specialisation pass works on Core syntax, complete with all the
explicit dictionary application, abstraction and construction as added
by the type checker.  The existing type checker remains largely as it
is.

One important thought: the {\em types} passed to an overloaded
function, and the {\em dictionaries} passed are mutually redundant.
If the same function is applied to the same type(s) then it is sure to
be applied to the same dictionary(s)---or rather to the same {\em
values}.  (The arguments might look different but they will evaluate
to the same value.)

Second important thought: we know that we can make progress by
treating dictionary arguments as static and worth specialising on.  So
we can do without binding-time analysis, and instead specialise on
dictionary arguments and no others.

The basic idea
~~~~~~~~~~~~~~
Suppose we have

        let f = <f_rhs>
        in <body>

and suppose f is overloaded.

STEP 1: CALL-INSTANCE COLLECTION

We traverse <body>, accumulating all applications of f to types and
dictionaries.

(Might there be partial applications, to just some of its types and
dictionaries?
In principle yes, but in practice the type checker only builds
applications of f to all its types and dictionaries, so partial
applications could only arise as a result of transformation, and even
then I think it's unlikely.  In any case, we simply don't accumulate
such partial applications.)

STEP 2: EQUIVALENCES

So now we have a collection of calls to f:

        f t1 t2 d1 d2
        f t3 t4 d3 d4
        ...

Notice that f may take several type arguments.  To avoid ambiguity, we
say that f is called at type t1/t2 and t3/t4.

We take equivalence classes using equality of the *types* (ignoring
the dictionary args, which as mentioned previously are redundant).

STEP 3: SPECIALISATION

For each equivalence class, choose a representative (f t1 t2 d1 d2),
and create a local instance of f, defined thus:

        f@t1/t2 = <f_rhs> t1 t2 d1 d2

f_rhs presumably has some big lambdas and dictionary lambdas, so lots
of simplification will now result.  However we don't actually *do*
that simplification.  Rather, we leave it for the simplifier to do.
If we *did* do it, though, we'd get more call instances from the
specialised RHS.  We can work out what they are by instantiating the
call-instance set from f's RHS with the types t1, t2.

Add this new id to f's IdInfo, to record that f has a specialised
version.

Before doing any of this, check that f's IdInfo doesn't already tell
us about an existing instance of f at the required type/s.  (This
might happen if specialisation was applied more than once, or it might
arise from user SPECIALIZE pragmas.)

Recursion
~~~~~~~~~
Wait a minute!  What if f is recursive?
Then we can't just plug in its right-hand side, can we?

But it's ok.  The type checker *always* creates non-recursive
definitions for overloaded recursive functions.  For example:

        f x = f (x+x)           -- Yes I know it's silly

becomes

        f a (d::Num a) = let p = +.sel a d
                         in
                         letrec fl (y::a) = fl (p y y)
                         in
                         fl

We still have recursion for non-overloaded functions which we
specialise, but the recursive call should get specialised to the same
recursive version.

Polymorphism 1
~~~~~~~~~~~~~~
All this is crystal clear when the function is applied to *constant
types*; that is, types which have no type variables inside.  But what
if it is applied to non-constant types?  Suppose we find a call of f
at type t1/t2.  There are two possibilities:

(a) The free type variables of t1, t2 are in scope at the definition
point of f.  In this case there's no problem, we proceed just as
before.  A common example is as follows.  Here's the Haskell:

        g y = let f x = x+x
              in f y + f y

After typechecking we have

        g a (d::Num a) (y::a) = let f b (d'::Num b) (x::b) = +.sel b d' x x
                                in +.sel a d (f a d y) (f a d y)

Notice that the call to f is at type "a"; a non-constant type.
Both calls to f are at the same type, so we can specialise to give:

        g a (d::Num a) (y::a) = let f@a (x::a) = +.sel a d x x
                                in +.sel a d (f@a y) (f@a y)

(b) The other case is when the type variables in the instance types
are *not* in scope at the definition point of f.  The example we are
working with above is a good case.  There are two instances of
(+.sel a d), but "a" is not in scope at the definition of +.sel.  Can
we do anything?  Yes, we can "common them up", a sort of limited
common sub-expression deal.  This would give:

        g a (d::Num a) (y::a) = let +.sel@a = +.sel a d
                                    f@a (x::a) = +.sel@a x x
                                in +.sel@a (f@a y) (f@a y)

This can save work, and can't be spotted by the type checker, because
the two instances of +.sel weren't originally at the same type.

Further notes on (b)

* There are quite a few variations here.  For example, the defn of
  +.sel could be floated outside the \y, to attempt to gain laziness.
  It certainly mustn't be floated outside the \d because the d has to
  be in scope too.

* We don't want to inline f_rhs in this case, because that will
  duplicate code.  Just commoning up the call is the point.

* Nothing gets added to +.sel's IdInfo.

* Don't bother unless the equivalence class has more than one item!

Not clear whether this is all worth it.  It is of course OK to simply
discard call-instances when passing a big lambda.

Polymorphism 2 -- Overloading
~~~~~~~~~~~~~~
Consider a function whose most general type is

        f :: forall a b. Ord a => [a] -> b -> b

There is really no point in making a version of f at Int/Int and
another at Int/Bool, because it's only instancing the type variable
"a" which buys us any efficiency.  Since f is completely polymorphic
in b there ain't much point in making separate versions of f for the
different b types.

That suggests that we should identify which of f's type variables are
constrained (like "a") and which are unconstrained (like "b").  Then
when taking equivalence classes in STEP 2, we ignore the type args
corresponding to unconstrained type variables.  In STEP 3 we make
polymorphic versions.  Thus:

        f@t1/ = /\b -> <f_rhs> t1 b d1 d2

We do this.

Dictionary floating
~~~~~~~~~~~~~~~~~~~
Consider this

        f a (d::Num a) = let g = ...
                         in
                         ...(let d1::Ord a = Num.Ord.sel a d in g a d1)...

Here, g is only called at one type, but the dictionary isn't in scope
at the definition point for g.  Usually the type checker would build a
definition for d1 which enclosed g, but the transformation system
might have moved d1's defn inward.  Solution: float dictionary
bindings outwards along with call instances.
Consider

        f x = let g p q = p==q
                  h r s = (r+s, g r s)
              in
              h x x

Before specialisation, leaving out type abstractions we have

        f df x = let g :: Eq a => a -> a -> Bool
                     g dg p q = == dg p q

                     h :: Num a => a -> a -> (a, Bool)
                     h dh r s = let deq = eqFromNum dh
                                in (+ dh r s, g deq r s)
                 in
                 h df x x

After specialising h we get a specialised version of h, like this:

                 h' r s = let deq = eqFromNum df
                          in (+ df r s, g deq r s)

But we can't naively make an instance for g from this, because deq is
not in scope at the defn of g.  Instead, we have to float out the
(new) defn of deq to widen its scope.  Notice that this floating can't
be done in advance -- it only shows up when specialisation is done.

User SPECIALIZE pragmas
~~~~~~~~~~~~~~~~~~~~~~~
Specialisation pragmas can be digested by the type checker, and
implemented by adding extra definitions along with that of f, in the
same way as before

        f@t1/t2 = <f_rhs> t1 t2 d1 d2

Indeed the pragmas *have* to be dealt with by the type checker,
because only it knows how to build the dictionaries d1 and d2!  For
example

        g :: Ord a => [a] -> [a]
        {-# SPECIALIZE g :: [Tree Int] -> [Tree Int] #-}

Here, the specialised version of g is an application of g's rhs to the
Ord dictionary for (Tree Int), which only the type checker can conjure
up.  There might not even *be* one, if (Tree Int) is not an instance
of Ord!
(All the other specialisation has suitable dictionaries to hand from
actual calls.)

Problem.  The type checker doesn't have to hand a convenient <f_rhs>,
because it is buried in a complex (as-yet-un-desugared) binding group.
Maybe we should say

        f@t1/t2 = f* t1 t2 d1 d2

where f* is the Id f with an IdInfo which says "inline me
regardless!".  Indeed all the specialisation could be done in this
way.  That in turn means that the simplifier has to be prepared to
inline absolutely any in-scope let-bound thing.

Again, the pragma should permit polymorphism in unconstrained
variables:

        h :: Ord a => [a] -> b -> b
        {-# SPECIALIZE h :: [Int] -> b -> b #-}

We *insist* that all overloaded type variables are specialised to
ground types (and hence there can be no context inside a SPECIALIZE
pragma).  We *permit* unconstrained type variables to be specialised
to
        - a ground type
        - or left as a polymorphic type variable
but nothing in between.  So

        {-# SPECIALIZE h :: [Int] -> [c] -> [c] #-}

is *illegal*.  (It can be handled, but it adds complication, and gains
the programmer nothing.)

SPECIALISING INSTANCE DECLARATIONS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Consider

        instance Foo a => Foo [a] where
                ...
        {-# SPECIALIZE instance Foo [Int] #-}

The original instance decl creates a dictionary-function definition:

        dfun.Foo.List :: forall a. Foo a -> Foo [a]

The SPECIALIZE pragma just makes a specialised copy, just as for
ordinary function definitions:

        dfun.Foo.List@Int :: Foo [Int]
        dfun.Foo.List@Int = dfun.Foo.List Int dFooInt

The information about what instances of the dfun exist gets added to
the dfun's IdInfo in the same way as for a user-defined function.

Automatic instance decl specialisation?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Can instance decls be specialised automatically?  It's tricky.  We
could collect call-instance information for each dfun, but then when
we specialised their bodies we'd get new call-instances for ordinary
functions; and when we specialised their bodies, we might get new
call-instances of the dfuns, and so on.  This all arises because of
the unrestricted mutual recursion between instance decls and value
decls.

Still, there's no actual problem; it just means that we may not do all
the specialisation we could theoretically do.

Furthermore, instance decls are usually exported and used non-locally,
so we'll want to compile enough to get those specialisations done.

Lastly, there's no such thing as a local instance decl, so we can
survive solely by spitting out *usage* information, and then reading
that back in as a pragma when next compiling the file.  So for now, we
only specialise instance decls in response to pragmas.

SPITTING OUT USAGE INFORMATION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To spit out usage information we need to traverse the code collecting
call-instance information for all imported (non-prelude?) functions
and data types.  Then we equivalence-class it and spit it out.
This is done at the top-level when all the call instances which escape
must be for imported functions and data types.

*** Not currently done ***

Partial specialisation by pragmas
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
What about partial specialisation:

        k :: (Ord a, Eq b) => [a] -> b -> b -> [a]
        {-# SPECIALIZE k :: Eq b => [Int] -> b -> b -> [a] #-}

or even

        {-# SPECIALIZE k :: Eq b => [Int] -> [b] -> [b] -> [a] #-}

Seems quite reasonable.  Similar things could be done with instance
decls:

        instance (Foo a, Foo b) => Foo (a,b) where
                ...
        {-# SPECIALIZE instance Foo a => Foo (a,Int) #-}
        {-# SPECIALIZE instance Foo b => Foo (Int,b) #-}

Ho hum.  Things are complex enough without this.  I pass.

Requirements for the simplifier
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The simplifier has to be able to take advantage of the specialisation.

* When the simplifier finds an application of a polymorphic f, it
  looks in f's IdInfo in case there is a suitable instance to call
  instead.  This converts

        f t1 t2 d1 d2   ===>   f_t1_t2

  Note that the dictionaries get eaten up too!

* Dictionary selection operations on constant dictionaries must be
  short-circuited:

        +.sel Int d     ===>   +Int

  The obvious way to do this is in the same way as other specialised
  calls: +.sel has inside it some IdInfo which tells that if it's
  applied to the type Int then it should eat a dictionary and
  transform to +Int.
In short, dictionary selectors need IdInfo inside them for constant
methods.

* Exactly the same applies if a superclass dictionary is being
  extracted:

        Eq.sel Int d    ===>   dEqInt

* Something similar applies to dictionary construction too.  Suppose
  dfun.Eq.List is the function taking a dictionary for (Eq a) to one
  for (Eq [a]).  Then we want

        dfun.Eq.List Int d      ===>   dEq.List_Int

  Where does the Eq [Int] dictionary come from?  It is built in
  response to a SPECIALIZE pragma on the Eq [a] instance decl.

In short, dfun Ids need IdInfo with a specialisation for each constant
instance of their instance declaration.

All this uses a single mechanism: the SpecEnv inside an Id

What does the specialisation IdInfo look like?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The SpecEnv of an Id maps a list of types (the template) to an
expression

        [Type]  |->  Expr

For example, if f has this SpecEnv:

        [Int, a]  ->  \d:Ord Int. f' a

it means that we can replace the call

        f Int t  ===>  (\d. f' t)

This chucks one dictionary away and proceeds with the specialised
version of f, namely f'.

What can't be done this way?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There is no way, post-typechecker, to get a dictionary for (say) Eq a
from a dictionary for Eq [a].
So if we find

        ==.sel [t] d

we can't transform to

        eqList (==.sel t d')

where

        eqList :: (a->a->Bool) -> [a] -> [a] -> Bool

Of course, we currently have no way to automatically derive eqList,
nor to connect it to the Eq [a] instance decl, but you can imagine
that it might somehow be possible.  Taking advantage of this is
permanently ruled out.

Still, this is no great hardship, because we intend to eliminate
overloading altogether anyway!

A note about non-tyvar dictionaries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some Ids have types like

        forall a,b,c. Eq a -> Ord [a] -> tau

This seems curious at first, because we usually only have dictionary
args whose types are of the form (C a) where a is a type variable.
But this doesn't hold for the functions arising from instance decls,
which sometimes get arguments with types of form (C (T a)) for some
type constructor T.

Should we specialise wrt this compound-type dictionary?  We used to
say "no", saying:

        "This is a heuristic judgement, as indeed is the fact that we
        specialise wrt only dictionaries.  We choose *not* to
        specialise wrt compound dictionaries because at the moment the
        only place they show up is in instance decls, where they are
        simply plugged into a returned dictionary.  So nothing is
        gained by specialising wrt them."
But it is simpler and more uniform to specialise wrt these dicts too;
and in future GHC is likely to support full-fledged type signatures
like

        f :: Eq [(a,b)] => ...


%************************************************************************
%*                                                                      *
\subsubsection{The new specialiser}
%*                                                                      *
%************************************************************************

Our basic game plan is this.  For a let(rec)-bound function

        f :: (C a, D c) => (a,b,c,d) -> Bool

* Find any specialised calls of f, (f ts ds), where
  ts are the type arguments t1 .. t4, and
  ds are the dictionary arguments d1 .. d2.

* Add a new definition for f1 (say):

        f1 = /\ b d -> (..body of f..) t1 b t3 d d1 d2

  Note that we abstract over the unconstrained type arguments.

* Add the mapping

        [t1,b,t3,d]  |->  \d1 d2 -> f1 b d

  to the specialisations of f.
  This will be used by the simplifier to replace calls

        (f t1 t2 t3 t4) da db

  by

        (\d1 d2 -> f1 t2 t4) da db

  All the stuff about how many dictionaries to discard, and what types
  to apply the specialised function to, is handled by the fact that
  the SpecEnv contains a template for the result of the
  specialisation.

We don't build *partial* specialisations for f.  For example:

        f :: Eq a => a -> a -> Bool
        {-# SPECIALISE f :: (Eq b, Eq c) => (b,c) -> (b,c) -> Bool #-}

Here, little is gained by making a specialised copy of f.  There's a
distinct danger that the specialised version would first build a
dictionary for (Eq b, Eq c), and then select the (==) method from it!
Even if it didn't, not a great deal is saved.
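The same danger can be seen at source level.  In this hand-written analogue (the names are invented for the illustration, and plain Haskell stands in for the Core the pass actually produces), the "partially specialised" version still receives Eq dictionaries and still selects (==) from them at run time; only a ground-type specialisation is dictionary-free:

```haskell
-- Overloaded original: one Eq dictionary, consulted at run time.
member :: Eq a => a -> [a] -> Bool
member x = any (== x)

-- A source-level stand-in for a *partial* specialisation at pairs:
-- the list shape is fixed, but (Eq b, Eq c) dictionaries are still
-- taken, combined into an Eq (b,c) dictionary, and selected from on
-- every comparison.  No dictionary traffic has been eliminated.
memberPair :: (Eq b, Eq c) => (b, c) -> [(b, c)] -> Bool
memberPair = member

-- A full specialisation at ground types: no dictionaries at all, so
-- each comparison is a known, directly-callable operation.
memberIntChar :: (Int, Char) -> [(Int, Char)] -> Bool
memberIntChar (i, c) = any (\(i', c') -> i == i' && c == c')

main :: IO ()
main = print ( memberPair (1 :: Int, 'a') [(2, 'b'), (1, 'a')]
            && memberIntChar (1, 'a') [(2, 'b'), (1, 'a')] )
```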
We do, however, generate polymorphic, but not overloaded,
specialisations:

        f :: Eq a => [a] -> b -> b -> b
        {-# SPECIALISE f :: [Int] -> b -> b -> b #-}

Hence, the invariant is this:

        *** no specialised version is overloaded ***


%************************************************************************
%*                                                                      *
\subsubsection{The exported function}
%*                                                                      *
%************************************************************************

\begin{code}
specProgram :: DynFlags -> ModGuts -> CoreM ModGuts
specProgram dflags guts
  = do { hpt_rules <- getRuleBase
       ; let local_rules = mg_rules guts
             rule_base   = extendRuleBaseList hpt_rules (mg_rules guts)

             -- Specialise the bindings of this module
       ; (binds', uds) <- runSpecM dflags (go (mg_binds guts))

             -- Specialise imported functions
       ; (new_rules, spec_binds) <- specImports dflags emptyVarSet rule_base uds

       ; let final_binds | null spec_binds = binds'
                         | otherwise       = Rec (flattenBinds spec_binds) : binds'
             -- Note [Glom the bindings if imported functions are specialised]

       ; return (guts { mg_binds = final_binds
                      , mg_rules = new_rules ++ local_rules }) }
  where
        -- We need to start with a Subst that knows all the things
        -- that are in scope, so that the substitution engine doesn't
        -- accidentally re-use a unique that's already in use
        -- Easiest thing is to do it all at once, as if all the
        -- top-level decls were mutually recursive
    top_subst = mkEmptySubst $ mkInScopeSet $ mkVarSet $
                bindersOfBinds $ mg_binds guts

    go []           = return ([], emptyUDs)
    go (bind:binds) = do (binds', uds) <- go binds
                         (bind', uds') <- specBind top_subst bind uds
                         return (bind' ++ binds', uds')

specImports :: DynFlags
            -> VarSet           -- Don't specialise these ones
                                -- See Note [Avoiding recursive specialisation]
            -> RuleBase         -- Rules from this module and the home package
                                -- (but not external packages, which can change)
            -> UsageDetails     -- Calls for imported things, and floating bindings
            -> CoreM ( [CoreRule]   -- New rules
                     , [CoreBind] ) -- Specialised bindings and floating bindings
-- See Note [Specialise imported INLINABLE things]
specImports dflags done rb uds
  = do { let import_calls = varEnvElts (ud_calls uds)
       ; (rules, spec_binds) <- go rb import_calls
       ; return (rules, wrapDictBinds (ud_binds uds) spec_binds) }
  where
    go _ [] = return ([], [])
    go rb (CIS fn calls_for_fn : other_calls)
       = do { (rules1, spec_binds1) <- specImport dflags done rb fn
                                                  (Map.toList calls_for_fn)
            ; (rules2, spec_binds2) <- go (extendRuleBaseList rb rules1) other_calls
            ; return (rules1 ++ rules2, spec_binds1 ++ spec_binds2) }
specImport :: DynFlags
           -> VarSet            -- Don't specialise these
                                -- See Note [Avoiding recursive specialisation]
           -> RuleBase          -- Rules from this module
           -> Id -> [CallInfo]  -- Imported function and calls for it
           -> CoreM ( [CoreRule]    -- New rules
                    , [CoreBind] )  -- Specialised bindings
specImport dflags done rb fn calls_for_fn
  | fn `elemVarSet` done
  = return ([], [])     -- No warning.  This actually happens all the time
                        -- when specialising a recursive function, because
                        -- the RHS of the specialised function contains a
                        -- recursive call to the original function

  | isInlinablePragma (idInlinePragma fn)
  , Just rhs <- maybeUnfoldingTemplate (realIdUnfolding fn)
  = do {     -- Get rules from the external package state
             -- We keep doing this in case we "page-fault in"
             -- more rules as we go along
       ; hsc_env <- getHscEnv
       ; eps <- liftIO $ hscEPS hsc_env
       ; let full_rb      = unionRuleBase rb (eps_rule_base eps)
             rules_for_fn = getRules full_rb fn

       ; (rules1, spec_pairs, uds) <- runSpecM dflags $
                                      specCalls emptySubst rules_for_fn calls_for_fn fn rhs
       ; let spec_binds1 = [NonRec b r | (b,r) <- spec_pairs]
             -- After the rules kick in we may get recursion, but
             -- we rely on a global GlomBinds to sort that out later
             -- See Note [Glom the bindings if imported functions are specialised]

             -- Now specialise any cascaded calls
       ; (rules2, spec_binds2) <- specImports dflags (extendVarSet done fn)
                                              (extendRuleBaseList rb rules1) uds

       ; return (rules2 ++ rules1, spec_binds2 ++ spec_binds1) }

  | otherwise
  = WARN( True, ptext (sLit "specImport discard") <+> ppr fn <+> ppr calls_for_fn )
    return ([], [])
\end{code}

Note [Specialise imported INLINABLE things]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We specialise INLINABLE things but not INLINE things.  The latter
should be inlined bodily, so not much point in specialising them.
Moreover, we risk lots of orphan modules from vigorous specialisation.

Note [Glom the bindings if imported functions are specialised]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Suppose we have an imported, *recursive*, INLINABLE function

        f :: Eq a => a -> a
        f = /\a \d x. ...(f a d)...

In the module being compiled we have

        g x = f (x::Int)

Now we'll make a specialised function

        f_spec :: Int -> Int
        f_spec = \x -> ...(f Int dInt)...
        {-# RULE  f Int _ = f_spec #-}
        g = \x. f Int dInt x

Note that f_spec doesn't look recursive.  After rewriting with the
RULE, we get

        f_spec = \x -> ...(f_spec)...

BUT since f_spec was non-recursive before it'll *stay* non-recursive.
The occurrence analyser never turns a NonRec into a Rec.  So we must
make sure that f_spec is recursive.
Easiest thing is to make all the specialisations for imported bindings
recursive.


Note [Avoiding recursive specialisation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When we specialise 'f' we may find new overloaded calls to 'g', 'h' in
'f's RHS.  So we want to specialise g,h.  But we don't want to
specialise f any more!  It's possible that f's RHS might have a
recursive yet-more-specialised call, so we'd diverge in that case.
And if the call is to the same type, one specialisation is enough.
Avoiding this recursive specialisation loop is the reason for the
'done' VarSet passed to specImports and specImport.


%************************************************************************
%*                                                                      *
\subsubsection{@specExpr@: the main function}
%*                                                                      *
%************************************************************************

\begin{code}
specVar :: Subst -> Id -> CoreExpr
specVar subst v = lookupIdSubst (text "specVar") subst v

specExpr :: Subst -> CoreExpr -> SpecM (CoreExpr, UsageDetails)
-- We carry a substitution down:
--      a) we must clone any binding that might float outwards,
--         to avoid name clashes
--      b) we carry a type substitution to use when analysing
--         the RHS of specialised bindings (no type-let!)
simonpj committed Feb 20, 1998 715 716 ---------------- First the easy cases -------------------- simonpj@microsoft.com committed Oct 23, 2009 717 specExpr subst (Type ty) = return (Type (CoreSubst.substTy subst ty), emptyUDs) 718 specExpr subst (Coercion co) = return (Coercion (CoreSubst.substCo subst co), emptyUDs) 719 specExpr subst (Var v) = return (specVar subst v, emptyUDs) simonpj@microsoft.com committed Apr 28, 2008 720 specExpr _ (Lit lit) = return (Lit lit, emptyUDs) 721 722 specExpr subst (Cast e co) = do (e', uds) <- specExpr subst e 723 return ((Cast e' (CoreSubst.substCo subst co)), uds) Simon Marlow committed Nov 02, 2011 724 specExpr subst (Tick tickish body) = do 725 (body', uds) <- specExpr subst body Simon Marlow committed Nov 02, 2011 726 return (Tick (specTickish subst tickish) body', uds) simonpj committed Feb 20, 1998 727 728 729 ---------------- Applications might generate a call instance -------------------- simonpj@microsoft.com committed Apr 28, 2008 730 specExpr subst expr@(App {}) simonm committed Dec 02, 1998 731 = go expr [] simonpj committed Feb 20, 1998 732 where 733 734 735 go (App fun arg) args = do (arg', uds_arg) <- specExpr subst arg (fun', uds_app) <- go fun (arg':args) return (App fun' arg', uds_arg plusUDs uds_app) simonm committed Dec 02, 1998 736 simonpj committed May 18, 1999 737 go (Var f) args = case specVar subst f of simonpj@microsoft.com committed Sep 03, 2008 738 Var f' -> return (Var f', mkCallUDs f' args) Ian Lynagh committed Jun 11, 2012 739 740 e' -> return (e', emptyUDs) -- I don't expect this! 
go other _ = specExpr subst other simonpj committed Feb 20, 1998 741 742 ---------------- Lambda/case require dumping of usage details -------------------- 743 744 specExpr subst e@(Lam _ _) = do (body', uds) <- specExpr subst' body Ian Lynagh committed Jun 11, 2012 745 let (free_uds, dumped_dbs) = dumpUDs bndrs' uds simonpj@microsoft.com committed Oct 23, 2009 746 return (mkLams bndrs' (wrapDictBindsE dumped_dbs body'), free_uds) simonpj committed Feb 20, 1998 747 where simonpj committed May 18, 1999 748 (bndrs, body) = collectBinders e simonpj committed Dec 24, 2004 749 (subst', bndrs') = substBndrs subst bndrs Ian Lynagh committed Jun 11, 2012 750 751 -- More efficient to collect a group of binders together all at once -- and we don't want to split a lambda group with dumped bindings simonpj committed Feb 20, 1998 752 Ian Lynagh committed Jun 11, 2012 753 specExpr subst (Case scrut case_bndr ty alts) simonpj@microsoft.com committed Aug 12, 2010 754 = do { (scrut', scrut_uds) <- specExpr subst scrut Ian Lynagh committed Jun 11, 2012 755 756 ; (scrut'', case_bndr', alts', alts_uds) <- specCase subst scrut' case_bndr alts simonpj@microsoft.com committed Aug 12, 2010 757 758 ; return (Case scrut'' case_bndr' (CoreSubst.substTy subst ty) alts' , scrut_uds plusUDs alts_uds) } simonpj committed Feb 20, 1998 759 760 ---------------- Finally, let is the interesting case -------------------- 761 specExpr subst (Let bind body) = do Ian Lynagh committed Jun 11, 2012 762 -- Clone binders 763 (rhs_subst, body_subst, bind') <- cloneBindSM subst bind simonpj committed Feb 23, 1998 764 765 766 -- Deal with the body (body', body_uds) <- specExpr body_subst body simonpj committed Mar 06, 1998 767 768 769 770 771 772 -- Deal with the bindings (binds', uds) <- specBind rhs_subst bind' body_uds -- All done return (foldr Let body' binds', uds) simonpj committed May 18, 1999 773 Simon Marlow committed Nov 02, 2011 774 775 776 777 778 779 specTickish :: Subst -> Tickish Id -> Tickish Id 
specTickish subst (Breakpoint ix ids) = Breakpoint ix [ id' | id <- ids, Var id' <- [specVar subst id]] -- drop vars from the list if they have a non-variable substitution. -- should never happen, but it's harmless to drop them anyway. specTickish _ other_tickish = other_tickish simonpj@microsoft.com committed Aug 12, 2010 780 Ian Lynagh committed Jun 11, 2012 781 782 specCase :: Subst -> CoreExpr -- Scrutinee, already done simonpj@microsoft.com committed Aug 12, 2010 783 -> Id -> [CoreAlt] Ian Lynagh committed Jun 11, 2012 784 785 786 -> SpecM ( CoreExpr -- New scrutinee , Id , [CoreAlt] simonpj@microsoft.com committed Aug 12, 2010 787 788 , UsageDetails) specCase subst scrut' case_bndr [(con, args, rhs)] Ian Lynagh committed Jun 11, 2012 789 | isDictId case_bndr -- See Note [Floating dictionaries out of cases] simonpj@microsoft.com committed Aug 12, 2010 790 791 , interestingDict scrut' , not (isDeadBinder case_bndr && null sc_args') ian@well-typed.com committed Oct 09, 2012 792 793 794 = do { dflags <- getDynFlags ; (case_bndr_flt : sc_args_flt) <- mapM clone_me (case_bndr' : sc_args') simonpj@microsoft.com committed Aug 12, 2010 795 796 797 798 799 ; let sc_rhss = [ Case (Var case_bndr_flt) case_bndr' (idType sc_arg') [(con, args', Var sc_arg')] | sc_arg' <- sc_args' ] Ian Lynagh committed Jun 11, 2012 800 801 802 803 -- Extend the substitution for RHS to map the *original* binders -- to their floated verions. 
Attach an unfolding to these floated -- binders so they look interesting to interestingDict mb_sc_flts :: [Maybe DictId] simonpj@microsoft.com committed Aug 12, 2010 804 mb_sc_flts = map (lookupVarEnv clone_env) args' ian@well-typed.com committed Oct 09, 2012 805 806 clone_env = zipVarEnv sc_args' (zipWith (add_unf dflags) sc_args_flt sc_rhss) subst_prs = (case_bndr, Var (add_unf dflags case_bndr_flt scrut')) Ian Lynagh committed Jun 11, 2012 807 : [ (arg, Var sc_flt) simonpj@microsoft.com committed Aug 12, 2010 808 809 | (arg, Just sc_flt) <- args zip mb_sc_flts ] subst_rhs' = extendIdSubstList subst_rhs subst_prs Ian Lynagh committed Jun 11, 2012 810 simonpj@microsoft.com committed Aug 12, 2010 811 812 813 814 815 816 817 818 819 820 821 822 823 ; (rhs', rhs_uds) <- specExpr subst_rhs' rhs ; let scrut_bind = mkDB (NonRec case_bndr_flt scrut') case_bndr_set = unitVarSet case_bndr_flt sc_binds = [(NonRec sc_arg_flt sc_rhs, case_bndr_set) | (sc_arg_flt, sc_rhs) <- sc_args_flt zip sc_rhss ] flt_binds = scrut_bind : sc_binds (free_uds, dumped_dbs) = dumpUDs (case_bndr':args') rhs_uds all_uds = flt_binds addDictBinds free_uds alt' = (con, args', wrapDictBindsE dumped_dbs rhs') ; return (Var case_bndr_flt, case_bndr', [alt'], all_uds) } where (subst_rhs, (case_bndr':args')) = substBndrs subst (case_bndr:args) sc_args' = filter is_flt_sc_arg args' Ian Lynagh committed Jun 11, 2012 824 simonpj@microsoft.com committed Aug 12, 2010 825 826 827 828 829 830 831 832 clone_me bndr = do { uniq <- getUniqueM ; return (mkUserLocal occ uniq ty loc) } where name = idName bndr ty = idType bndr occ = nameOccName name loc = getSrcSpan name ian@well-typed.com committed Oct 09, 2012 833 834 add_unf dflags sc_flt sc_rhs -- Sole purpose: make sc_flt respond True to interestingDictId = setIdUnfolding sc_flt (mkSimpleUnfolding dflags sc_rhs) simonpj@microsoft.com committed Aug 12, 2010 835 836 837 838 arg_set = mkVarSet args' is_flt_sc_arg var = isId var && not (isDeadBinder var) Ian Lynagh 
committed Jun 11, 2012 839 && isDictTy var_ty simonpj@microsoft.com committed Aug 12, 2010 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 && not (tyVarsOfType var_ty intersectsVarSet arg_set) where var_ty = idType var specCase subst scrut case_bndr alts = do { (alts', uds_alts) <- mapAndCombineSM spec_alt alts ; return (scrut, case_bndr', alts', uds_alts) } where (subst_alt, case_bndr') = substBndr subst case_bndr spec_alt (con, args, rhs) = do (rhs', uds) <- specExpr subst_rhs rhs let (free_uds, dumped_dbs) = dumpUDs (case_bndr' : args') uds return ((con, args', wrapDictBindsE dumped_dbs rhs'), free_uds) where (subst_rhs, args') = substBndrs subst_alt args simonpj committed Mar 06, 1998 856 \end{code} simonpj committed Feb 20, 1998 857 simonpj@microsoft.com committed Aug 12, 2010 858 859 860 861 862 863 Note [Floating dictionaries out of cases] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consider g = \d. case d of { MkD sc ... -> ...(f sc)... } Naively we can't float d2's binding out of the case expression, because 'sc' is bound by the case, and that in turn means we can't Ian Lynagh committed Jun 11, 2012 864 specialise f, which seems a pity. simonpj@microsoft.com committed Aug 12, 2010 865 Ian Lynagh committed Jun 11, 2012 866 So we invert the case, by floating out a binding simonpj@microsoft.com committed Aug 12, 2010 867 868 869 870 871 872 873 874 for 'sc_flt' thus: sc_flt = case d of { MkD sc ... -> sc } Now we can float the call instance for 'f'. Indeed this is just what'll happen if 'sc' was originally bound with a let binding, but case is more efficient, and necessary with equalities. So it's good to work with both. You might think that this won't make any difference, because the Ian Lynagh committed Jun 11, 2012 875 call instance will only get nuked by the \d. 
BUT if 'g' itself is simonpj@microsoft.com committed Aug 12, 2010 876 877 878 879 880 881 882 883 884 885 886 887 888 specialised, then transitively we should be able to specialise f. In general, given case e of cb { MkD sc ... -> ...(f sc)... } we transform to let cb_flt = e sc_flt = case cb_flt of { MkD sc ... -> sc } in case cb_flt of bg { MkD sc ... -> ....(f sc_flt)... } The "_flt" things are the floated binds; we use the current substitution to substitute sc -> sc_flt in the RHS simonpj committed Mar 06, 1998 889 %************************************************************************ Ian Lynagh committed Jun 11, 2012 890 %* * simonpj@microsoft.com committed Oct 07, 2010 891 Dealing with a binding Ian Lynagh committed Jun 11, 2012 892 %* * simonpj committed Mar 06, 1998 893 894 895 %************************************************************************ \begin{code} Ian Lynagh committed Jun 11, 2012 896 897 898 899 900 specBind :: Subst -- Use this for RHSs -> CoreBind -> UsageDetails -- Info on how the scope of the binding -> SpecM ([CoreBind], -- New bindings UsageDetails) -- And info to pass upstream simonpj committed Mar 06, 1998 901 simonpj@microsoft.com committed Oct 23, 2009 902 903 904 905 906 -- Returned UsageDetails: -- No calls for binders of this bind specBind rhs_subst (NonRec fn rhs) body_uds = do { (rhs', rhs_uds) <- specExpr rhs_subst rhs ; (fn', spec_defns, body_uds1) <- specDefn rhs_subst body_uds fn rhs simonm committed Dec 02, 1998 907 simonpj@microsoft.com committed Oct 23, 2009 908 ; let pairs = spec_defns ++ [(fn', rhs')] Ian Lynagh committed Jun 11, 2012 909 910 -- fn' mentions the spec_defns in its rules, -- so put the latter first simonm committed Dec 02, 1998 911 simonpj@microsoft.com committed Oct 23, 2009 912 combined_uds = body_uds1 plusUDs rhs_uds Ian Lynagh committed Jun 11, 2012 913 914 915 916 -- This way round a call in rhs_uds of a function f -- at type T will override a call of f at T in body_uds1; and -- that is good 
because it'll tend to keep "earlier" calls -- See Note [Specialisation of dictionary functions] simonm committed Dec 02, 1998 917 Ian Lynagh committed Jun 11, 2012 918 919 (free_uds, dump_dbs, float_all) = dumpBindUDs [fn] combined_uds -- See Note [From non-recursive to recursive] simonpj@microsoft.com committed Oct 23, 2009 920 921 922 923 final_binds | isEmptyBag dump_dbs = [NonRec b r | (b,r) <- pairs] | otherwise = [Rec (flattenDictBinds dump_dbs pairs)] Ian Lynagh committed Jun 11, 2012 924 925 926 927 ; if float_all then -- Rather than discard the calls mentioning the bound variables -- we float this binding along with the others return ([], free_uds snocDictBinds final_binds) simonpj@microsoft.com committed Oct 23, 2009 928 else Ian Lynagh committed Jun 11, 2012 929 930 931 -- No call in final_uds mentions bound variables, -- so we can just leave the binding here return (final_binds, free_uds) } simonpj@microsoft.com committed Oct 23, 2009 932 933 934 specBind rhs_subst (Rec pairs) body_uds simonpj@microsoft.com committed Sep 03, 2008 935 936 937 -- Note [Specialising a recursive group] = do { let (bndrs,rhss) = unzip pairs ; (rhss', rhs_uds) <- mapAndCombineSM (specExpr rhs_subst) rhss simonpj@microsoft.com committed Oct 23, 2009 938 ; let scope_uds = body_uds plusUDs rhs_uds Ian Lynagh committed Jun 11, 2012 939 -- Includes binds and calls arising from rhss simonpj@microsoft.com committed Oct 23, 2009 940 941 942 943 944 ; (bndrs1, spec_defns1, uds1) <- specDefns rhs_subst scope_uds pairs ; (bndrs3, spec_defns3, uds3) <- if null spec_defns1 -- Common case: no specialisation Ian Lynagh committed Jun 11, 2012 945 946 then return (bndrs1, [], uds1) else do { -- Specialisation occurred; do it again simonpj@microsoft.com committed Oct 23, 2009 947 948 949 950 951 952 953 (bndrs2, spec_defns2, uds2) <- specDefns rhs_subst uds1 (bndrs1 zip rhss) ; return (bndrs2, spec_defns2 ++ spec_defns1, uds2) } ; let (final_uds, dumped_dbs, float_all) = dumpBindUDs bndrs uds3 
bind = Rec (flattenDictBinds dumped_dbs $spec_defns3 ++ zip bndrs3 rhss') Ian Lynagh committed Jun 11, 2012 954 simonpj@microsoft.com committed Oct 23, 2009 955 ; if float_all then Ian Lynagh committed Jun 11, 2012 956 return ([], final_uds snocDictBind bind) simonpj@microsoft.com committed Oct 23, 2009 957 else Ian Lynagh committed Jun 11, 2012 958 return ([bind], final_uds) } simonpj@microsoft.com committed Sep 03, 2008 959 960 961 962 --------------------------- specDefns :: Subst Ian Lynagh committed Jun 11, 2012 963 964 965 966 967 -> UsageDetails -- Info on how it is used in its scope -> [(Id,CoreExpr)] -- The things being bound and their un-processed RHS -> SpecM ([Id], -- Original Ids with RULES added [(Id,CoreExpr)], -- Extra, specialised bindings UsageDetails) -- Stuff to fling upwards from the specialised versions simonpj@microsoft.com committed Sep 03, 2008 968 969 970 971 972 973 974 -- Specialise a list of bindings (the contents of a Rec), but flowing usages -- upwards binding by binding. Example: { f = ...g ...; g = ...f .... } -- Then if the input CallDetails has a specialised call for 'g', whose specialisation -- in turn generates a specialised call for 'f', we catch that in this one sweep. -- But not vice versa (it's a fixpoint problem). 
simonpj@microsoft.com committed Oct 23, 2009 975 976 977 978 979 980 specDefns _subst uds [] = return ([], [], uds) specDefns subst uds ((bndr,rhs):pairs) = do { (bndrs1, spec_defns1, uds1) <- specDefns subst uds pairs ; (bndr1, spec_defns2, uds2) <- specDefn subst uds1 bndr rhs ; return (bndr1 : bndrs1, spec_defns1 ++ spec_defns2, uds2) } simonpj@microsoft.com committed Sep 03, 2008 981 982 983 --------------------------- specDefn :: Subst Ian Lynagh committed Jun 11, 2012 984 985 986 987 988 -> UsageDetails -- Info on how it is used in its scope -> Id -> CoreExpr -- The thing being bound and its un-processed RHS -> SpecM (Id, -- Original Id with added RULES [(Id,CoreExpr)], -- Extra, specialised bindings UsageDetails) -- Stuff to fling upwards from the specialised versions simonpj committed Feb 20, 1998 989 simonpj@microsoft.com committed Oct 23, 2009 990 specDefn subst body_uds fn rhs simonpj@microsoft.com committed Oct 07, 2010 991 992 = do { let (body_uds_without_me, calls_for_me) = callsForMe fn body_uds rules_for_me = idCoreRules fn Ian Lynagh committed Jun 11, 2012 993 ; (rules, spec_defns, spec_uds) <- specCalls subst rules_for_me simonpj@microsoft.com committed Oct 07, 2010 994 995 996 997 calls_for_me fn rhs ; return ( fn addIdSpecialisations rules , spec_defns , body_uds_without_me plusUDs spec_uds) } Ian Lynagh committed Jun 11, 2012 998 999 1000 1001 1002 1003 -- It's important that the plusUDs is this way -- round, because body_uds_without_me may bind -- dictionaries that are used in calls_for_me passed -- to specDefn. 
So the dictionary bindings in -- spec_uds may mention dictionaries bound in -- body_uds_without_me simonpj@microsoft.com committed Oct 07, 2010 1004 1005 1006 --------------------------- specCalls :: Subst Ian Lynagh committed Jun 11, 2012 1007 1008 1009 1010 1011 1012 -> [CoreRule] -- Existing RULES for the fn -> [CallInfo] -> Id -> CoreExpr -> SpecM ([CoreRule], -- New RULES for the fn [(Id,CoreExpr)], -- Extra, specialised bindings UsageDetails) -- New usage details from the specialised RHSs simonpj@microsoft.com committed Oct 07, 2010 1013 1014 -- This function checks existing rules, and does not create simonpj@microsoft.com committed Jan 26, 2011 1015 -- duplicate ones. So the caller does not need to do this filtering. simonpj@microsoft.com committed Oct 07, 2010 1016 1017 1018 -- See 'already_covered' specCalls subst rules_for_me calls_for_me fn rhs Ian Lynagh committed Jun 11, 2012 1019 -- The first case is the interesting one David Himmelstrup committed Jun 07, 2007 1020 | rhs_tyvars lengthIs n_tyvars -- Rhs of fn's defn has right number of big lambdas Ian Lynagh committed Jun 11, 2012 1021 1022 && rhs_ids lengthAtLeast n_dicts -- and enough dict args && notNull calls_for_me -- And there are some calls to specialise 1023 && not (isNeverActive (idInlineActivation fn)) Ian Lynagh committed Jun 11, 2012 1024 1025 -- Don't specialise NOINLINE things -- See Note [Auto-specialisation and RULES] simonpj committed Sep 26, 2001 1026 Ian Lynagh committed Jun 11, 2012 1027 1028 1029 -- && not (certainlyWillInline (idUnfolding fn)) -- And it's not small -- See Note [Inline specialisation] for why we do not -- switch off specialisation for inline functions simonpj committed Feb 23, 1998 1030 simonpj@microsoft.com committed Oct 07, 2010 1031 1032 = -- pprTrace "specDefn: some" (ppr fn $$ppr calls_for_me$$ ppr rules_for_me)$ do { stuff <- mapM spec_call calls_for_me simonpj@microsoft.com committed Sep 03, 2008 1033 ; let (spec_defns, spec_uds, spec_rules) = unzip3 
(catMaybes stuff) simonpj@microsoft.com committed Oct 07, 2010 1034 ; return (spec_rules, spec_defns, plusUDList spec_uds) } simonpj committed Feb 20, 1998 1035 Ian Lynagh committed Jun 11, 2012 1036 1037 | otherwise -- No calls or RHS doesn't fit our preconceptions = WARN( notNull calls_for_me, ptext (sLit "Missed specialisation opportunity for") simonpj@microsoft.com committed Jan 26, 2011 1038 <+> ppr fn _trace_doc ) Ian Lynagh committed Jun 11, 2012 1039 -- Note [Specialisation shape] | 2021-06-21 23:42:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7028265595436096, "perplexity": 9579.30760190609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00235.warc.gz"} |
https://tex.stackexchange.com/questions/232644/adding-a-top-progress-navigation-bar-in-beamer | Adding a top progress navigation bar in beamer
I want to add a top progress navigation bar to my presentation, but I'm unable to figure out how. I searched Google; all I was able to find were questions about how to remove it.
This is an example of exactly what I want:
Code:
\documentclass[10pt, xcolor=x11names]{beamer}
\usecolortheme{seagull}
\useoutertheme{infolines}
\usefonttheme[onlymath]{serif}
\setbeamertemplate{headline}[default]
\setbeamertemplate{navigation symbols}{}
\mode<beamer>{\setbeamertemplate{blocks}[rounded][shadow=true]}
\setbeamercovered{transparent}
\setbeamercolor{block body example}{fg=blue, bg=black!20}
Edit:
I just found that I should use this:
\useoutertheme[subsection=false]{miniframes}
However, the bullets of the subsections appear vertically (not horizontally, as in the pic above), which is so ugly. How can I solve this?
• You should really think about the community when you ask a question. Post a sample presentation that we can copy-and-paste-and-compile and replicate your results. Would you be able to do this so we don't have to create what seems like a 25-slide presentation before we can get working on a solution? – Werner Mar 11 '15 at 23:59
1 Answer
This kind of navigation can be added with \useoutertheme{miniframes}. In its default configuration the bullets are stacked below each other; to get them in a row, pass the compress class option: \documentclass[compress]{beamer}.
\documentclass[10pt, xcolor=x11names,compress]{beamer}
\usecolortheme{seagull}
\useoutertheme{infolines}
\usefonttheme[onlymath]{serif}
\setbeamertemplate{headline}[default]
\setbeamertemplate{navigation symbols}{}
\mode<beamer>{\setbeamertemplate{blocks}[rounded][shadow=true]}
\setbeamercovered{transparent}
\setbeamercolor{block body example}{fg=blue, bg=black!20}
\useoutertheme[subsection=false]{miniframes}
\begin{document}
\section{Section1}
\subsection{Subsection1}
\begin{frame}
\frametitle{Frame11}
\end{frame}
\subsection{Subsection2}
\begin{frame}
\frametitle{Frame12}
\end{frame}
\section{Section2}
\begin{frame}
\frametitle{Frame2}
\end{frame}
\subsection{Subsection1}
\begin{frame}
\frametitle{Frame21}
\end{frame}
\subsection{Subsection2}
\begin{frame}
\frametitle{Frame22}
\end{frame}
\subsection{Subsection3}
\begin{frame}
\frametitle{Frame23}
\end{frame}
\section{Section3}
\begin{frame}
\frametitle{Frame3}
\end{frame}
\frame{}
\end{document} | 2021-01-16 20:36:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3174270689487457, "perplexity": 1302.504395489413}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703507045.10/warc/CC-MAIN-20210116195918-20210116225918-00705.warc.gz"} |
http://aimsciences.org/article/doi/10.3934/amc.2017001 | # American Institute of Mathematical Sciences
2017, 11(1): 1-65. doi: 10.3934/amc.2017001
## Recursive descriptions of polar codes
School of Electrical Engineering, Tel Aviv University, Ramat Aviv 69978, Israel
Received February 2015; Revised June 2016; Published February 2017
Polar codes are recursive general concatenated codes. This property motivates a recursive formalization of the known decoding algorithms: Successive Cancellation, Successive Cancellation with Lists, and Belief Propagation. Using such a description allows easy development of these algorithms for arbitrary polarizing kernels. Hardware architectures for these decoding algorithms are also described in a recursive way, both for Arıkan's standard polar codes and for arbitrary polarizing kernels.
Citation: Noam Presman, Simon Litsyn. Recursive descriptions of polar codes. Advances in Mathematics of Communications, 2017, 11 (1) : 1-65. doi: 10.3934/amc.2017001
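The abstract describes Successive Cancellation (SC) decoding as a recursive procedure over the code's concatenated structure. As a rough, self-contained sketch of that recursion for Arıkan's standard 2x2 kernel (illustrative code, not from the paper: the function names, the min-sum approximation in the check-node step, and the all-zero frozen-bit convention are assumptions of this sketch):

```python
from math import copysign

def polar_encode(u):
    # Recursive Arikan transform: x = (enc(left) XOR enc(right)) ++ enc(right).
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    left, right = polar_encode(u[:half]), polar_encode(u[half:])
    return [l ^ r for l, r in zip(left, right)] + right

def sc_decode(llr, frozen):
    # Returns (decoded u-bits, re-encoded codeword bits for the partial sums).
    # llr[i] > 0 means bit 0 is more likely; frozen positions decode to 0.
    if len(llr) == 1:
        u = 0 if (frozen[0] or llr[0] >= 0) else 1
        return [u], [u]
    half = len(llr) // 2
    a, b = llr[:half], llr[half:]
    # Check-node (f) step, min-sum approximation of the LLR combine.
    f = [copysign(1.0, x) * copysign(1.0, y) * min(abs(x), abs(y))
         for x, y in zip(a, b)]
    u_left, c_left = sc_decode(f, frozen[:half])
    # Variable-node (g) step, conditioned on the left half's partial sums.
    g = [y + (1 - 2 * c) * x for x, y, c in zip(a, b, c_left)]
    u_right, c_right = sc_decode(g, frozen[half:])
    return (u_left + u_right,
            [l ^ r for l, r in zip(c_left, c_right)] + c_right)
```

Returning the re-encoded codeword alongside the decoded bits gives the caller the partial sums needed by the g step; the list-decoding and arbitrary-kernel generalizations the paper develops elaborate on exactly this recursion.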
Theory, 53 (2007), 4076-4091. [42] I. Tal and A. Vardy, List decoding of polar codes, in 2011 IEEE Int. Symp. Inf. Theory (ISIT), 1-5. [43] I. Tal, A. Vardy, List decoding of polar codes, IEEE Trans. Inf. Theory, 61 (2015), 2213-2226. [44] P. Trifonov, Efficient design and decoding of polar codes, IEEE Trans. Commun., 60 (2012), 3221-3227. [45] B. Yuan and K. Parhi, Architecture optimizations for BP polar decoders, in 2013 IEEE Int. Conf. Acoust. Speech Signal Proc. (ICASSP), 2654-2658. [46] B. Yuan, K. Parhi, Early stopping criteria for energy-efficient low-latency beliefpropagation polar code decoders, IEEE Trans. Signal Proc., 62 (2014), 6496-6506. [47] V. Zinoviev, Generalized concatenated codes, Probl. Peredachi Inform., 12 (1976), 5-15.
A GCC representation of a polar code of length $\ell^n$ symbols constructed by a homogeneous kernel according to Definition 1
Example 1's GCC representation (Arıkan's construction)
Representation of a polar code with a kernel of dimension $\ell = 2$ as a layered factor graph
Representation of a polar code with a kernel of dimension $\ell = 2$ as a layered factor graph (detailed version of Figure 3, recursion unfolded)
Normal factor graph representation of the $g(\cdot)$ block from Figures 3 and 4 for Arıkan's $(u+v, v)$ construction
A GCC representation of the length $N=4^n$ bits mixed-kernels polar code $g^{(n)}(\cdot)$ described in Example 3
Decoding tree for $(u+v, v)$ polar code illustrating the decision space of the SC and SCL algorithms
Representation of SC as a sequential walk on a decoding tree
Example of the SCL ($L=4$) algorithm for the $(u+v, v)$ polar code with $N=8$ bits (see Figure 7). On the right is a decoding tree over the outer-codes of the structure ($\mathcal{C}_{0}, \mathcal{C}_{1}$); the left decoding tree expands each edge of the right tree into decoding paths on the outer-codes $\mathcal{C}_{0}$ and $\mathcal{C}_{1}$. The edge labels are the values of the outer-codes.
Normal factor graphs representations of polar codes kernels
Messages of BP algorithm
Normal factor graph representation for the first kernel of Example 4. This kernel is constructed by gluing inputs $u_{1}, u_2$ of the mapping defined by the generating matrix $\bf G$
$(u+v, v)$ polar code PE block
Blocks of the $(u+v, v)$ polar code decoders of length $N$ bits
Block diagram for the SC pipeline decoder
Block diagram for the SC line decoder
Block diagram for the limited parallelism line decoder
BP line decoder components definitions
Block diagram for the BP line decoder. Details of the figure appear in Figures 20, 21, and 22, corresponding to sub-figures A, B, and C, respectively.
Block diagram for the BP line decoder (Figure 19) -zoom-in: Sub-figure A
Block diagram for the BP line decoder (Figure 19) -zoom-in: Sub-figure B
Block diagram for the BP line decoder (Figure 19) -zoom-in: Sub-figure C
Block definitions of the SC line decoder for a polar code of length $N$ based on a linear kernel of dimension $\ell$ over alphabet $F$
Routing tables for OP-MUX and OP-DEMUX in Figure 18
| $c^{(opMux)}, c^{(opDeMux)}$ | $c^{(BPPE)}$ | $\mu^{(in)}_0$ | $\mu^{(in)}_1$ | $\mu^{(out)}$ | Equation |
|---|---|---|---|---|---|
| $0$ | $0$ | $\mu^{(in)}_{x_1}$ | $\mu^{(in)}_{u_1}$ | $\mu_{e_1\rightarrow a_0}$ | (45) |
| $1$ | $1$ | $\mu^{(in)}_{x_0}$ | $\mu^{(in)}_{u_0}$ | $\mu_{a_0 \rightarrow e_1}$ | (46) |
| $2$ | $1$ | $\mu^{(in)}_{x_0}$ | $\mu_{e_1\rightarrow a_0}$ | $\mu_{u_0}^{(out)}$ | (47) |
| $3$ | $0$ | $\mu^{(in)}_{x_1}$ | $\mu_{a_0\rightarrow e_1}$ | $\mu_{u_1}^{(out)}$ | (48) |
| $4$ | $1$ | $\mu^{(in)}_{u_0}$ | $\mu_{e_1\rightarrow a_0}$ | $\mu_{x_0}^{(out)}$ | (49) |
| $5$ | $0$ | $\mu^{(in)}_{u_1}$ | $\mu_{a_0\rightarrow e_1}$ | $\mu_{x_1}^{(out)}$ | (50) |
| $6$ | $0$ or $1$ | $\mu^{(ext, in)}_0$ | $\mu^{(ext, in)}_1$ | $\mu^{(ext, out)}$ | (80) |
# Consensus using Asynchronous Failure Detectors
Nancy Lynch
CSAIL, MIT
Srikanth Sastry (currently affiliated with Google Inc.)
CSAIL, MIT
###### Abstract
The FLP result shows that crash-tolerant consensus is impossible to solve in asynchronous systems, and several solutions have been proposed for crash-tolerant consensus under alternative (stronger) models. One popular approach is to augment the asynchronous system with appropriate failure detectors, which provide (potentially unreliable) information about process crashes in the system, to circumvent the FLP impossibility.
In this paper, we demonstrate the exact mechanism by which (sufficiently powerful) asynchronous failure detectors enable solving crash-tolerant consensus. Our approach, which borrows arguments from the FLP impossibility proof and the famous result of [2], which shows that $\Omega$ is a weakest failure detector for solving consensus, also yields a natural proof that $\Omega$ is a weakest asynchronous failure detector for solving consensus. The use of I/O automata theory in our approach enables us to model executions in a more detailed fashion than [2] and also addresses the latent assumptions and assertions in the original result in [2].
## 1 Introduction
In [5, 6] we introduced a new formulation of failure detectors. Unlike the traditional failure detectors of [3, 2], ours are modeled as asynchronous automata, and defined in terms of the general I/O automata framework for asynchronous concurrent systems. To distinguish our failure detectors from the traditional ones, we called ours “Asynchronous Failure Detectors (AFDs)”.
In terms of our model, we presented many of the standard results of the field and some new results. Our model narrowed the scope of failure detectors sufficiently so that AFDs satisfy several desirable properties, which are not true of the general class of traditional failure detectors. For example, (1) AFDs are self-implementable; (2) if an AFD $\mathcal{D}$ is strictly stronger than another AFD $\mathcal{D}'$, then $\mathcal{D}$ is sufficient to solve a strict superset of the problems solvable by $\mathcal{D}'$. See [6] for details. Working entirely within an asynchronous framework allowed us to take advantage of the general results about I/O automata and to prove our results rigorously without too much difficulty.
In this paper, we investigate the role of asynchronous failure detectors in circumventing the impossibility of crash-tolerant consensus in asynchronous systems (FLP) [7]. Specifically, we demonstrate exactly how sufficiently strong AFDs circumvent the FLP impossibility. We borrow ideas from the important related result by Chandra, Hadzilacos, and Toueg [2], which says that the failure detector $\Omega$ is a "Weakest Failure Detector" that solves the consensus problem. Incidentally, the proof in [2] makes certain implicit assumptions and assertions which are entirely reasonable and true, respectively. However, for the purpose of rigor, it is desirable that these assumptions be made explicit and these assertions be proved. Our demonstration of how sufficiently strong AFDs circumvent FLP dovetails effortlessly with an analogous proof of a "weakest AFD" for consensus.
While our proof generally follows the proof in [2], we state the (implicit) assumptions and assertions from [2] explicitly. Since our framework is entirely asynchronous and all our definitions are based on an established concurrency theory foundation, we are able to provide rigorous proofs for the (unproven) assertions from [2]. In order to prove the main result of this paper, we modified certain definitions from [6]. However, these modifications do not invalidate any of the results from [5, 6].
The rest of this paper is organized as follows. Section 2 outlines the approach that we use in this paper and its major contributions. In Section 3, we compare our proof with the original CHT proof in [2]. Sections 4 through 7 introduce I/O automata and the definitions of a problem, of an asynchronous system, and of AFDs; much of the material is summarized from [5, 6]. Section 8 introduces the notion of observations of AFD behavior, which are a key part of showing that $\Omega$ is a weakest AFD to solve consensus; this section proves several useful properties of observations which are central to the understanding of the proof and are a contribution of our work. In Section 9, we introduce execution trees for any asynchronous system that uses an AFD; we construct such trees from the observations introduced in Section 8. We also prove several properties of such execution trees, which may be of independent interest and useful in the analysis of executions in any AFD-based system. In Section 10, we formally define the consensus problem and use the notions of observations and execution trees to demonstrate how sufficiently strong AFDs enable asynchronous systems to circumvent the impossibility of fault-tolerant consensus in asynchronous systems [7]; Section 10 defines and uses decision gadgets in an execution tree to demonstrate this; it also shows that the set of such decision gadgets is countable, and therefore, any such execution tree contains a "first" decision gadget. Furthermore, Section 10 also shows that each decision gadget is associated with a location that is live and never crashes; we call it the critical location of the decision gadget. In Section 11, we show that $\Omega$ is a weakest AFD to solve consensus by presenting a distributed algorithm that simulates the output of $\Omega$.
The algorithm constructs observations and execution trees, and it eventually identifies the "first" decision gadget and its corresponding critical location; the algorithm outputs this critical location as the output of the simulated AFD, thus showing that $\Omega$ is a weakest AFD for consensus.
## 2 Approach and contributions
To demonstrate our results, we start with a complete definition of asynchronous systems and AFDs. Here, we modified the definitions of AFDs from [5, 6], but we did so without invalidating earlier results. We argue that the resulting definition of AFDs is more natural and models a richer class of behaviors in crash-prone asynchronous systems. Next, we introduce the notion of observations of AFD behavior (Section 8), which are DAGs that model a partial ordering of AFD outputs at different processes; importantly, knowledge of this partial order can be gained by any process through asynchronous message passing alone. Observations as a tool for modeling AFD behavior are of independent interest, and we prove several important properties of observations that are used in our later results.
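As a rough illustration of the idea (this is a sketch, not the paper's formal definition; the vertex encoding and all names are assumptions), an observation can be represented as a DAG whose vertices are failure-detector output samples and whose edges record a known "happened-before" relation; the induced partial order is then DAG reachability:

```python
# Sketch of an observation DAG: vertices are failure-detector output samples
# (location, output, sample index); edges record a known precedence.
from collections import defaultdict

class Observation:
    def __init__(self):
        self.edges = defaultdict(set)   # sample -> set of known-later samples
        self.vertices = set()

    def add_precedence(self, earlier, later):
        self.vertices.update({earlier, later})
        self.edges[earlier].add(later)

    def precedes(self, u, v):
        # the partial order is reachability in the DAG
        stack, seen = [u], set()
        while stack:
            x = stack.pop()
            if x == v:
                return True
            if x not in seen:
                seen.add(x)
                stack.extend(self.edges[x])
        return False

obs = Observation()
# process p sampled output d1 before d2; p's second sample is known (via a
# message) to precede q's first sample
obs.add_precedence(("p", "d1", 1), ("p", "d2", 2))
obs.add_precedence(("p", "d2", 2), ("q", "d3", 1))
assert obs.precedes(("p", "d1", 1), ("q", "d3", 1))
assert not obs.precedes(("q", "d3", 1), ("p", "d1", 1))
```

The key point the sketch mirrors is that such a DAG can be assembled by any process purely from messages it receives, with no timing assumptions.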
From such observations, we construct trees of executions of arbitrary AFD-based systems; again, such trees are of independent interest, and we prove several important properties of such trees that are used later.
Next, we define the consensus problem and the notion of valence. Roughly speaking, a finite execution of a system is univalent if all its fair extensions result in the same decision value, and the execution is bivalent if some fair extension results in one decision value and another fair extension results in a different decision value. We present our first important result using observations and execution trees: we show that a sufficiently powerful AFD guarantees that, in the execution tree constructed from any viable observation of AFD outputs (informally, an observation is viable if it can be constructed from an AFD trace), the events responsible for the transition from a bivalent execution to a univalent execution must occur at a location that does not crash. Such transitions to univalent executions correspond to so-called "decision gadgets", and the live location corresponding to such a transition is called the "critical location" of the decision gadget.
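Over a finite tree of executions whose leaves carry decision values, valence can be computed bottom-up. The following is an illustrative sketch only (the nested-list tree encoding is an assumption, not the paper's construction): a node is univalent when exactly one decision value is reachable below it, bivalent when more than one is.

```python
# Sketch: classify nodes of a finite execution tree as univalent or bivalent.
def valence(tree):
    """tree is either a decision value (a leaf) or a list of subtrees.
    Returns the set of decision values reachable from this node."""
    if not isinstance(tree, list):
        return {tree}
    vals = set()
    for child in tree:
        vals |= valence(child)
    return vals

def bivalent(tree):
    return len(valence(tree)) > 1

# the root can still reach decisions 0 and 1 -> bivalent;
# its left subtree decides 0 everywhere -> univalent
tree = [[0, 0], [0, 1]]
assert bivalent(tree)
assert not bivalent(tree[0])
```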
Next, we use the aforementioned result to show that $\Omega$ is a weakest AFD to solve consensus. In order to do so, we first define a metric function that orders all the decision gadgets. This metric function satisfies an important stability property which guarantees the following. Given the decision gadget with the smallest metric value in a given infinite execution tree, for any sufficiently large, but finite, subtree, the same decision gadget will have the smallest metric value within that subtree. Note that the original proof in [2] did not provide such a metric function, and we contend that this is an essential component for completing this proof. We then construct an emulation algorithm (similar to the one in [2]) that uses an AFD sufficiently powerful to solve consensus and simulates the output of $\Omega$. In this algorithm, processes exchange AFD outputs and construct finite observations and corresponding finite execution trees. The aforementioned stability property ensures that eventually forever, each process that does not crash identifies the same decision gadget as the one with the smallest metric value. Recall that the critical location of any decision gadget is guaranteed to not crash. Therefore, eventually forever, each process that does not crash identifies the same correct process and outputs that correct process as the output of the simulated AFD.
## 3 Comparisons with the original CHT proof
Our proof has elements that are very similar to the original CHT proof from [2]. However, despite the similarity in our arguments, our proof deviates from the CHT proof in some subtle, but significant, ways.
### 3.1 Observations
In [2], the authors introduce DAGs with special properties that model the outputs of a failure detector at different processes and establish a partial ordering of these outputs. In our proof, the analogous structure is an observation (see Section 8). However, our notion of an observation is much more general than the DAG introduced in [2].
First, the DAG in [2] is an infinite graph and cannot model failure detector outputs in finite executions. In contrast, observations may be finite or infinite. Second, we also introduce the notion of a sequence of finite observations that can be constructed from progressively longer finite executions that enable us to model the evolution of observations and execution trees as failure detector outputs become available. Such detailed modeling and analysis does not appear in [2].
### 3.2 Execution trees
In [2], each possible input to consensus gives rise to a unique execution tree from the DAG. Thus, for $n$ processes, there are $2^n$ possible input vectors, and hence $2^n$ possible trees that constitute a forest of trees. In contrast, our proof constructs exactly one tree that models the executions of all possible inputs to consensus. This change is not merely cosmetic. It simplifies the analysis and makes the proof technique more general in the following sense.
The original proof in [2] cannot be extended to understanding long-lived problems such as iterative consensus or mutual exclusion. The simple reason for this is that the number of possible inputs for such problems can be uncountably infinite, and so the number of trees generated by the proof technique in [2] is also uncountably infinite. This introduces significant challenges in extracting any structures within these trees by a distributed algorithm. In contrast, in our approach, the execution tree will remain virtually the same; only the rules for determining the action tag values at various edges change.
### 3.3 Determining the “first” decision gadget
In [2] and in our proof, a significant result is that there are infinitely, but countably, many decision gadgets, and therefore there exists a unique enumeration of the decision gadgets such that one of them is the "first" one. This result is then used in [2] to claim that all the emulation algorithms converge to the same decision gadget. However, [2] does not provide any proof of this claim. Furthermore, we show that proving this claim is non-trivial.
The significant gap in the original proof in [2] is the following. During the emulation, each process constructs only finite DAGs that are subgraphs of some infinite DAG with the required special properties. However, since the DAGs are finite, the trees of executions constructed from these DAGs could incorrectly identify certain parts of the trees as decision gadgets, when in the execution tree of the infinite DAG they are not decision gadgets. Each such pseudo decision gadget is eventually deemed to not be a decision gadget as the emulation progresses. However, there can be infinitely many such pseudo gadgets. Thus, given an arbitrary enumeration of decision gadgets, it is possible that such pseudo decision gadgets appear infinitely often and are enumerated ahead of the "first" decision gadget. Consequently, the emulation never stabilizes to the first decision gadget.
In our proof, we address this gap by carefully defining metric functions for nodes and decision gadgets so that eventually, all the pseudo decision gadgets are ordered after the eventual "first" decision gadget.
## 4 I/O Automata
We use the I/O Automata framework [8, 9, 10] for specifying the system model and failure detectors. Briefly, an I/O automaton models a component of a distributed system as a (possibly infinite) state machine that interacts with other state machines through discrete actions. This section summarizes the I/O-Automata-related definitions that we use in this paper. See [10, Chapter 8] for a thorough description of I/O Automata.
### 4.1 Automata Definitions
An I/O automaton, which we will usually refer to as simply an “automaton”, consists of five components: a signature, a set of states, a set of initial states, a state-transition relation, and a set of tasks. We describe these components next.
The state transitions of an automaton are associated with named actions; we denote the set of actions of an automaton $A$ by $actions(A)$. Actions are classified as input, output, or internal, and this classification constitutes the signature of the automaton. We denote the sets of input, output, and internal actions of an automaton $A$ by $input(A)$, $output(A)$, and $internal(A)$, respectively. Input and output actions are collectively called the external actions, denoted $external(A)$, and output and internal actions are collectively called the locally controlled actions. The locally controlled actions of an automaton are partitioned into tasks. Tasks are used in defining fairness conditions on executions of the automaton, as we describe in Section 4.4.
Internal actions of an automaton are local to the automaton itself whereas external (input and output) actions are available for interaction with other automata. Locally controlled actions are initiated by the automaton itself, whereas input actions simply arrive at the automaton from the outside, without any control by the automaton.
##### States.
The states of an automaton $A$ are denoted by $states(A)$; some (non-empty) subset $start(A) \subseteq states(A)$ is designated as the set of initial states.
##### Transition Relation.
The state transitions of an automaton $A$ are defined by a state-transition relation $trans(A)$, which is a set of tuples of the form $(s, a, s')$ where $s, s' \in states(A)$ and $a \in actions(A)$. Each such tuple is a transition, or a step, of $A$. Informally speaking, each step $(s, a, s')$ denotes the following behavior: automaton $A$, in state $s$, performs action $a$ and changes its state to $s'$.
For a given state $s$ and action $a$, if $trans(A)$ contains some step of the form $(s, a, s')$, then $a$ is said to be enabled in $s$. We assume that every input action of $A$ is enabled in every state of $A$; that is, for every input action $a$ and every state $s$, $trans(A)$ contains a step of the form $(s, a, s')$. A task $T$, which is a set of locally controlled actions, is said to be enabled in a state $s$ iff some action in $T$ is enabled in $s$.
##### Deterministic Automata.
The general definition of an I/O automaton permits multiple locally controlled actions to be enabled in any given state. It also allows the resulting state after performing a given action to be chosen nondeterministically. For our purposes, it is convenient to consider a class of I/O automata whose behavior is more restricted.
We define an action $a$ (of an automaton $A$) to be deterministic provided that, for every state $s$, $trans(A)$ contains at most one transition of the form $(s, a, s')$. We define an automaton $A$ to be task deterministic iff (1) for every task $T$ and every state $s$ of $A$, at most one action in $T$ is enabled in $s$, and (2) all the actions of $A$ are deterministic. An automaton is said to be deterministic iff it is task deterministic, has exactly one task, and has a unique start state.
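To make the five components concrete, here is a minimal Python sketch (not from the paper; the class, the toy "counter" automaton, and all names are illustrative assumptions) of an I/O automaton whose actions are deterministic and which has a single task:

```python
# Minimal illustrative I/O automaton: receives "inc" inputs and, after two of
# them, can output "report". All names are illustrative.
class IOAutomaton:
    def __init__(self, states, start, inputs, outputs, internals, trans, tasks):
        self.states = states        # set of states
        self.start = start          # non-empty subset of states
        self.inputs = inputs        # input actions (enabled in every state)
        self.outputs = outputs
        self.internals = internals
        self.trans = trans          # set of (state, action, state) steps
        self.tasks = tasks          # partition of locally controlled actions

    def enabled(self, state, action):
        return any(s == state and a == action for (s, a, _) in self.trans)

    def step(self, state, action):
        # deterministic action: at most one matching transition per state
        for (s, a, s2) in self.trans:
            if s == state and a == action:
                return s2
        raise ValueError("action not enabled")

counter = IOAutomaton(
    states={0, 1, 2, "done"},
    start={0},
    inputs={"inc"},
    outputs={"report"},
    internals=set(),
    # "inc" is enabled in every state, as required of input actions
    trans={(0, "inc", 1), (1, "inc", 2), (2, "inc", 2),
           ("done", "inc", "done"), (2, "report", "done")},
    tasks=[{"report"}],             # exactly one task
)

s = counter.step(counter.step(0, "inc"), "inc")
assert s == 2 and counter.enabled(s, "report")
```

Since every action is deterministic, there is exactly one task, and there is a unique start state, this toy automaton is deterministic in the sense defined above.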
### 4.2 Executions, Traces, and Schedules
Now we define how an automaton executes. An execution fragment of an automaton $A$ is a finite sequence $s_0, a_1, s_1, a_2, \ldots, a_k, s_k$, or an infinite sequence $s_0, a_1, s_1, a_2, \ldots$, of alternating states and actions of $A$ such that for every $i$, $(s_i, a_{i+1}, s_{i+1})$ is in $trans(A)$. A sequence consisting of just a state is a special case of an execution fragment and is called a null execution fragment. Each occurrence of an action in an execution fragment is called an event.
An execution fragment that starts with an initial state (that is, $s_0 \in start(A)$) is called an execution. A null execution fragment consisting of an initial state is called a null execution. A state $s$ is said to be reachable if there exists a finite execution that ends with $s$. By definition, any initial state is reachable.
We define concatenation of execution fragments. Let $\alpha$ and $\alpha'$ be two execution fragments of an I/O automaton such that $\alpha$ is finite and the final state of $\alpha$ is also the starting state of $\alpha'$, and let $\alpha''$ denote the sequence obtained by deleting the first state in $\alpha'$. Then the expression $\alpha \cdot \alpha'$ denotes the execution fragment formed by appending $\alpha''$ after $\alpha$.
It is sometimes useful to consider just the sequence of events that occur in an execution, ignoring the states. Thus, given an execution $\alpha$, the schedule of $\alpha$ is the subsequence of $\alpha$ that consists of all the events in $\alpha$, both internal and external. The trace of an execution includes only the externally observable behavior; formally, the trace of an execution $\alpha$ is the subsequence of $\alpha$ consisting of all the external actions.
More generally, we define the projection of any sequence on a set of actions as follows. Given a sequence $\sigma$ (which may be an execution fragment, schedule, or trace) and a set $B$ of actions, the projection of $\sigma$ on $B$, denoted by $\sigma|B$, is the subsequence of $\sigma$ consisting of all the events from $B$.
We define concatenation of schedules and traces. Let $\sigma$ and $\sigma'$ be two sequences of actions of some I/O automaton where $\sigma$ is finite; then $\sigma \cdot \sigma'$ denotes the sequence formed by appending $\sigma'$ after $\sigma$.
To designate specific events in a schedule or trace, we use the following notation: if a sequence (which may be a schedule or a trace) contains at least events, then denotes the event in the sequence , and otherwise, . Here, is a special symbol that we assume is different from the names of all actions.
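The projection and indexing conventions above can be rendered directly in code. The sketch below uses Python lists for sequences and `None` in place of ⊥; the encoding is ours, for illustration only.

```python
BOTTOM = None  # stands in for the special symbol ⊥

def project(t, B):
    """t | B: the subsequence of t consisting of the events from B."""
    return [e for e in t if e in B]

def event_at(t, k):
    """t(k): the k-th event of t (1-indexed), or ⊥ if t has fewer than k events."""
    return t[k - 1] if 1 <= k <= len(t) else BOTTOM

sched = ["send", "step", "recv", "step"]
assert project(sched, {"send", "recv"}) == ["send", "recv"]
assert event_at(sched, 3) == "recv"
assert event_at(sched, 9) is BOTTOM
```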
### 4.3 Operations on I/O Automata
##### Composition.
A collection of I/O automata may be composed by matching output actions of some automata with the same-named input actions of others. (Not all collections of I/O automata may be composed; for instance, in order to compose a collection of I/O automata, we require that no two automata have a common output action. See [10, chapter 8] for details.) Each output of an automaton may be matched with inputs of any number of other automata. Upon composition, all the actions with the same name are performed together.
Let α be an execution of the composition of automata A_1, …, A_n. The projection of α on automaton A_i, where 1 ≤ i ≤ n, is denoted by α|A_i and is defined to be the subsequence of α obtained by deleting each pair (π, s) for which π is not an action of A_i and replacing each remaining state s by automaton A_i's part of s. Theorem 8.1 in [10] states that if α is an execution of the composition, then for each i, α|A_i is an execution of A_i. Similarly, if t is a trace of the composition, then for each i, t|A_i is a trace of A_i.
##### Hiding.
In an automaton , an output action may be “hidden” by reclassifying it as an internal action. A hidden action no longer appears in the traces of the automaton.
### 4.4 Fairness
When considering executions of an I/O automaton, we will often be interested in those executions in which every task of the automaton gets infinitely many turns to take steps; we call such executions “fair”. When the automaton represents a distributed system, the notion of fairness can be used to express the idea that all system components continue to get turns to perform their activities.
Formally, an execution fragment α of an automaton A is said to be fair iff the following two conditions hold for every task e of A. (1) If α is finite, then no action in e is enabled in the final state of α. (2) If α is infinite, then either (a) α contains infinitely many events from e, or (b) α contains infinitely many occurrences of states in which no action in e is enabled.

A schedule of A is said to be fair if it is the schedule of a fair execution of A. Similarly, a trace of A is said to be fair if it is the trace of a fair execution of A.
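Condition (1) of the fairness definition can be checked mechanically on a finite execution. The sketch below encodes tasks as sets of actions and uses an enabling predicate; the encoding is our own, for illustration only.

```python
def fair_if_finite(final_state, tasks, enabled):
    """Fairness condition (1) for a finite execution: for every task, no
    action of that task is enabled in the final state.

    `tasks` maps a task name to its set of actions; `enabled(state, action)`
    is the automaton's enabling predicate. Assumed encoding, not the paper's.
    """
    return all(not enabled(final_state, a)
               for actions in tasks.values() for a in actions)

# Toy automaton: "emit" is enabled in every state except "halted".
enabled = lambda state, action: state != "halted"
tasks = {"t1": {"emit"}}
assert fair_if_finite("halted", tasks, enabled)
assert not fair_if_finite("running", tasks, enabled)
```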
## 5 Crash Problems
In this section, we define problems, distributed problems, crash problems, and failure-detector problems. We also define a particular failure-detector problem corresponding to the leader election oracle of [2].
### 5.1 Problems
We define a problem P to be a tuple (I_P, O_P, T_P), where I_P and O_P are disjoint sets of actions and T_P is a set of (finite or infinite) sequences over these actions, such that there exists an automaton A with I_A = I_P and O_A = O_P whose set of fair traces is a subset of T_P. In this case we say that A solves P. We include the aforementioned assumption of solvability to satisfy a non-triviality property, which we explain in Section 7.
##### Distributed Problems.
Here and for the rest of the paper, we introduce a fixed finite set L of location IDs; we assume that L does not contain the special symbol ⊥. We assume a fixed total ordering on L. We also assume a fixed mapping loc from actions to L; for an action a, if loc(a) = i, then we say that a occurs at i. A problem P is said to be distributed over L if, for every action a in I_P ∪ O_P, loc(a) ∈ L. We extend the definition of loc by defining loc(⊥) = ⊥.

Given a problem P that is distributed over L, and a location i, I_{P,i} and O_{P,i} denote the set of actions in I_P and O_P, respectively, that occur at location i; that is, I_{P,i} = {a ∈ I_P : loc(a) = i} and O_{P,i} = {a ∈ O_P : loc(a) = i}.
##### Crash Problems.
We assume a set Ĉ = {crash_i : i ∈ L} of crash events, where loc(crash_i) = i. That is, crash_i represents a crash that occurs at location i. A problem P that is distributed over L is said to be a crash problem iff Ĉ ⊆ I_P. That is, crash_i ∈ I_P for every i ∈ L.

Given a (finite or infinite) sequence t, faulty(t) denotes the set of locations at which a crash event occurs in t. Similarly, live(t) denotes the set of locations at which a crash event does not occur in t. A location in faulty(t) is said to be faulty in t, and a location in live(t) is said to be live in t.
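The faulty and live sets of a finite sequence are straightforward to compute. The event encoding below — tuples `("crash", i)` or `("out", i, …)` — is our own convention, not the paper's.

```python
def faulty(t):
    """Locations at which a crash event occurs in t.
    Events are encoded (our convention) as ('crash', i) or ('out', i, ...)."""
    return {e[1] for e in t if e[0] == "crash"}

def live(t, locations):
    """Locations in `locations` at which no crash event occurs in t."""
    return set(locations) - faulty(t)

t = [("out", 1), ("crash", 2), ("out", 1)]
assert faulty(t) == {2}
assert live(t, {1, 2, 3}) == {1, 3}
```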
### 5.2 Failure-Detector Problems
Recall that a failure detector is an oracle that provides information about crash failures. In our modeling framework, we view a failure detector as a special type of crash problem. A necessary condition for a crash problem D = (I_D, O_D, T_D) to be an asynchronous failure detector (AFD) is crash exclusivity, which states that I_D = Ĉ; that is, the input actions are exactly the crash actions. Crash exclusivity guarantees that the only inputs to a failure detector are the crash events, and hence, failure detectors provide information only about crashes. An AFD must also satisfy additional properties, which we describe next.

Let D = (Ĉ, O_D, T_D) be a crash problem satisfying crash exclusivity. We begin by defining a few terms that will be used in the definition of an AFD. Let t be an arbitrary sequence over Ĉ ∪ O_D.
##### Valid sequence.
The sequence t is said to be valid iff (1) for every i ∈ L, no event in O_{D,i} (the set of actions in O_D at location i) occurs after a crash_i event in t, and (2) if no crash_i event occurs in t, then t contains infinitely many events in O_{D,i}.

Thus, a valid sequence contains no output events at a location after a crash event at that location, and contains infinitely many output events at each live location.
##### Sampling.
A sequence t′ is a sampling of t iff (1) t′ is a subsequence of t, and (2) for every location i, (a) if i is live in t, then t′|O_{D,i} = t|O_{D,i}, and (b) if i is faulty in t, then t′ contains the first crash_i event in t, and t′|O_{D,i} is a prefix of t|O_{D,i}.

A sampling of sequence t retains all the output events at live locations. For each faulty location i, it may remove a suffix of the outputs at location i. It may also remove some crash events, but must retain the first crash event at each faulty location.
##### Constrained Reordering.
Let t′ be a valid permutation of the events in t; t′ is a constrained reordering of t iff the following is true. For every pair of events e and e′, if (1) e precedes e′ in t, and (2) either (a) loc(e) = loc(e′), or (b) e ∈ Ĉ and e′ ∈ O_D, then e precedes e′ in t′ as well. (Note that this definition of constrained reordering is less restrictive than the definition in [5, 6]; specifically, unlike in [5, 6], this definition allows crashes to be reordered with respect to each other. However, this definition is “compatible” with the earlier definition in the sense that the results presented in [5, 6] continue to be true under this new definition.)

A constrained reordering of sequence t maintains the relative ordering of events that occur at the same location, and maintains the relative order between any crash event and any subsequent output event.
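For finite sequences of distinct events, the constrained-reordering relation can be checked by comparing positions. The index-based sketch below assumes our own event encoding and pairwise-distinct events; it is an illustration, not the paper's definition.

```python
def is_constrained_reordering(t, t2, loc, is_crash):
    """Check that t2 is a permutation of t that preserves (a) the order of
    events at the same location and (b) the order between a crash event and
    any later output event. Events must be pairwise distinct here."""
    if sorted(t) != sorted(t2):
        return False  # not a permutation
    pos = {e: k for k, e in enumerate(t2)}
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            e1, e2 = t[i], t[j]
            same_loc = loc(e1) == loc(e2)
            crash_then_output = is_crash(e1) and not is_crash(e2)
            if (same_loc or crash_then_output) and pos[e1] > pos[e2]:
                return False
    return True

loc = lambda e: e[1]
is_crash = lambda e: e[0] == "crash"
t = [("out", 1, 1), ("crash", 2), ("out", 1, 2)]
# Moving the crash earlier is allowed; swapping same-location outputs is not.
assert is_constrained_reordering(t, [("crash", 2), ("out", 1, 1), ("out", 1, 2)], loc, is_crash)
assert not is_constrained_reordering(t, [("out", 1, 2), ("out", 1, 1), ("crash", 2)], loc, is_crash)
```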
##### Crash Extension.
Assume that t is a finite sequence. A crash extension of t is a (possibly infinite) sequence t′ such that t is a prefix of t′ and the suffix of t′ following t is a sequence over Ĉ.

In other words, a crash extension of t is obtained by extending t with crash events.
##### Extra Crashes.
An extra crash event in t is a crash_i event in t, for some i ∈ L, such that t contains a preceding crash_i event.
An extra crash is a crash event at a location that has already crashed.
##### Minimal-Crash Sequence.
Let mincrash(t) denote the subsequence of t that contains all the events in t, except for the extra crashes; mincrash(t) is called the minimal-crash sequence of t.
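The minimal-crash sequence is computed by a single pass that drops every crash at an already-crashed location. The event encoding is our own, for illustration.

```python
def mincrash(t, loc, is_crash):
    """Delete every extra crash: a crash at a location that has already
    crashed earlier in t."""
    crashed, result = set(), []
    for e in t:
        if is_crash(e):
            if loc(e) in crashed:
                continue  # extra crash: drop it
            crashed.add(loc(e))
        result.append(e)
    return result

loc = lambda e: e[1]
is_crash = lambda e: e[0] == "crash"
t = [("crash", 1), ("out", 2, 1), ("crash", 1), ("crash", 2)]
assert mincrash(t, loc, is_crash) == [("crash", 1), ("out", 2, 1), ("crash", 2)]
```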
##### Asynchronous Failure Detector.
Now we are ready to define asynchronous failure detectors. A crash problem D of the form (Ĉ, O_D, T_D) (which satisfies crash exclusivity) is an asynchronous failure detector (AFD, for short) iff T_D satisfies the following properties.
1. Validity. Every sequence t ∈ T_D is valid.
2. Closure Under Sampling. For every sequence t ∈ T_D, every sampling of t is also in T_D.
3. Closure Under Constrained Reordering. For every sequence t ∈ T_D, every constrained reordering of t is also in T_D.
4. Closure Under Crash Extension. For every sequence t ∈ T_D and every prefix t′ of t, for every crash extension t″ of t′, the following are true. (a) If t″ is finite, then t″ is a prefix of some sequence in T_D. (b) If live(t″) = ∅, then t″ is in T_D.
5. Closure Under Extra Crashes. For every sequence t ∈ T_D, every sequence t′ such that mincrash(t′) = mincrash(t) is also in T_D.
Of the properties given here, the first three—validity and closure under sampling and constrained reordering—were also used in our earlier papers [5, 6]. The other two closure properties—closure under crash extension and extra crashes—are new here.
A brief motivation for the above properties is in order. The validity property ensures that (1) after a location crashes, no outputs occur at that location, and (2) if a location does not crash, outputs occur infinitely often at that location. Closure under sampling permits a failure detector to “skip” or “miss” any suffix of outputs at a faulty location. Closure under constrained reordering permits “delaying” output events at any location. Closure under crash extension permits a crash event to occur at any time. Finally, closure under extra crashes captures the notion that once a location is crashed, the occurrence of additional crash events (or lack thereof) at that location has no effect.
We define one additional constraint, below. This constraint is a formalization of an implicit assumption made in [2]; namely, for any AFD D, any “sampling” (as defined in [4]) of a failure detector sequence in T_D is also in T_D.
##### Strong-Sampling AFDs.
Let D = (Ĉ, O_D, T_D) be an AFD, and let t ∈ T_D. A subsequence t′ of t is said to be a strong sampling of t if t′ is a valid sequence. AFD D is said to satisfy closure under strong sampling if, for every trace t ∈ T_D, every strong sampling of t is also in T_D. Any AFD that satisfies closure under strong sampling is said to be a strong-sampling AFD.

Although the set of strong-sampling AFDs is a strict subset of the set of all AFDs, we conjecture that restricting our discussion to strong-sampling AFDs does not weaken our result. Specifically, we assert without proof that for any AFD D, we can construct an “equivalent” strong-sampling AFD D′. This notion of equivalence is formally discussed in Section 7.3.
### 5.3 The Leader Election Oracle.
An example of a strong-sampling AFD is the leader election oracle Ω [2]. Informally speaking, Ω continually outputs a location ID at each live location; eventually and permanently, Ω outputs the ID of a unique live location at all the live locations. The failure detector Ω was shown in [2] to be a “weakest” failure detector to solve crash-tolerant consensus, in a certain sense. We will present a version of this proof in this paper.

We specify our version of Ω as follows. The action set O_Ω = {leader(j)_i : i, j ∈ L}, where, for each i ∈ L, loc(leader(j)_i) = i. T_Ω is the set of all valid sequences t over Ĉ ∪ O_Ω that satisfy the following property: if live(t) ≠ ∅, then there exists a location l ∈ live(t) and a suffix t′ of t such that t′|O_Ω is a sequence over the set {leader(l)_i : i ∈ live(t)}.

Algorithm 1 shows an automaton whose set of fair traces is a subset of T_Ω; it follows that Ω satisfies our formal definition of a “problem”. It is easy to see that Ω satisfies all the properties of an AFD; furthermore, note that Ω also satisfies closure under strong sampling. The proofs of these observations are left as an exercise.
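The stabilization property of Ω can be approximated on a finite prefix of outputs: some suffix must name a single leader who is live. The checker below is a rough finite-trace sketch with an assumed event encoding of (at_location, leader_id) pairs; it is not the automaton of Algorithm 1.

```python
def eventually_unique_leader(outputs, live):
    """Finite-trace approximation of the Ω property: some suffix of the
    output sequence names a single leader that is a live location.
    `outputs` is a list of (at_location, leader_id) pairs."""
    for k in range(len(outputs)):
        leaders = {j for _, j in outputs[k:]}
        if len(leaders) == 1 and leaders <= set(live):
            return True
    return False

# Locations 1 and 2 eventually agree on the live leader 1.
assert eventually_unique_leader([(1, 2), (2, 1), (1, 1), (2, 1)], live={1, 2})
# Leader 3 is not live, so this prefix never stabilizes correctly.
assert not eventually_unique_leader([(1, 2), (2, 3)], live={1, 2})
```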
##### AFD Ω_f.
Here, we introduce Ω_f, where f is a natural number, as a generalization of Ω. In this paper, we will show that Ω_f is a weakest strong-sampling AFD that solves fault-tolerant consensus if at most f locations are faulty. Informally speaking, Ω_f denotes the AFD that behaves exactly like Ω in traces that have at most f faulty locations. Thus, Ω_{|L|} is the AFD Ω.

Precisely, Ω_f = (Ĉ, O_Ω, T_{Ω_f}), where T_{Ω_f} is the set of all valid sequences t over Ĉ ∪ O_Ω such that, if |faulty(t)| ≤ f, then t ∈ T_Ω. This definition implies that T_{Ω_f} contains all the valid sequences t over Ĉ ∪ O_Ω such that |faulty(t)| > f.

It is easy to see that Ω_f is a strong-sampling AFD.
## 6 System Model and Definitions
We model an asynchronous system as the composition of a collection of I/O automata of the following kinds: process automata, channel automata, a crash automaton, and an environment automaton. The external signature of each automaton and the interaction among them are described in Section 6.1. The behavior of these automata is described in Sections 6.2 through 6.5.

For the definitions that follow, we assume an alphabet M of messages.
### 6.1 System Structure
A system contains a collection of process automata, one for each location in L. We define the association with a mapping proc, which maps each location i to a process automaton proc(i), usually written P_i. Automaton P_i has the following external signature. It has an input action crash_i, which is an output from the crash automaton, a set of output actions {send(m)_{i,j} : m ∈ M, j ∈ L \ {i}}, and a set of input actions {receive(m)_{j,i} : m ∈ M, j ∈ L \ {i}}. A process automaton may also have other external actions with which it interacts with the external environment or a failure detector; the set of such actions may vary from one system to another.

For every ordered pair (i, j) of distinct locations, the system contains a channel automaton C_{i,j}, which models the channel that transports messages from process P_i to process P_j. Channel C_{i,j} has the following external actions. The set of input actions is {send(m)_{i,j} : m ∈ M}, which is a subset of the outputs of the process automaton P_i. The set of output actions is {receive(m)_{i,j} : m ∈ M}, which is a subset of the inputs to P_j.

The crash automaton C models the occurrence of crash failures in the system. Automaton C has Ĉ as its set of output actions, and no input actions.

The environment automaton E models the external world with which the distributed system interacts. The automaton E is a composition of automata {E_i : i ∈ L}. For each location i, the set of input actions to automaton E_i includes the action crash_i. In addition, E_i may have input and output actions corresponding (respectively) to any outputs and inputs of the process automaton P_i that do not match up with other automata in the system.

We assume that, for every location i, every external action of P_i and E_i occurs at i; that is, loc(a) = i for every external action a of P_i and E_i.
We provide some constraints on the structure of the various automata below.
### 6.2 Process Automata
The process automaton at location i, P_i, is an I/O automaton whose external signature satisfies the constraints given above and that satisfies the following additional properties.
1. Every internal action of P_i occurs at i, that is, loc(a) = i for every internal action a of P_i. We have already assumed that every external action of P_i occurs at i; now we are simply extending this requirement to the internal actions.
2. Automaton P_i is deterministic, as defined in Section 4.1.
3. When crash_i occurs, it permanently disables all locally controlled actions of P_i.

We define a distributed algorithm to be a collection of process automata, one at each location; formally, it is simply a particular mapping proc. For convenience, we will usually write P_i for the process automaton proc(i).
### 6.3 Channel Automata
The channel automaton for i and j, C_{i,j}, is an I/O automaton whose external signature is as described above. That is, C_{i,j}'s input actions are {send(m)_{i,j} : m ∈ M} and its output actions are {receive(m)_{i,j} : m ∈ M}.

Now we require C_{i,j} to be a specific I/O automaton: a reliable FIFO channel, as defined in [10]. This automaton has no internal actions, and all its output actions are grouped into a single task. The state consists of a FIFO queue of messages, which is initially empty. A send(m)_{i,j} input event can occur at any time. The effect of a send(m)_{i,j} event is to add m to the end of the queue. When a message m is at the head of the queue, the output action receive(m)_{i,j} is enabled, and its effect is to remove m from the head of the queue. Note that this automaton is deterministic.
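The reliable FIFO channel automaton admits a very direct rendering in code. The Python class below is our own illustration of the standard automaton, with the queue as the state and the enabling condition made explicit.

```python
class FifoChannel:
    """Sketch of the reliable FIFO channel automaton: send(m) enqueues m,
    and receive(m) is enabled exactly when m is at the head of the queue."""

    def __init__(self):
        self.queue = []  # the state: a FIFO queue, initially empty

    def send(self, m):  # input action: always enabled
        self.queue.append(m)

    def receive_enabled(self):  # an output is enabled iff the queue is nonempty
        return bool(self.queue)

    def receive(self):  # output action: delivers the head of the queue
        return self.queue.pop(0)

ch = FifoChannel()
ch.send("a"); ch.send("b")
assert ch.receive() == "a" and ch.receive() == "b"
assert not ch.receive_enabled()
```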
### 6.4 Crash Automaton
The crash automaton C is an I/O automaton with Ĉ as its set of output actions, and no input actions.

We require the following constraint on the behavior of C: every sequence over Ĉ is a fair trace of the crash automaton. That is, any pattern of crashes is possible. For some of our results, we will consider restrictions on the number of locations that crash.
### 6.5 Environment Automaton
The environment automaton E is an I/O automaton whose external signature satisfies the constraints described in Section 6.1. Recall that E is a composition of automata {E_i : i ∈ L}. For each location i, the following is true.
1. E_i has a unique initial state.
2. E_i has tasks e_{i,x}, where x ranges over some fixed task index set X_i.
3. When crash_i occurs, it permanently disables all locally controlled actions of E_i.
In addition, in some specific cases we will require the traces of to satisfy certain “well-formedness” restrictions, which will vary from one system to another. We will define these specifically when they are needed, later in the paper.
## 7 Solving Problems
In this section we define what it means for a distributed algorithm to solve a crash problem in a particular environment. We also define what it means for a distributed algorithm to solve one problem using another problem . Based on these definitions, we define what it means for an AFD to be sufficient to solve a problem.
### 7.1 Solving a Crash Problem
An automaton E is said to be an environment for P if the input actions of E are O_P and the output actions of E are I_P \ Ĉ. Thus, the environment's inputs and outputs “match” those of the problem, except that the environment does not provide the problem's crash inputs.
If E is an environment for a crash problem P = (I_P, O_P, T_P), then an I/O automaton A is said to solve P in environment E provided that the following conditions hold:
1. I_A = I_P.
2. O_A = O_P.
3. The set of fair traces of the composition of A, E, and the crash automaton C is a subset of T_P.

A distributed algorithm solves a crash problem P in an environment E iff the automaton obtained by composing the process automata of the algorithm with the channel automata solves P in E. A crash problem P is said to be solvable in an environment E iff there exists a distributed algorithm that solves P in E. If crash problem P is not solvable in environment E, then it is said to be unsolvable in E.
### 7.2 Solving One Crash Problem Using Another
Often, an unsolvable problem P may be solvable if the system contains an automaton that solves some other (unsolvable) crash problem Q. We describe the relationship between P and Q as follows.

Let P and Q be two crash problems with disjoint sets of actions (except for the crash actions). Let E be an environment for P. Then a distributed algorithm solves crash problem P using crash problem Q in environment E iff the following are true:
1. For each location i, the input actions of the process automaton at i include O_{Q,i}.
2. For each location i, the output actions of the process automaton at i include I_{Q,i} \ Ĉ.
3. Let S be the composition of the distributed algorithm with the channel automata, the crash automaton, and the environment automaton E. Then for every fair trace t of S, if t|(I_Q ∪ O_Q) ∈ T_Q, then t|(I_P ∪ O_P) ∈ T_P.

In effect, in any fair execution of the system, if the sequence of events associated with problem Q is consistent with the specified behavior of Q, then the sequence of events associated with problem P is consistent with the specified behavior of P.

Note that requirement 3 is vacuous for any fair trace t of S with t|(I_Q ∪ O_Q) ∉ T_Q. However, in the definition of problem Q, the requirement that there exist some automaton whose set of fair traces is a subset of T_Q ensures that there are “sufficiently many” fair traces t of S such that t|(I_Q ∪ O_Q) ∈ T_Q.

We say that a crash problem Q is sufficient to solve a crash problem P in environment E, denoted Q ⪰_E P, iff there exists a distributed algorithm that solves P using Q in E. If Q ⪰_E P, then we also say that P is solvable using Q in E. If no such distributed algorithm exists, then we say that P is unsolvable using Q in E, and we denote it as Q ⋡_E P.
### 7.3 Using and Solving Failure-Detector Problems
Since an AFD is simply a kind of crash problem, the definitions above automatically yield definitions for the following notions.
1. A distributed algorithm solves an AFD D in environment E.
2. A distributed algorithm solves a crash problem P using an AFD D in environment E.
3. An AFD D is sufficient to solve a crash problem P in environment E.
4. A distributed algorithm solves an AFD D using a crash problem P in environment E.
5. A crash problem P is sufficient to solve an AFD D in environment E.
6. A distributed algorithm solves an AFD D using another AFD D′.
7. An AFD D′ is sufficient to solve an AFD D.
Note that, when we talk about solving an AFD, the environment has no output actions, because the AFD has no input actions except for the crash events, which are inputs from the crash automaton. Therefore, we have the following lemma.
###### Lemma 7.1.
Let P be a crash problem and D an AFD. If P ⪰_E D in some environment E (for D), then for any other environment E′ for D, P ⪰_{E′} D.

Consequently, when we refer to an AFD D being solvable using a crash problem (or an AFD) P, we omit the reference to the environment automaton and simply say that P is sufficient to solve D; we denote this relationship by P ⪰ D. Similarly, when we say that D is unsolvable using P, we omit mention of the environment, and write simply P ⋡ D.

Finally, if an AFD D′ is sufficient to solve another AFD D (notion 7 in the list above), then we say that D′ is stronger than D, and we denote this by D′ ⪰ D. If D′ ⪰ D, but D ⋡ D′, then we say that D′ is strictly stronger than D, and we denote this by D′ ≻ D. Also, if D′ ⪰ D and D ⪰ D′, then we say that D′ is equivalent to D.

We conjecture that for any AFD D, there exists a strong-sampling AFD D′ such that D is equivalent to D′; thus, if a non-strong-sampling AFD is a weakest AFD to solve consensus, then there must exist an equivalent strong-sampling AFD that is also a weakest AFD to solve consensus. Therefore, it is sufficient to restrict our attention to strong-sampling AFDs.
## 8 Observations
In this section, fix D to be an AFD. We define the notion of an observation of D and present properties of observations. Observations are a key part of the emulation algorithm used to prove the “weakest failure detector” result in Section 11.
### 8.1 Definitions and Basic Properties
An observation is a DAG G = (V, E), where the set V of vertices consists of triples of the form (i, k, a), where i is a location, k is a positive integer, and a is an action from O_D; we refer to i, k, and a as the location, index, and action of the vertex, respectively. Informally, a vertex (i, k, a) denotes that a is the k-th AFD output at location i, and the observation represents a partial ordering of AFD outputs at various locations. We say that an observation is finite iff the set V (and therefore the set E) is finite; otherwise, the observation is said to be infinite.
We require the set V of vertices to satisfy the following properties.
1. For each location i and each positive integer k, V contains at most one vertex whose location is i and whose index is k.
2. If V contains a vertex of the form (i, k, a) and k′ < k, then V also contains a vertex of the form (i, k′, a′) for some action a′.

Property 1 states that at each location i, for each positive integer k, there is at most one k-th AFD output. Property 2 states that for any i and k, if the k-th AFD output occurs at i, then the first k−1 AFD outputs also occur at i.
The set E of edges imposes a partial ordering on the occurrence of AFD outputs. We assume that it satisfies the following properties.
3. For every location i and positive integer k, if V contains vertices v = (i, k, a) and v′ = (i, k+1, a′), then E contains an edge from v to v′.
4. For every pair of distinct locations i and j such that V contains an infinite number of vertices whose location is j, the following is true: for each vertex v in V whose location is i, there is a vertex v′ in V whose location is j such that there is an edge from v to v′ in E.
5. For every triple v, v′, v″ of vertices such that E contains both an edge from v to v′ and an edge from v′ to v″, E also contains an edge from v to v″. That is, the set of edges of an observation is closed under transitivity.
Property 3 states that at each location i, the k-th output at i occurs before the (k+1)-st output at i. Property 4 states that for every pair of locations i and j such that infinitely many AFD outputs occur at j, for every AFD output event e at i there exists some AFD output event e′ at j such that e occurs before e′. Property 5 is a transitive-closure property that simply captures the notion that if event e happens before event e′ and e′ happens before e″, then e happens before e″.

Given an observation G, if V contains an infinite number of vertices whose location is some particular i, then i is said to be live in G. We write live(G) for the set of all the locations that are live in G.
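On a finite DAG, the observation properties can be checked exhaustively; Property 4 concerns locations with infinitely many vertices, so it is vacuous in the finite case. The checker below is our own finite-case sketch, with vertices as (location, index, action) triples.

```python
def is_finite_observation(vertices, edges):
    """Check Properties 1-3 and 5 of observations on a finite DAG.
    vertices: set of (loc, index, action) triples; edges: set of vertex pairs."""
    # Property 1: at most one vertex per (location, index) pair.
    keys = [(l, k) for (l, k, _) in vertices]
    if len(keys) != len(set(keys)):
        return False
    # Property 2: indices at each location are downward closed.
    for (l, k, _) in vertices:
        for k2 in range(1, k):
            if not any(v[0] == l and v[1] == k2 for v in vertices):
                return False
    # Property 3: consecutive indices at a location are ordered by an edge.
    for v in vertices:
        for w in vertices:
            if v[0] == w[0] and w[1] == v[1] + 1 and (v, w) not in edges:
                return False
    # Property 5: edges are closed under transitivity.
    for (a, b) in edges:
        for (c, d) in edges:
            if b == c and (a, d) not in edges:
                return False
    return True

v1, v2 = (1, 1, "x"), (1, 2, "y")
assert is_finite_observation({v1, v2}, {(v1, v2)})
assert not is_finite_observation({v1, v2}, set())  # Property 3 violated
```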
###### Lemma 8.1.
Let G be an observation, and let i be a location in live(G). Then for every positive integer k, V contains exactly one vertex with location i and index k.
###### Proof.
Follows from Properties 1 and 2 of observations. ∎
###### Lemma 8.2.
Let i and j be distinct locations with j ∈ live(G). Let v be a vertex in V whose location is i. Then there exists a positive integer k such that for every positive integer k′ ≥ k, G contains an edge from v to some vertex of the form (j, k′, a).
###### Proof.
Follows from Lemma 8.1, and Properties 3, 4, and 5 of observations. ∎
###### Lemma 8.3.
Let i and j be distinct locations with i ∈ live(G) and j ∉ live(G); that is, V contains infinitely many vertices whose location is i and only finitely many vertices whose location is j. Then there exists a positive integer k such that for every k′ ≥ k, there is no edge from any vertex of the form (i, k′, a) to any vertex whose location is j.
###### Proof.
Fix i and j as in the hypotheses. Let v_j be the vertex in V whose location is j and whose index is the highest among all the vertices whose location is j. From Lemma 8.2 we know that there exists a positive integer k such that for every positive integer k′ ≥ k, G contains an edge from v_j to some vertex of the form (i, k′, a). Since G is a DAG, there is no edge from any vertex of the form (i, k′, a), k′ ≥ k, to v_j. Applying Properties 3 and 5 of observations, we conclude that there is no edge from any vertex of the form (i, k′, a), k′ ≥ k, to any vertex whose location is j. ∎
###### Lemma 8.4.
Let G be an observation. Every vertex in V has only finitely many incoming edges in G.
###### Proof.
For contradiction, assume that there exists a vertex v with infinitely many incoming edges, and let i be the location of v. Then there must be a location j such that infinitely many vertices whose location is j have an outgoing edge to v. Fix such a location j. Note that j must be live in G.

Since there are infinitely many vertices whose location is j, by Property 4 of observations, we know that v has an outgoing edge to some vertex v′ whose location is j. Since infinitely many vertices whose location is j have an outgoing edge to v, fix some such vertex v″ whose index is greater than the index of v′. By Properties 3 and 5 of observations, we know that there exists an edge from v′ to v″. Thus, we see that there exist edges from v to v′, from v′ to v″, and from v″ to v, which yield a cycle. This contradicts the assumption that G is a DAG. ∎
### 8.2 Viable Observations
Now consider an observation G. If w is any sequence of vertices in G, then we define the event-sequence of w to be the sequence of actions obtained by replacing each vertex in w by its action.

We say that a trace t is compatible with an observation G provided that t|O_D is the event-sequence of some topological ordering of the vertices of G. G is a viable observation if there exists a trace t ∈ T_D that is compatible with G.
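A sequence of outputs compatible with a finite observation can be produced by topologically sorting the DAG. The sketch below uses Kahn's algorithm; the vertex encoding is our own, and the result is only the output portion of a compatible trace.

```python
from collections import deque

def compatible_output_sequence(vertices, edges):
    """Event-sequence of one topological ordering of a finite observation DAG
    (Kahn's algorithm). Returns the list of actions."""
    indegree = {v: 0 for v in vertices}
    for (_, w) in edges:
        indegree[w] += 1
    queue = deque(sorted(v for v in vertices if indegree[v] == 0))
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for (a, b) in edges:
            if a == v:
                indegree[b] -= 1
                if indegree[b] == 0:
                    queue.append(b)
    return [action for (_, _, action) in order]

v1, v2 = (1, 1, "x"), (1, 2, "y")
assert compatible_output_sequence({v1, v2}, {(v1, v2)}) == ["x", "y"]
```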
###### Lemma 8.5.
Let G be a viable observation, and suppose that trace t is compatible with G. For each location i, i is live in G iff i is live in t.
We now consider paths in an observation DAG, and their connection with strong sampling, as defined in Section 5.2. A path in an observation is a sequence of vertices such that, for each pair v, v′ of consecutive vertices in the path, (v, v′) is an edge of the observation.

A branch of an observation G is a maximal path in G. A fair branch of G is a branch of G that satisfies the additional property that, for every location i, if i is live in G, then the branch contains an infinite number of vertices whose location is i.
###### Lemma 8.6.
Let G be a viable observation, and suppose that trace t is compatible with G. Suppose b is a fair branch of G, and let t_b be the event-sequence of b. Then:
1. There exists a strong sampling t′ of t such that t′|O_D = t_b.
2. If D is a strong-sampling AFD, then there exists t′ ∈ T_D such that t′ is a strong sampling of t and t′|O_D = t_b.
###### Proof.
Fix G, t, b, and t_b as in the hypotheses of the lemma statement.

Proof of Part 1. Since b is a fair branch of G, for each location i that is live in G, b contains an infinite number of outputs at i. Furthermore, for each location i, the projection of t_b on the events at i is a subsequence of the projection of t on the AFD outputs at i. Therefore, by deleting all the AFD output events from t that do not appear in t_b, we obtain a strong sampling t′ of t such that t′|O_D = t_b.

Proof of Part 2. In Part 2, assume D is a strong-sampling AFD. From Part 1, we have already established that there exists a strong sampling t′ of t such that t′|O_D = t_b. Fix such a t′. By closure under strong sampling, since t ∈ T_D, we conclude that t′ ∈ T_D as well. ∎
Lemma 8.6 is crucial to our results. In Section 11, we describe an emulation algorithm that uses outputs from an AFD to produce viable observations; the emulations consider paths of the observation and simulate executions of a consensus algorithm with the AFD outputs from each path in the observation. Lemma 8.6 guarantees that each fair path in the observation corresponds to an actual sequence of AFD outputs from some trace of the AFD. In fact, the motivation for the closure-under-strong-sampling property is to establish Lemma 8.6.
### 8.3 Relations and Operations on Observations
The emulation construction in Section 11 will require processes to manipulate observations. To help with this, we define some relations and operations on DAGs and observations.
##### Prefix.
Given two DAGs G = (V, E) and G′ = (V′, E′), G is said to be a prefix of G′ iff G is a subgraph of G′ and, for every vertex v of G, the set of incoming edges of v in G is equal to the set of incoming edges of v in G′.
##### Union.
Let G_1 = (V_1, E_1) and G_2 = (V_2, E_2) be two observations. Then the union of G_1 and G_2, denoted G_1 ∪ G_2, is the graph (V_1 ∪ V_2, E_1 ∪ E_2). Note that, in general, this union need not be another observation. However, under certain conditions, wherein the observations are finite and “consistent” in terms of the vertices and the incoming edges at each vertex, the union of two observations is also an observation. We state this formally in the following lemma.
###### Lemma 8.7.
Let G_1 = (V_1, E_1) and G_2 = (V_2, E_2) be two finite observations. Suppose that the following hold:
1. There do not exist vertices (i, k, a) ∈ V_1 and (i, k, a′) ∈ V_2 with a ≠ a′.
2. If v ∈ V_1 ∩ V_2, then v has the same set of incoming edges (from the same set of other vertices) in G_1 and G_2.

Then G_1 ∪ G_2 is also an observation.
###### Proof.
Straightforward. ∎
##### Insertion.
Let G = (V, E) be a finite observation, i a location, and k the largest integer such that V contains a vertex of the form (i, k, a). Let v′ be a triple (i, k+1, a′). Then ins(G, v′), the result of inserting v′ into G, is a new graph (V′, E′), where V′ = V ∪ {v′} and E′ = E ∪ {(v, v′) : v ∈ V}. That is, ins(G, v′) is obtained from G by adding vertex v′ and adding edges from every vertex in V to v′.
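The insertion operation amounts to adding one vertex plus an edge from every existing vertex. The set-based sketch below uses our own encoding of observations as (vertices, edges) pairs.

```python
def ins(vertices, edges, v_new):
    """ins(G, v'): add vertex v' and an edge from every existing vertex to v'."""
    new_vertices = set(vertices) | {v_new}
    new_edges = set(edges) | {(u, v_new) for u in vertices}
    return new_vertices, new_edges

V, E = {(1, 1, "x")}, set()
V2, E2 = ins(V, E, (1, 2, "y"))
assert (1, 2, "y") in V2
assert E2 == {((1, 1, "x"), (1, 2, "y"))}
```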
###### Lemma 8.8.
Let G = (V, E) be a finite observation and i a location. Let k be the largest integer such that V contains a vertex of the form (i, k, a). Let v′ be a triple (i, k+1, a′), where a′ ∈ O_{D,i}. Then ins(G, v′) is a finite observation.
### 8.4 Limits of Sequences of Observations
Consider an infinite sequence G_1, G_2, … of finite observations, where each is a prefix of the next. Then the limit of this sequence is the graph G = (V, E) defined as follows:
• V is the union of the vertex sets of G_1, G_2, ….
• E is the union of the edge sets of G_1, G_2, ….
###### Lemma 8.9.
For each positive integer k, G_k is a prefix of the limit G.
Under certain conditions, the limit of the infinite sequence of observations is also an observation; we note this in Lemma 8.10.
###### Lemma 8.10.
Let G = (V, E) be the limit of the infinite sequence G_1, G_2, … of finite observations, where each is a prefix of the next. Suppose that the sequence satisfies the following property:
1. For every vertex v ∈ V and every location j that is live in G, there exists a vertex v′ with location j such that E contains the edge (v, v′).

Then G is an observation.
###### Proof.
All properties are straightforward from the definitions, except for Property 4 of observations, which follows from the assumption of the lemma. ∎
We define an infinite sequence G_1, G_2, … of finite observations, where each is a prefix of the next, to be convergent if the limit of this sequence is an observation.
## 9 Execution Trees
In this section, we define a tree representing executions of a system S that are consistent with a particular observation G of a particular failure detector D. Specifically, we define a tree that describes executions of S in which the sequence of AFD outputs is exactly the event-sequence of some path in observation G.

Section 9.1 defines the system S for which the tree is defined. The tree is constructed in two parts: Section 9.2 defines a “task tree”, and Section 9.3 adds tags to the nodes and edges of the task tree to yield the final execution tree. Additionally, Sections 9.2 and 9.3 prove certain basic properties of execution trees, and they establish a correspondence between the nodes in the tree and finite executions of S. Section 9.4 defines that two nodes in the execution tree are “similar” to each other if they have the same tags, and therefore correspond to the same execution of S; the section goes on to prove certain useful properties of nodes in the subtrees rooted at any two similar nodes. Section 9.5 defines that two nodes in the execution tree are “similar-modulo-i” to each other if the executions corresponding to the two nodes are indistinguishable for the process automata at every location except possibly the process automaton at i; the section goes on to prove certain useful properties of nodes in the subtrees rooted at any two similar-modulo-i nodes. Section 9.6 establishes useful properties of nodes that are in different execution trees constructed using two observations, one of which is a prefix of the other. Finally, Section 9.7 proves that a “fair branch” of an infinite execution tree corresponds to a fair execution of system S. The major results in this section are used in Sections 10 and 11, which show that Ω_f is a weakest strong-sampling AFD to solve consensus if at most f locations crash.
### 9.1 The System
Fix to be a system consisting of a distributed algorithm , channel automata, and an environment automaton such that solves a crash problem using in .
The system contains the following tasks. The process automaton at contains a single task . Each channel automaton , where contains a single task, which we also denote as ; the actions in task are of the form , which results in a message received at location . Each automaton has tasks , where ranges over some fixed task index set . Let denote the set of all the tasks of .
Each task has an associated location, which is the location of all the actions in the task. The tasks at location are , , and .
Recall from Section 6 that each process automaton, each channel automaton, and the environment automaton have unique initial states. Therefore, the system has a unique initial state. From the definitions of the constituent automata of , we obtain the following lemma.
###### Lemma 9.1.
Let be an execution of system , and let be the trace of such that for some location , does not contain any locally-controlled actions at and . Then, there exists an execution of system such that is the trace of .
###### Proof.
Fix , and as in the hypothesis of the claim. Let be the prefix of whose trace is . Let be the final state of . Let be the execution , where is the state of when is applied to state .
Note that disables all locally-controlled actions at and , and it does not change the state of any other automaton in . Therefore, the states of all automata in except for and are the same in states and . Also, note that does not contain any locally-controlled action at or , and can be applied to state . Therefore, can also be applied to , thus extending to an execution of . By construction, the trace of is . ∎
### 9.2 The Task Tree

For any observation , we define a tree that describes all executions of in which the sequence of AFD output events is the event-sequence of some path in .
We describe our construction in two stages. The first stage, in this subsection, defines the basic structure of the tree, with annotations indicating where particular system tasks and observation vertices occur. The second stage, described in the next subsection, adds information about particular actions and system states.
The task tree is rooted at a special node called “” which corresponds to the initial state of the system . The tree is of height ; if is infinite, the tree has infinite height. (Footnote: The intuitive reason for limiting the depth of the tree to is the following. If is a finite observation, then none of the locations in are live in . In this case, we want all the branches in the task tree to be finite. On the other hand, if is an infinite observation, then some location in is live in , and in this case we want all the branches in the task tree to be infinite. One way to ensure these properties is to restrict the depth of the tree to .) Every node in the tree that is at a depth is a leaf node. All other nodes are internal nodes. Each edge in the tree is labeled by an element from . Intuitively, the label of an edge corresponds to a task being given a “turn” or an AFD event occurring. An edge with label is said to be an -edge, for short. The child of a node that is connected to by an edge labeled is said to be an -child of .
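Ignoring the AFD-event edges and all tags (whose symbols did not survive extraction), the pure task-edge skeleton described above — one outgoing edge per task at every internal node, leaves at the maximum depth — can be sketched as follows. The task names, the height, and `build_task_tree` itself are illustrative placeholders, not from the paper:

```ruby
# Toy skeleton of the task tree: every internal node has exactly one
# outgoing edge per task, and every node at depth `height` is a leaf.
Node = Struct.new(:path, :children)

def build_task_tree(tasks, height, path = [])
  node = Node.new(path, {})
  return node if path.length == height          # leaf at maximum depth
  tasks.each do |t|                             # exactly one t-edge per task
    node.children[t] = build_task_tree(tasks, height, path + [t])
  end
  node
end

root = build_task_tree([:t1, :t2], 2)
```

Each root-to-node path spells out a sequence of task "turns"; the real construction additionally interleaves AFD-event edges and attaches vertex tags along these paths.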
In addition to labels at each edge, the tree is also augmented with a vertex tag, which is a vertex in , at each node and edge. We write for the vertex tag at node and for the vertex tag at edge . Intuitively, each vertex tag denotes the latest AFD output that occurs in the execution of corresponding to the path in the tree from the root to node or the head node of edge (as appropriate). The set of outgoing edges from each node in the tree is determined by the vertex tag .
We describe the labels and vertex tags in the task tree recursively, starting with the node. We define the vertex tag of to be a special placeholder element , representing a “null vertex” of . For each internal node with vertex tag , the outgoing edges from and their vertex tags are as follows.
• Outgoing , , and edges. For every task in , the task tree contains exactly one outgoing edge from with label from , i.e., an -edge. The vertex tag of is .
• Outgoing -edges. If , then for every vertex of | 2021-03-05 09:40:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896328806877136, "perplexity": 640.5856050034268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370752.61/warc/CC-MAIN-20210305091526-20210305121526-00335.warc.gz"} |
https://search.r-project.org/CRAN/refmans/BDgraph/html/link2adj.html | ### Description
Create an adjacency matrix from a set of links, such as those belonging to an object of classes "sim" (from function bdgraph.sim) or "graph" (from function graph.sim).
### Usage
link2adj( link, p = NULL )
### Arguments
link: A (2 \times p) matrix or a data.frame corresponding to the links from the graph structure.

p: The number of nodes of the graph.
### Value
An adjacency matrix corresponding to a graph structure in which a_{ij}=1 if there is a link between nodes i and j, otherwise a_{ij}=0.
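The a_{ij} contract above is straightforward to reproduce; here is a sketch in Ruby rather than R, purely to illustrate the data structures. The helper `link_to_adj`, its 1-based node labels, and the symmetric fill are my own assumptions, not BDgraph's internals:

```ruby
# Build a p x p adjacency matrix from a two-column list of links, following
# the contract above: a[i][j] = 1 iff there is a link between nodes i and j
# (filled symmetrically here, for an undirected graph).
def link_to_adj(links, p: nil)
  p ||= links.flatten.max                # infer the node count if not given
  adj = Array.new(p) { Array.new(p, 0) }
  links.each do |i, j|                   # links use 1-based node labels
    adj[i - 1][j - 1] = 1
    adj[j - 1][i - 1] = 1
  end
  adj
end

adj = link_to_adj([[1, 2], [2, 3]], p: 3)
```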
### References
Mohammadi, R. and Wit, E. C. (2019). BDgraph: An R Package for Bayesian Structure Learning in Graphical Models, Journal of Statistical Software, 89(3):1-30
Mohammadi, A. and Wit, E. C. (2015). Bayesian Structure Learning in Sparse Gaussian Graphical Models, Bayesian Analysis, 10(1):109-138
Letac, G., Massam, H. and Mohammadi, R. (2018). The Ratio of Normalizing Constants for Bayesian Graphical Gaussian Model Selection, arXiv preprint arXiv:1706.04416v2
Mohammadi, A. et al (2017). Bayesian modelling of Dupuytren disease by using Gaussian copula graphical models, Journal of the Royal Statistical Society: Series C, 66(3):629-645
Dobra, A. and Mohammadi, R. (2018). Loglinear Model Selection and Human Mobility, Annals of Applied Statistics, 12(2):815-845
Pensar, J. et al (2017) Marginal pseudo-likelihood learning of discrete Markov network structures, Bayesian Analysis, 12(4):1195-215
### See Also

adj2link, graph.sim
### Examples
# Generating a 'random' graph
adj <- graph.sim( p = 6, vis = TRUE )

# Extracting the links of the graph, then rebuilding its adjacency matrix
link <- adj2link( adj )
link2adj( link, p = 6 )
http://ymsc.tsinghua.edu.cn/cn/content/show/190-98.html | ## 报告人简介
2019-11-15
Speaker: 李鹏辉 Penghui Li (YMSC)
Title: Whittaker sheaf, commuting scheme and Geometric Langlands conjecture
Abstract: It is a long-standing open problem whether the scheme of commuting matrices is reduced. We suggest a way to tackle this problem via Langlands duality. In the talk, we briefly recall the definition of the commuting scheme, and how it is related to Ben-Zvi--Nadler's Betti Geometric Langlands (BGL) conjecture. Then we summarize recent progress on the BGL conjecture, and sketch a proof of reducedness of the invariant functions on commuting schemes, based on the conjecture in genus 1. This is based on joint work with David Nadler.
2019-11-1
Speaker: 王彬 Bin Wang (YMSC)
Title: Parabolic Hitchin Maps and Their Generic Fibers: GL_{n}-Case
Abstract: In this talk, we first recall Hitchin maps and the Beauville--Narasimhan--Ramanan correspondence. Then, in the GL_{n} case, we talk about a parabolic analogue of the Beauville--Narasimhan--Ramanan correspondence which in particular implies that generic fibers of parabolic Hitchin maps are still Picard varieties. We will also calculate the dimension of the global nilpotent cone in this case. This is a joint work with Xiaoyu Su and Xueqing Wen.
2019-10-25
Speaker: Michael Ehrig (Beijing Institute of Technology)
Title: Lie Superalgebras via Schur-Weyl duality and Categorification
Abstract: In this talk, I will outline an approach to understanding and describing the category of finite-dimensional representations of a classical Lie superalgebra. Due to the non-semisimplicity, methods different from the ones for semisimple Lie algebras need to be applied to describe this category. Using variations of Schur-Weyl duality, respectively the fundamental theorems of invariant theory, we formulate the problem of understanding it in terms of centralizer algebras. These centralizer algebras are then described via methods from the categorification of quantum groups and link invariants, yielding the description of the category of finite-dimensional representations for some of the classical Lie superalgebras. This is joint work with Catharina Stroppel.
2019-10-11
Speaker: Matthew Young (MPI Bonn)
Title: Twisted loop transgression and categorical character theory
Abstract: This talk will be an introduction to the Real categorical representation theory of a finite 2-group, in which a graded group acts by autoequivalences or anti-autoequivalences of a category. In particular, I will discuss the geometric character theory of such representations and its formulation in terms of unoriented mapping stacks. Time permitting, I'll discuss applications to topological field theory and monoidal categories.
2019-6-6
Speaker: 苏长剑 Su Changjian (University of Toronto)
Title: Categorification of K-theory stable bases of the Springer resolution
Abstract: The K-theoretic Maulik—Okounkov stable basis depends on the choice of an alcove. In this talk, we compare the stable bases of the Springer resolution associated to different alcoves. We prove that the change of alcoves operators are given by the Demazure—Lusztig operators in the affine Hecke algebra. We then show that these bases are categorified by the Verma modules of the Lie algebra, under the localization of Lie algebras in positive characteristic of Bezrukavnikov, Mirkovic and Rumynin. Joint work with Gufang Zhao and Changlong Zhong.
2019-5-24
Speaker: 覃帆 Qin Fan (Shanghai Jiaotong University)
Title: Bases for upper cluster algebras and tropical points
Abstract: It is known that many (upper) cluster algebras possess very different good bases which are parametrized by the tropical points of Langlands dual cluster varieties. For any given injective-reachable upper cluster algebra, we describe all of its bases parametrized by the tropical points. In addition, we obtain the existence of the generic bases for such upper cluster algebras. Our results apply to many cluster algebras arising from representation theory and higher Teichmüller theory.
2019-5-17
Speaker: Gus Lehrer (University of Sydney)
Title: Tangle categories and extension of the Temperley-Lieb category equivalence for quantum $\mathfrak{sl}_2$
Abstract: Let $U_q$ be the quantum group of $\mathfrak{sl}_2$. It is classically known that there is an equivalence between the category of representations of the form $V^r:=V\otimes V\otimes...\otimes V$ of $U_q$, where $V$ is the 2-dimensional simple representation, and the Temperley-Lieb category $TL(q)$, which is described in terms of diagrams. I shall describe an extension of this equivalence to the Temperley-Lieb category $TLB(q,Q)$. The corresponding representation category consists of certain infinite dimensional representations of $U_q$. This is joint work with Ruibin Zhang and Kenji Iohara.
2019-4-26
Speaker: 卢明 Lu Ming (Sichuan University)
Title: Hall Algebras and i-Quantum groups
Abstract: A quantum symmetric pair consists of a quantum group and its coideal subalgebra (called an i-quantum group). A quantum group can be viewed as an example of i-quantum groups associated to symmetric pairs of diagonal type. In this talk, we present a new Hall algebra construction of i-quantum groups. This relies on the framework of modified Ringel-Hall algebras defined with Liangang Peng. Our approach leads to monomial bases, PBW bases, and braid group actions for i-quantum groups. In case of symmetric pairs of diagonal type, our work reduces to a reformulation of Bridgeland’s Hall algebra realization of a quantum group, which in turn was a generalization of earlier constructions of Ringel and Lusztig for half a quantum group. This is joint work with Weiqiang Wang.
2019-4-19
Speaker: 华诤 Hua Zheng (Hong Kong University)
Title: On quivers with analytic potentials
Abstract: Given a finite quiver, an element of the complete path algebra over the field of complex numbers is called analytic if its coefficients are bounded by a geometric series.
We may develop a parallel construction of the Jacobi algebra and the Ginzburg algebra for a quiver with an analytic potential. Analytic potentials occur naturally in the deformation theory of sheaves on projective Calabi-Yau manifolds. They play a central role in the construction of the critical cohomological Hall algebra. It turns out that analytic potentials admit much richer structures in noncommutative differential calculus compared with the formal ones. I will give a brief introduction to some of my recent work on this topic.
2019-3-22
Speaker: Gwyn Bellamy (University of Glasgow)
Title: Resolutions of symplectic quotient singularities
Abstract: In this talk I will explain how one can explicitly construct all crepant resolutions of the symplectic quotient singularities associated to wreath product groups. The resolutions are all given by Nakajima quiver varieties. In order to prove that all resolutions are obtained this way, one needs to describe what happens to the geometry as one crosses the walls inside the GIT parameter space for these quiver varieties. This is based on joint work with Alistair Craw.
2019-3-8
Speaker: 颜文斌 Yan Wenbin (YMSC)
Title: From S^1-fixed points to admissible representations
Abstract: I will present some observations on the relation between fixed points on the moduli space of the Hitchin system and admissible representations of affine Kac-Moody algebras. I will mainly explain the one-to-one correspondence between fixed points and admissible representations through examples, and extract a general statement. Other consequences will also be discussed.
• ymsc@tsinghua.edu.cn | 2020-02-20 10:18:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6047722101211548, "perplexity": 734.6439004526975}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144722.77/warc/CC-MAIN-20200220100914-20200220130914-00469.warc.gz"} |
http://wavewatching.net/2012/03/21/information-is-physical-26/ | # Information is Physical
Even when the headlines are not gut-wrenching
One of the most astounding theoretical predictions of the late 20th century was Landauer's discovery that erasing memory is linked to entropy, i.e., heat is produced whenever a bit is fully and irrevocably erased. As far as theoretical work goes, this is even somewhat intuitively understandable: after all, increasing entropy essentially means moving to a less ordered phase state (technically, a micro-ensemble that is less special). And what could possibly be more ordered than a computer memory register?
Recently this prediction has been confirmed by a very clever experiment. Reason enough to celebrate this with another "blog memory-hole rescue":
If you ever wondered what the term "adiabatic" in conjunction with quantum computing means, Perry Hooker provides the answer in this succinct explanation. His logic gate discussion shows why Landauer's principle has implications far beyond the memory chips, and in a sense, undermines the entire foundation of classical information processing.
Truly required reading if you want to appreciate why quantum computing matters.
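For a sense of scale: Landauer's bound works out to k_B · T · ln 2 of heat per erased bit, which at room temperature is astonishingly small. A quick back-of-envelope (the 300 K temperature is my choice; the constant is the standard SI value):

```ruby
# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
K_BOLTZMANN = 1.380649e-23                     # J/K (exact in the 2019 SI)
t_room      = 300.0                            # K

per_bit  = K_BOLTZMANN * t_room * Math.log(2)  # ~2.87e-21 J per bit
gigabyte = per_bit * 8.0e9                     # erasing a full gigabyte

puts "per bit: #{per_bit} J"                   # ~2.87e-21 J
puts "per GB:  #{gigabyte} J"                  # ~2.3e-11 J
```

Real hardware dissipates many orders of magnitude more than this per bit, which is part of why the experimental confirmation mentioned above was such a delicate measurement.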
## 3 thoughts on “Information is Physical”
1. More like loss of information equals waste of energy. | 2015-07-07 13:12:07 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291754126548767, "perplexity": 2207.0154084027686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099361.57/warc/CC-MAIN-20150627031819-00222-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://pos.sissa.it/379/043/ | Volume 379 - The 18th International Workshop on Polarized Sources, Targets, and Polarimetry (PSTP2019) - Polarized Sources
Development of a Polarized 3He++ Ion Source for the EIC
M. Musgrave,* R. Milner, G. Atoian, E. Beebe, S. Ikeda, S. Kondrashev, M. Okamura, A. Poblaguev, D. Raparia, J. Ritter, S. Trabocchi, A. Zelenski, J. Maxwell
*corresponding author
Full text: pdf
Published on: September 23, 2020
Abstract
The capability of accelerating a high-intensity polarized $^{3}$He ion beam would provide an effective polarized neutron beam for new high-energy QCD studies of nucleon structure. This development is essential for the future Electron Ion Collider, which could use a polarized $^{3}$He ion beam to probe the spin structure of the neutron. The proposed polarized $^{3}$He ion source is based on the Electron Beam Ion Source (EBIS) currently in operation at Brookhaven National Laboratory. $^{3}$He gas would be polarized within the 5 T field of the EBIS solenoid via Metastability Exchange Optical Pumping (MEOP) and then pulsed into the EBIS vacuum and drift tube system where the $^{3}$He will be ionized by the 10 Amp electron beam. The goal of the polarized $^{3}$He ion source is to achieve $2.5 \times 10^{11}$ $^{3}$He$^{++}$/pulse at 70% polarization. An upgrade of the EBIS is currently underway. An absolute polarimeter and spin-rotator is being developed to measure the $^{3}$He ion polarization at 6 MeV after initial acceleration out of the EBIS. The source is being developed through collaboration between BNL and MIT.
DOI: https://doi.org/10.22323/1.379.0043
Open Access | 2020-10-28 17:36:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47020789980888367, "perplexity": 6428.084767891632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107900200.97/warc/CC-MAIN-20201028162226-20201028192226-00001.warc.gz"} |
http://kmj.knu.ac.kr/journal/list.html?Vol=41&Num=1&mod=vol&book=journal&aut_box=Y&sub_box=Y&pub_box=Y | << Previous Issue Kyungpook Mathematical Journal (Vol. 41, No. 1) Next Issue >>
Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 1—177
Original Articles
How to Characterize Equalities for the Moore-Penrose Inverse of a Matrix Yongge Tian Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 1—15
δ-primary Ideals of Commutative Rings Zhao Dongsheng Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 17—22
Some Comments on Simple Singular GP-injective Modules Jin Yong Kim and Hee Sun Yang1, Nam Kyun Kim2, Sang Bok Nam3 Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 23—2
A Study on Near-rings with SR-Conditions Yong Uk Cho Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 29—34
Matrix Transformations on the Nakano Vector-valued Sequence Space Chanan Sudsukh1, Suthep Suantai2 Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 35—44
Goodman-Rønning-Type Harmonic Univalent Functions Thomas Rosy, B. Adolph Stephen, K. G. Subramanian1, Jay M. Jahangiri2 Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 45—54
Hardy Inequalities in Higher Dimensional Spaces Tieling Chen Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 55—64
Degree of Approximation by a New Sequence of Linear Operators P. N. Agrawal1, Kareem J. Thamer2 Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 65—73
Oscillation Criteria for a Class of Hyperbolic Equations with Functional Arguments Norio Yoshida Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 75—85
A Family of Fractional Integrals Pertaining to Special Functions V. B. L. Chaurasia, Anju Godika Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 87—95
New Properties of a Generalization of Hypergeometric Series Associated with Feynman Integrals K. C. Gupta and R. C. Soni Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 97—104
Generalized Theorems Pertaining to Double Integral Transforms V. B. L. Chaurasia, Neeti Gupta Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 105—113
Some New Inequalities for the Logarithmic Map, with Applications to Entropy and Mutual Information S. S. Dragomir1, C. E. M. Pearce2, J. Pecarié3 Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 115—125
On Population Growth Model with Density Dependence M. A. Basudan Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 127—136
Strong Forms of Continuity in Fuzzy Topological Spaces I. M. Hanafy1, H. S. Al-Saadi2 Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 137—147
Notes on a Connection of the Pseudo-Hermitian Structure Hyun Suk Kim Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 149—154
Pseudo-Schwarzian Tensor of Weyl Manifolds Fumio Narita Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 155—161
Real Hypersurfaces of a Complex Hyperbolic Space Satisfying $L_\xi S=0$ U-Hang Ki1, Jong Taek Cho2, In-Yeong Yoo3 Kyungpook Mathematical Journal 2001 Vol. 41, No. 1, 163—177 | 2018-04-21 00:13:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3146570920944214, "perplexity": 2436.5454871280976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944848.33/warc/CC-MAIN-20180420233255-20180421013255-00532.warc.gz"} |
http://www.reference.com/browse/omitted | Definitions
Omitted-variable bias
Omitted-variable bias (OVB) is the bias that appears in estimates of parameters in a regression analysis when the assumed specification is incorrect, in that it omits an independent variable that should be in the model.
Omitted-variable bias in linear regression
Two conditions must hold true for omitted variable bias to exist in linear regression:
• the omitted variable must be a determinant of the dependent variable (i.e., its true regression coefficient is not zero); and
• the omitted variable must be correlated with one or more of the included independent variables.
As an example, consider a linear model of the form $y_i = x_i \beta + z_i \delta + u_i$, where $x_i$ is treated as a vector and $z_i$ is a scalar. For simplicity suppose that $E[u_i \mid x_i, z_i] = 0$. Now consider what happens if one were to regress $y_i$ on only $x_i$. Through the usual least squares calculus, the estimated parameter vector $\hat{\beta}$ is given by:

$$\hat{\beta} = (x'x)^{-1} x'y.$$

Substituting for $y$ based on the assumed linear model,

$$\hat{\beta} = (x'x)^{-1} x'(x\beta + z\delta + u) = (x'x)^{-1}x'x\,\beta + (x'x)^{-1}x'z\,\delta + (x'x)^{-1}x'u.$$

Taking expectations, the final term $(x'x)^{-1} x'u$ falls out by the assumed conditional expectation above. Simplifying the remaining terms:

$$E[\hat{\beta}] = \beta + \delta\,(x'x)^{-1} x'z.$$

The above is an expression for the omitted-variable bias in this case. Note that the bias equals $\delta$ times $(x'x)^{-1} x'z$, the coefficient from regressing $z$ on $x$ — that is, the portion of $z$ which is "explained" by $x$.
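The expectation formula can be checked numerically. Below is a small simulation sketch in Ruby with a single regressor and one omitted scalar; the coefficients and distributions are made up for illustration, and the generator is seeded so the run is reproducible:

```ruby
# Omitted-variable bias demo: y = x*beta + z*delta + u, but we regress y on x only.
rng = Random.new(42)
n = 20_000
beta, delta = 2.0, 1.5

x = Array.new(n) { rng.rand(-1.0..1.0) }
z = x.map { |xi| 0.8 * xi + rng.rand(-0.5..0.5) }   # z correlated with x
y = x.each_index.map { |i| beta * x[i] + delta * z[i] + rng.rand(-0.1..0.1) }

sxx = x.sum { |v| v * v }
sxy = x.each_index.sum { |i| x[i] * y[i] }
sxz = x.each_index.sum { |i| x[i] * z[i] }

beta_hat  = sxy / sxx                  # OLS of y on x alone (no intercept)
predicted = beta + delta * sxz / sxx   # bias formula: beta + delta*(x'x)^{-1} x'z
```

Here the omitted $z$ is positively correlated with $x$ and enters $y$ with $\delta > 0$, so the estimate of $\beta$ is biased upward by roughly $\delta \cdot 0.8$, matching the expectation formula.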
References
• Greene, W. H. Econometric Analysis, 2nd ed. Macmillan.
| 2014-12-20 00:40:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850078582763672, "perplexity": 616.729773946907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769121.74/warc/CC-MAIN-20141217075249-00030-ip-10-231-17-201.ec2.internal.warc.gz"}
https://codereview.stackexchange.com/questions/156975/leetcode-longest-substring-without-repeating-characters | # Leetcode: Longest substring without repeating characters
I saw the editorial solutions for this problem, but they are written in an unfamiliar language. I'd like to post my code, which worked, and understand the timing of my solution. I don't understand how to judge Big-O time yet.
def length_of_longest_substring(s)
  longest_sub, current_sub = "", ""
  s.each_char do |c|
    if current_sub.include?(c)
      longest_sub = current_sub if current_sub.length > longest_sub.length
      current_sub = current_sub[current_sub.index(c) + 1..-1] + c
    else
      current_sub += c
    end
  end
  longest_sub = current_sub if current_sub.length > longest_sub.length
  longest_sub.length
end
I assume that since I'm creating strings, my space complexity suffers somewhat. I'm not really sure how I would speed things up with time.
## 1 Answer
Your version pays for the string operations: `include?` and the substring slice each rescan `current_sub`, so the worst case is quadratic in the length of the string, and a new string is allocated on every step. You can avoid both by tracking indices instead.
def findLongestSubstring(inputString)
  hashMap = Hash.new             # char => (index of its last occurrence) + 1
  longestSubstringLength = 0
  jIndex = 0                     # right end of the sliding window
  iIndex = 0                     # left end of the sliding window
  while jIndex < inputString.length
    if hashMap[inputString[jIndex]]
      # The character repeats: slide the left end past its previous position.
      iIndex = [iIndex, hashMap[inputString[jIndex]]].max
    end
    longestSubstringLength = [longestSubstringLength, jIndex - iIndex + 1].max
    hashMap[inputString[jIndex]] = jIndex + 1
    jIndex = jIndex + 1
  end
  longestSubstringLength
end
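The same scan reads a little more naturally with Ruby conventions (snake_case names, `each_char.with_index`); the helper name `longest_unique_length` here is my own, not from either post:

```ruby
# Sliding window: `left` marks the start of the current repeat-free window,
# and last_seen[ch] holds the index just past ch's previous occurrence.
def longest_unique_length(str)
  last_seen = {}
  best = 0
  left = 0
  str.each_char.with_index do |ch, right|
    left = [left, last_seen[ch]].max if last_seen[ch]  # jump past the repeat
    best = [best, right - left + 1].max
    last_seen[ch] = right + 1
  end
  best
end

longest_unique_length("abcabcbb")  # => 3 ("abc")
```

Each character is visited once and every step is O(1) on average, so the whole scan is O(n) time with O(min(n, alphabet size)) extra space.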
This method makes use of a hash, for which searching and insertion are O(1) on average, so the whole scan runs in O(n) time. That is how you bring the complexity down from the quadratic string-based approach. | 2021-09-19 04:48:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5027577877044678, "perplexity": 4622.576388175418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056711.62/warc/CC-MAIN-20210919035453-20210919065453-00302.warc.gz"}
http://mathoverflow.net/questions/147174/spectrum-of-the-laplace-beltrami-operator-on-lp-where-is-it | # Spectrum of the Laplace-Beltrami operator on $L^p$: where is it?
On a noncompact Riemannian manifold $M$, the $L^2$-spectrum of the Laplace-Beltrami operator $\Delta$ sits inside $\mathbb{R}$ (by self-adjointness), either to the left or to the right of $0$ depending on sign convention. I know that under various curvature assumptions one can show that the $L^p$-spectrum is equal to the $L^2$-spectrum for $p \neq 2$.
Without making any geometric assumptions on $M$, is it possible to conclude that the $L^p$-spectrum of $\Delta$ is a subset of $\mathbb{R}$? If not, is there anything we can say about the $L^p$-spectrum without having to make any additional assumptions?
The answer has a lot to do with the off-diagonal decay of the resolvent $(\Delta - \lambda)^{-1}$. There are two rather different cases to consider: when $M$ is $\mathbb{R}^n$ and when $M = \mathbb{H}^n$. In the former case the $L^p$ spectrum is $[0,\infty)$ for every $p$, just as for $L^2$. This can be computed explicitly, but in fact there is a general theorem due to Sturm which states that the $L^p$ spectrum of the scalar Laplacian is independent of $p$ if the Ricci curvature is bounded below and if the volume of balls grows subexponentially. On the other hand, the $L^p$ spectrum of $\Delta$ on hyperbolic space behaves quite differently. There is a very nice paper by Davies, Simon and Taylor which explains this, but in retrospect the answer depends in that case rather simply on the known and very simple off-diagonal asymptotics of the Green function or resolvent (and the precise exponential decay of the volume form). The recent paper http://arxiv.org/pdf/0707.2477.pdf by A. Weber, see also some earlier work of his, shows that for various locally symmetric spaces of higher rank, the same sort of phenomenon occurs.
There are various other types of results scattered around the literature on this topic, but I am not aware whether there is (or could be) a sharp necessary and sufficient geometric condition for the spectrum to be independent of $p$.
this is great, thanks! – Alex Amenta Nov 7 '13 at 5:17 | 2015-10-13 13:39:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898065686225891, "perplexity": 218.48809907003945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738006925.85/warc/CC-MAIN-20151001222006-00220-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/11/lesson/11.2.2/problem/11-62 | ### Home > A2C > Chapter 11 > Lesson 11.2.2 > Problem11-62
11-62.
For each equation below, state the amplitude, period, and locator point, and then sketch two cycles of the graph.
1. $y = \operatorname{tan}\left(x\right)$
Amplitude: N/A
Period: π
Locator point: $\left(0,0\right)$
1. $y = \operatorname{tan}\left(x − π\right)$
The graph is shifted π radians to the right. How does this affect all of the properties of the graph listed above? | 2022-06-27 03:32:17 | {"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9332845211029053, "perplexity": 3285.2354010698327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103324665.17/warc/CC-MAIN-20220627012807-20220627042807-00158.warc.gz"} |
https://math.stackexchange.com/questions/162929/diophantine-equation-with-squares-over-3-variables?noredirect=1 | # diophantine equation with squares over 3 variables
I am trying to find solutions for this diophantine equation
$$x^2+y^2+x^2y^2=4z^2$$
I am looking for advice on a procedure to find all positive integer solutions for this equation.
• Add $1$ to both sides and you get $(x^2+1)(y^2+1)=4z^2+1$, which is only possible if $x$ and $y$ are even, which reduces to another problem posted earlier today: $(4x^2+1)(4y^2+1)=4z^2+1$. math.stackexchange.com/q/162862/7933 – Thomas Andrews Jun 25 '12 at 17:54
Considering that $x, y$ are both even, let $x^2=s$, $y^2=r$.
Then $sr+s+r=(2z)^2$, and this equation is equivalent to:
$(2s+r+1)^2=(r-1)^2+(2s)^2+(4z)^2$; you can check this, and the positive solutions of the last equation are given by the dimensions and the length of the diagonal of a rectangular box, which is a problem related to Pythagorean triples.
So $s=a$, $b=a+1$, $r=(4z^2-a)/b$,
where $b$ is a divisor of $a^2+4z^2$ with $b<\sqrt{a^2+4z^2}$ and $1<2z^2$.
One can solve this equation over the positive integers once the value of $a$ is known.
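A brute-force search (a sketch; the bound 50 is arbitrary) confirms that nontrivial solutions exist, for instance $(x,y,z)=(2,8,9)$, since $4+64+256=324=4\cdot 81$:

```ruby
# Enumerate positive solutions of x^2 + y^2 + x^2*y^2 = 4*z^2 with x <= y <= 50.
solutions = []
(1..50).each do |x|
  (x..50).each do |y|
    lhs = x * x + y * y + x * x * y * y
    next unless lhs % 4 == 0          # left side must be divisible by 4
    z = Integer.sqrt(lhs / 4)
    solutions << [x, y, z] if 4 * z * z == lhs
  end
end
p solutions.first  # => [2, 8, 9]
```

Note that the search never finds a solution with $x$ or $y$ odd, consistent with the parity argument in the comment above.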
• This is not a solution. – individ Aug 14 '16 at 14:02
• so what is it ? then Pythagorean solution is not a solution? – Mahmoud. A .Solomon Aug 14 '16 at 14:07 | 2020-11-28 17:43:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8257459998130798, "perplexity": 214.9657203489012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195687.51/warc/CC-MAIN-20201128155305-20201128185305-00186.warc.gz"} |
https://stats.stackexchange.com/questions/559354/comparing-performance-of-drugs-with-different-variances-and-sample-sizes | # Comparing performance of drugs with different variances and sample sizes
I am considering the following hypothetical example. Suppose that I have a classification model that predicts whether a patient will survive or not. The features of this classification model are $$\mathbf{x} = (x_1,\dots,x_n)$$, which represent quantities related to a drug and some other variables like hospital condition, doctor care, etc.
Suppose I have 1000 drugs and I have $$n_i$$ samples of the features $$\mathbf{x}$$ for each drug, $$i=1,\dots,1000$$. Using the $$n_i$$ feature samples for each drug, I can then obtain $$n_i$$ samples of $$p_i$$, the probability that a patient will survive or not.
Using this data, I then want to rank the drugs from best to worst. Is there a "metric" that I can compute based on these samples? Naturally, I'd like to compare the mean of $$p_i,i=1,\dots,1000$$ but the standard deviations are different and the sample size $$n_i$$ are also different. The difference in the sample size can be up to 1-3 orders of magnitude, i.e. one drug can have 10 samples while another can have 2000 samples.
I was browsing online and read that a Welch's t-test is applicable to my situation. However, I have a few issues:
1. t-test is done pairwise. Given that I'm comparing 1000 drugs, it would be hard to produce a ranking or even obtain the top 10 drugs, for example.
2. The t-test only compares whether $$\mu_X = \mu_Y$$ or $$\mu_X \neq \mu_Y$$ where $$X$$ and $$Y$$ are 2 groups. How can we say that one group has a larger mean than the other?
Any suggestions/references on this matter? Thanks!
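For reference, the Welch statistic mentioned above is straightforward to compute from two samples; a minimal sketch with made-up data (no p-value or ranking logic):

```ruby
# Welch's t statistic and Welch-Satterthwaite degrees of freedom for two
# samples with unequal variances and unequal sizes.
def welch_t(a, b)
  mean = ->(v) { v.sum(0.0) / v.size }
  var  = ->(v, m) { v.sum(0.0) { |x| (x - m)**2 } / (v.size - 1) }
  ma, mb = mean.(a), mean.(b)
  va, vb = var.(a, ma), var.(b, mb)
  se2 = va / a.size + vb / b.size            # squared standard error
  t   = (ma - mb) / Math.sqrt(se2)
  df  = se2**2 / ((va / a.size)**2 / (a.size - 1) +
                  (vb / b.size)**2 / (b.size - 1))
  [t, df]
end

t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
puts t   # ~ -1.90
puts df  # ~ 5.88
```

As the answer below notes, a pile of pairwise statistics like this is not by itself a defensible ranking criterion.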
The only circumstance where it is realistic to think of comparing a thousand drugs would be a pre-clinical screening program and the appropriate advice should relate to assay-related issues and cost-benefit analyses with consideration of what happens to the drugs that are classed as 'hits'. It is not a purely statistical issue.
If you perform a t-test (Welch or standard), or any other significance test (e.g. a permutations test), you will have a list of P-values that would allow you to rank the drugs on the basis of the statistical evidence in their respective datasets against their respective null hypotheses. That might be satisfactory for a statistics student, but it would be terrible for a clinician because it would allow a drug for which there is strong evidence for a minor clinical benefit to rank higher than a drug with moderate evidence for a huge benefit.
You need criteria for ranking that include the features that are important to the types of inference that you need to make. If you choose the mean benefit then you do not need any statistical procedure beyond calculation of the means.
Ranking drugs should be more complicated than you might think. For example, some drugs benefit only a subset of patients, and some cause harm to only a subset. Treating the patients as a single population might lead to a drug that is curative to some people ranking lower than a drug that brings a minor benefit to most patients. Another example of complexity is the fact that drugs may need different doses and regimens in different people. | 2022-07-01 10:07:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6287623047828674, "perplexity": 591.5337161496748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00199.warc.gz"} |
https://en.m.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Finding_the_Distance_of_a_Remote_Object/Radar | 1. What sort of wave does your system use? What is an approximate wavelength of this wave?
Radio waves, with a wavelength ranging from 2.7mm to 100m.
2. What sort of distance is it usually used to measure? What sort of length would you expect the distance to be?
The distance to an object within the radio horizon. This width is given by the formula:
${\displaystyle \mathrm {horizon} _{\mathrm {km} }=3.569\times {\sqrt {\mathrm {height} _{\mathrm {metres} }}}.}$
So, a radar 10m above the Earth's surface has a range of 11.3 km.
3. Why is measuring this distance useful to society?
e.g. Radar is used at airports to locate aeroplanes and co-ordinate them so that they can land safely, avoiding collisions.
4. Draw a labelled diagram of your system.
5. Explain how the system works, and what data are collected.
The 'dish' rotates, and the transmitter on it transmits a pulse of radio waves. The waves, if they hit an aircraft, are reflected by it. The dish reflects the reflected radio pulse onto a receiver (allowing for some variety in incoming angles due to varying distances of aircraft). The time taken for the signal to travel to the aircraft and back is recorded, and the speed of the radio pulse (3 × 10⁸ m s⁻¹) is already known.
6. Explain how the distance to the object is calculated using the data collected.
${\displaystyle v={\frac {s}{t}}}$
${\displaystyle s=tv\,}$,
where s = distance, t = time and v = velocity. In this case, the time taken to travel to the aircraft t is half of the total time taken to travel to the aircraft and back T, so:
${\displaystyle s={\frac {Tv}{2}}}$
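Putting the two formulas together, a short numerical check (a sketch; the 10 m example reproduces the range quoted earlier):

```ruby
# Radar horizon (km) for an antenna height in metres, and one-way target
# distance from a round-trip echo time, using v = 3 x 10^8 m/s.
SPEED_OF_LIGHT = 3.0e8 # m/s

def horizon_km(height_m)
  3.569 * Math.sqrt(height_m)
end

def target_distance_m(round_trip_time_s)
  SPEED_OF_LIGHT * round_trip_time_s / 2.0   # halve: out-and-back path
end

puts horizon_km(10.0).round(1) # => 11.3 (km), as in the text
puts target_distance_m(1.0e-4) # a 100 microsecond echo, about 15000 m
```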
7. What limitations does your system have? (e.g. accuracy, consistency)
• Random noise e.g. birds, weather | 2022-12-07 14:39:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5062907934188843, "perplexity": 918.448645295246}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711162.52/warc/CC-MAIN-20221207121241-20221207151241-00833.warc.gz"} |
https://physicscatalyst.com/class-6/Important-questions_class6_science_fun-with-magnets-2.php | # Practice Worksheets for Class 6 Science Chapter 13 Fun with Magnets
In this page we have Practice Worksheets for Class 6 Science Chapter 13 Fun with Magnets . Hope you like them and do not forget to like , social share and comment at the end of the page.
## Multiple Choice Questions
Question 1
What will happen if take magnetic compass near a bar magnetic?
(a) The needle will deflect
(b) The needle will not deflect
(c) The needle will reverse the direction
(d) None of these
Question 2
The North end of the freely suspended magnet points towards?
(a) Geographical West
(b) Geographical East
(c) Geographical North
(d) Geographical South
Question 3
Which of the following is true of magnets?
(a) Like poles repel each other
(b) Opposite pole attract each other
(c) magnets has two poles (North and South)
(d) All the above
Question 4
What is incorrect of magnets?
(a) Magnetic power is more in the middle of bar magnets
(b) Magnetite is a natural magnet
(c) Magnetic compass always aligned towards North south direction
(d) None of the above
Question 5
A bar magnet is immersed in a heap of iron filings and pulled out. The amount of iron filling clinging to the?
(a) North pole is almost equal to the south pole.
(b) North pole is much more than the south pole.
(c) North pole is much less than the south pole.
(d) Magnet will be same all along its length.
Question 6
Match the column
(p) N-N        (u) Attraction
(q) S-N        (v) Repulsion
(r) S-S
(s) N-S
(a) P -> V, Q ->U, R-> V,S-> U
(b) P -> U, Q ->V, R-> V,S-> U
(c) P -> V, Q ->U, R-> U,S-> V
(d) None of these
Question 7
The North Pole of a magnetic needle is painted
(a) red
(b) blue
(c) green
(d) black
Question 8
Statement A: Magnetism of a magnet is lost by hammering
Statement B: Magnetism of a magnet is lost by breaking it
(a) Statement A is correct only
(b) Statement B is correct only
(c) Both the statement A and B are correct
(d) Both the statement A and B are incorrect
Question 9
Match the column
(p) Nickel     (u) Magnetic Material
(q) Paper      (v) Non-Magnetic Material
(r) Wood
(s) Iron
(a) P -> V, Q ->U, R-> V,S-> U
(b) P -> U, Q ->V, R-> V,S-> U
(c) P -> V, Q ->U, R-> U,S-> V
(d) None of these
## Long Answer type Questions
Question 10
What are magnetic and nonmagnetic Materials
Question 11
What are different type of magnets? And where are the poles located?
Answer:
1. (a)
2. (c)
3. (d)
4. (a)
5. (a)
6. (a)
7. (a)
8. (a)
9. (b)
10.
Magnetic materials: Materials that are attracted by a magnet are called magnetic materials. Objects made of materials such as iron, cobalt and nickel are magnetic objects.
Non-magnetic materials: Materials that are not attracted by magnets are called non-magnetic materials. Examples of non-magnetic materials include rubber, wood, feather etc.
11.
Bar magnet: the poles are located at the ends of the bar.
Horseshoe magnet: the poles are located at the two free ends of the 'U' shape.
Cylindrical magnet: the poles are located at the two circular ends of the cylinder.
### Practice Question
Question 1 What is $\frac {1}{2} + \frac {3}{4}$ ?
A)$\frac {5}{4}$
B)$\frac {1}{4}$
C)$1$
D)$\frac {4}{5}$
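As a side check, the fraction sum in Question 1 can be verified exactly with Ruby's Rational class:

```ruby
# 1/2 + 3/4 as an exact fraction (answer A).
sum = Rational(1, 2) + Rational(3, 4)
puts sum  # => 5/4
```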
Question 2 Pinhole camera produces an ?
A)An erect and small image
B)an Inverted and small image
C)An inverted and enlarged image
D)None of the above | 2021-04-15 04:07:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44477519392967224, "perplexity": 7549.253359043389}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00164.warc.gz"} |
http://math.stackexchange.com/questions/286690/prove-a-power-series-funcion-is-continuous | Prove a power series function is continuous
How do i prove the function:
$g(x)=\sum_{n=1}^{\infty }\frac{1}{n^{0.5}}(x^{2n}-x^{2n+1})$
is continuous in [0,1]?
I tried to look at this functions as:
$g(x)=(1-x)\sum_{n=1}^{\infty }\frac{1}{n^{0.5}}x^{2n}$
but I couldn't find a way solving it...
Your TeX is not making sense, $\sqrt{^n}$ – GEdgar Jan 25 at 15:03
my mistake... now it's supposed to be fine – user59640 Jan 25 at 15:05
Did you try to calculate the radius of convergence? – Fabian Jan 25 at 15:13
Yeah it is 1... But it just proves that the function is continuous in every closed interval $[0,r]$ with $0 < r < 1$.
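One standard estimate (a sketch): each term of the series can be bounded uniformly on $[0,1]$ by elementary calculus,

```latex
f_n(x) = x^{2n} - x^{2n+1} = x^{2n}(1-x), \qquad
f_n'(x) = x^{2n-1}\bigl(2n - (2n+1)x\bigr),
\qquad
f_n\!\Bigl(\frac{2n}{2n+1}\Bigr)
  = \Bigl(\frac{2n}{2n+1}\Bigr)^{2n}\frac{1}{2n+1}
  \approx \frac{1}{e\,(2n+1)}.
```

On $(0,1)$ the derivative vanishes only at $x = \frac{2n}{2n+1}$, so this value is the maximum of $f_n$ on $[0,1]$; dividing by $\sqrt{n}$ gives a term of order $n^{-3/2}$, which is summable, so the series converges uniformly on $[0,1]$ by the Weierstrass M-test, and hence $g$ is continuous there.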
In fact, the series converges uniformly on $[0,1]$. We can show $$\frac{x^{(2 n)} - x^{(2 n + 1)}}{\sqrt{n}} \le \frac{\Bigl(\frac{2n}{2 n + 1}\Bigr)^{2 n}}{\sqrt{n} (2 n + 1)} \approx \frac{1}{2 e n^{3/2}}$$ and $\sum 1/(2en^{3/2})$ converges. | 2013-05-19 07:21:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9876890778541565, "perplexity": 341.89505723809253}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696384213/warc/CC-MAIN-20130516092624-00002-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://icml.cc/Conferences/2019/ScheduleMultitrack?event=5026 | Oral
Adaptive Regret of Convex and Smooth Functions
Lijun Zhang · Tie-Yan Liu · Zhi-Hua Zhou
Thu Jun 13th 11:25 -- 11:30 AM @ Room 102
We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure. The goal is to achieve a small regret over every interval so that the comparator is allowed to change over time. Different from previous works that only utilize the convexity condition, this paper further exploits smoothness to improve the adaptive regret. To this end, we develop novel adaptive algorithms for convex and smooth functions, and establish problem-dependent regret bounds over any interval. Our regret bounds are comparable to existing results in the worst case, and become much tighter when the comparator has a small loss. | 2019-08-25 11:01:17 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8205432295799255, "perplexity": 929.341506832123}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323328.16/warc/CC-MAIN-20190825105643-20190825131643-00529.warc.gz"} |
https://www.semanticscholar.org/paper/A-new-deformation-family-of-Schwarz%E2%80%99-D-surface-Chen-Weber/2d52cc4adfabcedb8d41ae36003ae95e6e86a652 | # A new deformation family of Schwarz’ D surface
```@article{Chen2018AND,
title={A new deformation family of Schwarz’ D surface},
author={Hao Chen and Matthias J. Weber},
journal={arXiv: Differential Geometry},
year={2018}
}```
• Published 4 April 2018
• Mathematics
• arXiv: Differential Geometry
We prove the existence of a new 2-parameter family o$\Delta$ of embedded triply periodic minimal surfaces of genus 3. The new surfaces share many properties with classical orthorhombic deformations of Schwarz' D surface, but are also exotic in many ways. In particular, they do not belong to Meeks' five-dimensional family. Nevertheless, o$\Delta$ meets the classical deformations in a 1-parameter family on its boundary.
3 Citations
## Figures from this paper
An orthorhombic deformation family of Schwarz’ H surfaces
• Mathematics
• 2018
The classical H surfaces of H. A. Schwarz form a 1-parameter family of triply periodic minimal surfaces (TPMS) that are usually described as close relatives to his more famous P surface. However, a
Stacking Disorder in Periodic Minimal Surfaces
• Mathematics, Physics
SIAM J. Math. Anal.
• 2021
The construction is given of 1-parameter families of non-periodic embedded minimal surfaces of infinite genus in $T \times \mathbb{R}$, where $T$ denotes a flat 2-torus; these surfaces can be interpreted as disordered stackings of layers of periodically arranged catenoid necks.
Existence of the tetragonal and rhombohedral deformation families of the gyroid
We provide an existence proof for two 1-parameter families of embedded triply periodic minimal surfaces of genus three, namely the tG family with tetragonal symmetry that contains the gyroid, and the
## References
SHOWING 1-10 OF 43 REFERENCES
Genera of minimal balance surfaces
• Chemistry
• 1989
The genus of a three-periodic intersection-free surface in R3 refers to a primitive unit cell of its symmetry group. Two procedures for the calculation of the genus are described: (1) by means of
On the genus of triply periodic minimal surfaces
we prove the existence of embedded minimal surfaces of arbitrary genus $g \geq 3$ in any flat 3-torus. In fact we construct a sequence of such surfaces converging to a planar foliation of the 3-torus. In
On bifurcation and local rigidity of triply periodic minimal surfaces in $\mathbb{R}^3$
• Mathematics
• 2014
We use bifurcation theory to determine the existence of infinitely many new examples of triply periodic minimal surfaces in $\mathbb{R}^3$. These new examples form branches issuing from the H-family,
Uniqueness of the Riemann minimal examples
• Mathematics
• 1998
Abstract. We prove that a properly embedded minimal surface in R^3 of genus zero with infinite symmetry group is a plane, a catenoid, a helicoid or a Riemann minimal example. We introduce the
Parametrization of triply periodic minimal surfaces. II. Regular class solutions
• Mathematics
• 1992
A derivation is given of the set of triply periodic minimal surfaces of monoclinic symmetry and higher that fall within the regular class (including those containing self-intersections). The Gauss
Deformations of the gyroid and lidinoid minimal surfaces
The gyroid and Lidinoid are triply periodic minimal surfaces of genus 3 embedded in R that contain no straight lines or planar symmetry curves. They are the unique embedded members of the associate
New Families of Embedded Triply Periodic Minimal Surfaces of Genus Three in Euclidean Space
Until 1970, all known examples of embedded triply periodic minimal surfaces (ETPMS) contained either straight lines or curves of planar symmetry. In 1970, Alan Schoen discovered the gyroid, an ETPMS
Generalizations of the gyroid surface
• Mathematics
• 1993
The deformation of Schoen's gyroid — one of the three examples of triply-periodic minimal surfaces possessing cubic symmetry and genus 3 — is discussed. Lower-symmetry variants (similarly of genus 3)
The classification of doubly periodic minimal tori with parallel ends
• Mathematics
• 2005
Let K be the space of properly embedded minimal tori in quotients of R 3 by two independent translations, with any fixed (even) number of parallel ends. After an appropriate normalization, we prove
Classification of doubly-periodic minimal surfaces of genus zero
• Mathematics
• 2001
Abstract.We prove that if the quotient surface of a properly embedded doubly–periodic minimal surface in ℝ3 has genus zero, then the surface is one of the classical Scherk examples. | 2022-01-23 05:48:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8024935722351074, "perplexity": 1884.3446447558294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304134.13/warc/CC-MAIN-20220123045449-20220123075449-00061.warc.gz"} |
http://math.columbia.edu/~okounkov/Math%20Phys%20Seminar%20Schedule%20Spring%202016.html | Informal Mathematical Physics Seminar
organized by Igor Krichever and Andrei Okounkov
Mondays, 5:30, Room 507
Schedule of talks for Spring 2016:
Jan 25: Michael McBreen, P=W for nodal curves, in rank one
Feb 1: Michael Finkelberg, Towards a cluster structure on trigonometric zastava
Feb 8: Hiraku Nakajima, Cherkis' bow varieties and Coulomb branches of quiver gauge theories of affine type A
Feb 15, 11:40 (note special time): Felix Janda, Double ramification cycles
Feb 15: Yongbin Ruan, Nonabelian gauged linear sigma model
Feb 17, 11:40 (note special day & time): Emily Clader, Pixton's double ramification cycle relations
Feb 22: Vasily Pestun, Quiver W-algebras
Feb 29: Marta Mazzocco, Quantum cluster algebras from geometry
Mar 7: Alexander Bobenko, Quadrilateral surfaces (see also an image)
Mar 14: no seminar (spring break)
Mar 21, 11:40 (note special time): Ivan Loseu, Deformations of symplectic singularities and the orbit method
Mar 21: Scott Sheffield (Minerva lecture), Universal Randomness in 2D
Mar 23, 11:40 (note special day & time): Ivan Loseu, Combinatorial wall-crossing
Mar 28: Emil Prodan, Fitting the topological insulators in Alain Connes' noncommutative geometry program
April 4: Qi You, Categorification of small quantum groups
April 11: Petr Pushkar, Bethe Ansatz, Baxter Operator and the Vertex on the Grassmannian
April 18: Ivan Danilenko, DAHA and iterated torus knots
April 20, 11:40 (note special day & time): Julia Pevtsova, Support, cosupport and tensor ideals in modular representation theory
April 25: Dmitry Korb, Dynamical Weyl Group and resolutions of Affine Grassmannians
April 27, 10:00 (note special day & time): Pavlo Gavrilenko, The isomonodromy tau-function and conformal blocks
May 2: Andrey Smirnov, Rationality of capped descendants in quantum K-theory of Nakajima varieties
A note to the speakers: this is an informal seminar, meaning that the talks are longer than usual (1:30) and are expected to include a good introduction to the subject as well as a maximally accessible (i.e. minimally general & minimally technical) discussion of the main result. The bulk of the audience is typically formed by beginning graduate students. Blackboard talks are particularly encouraged.
Abstracts
January 25
Joint work with Zsuzsanna Dancso and Vivek Shende. The non-abelian Hodge correspondence identifies local systems on a smooth curve with Higgs bundles on the same curve. According to the P=W conjecture, this
correspondence sends the weight filtration on the moduli of local systems to the perverse filtration on the Higgs moduli. I will formulate an analogous conjecture for nodal curves with rational components, in rank one. Time permitting, I will sketch a proof.
February 1
This is joint work with A. Kuznetsov and L. Rybnikov. We study a moduli problem on a nodal curve of arithmetic genus 1, whose solution is an open subscheme in the zastava space for the projective line. This moduli space is equipped with a natural Poisson structure, which we compute in a natural coordinate system. We compare this Poisson structure with the trigonometric Poisson structure on the transversal slices in an affine flag variety. We conjecture that certain generalized minors give rise to a cluster structure on the trigonometric zastava.
February 8
I will report on an ongoing joint project with Yuuya Takayama. Cherkis' bow varieties are cousins of quiver varieties, conjecturally describing moduli spaces of type A instantons on multi-Taub-NUT spaces. Our goal is to show that they are Coulomb branches of 3d N=4 framed quiver gauge theories of affine type A. This result generalizes the one for unframed cases proved with Braverman and Finkelberg.
February 15, 11:40
I am going to give an overview of recent work on Pixton-type formulas for the double ramification cycle, a generalization related to loci of abelian differentials with prescribed poles and zeros, and a different generalization to twisted double ramification cycles over a base manifold X. This work is based on joint works with E. Clader and with Pandharipande-Pixton-Zvonkine.
February 15, 5:30
The gauged linear sigma model (GLSM) is a 2d quantum field theory invented by Witten in the early 90's to give a physical derivation of the Landau-Ginzburg (LG)/Calabi-Yau (CY) correspondence. Since then, it has been investigated extensively in physics by Hori and others. Recently, an algebraic-geometric theory of the GLSM has been formulated by Fan-Jarvis-Ruan, so that we can start to rigorously compute its invariants and match them with physical predictions. In fact, a great deal of activity is going on right now in abelian
cases, where the objects of study are complete intersections in toric varieties. In this talk, we would like to look over the horizon to discuss nonabelian cases, where the problems are much less understood even conjecturally. Moreover, the nonabelian GLSM exhibits new phenomena unavailable in the abelian GLSM.
February 17
The double ramification (DR) cycle is a class on the moduli space of curves that, roughly speaking, describes the locus of curves admitting a map to the projective line with specified ramification over zero and infinity. In recent work, A. Pixton gave an explicit formula for a mixed-degree class, conjecturing that (1) its degree-g part coincides with the DR cycle, and (2) its higher-degree parts vanish. I will discuss joint work with F. Janda giving a proof of the second of these two conjectures.
February 29
This work is in collaboration with L. Chekhov. The famous Greek astronomer Ptolemy created his well-known table of chords in order to aid his astronomical observations. This table was based on the renowned relation between the four sides and the two diagonals of a quadrilateral whose vertices lie on a common circle. In 2002, the mathematicians Fomin and Zelevinsky generalised this relation to introduce a new structure called cluster algebra. This is a set of clusters, each cluster made of n numbers called cluster variables. All clusters are obtained from some initial cluster by a sequence of transformations called mutations. Cluster algebras appear in a variety of topics, including total positivity, number theory, Teichmüller theory and many others.
In this talk we propose a new class of generalised cluster algebras for which the problem of quantum ordering can be solved explicitly. We start by introducing the notion of bordered cusps. This new notion arises when colliding holes in a Riemann surface. In the limit of two colliding holes, the geodesics that originally passed through the domain between the colliding holes become arcs between two bordered cusps decorated by horocycles. The lengths of these arcs are $\lambda$-lengths in Thurston--Penner terminology, or cluster variables in the terminology of Fomin and Zelevinsky. We then obtain a new class of geodesic laminations comprising both closed curves in the interior of a Riemann surface and arcs passing between bordered cusps. We formulate the Poisson and quantum algebras of these laminations. From the physical point of view, our construction provides an explicit coordinatization of moduli spaces of open/closed string worldsheets.
March 7
An emerging field of discrete differential geometry aims at the development of discrete equivalents of notions and methods of classical differential geometry, in particular of surface theory; the smooth theory then appears as a limit of refinements of the discretization. Current interest in discrete differential geometry derives not only from its importance in pure mathematics but also from its applications in computer graphics, theoretical physics, architecture and numerics.
This talk is about quadrilateral surfaces, i.e. surfaces built from planar quadrilaterals. They can be seen as discrete parametrized surfaces. Discrete curvatures as well as special classes of quadrilateral surfaces, in particular discrete minimal surfaces, are considered. Their relation to discrete integrable systems is clarified. Applications in free-form architecture will be demonstrated.
March 21, 11:40
Symplectic singularities were introduced by Beauville in 2000. These are especially nice singular Poisson algebraic varieties that include symplectic quotient singularities and the normalizations
of orbit closures in semisimple Lie algebras. Poisson deformations of conical symplectic singularities were studied by Namikawa, who proved that they are classified by the points of a vector space. Recently I have
proved that quantizations of a conical symplectic singularity are still classified by the points of the same vector space. I will explain these results and then apply them to establish a version of Kirillov's orbit method for semisimple Lie algebras.
March 21, 5:30
I will introduce several universal and canonical random objects that are (at least in some sense) two dimensional or planar, along with discrete analogs of these objects. In particular, I will introduce probability measures on the space of paths, the space of trees, the space of surfaces, and the space of growth processes. I will argue that these are in some sense the most natural and symmetric probability measures on the corresponding spaces.
March 23
In this talk I will discuss some combinatorial questions related to wall-crossing functors between categories O over quantizations of symplectic resolutions with different ample cones. For suitable choices of ample cones, the wall-crossing functors are perverse in the sense of Rouquier and so give rise to bijections between the sets of simples (wall-crossing bijections). I will explain what "perverse" means in this case and
discuss some interesting special cases: the cotangent bundles of flag varieties and Hilbert schemes of points on C^2. In the former case, the wall-crossing bijections define an interesting action of the so called cactus group on the Weyl group that was not known before. In the latter case, we recover the Mullineux involution arising in the modular representation theory of symmetric groups.
March 28
Topological insulators are newly discovered materials with the defining property that any boundary cut into such a crystal conducts electricity like a metal, even in the presence of disorder. The main conjecture in the field is that topological insulators are classified by a certain periodic table, which I will briefly discuss. In the main part of the talk, I will present recent results where Alain Connes' non-commutative geometry program was used at full throttle to prove this conjecture for more than half of the periodic table. I will try to fit the exposition into a broader context, namely, the ongoing effort towards a constructive KK-theory, particularly constructive Kasparov products.
April 4
We propose an algebraic approach to categorification of quantum groups at a prime root of unity, with the scope of eventually categorifying Witten-Reshetikhin-Turaev three-manifold invariants. This is joint work with Mikhail Khovanov.
April 11
Nekrasov and Shatashvili stated a conjecture that the operators of quantum multiplication by higher Chern classes of the tautological bundle on the Grassmannian are equal to the Baxter operators. We prove this conjecture in the K-theoretic setting using the computation of the Vertex and the quantum difference equation.
Based on work in progress, joint with A. Smirnov and A. Zeitlin.
April 18
Jones polynomials and WRT invariants are well-known invariants of links in S^3. Their categorification
attracts a lot of attention now. The key numerical invariant here is the Poincare polynomial of the triply
graded Khovanov-Rozansky homology, also called HOMFLYPT homology. In spite of recent developments,
this theory remains very difficult apart from the celebrated Khovanov homology (the case of sl(2)) with
very few known formulas (only for the simplest uncolored knots). Several alternative approaches to these polynomials were suggested recently (the connections are mostly conjectural). We will discuss the direction based on DAHA, which was recently extended from torus knots to arbitrary torus iterated links (including all algebraic links). The talk will be mostly focused on the DAHA-Jones polynomial of type A_1. Based on our joint works with Ivan Cherednik.
April 20
This is joint work with D. Benson, S. Iyengar and H. Krause. I’ll discuss the correct notion of support and its sibling, cosupport, for the stable module category (or the derived category of singularities) of a finite group scheme. As an application, I’ll describe classification of tensor ideal localizing subcategories in the stable module category in terms of supports.
April 25
The DWG (Dynamical Weyl Group) is a nice braid group action on the finite-dimensional representations of a semisimple group, which acts on the weights via the usual Weyl group. In their paper, A. Braverman and M. Finkelberg obtain the DWG using a geometric construction involving equivariant cohomology of the corresponding affine Grassmannian. On the other hand, such an action appears in the cohomology of cotangent bundles to the usual Grassmannians. According to the philosophy of symplectic duality, there is a connection between certain resolutions of the affine Grassmannian and T*Gr(k, n).
Using the construction of stable envelopes in the case of these resolutions of the affine Grassmannian, I try to describe operators from the DWG on tensor products as the corresponding R-matrices. In particular, since the varieties are smooth, this should give something in K-theory as well as in cohomology.
May 2
We give a manifestly rational formula for capped descendants in quantum K-theory of Nakajima varieties.
https://math.stackexchange.com/questions/1679889/x-n-y-n-rightarrow-a-s-0-and-y-n-rightarrow-a-s-z-imply-x-n-rightarro | $X_n-Y_n\rightarrow_{a.s.} 0$ and $Y_n\rightarrow_{a.s.} Z$ imply $X_n\rightarrow_{a.s.} Z$?
Consider two sequences of random variables $\{X_n\}_n, \{Y_n\}_n$ and a random variable $Z$, all defined on the same probability space. Let $\rightarrow_{a.s.}$ denote almost sure convergence. Suppose
(1) $X_n-Y_n\rightarrow_{a.s.} 0$
(2) $Y_n\rightarrow_{a.s.} Z$
Do (1) and (2) imply $X_n\rightarrow_{a.s.} Z$? If yes, which result am I using?
• Use the fact that $X_n=X_n-Y_n+Y_n$ and the definition of almost sure convergence. – Augustin Mar 2 '16 at 13:12
• Is this like a Slutsky's Lemma for almost sure convergence? – TEX Mar 2 '16 at 13:15
• It's even simpler than that. Almost sure convergence is just pointwise convergence on a set of probability $1$. And the intersection of two such sets has probability $1$. – Augustin Mar 2 '16 at 13:17
Yes, it's true. Use the fact that if $A_n\rightarrow_{a.s.}A$ and $B_n\rightarrow_{a.s.}B$, then $A_n+B_n\rightarrow_{a.s.}A+B$.
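A sketch of why this works, expanding the comments: almost sure convergence is pointwise convergence off a null set, and the union of two null sets is null. Writing $A = \{\omega : X_n(\omega) - Y_n(\omega) \to 0\}$ and $B = \{\omega : Y_n(\omega) \to Z(\omega)\}$,

```latex
\Pr(A) = \Pr(B) = 1
\implies \Pr(A \cap B) = 1 - \Pr(A^c \cup B^c) \ge 1 - \Pr(A^c) - \Pr(B^c) = 1,
```

and for every $\omega \in A \cap B$, $X_n(\omega) = \big(X_n(\omega) - Y_n(\omega)\big) + Y_n(\omega) \to 0 + Z(\omega) = Z(\omega)$.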
https://www.techwhiff.com/learn/free-fall-take-g-981-neglect-air-resistance-in/322847 | # Free Fall Take g = 9.81, Neglect air resistance in the following calculations Assume an object...
###### Question:
Free Fall. Take g = 9.81 m/s² and neglect air resistance in the following calculations. Assume an object is thrown upwards off a 50.0 m building with an initial velocity of 12.0 m/s at t = 0.0 s.
7.) When does it reach the top of its trajectory? a) 1.12 s b) 1.22 s c) 2.33 s d) 2.67 s e) None of the above
8.) How high is it at this point? a) 57.37 m b) 60.02 m c) 63.71 m d) 66.89 m e) None of the above
9.) How long before it hits the ground? a) 2.87 s b) 3.46 s c) 4.64 s d) 5.61 s e) None of the above
10.) What is its speed when it hits? a) 22.34 m/s b) 28.62 m/s c) 33.54 m/s d) 45.68 m/s e) None of the above
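The four answers follow from constant-acceleration kinematics. A quick check in Python (the initial speed of 12.0 m/s is an inference from the answer choices, since the problem statement was garbled in extraction):

```python
import math

g = 9.81    # m/s^2
v0 = 12.0   # m/s, upward (inferred from the answer choices)
h0 = 50.0   # m, height of the building

t_top = v0 / g                         # 7.) time to the top of the trajectory
h_top = h0 + v0**2 / (2 * g)           # 8.) maximum height above the ground
# 9.) positive root of h0 + v0*t - (g/2)*t^2 = 0
t_ground = (v0 + math.sqrt(v0**2 + 2 * g * h0)) / g
v_impact = g * t_ground - v0           # 10.) speed when it hits the ground

print(round(t_top, 2), round(h_top, 2), round(t_ground, 2), round(v_impact, 2))
# -> 1.22 57.34 4.64 33.54
```

These match choices b, a, c, c (the listed 57.37 m for question 8 appears to be rounded slightly differently).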
https://proofwiki.org/wiki/Definition:Leading_Coefficient_of_Polynomial | ## Definition
Let $R$ be a commutative ring with unity.
Let $P \in R \sqbrk X$ be a nonzero polynomial over $R$.
Let $n$ be the degree of $P$.
The leading coefficient of $P$ is the coefficient of $x^n$ in $P$.
Let $\struct {R, +, \circ}$ be a ring.
Let $\struct {S, +, \circ}$ be a subring of $R$.
Let $\displaystyle f = \sum_{k \mathop = 0}^n a_k \circ x^k$ be a polynomial in $x$ over $S$.
The coefficient $a_n \ne 0_R$ is called the leading coefficient of $f$.
### Polynomial Form
Let $R$ be a commutative ring with unity.
Let $f = a_0 + a_1 X + \cdots + a_{r-1} X^{r-1} + a_r X^r$ be a polynomial form in the single indeterminate $X$ over $R$.
Then the ring element $a_r$ is called the leading coefficient of $f$.
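A concrete illustration (my own, not part of the cited definition): for a polynomial over $\Z$,

```latex
f = 5 X^3 - 2 X + 7, \qquad \deg f = 3, \qquad \text{leading coefficient of } f = 5,
```

since $5$ is the coefficient of $X^{\deg f} = X^3$.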
https://brilliant.org/practice/24-puzzle/?subtopic=puzzles&chapter=operator-search | Logic
# The 24 Puzzle
$9 \square 1 \square 7 \square 4 = 24$
Which set of operators can be used in the blanks above to make this equation correct? Operators can be placed in the blanks in any order, and any number of parentheses can be used.
Hint: When solving puzzles such as these, it is often helpful to consider different methods that could be used to arrive at your final answer. For example, if the final operator is $$\times,$$ then we need two numbers whose product is 24. For reference, the nontrivial factors of 24 are 2,3,4,6,8, and 12.
$9 \square 8 \square 3 \square 8 = 24$
Which set of operators can be used in the blanks above to make this equation correct? Operators can be placed in the blanks in any order, and any number of parentheses can be used.
Hint: When solving puzzles such as these, it is often easier to reach the target number using only 2 or three of the required digits. Try to develop strategies for eliminating a number or pair of numbers. This is often done by producing 1, which can be multiplied for no effect, or 0, which can be added for no effect.
$6 \square 1 \square 3 \square 4 = 24$
Which set of operators can be used in the blanks above to make this equation correct? Operators can be placed in the blanks in any order, and any number of parentheses can be used.
Hint: When solving puzzles such as these,remember that the same result can often be produced using different operators. For example, multiplying by 2 is equivalent to dividing by $$\frac{1}{2}.$$
$7 \square 3 \square 3 \square 7 = 24$
Which set of operators can be used in the blanks above to make this equation correct? Operators can be placed in the blanks in any order, and any number of parentheses can be used.
Hint: When solving puzzles such as these, it is often helpful to try imagining all the possible final steps involving one of the given numbers. For example, if the final step is $$\times 5,$$ you would need to make $$\frac{24}{5}$$ with the other 3 numbers. This can be particularly helpful when the solution involves a fractional product.
$6 \square 6 \square 4 \square 1 = 24$
Which set of operators can be used in the blanks above to make this equation correct? Operators can be placed in the blanks in any order, and any number of parentheses can be used.
Hint: When solving puzzles such as these, sometimes it’s not always possible to find factors that multiply nicely to 24. It’s often helpful to look for other, nearby numbers with nice factors, especially ones that can produce 24 when added to or subtracted from the numbers you already have. Numbers that can be helpful include 18, 20, 21, 25, 27, 28, and 30.
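The hints above can be verified mechanically. Below is a minimal brute-force sketch (my own, not from the puzzle site): the digits stay in the printed order, every operator choice and parenthesization is tried by repeatedly combining adjacent values, and `Fraction` keeps the arithmetic exact so fractional intermediate results (like the $\frac{24}{5}$ in the hint) are handled correctly.

```python
from fractions import Fraction

def solve24(nums, target=24):
    """True if +, -, *, / and parentheses over nums (kept in order) can make target."""
    def results(vals):
        # vals is a tuple of Fractions; combining an adjacent pair with one
        # operator enumerates every parenthesization of the fixed-order digits.
        if len(vals) == 1:
            yield vals[0]
            return
        for i in range(len(vals) - 1):
            a, b = vals[i], vals[i + 1]
            outs = [a + b, a - b, a * b]
            if b != 0:                      # skip division by zero
                outs.append(a / b)
            for r in outs:
                yield from results(vals[:i] + (r,) + vals[i + 2:])

    return any(r == target for r in results(tuple(Fraction(n) for n in nums)))
```

For instance, `solve24([9, 1, 7, 4])` is `True` via $(9-1)\times(7-4)=24$, and `solve24([6, 1, 3, 4])` via $6\div(1-3/4)=24$.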
https://www.cnblogs.com/emanlee/archive/2007/10/07/916019.html | # 11 Visual Studio 2005 IDE Tips and Tricks to Make You a More Productive Developer
Visual Studio 2005 IDE相关的11个提高开发效率的技巧
http://www.chinhdo.com/chinh/blog/20070920/top-11-visual-studio-2005-ide-tips-and-tricks-to-make-you-a-more-productive-developer/
Here are my top 11 tips and tricks for getting things done faster with the Visual Studio 2005 IDE (without using third-party add-ins or upgrading hardware… that’s another article). Yes, some of these tips and tricks may fall into the “obvious” category, but I think they are worth repeating. I continue to see too many .NET developers not taking advantage of even the basic time-saving techniques.
I work mostly with C# so some of these tips may not apply to, or work differently with other Visual Studio languages such as Visual Basic.NET.
### (1) Express Yourself with Regular Expressions
Regular Expressions is a powerful and portable text search/replace/transformation language. Learning basic Regular Expressions will immediately make you a more productive developer/power user. Regular Expressions is supported in Visual Studio’s various Search/Replace dialogs. Any Regular Expressions skill you learn will also be useful in numerous other applications and settings: other text editors, unix shell/egrep, PowerShell, input validation, and Google search (heh, just kidding on that last item).
You can also use Regular Expressions with macros and automation via the Regex class.
Here’s an example of how you can save time with Regular Expressions in Visual Studio. Say, you just wrote and tested a SQL in a Query Tool and you want to turn it into a string variable in your C# class? Here’s how:
First, paste the SQL text into the editor. Make sure to remove any unwanted indentation on the left side of the text (SHIFT-Tab):
```
Select a,b
,c
,d
,e
from
table
```
Then hit CTRL+H to bring up the Find and Replace Dialog and fill it out like this:
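(The screenshot of the filled-in dialog did not survive extraction. Judging from the explanation given after the result below, the two expressions were presumably along these lines, in Visual Studio 2005's own regex dialect, where `{}` tags a group and `\1` is the backreference in the replacement; this is a reconstruction, not a quote from the original post:)

```
Find what:    ^{.*}$
Replace with: +"\1"
Use:          Regular expressions
```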
Choose Replace All. Fix a couple of lines, sit back and admire your beautiful work:
```
+"Select a,b"
+",c"
+",d"
+",e"
+"from"
+"table"
```
Explanation? Basically, the “Find what” expression above matches the content of each line and give it a numbered “tag”. The “Replace with” expression then replaces each line with the first tagged value (\1), wrapped around in + ” “. Click the fly-out (triangle) button next to each box to display a cheat-sheet of frequently used expressions. Oh, and don’t worry about the “+” string concatenations in the example, the compiler knows to optimize that syntax.
Once you’ve created a few Search/Replace expressions like the above, create macros out of them and assign to shortcuts.
Here are some of the Regex transformations I use most often when writing code:
• Surround each line with a given prefix/suffix (the example above).
• Transform a list of values separated by newlines into a comma-delimited list (used in array initializers or a SQL where clause).
• Put double quotes around each value in a comma-separated list.
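These three transformations, plus the SQL walkthrough above, can be sketched with Python's `re` module (shown in Python because the VS 2005 dialog uses its own dialect; the patterns and sample strings here are illustrative):

```python
import re

sql = "Select a,b\n,c\n,d\n,e\nfrom\ntable"

# 1) Surround each line with +"...": tag the line content, reference it as \1
#    (same idea as the {.*} / +"\1" pair in the VS dialog).
wrapped = re.sub(r"^(.*)$", r'+"\1"', sql, flags=re.MULTILINE)

# 2) Turn newline-separated values into a comma-delimited list.
csv = re.sub(r"\n", ", ", "alpha\nbeta\ngamma")

# 3) Put double quotes around each value in a comma-separated list.
quoted = re.sub(r"\s*([^,]+)", r'"\1"', csv)

print(wrapped.splitlines()[0])  # +"Select a,b"
print(quoted)                   # "alpha","beta","gamma"
```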
### (2) Take (Keyboard) Shortcuts
Using keyboard shortcuts is the best way to get things done faster in Visual Studio (and most other computer applications for that matter).
Below are my favorite Visual Studio keyboard shortcuts (I am leaving out the really obvious ones like F5).
• CTRL+ALT+L: View Solution Explorer. I use Auto Hide for all of my tool windows to maximize screen real estate. Whenever I need to open the Solution Explorer, it’s just a shortcut away. Related shortcuts: CTRL+ALT+X (Toolbox), F4 (Properties), CTRL+ALT+O (Output), CTRL+\, E (Error List), CTRL+\, T (Task List).
• F12: Go to definition of a variable, object, or function.
• SHIFT+F12: Find all references of a function or variable.
• F7: Toggle between Designer and Source views.
• CTRL+PgDn: Toggle between Design and Source View in HTML editor.
• F10: Debug - step over. Related debugging shortcuts: F5 (debug - start), F11 (debug - step into), SHIFT-F11 (debug - step out), CTRL-F10 (debug - run to cursor). F9 (toggle breakpoint).
• CTRL+D or CTRL+/: Find combo (see section on Find Combo below).
• CTRL+M, O: Collapse to Definitions. This is usually the first thing I do when opening up a new class.
• CTRL+K, CTRL+C: Comment block. CTRL+K, CTRL-U (uncomment selected block).
• CTRL+-: Go back to the previous location in the navigation history.
• ALT+B, B: Build Solution. Related shortcuts: ALT+B, U (build selected Project), ALT+B, R (rebuild Solution).
• CTRL+ALT+Down Arrow: Show dropdown of currently open files. Type the first few letters of the file you want to select.
• CTRL+K, CTRL+D: Format code.
• CTRL+L: Delete entire line.
• CTRL+G: Go to line number. This is useful when you are looking at an exception stack trace and want to go to the offending line number.
• SHIFT+ALT+Enter: Toggle full screen mode. This is especially useful if you have a small monitor. Since I upgraded to dual 17″ monitors, I no longer needed to use full screen mode.
• CTRL+K, X: Insert “surrounds with” code snippet. See Snippets tip below.
• CTRL+B, T: Toggle bookmark. Related: CTRL+B, N (next bookmark), CTRL+B, P (prev bookmark).
The complete list of default shortcuts is available from VS 2005 Documentation. You can also download/print reference posters from Microsoft: C# Keyboard Reference Poster, VB.NET Keyboard Reference Poster.
### (3) Make New Shortcuts
Is there something you do a lot in Visual Studio that has no shortcut? Create one. Here’s how:
• Choose Tools/Options and select Environment/Keyboard.
• Type something into “Show commands containing” to get a list of matching commands. If there is already a shortcut for the selected command, it’ll be displayed in “Shortcuts for selected command”.
• To assign a new shortcut to the selected command, put the cursor in “Press shortcut keys” and press the shortcut key or key combinations desired.
Have a custom Macro that you run often? Assign it to a keyboard shortcut. Here are some of my custom keyboard shortcuts:
• CTRL+Num, T: Show the Test View.
• CTRL+Num, D: Start debugging the selected Unit Test in Test View.
• CTRL+’, L: “Collapse all in Solution Explorer ” macro (see Macros section below).
• CTRL+’, S: “Surrounds each line with” Macro.
• CTRL+’, C: Compare with previous Source Control version.
### (4) Use Code Snippets
Save time typing repetitive code by using Code Snippets. There are two types of Snippets in Visual Studio 2005: Expansion and SurroundsWith. To use Expansion Snippets, type the Snippet shortcut (not to be confused with keyboard shortcuts), and press Tab twice.
For example, the “for” Snippet is an Expansion Snippet. To use it, type “for”…
Then press Tab, Tab:
I find SurroundsWith Snippets more useful though. An example SurroundsWith Snippet is “#region”. First, select a block of code:
Then, type CTRL+K, CTRL+S and “#re“:
Then hit Enter:
Here are my favorite Snippets:
• #region: Regions are a great way to organize your code.
• using: If you create an IDisposable object, you should use the “using” pattern. In addition to the basic “using” Snippet, I also created several variations for TransactionScope, and IDataReader.
• try/catch
• {}
• /// <summary>$end$</summary>
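For reference, a custom SurroundsWith snippet is just a small XML file placed in your snippets folder. The sketch below is illustrative and untested; the element names follow the VS 2005 snippet schema, and the title/shortcut are made up:

```xml
<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>tryfinally</Title>
      <Shortcut>tryf</Shortcut>
      <SnippetTypes>
        <!-- SurroundsWith makes the snippet wrap the current selection -->
        <SnippetType>SurroundsWith</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Code Language="csharp">
        <![CDATA[try
{
    $selected$
}
finally
{
    $end$
}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```

Select a block of code, press CTRL+K, X, and pick the snippet; `$selected$` is replaced by the selection and the caret lands at `$end$`.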
### (5) Change the Default Settings
Find yourself constantly switching to the Design view every time you create/open an ASPX page? Cannot locate your current file in the Solution Explorer? Easy… just change the right settings and never think about it again.
Here are some settings in Visual Studio that can save you time:
• Open HTML pages in Source View: Tools/Options/HTML Designer/Start pages in.
• Track the current file in Solution Explorer: Tools/Options/Projects and Solutions/Track Active Item in Solution Explorer.
• Change the Start page or get rid of it: Tools/Options/Environment/Startup.
• Change the default font-size to a smaller size so you can see more code. My editor font setting is ProFontWindows at size 9.
• Turn off animations: Uncheck Tools/Options/Environment/Animate environment tools.
### (6) “Attach to Process” to Start Debugging ASP.NET
Most ASP.NET developers use the standard F5 (Debug/Start Debugging) to start debugging from Visual Studio. However, there is a much faster way to start debugging if you already have an instance of your web application running. Just attach to it instead:
• Choose Debug/Attach to Process.
• Select the “aspnet_wp.exe” process and choose Attach.
Or, for keyboarders:
• ALT+D, P, “as“, Enter.
Debugging this way is faster because you skip the often-lengthy compilation step, and you don’t have to navigate from the start page to the actual page that you want to debug.
### (7) Stop Conditionally (Conditional Breakpoints)
How often have you found yourself repeatedly stepping through a loop while debugging, waiting to get to a specific loop value (because the bug only occurs with that specific value)? With Conditional Breakpoints, you don’t have to do that. Just set a Breakpoint Condition.
Set the Breakpoint. Right click on the Breakpoint indicator (red circle), and choose Condition:
Set the condition (any valid C# expression):
Another debugging productivity trick I use is to override ToString() to return a useful summary of your objects. The Debugger uses the value returned by ToString in various Debug windows such as Watch Window. You can also use the DebuggerDisplay attribute.
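As a hedged sketch of that ToString/DebuggerDisplay idea (the type and member names below are invented for illustration, not from the original post):

```csharp
using System.Collections.Generic;
using System.Diagnostics;

// The debugger shows this format string in the Watch/Locals/DataTip
// windows instead of the default type name.
[DebuggerDisplay("Order {id}: {total} ({items.Count} items)")]
public class Order
{
    private int id;
    private decimal total;
    private List<string> items = new List<string>();

    public Order(int id, decimal total)
    {
        this.id = id;
        this.total = total;
    }

    // Used by the debugger (and by logging/string formatting) when no
    // DebuggerDisplay attribute is present.
    public override string ToString()
    {
        return string.Format("Order {0}: {1} ({2} items)", id, total, items.Count);
    }
}
```

With either mechanism in place, hovering over an `Order` variable while stepping shows a one-line summary instead of just the type name.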
### (8) Employ Task List Tokens
Use Task List tokens such as TODO and HACK to quickly mark incomplete code or code that requires further attention. This allows you to keep flowing and skip over the details, but at the same time ensures that you will not forget to go back and finish up.
A shortcoming with Visual Studio 2005’s Task List is that it only shows the items in the current file. You can get around this by using the Find in Files feature and search for “// TODO”.
### (9) Go Directly to Any File with the Find Combo Box
This is the Find dropdown that is on the Standard Toolbar, not the Find dialog. Use the shortcut CTRL+D to activate the Find dropdown in normal mode. Use CTRL+/ to activate the Find dropdown in command mode (with “>” prepended… this doesn’t work sometimes for me).
To quickly go to a file, type CTRL+D, >open <start of file name>. Intellisense works here just like in the Command Window. “of” (short for “open file”) can be used instead of open. Compare this with opening the Solution Explorer, expanding the correct folder/project, and visually hunting for the file you need.
With the Find Combo, you can also execute commands, macros, find text, and more.
### (10) Type Ahead (Incremental Search) in Lists
Type-ahead search works in many Visual Studio lists such as Solution Explorer, Active Files Combo (CTRL+ALT+Down Arrow), Add References, Class View, Attach to Process, Test View, etc.
To see how it works, it’s best to try it yourself. Open the Solution Explorer and start typing the first few letters of a visible file.
### (11) Automate with Macros and Visual Studio Automation
I save this for last because I think macros and Automation have the potential to give you the biggest productivity boost, but they also require the most initial time investment.
For many developers, the most effective way to take advantage of macros is to find and use or customize someone else’s macros.
If you want to get started with writing your own macros, the first feature you should familiarize yourself with is the Macro Recording feature (shortcut CTRL+SHIFT+R). | 2019-04-22 05:05:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3263022005558014, "perplexity": 6771.002699131481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578534596.13/warc/CC-MAIN-20190422035654-20190422061654-00095.warc.gz"}
https://tex.stackexchange.com/questions/167412/texmaker-when-compiling-gives-me-error-misplaced-alignment | # TEXMAKER when compiling gives me error misplaced alignment [duplicate]
! Misplaced alignment tab character &.
<argument> ...d/record.url?eid=2-s2.0-78751524824&
partnerID=40&md5=4144bdb06...
l.76 ...D=40&md5=4144bdb064f723bb3e27d5ff60673d79}
I can't figure out why you would want to use a tab mark
## marked as duplicate by Sean Allred, egreg, Jesse, Svend Tveskæg, lockstep Mar 24 '14 at 15:54
• Welcome to TeX.SX! Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – jub0bs Mar 24 '14 at 12:35
• @SeanAllred these things are tricky, it's a duplicate problem (and duplicate answer) but the way the question is phrased is different, It's not "how do I type this special character" it's "what does this error mean" If the OP had spotted that & was special the question wouldn't have arisen, which means they are unlikely to find the duplicate. – David Carlisle Mar 24 '14 at 12:47
texmaker is just the editor, the error comes from TeX. You have used & which is reserved for marking table cells (alignment tabs) to get a & in text you need \& (or better use the url package). | 2019-10-20 08:23:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5398160815238953, "perplexity": 2542.525976514591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00061.warc.gz"} |
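For instance, a minimal document that compiles (a sketch; the address is shortened from the error message above):

```latex
\documentclass{article}
\usepackage{url}
\begin{document}
% In ordinary text, & must be escaped:
Scopus \& Elsevier.

% \url prints special characters such as & and ? verbatim:
\url{http://www.scopus.com/inward/record.url?eid=2-s2.0-78751524824&partnerID=40}
\end{document}
```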
https://bitbucket.org/amirouche/python-earley-parser/src/57dbc29ad05952e426095bc1fcc25e208d3b3873/README.rst?at=default | # Python Earley Parser / README.rst
## How does the parser run
1. go to python-earley_parser
2. run with make flat for a flat output, or with make dot for a graphical output
## FAQ
Q: How do I change the grammar? A: Currently there is no easy way to change the grammar; check out the grammar_from_api@grammar.py file to see the current grammar.
Q: How does it work? A: The parser works in two steps:

1. build the Earley super set from the sentence given as input (see parser@earley_parser):

   a. complete, scan or do a prediction on any generated item from the ess.items list

   b. when you have finished (a), i.e. you can't add any more items, clean up the mess; some items are useful for the first step but won't be of any interest for the second step.

2. build parse trees based on the surviving items from ess.items; this step occurs, for now, in ParseRuleSet@parse_tools.py and buid_parse_tree@earley_parser.py.

3. add semantic rules based on the parse trees. Be careful: the Earley parser doesn't resolve ambiguity, so you might end up with several trees!
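The first step above (predict / scan / complete over a growing item set) can be sketched as a tiny stand-alone recognizer. This is illustrative only; names and structure are simplified and do not match the repository's ess.items API:

```python
# Minimal Earley *recognizer* sketch (illustrative; not this repository's API).
# grammar maps each nonterminal to a list of alternative right-hand sides
# (tuples of symbols); any symbol not in the grammar is a terminal.

def earley_recognize(grammar, start, tokens):
    # An item is (head, body, dot, origin): dot marks progress through body,
    # origin is the chart index where the item was started.
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, body, 0, 0))

    for i in range(len(tokens) + 1):
        queue = list(chart[i])
        while queue:
            head, body, dot, origin = queue.pop()
            if dot < len(body):
                sym = body[dot]
                if sym in grammar:                          # predict
                    for alt in grammar[sym]:
                        item = (sym, alt, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item)
                            queue.append(item)
                elif i < len(tokens) and tokens[i] == sym:  # scan
                    chart[i + 1].add((head, body, dot + 1, origin))
            else:                                           # complete
                for h, b, d, o in list(chart[origin]):
                    if d < len(b) and b[d] == head:
                        item = (h, b, d + 1, o)
                        if item not in chart[i]:
                            chart[i].add(item)
                            queue.append(item)

    # Accept if a completed start rule spans the whole input.
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])
```

For example, with grammar = {"S": [("S", "+", "n"), ("n",)]}, the call earley_recognize(grammar, "S", ["n", "+", "n"]) accepts, while ["n", "+"] is rejected.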
Q: How complex is the Earley parser? A: Based on Earley's paper it's an $O(n^3)$ algorithm in the worst case...
Q: Which grammars can the Earley parser process? A: Every context-free grammar! At least that's what I understood.
Q: Where can I find out more about the earley algorithm ? A: check out mendley's output @ http://www.mendeley.com/research-papers/search/#0/earley | 2015-09-05 05:21:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47545793652534485, "perplexity": 4708.370377826746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645378542.93/warc/CC-MAIN-20150827031618-00052-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://www.jiskha.com/questions/1314010/if-alpha-beta-are-the-zeroes-of-a-polynomial-such-that-alpha-beta-6-and-alpha-into-beta-4 | # kvno.1
if alpha ,beta are the zeroes of a polynomial,such that alpha+beta=6 and alpha into beta=4 ,then write the polynomial.
Let α + β = 6 and αβ = 4. Then
x^2 - (α + β)x + αβ = 0
x^2 - 6x + 4 = 0 is your polynomial.
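As a check, the quadratic formula gives the actual zeros, and they do sum to 6 and multiply to 4:

```latex
x = \frac{6 \pm \sqrt{36 - 16}}{2} = 3 \pm \sqrt{5},
\qquad (3+\sqrt{5}) + (3-\sqrt{5}) = 6,
\qquad (3+\sqrt{5})(3-\sqrt{5}) = 9 - 5 = 4.
```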
| 2021-02-26 18:45:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745687007904053, "perplexity": 1776.074158692709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00137.warc.gz"}
http://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-6-inverse-functions-6-1-inverse-functions-6-1-exercises-page-407/24 | ## Calculus 8th Edition
Published by Cengage
# Chapter 6 - Inverse Functions - 6.1 Inverse Functions - 6.1 Exercises: 24
#### Answer
The inverse function is $$f^{-1}(x)=\frac{3x+1}{4-2x},\quad x\neq 2.$$
#### Work Step by Step
First, we exclude $x=-\frac{3}{2}$ (where $2x+3=0$) from the domain of the function because the denominator must be different from zero. That same value must be excluded from the range of the inverse function as well. Now, let $y=f(x)=\frac{4x-1}{2x+3}$; to find the inverse function we need to express $x$ in terms of $y$, and then we will have $x=f^{-1}(y).$ $$y=\frac{4x-1}{2x+3}\Rightarrow y(2x+3)=4x-1\Rightarrow 2xy+3y=4x-1\\ 4x-2xy=3y+1\Rightarrow x=\frac{3y+1}{4-2y},$$ so $$x=f^{-1}(y)=\frac{3y+1}{4-2y}.$$ Renaming $y$ back to $x$ we get $$f^{-1}(x)=\frac{3x+1}{4-2x}$$ where $x\neq 2$.
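A quick sanity check of the result: feed a value through $f$ and confirm that $f^{-1}$ returns it.

```latex
f(1) = \frac{4(1)-1}{2(1)+3} = \frac{3}{5},
\qquad
f^{-1}\!\left(\frac{3}{5}\right)
= \frac{3\cdot\frac{3}{5}+1}{4-2\cdot\frac{3}{5}}
= \frac{14/5}{14/5} = 1.
```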
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2018-04-23 23:55:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8548107743263245, "perplexity": 296.25655526301426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946256.50/warc/CC-MAIN-20180423223408-20180424003408-00216.warc.gz"} |
https://gamedev.stackexchange.com/questions/184250/managing-ai-state-transitions-in-unity | # Managing AI state transitions in Unity
I am building a horror game in Unity, and I am trying to make an enemy movement controller. The enemy has two states, made available via an enum called EnemyStates; the possible values are Chasing and Patrolling. When the enemy spawns, it defaults to Patrolling, which makes it follow a specific path set by waypoints on the map.
When a GameObject tagged as 'Player' is within chase distance, the enemy state is switched from 'Patrolling' to 'Chasing', and the target is set to the player GameObject's transform. The mob then begins chasing the player.
I am doing all of my checks in the Update() function, which works mostly, but causes problems in this area;
Since it's checking every frame, it's overwriting its previous state even if it's the same as the last time. When the enemy is patrolling and switches to the 'chase' state, it plays a special one-time animation via a SetBool() with an exit time. The problem is that since this is in the Update() function, it continually tries to call the one-time animation, overwriting itself and not doing anything.
I am trying to find the best approach to handle target change etc without relying on Update() to do it, but still making the target update correctly when either 1.) A new player gets closer and the enemy starts chasing them, or 2.) No players are within range, and the enemy goes back to patrolling.
I was thinking, maybe it would be best to put the distance check on the player, and when the player is close enough to the enemy, update the enemy's target and EnemyStates value directly? Then when the player gets out of range of the enemy, it updates the enemy's target and state back to patrolling?
Is that the best approach, or is there something else I should be doing? I have attached my current bit of the mobController.cs that I am working with.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;
using TMPro;
// Need to redo all of this. The first thing the mob should do after it spawns, is head to the first waypoint. Then check if a player is close by, and if so set them to the target.
// We also need to build this so that if the monster is currently chasing somebody, they first do the 'Howl' animation, proceeded by the chase animation (which should be a run).
// Once the player gets out of distance from the monster, it goes back to the 'patrol' animation (which should be a walk or crawl)
public class mobController : MonoBehaviour
{
public string mobName;
public GameObject mobNameObject;
public TMP_Text mobNameText;
public float chaseDistance;
public float deathDistance;
private NavMeshAgent agent;
private Animator animatorController;
private float defaultspeed;
private int currentWaypoint = 0;
enum EnemyStates
{
Patrolling,
Chasing
}
[SerializeField]
GameObject target;
[SerializeField]
AudioClip chaseMusic;
[SerializeField]
Transform[] waypoints;
[SerializeField]
EnemyStates currentState;
// Start is called before the first frame update
void Start()
{
agent = GetComponent<NavMeshAgent>();
target = this.gameObject;
animatorController = GetComponent<Animator>();
float defaultspeed = agent.speed;
}
// Update is called once per frame
void Update()
{
// Loop through every player, see if any are close. If they are, set the mob to chasing. If not, set it to patrolling.
GameObject[] gos = GameObject.FindGameObjectsWithTag("Player");
GameObject closest = null;
Vector3 position = transform.position;
float localdistance = Vector3.Distance(target.transform.position, transform.position);
if (localdistance < chaseDistance)
{
//Debug.Log("You are being chased!");
}
// Loop through all players, see if the mob is near anyone. If so, and it's not the original target, start chasing them instead.
foreach (GameObject go in gos)
{
float distance = Vector3.Distance(go.transform.position, transform.position);
// This will need updated eventually. Right now, if the player gets above the chase distance, the mob will go back to patrolling.
// We should probably make this a decision based thing. (Should the mob follow indefinitely, until another target gets closer?
if (distance < chaseDistance)
{
target = go;
target.GetComponentInChildren<AudioSource>().clip = chaseMusic;
if (!target.GetComponentInChildren<AudioSource>().isPlaying)
{
target.GetComponentInChildren<AudioSource>().Play();
}
// Play the howl animation to start the chase
if (currentState != EnemyStates.Chasing && target != this.gameObject)
{
animatorController.SetBool("move", false);
//animatorController.SetBool("howl", true);
StartCoroutine(StopHowl());
currentState = EnemyStates.Chasing;
animatorController.SetBool("chase", true);
}
}
else
{
currentState = EnemyStates.Patrolling;
if (target != this.gameObject)
{
target.GetComponentInChildren<AudioSource>().Stop();
}
}
// If the monster is close to a target player, do the death sequence.
if (distance <= deathDistance)
{
agent.isStopped = true;
animatorController.SetBool("move", false);
animatorController.SetBool("attack", true);
target.GetComponent<CharacterController>().GameOver();
// Then reset the target to self until next update
target = this.gameObject;
currentState = EnemyStates.Patrolling;
}
}
// If the mob is not chasing anyone, return to waypoint patrolling
if (Vector3.Distance(transform.position, waypoints[currentWaypoint].position) <= 0.1f)
{
Debug.Log("Updating waypoint");
currentWaypoint++;
if (currentWaypoint == waypoints.Length)
{
currentWaypoint = 0;
}
} else
{
Debug.Log("Too far away from waypoint");
}
if (currentState == EnemyStates.Chasing)
{
agent.destination = target.transform.position;
//agent.speed = defaultspeed + 2;
animatorController.SetBool("chase", true);
} else
{
agent.destination = waypoints[currentWaypoint].position;
//agent.speed = defaultspeed;
animatorController.SetBool("chase", false);
animatorController.SetBool("move", true);
}
}
IEnumerator StopHowl()
{
yield return new WaitForSeconds(1);
animatorController.SetBool("howl", false);
}
}
• An unrelated suggestion: your Update() function is rather long. You might want to break it into several functions that are called from Update(). Jul 10, 2020 at 18:54
About a third of the way into your Update() function, you have this:
// Play the howl animation to start the chase
if (currentState != EnemyStates.Chasing && target != this.gameObject)
{
animatorController.SetBool("move", false);
StartCoroutine(StopHowl());
currentState = EnemyStates.Chasing;
animatorController.SetBool("chase", true); //<----------
}
Later in the Update() function you have this, which is called every frame as long as the enemy is in the Chasing state:
if (currentState == EnemyStates.Chasing)
{
agent.destination = target.transform.position;
animatorController.SetBool("chase", true); //Why are we calling this again?
}
Notice you call animatorController.SetBool("chase", true); in two places, the latter of which is called every frame. Presumably you don't need the second one at all.
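More generally, transition work should run once, at the moment the state changes, not every frame. A minimal sketch of that pattern, reusing the question's field names (illustrative only, not the asker's actual code):

```csharp
// Illustrative pattern: Update() keeps doing per-frame work, but all
// enter/exit logic goes through SetState(), which runs it exactly once
// per transition because redundant assignments are ignored.
void SetState(EnemyStates newState)
{
    if (newState == currentState) return; // same state: nothing to do

    // Exit logic for the old state
    if (currentState == EnemyStates.Chasing)
        animatorController.SetBool("chase", false);

    currentState = newState;

    // Enter logic for the new state
    if (currentState == EnemyStates.Chasing)
    {
        animatorController.SetBool("move", false);
        StartCoroutine(StopHowl());            // one-shot howl
        animatorController.SetBool("chase", true);
    }
    else
    {
        animatorController.SetBool("move", true);
    }
}
```

Update() can then call SetState(EnemyStates.Chasing) or SetState(EnemyStates.Patrolling) every frame without re-triggering the one-time animation.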
• Kevin. I've removed this second one, but the issue still remains with the animation and the timing. I'm still trying to find the best way to do all the chase/patrol logic without relying on the update() method to do it.
– Ex0r
Jul 12, 2020 at 0:54
• You wrote in the original question "The problem is that since it's in the Update() function, it's continually trying to call the one-time animation, overwriting itself and not doing anything." That should be fixed by removing the second call to SetBool("chase", true). If your code still isn't working, there must be something else wrong, but I'm not seeing what the additional problem is. The state should not be reset every frame since you have the condition if (currentState != EnemyStates.Chasing) Jul 13, 2020 at 17:06
• You should try to use logging (with Debug.Log()) and the corresponding stack traces to determine what's going wrong with your code. Jul 13, 2020 at 17:07
• Kevin - My apologies. In checking again, it does appear to have somewhat resolved the issue, however now I am stuck trying to figure out how to clean up the howl animation. Preferably i'd like the animation to complete before the rest of the code is ran, but the animation clip between different monsters is different so I can't really use a flat .wait() coroutine to do it. Need to find a more reliable way to do the complete howl animation before transitioning into the chase animation, and continuing the navmesh movement.
– Ex0r
Jul 15, 2020 at 20:31
• @Ex0r If my answer has helped, I'd appreciate an upvote! As for timing scripts with animation, one solution is to use Animation Events, though they are somewhat clumsy: docs.unity3d.com/Manual/script-AnimationWindowEvent.html Jul 15, 2020 at 22:48 | 2022-09-30 12:29:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23091402649879456, "perplexity": 3774.540755854373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00595.warc.gz"} |
https://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-6-percent-review-exercises-page-467/40 | ## Basic College Mathematics (9th Edition)
Fill in the percent equation: part = (percent)(whole), so $85=x(1620)$. Divide both sides by the whole: $0.0524=x$. Multiply x by 100 and round to the nearest tenth: 5.2%. | 2019-11-22 20:02:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19211994111537933, "perplexity": 4151.752077342915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00533.warc.gz"}
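The same arithmetic, as a short illustrative check:

```python
# Percent equation: part = percent * whole. Solve for the percent.
part, whole = 85, 1620
x = part / whole               # fraction of the whole
percent = round(100 * x, 1)    # -> 5.2
```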
http://www.humandynamics.se/00030/cea714-example-of-precipitation-titration | # example of precipitation titration
It continues till the last amount of analyte is consumed. Example – To determine the concentration of chloride ion in a certain solution we can titrate this solution with silver nitrate solution (whose concentration is known). Precipitation Titration Mohr Method - essay example for free Newyorkessays - database with more than 65000 college essays for studying 】 An example showing both these features is the titration of 8-hydroxyquinolinc by aluminium ions (Fig. The %w/w I– in a 0.6712-g sample was determined by a Volhard titration. Next we draw our axes, placing pCl on the y-axis and the titrant’s volume on the x-axis. A simple equation takes advantage of the fact that the sample contains only KCl and NaBr; thus, $\textrm{g NaBr = 0.3172 g} - \textrm{g KCl}$, $\dfrac{\textrm{g KCl}}{\textrm{74.551 g KCl/mol KCl}}+\dfrac{\textrm{0.3172 g}-\textrm{g KCl}}{\textrm{102.89 g NaBr/mol NaBr}}=4.048\times10^{-3}$, $1.341\times10^{-2}(\textrm{g KCl})+3.083\times10^{-3}-9.719\times10^{-3}(\textrm{g KCl}) = 4.048\times10^{-3}$, $3.69\times10^{-3}(\textrm{g KCl})=9.65\times10^{-4}$, The sample contains 0.262 g of KCl and the %w/w KCl in the sample is, $\dfrac{\textrm{0.262 g KCl}}{\textrm{0.3172 g sample}}\times100=\textrm{82.6% w/w KCl}$. The analysis for I– using the Volhard method requires a back titration. Iron ion is used as indicator in Volhard’s method. To compensate for this positive determinate error, an analyte-free reagent blank is analyzed to determine the volume of titrant needed to affect a change in the indicator’s color. 7. Substances like mercury, lead, silver, copper in … In precipitation titration curve, a graph is drawn between change in titrant’s concentration as a function of the titrant’s volume. Vedantu academic counsellor will be calling you shortly for your Online Counselling session. The first type of indicator is a species that forms a precipitate with the titrant. 
A precipitation titration curve follows the change in either the titrand’s or the titrant’s concentration as a function of the titrant’s volume. A chemical indicator is used in precipitation titration procedures to obtain a visually detectable change (usually of color change or turbidity) in the solution. The concentration of unreacted Cl– after adding 10.0 mL of Ag+, for example, is, \begin{align} You can review the results of that calculation in Table 9.18 and Figure 9.43. Ansewer of example : a) before adding AgNO3: NaCl → Na+ + … Calculate the %w/w Ag in the alloy. Potassium chromate is used as indicator. a When two reagents are listed, the analysis is by a back titration. A Presentation On. In a precipitation titration, the stoichiometric reaction is a reaction which produces in solution a slightly soluble salt that precipitates out. &=\mathrm{\dfrac{(0.0500\;M)(50.0\;mL)-(0.100\;M)(10.0\;mL)}{50.0\;mL+10.0\;mL}=2.50\times10^{-2}\;M} In this article we will discuss mainly precipitation titration definition with example and argentometric titration (a type of precipitation titration), Volhard method, Fajan’s method, Mohr’s method and difference between Mohr’s method and Volhard’s method. Because it is difficult to tell when all the halide ion has reacted with the silver ion, a small … It can be used for the determination of concentration of anions in the analyte. If you want to read more on the topic, register yourself on Vedantu and go through the study material, NCERT Solutions for CBSE Class 12 etc. Reaction – If analyte contains chloride anions. Reaction – Reaction involved can be shown as follows –. As we did for other titrations, we first show how to calculate the titration curve and then demonstrate how we can sketch a reasonable approximation of the titration … Most precipitation titrations use Ag+ as either the titrand or the titration. It is used for the determination of halide ions in the solution. 
The titration’s end point is the formation of a reddish-brown precipitate of Ag2CrO4. Before the equivalence point, Cl– is present in excess and pCl is determined by the concentration of unreacted Cl–. Precipitation titrimetry is one of the oldest analytical techniques, dating back to the mid-1800s. Figure 9.43 shows the titration curve for the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. Calcium nitrate, Ca(NO3)2, was used as the titrant, forming a precipitate of CaCO3 and CaSO4. We will also discuss titration curves in detail. The titration’s end point was signaled by noting when the addition of titrant ceased to generate additional precipitate. Step 1: Calculate the volume of AgNO3 needed to reach the equivalence point. This titration can be carried out at room temperature. There are two precipitates in this analysis: AgNO3 and I– form a precipitate of AgI, and AgNO3 and KSCN form a precipitate of AgSCN. Each mole of I– consumes one mole of AgNO3, and each mole of KSCN consumes one mole of AgNO3; thus, $\textrm{moles AgNO}_3=\textrm{moles I}^-\textrm{ + moles KSCN}$, $\textrm{moles I}^-=\textrm{moles AgNO}_3-\textrm{moles KSCN}$, $\textrm{moles I}^- = M_\textrm{Ag}\times V_\textrm{Ag}-M_\textrm{KSCN}\times V_\textrm{KSCN}$, $\textrm{moles I}^-=(\textrm{0.05619 M AgNO}_3)\times(\textrm{0.05000 L AgNO}_3)-(\textrm{0.05322 M KSCN})\times(\textrm{0.03514 L KSCN})$. Solving shows that there are 9.393 × 10–4 moles of I– in the sample. After the equivalence point, the concentration of excess Ag+ is $[\textrm{Ag}^+]=\dfrac{\textrm{moles Ag}^+\textrm{ added}-\textrm{initial moles Cl}^-}{\textrm{total volume}}=\dfrac{M_\textrm{Ag}V_\textrm{Ag}-M_\textrm{Cl}V_\textrm{Cl}}{V_\textrm{Cl}+V_\textrm{Ag}}$. The symbol of silver is Ag, which is taken from its Latin name, argentum.
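The Volhard back-titration stoichiometry above is one subtraction followed by a mass-percent conversion. A sketch with the example's values (the %w/w result matches the figure quoted elsewhere in this section):

```python
# Sketch of the Volhard back-titration arithmetic from the text's I- example.
m_ag, v_ag = 0.05619, 0.05000      # AgNO3 added in excess: M, L
m_kscn, v_kscn = 0.05322, 0.03514  # KSCN back-titrant: M, L
sample_mass = 0.6712               # g of sample
M_I = 126.9                        # g/mol, iodide

mol_i = m_ag * v_ag - m_kscn * v_kscn  # moles I- = moles AgNO3 - moles KSCN
pct_i = 100 * mol_i * M_I / sample_mass
print(f"{mol_i:.4e} mol I-, {pct_i:.2f}% w/w")
```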
Answer (a): Before adding AgNO3, NaCl is fully dissociated: NaCl → Na+ + Cl–, so [Na+] = [Cl–] = 0.1 M. In a precipitation titration curve, a graph is drawn of the change in the titrant’s concentration as a function of the titrant’s volume. That's what we will do in the following example. Precipitation titrations are mainly based on the formation of a precipitate by the reaction of the sample with precipitating agents; separable solid compounds form during the course of the reaction. Calcium nitrate, Ca(NO3)2, was used as the titrant, forming a precipitate of CaCO3 and CaSO4. The first task is to calculate the volume of Ag+ needed to reach the equivalence point. A precipitation titration curve is given below for 0.05 M NaCl with 0.1 M AgNO3. The most frequent use of precipitation reactions in analytical chemistry is the titration of halides, in particular Cl– by Ag+. After the equivalence point, Ag+ is in excess and the concentration of Cl– is determined by the solubility of AgCl. The precipitation titration curve is influenced by the concentrations of Ag+ and Cl– and by the Ksp value (the completeness of reaction). When calculating a precipitation titration curve, you can choose to follow the change in the titrant’s concentration or the change in the titrand’s concentration. Because CrO42– imparts a yellow color to the solution, which might obscure the end point, only a small amount of K2CrO4 is added. To evaluate the relationship between a titration’s equivalence point and its end point we need to construct only a reasonable approximation of the exact titration curve. Here we have discussed an example of precipitation titration. As we have done with other titrations, we first show how to calculate the titration curve and then demonstrate how we can quickly sketch a reasonable approximation of the titration curve.
Potentiometric Precipitation Titration Example (by Rahul Malik, March 2016). This method was given by the American chemist Kazimierz Fajans. Again, the calculations are straightforward. One type of titration is precipitation titration, which dates to the early 18th century and is considered one of the oldest analytical techniques. Precipitation titration is a type of titration which involves the formation of a precipitate during the titration. Examples of substances analyzed include divalent ions, trivalent ions, etc. First, the sample to be analyzed is titrated with an AgNO3 solution, which results in the precipitation of a white silver solid (e.g., AgCl). A 1.963-g sample of an alloy is dissolved in HNO3 and diluted to volume in a 100-mL volumetric flask. The titration is continued till the last drop of the analyte is consumed. The end point is determined by the green suspension (of AgCl and indicator) turning pink (a complex of AgCl and indicator). Many anions produce sparingly soluble silver compounds (precipitates) that can … Ag+ + Cl− → AgCl (ppt.) After the end point, the surface of the precipitate carries a positive surface charge due to the adsorption of excess Ag+. Related: Potentiometric Titration. Before precipitation titrimetry became practical, better methods for identifying the end point were necessary. Example: Cl– can be determined when titrated with AgNO3. Determination of chloride – Principle: Chlorides are present in all types of water resources at varying concentrations, depending on the geochemical conditions, in the form of CaCl2, MgCl2, and NaCl. Titrations with silver nitrate are sometimes called argentometric titrations. The titrant reacts with the analyte, forming an insoluble material, and the titration continues till the very last amount of analyte is consumed.
In the Fajans method for Cl– using Ag+ as a titrant, for example, the anionic dye dichlorofluorescein is added to the titrand’s solution. Applications include the determination of chloride in water, food, and beverages. When the Ksp value is small, the titration curve is sharp. Because this equation has two unknowns—g KCl and g NaBr—we need another equation that includes both unknowns. By now you are familiar with our approach to calculating a titration curve. The precipitate formed is the less soluble compound. Step 4: Calculate pCl after the equivalence point by first calculating the concentration of excess AgNO3 and then calculating the concentration of Cl– using the Ksp for AgCl. The %w/w I– in the sample is $\dfrac{(9.393\times10^{-4}\textrm{ mol I}^-)\times 126.9\textrm{ g I}^- /\textrm{mol I}^-}{\textrm{0.6712 g sample}}\times100=17.76\%\textrm{ w/w I}^-$. The indicator used will depend on the precipitation reaction and the nature of the ion in excess. Let’s calculate the titration curve for the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. The precipitation titration curve is influenced by the concentrations of the reactants. With the help of precipitation reactions, we can determine the presence of different ions in a particular solution. Determine the concentrations of fluoride and of free calcium in solution at the following titration volumes. First, the sample to be analyzed is titrated with an AgNO3 solution, which results in the precipitation of a white silver solid, AgCl. In this method dichlorofluorescein is used as an indicator. Note: this approach can be used to monitor Cl–.
At best, this is a cumbersome method for detecting a titration’s end point. This kind of titration is based on precipitation reactions: it is used to determine chloride by using silver ions, which react to form a white precipitate of silver thiocyanate or silver chloride. Precipitation titration: here precipitating agents are used for the quantitative estimation of ions and elements. The first reagent is added in excess and the second reagent is used to back-titrate the excess. The first drop of titrant in excess will react with an indicator, resulting in a color change and announcing the termination of the titration. Although precipitation titrimetry is rarely listed as a standard method of analysis, it may still be useful as a secondary analytical method for verifying other analytical methods. At the beginning of this section we noted that the first precipitation titration used the cessation of precipitation to signal the end point. The word argentometric is likewise taken from the Latin word argentum. Figure 9.44b shows pCl after adding 10.0 mL and 20.0 mL of AgNO3. Dichlorofluorescein now adsorbs to the precipitate’s surface, where its color is pink. For example, after adding 35.0 mL of titrant, \begin{align} [\textrm{Ag}^+]&=\dfrac{M_\textrm{Ag}V_\textrm{Ag}-M_\textrm{Cl}V_\textrm{Cl}}{V_\textrm{Cl}+V_\textrm{Ag}}\\ &=\dfrac{\textrm{(0.100 M)(35.0 mL)}-\textrm{(0.0500 M)(50.0 mL)}}{\textrm{50.0 mL + 35.0 mL}}=1.18\times10^{-2}\textrm{ M} \end{align} Titration is a technique used in analytical chemistry to determine the concentration of an unknown solution by using a solution of known concentration. Titration curves for argentometric methods are normally sigmoidal curves of pAg (or pAnalyte) versus the volume of AgNO3 solution added. During the reaction a salt is precipitated as the titration is completed: Ag+ + Cl− → AgCl (ppt.) This titration must be performed in acidic medium, otherwise the iron ion is precipitated as hydrated oxide.
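The post-equivalence-point arithmetic at 35.0 mL, combined with Ksp(AgCl) = 1.8 × 10–10 from this section, can be sketched as:

```python
import math

# Sketch: [Ag+] and [Cl-] after the equivalence point, when 35.0 mL of
# 0.100 M AgNO3 has been added to 50.0 mL of 0.0500 M NaCl.
KSP_AGCL = 1.8e-10   # Ksp for AgCl, as given in the text

v_ag, m_ag = 35.0, 0.100    # titrant volume (mL) and molarity
v_cl, m_cl = 50.0, 0.0500   # titrand volume (mL) and molarity

ag = (m_ag * v_ag - m_cl * v_cl) / (v_cl + v_ag)  # excess Ag+, ~1.18e-2 M
cl = KSP_AGCL / ag                                # ~1.5e-8 M, set by solubility
pcl = -math.log10(cl)                             # one post-equivalence point
print(ag, cl, pcl)
```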
Fields of application: the determination of the anions I– and Br–, and of Ag+, is also common. We call this type of titration a precipitation titration. Only a limited number of precipitating agents are used because of the slow rate of appearance of many precipitates (Skoog et al., 2014). The most frequent precipitation titration is precipitation with silver nitrate (AgNO3). At the end point, when all chloride ions have been consumed by silver ion, a reddish-brown precipitate is formed by the reaction of silver ion and chromate ion. In some titrations the point of initial precipitation was delayed, and in others the precipitate dissolved in an excess of reagent. Another method for locating the end point is a potentiometric titration, in which we monitor the change in the titrant’s or the titrand’s concentration using an ion-selective electrode. Figure 9.44a shows the result of this first step in our sketch. Report the %w/w KCl in the sample. When the silver(I) has been precipitated as white silver thiocyanate, the first excess of titrant and the iron(III) indicator react and form a soluble red complex. Subtracting the end point for the reagent blank from the titrand’s end point gives the titration’s end point. A better fit is possible if the two points before the equivalence point are further apart—for example, 0 mL and 20 mL—and the two points after the equivalence point are further apart. A precipitation titration curve follows the change in either the titrand’s or the titrant’s concentration as a function of the titrant’s volume. Precipitation titrations are based on reactions that yield ionic compounds of limited solubility. Let’s use the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. A 100.0-mL solution containing 0.100 M NaCl was titrated with 0.100 M AgNO3 and monitored with an S.C.E.
This method involves the determination of halide (F–, Cl–, Br–, I–) ions, and of anions like phosphate and chromate, in acidic medium by using silver ions. The precipitation titration curve is influenced by the concentrations of Ag+ and Cl– and by the Ksp value (the completeness of reaction). The scale of operations, accuracy, precision, sensitivity, time, and cost of a precipitation titration are similar to those described elsewhere in this chapter for acid–base, complexation, and redox titrations. Dichlorofluorescein: the greenish cloudy solution turns reddish at the end point. In this method the analyte (a halide ion solution or any other anionic solution) is titrated with a measured excess of AgNO3; the unreacted, excess silver ions are then titrated with a standard solution of KSCN using iron(III) ion as the indicator. We bring two reacting substances into contact in a precipitation titration. One of the earliest precipitation titrations—developed at the end of the eighteenth century—was the analysis of K2CO3 and K2SO4 in potash. In this reaction, the analyte and titrant form an insoluble precipitate that can serve as a basis for a titration (LibreTexts.org, 2016). Silver nitrate is an important precipitating … An example of a precipitation titration reaction is the Mohr method, which is used to find the concentration of halide ions in solution (particularly Cl– and Br–). A titration involving precipitation at the end of the process is called a precipitation titration. The reaction occurs by the formation of a solid precipitate at the bottom of the flask. The condition for this (Volhard) titration should be acidic. In the Mohr method for Cl– using Ag+ as a titrant, for example, a small amount of K2CrO4 is added to the titrand’s solution. Example (1): If the solubility of AgCl is 0.0015 g/L, what is the solubility product?
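Example (1) takes two steps: convert the mass solubility to a molar solubility, then square it, since AgCl dissolves as Ag+ + Cl– in a 1:1 ratio. A sketch, assuming the standard molar mass of AgCl (~143.32 g/mol, which the text does not state):

```python
# Sketch of Example (1): Ksp of AgCl from its solubility of 0.0015 g/L.
# The molar mass below is supplied here; it is not given in the text.
M_AGCL = 143.32      # g/mol for AgCl (assumed standard value)

s = 0.0015 / M_AGCL  # molar solubility, ~1.05e-5 M
ksp = s * s          # AgCl -> Ag+ + Cl-, so Ksp = [Ag+][Cl-] = s^2
print(f"s = {s:.2e} M, Ksp = {ksp:.1e}")
```

The result, ≈1.1 × 10–10, is consistent with the Ksp value of 1.8 × 10–10 used elsewhere in this section to the order of magnitude.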
Table 13-1 (Concentration changes during a titration of 50.00 mL of 0.1000 M AgNO3 with 0.1000 M KSCN) lists, against the volume of 0.1000 M KSCN in mL: [Ag+] in mmol/L, the mL of KSCN needed to cause a tenfold decrease in [Ag+], pAg, and pSCN; the first row, at 0.00 mL, has [Ag+] = 1.000 × 10–1 and pAg = 1.00. For example, the formation of a second precipitate such as silver chromate, Ag2CrO4, of distinctive color is the basis for end-point detection with the Mohr method. Fajans method (indicator adsorption method): the precipitation titration in which silver ion is titrated with halide or thiocyanate ions in the presence of an adsorption indicator is called the Fajans method. Adsorption indicators function in an entirely different manner than chemical indicators, and they can be used in many precipitation titrations. Since the adsorption of … Precipitation: during titration, the precipitate will form if the reaction forms a solid. Additional results for the titration curve are shown in Table 9.18 and Figure 9.43. Next, we draw a straight line through each pair of points, extending them through the vertical line representing the equivalence point’s volume (Figure 9.44d). An example of a chelating agent is the sodium salt of ethylenediaminetetraacetic acid (EDTA). Titration of a strong acid with a strong base (continued); titration of a weak acid with a strong base. The nature of a precipitation equilibrium may be studied by calculations involving the solubility product constant. This is the same example that we used in developing the calculations for a precipitation titration curve. To calculate the concentration of Cl– we use the Ksp expression for AgCl; thus, $K_\textrm{sp}=\mathrm{[Ag^+][Cl^-]}=(x)(x)=1.8\times10^{-10}$. Calculate the titration curve for the titration of 50.0 mL of 0.0500 M AgNO3 with 0.100 M NaCl as pAg versus VNaCl, and as pCl versus VNaCl. Our goal is to sketch the titration curve quickly, using as few calculations as possible. The Volhard method was first published in 1874 by Jacob Volhard.
One of the earliest precipitation titrations—developed at the end of the eighteenth century—was the analysis of K2CO3 and K2SO4 in potash. This creates anion vacancies in the crystal, and analyte such as F– can diffuse … Potentiometric titration curves were recorded with a pH electrode for the precipitation of 25.0 mL of 0.10 M NaCl with 0.50 M AgNO3 in the absence of a mediator. A second type of indicator uses a species that forms a colored complex with the titrant or the titrand. The reactions involved are as follows – A 0.3172-g sample is dissolved in 50 mL of water and titrated to the Ag2CrO4 end point, requiring 36.85 mL of 0.1120 M AgNO3. An example of a precipitation titration reaction is the Mohr method, which is used to find the concentration of halide ions in solution (particularly Cl– and Br–). A mixture containing only KCl and NaBr is analyzed by the Mohr method. In this section we demonstrate a simple method for sketching a precipitation titration curve. Figure 9.44 illustrates the steps in sketching an approximate titration curve for the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3: (a) locating the equivalence point volume; (b) plotting two points before the equivalence point; (c) plotting two points after the equivalence point; (d) preliminary approximation of the titration curve using straight lines; (e) final approximation of the titration curve using a smooth curve; (f) comparison of the approximate titration curve (solid black line) and the exact titration curve (dashed red line). As a result, the end point is always later than the equivalence point. The concentration of Cl– after the equivalence point is then $[\textrm{Cl}^-]=\dfrac{K_\textrm{sp}}{[\textrm{Ag}^+]}=\dfrac{1.8\times10^{-10}}{1.18\times10^{-2}}=1.5\times10^{-8}\textrm{ M}$.
This titration must be performed in acidic medium, otherwise the iron ion is precipitated as hydrated oxide. Example: a solution of silver nitrate is added to a solution of ammonium thiocyanate or sodium chloride. In this method silver nitrate is used as the titrant and a chloride ion solution as the analyte. The concentration of an acid or base in solution can be determined by titration with a strong base or strong acid, respectively. A titration in which Ag+ is the titrant is called an argentometric titration. It is an indirect method of precipitation. We know that $\textrm{moles KCl}=\dfrac{\textrm{g KCl}}{\textrm{74.551 g KCl/mol KCl}}$ and $\textrm{moles NaBr}=\dfrac{\textrm{g NaBr}}{\textrm{102.89 g NaBr/mol NaBr}}$, which we substitute back into the previous equation: $\dfrac{\textrm{g KCl}}{\textrm{74.551 g KCl/mol KCl}}+\dfrac{\textrm{g NaBr}}{\textrm{102.89 g NaBr/mol NaBr}}=4.048\times10^{-3}$. The condition for this (Mohr) titration should be neutral to alkaline. A reaction in which the analyte and titrant form an insoluble precipitate also can serve as the basis for a titration.
Titration curves: the titration curve for a precipitation titration follows the change in either the analyte’s or the titrant’s concentration as a function of the volume of titrant. A blank titration requires 0.71 mL of titrant to reach the same end point. If you are unsure of the balanced reaction, you can deduce the stoichiometry from the precipitate’s formula. Precipitation titrations are popular due to their unique ability to form an insoluble precipitate during the reaction. The nature of a precipitation equilibrium may be studied by calculations involving the solubility product constant. Precipitation titrations can be classified according to the titrant: argentometry (titrant AgNO3), thiocyanatometry (titrant NH4SCN, KSCN, or NaSCN), mercurometry (titrant Hg2(NO3)2), sulfatometry (titrant H2SO4 or Na2SO4), hexacyanoferratometry (titrant K4…). Example: calculate pZn at the equivalence point of a zinc titration … The number of precipitating agents that can be used is limited because of the slow action to form the precipitate. Precipitation titrations are based on reactions that yield ionic compounds of limited solubility. The unreacted, excess silver ions are then titrated with a standard solution of KSCN using iron(III) ion (Fe3+) as the indicator, which gives a red color at the end point. The molar concentration of the unknown solution is calculated as follows: 31.00 mL × 0.6973 M = 21.62 mmol Ag+ = 21.62 mmol Cl–. This method is also known as the indicator adsorption method because the chloride ions present in excess are adsorbed on the silver chloride surface. Potentiometric titrations can be classified as precipitation titrations, complex-formation titrations, neutralization titrations, and oxidation/reduction titrations.
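The unknown-chloride calculation quoted above can be completed in two lines. The 46.00 mL sample volume is taken from the "21.62 mmol Cl–/46.00 mL = 0.4700 M Cl–" result quoted later in this section:

```python
# Sketch of the unknown-chloride molarity calculation from the text.
# 31.00 mL of 0.6973 M Ag+ reacts 1:1 with the Cl- in a 46.00 mL sample
# (the sample volume comes from the result quoted elsewhere in the text).
mmol_ag = 31.00 * 0.6973       # ~21.62 mmol Ag+, equal to mmol Cl-
molarity_cl = mmol_ag / 46.00  # ~0.4700 M Cl-
print(f"{mmol_ag:.2f} mmol, {molarity_cl:.4f} M")
```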
When 25 mL of 0.1 mol L–1 AgNO3 has been added, the equivalent of 75 mL of NaCl remains in a total volume of 125 mL. An example of such a reaction is silver nitrate with ammonium chloride. Precipitation titrations also can be extended to the analysis of mixtures, provided that there is a significant difference in the solubilities of the precipitates. A further discussion of potentiometry is found in Chapter 11. To indicate the equivalence point’s volume, we draw a vertical line corresponding to 25.0 mL of AgNO3. Titration curves for precipitation titrations: the titration curve is a relation between the –log of the ionic concentration of the substance being determined and the volume of titrant added. The Fajans method was first published in the 1920s by Kazimierz Fajans. A special type of titrimetric procedure involves the formation of precipitates during the course of titration. Thus far we have examined titrimetric methods based on acid–base, complexation, and redox reactions. Because dichlorofluorescein also carries a negative charge, it is repelled by the precipitate and remains in solution, where it has a greenish-yellow color. For example, fluorescein: the greenish cloudy solution turns reddish at the end point. In an analysis for I– using Ag+ as a titrant, the titration curve may be a plot of pI or pAg as a function of the titrant’s volume. The points on the curve can be calculated, given the analyte concentration, the AgNO3 concentration, and the appropriate Ksp. For example, in forming a precipitate of Ag2CrO4, each mole of CrO42– reacts with two moles of Ag+. In the Volhard method for Ag+ using KSCN as the titrant, for example, a small amount of Fe3+ is added to the titrand’s solution.
This method involves the determination of halide (F, Cl, Br, I) ions, anions like phosphate, chromate in acidic medium by using silver ions. Figure 9.45 Titration curve for the titration of a 50.0 mL mixture of 0.0500 M I– and 0.0500 M Cl– using 0.100 M Ag+ as a titrant. We call this type of titration a precipitation titration. 6 Estimations Based on Precipitation and Gravimetry • explain an example in which formation of a coloured complex ion can be employed to indicate the end point in a precipitation titration and • The mode of action of adsorption indicators for precipitation titrations. The stoichiometry of the reaction requires that, $M_\textrm{Ag}\times V_\textrm{Ag}=M_\textrm{Cl}\times V_\textrm{Cl}$, $V_\textrm{eq}=V_\textrm{Ag}=\dfrac{M_\textrm{Cl}V_\textrm{Cl}}{M_\textrm{Ag}}=\dfrac{\textrm{(0.0500 M)(50.0 mL)}}{\textrm{(0.100 M)}}=\textrm{25.0 mL}$. when KSP value is small the titration curve is perfect . Titration involves measuring and recording the cell potential (in units of millivolts or pH) after each addition of titrant. The final category for … as indicator which gives red color in the end point. The red arrows show the end points. Precipitation titration Titrations with precipitating agents are useful for determining certain analyte. Pcl of 4.89 remains in a solution of known concentration is known as indicator which gives red color the! Content in food, beverages and water KCl ( aq ) + Cl- ( aq ) and Cl- aq. Sample of an acid or base in solution where it has a greenish-yellow color the reaction... The pH is too acidic, chromate is present in excess and the titration is recognized... Is limited because of the flask ammonium thiocyanate or sodium chloride method first... The concentrations of Ag+ needed to reach the equivalence point, the calculations for a precipitation titration titrations precipitating... A smooth curve that connects the three straight-line segments ( figure 9.44e ) of 10 mM CaCl.... 
Hcro4– instead of CrO42–, and in others the precipitate of precipitation reactions, can..., AgNO3 + Cl- AgCl + indicator AgCl-Ag+ indicator discuss the feasibility of precipitation titration to review your answer this... I– in a 0.6712-g sample was determined by green suspension ( of AgCl precipitation to signal end. Voltage reading would be observed after 65.0 mL precipitate also can serve as the titration is on precipitation! Dichromate etc indicate the equivalence point, the precipitate KSCN requires 27.19 mL to reach the example! Analyte in titration technique ) → AgCl ( ppt. it has a negative charge, is. Its color is pink same example that we used in developing the for. Of K2CO3 and K2SO4 in potash NaCl ) ( white ppt ) 9.18 and figure 9.43 axes... Answer to this exercise AgCl ( ppt. oxidation/reduction titrations, Br and is! Involved are as follows –, Ag+ + AgCl + NO3-, ( solution. Comparison of our sketch reaction involved can be shown as follows- point using the Volhard was! ( figure 9.44e ) note that the concentrations of fluoride and calcium in! 20.0 mL of AgNO3 needed to reach the equivalence point, Ag+ AgCl... Surface where its color is pink as carbromal, KCl infusion, NaCl infusion etc dichromate! By Kasimir Fajans titrations with silver nitrate are sometimes called argentometric titrations this first step in our sketch such. 1St analyte ( halide ion solution as analyte in titration technique: determining solute concentration by titration... Shown in the following example due to the precipitate dissolved in an acidic solution prevent... As follows- basis of this section we demonstrate a simple method for sketching a precipitation titration is sketch. Same example that we used in developing the calculations for a mixture containing only KCl NaBr! Change in the indicator ’ s volume, we draw our axes, placing pCl on the dissolved! 
0.100 M AgNO3 of ferric thiocyanate is formed which indicates end point: a ) adding., NaCl infusion etc because CrO42– is a titrimetric method which involves the use of calculations involving product. And NaBr is analyzed by the conc and g NaBr—we need another equation that includes unknowns... Discussion of potentiometry is found in Chapter 11 that forms a precipitate of AgCl and indicator ) reactions involved as... Titration requires 0.71 mL of 0.0500 M NaCl was titrated with measured excess of AgNO3 titration used the of. The end point for I– is earlier than the end point is the same example that we used in the. Its color is pink forms an insoluble substance precipitation reactions in analytical Chemistry is formation. Visually examining the titration curve is influenced by the solubility of AgCl and indicator ) in technique. 1 Potentiometric precipitation titration is precipitation with silver nitrate ( AgNO3 ) corresponds to the mid-1800s include divalent,... And incidation ) turning pink ( complex of AgCl and incidation ) turning pink ( complex of AgCl has negative... Point by determining the concentration of anions in the end point for the titration ’ volume! Need 25.0 mL of AgNO3 of iodide and cyanate is not possible a colored complex with the and... ( halide ion solution as analyte simpler than gravimetric methods a positive surface charge due to their unique to... Libretexts content is licensed by CC BY-NC-SA 3.0 ( aq ) NaCl ) ( white ). The mid-1800s precipitate and remains in solution can be written as follows – point precipitation! Of CBSE Class 12 ( white ppt ) added precipitating reagent = Quantity of substance precipitated. Significant difference in the solubilities of the slow rate of appearance of precipitate ( Skoog, et,! When KSP value ( completeness of reaction ), therefore, has to with! Mmol Cl-/46.00 mL Cl-= 0.4700 molar Cl- Worked example: a ) before adding AgNO3: NaCl → +... 
Precipitation titrimetry is one of the oldest analytical techniques, dating back to the early eighteenth century. In a precipitation titration, the titrant reacts with the analyte to form an insoluble precipitate; titrations that use Ag+ as the titrant are called argentometric titrations. Consider the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3. The titration reaction is Ag+(aq) + Cl−(aq) → AgCl(s), and the stoichiometry shows that 25.0 mL of titrant is needed to reach the equivalence point. Before the equivalence point, Cl− is present in excess and pCl is determined by the concentration of unreacted Cl−. At the equivalence point, the Ksp expression for AgCl gives [Cl−] ≈ 1.3 × 10−5 M. After the equivalence point, Ag+ is in excess and [Cl−] follows from Ksp/[Ag+].

Three classical methods are used to signal the end point. In the Mohr method, chromate serves as the indicator: once the analyte is consumed, the first excess of Ag+ forms a reddish-brown precipitate of silver chromate. The solution must not be too acidic, because HCrO4− forms instead of CrO42− and precipitation of the indicator is delayed. In the Volhard method, the analyte is treated with a measured excess of Ag+ and the unreacted Ag+ is back-titrated with potassium or sodium thiocyanate; the first excess of SCN− reacts with an Fe3+ indicator to form a red ferric thiocyanate complex, which signals the end point. In the Fajans method, introduced by the chemist Kazimierz Fajans, an adsorption indicator is used: the indicator is a species that changes color when it adsorbs onto the surface of the precipitate, whose surface charge switches from negative (excess Cl− adsorbed) to positive (excess Ag+ adsorbed) just past the equivalence point. Because AgI is less soluble than AgCl, the end point for I− comes earlier than the end point for Cl−. These titrations are normally carried out at room temperature.
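The pCl arithmetic described above can be sketched in a few lines. The script below assumes Ksp(AgCl) = 1.8 × 10⁻¹⁰ (a standard handbook value, not stated in the text) and reproduces the three regimes of the curve for the titration of 50.0 mL of 0.0500 M NaCl with 0.100 M AgNO3:

```python
import math

# Sketch of the titration-curve arithmetic: 50.0 mL of 0.0500 M NaCl
# titrated with 0.100 M AgNO3. Ksp(AgCl) = 1.8e-10 is an assumed handbook value.
Ksp = 1.8e-10
V0, C_Cl, C_Ag = 50.0, 0.0500, 0.100   # mL, mol/L, mol/L

def pCl(V_Ag):
    """pCl after adding V_Ag mL of AgNO3 titrant."""
    mmol_Cl = V0 * C_Cl - V_Ag * C_Ag   # mmol of Cl- not yet precipitated
    V_tot = V0 + V_Ag                   # total volume, mL
    if mmol_Cl > 1e-9:                  # before equivalence: excess Cl-
        Cl = mmol_Cl / V_tot
    elif mmol_Cl < -1e-9:               # after equivalence: [Cl-] = Ksp/[Ag+]
        Cl = Ksp / (-mmol_Cl / V_tot)
    else:                               # equivalence point: [Cl-] = sqrt(Ksp)
        Cl = math.sqrt(Ksp)
    return -math.log10(Cl)

print(round(pCl(10.0), 2))   # before equivalence -> 1.6
print(round(pCl(25.0), 2))   # equivalence point  -> 4.87
print(round(pCl(35.0), 2))   # after equivalence  -> 7.82
```

The equivalence-point value confirms the [Cl−] ≈ 1.3 × 10⁻⁵ M quoted in the text, since −log10(√(1.8 × 10⁻¹⁰)) ≈ 4.87.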
http://www.dummies.com/how-to/content/military-flight-aptitude-conversions-and-equivalen.html

The Military Flight Aptitude Test will ask you about measurements and conversions. Need to quickly convert a metric unit to a U.S. customary one? No problem. Here are some fast conversion formulas for your viewing pleasure:
• Seconds to minutes: Number of seconds / 60 = number of minutes
• Meters to inches: Number of meters / 0.0254 = number of inches
• Centimeters to inches: Number of centimeters / 2.54 = number of inches
• Kilograms to pounds: Number of kilograms / 0.45 = number of pounds
• Pascals to atmospheres: Number of pascals / 101,325 = number of atmospheres
• Joules to calories: Number of joules / 4.184 = number of calories
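The formulas above translate directly into one-line helpers. This is a quick sketch (function names are illustrative); note that 0.45 kg/lb is the article's rounded factor, the exact value being 0.45359237:

```python
# Quick Python helpers for the conversion formulas above.
# The kg/lb factor 0.45 follows the article's rounding (exact: 0.45359237).

def seconds_to_minutes(s):
    return s / 60

def meters_to_inches(m):
    return m / 0.0254           # 1 inch = 0.0254 m exactly

def centimeters_to_inches(cm):
    return cm / 2.54

def kilograms_to_pounds(kg):
    return kg / 0.45            # approximate

def pascals_to_atmospheres(pa):
    return pa / 101_325         # 1 atm = 101,325 Pa

def joules_to_calories(j):
    return j / 4.184            # thermochemical calorie

print(round(meters_to_inches(1), 2))             # -> 39.37
print(round(pascals_to_atmospheres(202650), 1))  # -> 2.0
```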
These tables provide a handy list of metric equivalents that you can use to get the hang of the various metric measurements.
Length Metric Equivalents

| Unit | Abbreviation | Number of Meters | Approximate U.S. Customary Equivalent |
|---|---|---|---|
| kilometer | km | 1,000 | 0.62 miles |
| hectometer | hm | 100 | 328.08 feet |
| dekameter | dam | 10 | 32.81 feet |
| meter | m | 1 | 39.37 inches |
| decimeter | dm | 0.1 | 3.94 inches |
| centimeter | cm | 0.01 | 0.39 inches |
| millimeter | mm | 0.001 | 0.039 inches |
| micrometer | μm | 0.000001 | 0.000039 inches |
Area Metric Equivalents

| Unit | Abbreviation | Number of Square Meters | Approximate U.S. Customary Equivalent |
|---|---|---|---|
| square kilometer | sq km or km² | 1,000,000 | 0.3861 square miles |
| hectare | ha | 10,000 | 2.47 acres |
| acre | — | 4,047 | 4,840 square yards |
| square centimeter | sq cm or cm² | 0.0001 | 0.155 square inches |
Volume Metric Equivalents

| Unit | Abbreviation | Number of Cubic Meters | Approximate U.S. Customary Equivalent |
|---|---|---|---|
| cubic meter | m³ | 1 | 1.307 cubic yards |
| cubic decimeter | dm³ | 0.001 | 61.023 cubic inches |
| cubic centimeter | cu cm, cm³, or cc | 0.000001 | 0.061 cubic inches |
https://www.physicsforums.com/threads/ampere-law-problem.469985/

# Ampere Law Problem
In Ampère's circuital law, $\oint \vec{B} \cdot d\vec{l} = \mu_0 i_{net}$,
i includes only the current passing through the loop,
while B is the net magnetic field at any point, due to all currents anywhere.
Now look at the pic.
In A, it is very easy to find the field using Ampère's law: it's μ₀i/(2πr).
But if you apply the law in B, B is again μ₀i/(2πr) ... how is this possible?
It's like the external current doesn't make any difference!
Please explain this!
Ampère's law relies on what you define to be your curve of integration. Ampère's law states that the closed-loop integral $\oint \vec{B} \cdot d\vec{l}$ over a given curve is proportional to the net current ENCLOSED by said loop.
In A you only have 1 wire enclosed by your curve of integration (red circle). In B, yes, there are 2 wires, but only one is enclosed by your curve of integration, thus the same answer as in A.
> Ampère's law relies on what you define to be your curve of integration. Ampère's law states that the closed-loop integral $\oint \vec{B} \cdot d\vec{l}$ over a given curve is proportional to the net current ENCLOSED by said loop. In A you only have 1 wire enclosed by your curve of integration (red circle). In B, yes, there are 2 wires, but only one is enclosed by your curve of integration, thus the same answer as in A.
So are you saying that I studied it wrong, and that in $\oint \vec{B} \cdot d\vec{l}$, B is not due to all the currents existing in space?
> So are you saying that I studied it wrong, and that in $\oint \vec{B} \cdot d\vec{l}$, B is not due to all the currents existing in space?
No, no, you are correct. B is due to the combined effect from all the currents.
> But if you apply the law in B, B is again μ₀i/(2πr) ... how is this possible? It's like the external current doesn't make any difference!
The integral $$\oint \vec{B} \cdot \vec{dl}$$ gives the same result, but you cannot pull $$|\vec{B}|$$ out like that because it is not constant in this case, unlike in the first!
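Fightfish's point can be checked numerically. The sketch below (currents and geometry are made up for illustration) integrates B·dl around a unit circle with one wire inside and one outside; the line integral picks up only the enclosed current, even though both wires contribute to B at every point:

```python
import numpy as np

# Numerical check of Ampere's law: loop of radius 1 centered at the origin,
# wire A at the origin carrying 2 A, wire B outside at (3, 0) carrying 5 A.
# Both currents point out of the page; values are illustrative.
mu0 = 4e-7 * np.pi

def B_wire(x, y, x0, y0, I):
    """Field of an infinite straight wire at (x0, y0); B = mu0*I/(2*pi*r),
    direction tangential, i.e. along (-dy, dx)/r."""
    dx, dy = x - x0, y - y0
    r2 = dx**2 + dy**2
    return mu0 * I / (2 * np.pi * r2) * np.array([-dy, dx])

N = 20000
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
x, y = np.cos(theta), np.sin(theta)
tangent = np.array([-np.sin(theta), np.cos(theta)])   # unit tangent of the loop

# Total field is the sum over ALL wires, enclosed or not
B = B_wire(x, y, 0.0, 0.0, 2.0) + B_wire(x, y, 3.0, 0.0, 5.0)

# Riemann sum for the line integral (loop radius 1, so dl = dtheta)
integral = np.sum(np.sum(B * tangent, axis=0)) * (2 * np.pi / N)

print(round(integral / mu0, 6))   # -> 2.0: only the enclosed 2 A survives
```

The external wire's contribution to B is nonzero everywhere on the loop, but its line integral cancels exactly, which is why B cannot be pulled out of the integral in case B.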
Oh yes, you are right! How could I ignore that? Dumb of me.
Thanks a lot, Fightfish!
http://michelbrito.com/Mathematical-Characters

## Greek Alphabet
Greek letters (c. 800 BC) are often used as symbols in mathematics, physics, and other sciences.
Table: The Greek Alphabet. Order, English name, modern pronunciation, Latin transliteration, lowercase, lowercase variant, and uppercase. (Uppercase omitted if coincides with a Roman letter.)
| # | Name | IPA | Latin | Lowercase | Variant | Uppercase |
|---|---|---|---|---|---|---|
| 1 | Alpha | /ˈælfə/ | a | $\alpha$ | | |
| 2 | Beta | /ˈbeɪtə/ | b | $\beta$ | | |
| 3 | Gamma | /ˈɡæmə/ | g | $\gamma$ | | $\Gamma$ |
| 4 | Delta | /ˈdɛltə/ | d | $\delta$ | | $\Delta$ |
| 5 | Epsilon | /ˈɛpsɪlɒn/ | e | $\epsilon$ | $\varepsilon$ | |
| 6 | Zeta | /ˈzeɪtə/ | z | $\zeta$ | | |
| 7 | Eta | /ˈeɪtə/ | ē | $\eta$ | | |
| 8 | Theta | /ˈθeɪtə/ | th | $\theta$ | $\vartheta$ | $\Theta$ |
| 9 | Iota | /aɪˈoʊtə/ | i | $\iota$ | | |
| 10 | Kappa | /ˈkæpə/ | k, c | $\kappa$ | $\varkappa$ | |
| 11 | Lambda | /ˈlæmdə/ | l | $\lambda$ | | $\Lambda$ |
| 12 | Mu | /mjuː/ | m | $\mu$ | | |
| 13 | Nu | /njuː/ | n | $\nu$ | | |
| 14 | Xi | /zaɪ/, /ksaɪ/ | x | $\xi$ | | $\Xi$ |
| 15 | Omicron | /ˈɒmɪkrɒn/ | o | $\omicron$ | | |
| 16 | Pi | /paɪ/ | p | $\pi$ | $\varpi$ | $\Pi$ |
| 17 | Rho | /roʊ/ | r, rh | $\rho$ | $\varrho$ | |
| 18 | Sigma | /ˈsɪɡmə/ | s | $\sigma$ | $\varsigma$ | $\Sigma$ |
| 19 | Tau | /taʊ/, /tɔː/ | t | $\tau$ | | |
| 20 | Upsilon | /ˈʊpsɪlɒn/ | y, u | $\upsilon$ | | $\Upsilon$ |
| 21 | Phi | /faɪ/ | ph | $\phi$ | $\varphi$ | $\Phi$ |
| 22 | Chi | /kaɪ/ | ch, kh | $\chi$ | | |
| 23 | Psi | /saɪ/, /psaɪ/ | ps | $\psi$ | | $\Psi$ |
| 24 | Omega | /oʊˈmeɪɡə/ | ō | $\omega$ | | $\Omega$ |
## Latin Alphabet
Latin letters (c. 700 BC) are the most commonly used mathematical symbols.
Table: The Latin Alphabet. Uppercase, double-struck (blackboard bold), calligraphic, alternative calligraphic (script, e.g. Ralph Smith's Formal Script), Fraktur (Gothic); lowercase, Fraktur (Gothic).
| # | Up | bb | cal | scr | frak | Low | frak |
|---|---|---|---|---|---|---|---|
| 1 | $A$ | $\mathbb{A}$ | $\mathcal{A}$ | $\mathscr{A}$ | $\mathfrak{A}$ | $a$ | $\mathfrak{a}$ |
| 2 | $B$ | $\mathbb{B}$ | $\mathcal{B}$ | $\mathscr{B}$ | $\mathfrak{B}$ | $b$ | $\mathfrak{b}$ |
| 3 | $C$ | $\mathbb{C}$ | $\mathcal{C}$ | $\mathscr{C}$ | $\mathfrak{C}$ | $c$ | $\mathfrak{c}$ |
| 4 | $D$ | $\mathbb{D}$ | $\mathcal{D}$ | $\mathscr{D}$ | $\mathfrak{D}$ | $d$ | $\mathfrak{d}$ |
| 5 | $E$ | $\mathbb{E}$ | $\mathcal{E}$ | $\mathscr{E}$ | $\mathfrak{E}$ | $e$ | $\mathfrak{e}$ |
| 6 | $F$ | $\mathbb{F}$ | $\mathcal{F}$ | $\mathscr{F}$ | $\mathfrak{F}$ | $f$ | $\mathfrak{f}$ |
| 7 | $G$ | $\mathbb{G}$ | $\mathcal{G}$ | $\mathscr{G}$ | $\mathfrak{G}$ | $g$ | $\mathfrak{g}$ |
| 8 | $H$ | $\mathbb{H}$ | $\mathcal{H}$ | $\mathscr{H}$ | $\mathfrak{H}$ | $h$ | $\mathfrak{h}$ |
| 9 | $I$ | $\mathbb{I}$ | $\mathcal{I}$ | $\mathscr{I}$ | $\mathfrak{I}$ | $i$ | $\mathfrak{i}$ |
| 10 | $J$ | $\mathbb{J}$ | $\mathcal{J}$ | $\mathscr{J}$ | $\mathfrak{J}$ | $j$ | $\mathfrak{j}$ |
| 11 | $K$ | $\mathbb{K}$ | $\mathcal{K}$ | $\mathscr{K}$ | $\mathfrak{K}$ | $k$ | $\mathfrak{k}$ |
| 12 | $L$ | $\mathbb{L}$ | $\mathcal{L}$ | $\mathscr{L}$ | $\mathfrak{L}$ | $l$ | $\mathfrak{l}$ |
| 13 | $M$ | $\mathbb{M}$ | $\mathcal{M}$ | $\mathscr{M}$ | $\mathfrak{M}$ | $m$ | $\mathfrak{m}$ |
| 14 | $N$ | $\mathbb{N}$ | $\mathcal{N}$ | $\mathscr{N}$ | $\mathfrak{N}$ | $n$ | $\mathfrak{n}$ |
| 15 | $O$ | $\mathbb{O}$ | $\mathcal{O}$ | $\mathscr{O}$ | $\mathfrak{O}$ | $o$ | $\mathfrak{o}$ |
| 16 | $P$ | $\mathbb{P}$ | $\mathcal{P}$ | $\mathscr{P}$ | $\mathfrak{P}$ | $p$ | $\mathfrak{p}$ |
| 17 | $Q$ | $\mathbb{Q}$ | $\mathcal{Q}$ | $\mathscr{Q}$ | $\mathfrak{Q}$ | $q$ | $\mathfrak{q}$ |
| 18 | $R$ | $\mathbb{R}$ | $\mathcal{R}$ | $\mathscr{R}$ | $\mathfrak{R}$ | $r$ | $\mathfrak{r}$ |
| 19 | $S$ | $\mathbb{S}$ | $\mathcal{S}$ | $\mathscr{S}$ | $\mathfrak{S}$ | $s$ | $\mathfrak{s}$ |
| 20 | $T$ | $\mathbb{T}$ | $\mathcal{T}$ | $\mathscr{T}$ | $\mathfrak{T}$ | $t$ | $\mathfrak{t}$ |
| 21 | $U$ | $\mathbb{U}$ | $\mathcal{U}$ | $\mathscr{U}$ | $\mathfrak{U}$ | $u$ | $\mathfrak{u}$ |
| 22 | $V$ | $\mathbb{V}$ | $\mathcal{V}$ | $\mathscr{V}$ | $\mathfrak{V}$ | $v$ | $\mathfrak{v}$ |
| 23 | $W$ | $\mathbb{W}$ | $\mathcal{W}$ | $\mathscr{W}$ | $\mathfrak{W}$ | $w$ | $\mathfrak{w}$ |
| 24 | $X$ | $\mathbb{X}$ | $\mathcal{X}$ | $\mathscr{X}$ | $\mathfrak{X}$ | $x$ | $\mathfrak{x}$ |
| 25 | $Y$ | $\mathbb{Y}$ | $\mathcal{Y}$ | $\mathscr{Y}$ | $\mathfrak{Y}$ | $y$ | $\mathfrak{y}$ |
| 26 | $Z$ | $\mathbb{Z}$ | $\mathcal{Z}$ | $\mathscr{Z}$ | $\mathfrak{Z}$ | $z$ | $\mathfrak{z}$ |
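A minimal LaTeX document exercising the commands from the tables above; the package choices are the conventional ones (amssymb for \mathbb, \mathfrak, the Greek variants and the Hebrew letters; mathrsfs for \mathscr):

```latex
\documentclass{article}
\usepackage{amsmath,amssymb} % \mathbb, \mathfrak, \varkappa, \beth, \gimel
\usepackage{mathrsfs}        % \mathscr (Ralph Smith's Formal Script)
\begin{document}
% Greek letters and variants from the first table
\[ \varepsilon > 0, \qquad \vartheta \in [0, 2\pi), \qquad \Gamma(\alpha) \]
% Letter styles from the second table
\[ x \in \mathbb{R}^n, \qquad \mathcal{L}\{f\}(s), \qquad \mathscr{F}[g], \qquad \mathfrak{g} \]
% Hebrew letters from the third table
\[ \aleph_0 \le \beth_1 \]
\end{document}
```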
## Hebrew Letters
Some Hebrew letters (c. 200 BC) are also used as mathematical symbols.
Table: Hebrew Letters used as math symbols.
| # | Letter | Name |
|---|---|---|
| 1 | $\aleph$ | Aleph |
| 2 | $\beth$ | Beth |
| 3 | $\gimel$ | Gimel |
https://www.intechopen.com/chapters/53025

Open access peer-reviewed chapter
# Computational Modeling of Vehicle Radiators Using Porous Medium Approach
By Barbaros Çetin, Kadir G. Güler and Mehmet Haluk Aksel
Submitted: April 4th, 2016. Reviewed: October 10th, 2016. Published: April 27th, 2017.
DOI: 10.5772/66281
## Abstract
A common tool for the determination of thermal characteristics of vehicle radiators is experimental testing. However, experimental testing may not be feasible considering the cost and labor-time. Basic understanding of the past experimental data and analytical/computational modeling can significantly enhance the effectiveness of the design and development phase. One such computational modeling technique is the utilization of computational fluid dynamics (CFD) analysis to predict the thermal characteristics of a vehicle radiator. However, CFD models are also not suitable to be used as a design tool, since a considerable amount of computational power and time is required due to the multiple length scales involved in the problem, especially the small-scale geometric details associated with the fins. Although fins introduce a significant complexity for the problem, the repetitive and/or regular structure of the fins enables porous-medium-based modeling. By porous modeling, a memory- and time-efficient computational model can be developed and implemented as an efficient design tool for radiators. In this work, a computational methodology is described to obtain the hydrodynamic and thermal characteristics of a vehicle radiator. Although the proposed methodology is discussed in the context of a vehicle radiator, it can be implemented for any compact heat exchanger with repetitive fin structures, which is an important problem for many industrial applications.
### Keywords
• computational modeling
• porous medium
## 1. Introduction
One of the main components of the cooling system of an engine is the radiator. Vehicle radiators are typically fin-and-tube-type compact heat exchangers (HXs) composed of inlet manifolds, outlet manifolds, tubes and fins, as shown in Figure 1. Simply put, a radiator works with two fluids: air and an antifreeze–water mixture. The hot antifreeze–water mixture flows through the tubes, whereas cooling air flows through the fins, resulting in heat exchange between the two streams.
Due to the strong competition in the automotive industry, radiators with better performance (higher cooling capacity, less hydrodynamic loss, less weight, etc.) have been desired. A common tool for the determination of thermal characteristics of vehicle radiators is experimental testing. However, experimental testing may not be feasible considering the cost and labor-time. Basic understanding of the past experimental data and analytical/computational modeling can significantly enhance the effectiveness of the design and development phase. There are techniques available to analyze HXs, such as log mean temperature difference (LMTD) and effectiveness-NTU (ε-NTU). However, these techniques require some parameters known a priori, such as overall heat transfer coefficients and/or NTU relations for a given HX. There are no general expressions for overall heat transfer coefficients and/or ε-NTU relations valid for any HX. Therefore, these parameters need to be predicted either from analytical expressions [1], experimental data [2, 3] and/or computational models [3–6]. A priori knowledge of these parameters is required for the designer. Therefore, implementation of LMTD and/or ε-NTU is not feasible, especially for vehicle radiators which may include custom-designed fin configurations. Alternatively, computational fluid dynamics (CFD) analysis can be applied to predict the thermal characteristics of a radiator. However, CFD analysis of a full-size HX is not feasible due to the extremely high number of cells required to resolve the complex nature of the HXs, especially the fin structures. This point is more problematic when the number of fins is high, as in the case of heavy-duty vehicle radiators. Although fins introduce a significant complexity for the problem, the repetitive and/or regular structure of the fins enables porous-medium-based modeling. From a computational point of view, this approach offers some unique advantages.
The complex fluid flow occurring through fins can be introduced into the model through porous parameters. Although the determination of these porous parameters requires a rigorous, detailed computational model with a very fine mesh structure, especially within the regions mainly responsible for the fluid friction and heat transfer, this modeling can be performed on a representative unit cell due to the repetitive nature of the fins. Once these effects are included through the porous parameters, the mesh structure simplifies dramatically and, considering the whole geometry, the number of degrees of freedom of the system drops to a feasible level (on the order of 10 million). Moreover, the porous modeling does not require any boundary layer meshing, since the friction and heat transfer parameters are already included through the porous parameters.
### 1.1. Porous modeling
Porous modeling is governed by three models. The simplest is Darcy's model, which was suggested by Henry Darcy (1856) during his investigations of the hydrology of the water supplies of Dijon [7]. Darcy's equation is expressed as:
$$\frac{\Delta p}{l} = \frac{\mu}{\alpha} V \qquad (1)$$
where Δp is the pressure drop, l is the pipe length, V is the average velocity, μ is the dynamic viscosity and α is the permeability of the porous domain. The permeability depends on the fluid properties and the geometrical properties of the medium. The dependence of the pressure drop on velocity in Darcy's equation is linear; therefore, Darcy's equation is applicable when the flow is laminar. As the velocity increases, the dependence of the pressure drop on velocity becomes nonlinear due to the drag caused by solid obstacles. At this point, there are two extended models proposed in the literature, namely the Forchheimer and Forchheimer-Brinkman models. For moderate Reynolds numbers, including nonlinear effects, the pressure drop is given by Forchheimer's equation [7]:
$$\frac{\Delta p}{l} = \frac{\mu}{\alpha} V + \frac{C_F}{\alpha^{1/2}} \rho V^2 \qquad (2)$$
where $C_F$ is the dimensionless form-drag constant and ρ is the density of the fluid. The first term denotes the viscous characteristics of porous flow and the second term (also called the Forchheimer term) denotes the inertial characteristics. Lastly, the Forchheimer-Brinkman model adds a Laplacian term to Forchheimer's equation. The Forchheimer-Brinkman model is expressed as [7]:
$$\frac{\Delta p}{l} = \frac{\mu}{\alpha} V + \frac{C_F}{\alpha^{1/2}} \rho V^2 - \tilde{\mu} \nabla^2 V \qquad (3)$$
where $\tilde{\mu}$ is the effective viscosity. The added Laplacian term (also known as the Brinkman term) resolves the flow characteristics in a thin boundary layer near the walls. Strictly speaking, this term becomes important only for large values of the porosity (the ratio of the void volume to the total volume of the porous medium), which means its effect is negligible for many practical applications where the porosity is relatively small. Eq. (3) without the quadratic term is known as the extended Darcy (or Brinkman) model. The Forchheimer-Brinkman model is therefore the most general model, but the inclusion of both the Brinkman and Forchheimer terms can be questionable, since the Brinkman term is appropriate for large porosity values, yet there exists uncertainty about the validity of the Forchheimer term at larger porosity values [7].
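The transition from the linear Darcy regime to the inertia-dominated Forchheimer regime can be illustrated numerically. In the sketch below, air properties are standard values while the permeability and form-drag constant are placeholder values, not data for any particular radiator:

```python
# Illustrative comparison of Darcy (Eq. 1) vs. Forchheimer (Eq. 2)
# pressure gradients; alpha and C_F are assumed, not measured.
mu = 1.85e-5      # dynamic viscosity of air, Pa*s
rho = 1.18        # density of air, kg/m^3
alpha = 1.0e-8    # permeability, m^2 (assumed)
C_F = 0.1         # dimensionless form-drag constant (assumed)

def darcy(V):
    """Pressure gradient dp/dl from Darcy's law, Pa/m."""
    return mu / alpha * V

def forchheimer(V):
    """Pressure gradient dp/dl with the quadratic Forchheimer term, Pa/m."""
    return mu / alpha * V + C_F / alpha**0.5 * rho * V**2

for V in (0.1, 1.0, 10.0):
    print(f"V = {V:5.1f} m/s   Darcy: {darcy(V):10.3e} Pa/m   "
          f"Forchheimer: {forchheimer(V):10.3e} Pa/m")
```

At V = 0.1 m/s the two models nearly coincide; at V = 10 m/s the quadratic term dominates, which is the regime where Darcy's law alone underpredicts the loss.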
The velocity in porous modeling is specified using two different descriptions: the superficial formulation and the physical velocity formulation. The superficial velocity formulation does not take the porosity into account during the evaluation of the continuity, momentum and energy equations. On the other hand, the physical velocity formulation includes the porosity during the calculation of the transport equations [8]. The continuity and momentum transport equations for a porous domain using Forchheimer's model can be written as [2]:
$$\frac{\partial}{\partial t}(\gamma \rho) + \nabla \cdot (\gamma \rho \vec{V}) = 0 \qquad (4)$$

$$\frac{\partial}{\partial t}(\gamma \rho \vec{V}) + \nabla \cdot (\gamma \rho \vec{V} \vec{V}) = -\gamma \nabla p + \nabla \cdot (\gamma \bar{\bar{\tau}}) + \gamma \vec{B}_f - \left(\gamma^2 \frac{\mu}{\alpha} \vec{V} + \gamma^3 \frac{C_2}{2} \rho |\vec{V}| \vec{V}\right) \qquad (5)$$
where γ is the porosity, $C_2$ is the inertial coefficient of the porous domain and $\vec{B}_f$ is the body force term.
Besides flow modeling, heat transfer in porous flow is described using two models: (i) the equilibrium model and (ii) the nonequilibrium model. The equilibrium model (one-equation energy model) is used when the porous medium and the fluid phase are in thermal equilibrium. In most cases, however, the fluid phase and the porous medium are not in thermal equilibrium, and the nonequilibrium thermal model is more realistic. In the case of a radiator, this issue is important, since the temperature difference between the solid (fins) and the fluid (air flowing through the fins) is the driving mechanism for the heat transfer [4]. The nonequilibrium model therefore includes two energy equations (hence it is also known as the two-equation energy model): one for the fluid domain and one for the solid domain. The two equations are coupled via a term which represents the heat transfer between the fluid and the solid domains. The conservation equations for the two-equation energy model can be written as [2]:
$$\frac{\partial}{\partial t}(\gamma \rho_f E_f) + \nabla \cdot \left(\vec{V}(\rho_f E_f + p)\right) = \nabla \cdot \left[\gamma k_f \nabla T_f - \left(\sum_i h_i \vec{J}_i\right) + (\bar{\bar{\tau}} \cdot \vec{V})\right] + S_f^h + h_{fs} A_{fs} (T_s - T_f) \qquad (6)$$

$$\frac{\partial}{\partial t}\big((1-\gamma) \rho_s E_s\big) = \nabla \cdot \big((1-\gamma) k_s \nabla T_s\big) + S_s^h + h_{fs} A_{fs} (T_f - T_s) \qquad (7)$$
where subscripts s’and f’stand for solid and fluid, respectively. Eis the total energy, Tis the temperature, kis the thermal conductivity, Sis the energy source term and (
) stands for the effect of enthalpy transport due to the diffusion of species. The last term in both of the equations is the coupling term which models heat transfer between the fluid and solid domains. In this coupling term, hfsdenotes heat transfer coefficient for the fluid/solid interface and Afsdenotes the interfacial area density that is the ratio of the area of the fluid/solid interface and the volume of the porous zone.
Through Eqs. (3)–(7), there are many parameters which are material properties and are fixed once the materials for the fluid and the solid are selected. On the other hand, there are some parameters (i.e. the porous parameters) which are functions of the material, the geometry and the flow condition. These parameters are γ, α, $C_2$, $h_{fs}$ and $A_{fs}$. Among these, γ and $A_{fs}$ are purely geometric parameters and can be determined once the geometry of the porous structure is known. In the case of radiator modeling, once the geometry of the fins is set, these two parameters can be determined beforehand. The other parameters are flow-dependent, meaning that they need to be determined for a specific flow condition. At this point, these parameters can be determined through analytical expressions [9–11], experimental results (e.g. wind tunnel testing) [4, 12], empirical correlations [13] and/or computational models [14–21], typically valid for a representative unit cell. All of these approaches have been implemented in the literature in different studies for the analysis of micro/macro heat sinks and HXs.
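For a unit cell whose geometry is known, the two purely geometric parameters can be computed directly. The sketch below does this for an idealized plate-fin cell; all dimensions are hypothetical and not those of the radiator studied here:

```python
# Porosity (gamma) and interfacial area density (A_fs / V_total) of an
# idealized plate-fin unit cell: a cell of width W contains one flat fin
# of thickness t spanning the cell height H and depth L. Numbers are
# illustrative only.
W, H, L = 2.0e-3, 8.0e-3, 40.0e-3   # cell width, height, depth, m
t = 0.1e-3                          # fin thickness, m

V_total = W * H * L                 # cell volume
V_solid = t * H * L                 # volume of the one fin plate
V_fluid = V_total - V_solid

gamma = V_fluid / V_total           # porosity = void volume / total volume
A_fs = 2 * H * L                    # two wetted fin faces (edges neglected)
iad = A_fs / V_total                # interfacial area density, 1/m

print(f"porosity gamma = {gamma:.3f}")   # -> 0.950
print(f"IAD = {iad:.0f} 1/m")            # -> 1000
```

Note that for this simple geometry the results collapse to gamma = 1 - t/W and IAD = 2/W, which is a useful sanity check for more complex fin shapes.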
### 1.2. Computational modeling of heat exchangers
Porous modeling can be implemented for any geometry which resembles a porous structure. Moreover, if the porous structure has a repetitive nature, the porous coefficients can be obtained through detailed modeling of a representative unit cell by analytical, experimental or computational means. Heat sinks are very good examples of this case, and the porous modeling approach has been implemented for the analysis of micro/macro heat sinks [9–12]. A two-equation energy model has been implemented together with Darcy's model to analyze a straight-finned heat sink [9] and to perform an optimization for an internally finned tube [10], and together with the extended Darcy model to discuss the effect of aspect ratio and effective thermal conductivity on the thermal performance of a micro-heat sink [11]. Heat sinks contain a regular structure; therefore, there is a chance to derive analytical expressions to estimate the porous parameters [9–11].
Considering HXs with complex fin structures, computational modeling is even more challenging; therefore, computational models typically focus on specific subcomponents of HXs, such as a representative unit cell of the fin structure [14–21], the radiator fan [22] and the inlet manifold [23, 24]. The thermal performance of a HX can be improved simply by increasing the performance of the fin structure alone. A fin structure with higher heat transfer together with lower pressure drop can significantly enhance the performance of the entire system. To investigate the thermal performance of a fin structure, experimental [14–18] and/or computational models [14, 17, 19–21] can be realized for different fin geometries. Moreover, improving the flow maldistribution at the inlet manifold may also increase the thermal performance. Computational modeling of the flow maldistribution may lead to performance enhancement for HXs [23, 24].
Analyzing subcomponents may lead to qualitative conclusions about the thermal performance of an HX; however, to estimate the thermal performance quantitatively, a rigorous 3D model of the entire HX is required. Since rigorous modeling is not computationally feasible, 2D models [4], hydraulic- and thermal-resistance-based models [12, 25] and 3D mesoscale models (considering macro control volumes) have been introduced in the literature to predict the thermal performance quantitatively [26–29]. A 2D model was developed to compare the equilibrium model (one-equation thermal model) and the nonequilibrium thermal model (two-equation energy model) for a relatively small matrix-type HX [4]. A resistance-based model, combining many different correlations from the literature to predict the hydrodynamic and thermal resistances, was implemented to predict the hydrodynamic and thermal performance of a carbon-foam-finned HX [25]. The success of such a model strongly depends on the accuracy of the porous parameters; for this particular example, the model was proven to predict the hydrodynamic and thermal performance within ±15% of the experimental data. A Compact Heat Exchanger Simulation Software (CHESS) has been developed [26–28] as a rating and design tool for industrial use, based on empirical correlations of the porous parameters, to analyze the fin-and-tube part of vehicle radiators (excluding inlet and outlet manifolds). It was demonstrated that by using CHESS, the thermal performance of different vehicle radiators was predicted within ±15% of the experimental values. Alternatively, a porous-modeling-based CFD model for the fluid flow, together with mesoscale ε-NTU-based modeling of the thermal characteristics, was utilized for an air-to-air cross-flow HX [29] to investigate the effect of maldistribution on the thermal performance.
A 3D CFD model coupled with a porous medium approach has been developed to investigate the hydrodynamic performance of a plate-fin HX, in which the porous parameters were also determined using a detailed CFD model of the unit cell [30].
A full-size 3D thermal model of a relatively small compact HX was developed with different fin configurations, yielding the heat transfer and friction factor parameters which can be used in conjunction with the LMTD or ε-NTU method [5]. Since the size of that HX was small, meshing was not a problem, and the computational model was utilized for the design of an inlet manifold with better performance. Considering the size of a vehicle radiator, this approach is not an option. Thermal and structural analyses of a heavy-duty truck radiator, which had finned structures on both the liquid and the air side, have been performed using the commercial CFD software FLUENT® [31]. Forchheimer's relation was used for the porous modeling together with experimental data. A one-equation energy model was used together with an averaged equivalent thermal conductivity. The local heat transfer coefficients and pressure distribution gathered from the thermal analysis were used as boundary conditions for a finite element structural analysis, through which the thermal stresses and strains were obtained.
One alternative to all of these approaches is modeling the vehicle radiator with the porous medium approach, where the porous parameters are also deduced from a rigorous CFD model of a unit cell. Moreover, this procedure may be performed with commercial CFD software, which has very strong meshing, solving and post-processing capabilities. However, implementation of the two-temperature energy equation is crucial for an accurate prediction, especially for vehicle radiators, and this may not be straightforward with commercial software. At this point, FLUENT® may be a viable solution, since the two-temperature model capability has been included in version 14.5. More recently, a computational model of a fin-and-tube-type vehicle radiator has been developed based on the two-temperature model, and the cooling capacity of a heavy-duty vehicle radiator has been estimated without any need for empirical and/or experimental data [32]. In the upcoming section, the computational methodology of such a model is outlined. This approach may allow CFD modeling to become an efficient rating and design tool for vehicle radiators. Although the proposed computational methodology is discussed for a vehicle radiator, it may also be implemented for any compact HX with repetitive fin structures, which is an important problem for many industrial applications.
## 2. Computational modeling
The proposed computational methodology is implemented for a 4-row, 39-column commercially available heavy-duty vehicle (more specifically, tractor) radiator, as shown in Figure 1. The tractor that uses the manufactured radiator has a 64 HP Perkins engine, which requires a minimum cooling capacity of 55 kW according to the catalog data. The cooling capacity of this radiator was reported as 55.8 kW by the tractor company as a result of in-house experiments following the SAE-J1393 protocol [33]. Catalog data are tabulated in Table 1. The fin structure used on this radiator is a wavy fin (WF) structure, which is a typical structure used in vehicle radiators due to its superior thermal performance. The selected wavy fin configuration is 84 mm in length.
The computational procedure starts with the determination of the porous parameters for a given mesh configuration. The geometric parameters are determined using the CAD model. On the other hand, to determine the flow-based parameters, a parametric study needs to be performed on a unit cell with high resolution which consists of one repeating section of the fin structure. For the determination of the porous medium coefficients, the flow field should be analyzed only for the section with the finned structure (physical fin simulations). To verify the extracted porous medium coefficients, the flow field within the unit cell together with included upstream and downstream fluid domain needs to be modeled both using actual fin geometry and porous modeling. This analysis needs to be performed only once for each mesh configuration of interest.
### 2.1. Determination of the porous parameters
Fin analysis proceeds in three main steps:
1. Simulating the unit cell straight fin model by using different air inlet velocities and obtaining the resultant pressure drop across the fin.
2. Fitting a second-order curve to the collected pressure-versus-velocity data gives the Darcy-Forchheimer relation as:
$$\Delta p = a V + b V^2 = \left(\frac{\mu}{\alpha} V + C_2 \frac{1}{2} \rho V^2\right) l \qquad (8)$$
where aand bare the coefficients characterizing the flow.
3. Obtaining the inertial coefficient and the viscous coefficient from the coefficients extracted in step 2 as:

$$I_c = \frac{2\,b}{\rho\, l} \tag{9}$$

$$V_c = \frac{a}{l\,\mu} \tag{10}$$
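The fit in step 2 and the conversion in step 3 can be sketched in a few lines of code. The sample velocities, pressure drops, fluid properties and fin length below are illustrative placeholders, not values from this study:

```python
def forchheimer_coefficients(data, rho, mu, l):
    """Fit dp = a*V + b*V**2 exactly through two (V, dp) samples, then
    convert to the inertial coefficient Ic = 2*b/(rho*l) and the viscous
    coefficient Vc = a/(l*mu) of steps 2 and 3."""
    (v1, dp1), (v2, dp2) = data
    # Solve the 2x2 linear system  [v1 v1^2; v2 v2^2] [a, b]^T = [dp1, dp2]^T
    det = v1 * v2**2 - v2 * v1**2
    a = (dp1 * v2**2 - dp2 * v1**2) / det
    b = (v1 * dp2 - v2 * dp1) / det
    return 2 * b / (rho * l), a / (l * mu)  # (Ic, Vc)

# Synthetic check: generate data from known a, b and recover Ic, Vc.
a_true, b_true = 50.0, 4.0
samples = [(v, a_true * v + b_true * v**2) for v in (3.0, 7.0)]
Ic, Vc = forchheimer_coefficients(samples, rho=1.2, mu=1.8e-5, l=0.084)
```

With noisy CFD data one would of course fit the second-order curve over many inlet velocities by least squares rather than solve through two points exactly.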
Obtaining the flow-based porous medium coefficients is followed by the determination of the input parameters for heat transfer modeling. The necessary input parameters are the average heat transfer coefficient (HTC) and the interfacial area density (IAD) for the two-equation energy model. The average heat transfer coefficient is obtained from FLUENT® post-processing, where it is calculated using the following relation:

$$\mathrm{HTC} = \frac{Q}{T_w - T_{ref}} \tag{11}$$
The reference temperature in the above equation is the average temperature of the air between the inlet and outlet of the finned channel.
### 2.2. Physical fin simulations
The unit cell of a wavy fin model, Model-A, shown in Figure 2(a), is analyzed in order to obtain the porous medium parameters. The flow parameters are obtained using Forchheimer’s relation: Model-A is simulated at different Reynolds numbers to obtain the Forchheimer curve. Once the parameters are obtained, Model-B, which is a unit cell of the wavy fin with additional upstream and downstream domains as shown in Figure 2(b), is analyzed. Since the air domains (without fins) are attached at the inlet and the exit of the porous domain, the flow area contracts (at the inlet of the porous domain) and expands (at the exit of the porous domain) at the interfaces of these domains. To capture this physics, porous-jump boundary conditions are introduced to match the results of the two models [8]. Boundary conditions for Model-A are set as follows: velocity inlet and pressure outlet boundary conditions are assigned at the fin inlet and outlet, respectively. A wall boundary condition is applied to the upper and lower walls, with a constant wall temperature as the thermal boundary condition (which is close to the real situation, since the temperature variation in the z-direction is small). A periodic boundary condition is used for the right and left sides. For Model-B, additional symmetry conditions are assigned for the upstream and downstream domains. For both simulations, the SIMPLE method is used with a least-squares-based approach for gradient reconstruction. In addition, the standard scheme for pressure and second-order upwind schemes for momentum, turbulent kinetic energy and turbulent dissipation rate are employed. Relaxation factors are set to their default values. For both simulations, all residuals are converged below 1 × 10⁻⁵. One important step is the determination of the appropriate turbulence model.
At this point, benchmark solutions or empirical/experimental results can be employed to determine the appropriate turbulence model. For the fin structure under consideration, the k-ε realizable turbulence model with the standard wall function is used (a detailed justification of this choice can be found elsewhere [32]).
| Definition | Value |
| --- | --- |
| Rotational speed of engine [rpm] | 2200 |
| Inlet temperature [°C] | 86.5 |
| Outlet temperature [°C] | 81 |
| Ambient temperature [°C] | 31 |
| Inlet mass flow rate [kg/s] | 2.41 |
| Air velocity [m/s] | 7 |
| Heat rejection [kW] | 55.8 |
### Table 1.
Catalog data for the four-row radiator.
| Description | Value | Unit |
| --- | --- | --- |
| Domain length | 84 | mm |
| Number of cells | 4,900,713 | |
| Skewness (average) | 0.241 | |
| Turbulence modeling | k-ε realizable | |
| Fin volume | 2.2567 × 10⁻⁷ | m³ |
| Total volume | 4.28131 × 10⁻⁶ | m³ |
| Porosity | 0.9473 | |
| Hydraulic diameter | 0.00241 | m |
| Turbulence intensity | 0.058 | |
| Turbulence length | 0.000169 | m |
| Solution method | SIMPLE | |
| Computation time per simulation | 30 | min |
### Table 2.
Input parameters for unit cell WF simulations.
Afterward, a mesh independence analysis needs to be performed to ensure mesh-independent solutions. It is observed that approximately 4,900,000 cells with 30 layers of boundary mesh on the fins generate a mesh-independent result for this particular fin configuration. Table 2 contains the input parameters for Model-A. In Figure 3, the pressure drop across the fin structure is plotted against velocity, and a second-order curve is fitted to the simulation data. The corresponding inertial and viscous coefficients are determined as 17.3 and 4.01 × 10⁶, respectively. Heat transfer parameters are obtained from the simulations of Model-B. For the simulation of Model-B, the input parameters are defined as 7.0 m/s for the inlet velocity, 304.2 K for the inlet temperature and 359.7 K for the temperature of the fin walls, in accordance with the tabulated catalog data. The average surface heat transfer coefficient and the tuned porous-jump coefficients for the unit cell of a wavy fin are presented in Tables 3 and 4, respectively.
| Interfacial area [m²] | Porous volume [m³] | IAD [1/m] | HTC [W/m²·K] | Tref [K] |
| --- | --- | --- | --- | --- |
| 0.003957696 | 4.28131 × 10⁻⁶ | 810 | 170 | 336 |
### Table 3.
Porous parameters of a WF.
| | Face permeability [1/m²] | Thickness [m] | Inertial coefficient [1/m] |
| --- | --- | --- | --- |
| Inlet | 4.01 × 10⁶ | 0.1 | 3.42 |
| Outlet | 4.01 × 10⁶ | 0.1 | −5.2 |
### Table 4.
Porous jump coefficients for a unit cell of a WF.
### 2.3. Fin simulations with porous modeling
Once the porous coefficients are obtained, the flow field of the air can be modeled using porous modeling. Upstream and downstream domains are attached for this analysis as shown in Figure 4(a). To verify the porous modeling, the results are compared with the physical fin simulations. For the porous fin model, hexa-sweep meshing is used. The mesh of the porous model (Figure 4(b)) consists of 5320 cells. After completing the meshing process, boundary conditions are assigned. Besides the boundary condition configuration of the physical fin case, additional porous-jump boundary conditions are introduced to match the physical fin simulations. All solver settings are taken to be the same as in the physical fin simulations. After the porous medium flow coefficients, porous-jump coefficients and heat transfer parameters are obtained from the simulation of a unit cell of a wavy physical fin, porous medium simulations are carried out with the same input parameters to verify the porous modeling. Figure 5(a) compares the sectional-averaged pressure drop for the physical fin and porous medium simulations. Figure 5(b) shows the same comparison for the sectional mass-flow-averaged temperature drop. As seen from Figure 5, an acceptable consistency is achieved with the porous modeling. One should note that the porous medium model requires only 5320 cells per unit cell, whereas the physical fin requires 4,900,713 cells. If a full-sized radiator were modeled with physical fins, the required cell count would be approximately 20 billion, which is not feasible to analyze even with today’s computing technology; by using the porous modeling approach, the full-sized model can be analyzed within a reasonable computing time and with reasonable accuracy. According to the presented results, the pressure and temperature drop characteristics are coherent between the physical fin and porous medium simulations.
Contour representations of y+, velocity and temperature distributions across the fin are presented in Figure 6. It is seen from the Model-B results that the y+ values are acceptable with respect to the analysis requirements (for the SST turbulence model, the maximum y+ value should be smaller than 1.0) [8], and that the velocity and temperature distributions show physically reasonable characteristics.
The 3-D CAD model of the 4-row, 39-column radiator is prepared using CAD software. After forming the 3-D model, the meshing process proceeds. Fin, upstream, downstream and tube domains are meshed with hexa-type elements, while the upper and lower tanks are meshed with tetra elements. Tubes are meshed with a boundary layer mesh having two layers with a 0.1 mm first-layer height. The generated mesh consists of 53,355,356 cells with an average skewness value of 0.178. Mass flow inlet, pressure outlet, velocity inlet, pressure outlet, upstream wall and downstream wall boundary conditions are assigned for the water inlet, water outlet, air inlet, air outlet and the outer surfaces of the upstream and downstream domains, respectively. The air inlet velocity is taken as 7.0 m/s with an inlet temperature of 304.2 K, while the mass flow rate of water is 2.41 kg/s with an inlet temperature of 359.7 K, in accordance with the catalog data. A second-order upwind scheme is used for momentum, turbulent kinetic energy (TKE) and turbulent dissipation rate (TDR). Relaxation factors are selected as 0.05 for momentum, 0.3 for TKE and TDR and 0.4 for turbulent viscosity in order to obtain an optimized convergence rate and solution time. The heat transfer coefficient between the fins and air is taken as 170 W/m²·K, referring to the previous unit cell simulations. A converged solution is obtained after 472 iterations, when the minimum residual is smaller than 1 × 10⁻⁴. The simulations are performed on a DELL T5600 workstation (Intel® Xeon®, 3.30 GHz, 2 processors, 16 cores, 128 GB RAM). The overall solution time is observed to be approximately 12 h and 40 min.
Cross-sectional velocity and temperature distributions for the air side and the water-side streamlines are presented in Figures 7 and 8, respectively. Temperature gradients are successfully captured in the z- and y-directions as expected. The air-side temperature increases in the flow direction as a result of the heat transfer from the water side, while the water-side temperature decreases in the flow direction. The flow is not distributed uniformly among the tubes, as shown in Figure 8(a). To improve the performance of a radiator, the maldistribution of the flow at the header needs to be reduced [22–24]; therefore, one can clearly state that there is room for improvement in the design at hand. This nonuniform distribution of the flow among the tubes also contributes to the temperature variation in the x-direction. According to the simulation, the average outlet water temperature is found to be 354.3 K and the total temperature drop of water through the radiator is calculated as 5.4 K, which leads to a total heat capacity of:
$$Q = \dot{m}\,C_p\,\Delta T = 2.41 \times 4208 \times 5.36 = 54.4\ \mathrm{kW} \tag{12}$$
The pressure drop for water, which is also an important performance parameter for radiators, is found to be 6.5 kPa. According to the catalog data, the outlet water temperature, the temperature drop across the radiator and the cooling capacity are 354.2 K, 5.5 K and 55.8 kW, respectively. The same parameters are found to be 354.3 K, 5.4 K and 54.4 kW with the proposed CFD analysis. The deviation of the CFD results from the catalog data is within 2.5%, which is quite acceptable for a thermal analysis. Moreover, the proposed model solves the problem within a reasonable computational time. Considering the accuracy of the results and the computational cost, the proposed methodology can be used as a rating and design tool for vehicle radiators.
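The arithmetic of Eq. (12) is easy to verify, using the mass flow rate and specific heat stated in the text:

```python
m_dot = 2.41    # water mass flow rate [kg/s]
cp = 4208.0     # specific heat of water [J/(kg K)]
dT = 5.36       # temperature drop of water across the radiator [K]

Q = m_dot * cp * dT          # heat rejection [W]
Q_kW = round(Q / 1000, 1)    # reported cooling capacity in kW
```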
## 3. Concluding remarks
Although the repetitive fin structures introduce a challenge for the computational modeling of a radiator, their repetitive nature also enables an efficient porous medium modeling. Moreover, again due to the repetitive nature, the porous parameters can be obtained by CFD modeling of a representative unit cell with high resolution. A successful implementation of porous modeling can lead to a dramatic reduction in computational cost and time. The implementation of the computational methodology through commercial software also benefits from its powerful meshing, solving and post-processing capabilities. As demonstrated, CFD analysis of a radiator using the porous medium approach gives reasonable and reliable results. By using CFD analysis, design cost may be decreased dramatically by easing the experimental testing process. The porous parameters of a given fin geometry can be obtained within a couple of hours, which may enable the hydrodynamic and thermal optimization of a radiator.
Optimization of radiators in terms of size and weight is desired to keep up with the constraints of the competitive automotive industry. An efficient computational model enables the optimization process to be performed computationally over a range of different design parameters. Furthermore, more realistic computational models may be developed, such as models including the radiator fan or the under-hood equipment, as the computational power of computers increases. On top of these, the coupling of the flow and temperature fields with structural analysis may lead to far more efficient and robust radiator designs.
## Acknowledgments
Financial support from the Turkish Scientific and Technical Research Council (Project No: 7130643) through YETSAN Radiator Co. Inc. is greatly appreciated.
© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter
Barbaros Çetin, Kadir G. Güler and Mehmet Haluk Aksel (April 27th 2017). Computational Modeling of Vehicle Radiators Using Porous Medium Approach, Heat Exchangers - Design, Experiment and Simulation, S M Sohel Murshed and Manuel Matos Lopes, IntechOpen, DOI: 10.5772/66281. Available from:
We are IntechOpen, the world's leading publisher of Open Access books. Built by scientists, for scientists. Our readership spans scientists, professors, researchers, librarians, and students, as well as business professionals. We share our knowledge and peer-reveiwed research papers with libraries, scientific and engineering societies, and also work with corporate R&D departments and government entities. | 2021-07-26 22:30:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6380259394645691, "perplexity": 1313.7292192935236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152156.49/warc/CC-MAIN-20210726215020-20210727005020-00700.warc.gz"} |
http://math.stackexchange.com/questions/190022/how-to-find-expected-time-to-reach-a-state-in-a-ctmc/196023 | # How to find expected time to reach a state in a CTMC?
Given a simple CTMC with three states 0, 1, 2. There are three transitions: $0 \rightarrow 1$ (with rate $2u$), $1 \rightarrow 2$ (with rate $u$), and $1 \rightarrow 0$ (with rate $v$). So $2$ is an absorbing state. Can someone please help me understand how to find the expected time to reach state 2 from state 0? Using the understanding I get, I want to do the analysis on some more complex Markov chains.
Did you try the standard method? That is, introduce the expected time t(x) to reach state 2 starting from state x, find a linear system which the t(x) solve, deduce every t(x), and in particular t(0), which is the expected time you are after. – Did Sep 2 '12 at 13:52
Since the OP stays silent, here is an answer. As explained in a comment, the standard method is to introduce the expected time $t_x$ to reach state $2$ starting from each state $x$, to find a linear system which the vector $(t_x)_x$ solves, and to deduce $(t_x)_x$, in particular $t_0$, which is the expected time asked for. Here, the holding time in state $0$ is exponential with rate $2u$, and the holding time in state $1$ is exponential with rate $u+v$, independent of the jump, which goes to $0$ with probability $v/(u+v)$ and to $2$ with probability $u/(u+v)$. Hence $$t_0=\frac1{2u}+t_1,\qquad t_1=\frac1{u+v}+\frac{v}{u+v}\,t_0,$$ hence $$t_0=\frac1{2u}+\frac1{u+v}+\frac{v}{u+v}t_0,$$ which yields finally $$t_0=\frac{3u+v}{2u^2}.$$ (Sanity check: for $v=0$ the chain is $0\to1\to2$ and $t_0=\frac1{2u}+\frac1u=\frac{3}{2u}$, as the formula gives.)
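The first-step linear system for this particular chain can also be solved numerically; a minimal sketch (the values of $u$ and $v$ are chosen arbitrarily for illustration):

```python
def expected_absorption_time(u, v):
    """Expected time to reach state 2 from states 0 and 1 for the chain
    0 --(2u)--> 1,  1 --(u)--> 2,  1 --(v)--> 0.

    First-step analysis:
        t0 = 1/(2u) + t1
        t1 = 1/(u+v) + v/(u+v) * t0
    Substituting t1 into the first equation and solving for t0:
    """
    t0 = (1 / (2 * u) + 1 / (u + v)) / (1 - v / (u + v))
    t1 = t0 - 1 / (2 * u)
    return t0, t1

t0, t1 = expected_absorption_time(u=1.0, v=1.0)
```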
You do it in a similar way as you would for finding the expected first passage time for a discrete chain. If $Q = (q_{ij})$ is the transition rate matrix, $P = (p_{ij})$ is the transition matrix of the embedded chain, $\operatorname{Boole}$ is the indicator function and $FPT[i, Q]$ is the first passage time starting at state $i$, then
\begin{align*} \operatorname{Mean}[FPT[i, Q]] &= \operatorname{Expectation}[FPT \mid X[0] = i]\\ &= \operatorname{Expectation}[\operatorname{SojournTime} \mid X[0] = i] + \sum _{j=1}^{\infty} p_{ij} \operatorname{Boole}[j\neq i] \operatorname{Expectation}[FPT \mid X[0] = j]\\ &=-\frac1{q_{ii}} + \sum _{j=1}^{\infty} \frac{-q_{ij}}{q_{ii}} \operatorname{Boole}[j\neq i] \operatorname{Mean}[FPT[j, Q]] \end{align*}
where the sojourn/holding time is, of course, exponentially distributed with parameter $-q_{ii}$. | 2015-08-29 09:54:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9842891097068787, "perplexity": 601.0935314599462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064420.15/warc/CC-MAIN-20150827025424-00341-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://clay6.com/qa/25555/the-alkaline-earth-metal-whose-carbonate-has-no-natural-occurence-is | # The alkaline earth metal whose carbonate has no natural occurence is
$\begin{array}{ll}(a)\;Ba&(b)\;Ca\\(c)\;Mg&(d)\;Be\end{array}$
$BeCO_3$ does not occur in nature.
Hence (d) is the correct answer. | 2018-04-23 23:08:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638616800308228, "perplexity": 2513.3277965305865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946256.50/warc/CC-MAIN-20180423223408-20180424003408-00503.warc.gz"} |
http://wikimechanics.org/sweetness | Sweetness
Any flavour or gustatory perception that could be vaguely described as something like tasting honey is called a sweet sensation. We use words like yummy, sugary, umami, caramelly, savory, candied, spicy, brothy, glazed, meaty, syrupy etc. to describe these flavours. We can make binary descriptions of sweet sensations by comparing them with other sensations, as the great pioneers of chemistry historically did by direct contact with their discoveries. But now testing supersedes tasting, so consider an experiment: Dissolve many similar test particles in water and pass a beam of polarized light through the solution. Check whether the axis of polarization varies. If the angle does not change, then say that the particle is not sweet and write $\delta_{\sf{S}}=0$. If the axis is rotated clockwise, then the particle is a dextrorotary isomer, like most naturally occurring sugars; say that the particle is sugary, and express this mathematically as $\delta_{\sf{S}}=+1$. If the axis is rotated counterclockwise, then the particle is a levorotary isomer, like most naturally occurring amino acids; then call the sensation savory and write $\delta_{\sf{S}}=-1$. The number $\delta_{\sf{S}}$ is called the sweetness. It may, for example, be perceived directly in the flavour difference between spearmint leaves and caraway seeds.
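The decision rule of this polarimetry test can be written down directly. A small sketch — the function name and the convention that positive angles mean clockwise rotation are illustrative choices, not part of the original definition:

```python
def sweetness(rotation_degrees, tolerance=1e-9):
    """Classify a dissolved substance from the observed rotation of the
    polarization axis: +1 for dextrorotary (sugary), -1 for levorotary
    (savory), 0 if the axis is not rotated (not sweet).  Positive angles
    are taken here to mean clockwise rotation."""
    if abs(rotation_degrees) <= tolerance:
        return 0
    return 1 if rotation_degrees > 0 else -1
```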
Summary
| Adjective | Definition | |
| --- | --- | --- |
| Sweetness | $\delta_{\sf{S}} \equiv \begin{cases} +1 &\sf{\text{if a sweet taste sensation is sugary }} \\ \; \; 0 &\sf{\text{if a sensation is not sweet }} \\ -1 &\sf{\text{if a sweet taste sensation is savory }} \end{cases}$ | 2-10 |
page revision: 73, last edited: 31 Jul 2022 02:51 | 2023-01-30 18:51:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7162773013114929, "perplexity": 2309.73141332867}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00492.warc.gz"} |
https://proofwiki.org/wiki/Odd_Number_Theorem/Visual_Demonstration | # Odd Number Theorem/Visual Demonstration
## Theorem
$\displaystyle \sum_{j \mathop = 1}^n \paren {2 j - 1} = n^2$
That is, the sum of the first $n$ odd numbers is the $n$th square number.
## Visual Demonstration
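The classic picture (not reproduced here) stacks successive odd-numbered L-shaped gnomons into an $n \times n$ square. The identity is also easy to check computationally:

```python
def sum_of_first_odds(n):
    """Sum of the first n odd numbers, 1 + 3 + ... + (2n - 1)."""
    return sum(2 * j - 1 for j in range(1, n + 1))

# The identity sum = n^2 holds for every n checked.
ok = all(sum_of_first_odds(n) == n * n for n in range(1, 101))
```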
$\blacksquare$ | 2019-06-16 19:44:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987553358078003, "perplexity": 6735.503345001959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998291.9/warc/CC-MAIN-20190616182800-20190616204800-00511.warc.gz"} |
https://wikieducator.org/Thermodynamics/Thermodynamic_Network/Exercise | # Exercises for the thermodynamic network
Objective
Learn how to derive relations between thermodynamic potentials and get some interesting relationships
## Introduction
In the previous section we looked at the thermodynamic network. We now wish to expand on this. First we will look at a procedure for deriving derivatives of thermodynamic potentials, and then (as an exercise) derive some of these derivatives.
## Derivatives of thermodynamic potentials
To derive derivatives which involve thermodynamic potentials we can use the following procedures:
1. Move the potential to the numerator. Then use the corresponding form of the fundamental equation to eliminate the potential.
2. Move the entropy to the numerator. If a Maxwell equation can be used to eliminate S then use it. If not divide both numerator and denominator by $\partial T$. The numerator can then be written in terms of heat capacities.
3. Move the volume to the numerator. The remaining derivatives can be expressed in terms of α and κ
4. Eliminate CV by the equation $C_{V}=C_{P}-\frac{TV\alpha^{2}}{\kappa_{T}}$
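As a quick sanity check of the relation in step 4, a sketch using the ideal-gas values $\alpha = 1/T$ and $\kappa_T = 1/P$ (for which $C_P - C_V$ should reduce to $nR$):

```python
R = 8.314  # gas constant [J/(mol K)]

# Ideal-gas state: choose T and P, get V from PV = nRT.
T, P, n = 300.0, 101325.0, 1.0
V = n * R * T / P

alpha = 1.0 / T      # isobaric expansivity of an ideal gas
kappa_T = 1.0 / P    # isothermal compressibility of an ideal gas

# Step 4 states C_P - C_V = T * V * alpha**2 / kappa_T; for an ideal gas
# this equals P*V/T = n*R.
cp_minus_cv = T * V * alpha**2 / kappa_T
```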
## Exercises
In order to better understand the thermodynamic network it is useful to derive some of the derivatives of thermodynamic potentials.
Instructions: For each of the following derivatives involving thermodynamic potentials, simplify them in terms of P, V, T, S, α, κ, and CP
Hints:
1. You will need to use the fundamental equations and Maxwell equations. They can be found in this table. Also review the derivatives from the previous section.
2. Do not forget the rules for manipulating derivatives.
$\left (\frac{\partial U}{\partial T}\right )_P$
$\left (\frac{\partial U}{\partial P}\right )_T$
$\left (\frac{\partial H}{\partial T}\right )_P$
$\left (\frac{\partial H}{\partial P}\right )_T$
$\left (\frac{\partial H}{\partial V}\right )_T$
$\left (\frac{\partial A}{\partial T}\right )_P$
$\left (\frac{\partial A}{\partial P}\right )_T$
$\left (\frac{\partial A}{\partial P}\right )_S$
$\left (\frac{\partial G}{\partial T}\right )_P$
$\left (\frac{\partial G}{\partial P}\right )_T$
$\left (\frac{\partial G}{\partial P}\right )_S$
$\left (\frac{\partial G}{\partial T}\right )_V$
$\kappa_S=-\frac{1}{V}\left (\frac{\partial V}{\partial P}\right )_S$
$\mu_S=\left (\frac{\partial T}{\partial P}\right )_S$
$\mu_H=\left (\frac{\partial T}{\partial P}\right )_H$
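As a worked illustration of the procedure — a sketch of one possible route through steps 1–3 for one of the derivatives listed above:

```latex
% Step 1: move H to the numerator; from dH = T dS + V dP,
\left(\frac{\partial H}{\partial P}\right)_T
  = T\left(\frac{\partial S}{\partial P}\right)_T + V
% Step 2: eliminate S with the Maxwell relation
% (\partial S/\partial P)_T = -(\partial V/\partial T)_P,
\left(\frac{\partial H}{\partial P}\right)_T
  = -T\left(\frac{\partial V}{\partial T}\right)_P + V
% Step 3: move V to the numerator and use
% \alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P,
\left(\frac{\partial H}{\partial P}\right)_T
  = V\left(1 - T\alpha\right)
```

Note the result is expressed only in terms of the measurable quantities V, T and α, as required.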
## Notes
1. As an example of the use of thermodynamic potentials, say we want to find the change in internal energy, i. e. ΔU. Remember we cannot directly measure it.
Using the total differential for U gives $dU=\left(\frac{\partial U}{\partial P}\right)_{T}dP+\left(\frac{\partial U}{\partial T}\right)_{P}dT$
We can now substitute the results of the exercise for the two partial derivatives and then use PVT and heat capacity data to calculate ΔU.
2. $\kappa_S$ is called the isentropic compressibility. An isentropic process is a reversible adiabatic process.
3. $\mu_H$ is called the Joule-Thompson coefficient. The H subscript may seem odd, but a constant enthalpy process occurs when a gas goes through a constriction such as a partially opened valve or a porous plug. One type of this is an expansion valve, which is found in all refrigerators and household air conditioners. | 2022-05-16 08:36:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8700298070907593, "perplexity": 502.5317406305924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510097.3/warc/CC-MAIN-20220516073101-20220516103101-00158.warc.gz"} |
https://climpred.readthedocs.io/en/stable/prediction-ensemble-object.html | # PredictionEnsemble Objects¶
One of the major features of climpred is our objects that are based upon the PredictionEnsemble class. We supply users with a HindcastEnsemble and PerfectModelEnsemble object. We encourage users to take advantage of these high-level objects, which wrap all of our core functions.
Briefly, we consider a HindcastEnsemble to be one that is initialized from some observational-like product (e.g., assimilated data, reanalysis products, or a model reconstruction). Thus, this object is built around comparing the initialized ensemble to various observational products. In contrast, a PerfectModelEnsemble is one that is initialized off of a model control simulation. These forecasting systems are not meant to be compared directly to real-world observations. Instead, they provide a contained model environment with which to theoretically study the limits of predictability. You can read more about the terminology used in climpred here.
Let’s create a demo object to explore some of the functionality and why they are much smoother to use than direct function calls.
[1]:
%matplotlib inline
import matplotlib.pyplot as plt
import xarray as xr
from climpred import HindcastEnsemble, PerfectModelEnsemble
import climpred
xr.set_options(display_style='text')
[1]:
<xarray.core.options.set_options at 0x7f9590d832e8>
We can now pull in some sample data that is packaged with climpred.
## HindcastEnsemble¶
We’ll start out with a HindcastEnsemble demo, followed by a PerfectModelEnsemble case.
[2]:
hind = climpred.tutorial.load_dataset('CESM-DP-SST') # CESM-DPLE hindcast ensemble output.
We need to add a “units” attribute to the hindcast ensemble so that climpred knows how to interpret the lead units.
[3]:
hind["lead"].attrs["units"] = "years"
Now we instantiate the HindcastEnsemble object and append all of our products to it.
[4]:
hindcast = HindcastEnsemble(hind) # Instantiate object by passing in our initialized ensemble.
print(hindcast)
<climpred.HindcastEnsemble>
Initialized Ensemble:
SST (init, lead, member) float64 ...
Observations:
None
Uninitialized:
None
/Users/aaron.spring/Coding/climpred/climpred/utils.py:141: UserWarning: Assuming annual resolution due to numeric inits. Change init to a datetime if it is another resolution.
"Assuming annual resolution due to numeric inits. "
Now we just use the add_ methods to attach other objects. See the API here. Note that we strive to make our conventions follow those of xarray’s. For example, we don’t allow inplace operations. One has to run hindcast = hindcast.add_observations(...) to modify the object upon later calls rather than just hindcast.add_observations(...).
[5]:
obs = climpred.tutorial.load_dataset('ERSST')  # observational SST product (assumed: the packaged ERSST tutorial dataset)
hindcast = hindcast.add_observations(obs)
print(hindcast)
<climpred.HindcastEnsemble>
Initialized Ensemble:
SST (init, lead, member) float64 ...
Observations:
SST (time) float32 ...
Uninitialized:
None
You can apply most standard xarray functions directly to our objects! climpred will loop through the objects and apply the function to all applicable xarray.Datasets within the object. If you reference a dimension that doesn’t exist for the given xarray.Dataset, it will ignore it. This is useful, since the initialized ensemble is expected to have dimension init, while other products have dimension time (see more here).
Let's start by taking the ensemble mean of the initialized ensemble so our metric computations don't have to spend extra time on that later. I'm just going to use deterministic metrics here, so we don't need the individual ensemble members. Note that above our initialized ensemble had a member dimension, and now it is reduced.
[6]:
hindcast = hindcast.mean('member')
hindcast
[6]:
<climpred.HindcastEnsemble>
Initialized Ensemble:
SST (init, lead) float64 -0.2121 -0.1637 -0.1206 ... 0.7286 0.7532
Observations:
SST (time) float32 ...
Uninitialized:
None
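The member-mean above is an ordinary average along the member axis; a dependency-free sketch of the reduction for data laid out as data[init][lead][member] (toy numbers, not the CESM output):

```python
# Toy stand-in for hindcast.mean('member'): average out the innermost
# (member) axis, leaving an (init, lead) array. Illustrative only.
def member_mean(data):
    return [
        [sum(members) / len(members) for members in lead_row]
        for lead_row in data
    ]

data = [  # 2 inits x 2 leads x 3 members
    [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
    [[0.0, 0.0, 3.0], [2.0, 2.0, 2.0]],
]
print(member_mean(data))  # [[2.0, 5.0], [1.0, 2.0]]
```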
### Arithmetic Operations with PredictionEnsemble Objects¶
PredictionEnsemble objects support arithmetic operations, i.e., +, -, /, *. You can perform these operations on a HindcastEnsemble or PerfectModelEnsemble by pairing the operation with an int, float, np.ndarray, xr.DataArray, xr.Dataset, or with another PredictionEnsemble object.
An obvious application would be to area-weight an initialized ensemble and all of its associated datasets (like verification products) simultaneously.
[7]:
dple3d = climpred.tutorial.load_dataset('CESM-DP-SST-3D')
area = dple3d['TAREA']
Here, we load in a subset of CESM-DPLE over the eastern tropical Pacific. The file includes TAREA, which describes the area of each cell on the curvilinear mesh.
[8]:
hindcast3d = HindcastEnsemble(dple3d)
hindcast3d
/Users/aaron.spring/Coding/climpred/climpred/utils.py:141: UserWarning: Assuming annual resolution due to numeric inits. Change init to a datetime if it is another resolution.
"Assuming annual resolution due to numeric inits. "
[8]:
<climpred.HindcastEnsemble>
Initialized Ensemble:
SST (init, lead, nlat, nlon) float32 ...
Observations:
SST (time, nlat, nlon) float32 ...
Uninitialized:
None
Now we can perform an area-weighting operation with the HindcastEnsemble object and the area DataArray. climpred cycles through all of the datasets appended to the HindcastEnsemble and applies the operation to each. You can see below that the dimensionality is reduced to a single time series without spatial information.
[9]:
hindcast3d_aw = (hindcast3d*area).sum(['nlat', 'nlon']) / area.sum(['nlat', 'nlon'])
hindcast3d_aw
[9]:
<climpred.HindcastEnsemble>
Initialized Ensemble:
SST (init, lead) float64 -0.3539 0.1947 0.3623 ... 0.662 1.016 1.249
Observations:
SST (time) float64 24.76 24.48 23.73 24.68 ... 24.78 24.21 24.92 25.95
Uninitialized:
None
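The weighting above boils down to sum(SST * area) / sum(area) over the flattened grid cells; a dependency-free sketch with made-up numbers:

```python
# Dependency-free sketch of the area weighting above:
# weighted mean = sum(SST * area) / sum(area) over the flattened grid cells.
def area_weighted_mean(values, areas):
    total_area = sum(areas)
    return sum(v * a for v, a in zip(values, areas)) / total_area

sst  = [24.0, 26.0, 25.0]   # made-up cell values (stand-ins for nlat x nlon SST)
area = [1.0, 1.0, 2.0]      # larger cells count more, as with TAREA
print(area_weighted_mean(sst, area))  # 25.0
```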
NOTE: Be careful with the arithmetic operations. Some of the behavior can be unexpected in combination with the fact that generic xarray methods can be applied to climpred objects. For instance, one might be interested in removing a climatology from the verification data to move it to anomaly space. It’s safest to do anything like climatology removal before constructing climpred objects.
Naturally, one would remove a climatology over some time slice, as we do below. However, note that in the below example, the initialized ensemble returns all zeroes for SST. The reasoning here is that when hindcast.sel(time=...) is called, climpred only applies that slicing to datasets that include the time dimension. Thus, it skips the initialized ensemble and returns the original dataset untouched. This feature is advantageous for cases like hindcast.mean('member'), where the ensemble mean is taken over whichever datasets have ensemble members. So when it performs hindcast - hindcast.sel(time=...), it subtracts the identical initialized ensemble from itself, returning all zeroes. We are hoping to implement a fix to this issue in the future.
In short, any sort of bias correction or drift correction should be done prior to instantiating a PredictionEnsemble object. Detrending or removing a mean state can also be done after instantiation, but beware of the unintuitive behaviour: removing a time-based anomaly from a PredictionEnsemble does not modify the initialized dataset and therefore returns all zeros for it.
[10]:
hindcast3d - hindcast3d.sel(time=slice('1960', '2014')).mean('time')
[10]:
<climpred.HindcastEnsemble>
Initialized Ensemble:
SST (init, lead, nlat, nlon) float32 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0
Observations:
SST (time, nlat, nlon) float32 0.059625626 0.057357788 ... 1.4911919
Uninitialized:
None
To avoid these zeros, operate on all PredictionEnsemble datasets at once: handle the initialized data (dimensions lead and init) and the observations/control (dimension time) in the same expression.
[11]:
hindcast - hindcast.sel(time=slice('1960', '2014')).mean('time').sel(init=slice('1960', '2014')).mean('init')
[11]:
<climpred.HindcastEnsemble>
Initialized Ensemble:
SST (init, lead) float64 -0.2114 -0.1772 -0.1409 ... 0.639 0.6524
Observations:
SST (time) float32 -0.3738861 -0.32481194 ... 0.37358856 0.47778702
Uninitialized:
None
Note: Thinking in initialization space is not very intuitive, and such combined init and time operations can lead to unanticipated changes in the PredictionEnsemble. The safest way is to subtract means before instantiating the PredictionEnsemble, or to use HindcastEnsemble.remove_bias().
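The skipping behaviour behind those zeros can be mimicked in a few lines of plain Python. This is a toy model of the assumed mechanics, not climpred code: an operation that references a dimension is applied only to datasets that actually carry that dimension.

```python
# Toy model of the dimension-filtering described above (assumed, simplified):
# an operation referencing dim 'time' is applied only to datasets that
# actually have a 'time' dimension; the init/lead dataset is left untouched,
# so the later subtraction removes it from an identical copy of itself.
datasets = {
    "initialized": {"dims": ("init", "lead"), "values": [0.5, 0.7, 0.9]},
    "observations": {"dims": ("time",), "values": [24.0, 25.0, 26.0]},
}

def subtract_time_mean(all_datasets):
    out = {}
    for name, ds in all_datasets.items():
        if "time" in ds["dims"]:
            m = sum(ds["values"]) / len(ds["values"])
            out[name] = [v - m for v in ds["values"]]
        else:
            # 'time' selection skipped -> dataset minus identical dataset
            out[name] = [v - v for v in ds["values"]]
    return out

print(subtract_time_mean(datasets))
# {'initialized': [0.0, 0.0, 0.0], 'observations': [-1.0, 0.0, 1.0]}
```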
### PredictionEnsemble.plot()¶
PredictionEnsemble objects also have a default .plot() call showing all associated datasets.
[12]:
hindcast.plot()
[12]:
<AxesSubplot:xlabel='time', ylabel='SST'>
We have a huge bias because the initialized data is already converted to an anomaly, but the uninitialized historical run and the observations are not.
[13]:
hindcast.remove_bias(alignment='same_verif').plot()
[13]:
<AxesSubplot:xlabel='time', ylabel='SST'>
We still have a trend in all of our products, so we could detrend them as well.
### Detrend¶
Here we use a kitchen sink package called esmtools. It has a few vectorized stats functions that are dask-friendly.
We can leverage xarray’s .map() function to apply/map a function to all variables in our datasets.
[14]:
from esmtools.stats import rm_poly
hindcast_detrended = hindcast.map(rm_poly, order=2, dim='init').map(rm_poly, order=2, dim='time')
hindcast_detrended.plot()
[14]:
<AxesSubplot:xlabel='time', ylabel='SST'>
And it looks like everything got detrended by a quadratic fit! That wasn’t too hard.
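For intuition, rm_poly(order=2, ...) fits a quadratic by least squares and subtracts it. A dependency-free sketch of the same idea (esmtools/numpy do this vectorised over whole Datasets):

```python
# Dependency-free sketch of what rm_poly(order=2, ...) does: least-squares fit
# a quadratic c0 + c1*x + c2*x^2 via the normal equations, then subtract it.
def rm_poly2(y):
    n = len(y)
    x = list(range(n))
    s = [sum(xi ** k for xi in x) for k in range(5)]            # power sums
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]                                    # Gram matrix
    b = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    for i in range(3):                                          # forward elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [ajk - f * aik for ajk, aik in zip(A[j], A[i])]
            b[j] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                         # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return [yi - (c[0] + c[1] * xi + c[2] * xi * xi) for xi, yi in zip(x, y)]

# An exact quadratic detrends to (numerically) zero:
residuals = rm_poly2([3 + 2 * t + 0.5 * t * t for t in range(8)])
print(max(abs(r) for r in residuals) < 1e-8)  # True
```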
### Verify¶
Now that we’ve done our pre-processing, let’s quickly compute some metrics. Check the metrics page here for all the keywords you can use. The API is currently pretty simple for the HindcastEnsemble. You can essentially compute standard skill metrics and a reference persistence forecast.
[15]:
hindcast_detrended.verify(metric='mse',
comparison='e2o',
dim='init',
alignment='same_verif',
reference='persistence')
[15]:
<xarray.Dataset>
Coordinates:
* lead (lead) int32 1 2 3 4 5 6 7 8 9 10
* skill (skill) <U11 'initialized' 'persistence'
Data variables:
SST (skill, lead) float64 0.003274 0.004149 ... 0.01109 0.008786
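For intuition, metric='mse' is just the mean squared error over the verification sample, and the persistence reference forecasts each target with the previous observed value. A dependency-free sketch with made-up numbers (not the CESM data):

```python
# Dependency-free sketch of the skills verified above: 'initialized' skill is
# MSE(forecast, obs); the 'persistence' reference forecasts each target with
# the previous observed value. Numbers are made up, not the CESM data.
def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

obs   = [0.0, 0.1, -0.2, 0.3]
model = [0.05, 0.05, -0.15, 0.25]   # hypothetical initialized forecast
pers  = [0.0, 0.0, 0.1, -0.2]       # previous obs carried forward
print(round(mse(model, obs), 6), round(mse(pers, obs), 6))  # 0.0025 0.0875
```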
Here we leverage xarray's plotting method to plot the Mean Absolute Error (MAE) and the Anomaly Correlation Coefficient (ACC) against the ERSST observations, along with the equivalent persistence-forecast skill for each metric.
[16]:
import numpy as np
import matplotlib.pyplot as plt  # assumed imported earlier in the notebook; added so this cell is self-contained
plt.style.use('ggplot')
plt.style.use('seaborn-talk')
color = '#7570b3'
f, axs = plt.subplots(nrows=2, figsize=(8, 8), sharex=True)
for ax, metric in zip(axs.ravel(), ['mae', 'acc']):
handles = []
result = hindcast_detrended.verify(metric=metric, comparison='e2o', dim='init',
alignment='same_verif',reference='persistence')
p1, = result.sel(skill='initialized').SST.plot(ax=ax,
marker='o',
color=color,
label='initialized forecast model',
linewidth=2)
p2, = result.sel(skill='persistence').SST.plot(ax=ax,
color=color,
linestyle='--',
label='persistence')
handles.append(p1)
handles.append(p2)
ax.set_title(metric.upper())
axs[0].set_ylabel('Mean Error [degC]')
axs[1].set_ylabel('Correlation Coefficient')
axs[0].set_xlabel('')
axs[1].set_xticks(np.arange(10)+1)
# matplotlib/xarray returning weirdness for the legend handles.
handles = [i.get_label() for i in handles]
# a little trick to put the legend on the outside.
plt.suptitle('CESM Decadal Prediction Large Ensemble Global SSTs', fontsize=16)
plt.show()
## PerfectModelEnsemble¶
We’ll now play around a bit with the PerfectModelEnsemble object, using sample data from the MPI perfect model configuration.
[17]:
ds = climpred.tutorial.load_dataset('MPI-PM-DP-1D')         # initialized ensemble from MPI
control = climpred.tutorial.load_dataset('MPI-control-1D')  # base control run that initialized it
print(ds)
<xarray.Dataset>
Dimensions: (area: 3, init: 12, lead: 20, member: 10, period: 5)
Coordinates:
* lead (lead) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
* period (period) object 'DJF' 'JJA' 'MAM' 'SON' 'ym'
* area (area) object 'global' 'North_Atlantic' 'North_Atlantic_SPG'
* init (init) int64 3014 3023 3045 3061 3124 ... 3175 3178 3228 3237 3257
* member (member) int64 0 1 2 3 4 5 6 7 8 9
Data variables:
tos (period, lead, area, init, member) float32 ...
sos (period, lead, area, init, member) float32 ...
AMO (period, lead, area, init, member) float32 ...
[18]:
pm = climpred.PerfectModelEnsemble(ds)
print(pm)
<climpred.PerfectModelEnsemble>
Initialized Ensemble:
tos (period, lead, area, init, member) float32 ...
sos (period, lead, area, init, member) float32 ...
AMO (period, lead, area, init, member) float32 ...
Control:
tos (period, time, area) float32 ...
sos (period, time, area) float32 ...
AMO (period, time, area) float32 ...
Uninitialized:
None
/Users/aaron.spring/Coding/climpred/climpred/utils.py:141: UserWarning: Assuming annual resolution due to numeric inits. Change init to a datetime if it is another resolution.
"Assuming annual resolution due to numeric inits. "
Our objects are carrying sea surface temperature (tos), sea surface salinity (sos), and the Atlantic Multidecadal Oscillation index (AMO). Say we just want to look at skill metrics for temperature and salinity over the North Atlantic in JJA. We can just call a few easy xarray commands to filter down our object.
[19]:
pm = pm[['tos', 'sos']].sel(area='North_Atlantic', period='JJA')
Now we can easily compute a host of metrics. Here I just show a number of deterministic skill metrics comparing all individual members to the initialized ensemble mean. See comparisons for more information on the comparison keyword.
[20]:
METRICS = ['mse', 'rmse', 'mae', 'acc',
'nmse', 'nrmse', 'nmae', 'msss']
result = []
for metric in METRICS:
result.append(pm.verify(metric=metric, comparison='m2e', dim=['init','member']))
result = xr.concat(result, 'metric')
result['metric'] = METRICS
# Leverage the xarray plotting wrapper to plot all results at once.
result.to_array().plot(col='metric',
hue='variable',
col_wrap=4,
sharey=False,
sharex=True)
[20]:
<xarray.plot.facetgrid.FacetGrid at 0x7f95979b48d0>
It is useful to compare the initialized ensemble to an uninitialized run. See terminology for a description on “uninitialized” simulations. This gives us information about how initializations lead to enhanced predictability over knowledge of external forcing, whereas a comparison to persistence just tells us how well a dynamical forecast simulation does in comparison to a naive method. We can use the generate_uninitialized() method to bootstrap the control run and create a pseudo-ensemble that approximates what an uninitialized ensemble would look like.
[21]:
pm = pm.generate_uninitialized()
pm
[21]:
<climpred.PerfectModelEnsemble>
Initialized Ensemble:
tos (lead, init, member) float32 13.464135 13.641711 ... 13.568891
sos (lead, init, member) float32 33.183903 33.146976 ... 33.25843
Control:
tos (time) float32 13.499312 13.742612 ... 13.076672 13.465583
sos (time) float32 33.232624 33.188156 33.201694 ... 33.16359 33.18352
Uninitialized:
tos (lead, init, member) float32 13.446274 13.426196 ... 13.7393265
sos (lead, init, member) float32 33.193344 33.200825 ... 33.359787
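The mechanics of generate_uninitialized() can be sketched as a block bootstrap of the control run. The implementation below is an illustrative assumption, not climpred internals:

```python
import random

# Sketch of the assumed mechanics behind generate_uninitialized(): draw random
# start indices from the control run and stack the following n_lead steps as
# if they were forecasts, yielding a pseudo-ensemble with the initialized
# shape. Names and numbers here are illustrative only.
def pseudo_ensemble(control, n_members, n_lead, rng):
    members = []
    for _ in range(n_members):
        start = rng.randrange(len(control) - n_lead)
        members.append(control[start:start + n_lead])
    return members

rng = random.Random(0)
control = [13.0 + 0.01 * t for t in range(300)]  # toy control time series
ens = pseudo_ensemble(control, n_members=10, n_lead=20, rng=rng)
print(len(ens), len(ens[0]))  # 10 20
```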
[22]:
pm = pm[['sos']] # Just assess for salinity.
Here we plot the ACC for the initialized, uninitialized, and persistence forecasts for North Atlantic sea surface salinity in JJA. We add circles to the lines where the correlations are statistically significant.
[23]:
def plot_result(acc, pval, skill, color, label, linestyle='-'):
    """Helper function for cleaner plotting code."""
    # pval is accepted so significance markers could be added; it is unused here.
    acc.sel(skill=skill)['sos'].plot(color=color, linestyle=linestyle, label=label)
acc_result = pm.verify(metric='acc', comparison='m2e', dim=['init', 'member'], reference=['persistence', 'uninitialized'])
pval_result = pm.verify(metric='p_pval', comparison='m2e', dim=['init', 'member'], reference=['persistence', 'uninitialized'])
# ACC for initialized ensemble
plot_result(acc_result, pval_result, 'initialized', 'red', 'initialized')
plot_result(acc_result, pval_result, 'uninitialized', 'gray', 'uninitialized')
plot_result(acc_result, pval_result, 'persistence', 'black', 'persistence', linestyle='--')
plt.legend()
[23]:
<matplotlib.legend.Legend at 0x7f95968f81d0>
[ ]: | 2021-02-27 22:30:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3857050836086273, "perplexity": 9494.635461895168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359497.20/warc/CC-MAIN-20210227204637-20210227234637-00421.warc.gz"} |
https://aakashdigitalsrv1.meritnation.com/ask-answer/question/solve-this-question-q-21-find-the-value-of-in0-2-such-that-t/matrices/12528679 | # Solve this question: Q.21. Find the value of $\theta$ such that the matrix $\left[\begin{array}{ccc} 2\sin\theta - 1 & \sin\theta & \cos\theta \\ \sin(\theta + \pi) & 2\cos\theta - \sqrt{3} & \tan\theta \\ \cos(\theta - \pi) & \tan(\pi - \theta) & 0 \end{array}\right]$ is a skew-symmetric matrix.
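For reference, a sketch of the solution (the expert's reply on the original page appears to have been an image and is not preserved here): a matrix $A$ is skew-symmetric when $A^T = -A$, which forces every diagonal entry to be zero, so

```latex
2\sin\theta - 1 = 0 \;\Rightarrow\; \sin\theta = \tfrac{1}{2}, \qquad
2\cos\theta - \sqrt{3} = 0 \;\Rightarrow\; \cos\theta = \tfrac{\sqrt{3}}{2},
```

and both hold at $\theta = \pi/6$. The off-diagonal pairs are then automatically consistent, since $\sin(\theta + \pi) = -\sin\theta$, $\cos(\theta - \pi) = -\cos\theta$, and $\tan(\pi - \theta) = -\tan\theta$.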
What are you looking for? | 2021-10-21 06:20:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17729732394218445, "perplexity": 892.8040258400619}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585381.88/warc/CC-MAIN-20211021040342-20211021070342-00600.warc.gz"} |
https://suredbits.com/lightning-101-for-exchanges-security-part-1%E2%80%8A-%E2%80%8Abackground/ | This is our fourth post in our Lightning 101 For Exchanges series. We will investigate in detail the security schema of the Lightning Network. This post will be more technical than our previous ones and is intended for software engineers and security professionals.
Lightning for Exchanges Series
The Lightning Network has a new way for funds to be lost when compared to an exchange’s traditional hot wallet. With current blockchains, security engineers for exchanges only need to be concerned with private keys associated with their hot and cold storage wallets. If an attacker cannot access these keys, they cannot steal the exchange’s funds. In this post we will address the security concerns revolving around managing revoked states on the Lightning Network which, if managed incorrectly, can result in loss of funds.
Primer on Commitment Transactions
You can think of commitment transactions as the fund “states” in your Lightning channel. Each of these “states” consists of a reference to the on-chain funding, and a set of outputs that dictate who has what. Both participants in a channel have a copy of this “state”. If we have two peers on the Lightning network, Alice and Bob, this is what a commitment transaction means to each of them.
• Both Alice and Bob have an equivalent list of HTLC (Hashed Time-Locked Contract) outputs for lightning payments that are underway but not yet completed. These are conditional outputs that basically say that the receiver can spend the money should the payment succeed; otherwise the sender can claim this money after a fixed delay.
• Alice has an output (to_remote) that allows Bob to spend his money immediately.
• Alice also has an output (to_local) that lets her access her money after she waits a certain delay (to_self_delay).
• Bob has an output (to_remote) that allows Alice to spend her money immediately.
• Bob also has an output (to_local) that lets him access his money after he waits a certain delay (to_self_delay).
It is important to recognize that commitment transactions are symmetrical. This means that Alice & Bob’s commitment transactions are mirror images of each other. This means both Alice & Bob will be subject to the same scheme of revoking old states.
Note: Both Alice & Bob would rather not have to wait for to_self_delay blocks to occur to be able to access their money. This is one incentive for both of them to cooperate in the lightning protocol.
It’s important to remember that a commitment transaction is in fact a fully signed bitcoin transaction that could be broadcasted to the blockchain at any time. They are the mechanism that Lightning uses to make payments “trustless” — because every lightning network transaction results in a fully valid bitcoin transaction that can be put on-chain.
Primer On “Lightning Penalty” Scheme
Payments on the Lightning Network must always result in two outcomes:
1. Both parties must have updated (fully signed) commitment transactions that reflect the updated balances
2. Both parties must revoke their previous commitment transactions. Meaning that if they were to broadcast an old state, their counterparty must be able to penalize them, here this means take all their money in the channel; this is called the Lightning Penalty.
The first condition facilitates the transfer of value between the two parties while the second condition ensures that this transfer of value cannot be undone by either party. This second condition also introduces new complexity that doesn’t exist on a blockchain: It is the responsibility of the Lightning user to punish their counterparty if they attempt to cheat and so the user must have measures in place to secure their funds.
This is what introduces a new security concern related to losing funds, namely, revoked states.
Commitment Transaction Scripts
The outputs of the commitment transactions enforce the revocation of old states by having a condition in their scripts that allows money to be spent immediately by a cheating node’s peer if the state has been revoked. Let’s take a dive into the actual scripts to see how.
There are two main outputs on a commitment transaction:
1. to_local — this is the output that sends money back to you
2. to_remote — this is the output that sends money back to your lightning peer
OP_IF
# Penalty transaction
<revocationpubkey>
OP_ELSE
to_self_delay
OP_CSV
OP_DROP
<local_delayedpubkey>
OP_ENDIF
OP_CHECKSIG
This script has two control flow branches that say:
1. If someone (namely your lightning peer) can provide a valid digital signature that corresponds to revocationpubkey they can spend that money immediately.
2. After waiting to_self_delay blocks, you can spend this money by providing a digital signature for local_delayedpubkey.
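The control flow of those two branches can be modeled in a few lines. This is a toy evaluation of the spending conditions, not real Bitcoin Script execution:

```python
# Toy evaluation of the two to_local branches above -- the control flow only,
# not real Bitcoin Script execution. Branch 1 is the immediate penalty path;
# branch 2 requires waiting to_self_delay blocks (the OP_CSV check).
def can_spend_to_local(sig_key, blocks_since_confirm, to_self_delay):
    if sig_key == "revocationpubkey":
        return True   # penalty path: spendable immediately
    if sig_key == "local_delayedpubkey":
        return blocks_since_confirm >= to_self_delay
    return False      # any other key fails OP_CHECKSIG

print(can_spend_to_local("revocationpubkey", 0, 144))       # True
print(can_spend_to_local("local_delayedpubkey", 10, 144))   # False
print(can_spend_to_local("local_delayedpubkey", 144, 144))  # True
```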
The revocationpubkey is extremely important here. If you, or your peer, have the private key that corresponds to revocationpubkey, you can create a digital signature and validly spend the to_local output immediately (without the to_self_delay). Remember, commitment transactions are symmetrical. This means if you are an exchange, your peer may be able to generate revocationprivkey and steal money from you if you broadcast a revoked state to the blockchain!
How is revocationprivkey Generated?
As BOLT3 states, the revocation private key can be created if you have both of these keys:
The first key, revocation_basepoint_secret, is the private key that corresponds to the revocation_basepoint public key in the open_channel or accept_channel messages in the peer-to-peer protocol. You generate this secret locally and store it, while you send the revocation_basepoint public key to your peer (your peer must never learn this secret).
The second key is the per_commitment_secret. It is generated by your peer. Revoking a commitment transaction is simply the act of sharing per_commitment_secret, since with this secret a counterparty will have both secrets and can generate revocationprivkey.
A state is revoked (and a new state acknowledged) with the revoke_and_ack message in the Lightning p2p protocol which is sent to you when your peer has verified that all HTLC signatures passed in the commitment_signed message are valid. This message completes the state transition from the old commitment transaction to a brand new one.
Once someone has revocation_basepoint_secret and per_commitment_secret, they can generate revocationprivkey as specified in BOLT3:
revocationprivkey =
revocation_basepoint_secret *
SHA256(revocation_basepoint || per_commitment_point)
+
per_commitment_secret *
SHA256(per_commitment_point || revocation_basepoint)
This allows them to provide a valid digital signature that corresponds to revocationpubkey in a peer’s to_local output.
The per_commitment_secret poisons the old state by allowing you to claim money from your peer's to_local output if they broadcast the revoked commitment transaction to the blockchain, since you can now generate the revocationprivkey.
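The scalar arithmetic of the BOLT3 formula can be sketched with nothing but SHA256 and modular arithmetic. The two 33-byte "points" below are placeholders (real ones are compressed secp256k1 public keys derived from the secrets); only the hashing and mod-n math is shown:

```python
import hashlib

# Sketch of the BOLT3 scalar arithmetic above, using only SHA256 and mod-n
# math. B and P are placeholder 33-byte strings standing in for the real
# compressed secp256k1 points revocation_basepoint and per_commitment_point.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

def h(a: bytes, b: bytes) -> int:
    return int.from_bytes(hashlib.sha256(a + b).digest(), "big")

def revocation_privkey(basepoint_secret: int, per_commitment_secret: int,
                       revocation_basepoint: bytes, per_commitment_point: bytes) -> int:
    return (basepoint_secret * h(revocation_basepoint, per_commitment_point)
            + per_commitment_secret * h(per_commitment_point, revocation_basepoint)) % N

B = b"\x02" + b"\x11" * 32   # placeholder revocation_basepoint
P = b"\x03" + b"\x22" * 32   # placeholder per_commitment_point
key = revocation_privkey(0x1234, 0x5678, B, P)
print(f"revocationprivkey = {key:#066x}")
```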
Remember, commitment transactions are symmetrical. This exact same process applies to your exchange and its peers on the lightning network. An exchange needs to be wary of:
1. Accidentally broadcasting a revoked state
2. A peer broadcasting a revoked state
If a peer broadcasts a revoked state, the exchange must claim the peers money by constructing a transaction that is signed by revocationprivkey.
We’ve now covered how the Lightning Penalty is enforced in commitment transactions:
1. Each payment creates an updated and fully signed commitment transaction.
2. The last commitment transaction is revoked by revealing per_commitment_secret.
So how does an exchange protect itself against this new potential way of losing funds? We will discuss techniques exchanges can use to mitigate these risks and challenges in a future post.
If you're interested in chatting more about Lightning Network technology or crypto tech in general, you can find us on Twitter @Suredbits or join our Suredbits Slack community.
If you are an exchange or interested in what Lightning can do for you and your business, contact us at [email protected].
You can also reach us on the Lightning Network: 038[email protected]ln.suredbits.com. | 2021-10-19 12:04:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24020862579345703, "perplexity": 2057.518920928458}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00086.warc.gz"} |
http://hotmengays.ml/how-to-write-a-notation-464040.html | ## How to write a notation
How to write a notation
### Free Music Writing, Music Notation Software - Finale Notepad
11.11.2017 · Set-Builder Notation. How to describe a set by saying what properties its members When we have a simple set like the integers from 2 to 6 we can write:
### How Can I write this notation? - Mathematics Stack Exchange
11.01.2011 · To change a number from scientific notation to standard form, Do not write the power of ten anymore. Category Education; License
### How To Write A Scientific Notation - anchun.store
Browse and Read How To Write A Number In Scientific Notation How To Write A Number In Scientific Notation Spend your few moment to read a book even only few pages.
### How To Write Scientific Notation - ezstupid.store
Writing Numbers In Exponential Notation. Written by: In order to make use of the exponential notation, you must be able to write both large and small numbers as
### How To Write Drum Set Notation - Sheboygan Drums - Drum
Browse and Read How To Write A Scientific Notation How To Write A Scientific Notation When writing can change your life, when writing can enrich you by offering much
### Vector notation - Wikipedia
11.11.2017 · Scientific notation is a standard way of writing very large and very small numbers so that they’re easier to both compare and use in computations. To
### Scientific Notation to Standard Form - YouTube
04.11.2011 · Video embedded · In this video we look at sever examples of how to write series in summation notation.
### How do you write 2,000,004 in scientific notation? | Socratic
We can use this method to write any number in scientific notation. SOCRATIC Subjects . Science Anatomy & Physiology
### How To Write In Function Notation - baumarkt.store
constant factor, and the big O notation ignores that. Similarly, logs with different constant • one write (of a primitive type: integer, float, character,
### How To Write A Number In Scientific Notation - cricbuzz.store
Video embedded · Function notation is used throughout math, Function Notation - Concept. Alissa Fong. Function notation is a way to write functions that is easy to read and
### How to Convert Fractions to Exponential Notation | Sciencing
Tips on writing drum set notation. Knowing how to write drum music gives you an advantage as a drummer. It's useful to write drum beats and fills.
### Write large numbers in scientific notation | LearnZillion
Does anyone know a good resource (preferably pictures) that illustrates a conventional way to write the special set symbols, i.e. $\mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C}$ etc., by hand?
### How to write special set notation by hand? - Stack Exchange
Tip. Consult a doctor or diagnostic manual to learn the names of disorders caused by chromosome irregularities, and write the disorder's name in the notation to make
### How to Write Numbers in Scientific Notation - dummies
Your notation is very unclear. I suspect you are trying to distinguish whether $x$ is an even or odd integer. In that case, you probably mean
### Scientific Notation - Maths Resources
2.75 x 10^6 You have to bring the decimal point after the first number, count the numbers behind it and write the number of digits as the power of ten! Hope that helps!
### Writing Numbers in Exponential Notation
Browse and Read How To Write In Scientific Notation How To Write In Scientific Notation When writing can change your life, when writing can enrich you by offering
### notation | writing | Britannica.com
Tip. Some fractions will not give a finite decimal number when divided. In these cases, you will have to write an approximate decimal number. For example, the
### How to Write Music Notation - key-notes
How do you write music notation of songs or hymns? How do you concentrate on musical beats?
### Songwriting Notation - Learn How To Write Lyrics & Songs!
Songwriter Exercise 6 - Songwriting Notation. Compose Your Song Using Meter-Tabs™ (Part 1) What are Meter-Tabs™ and how will they help you Notate your Song?
### Set Notation: Definition & Examples - Video & Lesson
Vector notation, commonly used mathematical notation for working with mathematical vectors, which may be geometric vectors or abstract members of vector spaces. For
### Writing a Series in Summation Notation - YouTube
Download and Read How To Write In Function Notation How To Write In Function Notation Bargaining with reading habit is no need. Reading is not kind of something sold
### How To Write Scientific Notation - baumarkt.store
04.11.2017 · When you write an academic paper, you use a mix of other people's research combined with your own ideas. Incorporating the ideas of other people requires
### How To Write In Function Notation - chetrope.store
06.11.2017 · When finding probabilities for a normal distribution (less than, greater than, or in between), you need to be able to write probability notations. Practice | 2018-01-18 23:22:02 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8618674874305725, "perplexity": 1659.5122735674925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887660.30/warc/CC-MAIN-20180118230513-20180119010513-00339.warc.gz"} |
https://gamedev.stackexchange.com/questions/79041/particle-friction-with-variable-timestep-in-xna | # Particle friction with variable timestep in XNA
Alright, so I'm working on an engine of sorts in XNA (yes, it's deprecated, I know) and I'm implementing my own particle system. I've defined a "ParticleEffect" such that when it's supplied a GameTime and an IEnumerable<T:IParticle>, it applies an "effect" to every particle in that collection.
For example, my Friction effect:
public sealed class Particle2D_Friction : ParticleEffectBase<Particle2D>
{
    public float Magnitude { get; set; }
    public float StopThreshold { get; set; }

    public override void Apply(GameTime time, IEnumerable<Particle2D> particles)
    {
        if (Enabled)
        {
            if (particles != null)
            {
                float delta = time.GetDelta();
                float magnitude = MathHelper.Clamp(Magnitude, 0.00f, 1.00f);
                Vector2 result = Vector2.Zero;
                foreach (var particle in particles)
                    if (particle != null && particle.Alive)
                    {
                        result = particle.Velocity - (delta * (1.00f - magnitude) * particle.Velocity);
                        if (Math.Abs(result.X) <= StopThreshold) result.X = 0.00f;
                        if (Math.Abs(result.Y) <= StopThreshold) result.Y = 0.00f;
                        particle.Velocity = result;
                    }
            }
        }
    }
}
Unfortunately, it isn't behaving quite as well as I'd like it to, and I know the issue is linked to the delta time. I should mention that time.GetDelta() is an extension method that returns (float)time.ElapsedGameTime.TotalSeconds; as a shortcut.
Anyways, when I remove the delta, the effect is rather strong, but it comes to an immediate stop when Magnitude is equal to 0.00f (retains 0% of velocity), while it moves indefinitely when equal to 1.00f (retains 100% of velocity). That's fine and that's how it should work.
When I add the delta to smooth it out, I encounter a problem. When Magnitude is equal to 1.00f, it retains 100% velocity as it should. However, when Magnitude is equal to 0.00f, it retains 1/60th of its velocity instead of stopping. Of course, 1/60 is equal to 0.01666..., which is the per-frame delta at 60 frames per second. But it's wrong.
I'm not sure how to fix this behavior. Any suggestions?
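One standard fix (a hedged sketch, not the accepted answer from this thread) is to treat Magnitude as the fraction of velocity retained *per second* and apply it exponentially, velocity * Magnitude^delta, so the damping is frame-rate independent and Magnitude = 0 still stops the particle outright. The math is shown in Python below and carries over directly to the C# Apply method:

```python
# Frame-rate-independent damping: interpret `magnitude` as the per-SECOND
# retention factor and raise it to the power of the elapsed time. Two 1/120 s
# steps then shed exactly as much speed as one 1/60 s step, and magnitude 0.0
# still stops the particle in a single update (0 ** dt == 0 for dt > 0).

def apply_friction(velocity, magnitude, delta, stop_threshold=0.0):
    """Return the damped (vx, vy) after `delta` seconds."""
    magnitude = min(max(magnitude, 0.0), 1.0)   # clamp like MathHelper.Clamp
    retained = magnitude ** delta if delta > 0 else 1.0
    vx, vy = velocity[0] * retained, velocity[1] * retained
    if abs(vx) <= stop_threshold:
        vx = 0.0
    if abs(vy) <= stop_threshold:
        vy = 0.0
    return (vx, vy)

print(apply_friction((10.0, -4.0), 0.0, 1 / 60))   # -> (0.0, 0.0): full stop
print(apply_friction((10.0, -4.0), 1.0, 1 / 60))   # -> (10.0, -4.0): no friction
# Frame-rate independence: two half-length steps match one full step.
v = apply_friction((10.0, 0.0), 0.5, 1 / 120)
v = apply_friction(v, 0.5, 1 / 120)
print(abs(v[0] - apply_friction((10.0, 0.0), 0.5, 1 / 60)[0]) < 1e-12)  # -> True
```

Reading the posted code, the linear form `Velocity - delta * (1 - magnitude) * Velocity` removes only delta = 1/60th of the velocity per update at Magnitude = 0.00f, so the particle never fully stops; the exponential form above avoids that.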
• The delta isn't supposed to "smooth it out". It's theoretically supposed to account for a variable unexpected framerate. If the framerate is stable, delta is not needed normally. I don't understand why magnitude 0.0 is supposed to stop it. In the code it does 1.00f - magnitude which will return 1.00f if magnitude is 0 and then multiply that by delta and you get 0.0166... The result is larger me thinks when magni is 0. – wolfdawn Jun 21 '14 at 6:41
• Oh, sorry. I had it set so that it was "retention". So like, 0.98f would mean it keeps 98% of its speed, 0.00f means it would stop entirely. That was the point of magnitude (and clamped between 0 and 1). But still, removing the delta results in a full stop when I give it 0, but adding the delta doesn't stop the particle entirely, which makes me wonder why I even need delta in the first place (aside from being "proper") – Kyle Baran Jun 21 '14 at 20:51
• My answer was incorrect, your math appears to be right. :) I am wondering why decreasing the speed at an exponential rate does not work to accomplish what you expected (it does not make sense to me). – wolfdawn Jun 23 '14 at 12:30
• The idea was that if Magnitude was set to 0.00f, then it would retain 0% of its movement when it updates, thus stopping the particle entirely, but because of delta, it was messing up the number. To get around it, the only solution I could find was hardcoding specific behavior to 0.00f manually, or else multiplying the amount by 60f (since delta was equal to 1/60th, or 0.01667). – Kyle Baran Jun 25 '14 at 12:12
• But if you multiply then there is no use in using delta in the first place.. If you use delta than you are saying I want this process to take one second or x seconds depending on the coefficient you use with delta. – wolfdawn Jun 25 '14 at 20:32 | 2019-08-23 01:24:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.463469922542572, "perplexity": 2003.0085932716813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317688.48/warc/CC-MAIN-20190822235908-20190823021908-00207.warc.gz"} |
https://am.angouri.org/wiki/07.-Solvers.html | ## Solvers
← Back to the main page
Here we shall review different ways to solve equations, inequalities, statements, and systems.
## Solving a classic equation
The method to solve a classic equation over a single variable in AM is SolveEquation. Since an equation generally has the form f(x) = g(x), the method SolveEquation expects your expression to be f(x) - g(x); that is, it solves f(x) - g(x) = 0. Example:
Entity expr = "2sin(a x) - b";
Console.WriteLine(expr.SolveEquation("x"));
Output:
{ (arcsin(b / 2) + 2 * pi * n_1) / a, (pi - arcsin(b / 2) + 2 * pi * n_1) / a }
Nonetheless, AM allows you to solve Statements, be they equalities, inequalities, or more advanced cases.
Extension: string.SolveEquation(Variable).
## Solving a statement
A Statement is an expression that is considered to be true or false. For example, a and b is a statement, as are x2 = 4 and x > a and x2 = b.
Let us start from a simple example:
Entity expr = "x2 = 16";
Console.WriteLine(expr.Solve("x"));
Output:
{ 4, -4 }
Here is a more advanced example:
Entity expr = "x4 = 16 and x in RR and x > 0";
Console.WriteLine(expr.Solve("x"));
Output:
{ 2 }
Entity expr = "x4 = 16 and x in RR and x > 0 or x^a = 6";
Console.WriteLine(expr.Solve("x"));
Output:
{ 2, 6 ^ (1 / a) }
Example with inequalities:
Entity expr = "2x2 - 3 > 0 and x > 0";
Console.WriteLine(expr.Solve("x").Simplify());
Entity expr2 = "2x2 - 3 > 0 or x > 0";
Console.WriteLine(expr2.Solve("x").Simplify());
Output:
((-oo; -sqrt(24) / 4) \/ (sqrt(24) / 4; +oo)) /\ (0; +oo)
(-oo; -sqrt(24) / 4) \/ (sqrt(24) / 4; +oo) \/ (0; +oo)
Extension: string.Solve(Variable).
The returned value is a Set, which can be one of four types: FiniteSet, Interval, SpecialSet, or ConditionalSet. Let us consider a basic example:
Entity expr = "a x2 + b x + c = 0";
var solutions = expr.Solve("x");
if (solutions is Entity.Set.FiniteSet finiteSet)
{
foreach (var root in finiteSet)
Console.WriteLine($"Root: {root}");
}
Output:
Root: (-b - sqrt(b ^ 2 - 4 * a * c)) / (2 * a)
Root: (-b + sqrt(b ^ 2 - 4 * a * c)) / (2 * a)
## Solving system of equations
Every equation of the system should be written as a simple equation (as in the first part of the article). Pass them to the Equations method of MathS:
var system = Equations(
"x2 + a y3",
"y - x - b"
);
This creates an instance of EquationSystem. Let us print it out:
Console.WriteLine(system);
Output:
x ^ 2 + a * y ^ 3 = 0
y - x - b = 0
The Solve() method of an EquationSystem instance takes variables in the same order as their values are then written to the solution matrix. Let us consider a very simple example:
var system = Equations(
"x2 + y",
"y - x - 3"
);
Console.WriteLine(system.Solve("x", "y"));
Output:
Matrix[2 x 2]
(1 - sqrt(-11)) / (-2) -((1 - sqrt(-11)) / (-2)) ^ 2
(1 + sqrt(-11)) / (-2) -((1 + sqrt(-11)) / (-2)) ^ 2
The first (left) column contains the values for x, the right one for y. The first row is the first solution, the second row is the second one.
Extensions: (string, string).Solve(Variable, Variable), (string, string, string).Solve(Variable, Variable, Variable), etc.
2019-2021 Angouri · Project's repo · Site's repo · Octicons | 2021-06-15 17:12:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3896200358867645, "perplexity": 2892.4321728395566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621450.29/warc/CC-MAIN-20210615145601-20210615175601-00170.warc.gz"} |
http://mathhelpforum.com/math-topics/102501-complex-functions-print.html | # Complex Functions
• Sep 15th 2009, 06:47 PM
Aryth
Complex Functions
Find v and a if $z = \cos{(2t)} + i\sin{(2t)}$.
I understand that:
$\frac{dz}{dt} = \frac{dx}{dt} + i\frac{dy}{dt}$
And similarly for the acceleration derivative.
I'm just not 100% sure that the velocity and acceleration I got are correct, especially the acceleration.
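For what it's worth, a worked sketch (not part of the original thread): writing $z$ in exponential form makes both derivatives immediate.

$z = \cos{(2t)} + i\sin{(2t)} = e^{2it}$

$v = \frac{dz}{dt} = -2\sin{(2t)} + 2i\cos{(2t)} = 2ie^{2it}$

$a = \frac{dv}{dt} = -4\cos{(2t)} - 4i\sin{(2t)} = -4e^{2it} = -4z$

So the motion is uniform circular motion with speed $|v| = 2$ and acceleration of magnitude $|a| = 4$ directed back toward the origin.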
• Sep 16th 2009, 07:38 AM
HallsofIvy
Quote:
Originally Posted by Aryth
Find v and a if $z = \cos{(2t)} + i\sin{(2t)}$.
I understand that:
$\frac{dz}{dt} = \frac{dx}{dt} + i\frac{dy}{dt}$
And similarly for the acceleration derivative.
I'm just not 100% sure that the velocity and acceleration I got are correct, especially the acceleration.
Well, then, tell us what you got and we can check them for you! | 2016-10-26 03:01:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9610130786895752, "perplexity": 1505.3168338140301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720475.79/warc/CC-MAIN-20161020183840-00370-ip-10-171-6-4.ec2.internal.warc.gz"} |
http://www.iam.khv.ru/article_eng.php?art=223&iss=19 | Far Eastern Mathematical Journal
To content of the issue
Relations between conjectural eigenvalues of Hecke operators on submotives of Siegel varieties
D. Yu. Logachev
2012, issue 1, P. 60–85
Abstract
There exist conjectural formulas of relations between $L$-functions of submotives of Shimura varieties and automorphic representations of the corresponding reductive groups, due to Langlands – Arthur. In the present paper these formulas are used in order to get explicit relations between eigenvalues of $p$-Hecke operators (generators of the $p$-Hecke algebra of $X$) on cohomology spaces of some of these submotives, for the case where $X$ is a Siegel variety. Hence, this result is conjectural as well: the methods related to counting points on reductions of $X$ using the Selberg trace formula are not used.
It turns out that the above relations are linear and their coefficients are polynomials in $p$ which satisfy a simple recurrence formula. The same result can be easily obtained for any Shimura variety.
This result is an intermediate step for the generalization of Kolyvagin's theorem of the finiteness of the Tate – Shafarevich group of elliptic curves of analytic rank 0 and 1 over $\mathbb{Q}$, to the case of submotives of other Shimura varieties, particularly of Siegel varieties of genus 3, see [9].
The idea of the proof: on the one hand, the above formulas of Langlands – Arthur give us (conjectural) relations between Weil numbers of a submotive. On the other hand, the Satake map permits us to transform these relations between Weil numbers into relations between eigenvalues of $p$-Hecke operators on $X$.
The paper also contains a survey of some related questions, for example explicit finding of the Hecke polynomial for $X$, and (Appendix) tables for the cases $g=2,3$.
Keywords:
Siegel varieties, submotives, Hecke correspondences, Weil numbers, Satake map
References
[1] A. N. Andrianov, V. G. Zhuravlev, Modular forms and Hecke operators, (In Russian; English version: Andrianov A.N., Quadratic forms and Hecke operators. Springer, 1987), Moscow, 1990.
[2] J. Arthur, “Unipotent automorphic representations: conjectures”, Asterisque, 171-172 (1989), 13–71.
[3] J. Adams, J. F. Johnson, “Endoscopic groups and packets of non-tempered representations”, Compositio Math., 64 (1987), 271–309.
[4] D. Blasius, J. D. Rogawski, “Zeta functions of Shimura varieties”, Motives. Proc. of Symp. in Pure Math., 55:2 (1994), 525–571.
[5] O. Bultel, On the mod $\mathfrak{P}$-reduction of ordinary $CM$-points, Ph. D. thesis, Oxford, 1997.
[6] P. Deligne, “Travaux de Shimura”, Lect. Notes in Math., Séminaire Bourbaki 1970/71, Exposé 389, v. 244, 1971, 123–165.
[7] G. Faltings, Chai Ching-Li, Degeneration of abelian varieties, Springer, 1990.
[8] N. Jacobson, Lie algebras, v. 10, Interscience tracts in pure and applied mathematics, 1962.
[9] D. Logachev, “Reduction of a problem of finiteness of Tate – Shafarevich group to a result of Zagier type”, Far Eastern Math. Journal, 9:1-2 (2009), 105–130, arXiv: 0411346.
[10] Satake Ichiro, “Theory of spherical functions on reductive algebraic groups over p-adic fields”, Publ. IHES, 1963, 229–293.
[11] D. Vogan, G. Zuckerman, “Unitary representations with non-zero cohomology”, Compositio Math., 53 (1984), 51–90.
To content of the issue | 2022-05-26 14:08:01 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8178654909133911, "perplexity": 2681.8232401904957}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662606992.69/warc/CC-MAIN-20220526131456-20220526161456-00543.warc.gz"} |
http://www.koreascience.or.kr/article/JAKO201836256831796.page | # 화학적산소요구량을 이용한 하구해역의 해수중 유기물 기원 고찰
• Kim, Young-Sug (Marine Environment Research Division, National Institute of Fisheries Science) ;
• Koo, Jun-Ho (Marine Environment Research Division, National Institute of Fisheries Science) ;
• Kwon, Jung-No (Marine Environment Research Division, National Institute of Fisheries Science) ;
• Lee, Won-Chan (Marine Environment Research Division, National Institute of Fisheries Science)
• Accepted : 2018.10.26
• Published : 2018.10.31
#### Abstract
This study examined the principal factors and water-quality components that determine the concentration of chemical oxygen demand (COD) in seawater in Korean estuaries, including those of the Han, Geum, Youngsan, Seomjin, and Nakdong rivers. The principal factors determining the COD concentration in seawater, as indicated by principal component analysis, were salinity, exogenous inputs, and autochthonous production based on chlorophyll-a; organic matter in the submarine sediment layer also had a secondary effect. Regression slopes were used to assess the contribution of water-quality components to the COD concentration in each estuary. The effect of salinity was significant across the overall survey, and the effect of chlorophyll-a also appeared in April and August. In each estuary, the largest contributing factor was chlorophyll-a in the Nakdong River and salinity in the Han and Yongsan rivers; in the Geum River the contributions of both salinity and chlorophyll-a were large, while in the Seomjin River both showed low contributions.
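The regression-slope step described above (estimating each water-quality component's contribution to COD) can be sketched as follows. The data, coefficients, and variable names here are synthetic assumptions for illustration, not values from the paper:

```python
import random

random.seed(1)
n = 500

# Synthetic stand-ins for the measured components (assumed, not the paper's data):
# COD falls as salinity rises (freshwater dilution) and rises with chlorophyll-a
# (autochthonous production), plus measurement noise.
salinity = [random.uniform(10, 35) for _ in range(n)]
chl_a = [random.uniform(0.5, 20) for _ in range(n)]
cod = [8.0 - 0.15 * s + 0.20 * c + random.gauss(0, 0.3)
       for s, c in zip(salinity, chl_a)]

def slope(x, y):
    """Ordinary least-squares slope of y on x: cov(x, y) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# The fitted slopes recover the assumed contributions (near -0.15 and +0.20),
# which is the sense in which a regression slope ranks each driver of COD.
print("salinity slope:", round(slope(salinity, cod), 2))
print("chl-a slope:", round(slope(chl_a, cod), 2))
```
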
#### Acknowledgement
Supported by: National Institute of Fisheries Science
#### References
1. Aoki, S., Y. Fuse and E. Yamada(2004), Determinations of humic substances and other dissolved organic matter and their effects on the increase of COD in Lake Biwa, Analytical Science, Vol. 20, pp. 159-164. https://doi.org/10.2116/analsci.20.159
2. Baek, S. H.(2014), Distribution characteristics of chemical oxygen demand and Escheria coli on pollutant sources at Gwangyang Bay of South Sea in Korea, Journal of the Korea Academia-Industrial cooperation Society, Vol. 15, No. 5, pp. 3279-3285. https://doi.org/10.5762/KAIS.2014.15.5.3279
3. Boynton, W. R., J. H. Garber, R. Summers and W. M. Kemp(1995), Inputs, transformations, and transport of nitrogen and phosphorus in Chesapeake Bay and selected tributaries, Estuaries, Vol. 18, pp. 285-314. https://doi.org/10.2307/1352640
4. Carstensen, J., J. H. Andersen, B. G. Gustafsson and D. J. Conley(2014), Deoxygenation of the Baltic Sea during the last century, Proceedings of the National Academy of Sciences USA, Vol. 111, pp. 5628-5633. https://doi.org/10.1073/pnas.1323156111
5. Cho, K. J., M. Y. Choi, S. K. Kwak, S. H. Im, D. Y. Kim, J. G. Park and Y. E. Kim(1998), Eutrophication and seasonal variation of water quality in Masan-Jihae Bay, The Sea Journal of the Korean society of oceanography, Vol. 3, No. 4, pp. 193-202.
6. Drira, Z., S. Kmiha-Megdiche, H. Sahnoun, A. Hammami, N. Allouche, M. Tedetti and H. Ayadi(2016), Assessment of anthropogenic inputs in the surfacewaters of the southern coastal area of Sfax during spring (Tunisia, Southern Mediterranean Sea), Marine Pollution Bulletin, Vol. 104, pp. 355-363. https://doi.org/10.1016/j.marpolbul.2016.01.035
7. Guo, X., M. Dai, W. Zhai, W. J. Cai and B. Chen(2009), $CO_2$ flux and seasonal variability in a large subtropical estuarine system, the Pearl River Estuary, China, Journal of Geophysical Research, Vol. 114, G03013, doi:10.1029/2008JG000905. https://doi.org/10.1029/2008JG000905
8. Hong, S. J., W. C. Lee, S. P. Yoon, S. E. Park, Y. S. Cho, J. N. Kwon and D. M. Kim(2007), Reduction of autochthonous organics in Masan Bay using a simple box model, Journal of the Korean Society of Marine Environment & Safety, Vol. 13, No. 2, pp. 111-118.
9. Huang, X., L. Huang and W. Yue(2014), The characteristics of nutrients and eutrophication in the Pearl River estuary, South China, Marine Pollution Bulletin, Vol. 47, pp. 30-36.
10. Humborg, C., A. Ittekkot, A. Cociasu and B. V. Bodungen (1997), Effect of Danube River dam on Black Sea biogeochemistry and ecosystem structure, Nature, Vol. 386, pp. 385-388. https://doi.org/10.1038/386385a0
11. Jang, J. I, I. S. Han, K. T. Kim and K. T. Ra(2011), Characteristics of water quality in the Shihwa Lake and outer sea, Journal of the Korean Society of Marine Environment & Safety, Vol. 17, No. 2, pp. 105-121. https://doi.org/10.7837/kosomes.2011.17.2.105
12. Jeong, D. H., H. H. Shin, S. W. Jung and D. I. Lim(2013), Variations and characters of water quality during flood and dry seasons in the eastern coast of south sea, Korea, Korean Journal of Environmental Biology, Vol. 31, No. 1, pp. 19-36. https://doi.org/10.11626/KJEB.2013.31.1.019
13. Jeong, Y. H., Y. T. Kim, Y. Z. Chae, C. W. Rhee, K. R. Ko, S. Y. Kim, J. Y. Jeong and J. S. Yang(2005), Analysis of long-term monitoring data from the Geum river estuary, The Sea Journal of the Korean Society of Oceanography, Vol. 10, No. 3, pp. 139-144.
14. Jung, W. S., S. J. Hong, W. C. Lee, H. C. Kim, J. H. Kim and D. M. Kim(2016), Modeling for pollution contribution rate of land based load in Masan Bay, Journal of the Korean Society of Marine Environment & Safety, Vol. 22, No. 1, pp. 59-66. https://doi.org/10.7837/kosomes.2016.22.1.059
15. Kairesalo, T., L. Tuominen, H. Hartikainen and K. Rankinen (1995), The role of bacteria un the nutrient exchange between sediment and water in a flow-through system, Microbial Ecology, Vol. 29, pp. 129-144. https://doi.org/10.1007/BF00167160
16. Kim, J. G.(2006), The evaluation of water quality in coastal sea of Incheon using a multivariate analysis, Journal of the Environmental Sciences, Vol. 15, No. 11, pp. 1017-1025. https://doi.org/10.5322/JES.2006.15.11.1017
17. Kim, J. G. and H. S. Jang(2016), A Study on the inflowing pollution load and material budgets in Hampyeong Bay, Journal of the Korean Society of Marine Environment & Safety, Vol. 22, No. 1, pp. 1-10. https://doi.org/10.7837/kosomes.2016.22.1.001
18. Kim, J. K., G. I. Kwak and J. H. Jeong(2008), Three-Dimensional mixing characteristics in Seomjin river estuary, Journal of the Korean Society for Marine Environmental Engineering, Vol. 11, No. 3, pp. 164-174.
19. Kim, Y. T., Y. S. Choi, Y. S. Cho and Y. H. Choi and S. Jeon(2015), Characteristic distributions of nutrients and water quality parameters in the vicinity of Mokpo Harbor after freshwater inputs, Journal of the Korean Society of Marine Environment & Safety, Vol. 21, No. 6, pp. 617-636. https://doi.org/10.7837/kosomes.2015.21.6.617
20. Lee, K. S. and S. K. Jeon(2009), Material budgets in the Youngsan river estuary with simple box model, Journal of the Korean Society for Marine Environmental Engineering, Vol. 12, No. 4, pp. 248-254.
21. Lim, J. S., Y. W. Kim, J. H. Lee, T. J. Park and I. G. Byun(2015), Evaluation of correlation between chlorophyll-a and multiple parameters by multiple linear regression analysis, Journal of Korean Society Environmental Engineers, Vol. 37, No. 5, pp. 253-261. https://doi.org/10.4491/KSEE.2015.37.5.253
22. Maciejewska, A. and J. Pempkowiak(2014), DOC and POC in the water column of the southern Baltic Part I. Evaluation of factors influencing sources, distribution and concentration dynamics of organic matter, Oceanologia, Vol. 56, No. 3, pp. 523-548. https://doi.org/10.5697/oc.55-3.523
23. MOF(2013), Ministry of Oceans and Fsheries, Marine environment standard methods, pp. 47-85.
24. Morioka, T.(1980), Application of ecological dynamics for eutrophication control in Kobe Harbour area, Pro Water Technology, Vol. 12, pp. 445-458.
25. Park, H. S., C. K. Park, M. K. Song, K. H. Baek and S. K. Shin(2001), Evaluation of water quality characteristic using factor analysis in the Nakdong river, Journal of Korean Society on Water Quality, Vol. 17, No. 6, pp. 693-701.
26. Quan, W. M., X. Q. Shen and J. D. Han(2005), Analysis and assessment on eutrophication status and developing trend in Changjiang Estuary and adjacent sea, Marine Environmental Science, Vol. 24, No. 3, pp. 13-16. https://doi.org/10.3969/j.issn.1007-6336.2005.03.004
27. Savchuk, O. P.(2002), Nutrient biogeochemical cycles in the Gulf of Riga: scaling up field studies with a mathematical model, Journal of Marine Systems, Vol. 32, No. 4, pp. 253-280. https://doi.org/10.1016/S0924-7963(02)00039-8
28. Shen, H. T., Q. H. Huang and X. C. Liu(2000), Fluxes of the dissolved inorganic nitrogen and phosphorus through the key interfaces in the Changjiang Estuary, Estuarine Coasts, Vol. 33, No. 6, pp. 1420-1429.
29. Shim, K. H., Y. J. Lee, B. K. Jeong, Y. S. Shim and S. H. Kim(2013), Determination of the origin of particulate organic matter at the estuary of Yungsan river using stable isotope ratios ($\delta^{13}$C, $\delta^{15}$N), Korean Journal of Ecology and Environment, Vol. 46, No. 2, pp. 175-184. https://doi.org/10.11614/KJEB.2013.46.2.175
30. Shin, S. K., C. K. Park and K. O. Song(1995) Evaluation of autochthonous COD in the Nakdong estuary, Journal of the Korean Fisheries Society, Vol. 28, No. 3, pp. 263-269.
31. Simpson, J. H., P. B. Tett, M. L. Argote-Espinoza, A. Edwards, K. J. Jones and G. Savidge(1982), Mixing and phytoplankton growth around an island in a stratified sea, Continental Shelf Research, Vol. 1, pp. 15-31. https://doi.org/10.1016/0278-4343(82)90030-9
32. Singh, K. P., A. Malik, D. Mohan and S. Sinha(2004), Multivariate statistical techniques for the evaluation of spatial and temporal variations in water quality of Gomti River (India): a case study, Water Research, Vol. 38, pp. 3980-3992. https://doi.org/10.1016/j.watres.2004.06.011
33. Straskrabova, V., J. Komarkova and V. Vyhnalek(1993), Degradation of organic substance in reservoirs, Water Science and Technology, Vol. 28, No. 6, pp. 95-104. https://doi.org/10.2166/wst.1993.0133
34. Wang, H., M. Dai, J. Liu, S. J. Kao, C. Zhang, W. J. Cai, G. Wang, W. Qian, M. Zhao and M. Sun(2016), Eutrophication-Driven Hypoxia in the East China Sea off the Changjiang Estuary, Environmental Science and Technology, Vol. 50, pp. 2255-2263. https://doi.org/10.1021/acs.est.5b06211 | 2020-10-01 19:55:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3481159508228302, "perplexity": 11643.139605288594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131986.91/warc/CC-MAIN-20201001174918-20201001204918-00753.warc.gz"} |
https://cstheory.stackexchange.com/questions/39327/complexity-class-on-quantum-computation-and-classic-ones | # Complexity class on quantum computation and classic ones
Does the superpolynomial speedup achieved by quantum computation mean it is possible to find a new algorithm for the classical Turing machine that achieves a superpolynomial speedup on the classical Turing machine?
update: I have just found that the Strong CT thesis contradicts the belief that the quantum model is faster than the classical one, or that the quantum model cannot be implemented if it gives a speedup over the classical one on some problem; see http://math.nist.gov/quantum/zoo/
So either the Strong CT thesis is not correct, or the quantum computer cannot be implemented, or we can find, for those problems that have a quantum speedup but no fast classical algorithm, a new classical algorithm that simulates the quantum one.
• I am having real trouble deciphering this question. Are you asking whether the following is true: "if there exists a quantum algorithm Q for problem P that runs superpolynomially faster than a classical algorithm A for the same problem, then there also exists a classical algorithm B that runs superpolynomially faster than A"? I see no reason at all why something like this would be true. – Sasho Nikolov Oct 19 '17 at 19:06
• @SashoNikolov yes, exactly, this is what I mean. – XL _At_Here_There Oct 20 '17 at 8:14
• @XL_at_China Wouldn't that by very definition contradict the idea of "quantum supremacy"? (Which is a currently wide open question) – Clement C. Oct 20 '17 at 17:50
• @ClementC. yes, I think so, because, Quantum computation does not reduce the computational complexity actually. NPC is still NPC – XL _At_Here_There Oct 21 '17 at 2:09
• So then you are conjecturing that BQP is contained in BPP. This is unknown and, afaik, is widely believed to be false. – Sasho Nikolov Oct 22 '17 at 21:17
No, there is no particular reason to think that, given a quantum algorithm that solves a problem very fast, one can find a classical algorithm that also runs very fast. The basic building blocks of a quantum computer are different, and although a classical computer can simulate a quantum computer (which means that quantum computability theory and classical computability theory are the same), as far as we know it cannot do so efficiently.
No one has ever proven a result like
If there exists a quantum algorithm that solves a decision problem in $O(f(n))$ time, then there exists a classical algorithm that solves the problem in $O(g(f(n)))$ time for some polynomial $g$
which is approximately what you’d need for this question to have an affirmative answer. As far as I know there’s absolutely no reason to think that a theorem along these lines is true. To talk about a specific example, let’s look at factoring.
Shor’s Algorithm achieves a superpolynomial speed-up in factoring, going from $$O(e^{1.9(\log N)^{1/3}(\log\log N)^{2/3}})$$ in the classical case to $\tilde{O}((\log N)^2)$ in the quantum case. There’s no particular reason (based on the existence of Shor’s algorithm) to think that factoring is in $P$. The technique used in Shor’s algorithm is fundamentally unusable by classical computers, and there doesn’t seem to be any way to replicate the technique with a classical computer.
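To make the division of labor concrete (a sketch based on the standard textbook description of Shor's algorithm, not code from this answer): the quantum computer is only needed to find the order $r$ of a random $a$ modulo $N$; extracting factors from that order is cheap classical post-processing.

```python
from math import gcd

def factors_from_order(N, a, r):
    """Classical post-processing step of Shor's algorithm (sketch).

    Given the multiplicative order r of a modulo N -- the quantity the
    quantum period-finding subroutine produces -- try to split N.
    """
    if r % 2 != 0:
        return None                  # need an even order; rerun with another a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                  # a**(r//2) == -1 (mod N): unlucky a, rerun
    p, q = gcd(x - 1, N), gcd(x + 1, N)
    if 1 < p < N:
        return (p, N // p)
    if 1 < q < N:
        return (q, N // q)
    return None

# Example: the order of 7 modulo 15 is 4 (7**4 = 2401 = 1 mod 15).
print(factors_from_order(15, 7, 4))  # -> (3, 5)
```

Everything here is polynomial-time classical work; the hard step, finding $r$, is where the quantum speedup lives.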
• I do not mean "given a quantum algorithm that solves a problem much faster than a classical algorithm, one can use that $quantum \space algorithm$ to find a classical algorithm that also runs very fast.", I mean it is possible to find $new \space classic \space algorithm$ on classic Turing Machine which can speedup in classic Turing Machine in superpolynomial? – XL _At_Here_There Oct 19 '17 at 13:02
• @XL_at_China if you don’t use the quantum algorithm to find/work out/get to the new classical algorithm, why did you mention quantum algorithms at all? Also, the best way to make text show up in italics is to use a single * on each side of the text. Save \$ for mathematics. – Stella Biderman Oct 19 '17 at 13:24
• I think there must be an relation between them, in other word, quantum algorithm implies classic algorithm, but they are not same. – XL _At_Here_There Oct 20 '17 at 8:16
• @XL_at_China That is the question I was trying to answer. I’ve edited my answer to hopefully be clearer about this. – Stella Biderman Oct 22 '17 at 12:28 | 2021-01-18 11:14:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5775072574615479, "perplexity": 437.8401953481521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514495.52/warc/CC-MAIN-20210118092350-20210118122350-00377.warc.gz"} |
https://repository.kaust.edu.sa/handle/10754/626235?show=full | dc.contributor.author Kazmi, Syed dc.contributor.author Hajjaj, Amal dc.contributor.author Hafiz, Md Abdullah Al dc.contributor.author Da Costa, Pedro M. F. J. dc.contributor.author Younis, Mohammad I. dc.date.accessioned 2017-11-29T11:13:55Z dc.date.available 2017-11-29T11:13:55Z dc.date.issued 2017-11-24 dc.identifier.citation Kazmi SNR, Hajjaj AZ, Hafiz MAA, Costa PMFJ, Younis MI (2017) Highly Tunable Electrostatic Nanomechanical Resonators. IEEE Transactions on Nanotechnology: 1–1. Available: http://dx.doi.org/10.1109/TNANO.2017.2777519. dc.identifier.issn 1536-125X dc.identifier.issn 1941-0085 dc.identifier.doi 10.1109/TNANO.2017.2777519 dc.identifier.uri http://hdl.handle.net/10754/626235 dc.description.abstract There has been significant interest towards highly tunable resonators for on-demand frequency selection in modern communication systems. Here, we report highly tunable electrostatically actuated silicon-based nanomechanical resonators. In-plane doubly-clamped bridges, slightly curved as shallow arches due to residual stresses, are fabricated using standard electron beam lithography and surface nanomachining. The resonators are designed such that the effect of mid-plane stretching dominates the softening effect of the electrostatic force. This is achieved by controlling the gap-to-thickness ratio and by exploiting the initial curvature of the structure from fabrication. We demonstrate considerable increase in the resonance frequency of nanoresonators with the dc bias voltages up to 108% for 180 nm thick structures with a transduction gap of 1 $mu$ m separating them from the driving/sensing electrodes. The experimental results are found in good agreement with those of a nonlinear analytical model based on the Euler-Bernoulli beam theory. 
As a potential application, we demonstrate a tunable narrow band-pass filter using two electrically coupled nanomechanical arch resonators with varied dc bias voltages. dc.description.sponsorship This work was supported by funding from King Abdullah University of Science and Technology (KAUST) research grant. dc.publisher Institute of Electrical and Electronics Engineers (IEEE) dc.relation.url http://ieeexplore.ieee.org/document/8119846/ dc.rights (c) 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. dc.subject Doubly-clamped bridges dc.subject electrostatic force dc.subject nanomechanical resonator dc.subject shallow arch dc.subject tunability dc.title Highly Tunable Electrostatic Nanomechanical Resonators dc.type Article dc.contributor.department Material Science and Engineering Program dc.contributor.department Mechanical Engineering Program dc.contributor.department Physical Science and Engineering (PSE) Division dc.identifier.journal IEEE Transactions on Nanotechnology dc.eprint.version Post-print kaust.person Kazmi, Syed kaust.person Hajjaj, Amal Z. kaust.person Hafiz, Md Abdullah Al kaust.person Da Costa, Pedro M. F. J. kaust.person Younis, Mohammad I. refterms.dateFOA 2018-06-14T02:20:46Z dc.date.published-online 2017-11-24 dc.date.published-print 2018-01
https://jasss.soc.surrey.ac.uk/18/4/14.html | ### Abstract
The different ways individuals socialize with others affect the conditions under which social norms are able to emerge. In this work an agent-based model of cooperation in a population of adaptive agents is presented. The model has the ability to implement a multitude of network topologies. The agents possess strategies represented by boldness and vengefulness values in the spirit of Axelrod's (1986) norms game. However, unlike in the norms game, the simulations abandon the evolutionary approach and follow only a single generation of agents, who are nevertheless able to adapt their strategies based on changes in their environment. The model is analyzed for potential emergence or collapse of norms under different network and neighborhood configurations as well as different vigilance levels in the agent population. In doing so the model is found to exhibit interesting emergent behavior, suggesting potential for norm establishment even without the use of so-called metanorms. Although the model shows that the success of the norm is dependent on the neighborhood size and the vigilance of the agent population, the likelihood of norm collapse is not monotonically related to decreases in vigilance.
Keywords:
Social Norms, Agent-Based Modeling, Social Networks, Neighborhood Structure, Cooperation
### Introduction
1.1
The role of the structure of personal social networks in the process of diffusion of specific social facts has now long been acknowledged. Research in social network analysis subscribes to the structuralist hypothesis which claims that the adoption of social facts such as norms is not the cause but rather the effect of an individual's structural location in the complex web of social interactions. That is, people acquire norms as they acquire information – through ties structured in the network – and their spread is the direct consequence of the resources available to each and every individual in the network (Wellman 1983). Granovetter (1973) has famously validated this point of view in his classic study on the strength of weak ties. Other studies of diffusion dynamics have since shown that social network topologies can have important consequences for emergent patterns of collective behavior (see Newman et al. 2006). Moreover, most social contagion is complex, in the sense that multiple channels of communication and exposure are being exploited concurrently, yet possibly at varying spatial and temporal scales (Centola & Macy 2007). Individual-level decision-making thus plays an ever more important role as social contagion is dependent on strategic behavior as well as the perceived credibility and legitimacy of the social fact being diffused.
1.2
In this article special attention is therefore given to spatial agent-based modeling as a means of providing a powerful framework for exploring the mechanisms which lie at the foundations of the norm establishment process. In fact the use of computational simulation in the form of agent-based models to study social norms is nothing new. Axelrod (1986) used a game-theoretical foundation combined with an evolutionary approach; Axtell et al. (2001) observed the mechanics of transitions from social inequality to an equitable state by employing a model of individual action grounded in behavioral game theory; Epstein (2001) also explored the role of bounded rationality and thoughtless conformity on the adoption of norms with an ABM, to name just a few examples. This work draws heavily from Axelrod's norms game; however, it modifies the original framework to fit the purposes of simulating a single generation of adaptive agents in a spatial environment rather than using an evolutionary approach.
1.3
Thirty years later, the norms game, as Axelrod had dubbed it, is still a very valuable model because it attempts to account for the amorphous and opaque, yet much discussed concept of norm emergence through the lens of individual actions. Moreover, it does so using only a very simple and clearly understandable framework. However, the trouble with such abstract models is that they can often fall into the trap of theoretical instrumentalism, where choice of assumptions is subject to predictive power or plausible results (Hedström 2005). One can argue that research in the social sciences should always be led under the maxim of grounding its explanatory mechanisms in empirical findings (De Marchi 2005). In this article, the aim is to introduce a modified set of assumptions which draws from Axelrod's ideas, but at the same time avoids fictionalist temptations regarding parts of the model design. Moreover the focus is shifted to simulating agent adaptation in spatial structures within a single generation.
1.4
In doing this, the hope is to answer the following question: how does the changing nature of individuals' social network structures affect the emergence and internalization of certain social norms? This paper will focus on cooperation-based norms with defections beneficial to individuals yet detrimental to society. Even more specifically, the focus will be on such norms with defection rewards independent of the population size (one example of this type of norm is tax-paying and the issue of tax evasion, where the total sum of money evaded by any individual actor is certainly independent of the size of the population). The goal is to elucidate the answer with the help of a computational model extending and modifying the original norms game in order to study agent adaptation in a multitude of environmental configurations. A background review of sociological theory on communities and social networks is presented first, before demonstrating the re-implementation of the norms game in the MASON package. Most importantly, a full description and specification of the new model follows. Finally, some simulation results are discussed.
### Background
2.1
Bicchieri (2006) defines a social norm as a behavioral rule such that a sufficiently large part of the population is aware of its existence and its application to relevant situations. Moreover, individuals must prefer to conform to this rule on the condition that others are believed to conform to it as well, and that others are believed to expect the individual to conform to it and may sanction behavior. The question of emergence of social norms is an age-old conundrum of the social sciences. Sociologists as far back as Durkheim ([1893] 1997) in the 19th century have hypothesized on the nature of how norms "come to life" and how they are propagated throughout society and sustained in an emergent bottom-up manner (see Sawyer 2002). Later in the second half of the 20th century, Parsons (1964) built his entire theory around the mechanisms responsible for the adoption of norms and their spread in modern societies.
2.2
It should be no surprise then, that ever since agent-based modeling established itself as a specific research domain in the social sciences, there have been attempts at using such computational simulation methods to explore the emergence of social norms. Today, there exists a large amount of research where agent-based models are used to help answer these questions.
2.3
Savarimuthu and Cranefield (2009) categorize simulation models of social norms with respect to the ways they represent a) norm creation, b) norm spreading, and c) norm enforcement. In different models norms can be designed off-line (Castelfranchi & Conte 1995), created by leader agents (Kittock 1993), or in other instances they can be cognitively deduced from the behavior of other agents (Andrighetto et al. 2008). Similarly the spread of norms can be fueled by leadership (Kittock 1993), imitation (Epstein 2001), as well as evolution (Axelrod 1986). The enforcement of social norms has been modeled using sanctioning mechanisms (Axelrod 1986) as well as reputation of agents (Hales 2002), among other approaches.
2.4
Such models have been recently used to model a variety of real-world applications, ranging from the international diffusion of political norms (Ring 2014), through the prediction of smoking cessation trends (Beheshti & Sukthankar 2014), to the mapping of diffusion of safe teenage driving (Roberts & Lee 2012). These represent only a small fraction of the entire body of social norms research utilizing agent-based models. However, perhaps the first agent-based model concerned with the emergence of norms, and certainly the most classic one, is Axelrod's (1986) evolutionary model.
#### Axelrod's Norms Game
2.5
In his seminal 1986 paper "An Evolutionary Approach to Norms" (Axelrod 1986), Axelrod presents a simple agent-based model which seeks to explain the mechanisms which eventually lead to establishing a norm in a society. To achieve this goal Axelrod used the game-theoretical concept of the prisoner's dilemma and extended it to n players (see Manhart & Diekmann 1989). In the model, agents have a simple choice to either cooperate with other agents or to defect.
2.6
The agents possess two attributes which govern their behavior. These attributes are defined as boldness and vengefulness. Both of these can take on values between 0/7 and 7/7 (to constrain them to 3 bits). Agents also have a numerical score assigned to them, which represents how well they are doing in the "norms game". Finally each agent gets assigned a probability of being seen during the defection. This probability is a random number sampled each round from a uniform distribution on the interval $$(0,1)$$. An agent will then defect, if its boldness is higher than the probability of being seen in the given round. Every time an agent defects it receives a temptation payoff of $$T = 3$$; at the same time all of the other agents get a negative payoff of $$H = -1$$, because they are hurt by the defection.
2.7
On the other hand, if an agent sees a defection, it will have to decide whether to punish it or not. This happens with a probability equal to its vengefulness value. After the punishment the original defector is hurt by $$P = -9$$ points, however the punisher's score is also negatively adjusted by $$E = -2$$, the assumption being that the enforcement of the punishment comes at a certain cost.
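The payoff mechanics above can be sketched as follows. This is a minimal Python illustration (the models discussed in this article are implemented in Java/MASON); the dictionary-based agent representation, the helper name, and the per-witness sampling of the seen probability are our assumptions:

```python
import random

T, H = 3, -1   # temptation payoff and hurt to all others
P, E = -9, -2  # punishment and enforcement cost

def play_round(agents):
    """One round of Axelrod's norms game (sketch). Each agent is a dict
    with 'boldness' and 'vengefulness' in {0/7, ..., 7/7} and a running
    'score'."""
    for agent in agents:
        s = random.random()              # this agent's chance of being seen
        if agent['boldness'] > s:        # defect if bold enough
            agent['score'] += T
            for other in agents:
                if other is agent:
                    continue
                other['score'] += H      # everyone else is hurt
                # each other agent sees the defection with probability s
                # and punishes with probability equal to its vengefulness
                if random.random() < s and random.random() < other['vengefulness']:
                    agent['score'] += P
                    other['score'] += E
```

Note how a lone defector among non-vengeful agents always comes out ahead (+3 versus −1 for everyone else), which is exactly why the norm struggles to emerge without enforcement.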
2.8
On top of this game-theoretic design, Axelrod (1986) superimposed an evolutionary mechanism responsible for selection and reproduction of high-scoring individuals. Every four rounds of the game, the agents are evaluated and ranked by their score. Players whose score falls within one standard deviation of the population average are given one offspring, players who are at least one standard deviation above the mean are given two offspring to seed the next generation of agents (the next 4 rounds of the simulation), and players more than one standard deviation below the mean are given none. The offspring are then mutated, by introducing a small probability of flipping the bits of their boldness and vengefulness values.
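That selection step can be sketched as follows, under one common reading of Axelrod's rule (within one standard deviation of the mean: one offspring; at least one above: two; otherwise none). The helper name is hypothetical, and in the full model the offspring would additionally be mutated and their scores reset:

```python
from statistics import mean, pstdev

def select_offspring(agents):
    """Selection applied every four rounds (sketch). Returns the next
    generation as copies of the selected agents' strategies."""
    scores = [a['score'] for a in agents]
    mu, sd = mean(scores), pstdev(scores)
    offspring = []
    for a in agents:
        if a['score'] >= mu + sd:
            offspring += [dict(a), dict(a)]  # two copies of this strategy
        elif a['score'] > mu - sd:
            offspring.append(dict(a))        # one copy
    return offspring
```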
2.9
Upon analyzing the model, Axelrod discovered that under the given conditions the norm rarely emerges. Thus, he revised the original model and introduced "metanorms" — that is, norms which dictate to not only punish defectors, but to also punish those who are seen not punishing defectors. Just as in the norms game, meta-punishment decreases the punished agent's score ($$\mathrm{MP} = -9$$) and it comes at an enforcement cost to the meta-punisher ($$\mathrm{ME}= -2$$). The agent's decision to punish as well as meta-punish is tied to the same vengefulness value. Only after the introduction of this new mechanism was Axelrod able to have norms emerge in the model.
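The meta-punishment step can be sketched in the same style; `metanorm_step` is a hypothetical helper and the sampling details are our assumption:

```python
import random

MP, ME = -9, -2  # meta-punishment and meta-enforcement cost

def metanorm_step(witness, nonpunisher, seen_prob, rng=random):
    """Metanorms extension (sketch): a witness that sees (with probability
    seen_prob) another agent fail to punish a defector meta-punishes with
    probability equal to its own vengefulness -- the same value that
    governs ordinary punishment."""
    if rng.random() < seen_prob and rng.random() < witness['vengefulness']:
        nonpunisher['score'] += MP
        witness['score'] += ME
        return True
    return False
```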
2.10
The model has quickly become a staple of the agent-based modeling community and a classic in the modeling of norms. As such, it has also been heavily scrutinized and replicated (e.g. Galán & Izquierdo 2005; Galán et al. 2011; Mahmoud et al. 2012) as well as criticized. Authors have noted Axelrod's unclear and potentially weak experimental design, and the nature of model constraints and conditions, which seems to be arbitrary in certain cases (Galán & Izquierdo 2005). Another often cited shortcoming (Mahmoud et al. 2012) is the perfect knowledge which Axelrod's agents had possessed – in short, the players in the game always knew about the strategies and defections of all of the remaining players. For example in Mahmoud et al.'s (2012) paper the authors used a learning algorithm to overcome this aspect of the model. Another way of imposing some imperfection onto the agents' knowledge of their environment is to introduce a topology, or a spatial concept into the model.
2.11
There have in fact been studies on the norms game played between agents located on networks. Galán et al. (2011) have studied the effects of playing the metanorms game on random, small-world and scale-free networks. Other studies have also demonstrated the ability to establish norms on different network topologies through meta-enforcement and meta-punishment (Mahmoud et al. 2012b, 2012c, 2013). However, such extensions to networks have been singularly focused on the replication of the metanorms game and have not taken into account the effects of network topology on the original norms game. And so the possibility of norm emergence in populations of networked agents without the help of metanorms has been left as an open question. Moreover, these efforts have maintained the evolutionary approach first used by Axelrod. This study seeks to explore the effects of network topology and the ability of agents to adapt within a single generation in a modified version of the original norms game. But to understand how norms can emerge in a social environment where the agents' knowledge is constrained by the people they know, their culture and the social networks they are associated with, we must first study the patterns of social network structure in modern society.
#### Community Structure
2.12
Social research has concerned itself with the concept of community in modern society ever since its inception. The terms community and community structure are here understood in their sociological context as functional and cohesive groups of people, and the shape and quality of the interaction networks within them, as opposed to the more technical definitions employed in network analysis. Ferdinand Tönnies (1957 [1887]) was one of the first sociologists to address the issue of community. Others followed suit in the following century (e.g. Wirth 1938; Fischer 1975, 1984).
2.13
A modern sociological theory on community life, which will be of interest to the modeling effort presented in this paper, is due to Wellman (2002). Wellman establishes a tripartite typology of contemporary patterns of social aggregation. He calls these types little boxes, glocalization and networked individualism respectively. His claim is that there is a general teleological shift in modern society from the first, to the second and finally to the third type. He argues that originally people were living in little boxes: a handful of tightly bounded, densely knit communities, each of them tied to a specific locality: the family, the workplace, a club, or an organization. Even in large cities, people were bound to the neighborhood and visited each other door-to-door. The place was an essential part of what glued the communities together. But with the proliferation of expressways, affordable air transportation and the increasing ease of long-distance communication, whether it was the home telephone, later on cell-phones and most recently the Internet, a shift to glocalized networks occurred. Suddenly, people were not bound by their locale anymore. Thanks to the above mentioned technologies, individuals can now obtain the same form of support, solidarity and companionship from physically distant people, that they would earlier be able to enjoy only from people living in their neighborhood. The result is a network of multiple communities, some tightly knit, some more loosely, with sparse interactions across these communities, but most importantly the relationships of people across different communities and those even within their limits are not tied to a specific location.
2.14
Finally, the move away from glocalization to networked individualism is propelled by the rise of the Internet and mobile phones. Interactions in little boxes were door-to-door, glocalized interactions were place-to-place, but now we are experiencing a shift to person-to-person interactions. Networks become even sparser, communities less tightly bounded, linkages are more ad hoc.
2.15
What could this mean for the emergence of norms in societies differentiated by these three types of community life? Perhaps in societies where the glocalization or networked individualism community structure is dominant, one would see norms struggling to emerge due to the fragmentary and diversified nature of the society. Or perhaps, conversely, these societies could be more conducive to the diffusion of norms because the social cross-linking could allow for a richer sampling of the cultural landscape.
### Methodology
3.1
To explore the concept of norm emergence in different spatial topologies, two different versions of an agent-based model implementing some basic concepts from Axelrod's (1986) original norms game have been developed. The first version of the model, which from here on will be referred to as "the network model" incorporates different network topologies on which the individual agents are allowed to interact. The second version, referred to as "the grid model" implements a grid-based environment for the agents' interaction. Naturally, a grid with some notion of neighborhood is by definition a network. The distinction between networks per se and grids is made because the "network model" can serve as a test of viability and of the general effects of playing the game in space by studying some well-known types of networks, whereas the grids provide a convenient way to represent a specific social theory (due to Wellman 2002). Both versions also simulate only a single generation of agents, who are however able to change their strategies throughout a simulation run in an attempt to adapt to changing environments.
3.2
The original model designed by Axelrod was also recreated for the purposes of comparison and verifying the correct implementation of the basic model assumptions. Replication and re-implementation of computational models prove to be very important tools for verifying the results of experiments (Axelrod 1997; Edmonds & Hales 2003). Although the technique is not yet widespread in the ABM community, replication is perhaps even more important in the field of simulation than in others (Wilensky & Rand 2007). The re-implementation done here serves as confirmation that the new models arise from the same basic framework as Axelrod intended.
3.3
Both of the new models introduced here provide a simulation environment which for the purpose of this study is instantiated and run a large number of times under different model parameter settings and, importantly, using different topologies. During these runs important model variables are tracked and recorded. Finally a sensitivity analysis of the model variables to parameters and topologies is performed.
Figure 1. Class diagram of the model.
3.4
For the first version of the model three different network topologies are considered. Apart from random networks, small-world networks and scale-free networks are also utilized. Every time a random network is instantiated in the model, it is generated using the Erdős–Rényi algorithm (Erdős & Rényi 1959). In a similar vein, small-world networks are grown using the Watts–Strogatz algorithm (Watts & Strogatz 1998), and finally scale-free networks in the model are created via the Barabási–Albert algorithm (Albert & Barabási 2002). For random and small-world networks we focus solely on graphs with mean node degree $$k = 100$$. This number was chosen somewhat arbitrarily, with the hopes of staying true to some of the theories on the average number of meaningful social connections of humans (Dunbar 1992), while also keeping computational efficiency in mind. The degree distribution of the scale-free network follows a power law, in which case the mean and standard deviation are not always well defined.
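The first of these generators can be sketched in a few lines of standard-library Python; for a target mean degree $$k$$ one sets $$p = k/(n-1)$$. (A library such as networkx provides all three generators as `erdos_renyi_graph`, `watts_strogatz_graph` and `barabasi_albert_graph`; the demo below uses 200 nodes rather than the article's 1000 purely for speed.)

```python
import random

def erdos_renyi(n, p, rng=None):
    """Erdős–Rényi G(n, p) random graph as a list of adjacency sets:
    every possible edge is included independently with probability p."""
    rng = rng or random.Random(0)        # seeded for reproducibility
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

# target mean degree 100, as in the article's random networks
g = erdos_renyi(200, 100 / 199)
mean_degree = sum(len(nbrs) for nbrs in g) / len(g)
```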
3.5
As hinted above, the networks' nodes represent the social actors in the model, in this case, individual people, while links between nodes represent meaningful social connections, such as friendship, kinship, professional acquaintances, colleagues, supervisors, etc. It is assumed, that any time two nodes are connected via a link, their relationship is such so that any behavior perceived as non-conforming to the actors' beliefs could potentially be met with a substantial reaction.
3.6
The model consists of a set number of agents connected via links in a network, an adaptive mechanism (strategy selection) which is used to re-seed the network with agents' new behavioral information at evenly spaced time intervals based on the attributes of the selected agents, and a mutation mechanism (Figure 1). The model includes a single global parameter $$W$$ – the mean probability of an agent witnessing a defection. An agent $$i$$ is then assigned its own probability $$W_i$$ of witnessing a defection from the normal distribution centered at $$W$$ with a standard deviation $$\sigma = 0.2W$$. This attribute represents an agent's vigilance.
3.7
As in the original norms game each agent possesses a boldness value $$B$$ ranging from 0/7 to 7/7 and a vengefulness value $$V$$ also ranging from 0/7 to 7/7. Moreover every agent is assigned a numerical score which at the beginning of each simulation run is set at 0. However, because the motivation for this model is to capture adaptation of behaviors in a network of actors rather than to simulate the evolution of agents, the payoffs are not reset at the beginning of each period. Because the model simulates a single generation of agents, the payoffs continue to accumulate throughout the entirety of a simulation run. Thus an agent's payoffs do not necessarily reflect how good its current strategy is. This is done because one of the aspects of the agents' bounded rationality is their inability to clearly discern the effects of individual strategies. The agents will simply emulate the behaviors of successful agents in their neighborhood. They do not, however, have the ability to discern which strategies contributed to which portion of the payoffs.
#### Agent Decision-Making
3.8
The first decision an agent has to make each round is whether to defect or not. To determine whether agent $$i$$ defects we first need to know its boldness value $$B_i$$ and $$S$$, the probability it will be seen, should it decide to defect. The probability of being seen is directly tied to the number of an agent's neighbors and their witnessing probabilities. Naturally, the larger an agent's social network is, the bigger the chance of being caught by at least some of its neighbors. However, to avoid the defection decision becoming completely determined by the size of one's neighborhood, and to account for the diversity of conditions which are variously favorable to defection, a certain amount of noise is added to the equation. Thus:
$$S = 1- \prod_{i\in N} (1- W_i + R)$$
Here $$N$$ is the set of the agent's neighbors, $$W_i$$ is the witnessing probability of neighbor $$i$$, and $$R$$ is a normally distributed random variable with $$\mu = W$$, $$\sigma = W$$. Apart from this modification it is also assumed that agents possess only bounded rationality and thus will always gauge the probability of being seen with some degree of error. Therefore the perceived probability of being seen must also be calculated for each agent:
$$S_p = 1- (1- W+R)^n$$
Here $$n$$ is the number of the agent's neighbors. Finally, an agent will defect only if $$B_i > S_p$$, that is, if its boldness value is greater than the perceived probability of being seen by at least one of its neighbors. As the mechanism stands, the boldness label might not be appropriate anymore; opportunity would perhaps be a better choice. However, for the sake of clarity and continuity the original nomenclature is preserved.
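The two detection probabilities can be written directly from the formulas above. This is a sketch under our reading that $$R$$ is drawn once per decision (the placement of $$R$$ inside the product leaves this ambiguous); the function names are ours:

```python
import random

def seen_probability(neighbor_ws, mean_w, rng=random):
    """True probability of being seen: S = 1 - prod_i (1 - W_i + R),
    with one noise draw R ~ Normal(mean_w, mean_w) per decision."""
    r = rng.gauss(mean_w, mean_w)
    prod = 1.0
    for w in neighbor_ws:
        prod *= (1 - w + r)
    return 1 - prod

def decides_to_defect(boldness, n_neighbors, mean_w, rng=random):
    """Perceived probability S_p = 1 - (1 - W + R)^n; the agent defects
    iff its boldness exceeds S_p."""
    r = rng.gauss(mean_w, mean_w)
    s_perceived = 1 - (1 - mean_w + r) ** n_neighbors
    return boldness > s_perceived
```

Because the noise term is added inside the product, $$S$$ is not guaranteed to stay in $$[0,1]$$; as the article notes, this is tolerated as part of the bounded-rationality design.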
3.9
The second decision an agent has to make, is whether to punish a defection. This is done for all of the agent's neighbors in the same way as in the original Axelrod model. That is, if an agent's neighbor defects, the model first checks whether the agent $$j$$ sees the defection, which happens with probability $$W_j$$. If the agent indeed sees the defection it then punishes the neighbor with probability equal to its vengefulness value $$V_j$$. See Figure 2 for a flow diagram of agent activity.
3.10
Since $$W$$ is an exogenous parameter, in model configurations with a high value of $$W$$, some agents, while being very vigilant, will never follow through with any sort of punishment, because of their low vengefulness value. However, the notion behind vigilance is that it may potentially act as a deterrent even if the agents do not follow through, because the defecting agents cannot know with certainty whether the punishment will come or not.[1] The vigilance parameter also represents the tightness of links between agents. Tightly-knit communities of agents, such as traditional family structures, are vigilant "by default" because their members interact with each other frequently. Thus, vigilance is also a proxy for frequency and intensity of other interactions which are not directly modeled.
Figure 2. Agent activity in the modified norms game model.
3.11
All actions undertaken by the agents in the model result in payoffs distributed according to the same payoff matrix used in the original norms game.
3.12
After every four rounds each agent is evaluated together with all of its neighbors. The adaptive process is illustrated in Figure 3. At the beginning, the agents in the neighborhood are ranked by their payoffs. If the agent itself falls at least one standard deviation above the neighborhood's mean payoff value then the agent simply retains its current behavior and does not consider any further options. If it does not, then it randomly chooses any neighbor which lies at least one standard deviation above the neighborhood mean and copies that agent's behavior for its own. The reasoning behind this choice is simple: the agents have a rough idea of who in their neighborhood is doing well and who is not. If they are doing well, then they are content with their current strategy. Otherwise they attempt to imitate the strategy of some well-off agent in their neighborhood.
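The adaptive step can be sketched as follows, assuming the neighborhood list includes the focal agent and that strategies are stored as boldness/vengefulness fields (representation and helper name are ours):

```python
import random
from statistics import mean, pstdev

def adapt(agent, neighborhood, rng=random):
    """Strategy selection run every four rounds (sketch): an agent whose
    cumulative payoff is at least one standard deviation above its
    neighborhood's mean keeps its strategy; otherwise it copies the
    strategy of a random neighbor that is."""
    payoffs = [a['score'] for a in neighborhood]
    mu, sd = mean(payoffs), pstdev(payoffs)
    if agent['score'] >= mu + sd:
        return                                # content with current strategy
    well_off = [a for a in neighborhood
                if a is not agent and a['score'] >= mu + sd]
    if well_off:
        role_model = rng.choice(well_off)
        agent['boldness'] = role_model['boldness']
        agent['vengefulness'] = role_model['vengefulness']
```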
3.13
After the adaptive process is done each agent's behavior has a small probability of being randomly modified. This represents imperfect imitation of behaviors. The modification is done as mutation in the original norms game where each bit has a 1% chance of being flipped.
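The bit-flip modification can be sketched as below; the round-trip through a 3-bit integer is our assumption about how the 0/7–7/7 values are encoded:

```python
import random

def mutate(agent, flip_prob=0.01, rng=random):
    """Imperfect imitation (sketch): each of the three bits encoding
    boldness and vengefulness (values 0/7 .. 7/7) is independently
    flipped with probability flip_prob."""
    for attr in ('boldness', 'vengefulness'):
        bits = round(agent[attr] * 7)        # back to a 3-bit integer
        for b in range(3):
            if rng.random() < flip_prob:
                bits ^= 1 << b               # flip bit b
        agent[attr] = bits / 7
    return agent
```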
3.14
The norm is said to emerge in the model if the average boldness of the agent population is low and the average vengefulness of the agent population is high. This represents the agents' internalization of cooperation-enabling behavior. It should be noted that cooperative behavior can be present even in agents who do not internalize the norm. For example an agent might decide not to defect, even though it has a high boldness value, simply because it thinks that the probability of getting caught is too high. Thus, it is important to distinguish between cooperative behavior and the internalization of cooperation-enabling dispositions.
#### Grid Model Design
Figure 4. Neighborhood types in the grid model. Red cell is the inspected agent, green cells represent neighborhood cells.
3.15
In the second implementation of the model, the network topology is replaced by a spatial environment based on a rectangular grid. Each cell in the grid is occupied by a single agent. The most important way in which the two different models presented here differ is the way agents' neighborhoods are defined. The grid-based model makes use of the neighborhood typology introduced by Wellman (2002). Thus, three different types of neighborhoods were implemented into the model, to reflect the little boxes, glocalization, and networked individualism neighborhood patterns.
3.16
The little boxes mode is represented simply as a Moore neighborhood of radius 3 surrounding the agent's cell. This gives each agent precisely 48 neighbors. To represent the glocalization pattern each agent has a "core" neighborhood of the 8 surrounding cells as well as a number of "satellite" communities composed of $$3\times3$$ cell squares of agents, randomly dispersed throughout the grid. The total number of communities (including the core) is taken from a normal distribution with mean $$\mu = 5.5$$ and standard deviation $$\sigma = 0.5$$. This gives every agent a mean number of 48.5 neighbors. Finally, networked individualism was implemented in two different ways. In the first approach the agent retains its core neighborhood of the 8 surrounding cells, with another 40 agents chosen randomly on the grid. In the second approach all 48 neighbors were selected randomly from the grid. Figure 4 provides a visual overview of the different neighborhood types.
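The little-boxes neighborhood, for instance, is a radius-3 Moore neighborhood; a minimal sketch follows. The toroidal wrapping at the grid edges is our assumption, as the article does not specify boundary handling, and the glocalized and networked-individualism variants would add randomly placed $$3\times3$$ satellite blocks or randomly drawn cells in the same style:

```python
def moore_neighborhood(x, y, radius, width, height):
    """Cells within the Moore neighborhood of the given radius on a
    toroidal grid; radius 3 yields the 48 neighbors of the little-boxes
    mode."""
    cells = set()
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue                     # exclude the agent's own cell
            cells.add(((x + dx) % width, (y + dy) % height))
    return cells
```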
3.17
The agent decision-making process, as well as the strategy adaptation and modification processes, work in precisely the same way as described in the network model design, with agents' neighborhoods based on the definitions of the three types described above.
3.18
All of the model versions described here were developed in the Java-based MASON package (Luke et al. 2005), which is specifically tailored for agent-based model programming. All of the model versions were verified for proper functionality with code walkthroughs using the Java debugger, unit tests, supporting print statements, and visual displays for neighborhood testing. The source code for all model versions is available at www.openabm.org/model/4714/.
#### Experimental Design
3.19
Experimentation consisted of multiple batches of a large number of simulation runs for both models. For the network version, 100 simulation runs of 1000 agents were executed for 5,000 time steps for each of the three network topologies (random, small-world, and scale-free) and for each of four mean witnessing probability values $$W = 0.2, 0.1, 0.01, 0.001$$, for a total of 1,200 simulation runs. For the grid version, 100 simulation runs of 10,000 agents (laid out on a $$100\times100$$ square grid) were executed for 5,000 time steps for each of the four neighborhood types (little boxes, glocalization, and the two versions of networked individualism) and for each of the four witnessing probability values, giving a total of 1,600 runs. For each parameter setting (neighborhood type/witnessing probability pair), the averages across all 100 runs were recorded and stored. The re-implemented model originally designed by Axelrod was run 100 times for 10,000 time steps. The run lengths for the different model versions were chosen so as to allow the system to reach a state of (dynamic) equilibrium.
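The sweep described above can be expressed as a simple driver loop. This is a hypothetical skeleton, not the actual experiment code: `run_model` stands in for a full simulation run and its signature is assumed.

```python
# Hypothetical sketch of the parameter sweep: 100 runs per
# (topology, witnessing probability) cell, outcomes averaged across runs.
from itertools import product

TOPOLOGIES = ["random", "small-world", "scale-free"]
W_VALUES = [0.2, 0.1, 0.01, 0.001]
RUNS_PER_CELL = 100

def sweep(run_model):
    """run_model(topology, W, seed) -> (avg_boldness, avg_vengefulness)."""
    results = {}
    for topology, W in product(TOPOLOGIES, W_VALUES):
        outcomes = [run_model(topology, W, seed)
                    for seed in range(RUNS_PER_CELL)]
        # element-wise mean over the 100 runs of this cell
        results[(topology, W)] = tuple(sum(v) / len(v)
                                       for v in zip(*outcomes))
    return results

# 3 topologies x 4 witnessing probabilities x 100 runs = 1,200 runs in total
```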
### Results
4.1
The first test checked whether the re-implementation of the original model produces behaviors resembling those described in Axelrod's paper (that is, almost ubiquitous collapse of the norm and only very rare establishment). Following others who have re-implemented this model (Galán & Izquierdo 2005), the norm was defined to have collapsed at a given time step whenever the average boldness in the agent population was at least 6 while the average vengefulness was at most 1. Similarly, the norm was defined to have been established at a given time step whenever the average boldness was at most 2 and the average vengefulness was at least 5. Axelrod (1986) arrives at the correct conclusion that the norm collapses most of the time, although his results were in fact inconclusive. In their re-implementation, Galán and Izquierdo (2005) traced this variability to the rather short run times of the original experiments, and demonstrated that in the long run the norm does indeed collapse the vast majority of the time. The results obtained from the implementation developed for this study visually match those of Galán & Izquierdo (2005). Figure 5 shows the proportion of runs resulting in either norm collapse or establishment over time.
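The run-classification rule above (thresholds from Galán & Izquierdo 2005, on Axelrod's 0-7 strategy scale) translates directly into a small predicate; a sketch:

```python
def classify(avg_boldness, avg_vengefulness):
    """Label a population state on Axelrod's 0-7 scale, using the
    collapse/establishment thresholds quoted in the text."""
    if avg_boldness >= 6 and avg_vengefulness <= 1:
        return "collapsed"
    if avg_boldness <= 2 and avg_vengefulness >= 5:
        return "established"
    return "undecided"

assert classify(6.5, 0.5) == "collapsed"
assert classify(1.0, 6.0) == "established"
```

Counting how many of the 100 runs fall into each category at each time step yields the proportions plotted in Figure 5.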
Figure 5. Proportions of runs that result in norm establishment vs. norm collapse in the re-implementation of the original Axelrod model.
Figure 6. Contour plots of defection probability in small-world/random networks (left) with mean degree 100 and $$\sigma =10$$, and scale-free networks (right).
4.2
An analysis of the modified network model is provided next. To begin with, the expected payoffs associated with defection and the expected costs associated with enforcement are tied to the size of agents' neighborhoods. Ignoring the effect of noise, the probability of defection of a random agent with boldness $$b$$, assuming a fixed witnessing probability $$W$$, can be calculated as follows:
$$P(S < b) = P\left(1-(1-W)^n < b\right) = P \left( n < \frac{\ln(1-b)}{\ln(1-W)} \right)$$
Here $$n$$ is the number of neighbors to which the agent is connected in the network. Thus, for random and small-world networks where node degree is normally distributed the following is true:
$$P(S < b) = \frac{1}{2} \left[ 1 + \mathrm{erf} \left( \frac{\frac{\ln(1-b)}{\ln(1-W)}-\mu}{\sigma\sqrt2} \right) \right]$$
Here, the right-hand side is obtained by substituting into the cumulative distribution function of the normal distribution with mean $$\mu$$ and standard deviation $$\sigma$$. Similarly, the same expression can be evaluated for scale-free networks, where the complementary cumulative degree distribution scales with $$n^{-2}$$. Thus, the probability of defection for scale-free networks follows:
$$P(S < b) = 1- \left( \frac{\ln(1-b)}{\ln(1-W)} \right)^{-2}$$
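The closed-form probabilities above are easy to check numerically. The sketch below compares the normal-degree formula against a direct Monte Carlo simulation of the defection condition $$1-(1-W)^n < b$$; the parameter values are illustrative, not taken from the paper.

```python
import math, random

def p_defect_normal(b, W, mu, sigma):
    """P(S < b) for normally distributed degree (the erf formula above)."""
    t = math.log(1 - b) / math.log(1 - W)  # degree threshold: defect if n < t
    return 0.5 * (1 + math.erf((t - mu) / (sigma * math.sqrt(2))))

def p_defect_scale_free(b, W):
    """P(S < b) when the complementary cumulative degree distribution
    scales with n^-2."""
    t = math.log(1 - b) / math.log(1 - W)
    return 1 - t ** -2 if t > 1 else 0.0

def p_defect_monte_carlo(b, W, mu, sigma, trials=100_000, seed=1):
    rng = random.Random(seed)
    # an agent defects when its chance of being seen, 1-(1-W)^n, is below b
    hits = sum(1 - (1 - W) ** rng.gauss(mu, sigma) < b
               for _ in range(trials))
    return hits / trials

b, W, mu, sigma = 0.5, 0.01, 100, 10
assert abs(p_defect_normal(b, W, mu, sigma)
           - p_defect_monte_carlo(b, W, mu, sigma)) < 0.01
```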
4.3
The contour plots of both probabilities as functions of boldness and witnessing probability are shown in Figure 6. With the defection probabilities in hand, the expected cost of enforcing punishment, $$E(C_P)$$, can be calculated. Specifically, for an agent with vengefulness $$v_i$$, and assuming an enforcement cost of $$E=-2$$, the following equality holds:
$$E(C_P) = 2 \cdot W \cdot v_i \cdot \sum^n_{j=1} P(S < b_j)$$
Here $$n$$ is the number of neighbors and $$W$$ is the mean witnessing probability. Hence, the cost per unit of vengefulness:
$$c_{UV} = 2 \cdot W \cdot \sum^n_{j=1} P(S < b_j)$$
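Numerically, the two cost expressions above behave as follows. This is a sketch with made-up values; the factor 2 is the magnitude of the enforcement cost $$E = -2$$.

```python
def expected_punishment_cost(W, vengefulness, neighbor_defect_probs):
    """E(C_P) = 2 * W * v_i * sum_j P(S < b_j), as in the text."""
    return 2 * W * vengefulness * sum(neighbor_defect_probs)

def cost_per_unit_vengefulness(W, neighbor_defect_probs):
    """c_UV: the enforcement cost borne per unit of vengefulness."""
    return 2 * W * sum(neighbor_defect_probs)

# 100 neighbors, each defecting with probability 0.5 (illustrative values)
probs = [0.5] * 100
assert abs(cost_per_unit_vengefulness(0.01, probs) - 1.0) < 1e-9
```

Both expressions grow linearly with the number of neighbors, which is the dependence on neighborhood size discussed below.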
4.4
Based on the previous analysis of the defection probability, the cost can be derived explicitly. For random and small-world networks, the cost at the beginning of a simulation run (when the expected boldness equals 0.5) approaches zero for $$W > 0.01$$. If $$W < 0.01$$, the system undergoes a phase transition and $$c_{UV}$$ scales with $$nW$$. As the average boldness of an agent's neighborhood changes, so does the value of $$W$$ at which the described transition occurs. For scale-free networks the costs scale with $$nW$$ if and only if $$b > W$$; the costs approach zero only when the average boldness is lower than or equal to the witnessing probability (see Figure 6).
4.5
Similarly the expected payoffs associated with defection, $$E(P_D)$$ can also be derived. For any given agent with boldness $$b$$ and mean witnessing probability $$W$$, the following holds:
$$E(P_D) = 3\cdot P(S < b)-9 \cdot P(S < b) \cdot W \cdot n \cdot \langle v \rangle$$
Here, $$\langle v \rangle$$ is the average vengefulness in the agent's neighborhood. Obviously the payoffs are zero in the regions where the probability of defection is zero. However, in regions where $$P(S < b) \rightarrow 1$$, the following is true:
$$E(P_D) = 3- 9 \cdot W \cdot n \cdot \langle v \rangle \qquad \mathrm{if~}P(S < b) \rightarrow 1$$
And so, payoffs from defection are positive whenever $$Wn\langle v \rangle < 1/3$$. This is trivially true whenever $$Wn < 1/3$$. One important observation is that while the temptation payoff remains the same (it was established to be independent of the population size), the punishment component grows linearly with the number of neighbors.
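The sign condition just derived can be verified numerically; a sketch with illustrative values (temptation payoff +3 and punishment -9, as in the expressions above):

```python
def expected_defection_payoff(p_defect, W, n, avg_vengefulness):
    """E(P_D) = 3*P(S<b) - 9*P(S<b)*W*n*<v>, as in the text."""
    return 3 * p_defect - 9 * p_defect * W * n * avg_vengefulness

# with P(S < b) -> 1 the sign flips exactly at W*n*<v> = 1/3:
assert expected_defection_payoff(1.0, 0.01, 100, 0.3) > 0  # W*n*<v> = 0.30
assert expected_defection_payoff(1.0, 0.01, 100, 0.4) < 0  # W*n*<v> = 0.40
```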
4.6
This analysis shows that the agents' propensity to punish defectors depends heavily on the mean witnessing probability in the system and the size of the agents' neighborhoods. The larger the social network of an agent, the more costly it is to be vengeful. On the other hand, smaller individual probabilities of witnessing defections result in decreased enforcement costs in the long run. Unlike the original norms game where population and the expected probability of being seen remained fixed, these dependencies can have important effects on the simulation results. It remains to be seen how the addition of noise and the different spatial topologies affect the behavior of the system.
4.7
Figures 7–8 show that, especially in the cases of random and small-world networks, the system settles into a state with low levels of boldness and defection and moderate levels of vengefulness (the figures use semi-logarithmic axes to make temporal trends more visible). Note that in these types of networks agents have on average 100 neighbors, whereas in the scale-free networks the typical number of neighbors is very low (due to the power-law nature of the node degree distribution). It is precisely the way in which the probability of being seen is implemented in the network model that dramatically affects the probability of norm emergence. Figure 9 shows the relationship between neighborhood size (degree) and average agent attributes. The analysis shows that neighborhood size becomes a factor when the witnessing probability is neither too low nor too high. When $$W = 0.2$$ vengefulness becomes costly. Conversely, when $$W = 0.001$$, the impact of witnessing probability on the expected payoffs far exceeds that of neighborhood size. This is in line with the previous mathematical analysis.
Figure 7. Average boldness over time (error lines showing one standard deviation). LEFT: Small-world networks at different witnessing probabilities. RIGHT: Different network topologies at witnessing probability $$p = 0.01$$.
Figure 8. Average vengefulness over time (error lines showing one standard deviation). LEFT: Small-world networks at different witnessing probabilities. RIGHT: Different network topologies at witnessing probability $$p = 0.01$$.
4.8
It is also worth noting the different dynamics of the population boldness with regard to the witnessing probability (see Figure 7). When $$W \geq 0.1$$, boldness levels decrease only slightly. This is because with perfect information the agents' probability of defection would be zero for the most part (see Figure 6). However, agents who still defect, either because they imperfectly estimate the probability of being seen or because they are outliers in terms of neighborhood size, receive negative payoffs for their actions since $$Wn > 1/3$$. This creates adaptive pressure towards lower boldness levels in this sub-population of agents. When $$W = 0.01$$ boldness decreases because the expected defection payoffs are negative for most agents. When $$W = 0.001$$ there is a small uptick in boldness followed by a slow decrease, even though the payoffs are now positive. Once a large enough fraction of an agent's neighbors become defectors, the agent, having many neighbors, is hurt more by their defections than it gains from its own. Thus, small clusters of cooperators that appear by chance have the opportunity to spread throughout the population.
4.9
Figure 8 shows that vengefulness decreases only for high values of $$W$$. When $$W \leq 0.01$$ enforcement is fairly cheap, and since boldness is self-regulated, there is little adaptive pressure on vengefulness. Although the actual cost of punishment is always non-zero, the vengefulness levels of most agents can remain fairly high, since defection ceases to occur due to decreasing levels of boldness. Moreover, the actual probability of enforcing punishment for any given agent is low, due to the low likelihood of seeing defections. In this way, no adaptive pressure[2] is exerted on the vengefulness attribute. The effect of added noise is most noticeable when $$W \geq 0.1$$. The expected costs of enforcement are zero, because no one is expected to defect. However, occasional misguided defectors (those affected by the added noise) still impose unnecessary costs on enforcers. Thus, for such large values of $$W$$, the population vengefulness converges to a quiescent state. Finally, the simulation results show that network topology does play some role in the rate at which boldness and vengefulness decrease.
Figure 9. Relationship between neighborhood size (degree) and agent attributes for different values of $$W$$. Red lines show values for small-world networks, blue lines show values for random networks.
4.10
To examine how the agents' strategy adaptation heuristic affects the resulting system behavior, the same runs were performed with the payoff mechanism implemented as in Axelrod's norms game, i.e. with payoffs resetting after every four rounds. Figures 10–11 show the results of these runs. It is clear that, especially in cases of low vigilance, the system reverts to the state described by Axelrod (1986), with prevailing high values of boldness and low levels of vengefulness, essentially representing norm collapse. Thus, the heuristic is actually responsible for a large part of the interesting dynamics seen in the new version of the model. This suggests that small initial advantages or disadvantages of certain agents at the beginning of runs with accumulating payoffs can have potentially large effects on the resulting population average. This in turn depends on the initial conditions of the system: the exact topology of the network and the initial strategies of agents in specific locations within the network. To test the sensitivity of the system to changes in these initial conditions, additional runs were executed. First, for each of the model configurations described in the Experimental Design section, a single fixed network was used in 100 runs, randomly perturbing only the initial strategies of the agents in specific nodes. Figure 12 shows the dispersion of the resulting population averages at the end of each run. Next, extreme initial conditions were tested as well. Each of the configurations was run from four different initial population settings:
• Avg. Boldness: 1/7. Avg. Vengefulness: 1/7.
• Avg. Boldness: 1/7. Avg. Vengefulness: 6/7.
• Avg. Boldness: 6/7. Avg. Vengefulness: 1/7.
• Avg. Boldness: 6/7. Avg. Vengefulness: 6/7.
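One simple way to realise the four extreme starting populations listed above (an assumption for illustration; the paper does not spell out its seeding procedure) is to give every agent the same corner strategy on Axelrod's 0/7 .. 7/7 scale:

```python
def seed_population(size, boldness, vengefulness):
    """Uniform population sitting at one corner of the strategy space."""
    return [{"boldness": boldness, "vengefulness": vengefulness}
            for _ in range(size)]

# the four extreme (avg. boldness, avg. vengefulness) settings from the text
CORNERS = [(1/7, 1/7), (1/7, 6/7), (6/7, 1/7), (6/7, 6/7)]
populations = [seed_population(1000, b, v) for b, v in CORNERS]
```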
4.11
The changes in the average population strategy from these initial conditions are shown in Figure 13. The resulting dynamics reveal that the system is in fact fairly "stiff": the initial conditions determine much of the final location of the population average in the strategy state space. Some cases show less dependence, such as runs with low levels of $$W$$ or populations with low initial vengefulness levels. Populations on scale-free networks, on the other hand, are much "stiffer" than those on the other network topologies.
Figure 10. Runs with payoffs resetting every 4 rounds. Average boldness over time (error lines showing one standard deviation). LEFT: Small-world networks at different witnessing probabilities. RIGHT: Different network topologies at witnessing probability $$p = 0.01$$.
Figure 11. Runs with payoffs resetting every 4 rounds. Average vengefulness over time (error lines showing one standard deviation). LEFT: Small-world networks at different witnessing probabilities. RIGHT: Different network topologies at witnessing probability $$p = 0.01$$.
Figure 12. Average population boldness and vengefulness levels at ends of individual runs. LEFT: Small-world network. CENTER: Random network. RIGHT: Scale-free network.
Figure 13. Change of average population boldness and vengefulness levels during runs from extreme initial conditions. Each point represents 1000 steps. LEFT: Small-world network. CENTER: Random network. RIGHT: Scale-free network.
4.12
When we turn our attention to the grid-based model the situation remains similar (see Figures 14–15). It should be kept in mind that the dynamics are somewhat affected by the smaller average neighborhood size. However, the decrease in boldness as well as vengefulness for large witnessing probabilities is still noticeable. The forces driving the decrease are the same as in the network model. The main difference lies in the behavior of the system under low witnessing probabilities. When $$W = 0.01$$ the expected payoffs of defection are now positive due to the smaller neighborhoods. However, just as in the network model, the harm from neighbors' defections still quickly exceeds the advantages of one's own defections, and once again any clusters of cooperators are allowed to spread. Once boldness falls to a certain level, the fraction of "free" enforcements increases, and thus some agents with higher vengefulness levels are allowed to survive. This explains the eventual slight increase in average vengefulness, as well as the growing variance, when $$W = 0.01$$. The same arguments hold for the population boldness dynamics when $$W = 0.001$$. What is most intriguing, however, are the initial decreases in vengefulness under the lower levels of $$W$$, which eventually taper off to zero when $$W = 0.001$$. Although defection rates are higher now that neighborhoods are smaller, which naturally raises the cost of vengefulness, this alone cannot explain the observed dynamics. Indeed, the qualitative difference between the temporal trends in vengefulness for the little boxes neighborhoods and for the other neighborhood types is obvious. Hence, some of the dynamics can only be explained by the chosen neighborhood topology.
Figure 14. Average boldness over time. LEFT: Little boxes at different witnessing probabilities. RIGHT: Different neighborhood types at witnessing probability $$p = 0.01$$.
Figure 15. Average vengefulness over time. LEFT: Little boxes at different witnessing probabilities. RIGHT: Different neighborhood types at witnessing probability $$p = 0.01$$.
4.13
Tracking average population values alone cannot provide full insight into the dynamics of the model. To this end, a large number of simulation runs were visualized to elucidate the system's state over different periods of time. Figures 16–17 show the changes in the spatial distribution of population vengefulness over a long period of time for two different neighborhood types. One can notice that under certain configurations, and barring mutation, the system can reach a state close to a true equilibrium (see Figure 16). When the glocalized neighborhood is employed, on the other hand, the model reaches a dynamic equilibrium in which the attributes of individual agents are in flux, yet on average the system retains the same qualitative aggregate state. The same types of dynamics were observed in simulation runs employing the networked individualism neighborhood types.
Figure 16. Vengefulness in the population after 1,000 steps (left) and 10,000 steps (right) with little boxes neighborhoods and $$W = 0.01$$. Showing a single representative run.
Figure 17. Vengefulness in the population after 1,000 steps (left) and 10,000 steps (right) with glocalized neighborhoods and $$W = 0.01$$. Showing a single representative run.
### Discussion
5.1
The model results show a number of interesting behaviors. First, the probability of an individual witnessing a defection plays an important role in the global behavior of the system: as the results have shown, even very small individual-level probabilities can discourage agents from defecting if they have enough neighbors. The mechanism for witnessing defections (and, perhaps more importantly, agents' expectations of being witnessed) prevented norm collapse in the system even without the use of metanorms. This is achieved by a sort of "distributed vigilance" mechanism: individual agents will rarely witness a defection, but because of sheer neighborhood size and the relatively low costs and low adaptive pressure on enforcement, defectors are still continuously policed. Moreover, due to the effect of neighborhood size and the variation in the spatial distribution of traits, boldness is effectively self-regulating: pockets of defectors hurt each other more than they gain from their individual defections, which allows pockets of cooperators to fortuitously expand throughout the environment. On the other hand, a higher probability of seeing an individual's defection does not necessarily ensure a lower global rate of boldness, mostly due to low adaptive pressure. The emergence of the norm is not guaranteed, however: in cases of low vigilance, the average vengefulness level in the population is a result of random drift. Furthermore, the heuristic by which agents adopt other agents' strategies based on their aggregate performance over the course of the entire run proved to be a major contributor to the resulting system states. If agents are unable to clearly judge the effects of strategies and simply copy agents who have had success in the past, norm collapse actually ceases to be the norm.
5.2
The model also demonstrated sensitivity to the different neighborhood types. The results showed qualitative differences in the overall trends of agents' adaptation across different neighborhood structures. The little boxes neighborhood type also showed promising population dynamics in terms of boldness and vengefulness at certain levels of vigilance. The first hypothesis of this paper stated that the glocalization and networked individualism types would lead to norm instability due to the fragmented and pluralistic nature of their communities, while the alternate hypothesis suggested that it is precisely because of this diversity that individuals would be able to better "sample" the fitness landscape and find their way to an optimal solution faster. The analysis of the simulation runs appears to offer more evidence for the first hypothesis. However, it is important to note that this comes at a price: there is always a trade-off between stability (norm emergence) and diversity, as shown in Figure 16.
### Conclusion
6.1
Previous modeling efforts have elicited conditions for norm emergence on networks either with the use of Axelrod's metanorms mechanisms (Galán et al. 2011; Mahmoud et al. 2012b, 2012c, 2013) or with entirely different mechanisms for norm spreading (Anghel et al. 2004). Conversely, some simulation models have been able to establish norms with imitation-based norm adoption mechanisms, yet without any network structure (Andrighetto et al. 2008) or for different categories of norms (Epstein 2001). The model analyzed here has shown the dynamics of norm emergence on network topologies for single-generation populations of adaptive agents.
6.2
Moreover, the agent-based model presented here showed that neighborhood structure and social network topology do in fact have an effect on the emergence of norms. Furthermore, by representing vigilance as a social phenomenon and by focusing on behavior adaptation rather than evolution, the simulation results showed the significance of neighborhood size, of vigilance itself, and, most importantly, of the interplay between these two factors.
6.3
The model was intentionally kept simple and abstract in order to provide a first glimpse into the effect of social topologies on the emergence of norms. There is certainly ample room for modification. For instance, it might be more realistic to introduce and calibrate a dynamic probability of witnessing defections for individuals, rather than use a static one throughout the entire simulation run. Then there is the question of how many witnesses it usually takes to prevent a defection. In the model a single witness sufficed, but this number might differ under certain circumstances. Perhaps if the model employed a weighted social network, agents could possess a weight threshold for witnesses that would discourage them from defecting. The resulting dynamics would also most certainly change had the payoffs been calibrated differently. Moreover, further modifications to the agents' scope of knowledge would certainly yield new interesting results. For example, how would the outcome change if agents knew their neighbors' vengefulness levels and could incorporate this knowledge into their defection decision? In the model described here agents will not defect if they think they will be seen, regardless of the likelihood of actually being punished upon being caught. One could argue that more intelligent agents would learn from past experience and tend to defect even in highly exposed situations if their neighbors are continually reluctant to enforce punishment. However, these modifications are beyond the scope of the current work, which simply sought to extend the original norms game on networks with defection decisions based on the witnessing abilities of the agents' direct neighbors.
6.4
Most importantly, it is necessary to continually test how realistic the topologies used to represent social communities are. The faithfulness of the small-world and scale-free network representations has recently been questioned (Shekatkar & Ambika 2014). The accuracy of Wellman's neighborhood typology is also in question, as it has never been thoroughly validated. Thus, it is important to further explore and re-evaluate our understanding of community structures in contemporary society, and to validate these claims with well-designed studies, before we can fully assess the validity of models of norm emergence such as the one presented in this paper.
### Acknowledgements
I would like to thank Claudio Cioffi-Revilla for his support and guidance. I would also like to thank Andrew Crooks for helpful advice and valuable feedback on drafts of the paper.
### Notes
1 This is similar to the police strategy of frequent motor patrolling of crime hot-spots. Studies have shown that just having enough police presence on its own leads to decreased crime rates in the affected areas (Sherman & Weisburd 1995).
2 The term adaptive pressure is used in a sense similar to "selection pressure" in evolutionary algorithms when changes in the agents' strategies result in a change in the distribution of payoffs.
### References
ALBERT, R., & Barabási, A.-L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 47–97. [doi:10.1103/RevModPhys.74.47]
ANDRIGHETTO, G., Campenni, M., Cecconi, F. & Conte, R. (2008). How agents find out norms: A simulation based model of norm innovation. NORMAS, Volume 8, 16–30.
ANGHEL, M., Toroczkai, Z., Bassler, K.E. & Korniss, G. (2004). Competition-driven network dynamics: Emergence of a scale-free leadership structure and collective efficiency. Physical Review Letters, 92(5), 0587011–0587014. [doi:10.1103/PhysRevLett.92.058701]
AXELROD, R. (1986). An Evolutionary Approach to Norms. The American Political Science Review, 80(4), 1095–1111. [doi:10.2307/1960858]
AXELROD, R. (1997). Advancing the Art of Simulation in the Social Sciences. In Conte R, Hegselmann R, Terna P, editors. Simulating Social Phenomena (Lecture Notes in Economics and Mathematical Systems 456). Berlin: Springer-Verlag. [doi:10.1007/978-3-662-03366-1_2]
AXTELL, R., Epstein, J., & Young, P. (2001). The emergence of classes in a multi-agent bargaining model. In S. Darlauf & P. Young (Eds.), Social Dynamics. Cambridge, MA: MIT Press.
BEHESHTI, R., & Sukthankar, G. (2014). A normative agent-based model for predicting smoking cessation trends. In Proceedings of the 2014 international conference on Autonomous agents and multi-agent systems (pp. 557–564). International Foundation for Autonomous Agents and Multiagent Systems.
BICCHIERI, C. (2006). The Grammar of Society. New York, NY: Cambridge University Press.
CASTELFRANCHI, C. & Conte, R. (1995). Cognitive and social action. London: UCL Press
CENTOLA, D., & Macy, M. (2007). Complex Contagions and the Weakness of Long Ties. American Journal of Sociology, Vol. 113, No. 3, 702–734. [doi:10.1086/521848]
DE MARCHI, S. (2005). Computational and mathematical modeling in the social sciences. Cambridge, UK: Cambridge University Press. [doi:10.1017/CBO9780511510588]
DUNBAR, R. (1992). Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22(6), 469–493. [doi:10.1016/0047-2484(92)90081-J]
DURKHEIM, E. ([1893] 1997). Division of Labour in Society. New York, NY: Free Press.
EDMONDS, B. & HALES D. (2003). Replication, replication and replication: Some hard lessons from model alignment. Journal of Artificial Societies and Social Simulation, 6 (4) 11 <https://www.jasss.org/6/4/11.html>.
EPSTEIN, J. M. (2001). Learning to Be Thoughtless: Social Norms and Individual Computation. Computational Economics, 18(1), 9–24. [doi:10.1023/A:1013810410243]
ERDÖS, P., & Rényi, A. (1959). On random graphs I. Publicationes Mathematicae, 6 (pp. 17–61).
FISCHER, C. S. (1975). Toward a Subcultural Theory of Urbanism. American Journal of Sociology, 80(6), 1319–1341. [doi:10.1086/225993]
FISCHER, C. S. (1984). The Urban Experience. New York: Harcourt.
GALÁN, J. M. & Izquierdo, L. R. (2005). Appearances can be deceiving: Lessons learned reimplementing Axelrod's Evolutionary Approach to Norms. Journal of Artificial Societies and Social Simulation, 8 (3) 2 <https://www.jasss.org/8/3/2.html>.
GALÁN, J. M., Łatek, M. M., & Rizi, S. M. M. (2011). Axelrod's Metanorm Games on Networks. PLoS ONE, 6(5), e20474. [doi:10.1371/journal.pone.0020474]
GRANOVETTER, M. S. (1973). The Strength of Weak Ties. The American Journal of Sociology, 78(6): 1360–1380. [doi:10.1086/225469]
HALES, D. (2002). Group Reputation Supports Beneficent Norms. Journal of Artificial Societies and Social Simulation, 5 (4) 4 <https://www.jasss.org/5/4/4.html>.
HEDSTRÖM, P. (2005). Dissecting the social: On the principles of analytical sociology. Cambridge, UK: Cambridge University Press. [doi:10.1017/CBO9780511488801]
KITTOCK, J.E. (1995). Emergent conventions and the structure of multi-agent systems. In: Nadel, L. & Stein, D.L. (Eds.), 1993 Lectures in Complex Systems. Boston, MA: Addison-Wesley.
LUKE, S., Cioffi-Revilla, C., Panait, L., Sullivan, K., & Balan, G. (2005). Mason: A multiagent simulation environment. Simulation, 81, 517–527. [doi:10.1177/0037549705058073]
MAHMOUD, S., Griffiths, N., Keppens, J., & Luck, M. (2012). Overcoming Omniscience for Norm Emergence in Axelrod's Metanorm Model. In S. Cranefield, M. B. van Riemsdijk, J. Vázquez-Salceda, & P. Noriega (Eds.), Coordination, Organizations, Institutions, and Norms in Agent System VII (pp. 186–202). Springer Berlin Heidelberg. [doi:10.1007/978-3-642-35545-5_11]
MAHMOUD, S., Griffiths, N., Keppens, J., & Luck, M. (2012b). Norm emergence: Overcoming hub effects in scale free networks. In S. Cranefield, M. B. van Riemsdijk, J. Vázquez-Salceda, & P. Noriega (Eds.), Coordination, Organizations, Institutions, and Norms in Agent System VII (pp. 136–150). Springer Berlin Heidelberg.
MAHMOUD, S., Griffiths, N., Keppens, J., & Luck, M. (2012c). Establishing norms for network topologies. In S. Cranefield, M. B. van Riemsdijk, J. Vázquez-Salceda, & P. Noriega (Eds.), Coordination, Organizations, Institutions, and Norms in Agent System VII (pp. 203–220). Springer Berlin Heidelberg. [doi:10.1007/978-3-642-35545-5_12]
MAHMOUD, S., Griffiths, N., Keppens, J., & Luck, M. (2013). Norm Emergence through Dynamic Policy Adaptation in Scale Free Networks. In S. Cranefield, M. B. van Riemsdijk, J. Vázquez-Salceda, & P. Noriega (Eds.), Coordination, Organizations, Institutions, and Norms in Agent System VIII (pp. 123–140). Springer Berlin Heidelberg. [doi:10.1007/978-3-642-37756-3_8]
MANHART, K., & Diekmann, A. (1989). Cooperation in 2- and N-Person Prisoner's Dilemma Games: A Simulation Study. Analyse & Kritik, 11(2), 134–153.
NEWMAN, M., Barabási, A.-L., & Watts, D. (2006). The Structure and Dynamics of Networks. Princeton, NJ: Princeton University Press.
PARSONS, T. (1964). The Social System. New York, NY: The Free Press/Macmillan.
RING, J. (2014). An Agent-Based Model of International Norm Diffusion. Retrieved from http://myweb.uiowa.edu/fboehmke/shambaugh2014/papers/Ring_Diffusion_of_Norms__ABM_Approach.pdf. Archived at http://www.webcitation.org/6WmguxHgN.
ROBERTS, S. C., & Lee, J. D. (2012). Using Agent-Based Modeling to Predict the Diffusion of Safe Teenage Driving Behavior Through an Online Social Network. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 56(1), 2271–2275. [doi:10.1177/1071181312561478]
SAVARIMUTHU, B. T. R., & Cranefield, S. (2009). A categorization of simulation works on norms. In G. Boella, P. Noriega, G. Pigozzi & H. Verhagen (Eds.), Normative Multi-Agent Systems. Dagstuhl, Germany: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik.
SAWYER, R. K. (2002). Durkheim's Dilemma: Toward a Sociology of Emergence. Sociological Theory, 20(2), 227–247. [doi:10.1111/1467-9558.00160]
SHEKATKAR, S. M., & Ambika, G. (2014). Mediated attachment as a mechanism for growth of complex networks. arXiv preprint arXiv:1410.1870.
SHERMAN, L. W., & Weisburd, D. (1995). General deterrent effects of police patrol in crime 'hot spots': A randomized, controlled trial. Justice Quarterly, 12(4), 625–648. [doi:10.1080/07418829500096221]
TÖNNIES, F. (1957). Community and Society. New York, NY: Courier Dover Publications.
WATTS, D. J., & Strogatz, S. H. (1998). Collective dynamics of "small-world" networks. Nature, 393(6684), 440–442. [doi:10.1038/30918]
WELLMAN, B. (1983). Network Analysis: Some Basic Principles. Sociological Theory 1:155–200. [doi:10.2307/202050]
WELLMAN, B. (2002). Little Boxes, Glocalization, and Networked Individualism. In M. Tanabe, P. van den Besselaar, & T. Ishida (Eds.), Digital Cities II: Computational and Sociological Approaches (pp. 10–25). Springer Berlin Heidelberg. [doi:10.1007/3-540-45636-8_2]
WILENSKY, U., & Rand, W. (2007). Making models match: Replicating an agent-based model. Journal of Artificial Societies and Social Simulation, 10(4), 2. <https://www.jasss.org/10/4/2.html>
WIRTH, L. (1938). Urbanism as a Way of Life. American Journal of Sociology, 44(1), 1–24. [doi:10.1086/217913]
https://www.physicsforums.com/threads/system-of-polynomial.756898/ | # System of polynomial
1. Jun 6, 2014
### Jhenrique
My question is hard to answer, and a partial answer is on Wikipedia, but maybe someone knows an article that has already approached this topic with an explicit answer. So, my question is:
given:
$A = x_1 + x_2$
$B = x_1 x_2$
invert the relationship:
$x_1 = \frac{A + \sqrt{A^2-4B}}{2}$
$x_2 = \frac{A - \sqrt{A^2-4B}}{2}$
So, given
$A = x_1 + x_2 + x_3$
$B = x_2 x_3 + x_3 x_1 + x_1 x_2$
$C = x_1 x_2 x_3$
and:
$A = x_1 + x_2 + x_3 + x_4$
$B = x_1 x_2 + x_1 x_3 + x_1 x_4 + x_2 x_3 + x_2 x_4 + x_3 x_4$
$C = x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4$
$D = x_1 x_2 x_3 x_4$
thus which would be the inverse relationship for those two systems above?
2. Jun 6, 2014
### The_Duck
Write
$(x - x_1)(x - x_2)(x - x_3) = x^3 - Ax^2 + Bx - C = 0$
Then use the general solution to the cubic equation to solve for the roots, which are $x_1, x_2, x_3$.
You can similarly turn your four-variable case into a quartic equation.
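This recipe is easy to check numerically. Below is a minimal sketch (an illustration with arbitrary sample roots, assuming NumPy is available, not part of the original thread): it forms the elementary symmetric polynomials from known roots and then recovers the roots with numpy.roots.

```python
import numpy as np

# Pick sample roots and form the elementary symmetric polynomials.
x1, x2, x3 = 1.0, 2.0, 3.0
A = x1 + x2 + x3              # = 6
B = x1*x2 + x2*x3 + x3*x1     # = 11
C = x1*x2*x3                  # = 6

# Invert the relationship: x1, x2, x3 are the roots of
# x^3 - A x^2 + B x - C = 0 (coefficients in descending powers of x).
roots = np.sort(np.roots([1.0, -A, B, -C]).real)
print(roots)  # approximately [1, 2, 3]
```

The four-variable case works the same way with the quartic $x^4 - Ax^3 + Bx^2 - Cx + D = 0$, i.e. np.roots([1, -A, B, -C, D]).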
3. Jun 6, 2014
### bhillyard
I think what you are looking for is
'symmetric functions' of the roots of a polynomial equation.
If the roots of a cubic are $x_1, x_2, x_3$ then the equation is:
$(x - x_1)(x - x_2)(x - x_3) = 0$
Multiply out the brackets and gather together the terms in $x^3$, $x^2$, $x$ and the constant.
You will find your expressions $x_1 + x_2 + x_3$, $x_1 x_2 + x_2 x_3 + x_3 x_1$, $x_1 x_2 x_3$
appearing as the coefficients of the powers of $x$.
This pattern continues with the roots of quadratics, quartics, etc.
https://atmoschem.github.io/vein/reference/ef_hdv_scaled.html | ef_hdv_scaled creates a list of scaled emission-factor functions. A scaled emission factor is one which, at the speed of the driving cycle (SDC), gives a desired value. This function needs a data frame of local emission factors with a column named "Euro_HDV" indicating the Euro equivalence standard, assuming that local emission factors are available for several consecutive years.
ef_hdv_scaled(df, dfcol, SDC = 34.12, v, t, g, eu, gr = 0, l = 0.5, p)
## Arguments

df: deprecated
dfcol: column of the data frame with the local emission factors, e.g. df$dfcol
SDC: speed of the driving cycle
v: category of vehicle: "Coach", "Trucks" or "Ubus"
t: sub-category of vehicle: "3Axes", "Artic", "Midi", "RT", "Std" or "TT"
g: gross weight of each category: "<=18", ">18", "<=15", ">15 & <=18", "<=7.5", ">7.5 & <=12", ">12 & <=14", ">14 & <=20", ">20 & <=26", ">26 & <=28", ">28 & <=32", ">32", ">20 & <=28", ">28 & <=34", ">34 & <=40", ">40 & <=50" or ">50 & <=60"
eu: Euro emission standard: "PRE", "I", "II", "III", "IV" or "V"
gr: gradient or slope of road: -0.06, -0.04, -0.02, 0.00, 0.02, 0.04 or 0.06
l: load of the vehicle: 0.0, 0.5 or 1.0
p: pollutant: "CO", "FC", "NOx" or "HC"

## Value

A list of scaled emission factors in g/km.

## Note

The length of the list should equal the number of age categories of a specific vehicle type.

## Examples

if (FALSE) { # Do not run
data(fe2015)
co1 <- fe2015[fe2015$Pollutant == "CO", ]
lef <- ef_hdv_scaled(dfcol = co1$LT, v = "Trucks", t = "RT", g = "<=7.5", eu = co1$Euro_HDV, gr = 0, l = 0.5, p = "CO")
length(lef)
plot(x = 0:150, y = lef[[36]](0:150), col = "red", type = "b", ylab = "[g/km]",
pch = 16, xlab = "[km/h]",
main = "Variation of emissions with speed of oldest vehicle")
plot(x = 0:150, y = lef[[1]](0:150), col = "blue", type = "b", ylab = "[g/km]",
pch = 16, xlab = "[km/h]",
main = "Variation of emissions with speed of newest vehicle")
}
http://www.wall.org/~aron/blog/page/2/ | ## Chalcedon
A reader writes in with the following questions concerning the Incarnation:
1. Since Jesus was a Jew (born of a Jewish mother, Mary), and Jesus Christ is one of the 3 persons of the Trinity, is God a Jew? But God is spirit. I am not sure if there is a difference between the Son (before creation) and the Son who later took human form in a Jew named Jesus. In other words, the Son was eternal (part of the Trinity) but Jesus was not. The Son – who is neither male nor female but spirit, being God – became flesh, a Jewish rabbi.
2. In Matthew 24:36 Jesus says: “But of that day and hour no one knows, not even the angels of heaven, but My Father only.” Why wouldn’t the Son know if he’s an equal person in the Trinity? Again, I’m tempted to think this is Jesus the man who is speaking, and not God the Son (who must know when he would return, or else is not omniscient).
I do realise the Trinity is ultimately a mystery in the Christian faith, but I'd like to hear what you think about these two questions. Aside from these questions, I have no problem with the Christian understanding of a personal God in the life of Jesus.
You're in good company. This is actually the exact question of Christology which started being controversial in the 400s, the century following the adoption of the Nicene Creed by the Councils of Nicaea and Constantinople (known as the 1st and 2nd Ecumenical Councils). Once it was settled that Christ is God (contrary to the heretical teaching of Arius), the next question to work out was the relationship between the divine and the human in Christ.
Catholic, Orthodox, and Protestant Christians are all agreed about the answer, which is that Christ has two different natures (divine and human) but these natures are united in one person and one being, the Christ.
The controversy started as a result of Nestorius, who claimed that there were two separate persons in Christ, a divine person united to a human person. He was unwilling to say that Mary was the Theotokos (God-bearer) but preferred the term Christotokos (Christ-bearer). This was condemned as heretical by the Third Ecumenical Council of Ephesus. Nestorius went over to the Assyrian Church of the East (which exists to this day and is neither Catholic nor Protestant nor Orthodox).
Then later there were the Monophysites/Miaphysites who said that Christ had just one nature, which was both human and divine. This was condemned at the Fourth Ecumenical Council of Chalcedon [pronounced with a hard "Ch", like "Christ"] in the year 451 AD. However, the Coptic Church and others didn't agree with this, and so there was a schism between them and the (not-yet-divided) Catholic/Orthodox church, which exists down to the present day.
In retrospect, it is not so clear that these other groups were quite so heretical as they were made out to be. Unlike the controversy with the Arians, it is a bit hard to be sure when the two groups actually disagree, and when they were just using different language for the same thing. But I believe that the Chalcedonian language is, at the very least, the most clear and accurate way to describe the union of the divine and human natures in Christ.
The complete Chalcedonian Formula is as follows:
"Therefore, following the holy fathers, we all with one accord teach men to acknowledge one and the same Son, our Lord Jesus Christ, at once complete in Godhead and complete in manhood, truly God and truly man, consisting also of a reasonable soul and body; of one substance with the Father as regards his Godhead, and at the same time of one substance with us as regards his manhood; like us in all respects, apart from sin; as regards his Godhead, begotten of the Father before the ages, but yet as regards his manhood begotten, for us men and for our salvation, of Mary the Virgin, the God-bearer [Theotokos]; one and the same Christ, Son, Lord, Only-begotten, recognized in two natures, without confusion, without change, without division, without separation; the distinction of natures being in no way annulled by the union, but rather the characteristics of each nature being preserved and coming together to form one person and subsistence, not as parted or separated into two persons, but one and the same Son and Only-begotten God the Word, Lord Jesus Christ; even as the prophets from earliest times spoke of him, and our Lord Jesus Christ himself taught us, and the creed of the fathers has handed down to us."
Applying this to your question, we see that Jesus possessed BOTH the attributes of divinity (eternal, omniscient, sexless, etc.) and the attributes of (a particular) human nature (Jewish, born of Mary, male, limited, etc.), but without sin, having a complete human body and soul. Since the divine nature is eternal, immutable and cannot change, we cannot say that God was transformed into a human being, but must instead say that he assumed or took on human flesh. "Not by conversion of the Godhead into flesh; but by assumption of the Manhood by God", says the (so-called) Athanasian Creed.
However, there is just one person—the divine Son of God, Second Person of the Trinity—who is and does both of these sets of things. (Without this, the Atonement wouldn't work, because in order to be saved we need for God to have fully shared in our human afflictions.) As a result, it is also correct to say that God was Jewish, and that he suffered and died on the Cross, or that Mary was the Mother of God, or (going in the other direction) that the human being Jesus pre-existed, so long as we remember that we are speaking of the experiences of the united person who has both natures, and not attributing properties of one nature to the other nature. This way of speaking is called communicatio idiomatum, i.e. the "communication of attributes", and you can find articles by both Catholics and Protestants online, explaining it.
Your second question was about Christ's knowledge. The divine nature of God the Son is omniscient and eternal, and therefore the divine nature of Christ must know when he will return. However, his human nature started out ignorant and could learn things, as we know from Luke 2:52: "Jesus grew in wisdom and stature", and also from Hebrews, when it says that Jesus was made like us in every way (sin excepted). But we cannot divide the human and divine natures, so it is also true that "God the Son", the divine person, experienced what it is like to possess human ignorance.
Thus God the Son could both know and not know the same thing at the same time. How is this possible? I think it helps to remember that any time we know something, we know it in a particular way. For example, you can know something intuitively but not logically, or vice versa, or both ways simultaneously. The divine nature knows things by being the perfect being, the human nature knows things by forming neural connections in the brain that somehow represent or imitate the behavior of the things we know. Jesus did both.
One analogy I find a bit useful is that of roleplaying where you pretend to be a character in a fictional universe. It is possible for a situation to arise when the person playing the game knows something the character doesn't know. But now imagine that the universe the character lives in is real, not pretend and that you experience fully everything the character experiences (including what it feels like to be ignorant). That would be a little like what happened with the Incarnation, I guess.
The Trinity and the Incarnation are great mysteries, so our language and analogies must necessarily break down in certain ways. But that doesn't mean we can't make an effort to make our language as un-misleading as possible.
Posted in History, Theology | 12 Comments
## Did the Universe Begin? X: Recapitulation
We have now come to the end of my series about whether or not the universe had a beginning. This is part of a longer series dissecting the debate between St. William Lane Craig and Sean Carroll. I started out with some general reflections on the debate:
Then I started talking specifically about possible evidence from physics for and against the universe having a beginning. For ease of understanding I'm going to label each main new argument with FOR or AGAINST to define its main orientation, but the posts also deal with the various counterarguments (that's the tire swing going back and forth above...). I've provided an executive summary of each of these posts, so that you can easily see the main thrust of what I said. Minus all the caveats, hedging, and detailed explanations my scientific training tends to encourage.
(I've heard that politicians hate talking to scientists because, like the Elves in Tolkien, we seldom give a straight answer to a question. In scientific cultures, we show "sincerity" by discussing all the problems and caveats with our ideas, whereas in political circles this sounds like insincere waffling designed to please too many people...)
Did the Universe Begin? I: Big Bang Cosmology (FOR, as far as it goes...)
- the classical Big Bang Model predicts an initial singularity where time began
- tentative because quantum effects were important and invalidate our usual geometrical notions
- also tentative because we don't really know how inflation began
Did the Universe Begin? II: Singularity Theorems (FOR)
- classical General Relativity theorems by Hawking and Penrose
- assumptions of Hawking theorem invalid during inflationary epoch
- Penrose theorem says that if space is infinite, there was a beginning
- Penrose theorem invalid in quantum situations, but my work suggests that it might be extendable to quantum gravity, if horizons always obey the 2nd law of thermodynamics.
Did the Universe Begin? III: BGV Theorem (FOR)
- if the universe has a positive average expansion, then "nearly all" geodesics cannot be extended infinitely to the past
- implies that inflation had to have a beginning in time, at least in some places
- can evade theorem by a "bouncing" cosmology where the universe contracts and then expands
Did the Universe Begin? IV: Quantum Eternity Theorem (AGAINST)
- if the usual rules of QM hold at all times, you can calculate what the state would be at any time to the past or future.
- in realistic cosmologies the energy is probably either zero or undefined, making the theorem inapplicable.
Did the Universe Begin? V: The Ordinary Second Law (FOR)
- given reasonable assumptions, 2nd law of thermodynamics requires a beginning
- most plausible way to evade this is to postulate that the "arrow of time" reverses
- such models would have a "thermodynamic beginning" but no "geometrical beginning"
Did the Universe Begin? VI: The Generalized Second Law (FOR)
- second law of thermodynamics also seems to apply to cosmological horizons
- can be used like ordinary 2nd law to argue for beginning
- can also be used as singularity theorem (see II above)
- this closes certain loopholes, but if the universe is finite and the arrow of time reverses, a bounce may still be possible.
Did the Universe Begin? VII: More about Zero Energy
- a more technical explanation of why the energy of the universe can be zero
Did the Universe Begin? VIII: The No Boundary Proposal (AGAINST/FOR)
- a beautiful set of speculative ideas which unify the "laws of physics" with the "initial conditions", by providing a rule for what the state of the universe is.
- contrary to popular conceptions, the Hartle-Hawking proposal has no beginning in time
- the Vilenkin tunnelling proposal is similar in spirit but does have a beginning.
- unclear whether these proposals are well defined, and Hartle-Hawking appears to give wrong predictions.
Did the Universe Begin? IX: More about Imaginary Time
- a more technical explanation about the notion of imaginary time used by Hartle-Hawking
If you put all of the physics information together, the conclusion I would draw is that: We don't know for sure whether the Universe began, but to the extent that our present-day knowledge is an indicator, it probably did. However, as Carroll correctly says, we can also construct models where it doesn't have a beginning. Taking into account known results from geometry and thermodynamics, the most plausible such models are 1) spatially finite, and 2) have a reversal of the arrow of time (e.g. the Aguirre-Gratton model).
I also noted that models like AG still have a low entropy "initial condition" somewhere in the middle of time. One might think that this type of "thermodynamic beginning" still calls out for some type of explanation.
Then I wrote a more theologically-oriented post about whether the Hartle-Hawking no boundary proposal leaves any room for God to have created the universe:
Fuzzing into Existence
- short answer: yes, if you think of God as a storyteller, not a mechanic.
I also discussed the possibility of Reparameterizing Time; is it even meaningful to ask whether time is infinite or finite when you can change coordinate systems? In this post I also argued that the main theological question of whether the universe needs an explanation seems to me much the same whether the universe has finite or infinite time.
Now, let me make another observation about the tire swing. Although the weight of the evidence is that the universe probably had some sort of beginning—and even more likely that there was some sort of low entropy "initial condition" even if geometrically time stretches past before that—this cannot be said to be certain. There is always the possibility that new scientific data or methods could radically change our picture of the very, very early universe. Similarly, while a finite past seems more in accordance with traditional Christian theology than an infinite past, there appears to be no strictly logical connection between the two ideas, once the act of Creation is viewed in a more timeless, "authorial" way. Thus one might conceivably have a theist who thinks time is infinite, or an atheist who thinks time was finite.
Should the argument for God's existence really rest on such a slender foundation as the ultimate decision of physicists about Big Bang Cosmology? Well, one thing is clear. In ages past it didn't depend on it. Obviously, Sts. Abraham and Sarah, David and Solomon, the prophets and apostles, and all the men and women who followed in their footsteps up through the 19th century, including eminent scientists such as St. Faraday and St. Maxwell: these cannot have believed in God because of the Big Bang Theory, because—guess what?—nobody knew about it yet! What does the Bible say about these people?
Now faith is confidence in what we hope for and assurance about what we do not see. This is what the ancients were commended for. By faith we understand that the universe was formed at the word of God, so that what is seen was not made out of what was visible. (Hebrews 11:1-3)
Our belief that God is the Creator does not depend on the vicissitudes of scientific progress, the swinging back and forth of the tire swing (or is it accelerating?). It doesn't matter, because in this case we have a more certain source of knowledge than Science.
By faith! The skeptic may scoff here, and say that faith is belief without evidence, but that is not the definition used in the passage above. It says that faith is confidence about what we hope for, but do not see. Unless we identify sight (conceived broadly as anything which can be directly experienced in terms of our 5+ senses) with evidence (things which allow us to conclude something about the world)—an identification which would incidentally also make Science impossible—the passage does not say that the ancients were commended for believing without evidence. But the example of the biblical heroes does give some pointers about what type of evidence was relevant to them.
The ancients did not believe that God was the Creator because they had a detailed scientific theory about where it comes from. (Indeed, if we take our minds off Genesis for a moment and read the Wisdom literature of the Bible: Job and Psalms and Ecclesiastes and Proverbs, the Scriptures seem to emphasize more our lack of knowledge about the details of creation, then any detailed programme of events...) On the contrary, the ancient Jews and Christians knew God, by personal acquaintance as it were, and therefore knew him to be creative and powerful, mighty in word and deed. Thus they could take him at his word that he is the Creator of all that we see.
The glory of Creation does indeed point to the glory of the Creator, so that it is possible for ordinary human reasoners to come to know that there is a Creator intellectually. But this sort of Theism, by itself, isn't what Christians mean by faith. Once we come to know God personally, we learn the more important fact that we can trust him, and know with confidence that there is nothing in existence which does not depend on him.
And therefore, although we see in this world visible things emerging from other visible, material things, we know that ultimately their origin comes from "God’s invisible qualities—his eternal power and divine nature" (Rom 1:20). He created everything through his Word, Jesus Christ, from whom we have come to know what God is like. This way of knowing does not seem to depend very strongly on the details of past, present, or future scientific knowledge.
One could definitely argue that the Bible teaches that there was a Beginning (whatever this means from God's perspective). For example, the quotation above from Hebrews speaks of the formation of the visible universe. But whether or not this fact has been revealed by God, it is not obvious to me that the most important theological aspects of Creation really depend essentially on time being finite, or even well-defined. (Admittedly, if you believe that time is infinite, it might be easier to slip into a false notion whereby matter exists independently of God, who is merely the Chief Organizer of the cosmos. That would be a heresy—a false belief which may seriously obstruct your ability to relate to God or others properly—but it does not follow necessarily from time being infinite.)
The main point of the doctrine of Creation, I think, is that God is real, and that everything else is derived from his power and will. We know this doctrine is true because we know God. Not because of the Big Bang, as natural as it is to connect the two ideas.
Posted in Physics, Reviews, Theological Method, Theology | 15 Comments
## Different Views about Time
Updated to ask readers more directly for their thoughts, if you have any...
A random thought. Suppose we ask whether the world has a Beginning or an End, or whether it is eternal in one or both directions. It seems like there are 5 possible views, which I will name by association with various cultural groups who supposedly have held these views:
1. Norse view: the world began, and it will end.
2. Greek view: time is infinite in both directions
3. Hindu view: time goes in a circle
4. Hebrew view: the world began, but it will never end.
5. Nobody ever: time had no beginning, but it will still end!
I find it interesting that the first four views all have some intuitive appeal, to different people, but the fifth view just seems horribly wrong and perverse! Why do you suppose that is?
My best guess is that there is a certain obvious symmetry to treating the past and future in the same way, which makes views (1-3) seem reasonable. And there is also an argument that the past is not like the future, but if so it had better be like (4) rather than like (5)! I guess we all know deep down (it's really the Second Law of Thermodynamics) that it makes sense for the universe to start from a simple initial condition and then develop complexities from there. But if we have to deal with infinite regresses AND we don't even get an eternal universe out of it, that seems a bridge too far... but if anyone has any further thoughts on this, I'd be interested.
My cultural names are a brutal oversimplification, and you shouldn't take my assigning these views to different cultures too literally. For one thing, there were lots of different Greeks and there are lots of different Hindus who believe all sorts of different things. For another, there is a conceptual difference between the world—in the sense of an ordered cosmology with a history—beginning, and time (a much more abstract notion) having a beginning. It takes a certain amount of intellectual sophistication to think about the latter question.
Norse mythology begins with fire and ice swirling around a bottomless pit for aeons; it is only later that a bit of fire strikes a bit of ice and spontaneously generates a giant and a cow, from whom later the jotun and gods emerge by various removes. (As you can see, the Norse were ultimately Materialists even about their so-called divinities.) At the end, the cruel jotun defeat the merry gods and the world is destroyed, plunging back into chaos. So it's not really clear that time has a beginning or end, just that the story has a beginning and an end.
The Hebrews had the notion of divine Creation in Genesis 1:1 and elsewhere, but it is controversial whether Genesis 1:1 actually teaches the creation ex nihilo of later theology. St. Augustine is usually credited with the idea that there was not even time before creation, but in fact Philo, a 1st century Hellenistic Jew, got there first.
Similarly, our current best "concordance cosmology" appears to begin with an initial singularity, but has no end in time. (Well, really we should talk about spacetime, which allows time to end in some places, e.g. inside black holes, but not others.) This appears at first sight to be like the Hebrew view. At late times the universe expands exponentially forever, thinning matter out to a very cold but finite temperature. This is in accordance with the Generalized Second Law of Thermodynamics, which tells us that the universe will reach a boring maximum entropy state at late times. Thus, the story ends at finite time, and we really have the heroic defiance against inevitable destruction, as in the Norse view.
Even in Hebrew cosmology, there is that little matter of the whole universe being destroyed and then recreated again:
“See, I will create
new heavens and a new earth.
The former things will not be remembered,
nor will they come to mind.
But be glad and rejoice forever
in what I will create,
for I will create Jerusalem to be a delight
and its people a joy.
I will rejoice over Jerusalem
and take delight in my people;
the sound of weeping and of crying
will be heard in it no more.
Never again will there be in it
an infant who lives but a few days,
or an old man who does not live out his years;
the one who dies at a hundred
will be thought a mere child;
the one who fails to reach a hundred
will be considered accursed.
They will build houses and dwell in them;
they will plant vineyards and eat their fruit.
No longer will they build houses and others live in them,
or plant and others eat.
For as the days of a tree,
so will be the days of my people;
my chosen ones will long enjoy
the work of their hands.
They will not labor in vain,
nor will they bear children doomed to misfortune;
for they will be a people blessed by the Lord,
they and their descendants with them.
Before they call I will answer;
while they are still speaking I will hear.
The wolf and the lamb will feed together,
and the lion will eat straw like the ox,
and dust will be the serpent’s food.
They will neither harm nor destroy
on all my holy mountain,”
says the Lord. (Isaiah 65:17-25)
It's really this "new heavens and new earth" that will last forever. Christianity is about Death and Resurrection, both for the universe and for each person. Science can get us as far as the doomed-to-die bit, but it can't get us any farther. That is Law, the rest is Grace, revealed in Jesus Christ.
Posted in Scientific Method, Theological Method | 3 Comments
## Reparameterizing Time
In recent posts I've been discussing whether the universe began or not.
Perhaps the most important issue which I have not yet discussed is the idea (first pointed out, I think, by St. Edward Arthur Milne, and independently by St. Charles Misner) that it may not be well-defined whether time has a beginning or not. That is, suppose you have a model in which there is a time coordinate $t$, and time has a beginning in the sense that the only allowed times are $t > 0$. Well, in General Relativity we are free to use whatever time coordinate we like, and nothing stops us from defining a new time coordinate in terms of the old one, let's say $\tau = \log(t)$. If you look at a plot of the log function, you'll see that $\tau$ ranges from $-\infty$ to $+\infty$.
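A tiny numerical sketch of this reparameterization (purely illustrative): the half-line $t > 0$ covers the whole real line in the $\tau$ coordinate, so whether "time has a beginning" depends on which coordinate you use.

```python
import math

# tau = log(t) maps the half-line t > 0 onto the whole real line,
# so "time has a beginning" in t but not in tau.
times = [1e-9, 1e-3, 1.0, 1e3]
taus = [math.log(t) for t in times]

# tau is strictly increasing, and unbounded below as t -> 0+
assert all(a < b for a, b in zip(taus, taus[1:]))
assert math.log(1e-9) < -20   # a tiny t maps to a large negative tau
```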
However, this type of time reparameterization may not be very physical once you get down to the Planck time, about $10^{-43}$ seconds, when quantum gravity effects become important. Times less than that might not be well-defined. In any case, the Misner argument suggests that we need to be more careful to define what we mean by time having a beginning.
Similarly, atheist philosopher Quentin Smith has argued that the standard Big Bang Model is inconsistent with divine creation, due to it not really having a beginning, even though the past is finite. Smith argues that because the time $t = 0$ is singular, technically it shouldn't be included in the spacetime, so actually only times with $t > 0$ exist. That means that there is no initial moment of creation, and therefore, he claims, God cannot have created the universe.
This is somewhat reminiscent of Hawking's claim that the no boundary proposal doesn't have the right sort of beginning, and it seems to me that my Fuzzing into Existence post is also applicable. If God is like an author, then he can make a story in which time works in whatever way he pleases.
According to Smith, each time $t$ exists because the preceding times exist, and indeed the laws of physics hold at a given time $t$ (according to him) because they hold at earlier times. Since each moment of time is fully explained by those before, he claims that the universe is therefore self-caused and therefore fully explained, with no more explanation possible. (Of course, if time is continuous, then we could make a similar infinite regress of times going back closer and closer to any finite time $t$. Smith has to struggle a bit to explain why his argument doesn't apply there...)
Now to me, this seems like the sort of explanation which is really no explanation at all. A satisfying worldview should explain as much as possible with as few assumptions as possible. If the laws of physics have some property $X$ (e.g. having an electron field, or whatever) now because they were like $X$ a minute ago, and so on all the way back arbitrarily close to the beginning, that doesn't in any way satisfy my curiosity about why they are like $X$ instead of some other way $Y$ (say, having no charged particles). For if they had been $Y$ for all time, I could have made the same argument. So it seems that there is a potentially meaningful question "Why are the laws of physics like $X$ rather than like $Y$", which Smith's statements do not really explain. Maybe there is no explanation, and we have to take $X$ being the way it is as a fundamental fact. But to say that there could not possibly be an explanation seems rather dogmatic.
And if God exists, then he can explain this fact. God's will chooses what the laws of physics will be for all time. So he can choose for the universe to be like $X$ instead of like $Y$. This would be the fundamental explanation. Whether or not it is a useful explanation for us as human beings, would depend on whether our puny minds can identify the actual reasons why God might prefer $X$ over $Y$.
The Kalam argument has some intuitive appeal if you think that the universe could not have begun without some causal reason. Evaluating this claim requires an analysis of what causation is, and why one would think in various situations that a cause is necessary. But the first preliminary question is whether there are any facts to be explained by the putative cause. It seems to me that there are.
All of the same reasoning about $X$ and $Y$ would also apply if time stretches back to $-\infty$. There would still be various timeless facts about the universe which would not really be explained by the infinite regress. This suggests that the Kalam argument may be misguided to the extent that it attempts to prove God from a temporal beginning a finite time in the past. The most important issues are the same whether time goes back finitely or infinitely.
But having said all this, it does seem a little bit weirder that the universe should exist for a finite amount of time with no external explanation, than that it should exist for an infinite time with no explanation. Historically, many materialists (such as Lucretius) have believed that time is infinite, due to their belief that it is impossible for something to come from nothing. Conversely, monotheists have mostly believed that the universe has a beginning, either for philosophical reasons or because the Bible says so. (St. Thomas Aquinas argued that God could have created an infinite past, but that divine revelation tells us he didn't.) To that extent, Big Bang cosmology appears to vindicate the standard religious view over the standard nonreligious one.
(Of course, the same cannot be said if—unlike St. Thomas or St. Augustine—one also takes the 6 day creation about 6,000 years ago literally. Some fundamentalists have argued that this problem can be solved by reparameterizing our coordinate system, but that just seems silly to me. Also, the days are not in the right order to correspond to the scientific chronology.)
But a Theist could believe that God created time going back infinitely, without contradicting themselves, so long as they are prepared to be flexible about what "creation" means. Similarly, an Atheist could believe that the universe just started existing 13.8 billion years ago for no reason, without contradicting themselves, so long as they are prepared to be flexible when deciding when explanations are called for. All four views are logically consistent; the real question is which viewpoint explains the most with the least.
Posted in Physics, Reviews, Theology | 8 Comments
## Portuguese translations
I'm pleased to announce that some of my blog posts have been translated into Brazilian Portuguese. St. Felipe of Olhar Unificado (added to blogroll) has translated the following posts:
Castidade: Não é apenas para religiosos
Deus das Lacunas
O Teorema da Eternidade Quântica
Um Universo do Nada?
If you can't tell which posts of mine these are translations of, then this new feature might not be for you! Felipe also has a bunch of his own writing on Science and Religion on Olhar Unificado, but I can't say much about it, other than that it appears to be composed of words relevant to the topic. There are, however, pictures.
Regarding the post on chastity, I don't know how it reads to someone who actually understands Portuguese, but just looking at the words, my talk about sex and romance seems ever so much more impressive in a Romance language!
As Felipe comes out with more translations, I will add them to the list on this post. Thanks so much to Felipe for his endeavors.
Posted in Blog | 1 Comment
## Moving to the Institute for Advanced Study
In case you are wondering why I haven't been posting very much recently.... My postdoc at UC Santa Barbara ends this month, and on Sept 1st I will be starting an exciting new postdoc at the Institute for Advanced Study near Princeton, as announced earlier. I'm looking forward to some great conversations with people at the IAS and at Princeton.
So I've been rather tied up with finishing projects and packing books, not to mention two physics workshops I went to at McGill and at U British Columbia. If there's ever a time when one is tempted to forswear possessions altogether, it's when one is trying to clear out an apartment for a move. Maybe we forget sometimes how practical the advice of Jesus is, to store up our treasures in Heaven rather than on Earth.
Really, it's kind of surprising I posted even once this month. Hopefully things will be more peaceful once I arrive at Princeton, at least before all my stuff catches up with me!
Posted in Blog | 3 Comments
## Slightly Less Random Links
Those who have been following the debate between St. Craig and Carroll, or my own recent posts about it, might also be interested in the viewpoints contained in the following articles:
Carroll on laws and causation (St. Feser)
Cosmology and Theology (Stanford Encyclopedia of Philosophy)
Just to make things a bit more random, about a year ago my brother (St. Lewis) wrote a blog post about data loss and death. This post was very difficult for me to read. Death is not just a big problem but also a little problem, and that is why even children instinctively know about it. Fortunately the Lord is greater than death, and will wipe away every tear from our eyes (Rev. 21:4).
I am reminded also of the words of the Lord, when he appeared to my best friend St. Yoaav three times in a dream, saying "I am a God of little things, and little things are preserved in me." (cf. Zech. 4:10, written when the Jewish Temple was being rebuilt by Zerubbabel in 516 BC.) This occurred at a time when he was getting serious about relating to God personally through prayer, but before his conversion to Christianity (from Judaism) the following year.
But we shouldn't get so distracted by our own little tragedies that we forget that we're destroying our own planet. Some Christians think that because Jesus is going to come back soon, we don't need to take responsibility for the environment, because "people are more important". They must not have read the passage of Scripture where Jesus says that "you do not know the day or the hour" (Matt. 25:13), or the place where it says that God will "destroy those who destroy the earth" (Rev. 11:18). If the Master is a long time coming back, then how are we going to take care of all these people once our natural resources are all shot? And when he does come back, he will judge how well we have fulfilled our responsibilities, one of which is to take care of God's creation as stewards.
True, God will restore all things in the end. But that doesn't mean we won't be held responsible. If you suffocate somebody in their sleep to get their money and then—surprise!—10 minutes later Jesus comes back and all the dead are raised and it's the Final Judgment, do you really think you won't be regarded as a murderer because, after all, you only deprived your victim of a few moments of relaxation? I don't think so. It will be the same if we are "saved by the bell" from the consequences of our own foolish decisions. But we must also prepare for the possibility that we are here for the long haul, in which case the problem becomes all the more urgent. Lord have mercy!
## Fuzzing into existence
In the last couple of posts, I've discussed the Hartle-Hawking proposal and the math behind it. Now let's discuss the theological implications.
In his Brief History of Time (written 1988; I'm just going to be engaging with this book and not with any of his more recent pronouncements), Hawking has the following famous saying about the Hartle-Hawking state:
The idea that space and time may form a closed surface without boundary also has profound implications for the role of God in the affairs of the universe. With the success of scientific theories in describing events, most people [!] have come to believe that God allows the universe to evolve according to a set of laws and does not intervene in the universe to break these laws. However, the laws do not tell us what the universe should have looked like when it started—it would still be up to God to wind up the clockwork, and choose how to start it off. So long as the universe had a beginning, we could suppose it had a creator. But if the universe is really completely self-contained, having no boundary or edge, it would have neither a beginning nor end: it would simply be. What place, then, for a creator?
The first question to ask here is who counts as "most people"?
The majority of people in the world believe in some type of God or gods capable of supernatural intervention. Even in the Western world, the majority of people believe in God (as Hawking indicates), and the majority of those believe in a religion called Christianity which teaches that God does produce miracles from time to time.
If Hawking means the English or the Europeans, then admittedly there has been a marked decline in religious faith in Europe (much less so in the US) and many "Christians" there have a merely nominal or cultural affiliation. But belief in miracles is still far from nonexistent.
In any case, I am obviously not the target demographic, since I believe that God has done some remarkable things since that moment, perhaps 13.8 billion years ago, when he set the ball rolling. Or was there such a moment?
Hawking suggests that (if his model is correct) there was no such moment of creation. Not, according to him, because the universe goes infinitely far back in time—he says that it doesn't. Rather, because the geometry of spacetime is rounded off like a sphere, so that there is no special beginning point, but rather a whole region of points none of which would be any better or worse as a beginning. As he says:
The universe would be completely self-contained and not affected by anything outside of itself. It would just BE.
Now this only works if you go to imaginary time to describe the universe. With respect to real time, the Hartle-Hawking state does go back forever in time (with high probability). So if real time is what is important, then what Hawking says about the absence of a beginning is still true, although for a different reason.
If the Hartle-Hawking proposal is right, this could itself be taken as good reason to endorse an "imaginary time" view of the universe, although I'm not sure that's a consistent thing to do given that we at any rate seem to live in real time. But Hawking himself expresses a more ambivalent view:
So maybe what we call imaginary time is more basic, and what we call real is just an idea that we invent to help us describe what we think the universe is like. But, according to the approach I described in Chapter 1, a scientific theory is just a mathematical model we make to describe our observations: it exists only in our minds. So it is meaningless to ask: which is real, "real" or "imaginary" time? It is simply a matter of which is the more useful description.
Yet on this more positivistic view where the model is only aiming to be a "useful description", how could one use it to draw the metaphysical deductions Hawking wants to make, about there being no "place" for a Creator? But let's leave that aside, and accept the "imaginary time" point of view for purposes of our theological excursion, since it doesn't much matter whether the universe lacks a beginning because it's closed off like a sphere, or because it goes back in time forever.
Now when Hawking asks rhetorically whether there is a "place" for a Creator, the context suggests that he's not so much asking whether there's good reason to believe in a Creator, but whether there even could be a Creator, given the absence of a clear first moment of time. What would there be left for him to do? Aside from deciding that there should be a universe, selecting the laws of physics for said universe, deciding that the Hartle-Hawking state is the prettiest state for it to be in, and then (according to Hawking) deciding not to intervene even if it turns out we could use some help. Other than that, it seems like there is nothing left for God to do!
Really, Hawking is assuming (quite explicitly) that Science has already displaced God to such an extent that the only "place" that could be left for him is to push the button to make everything go, and then "sit back and watch". (This view is often called Deism nowadays, although historically Deists actually had a much more robust view of divine providence, and merely rejected the miracles and special revelations of particular religions.)
This rather limited God is the type of bad theology which makes religious people throw around the phrase "God of the Gaps", although I still believe that this term is highly misleading and should be retired. I tried to express a better set of points in that post:
1. Any time we ever believe in anything rationally, we do so because there is some kind of "gap" in our understanding of how the universe works, which is filled by postulating the existence of that thing.
2. All phenomena which occur in Nature do so because God sustains the world in being, thus (at least indirectly) causing everything.
Hawking allows no role for God as the Sustainer of all existence. But God's role in "sustaining" the world is not really a different type of act from his act of "creating" it. Hawking invites us to look at the world from a 4-dimensional perspective; in this perspective all points of spacetime exist because God gives them the power to exist, delineating the role that each one plays in the bigger scheme of things. From that perspective, Creation is something which is happening NOW, not just something which happened (or didn't happen) 13.8 billion years ago. Stated in a tenseless way, for all the things that exist, they exist because God chooses for them the conditions of their existence. (One of those conditions being that they are causally related in particular ways to the events before, after, or around them.)
God's role in creation is not a "mechanical" one, providing the initial impetus or force to get the machine working, which can then run for a while on its own. God is more like an Author writing a story. An Author stands outside the time-stream of their own story. As my dad said in a Slashdot interview:
Once you see the universe from that point of view, many arguments fade into unimportance, such as Hawking's argument that the universe fuzzed into existence at the beginning, and therefore there was no creator. But it's also true that the Lord of the Rings fuzzed into existence, and that doesn't mean it doesn't have a creator. It just means that the creator doesn't create on the same schedule as the creature's.
If God is creating the universe sideways like an Author, then the proper place to look for the effects of that is not at the fuzzy edges, but at the heart of the story. And I am personally convinced that Jesus stands at the heart of the story. The evidence is there if you care to look, and if you don't get distracted by the claims of various people who have various agendas to lead you in every possible direction, and if you don't fall into the trap of looking for a formula rather than looking for God as a person.
To think that God creates the universe and then stands back to watch it, is like thinking that an Author only has to write the first sentence, and then they can read the rest. Bad news for aspiring fiction writers: you have to write the whole thing. Maybe once the plot gets into full swing, the characters will start having a "mind of their own", and fail to act in the way the Author originally intended. But the Author is still in charge.
Nor does he have to "intervene" in order to get things to come out the way he wants them to: everything in the book is subject to the control of the Author, both the parts which follow naturally and inevitably from the previous scenes, and the parts where the Author does something totally unexpected. In any case, the main "point" of the story is seldom found right at the beginning, but develops as the story progresses.
Traditionally, books have a fixed and determinate sequence of letters, but if the Author wants to start out with something which doesn't have a definite time order (say a map on the first page) then that doesn't impugn their authorship of the rest of the book. And if the Author wants to make their book be infinitely long in both directions....well, that would probably be easier for God than for a human writer, wouldn't it!
So I think that belief in the creation of the universe does not really depend on there being a first moment of time. Conversely, this might also make one suspicious of the kalam argument championed by St. William Lane Craig in the debate. If the doctrine of Creation is not about there being a first moment of time, then there's something dubious about arguing for it as though it were. This doesn't automatically imply that St. Craig's argument is unsound, but it does suggest that it might not be the best way of looking at things.
Of course, we should also keep in mind what I said in my original post, that the Hartle-Hawking proposal is a speculative idea. It is a very beautiful idea, but it is difficult to make well-defined, and there is no direct evidence for it. While there was originally some reason to think it might predict inflation, the current indications seem to be that it predicts the wrong type of universe.
I remember my surprise when, several years ago, I read an article by the atheist philosopher Quentin Smith, showcasing the Hartle-Hawking state as an argument for Atheism. Never mind his actual argument, which makes no sense. In a talk given to some atheist club, he stated that his argument "is the strongest scientific argument there is against theism. I think it's even stronger than Darwin's theory of evolution."
Oh my! Neither Stephen Hawking nor Jim Hartle would make the claim that the Hartle-Hawking state is anywhere near as solidly supported as Darwinian evolution; in fact Jim told me just the other day that he isn't particularly committed to it being true. (People often assume that if a scientist thinks of an interesting, publishable idea, they must believe in it, but they might only think it is worth considering!) In fact, I think that only an outsider to the field of quantum gravity could take the "no boundary proposal" as anything other than a provisional, interesting idea worth exploring, which at best might be true.
I've discussed a lot of speculative physics in these last several posts, and I wouldn't want anyone walking away thinking that the physics is more clearly established than it is. In our current state of knowledge, any statements about the beginning of the universe are necessarily speculative, and if we rest our theological beliefs (for or against Theism) on that shaky foundation, we are setting ourselves up for trouble.
Posted in Physics, Reviews, Theology | 12 Comments
https://www.physicsforums.com/threads/least-square-solution.609264/ | # Least Square Solution
1. May 27, 2012
### Hiche
Assume we have this random matrix: $$\begin{pmatrix} 1 & 0 & 2\\ 2 & 0 & 4\\ 0 & 1 & 2 \end{pmatrix}$$
The least squares solution can be found through: $A^TAx = A^Tb$. What exactly is meant by a least squares solution, and is there a proof of some sort showing how the formula is derived?
2. May 27, 2012
### algebrat
I'll answer the first part of your question, "what is meant by the least squares solution?" My explanation will lead right up to the point where we want Ax to be the projection of b onto the column space of A (since b may not lie in that column space already). The projection of b onto the column space of A is Ax, where x is the solution of A^TAx = A^Tb. The latter always has a solution. We could say that Ax-b has the shortest possible length. By the way, for this x, Ax will be orthogonal to b-Ax; when people prove this, they often include diagrams displaying that orthogonality.
Suppose we have data sets of two measurements for each event: (x,u), (y,v), (z,w). We expect a linear relation, m and b, so u=mx+b, v=my+b, w=mz+b. But there may be no such line for any m and b. So we want a best fit. We want to find some m and b which minimize the sum of squares: (u-(mx+b))^2+(v-(my+b))^2+(w-(mz+b))^2. Let X be the matrix with columns (x,y,z) and (1,1,1), let M be the vector (m,b), and let U be the vector (u,v,w). Then if there had been a line going through all the data points, we would have had a solution to XM=U. Now suppose there is no line through all the data points. Then the column space of X does not contain U. Finding the closest point to U in the column space corresponds both to finding the projection of U onto the column space of X, and to minimizing the sum of squares of the errors. So the M we want is the one for which XM is the projection of U onto the column space of X. And (this is the part I remember) the projection of U onto the column space of X is given by XM, where M is the solution of X^TXM=X^TU.
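Here's a small numerical sketch of the line-fitting setup above (the data values are made up for illustration). It solves the normal equations X^TXM = X^TU with NumPy and checks that the residual is orthogonal to the column space of X:

```python
import numpy as np

# Made-up data: three events that are only approximately collinear.
xs = np.array([0.0, 1.0, 2.0])
us = np.array([1.1, 2.9, 5.2])

# Design matrix X with columns (x-values, ones); unknown M = (m, b).
X = np.column_stack([xs, np.ones_like(xs)])

# Normal equations: X^T X M = X^T U
M = np.linalg.solve(X.T @ X, X.T @ us)
m, b = M

# Defining property of the projection: the residual U - XM is
# orthogonal to the column space of X.
residual = us - X @ M
assert np.allclose(X.T @ residual, 0.0)
print(m, b)
```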
Last edited: May 27, 2012
3. May 28, 2012
### HallsofIvy
You have left out one important part of your question! "A^TAx = A^Tb" is a "least squares solution" to what problem??
The "problem" is, of course, to solve Ax= b. If this were a basic algebra course, we would say "divide both sides by A". In matrix algebra, that would be "multiply both sides by the inverse of A". Here, we cannot do that because A does NOT have an inverse- its determinant is 0.
In terms of more general linear transformations from a vector space U to a vector space V, Ax= b has a unique solution if and only if A is both "one to one" and "onto". One can show that a linear transformation, from U to V, always maps U into a subspace of V. If A is not "one to one" and "onto", it maps U into some proper subspace. In that case, Ax= b will have a solution (perhaps an infinite number of solutions) if and only if b happens to lie in that subspace.
If A is not invertible, so that it maps U into some proper subspace of V, and b is not in that subspace, there is no solution to Ax= b. In that case, we have to ask "what is the best we can do?"- can we find and x so that Ax is closest to b?
To talk about "closest", we have to have a notion of distance, of course, so we have to have a "metric", perhaps a metric that can be derived from a "norm", and perhaps a norm that can be derived from an "inner product". Now, let's go back from "linear transformation" to "n by n matrix", because an n by n matrix is a linear transformation from R^n to R^n, and there is a "natural" inner product on R^n: the "dot product".
Given inner products on two vector spaces, U and V, and a linear transformation, A, from U to V, the "adjoint" of A, A^T, is defined as the linear transformation from V back to U such that for any u in U and any v in V, <Au, v> = <u, A^Tv> (notice that since Au is in V and A^Tv is in U, the first of those inner products is taken in V and the second in U). When U and V are R^n, the "adjoint" of matrix A is its transpose, which is why I use the notation "A^T".
Now, using the norm defined by that inner product ($|u|= \sqrt{<u, u>}$) and the metric defined by that norm (distance from u to v is $|u- v|= \sqrt{<u-v, u-v>}$), we get the usual "geometric" fact that the shortest distance from a point p to a subspace is along the line through p perpendicular to the subspace. That is, if Ax is the vector in the subspace closest to b, we must have Ax - b perpendicular to the subspace, which in turn means that if v is any vector in the subspace, that is, v = Au for some u, we must have <v, Ax - b> = <Au, Ax - b> = 0. That inner product is, of course, taken in the "range" of A, but we can use the "adjoint" (transpose) to cast it back to the domain: <Au, Ax - b> = <u, A^T(Ax - b)> = <u, A^TAx - A^Tb> = 0.
But u could be any member of the domain, so we must have A^TAx - A^Tb = 0, so that A^TAx = A^Tb.
Of course, this is called the "least squares" solution because we are minimizing the "distance" given by $\sqrt{<u, u>}$, which involves squares.
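To tie this back to the matrix in the original post: its rows are linearly dependent (row 2 = 2 × row 1), so A, and hence A^TA, is singular, and you can't just invert A^TA. Here's a sketch using NumPy's lstsq (which handles the rank-deficient case) with an arbitrary right-hand side b chosen for illustration, checking the orthogonality condition A^T(Ax - b) = 0 derived above:

```python
import numpy as np

# The matrix from the original post; row 2 = 2 * row 1, so A is singular
# and so is A^T A.
A = np.array([[1.0, 0.0, 2.0],
              [2.0, 0.0, 4.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])   # an arbitrary right-hand side

# lstsq returns a least-squares solution (minimum-norm one when the
# system is rank-deficient), instead of inverting the singular A^T A.
x, *_ = np.linalg.lstsq(A, b, rcond=None)

# The defining property: the residual Ax - b is orthogonal to the
# column space of A, i.e. A^T (Ax - b) = 0.
assert np.allclose(A.T @ (A @ x - b), 0.0)
```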
Last edited by a moderator: May 28, 2012
https://www.omnicalculator.com/physics/angular-acceleration | Angular Acceleration Calculator
Created by Dominik Czernia, PhD candidate
Reviewed by Bogna Szyk
Last updated: Apr 24, 2022
The angular acceleration calculator helps you find the angular acceleration of an object that rotates or moves around a circle. As you will soon see, the angular acceleration formula differs from the acceleration in linear motion, which you probably know very well. Read on if you want to learn what the angular acceleration units are and what the angular acceleration equation looks like. You will find out that you can compute it with our angular acceleration calculator in two different ways. You can also check our centrifugal force calculator, which is dedicated to circular motion too.
Angular acceleration definition
The rotational movement of an object is usually described by a physical quantity called angular velocity. It measures the angle by which an object has rotated in a specific time. For example, imagine that a carousel in an amusement park performs a full rotation within ten seconds. Its angular velocity is one rotation (360°) per ten seconds, or 36° per second.
Let's assume that our carousel starts to rotate faster and faster: not 36°, but 50°, then 64° per second. Angular acceleration describes this rate of change of angular velocity, and it is caused by torque.
In the next section, we will see how to find the rate of change of angular velocity, i.e., the angular acceleration.
Angular acceleration formula
Angular acceleration can be computed with our angular acceleration calculator in two different ways, using the angular acceleration equations below:
α = (ω₂ - ω₁) / t or α = a / R
where
• α is the angular acceleration,
• ω₁ is the initial angular velocity,
• ω₂ is the final angular velocity,
• t is the time of change of angular velocity,
• a is the tangential acceleration,
• R is the radius of the circle (or the distance from an axis of rotation).
Tangential acceleration plays the role of a linear acceleration: it is directed along the tangent to the circle, i.e., perpendicular to the radius.
🔎 When an angular velocity is a scalar (not a vector), we should call it angular speed or angular frequency. Check out our angular frequency calculator to learn more about it!
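Both equations are one-liners in code. Here is a short Python sketch (the carousel numbers are invented for the example):

```python
import math

def angular_accel_from_velocities(omega1, omega2, t):
    # alpha = (omega2 - omega1) / t, angular velocities in rad/s
    return (omega2 - omega1) / t

def angular_accel_from_tangential(a, r):
    # alpha = a / R, tangential acceleration a in m/s^2, radius r in m
    return a / r

# Carousel speeding up from 36°/s to 64°/s over 2 s (made-up numbers):
alpha = angular_accel_from_velocities(math.radians(36), math.radians(64), 2.0)
print(round(math.degrees(alpha), 1))  # prints 14.0, i.e. 14°/s²
```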
Angular acceleration units
There are several different units that can be used to express the angular acceleration:
• the most common are units of angle per second squared (e.g. rad/s², °/s²). These units illustrate the meaning of angular acceleration well, since linear acceleration is expressed in m/s² or ft/s².
• sometimes, the angle unit in the numerator is omitted, leaving only 1/s².
• because angular velocity can be expressed in hertz (Hz = 1/s), angular acceleration can also be expressed in Hz/s. We have used this convention in our angular acceleration calculator.
The conversion between the above angular acceleration units is: rad/s² = 1/s² = Hz/s. You can check our angle conversion calculator too!
Speaking of angles and angular concepts, are you aware of angular displacement? Check out our angular displacement calculator if you are interested.
| 2022-07-06 10:00:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8465020060539246, "perplexity": 1129.4114066587101}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00397.warc.gz"}
https://www.tutorialspoint.com/digital_signal_processing/dsp_classification_ct_signals.htm | # DSP - Classification of CT Signals
Continuous time signals can be classified according to different conditions or operations performed on the signals.
## Even and Odd Signals
### Even Signal
A signal is said to be even if it satisfies the following condition;
$$x(-t) = x(t)$$
Time reversal of the signal does not change the amplitude here. For example, consider the triangular wave shown below.
The triangular signal is an even signal, since it is symmetrical about the Y-axis. We can say it is a mirror image about the Y-axis.
Consider another signal as shown in the figure below.
We can see that the above signal is even as it is symmetrical about Y-axis.
### Odd Signal
A signal is said to be odd, if it satisfies the following condition
$$x(-t) = -x(t)$$
Here, both the time reversal and amplitude change takes place simultaneously.
In the figure above, we can see a step signal x(t). To test whether it is an odd signal or not, first we do the time reversal i.e. x(-t) and the result is as shown in the figure. Then we reverse the amplitude of the resultant signal i.e. –x(-t) and we get the result as shown in figure.
If we compare the first and the third waveform, we can see that they are same, i.e. x(t)= -x(-t), which satisfies our criteria. Therefore, the above signal is an Odd signal.
Some important results related to even and odd signals are given below.
• Even × Even = Even
• Odd × Odd = Even
• Even × Odd = Odd
• Even ± Even = Even
• Odd ± Odd = Odd
• Even ± Odd = Neither even nor odd
### Representation of any signal into even or odd form
Some signals cannot be directly classified as even or odd. These are represented as a combination of an even and an odd signal.
$$x(t)\rightarrow x_{e}(t)+x_{0}(t)$$
Where $x_{e}(t)$ represents the even part and $x_{0}(t)$ represents the odd part
$$x_{e}(t)=\frac{[x(t)+x(-t)]}{2}$$
And
$$x_{0}(t)=\frac{[x(t)-x(-t)]}{2}$$
### Example
Find the even and odd parts of the signal $x(t) = t+t^{2}+t^{3}$
Solution − Reversing time in $x(t)$, we get
$$x(-t) = -t+t^{2}-t^{3}$$
Now, according to formula, the even part
$$x_{e}(t) = \frac{x(t)+x(-t)}{2}$$
$$= \frac{[(t+t^{2}+t^{3})+(-t+t^{2}-t^{3})]}{2}$$
$$= t^{2}$$
Similarly, according to formula the odd part is
$$x_{0}(t)=\frac{[x(t)-x(-t)]}{2}$$
$$= \frac{[(t+t^{2}+t^{3})-(-t+t^{2}-t^{3})]}{2}$$
$$= t+t^{3}$$
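The decomposition formulas are easy to check numerically. A small Python sketch, using the example signal above:

```python
def even_part(x, t):
    # x_e(t) = (x(t) + x(-t)) / 2
    return (x(t) + x(-t)) / 2

def odd_part(x, t):
    # x_o(t) = (x(t) - x(-t)) / 2
    return (x(t) - x(-t)) / 2

x = lambda t: t + t**2 + t**3

# Matches the worked example: even part t^2, odd part t + t^3
for t in (0.5, 1.0, 2.0):
    assert abs(even_part(x, t) - t**2) < 1e-12
    assert abs(odd_part(x, t) - (t + t**3)) < 1e-12
print("decomposition verified")
```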
## Periodic and Non-Periodic Signals
### Periodic Signals
A periodic signal repeats itself after a certain interval of time. We can show this in equation form as −
$$x(t) = x(t\pm nT)$$
Where, n = an integer (1,2,3……)
T = Fundamental time period (FTP) ≠ 0 and ≠∞
Fundamental time period (FTP) is the smallest positive and fixed value of time for which signal is periodic.
The figure above shows a triangular signal of amplitude A. Here, the signal repeats every 1 sec. Therefore, we can say that the signal is periodic and its FTP is 1 sec.
### Non-Periodic Signal
Simply put, signals that are not periodic are non-periodic in nature. Obviously, these signals will not repeat themselves after any time interval.
Non-periodic signals do not follow a certain format; therefore, no particular mathematical equation can describe them.
## Energy and Power Signals
A signal is said to be an Energy signal if and only if the total energy contained in it is finite and nonzero (0<E<∞). Therefore, for any energy type signal, the total normalized energy is finite and non-zero.
A sinusoidal AC current signal, despite a common claim, is not an energy signal: its average value is zero because a positive half cycle is followed by a negative one, but its average power is A²/2, finite and non-zero, which makes it a Power signal. A better example of an Energy type signal is a single pulse, such as the charging current of a lossless capacitor: when connected to a source the capacitor charges up to its final level, and when the source is removed it releases that same finite amount of energy through a load, so the total energy is finite and the average power over infinite time is zero.
For any finite signal x(t) the energy can be symbolized as E and is written as;
$$E = \int_{-\infty}^{+\infty} x^{2}(t)dt$$
Spectral density of energy type signals gives the amount of energy distributed at various frequency levels.
### Power type Signals
A signal is said to be a power type signal if and only if its normalized average power is finite and non-zero, i.e. 0<P<∞. Almost all periodic signals are power signals, and their average power is finite and non-zero.
In mathematical form, the power of a signal x(t) can be written as;
$$P = \lim_{T \rightarrow \infty}1/T\int_{-T/2}^{+T/2} x^{2}(t)dt$$
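The limit above can be approximated by averaging over many whole periods. A short Python sketch (a midpoint Riemann sum; the amplitude and frequency are chosen to match Example 1 below) confirms that a sinusoid A·cos(ωt) has average power A²/2:

```python
import math

def average_power(x, T, n=20000):
    # Midpoint Riemann sum for (1/T) * integral_{-T/2}^{+T/2} x(t)^2 dt
    dt = T / n
    return sum(x(-T / 2 + (i + 0.5) * dt) ** 2 for i in range(n)) * dt / T

# A = 2, omega = 3*pi, so one period lasts 2*pi/omega = 2/3 s
A, w = 2.0, 3 * math.pi
x = lambda t: A * math.cos(w * t)
print(round(average_power(x, 10 * (2 * math.pi / w)), 3))  # prints 2.0, i.e. A**2 / 2
```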
### Difference between Energy and Power Signals
The following table summarizes the differences between Energy and Power Signals.
Power signal | Energy signal
--- | ---
Practical periodic signals are power signals. | Non-periodic signals are energy signals.
Normalized average power is finite and non-zero. | Total normalized energy is finite and non-zero.
Mathematically, $P = \lim_{T \rightarrow \infty}\frac{1}{T}\int_{-T/2}^{+T/2} x^{2}(t)dt$ | Mathematically, $E = \int_{-\infty}^{+\infty} x^{2}(t)dt$
These signals exist over infinite time. | These signals exist for a limited period of time.
Energy of a power signal is infinite over infinite time. | Power of an energy signal is zero over infinite time.
## Solved Examples
Example 1 − Find the power of the signal $z(t) = 2\cos(3\pi t+30^{o})+4\sin(3\pi t+30^{o})$
Solution − The two components are orthogonal to each other: they have the same frequency and the same phase, and the cosine and sine of the same argument are orthogonal over a period. So, the total power is the sum of the individual powers.
Let $z(t) = x(t)+y(t)$
Where $x(t) = 2\cos (3\pi t+30^{o})$ and $y(t) = 4\sin(3\pi t+30^{o})$
Power of $x(t) = \frac{2^{2}}{2} = 2$
Power of $y(t) = \frac{4^{2}}{2} = 8$
Therefore, $P(z) = p(x)+p(y) = 2+8 = 10$…Ans.
Example 2 − Test whether the signal given $x(t) = t^{2}+j\sin t$ is conjugate or not?
Solution − Here, the real part $t^{2}$ is even and the imaginary part $\sin t$ is odd. So the above signal is a conjugate signal.
Example 3 − Verify whether $X(t)= \sin \omega t$ is an odd signal or an even signal.
Solution − Given $X(t) = \sin \omega t$
By time reversal, we will get $\sin (-\omega t)$
But we know that $\sin(-\phi) = -\sin \phi$.
Therefore,
$$\sin (-\omega t) = -\sin \omega t$$
This is satisfying the condition for a signal to be odd. Therefore, $\sin \omega t$ is an odd signal. | 2020-10-24 12:48:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8437201380729675, "perplexity": 878.0675124273367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882581.13/warc/CC-MAIN-20201024110118-20201024140118-00085.warc.gz"} |
http://en.wikipedia.org/wiki/Rodrigues'_rotation_formula | # Rodrigues' rotation formula
In the theory of three-dimensional rotation, Rodrigues' rotation formula (named after Olinde Rodrigues) is an efficient algorithm for rotating a vector in space, given an axis and angle of rotation. By extension, this can be used to transform all three basis vectors to compute a rotation matrix from an axis–angle representation. In other words, the Rodrigues formula provides an algorithm to compute the exponential map from so(3) to SO(3) without computing the full matrix exponential.
If v is a vector in ℝ3 and k is a unit vector describing an axis of rotation about which v rotates by an angle θ according to the right hand rule, the Rodrigues formula is
$\mathbf{v}_\mathrm{rot} = \mathbf{v} \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + \mathbf{k} (\mathbf{k} \cdot \mathbf{v}) (1 - \cos\theta)~.$
## Derivation
Rodrigues' rotation formula rotates v by an angle θ around an axis k by decomposing it into its components parallel and perpendicular to k, and rotating only the perpendicular component.
Given a rotation axis represented by a unit vector $\mathbf{k}$ and a vector $\mathbf{v}$ that we wish to rotate about $\mathbf{k}$ by the angle $\theta$,
$\mathbf{v}_{\parallel} = (\mathbf{k} \cdot \mathbf{v}) \mathbf{k}$
is the component of $\mathbf{v}$ parallel to $\mathbf{k}$, also called the vector projection of $\mathbf{v}$ on $\mathbf{k}$, and
$\mathbf{v}_{\perp} = \mathbf{v} - \mathbf{v}_{\parallel} = \mathbf{v} - (\mathbf{k} \cdot \mathbf{v}) \mathbf{k}$
is the component of $\mathbf{v}$ orthogonal to $\mathbf{k}$, also called the vector rejection of $\mathbf{v}$ from $\mathbf{k}$.
Let
$\mathbf{w} = \mathbf{k}\times\mathbf{v}$.
The vectors $\mathbf{v}_\perp$ and $\mathbf{w}$ have the same length, but $\mathbf{w}$ is perpendicular to both $\mathbf{k}$ and $\mathbf{v}_\perp$. This can be shown via
$\mathbf{w} = \mathbf{k} \times \mathbf{v} = \mathbf{k} \times (\mathbf{v}_{\parallel} + \mathbf{v}_{\perp}) = \mathbf{k} \times \mathbf{v}_{\parallel} + \mathbf{k} \times \mathbf{v}_{\perp} = \mathbf{k} \times \mathbf{v}_{\perp} ,$
since $\mathbf{k}$ has unit length, is parallel to $\mathbf{v}_\parallel$ and is perpendicular to $\mathbf{v}_\perp$.
The vector $\mathbf{w}$ can be viewed as a copy of $\mathbf{v}_\perp$ rotated by 90° about $\mathbf{k}$. Using trigonometry, we can now rotate $\mathbf{v}_\perp$ by $\theta$ around $\mathbf{k}$ to obtain $\mathbf{v}_{\perp\ \mathrm{rot}}$. Thus,
\begin{align} \mathbf{v}_{\perp\ \mathrm{rot}} &= \mathbf{v}_{\perp}\cos\theta + \mathbf{w}\sin\theta\\ &= (\mathbf{v} - (\mathbf{k} \cdot \mathbf{v}) \mathbf{k})\cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta. \end{align}
$\mathbf{v}_{\perp\ \mathrm{rot}}$ is also the rejection from $\mathbf{k}$ of the vector $\mathbf{v}_{\mathrm{rot}}$, defined as the desired vector, $\mathbf{v}$ rotated about $\mathbf{k}$ by the angle $\theta$. Since $\mathbf{v}_{\parallel}$ is not affected by a rotation about $\mathbf{k}$, the projection of $\mathbf{v}_\mathrm{rot}$ on $\mathbf{k}$ coincides with $\mathbf{v}_\parallel$. Thus,
\begin{align} \mathbf{v}_{\mathrm{rot}} &= \mathbf{v}_{\perp\ \mathrm{rot}} + \mathbf{v}_{\parallel\ \mathrm{rot}} \\ &= \mathbf{v}_{\perp\ \mathrm{rot}} + \mathbf{v}_{\parallel} \\ &= (\mathbf{v} - (\mathbf{k} \cdot \mathbf{v}) \mathbf{k}) \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + (\mathbf{k} \cdot \mathbf{v}) \mathbf{k} \\ &= \mathbf{v} \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + \mathbf{k} (\mathbf{k} \cdot \mathbf{v}) (1 - \cos\theta), \end{align}
as required.
### Matrix notation
We first represent v and k as column matrices, and define a matrix K as the "cross-product matrix" for the vector k, i.e.,
$\mathbf{K}= \left[\begin{array}{ccc} 0 & -k_3 & k_2 \\ k_3 & 0 & -k_1 \\ -k_2 & k_1 & 0 \end{array}\right]$.
This can easily be checked to have the property that
$\mathbf{K}\mathbf{v} = \mathbf{k}\times\mathbf{v}$
for any vector v (in fact, K is the unique matrix with this property).
Now, from the last equation in the previous sub-section, we may write
\begin{align} \mathbf{v}_{\mathrm{rot}} &= \mathbf{v} \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + \mathbf{k} (\mathbf{k} \cdot \mathbf{v}) (1 - \cos\theta) \\ &= \mathbf{v} + (\mathbf{K} \mathbf{v})\sin\theta + (\mathbf{k} (\mathbf{k} \cdot \mathbf{v}) - \mathbf{v}) (1 - \cos\theta). \end{align}
To simplify further, use the well-known formula for the vector triple product,
$\mathbf{a}\times (\mathbf{b}\times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})$
with a = b = k, and c = v, to obtain
$(\mathbf{k} (\mathbf{k} \cdot \mathbf{v}) - \mathbf{v}) = \mathbf{k} \times (\mathbf{k} \times \mathbf{v})$
or
$\mathbf{k} (\mathbf{k} \cdot \mathbf{v}) - \mathbf{v} = \mathbf{K}^2 \mathbf{v}$.
Substituting this into the last equation for vrot gives
$\mathbf{v}_{\mathrm{rot}} = \mathbf{v} + (\sin\theta) \mathbf{K}\mathbf{v} + (1-\cos\theta)\mathbf{K}^2\mathbf{v}$,
resulting in the Rodrigues' rotation formula in matrix notation,
\begin{align} \mathbf{v}_{\mathrm{rot}} &= \mathbf{R}\mathbf{v} \end{align}
where R is the rotation matrix
\begin{align} \mathbf{R} = \mathbf{I} + (\sin\theta) \mathbf{K} + (1-\cos\theta)\mathbf{K}^2 \end{align}
Since K is defined in terms of the components of the rotation axis k, and θ is the rotation angle, R is the rotation matrix about k by angle θ, and is easy to compute.
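The vector form of the formula above is straightforward to implement without any libraries; here is a dependency-free Python sketch:

```python
import math

def cross(a, b):
    # Cross product of two 3-vectors
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rodrigues(v, k, theta):
    # Rotate v about the unit axis k by angle theta (right-hand rule):
    # v_rot = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta))
    c, s = math.cos(theta), math.sin(theta)
    kxv = cross(k, v)
    kdv = dot(k, v)
    return tuple(v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c) for i in range(3))

# Rotating the x-axis 90° about the z-axis gives the y-axis:
print(rodrigues((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
# approximately (0.0, 1.0, 0.0)
```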
R is a member of the rotation group SO(3) of ℝ3, and K is a member of the Lie algebra so(3) of that Lie group. In terms of the matrix exponential, we have
$\mathbf{R} = \exp(\theta\mathbf{K})$.
For an alternative derivation based on this exponential relationship, see Axis–angle representation#Exponential map from so(3) to SO(3). For the inverse mapping, see Axis–angle representation#Log map from SO(3) to so(3). | 2014-11-26 04:38:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999297857284546, "perplexity": 671.1940428183534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931005387.19/warc/CC-MAIN-20141125155645-00230-ip-10-235-23-156.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/33844/help-on-two-exercises | # Help on two exercises
I need some help with two exercises.
Here is the first:
Let $\mathcal{U}$ be a non-principal ultrafilter. Suppose that every $f:\mathbb{N}\rightarrow\mathbb{N}$ is $\mathcal{U}$-equivalent to a constant or to a finite-to-one function. I want to prove that if $\mathbb{N}=\bigsqcup_{n\in\mathbb{N}} A_n$ where $A_n\not\in\mathcal{U}$ then there exists $B\in\mathcal{U}$ such that $B\cap A_n$ is finite for every $n$.
Here what I did until now:
using the axiom of choice suppose to have a function $f:\mathbb{N}\rightarrow\mathbb{N}$ such that $f(n)\in A_n$. If $f$ is $\mathcal{U}$-eq. to a constant then $\{n:f(n)=c\}\in\mathcal{U}$ for some $c$, but this implies that $c\in A_n$ for all $n$, contradiction. So $f$ is $\mathcal{U}$-eq. to a finite-to-one function $g$. Set $B:=\{n:g(n)=f(n)\}\in\mathcal{U}$, I'd like to prove that $B\cap A_k$ is finite for all $k$ (I don't even know if it is true).
Here is the second exercise: we have a non-principal ultrafilter $\mathcal{U}$. Suppose that for every partition of $\mathbb{N}=\bigsqcup A_n$ with $A_n\not\in\mathcal{U}$ then there exists $B\in\mathcal{U}$ such that $|B\cap A_n|<\infty$ for all $n$. Then if we have $\{X_n:n\in\mathbb{N}\}\subset\mathcal{U}$ then there exists $B\in\mathcal{U}$ such that $|B\backslash X_n|<\infty$.
Here what I did until now:
let $\{X_n:n\in\mathbb{N}\}\subset\mathcal{U}$ then $\bigcap X_n=\emptyset$ because $\mathcal{U}$ is not principal. So $\bigcup X_n^c=\mathbb{N}$, if this union would have been disjoint then I could have applied the hypothesis and it would have been ok. I don't know how to go on.
-
For problem 1, let $f$ be the unique function satisfying $f^{-1}(n) = A_n$. Since each $A_n \not \in \mathcal{U}$, we know $f$ is not equivalent to a constant function. Hence it's equivalent to a finite-to-one function $g$. Let $B$ be the set where $f$ and $g$ agree. Then $B \cap A_n \subset g^{-1}(n)$ which is finite.
What you said, about $c \in A_n$ for all $n$ isn't quite true. What you'll get is $c \in A_n$ for almost all $n$, i.e. the set of $n$ such that $c \in A_n$ will belong to $\mathcal{U}$. This is good enough though, since the $A_n$ are supposed to be disjoint, so you can't even have $c$ belonging to 2 of the $A_n$, let alone almost all of them. But the problem is as you pointed out, how can one tell if $B\cap A_k$ is finite for all $k$?
For your second question, you can't claim that $\bigcap X_n = \emptyset$, nor do you need to. Indeed, if each $X_n$ belongs to $\mathcal{U}$, then so does each $X_n \cup \{0\}$, but the intersection of these guys is not empty. Your observation is correct, if the union $\bigcup X_n^c$ were a disjoint union, you'd be done. What happens if you disjointify it?
$Y_0 = X_0^c,\ Y_{n+1} = X_{n+1}^c \setminus Y_n$
Let $B$ meet all the $Y_n$ at only finitely many points. Then $B$ meets $Y_0 = X_0^c$ at only finitely many points, and for each $n$ it meets $Y_{n+1} \cup Y_n \supset X_{n+1}^c$ at only finitely many points. | 2015-05-23 12:08:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480391144752502, "perplexity": 65.5107081032609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927592.52/warc/CC-MAIN-20150521113207-00259-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/119191/extract-all-note-tags-from-beamer-as-a-simple-text-file | Extract all \note tags from beamer as a simple text file
There is a nice presenter tool I am using called pdfpc.
Sadly it does not show notes I put in my latex presentation with \note{}.
Is there a way to extract the content of all my \note{} tags to a file that I can convert to a format readable by pdfpc then?
• Hmm, I suppose this is less a TeX question and more a "programming" question (probably best asked on stack overflow). Anyhow, which OS are you using? Grep (Linux/Windows) can probably handle this with ease. – Tom Bombadil Jun 14 '13 at 22:48
• It is not possible to achieve this with grep because grep uses regular expressions, which are not able to deal with nested brackets like this: \note{\begin{itemize}\item ...\end{itemize}}. You would need a parser to remember the state of nesting while reading the input. That's why I thought there maybe is a latex way, as latex is already parsing this and may be able to just output the note content to a file. – cebe Jun 15 '13 at 4:58
• Just noticed that even if I am able to do it with grep I still need the number of the current frame to be able to show the correct nodes on the right slide. – cebe Jun 15 '13 at 5:42
• Btw: regex can work in this situation but it is limited for nested brackets. This regex matches all notes in a tex document that do not have a nesting deeper than 2: \\note\{(?:[^}]*?(\{[^}]*?(\{[^}]*?\})?\})?)+\} – cebe Jun 15 '13 at 5:43
• You should know that if you compile your beamer presentation with the notes on the side, pdfpc can now handle this large presentation format and show you the notes in your laptop. This is available in the github version of pdfpc, using the --notes option. – YuppieNetworking Dec 6 '13 at 21:39
I came up with a quite nice solution I created based on these two questions on file IO:
Here is the code to put before \begin{document}:
% create a new file handle
\newwrite\pdfpcnotesfile
% open file on \begin{document}
\AtBeginDocument{%
\immediate\openout\pdfpcnotesfile\jobname.pdfpc\relax
\immediate\write\pdfpcnotesfile{[notes]}
}
% define a # https://tex.stackexchange.com/a/37757/10327
\begingroup
\catcode`\#=12
\gdef\hashchar{#}%
\endgroup
% define command \pnote{} that works exactly like \note but
\newcommand{\pnote}[1]{%
% keep normal notes working
\note{#1}%
% write notes to file
\begingroup
\let\#\hashchar
\immediate\write\pdfpcnotesfile{\unexpanded{#1}}%
\endgroup
}
% close file on \end{document}
\AtEndDocument{%
\immediate\closeout\pdfpcnotesfile
}
You can then use the \pnote{} command like you used \note{} before. The behavior will be the same, but it will additionally write the notes to a file in a pdfpc-readable format.
There are a few thing not yet working:
• It does not preserve newlines, so everything in a \pnote will end up in one line of the output file. To replace newlines and \par tokens you may use the following commands:
• sed -i "s/\\\\\\\\/\n/g" slides.pdfpc
• sed -i "s/\\\\par/\n\n/g" slides.pdfpc
• Multiple \pnote{} commands per frame are not working right now.
https://math.stackexchange.com/questions/12848/is-there-an-infinite-set-of-strings-whose-kolmogorov-complexities-are-computable | # Is there an infinite set of strings whose Kolmogorov complexities are computable?
Is there an infinite set of strings whose Kolmogorov complexities are computable?
• I re-tagged away from "complexity-theory" because that tag sounds like computational complexity, or at least could be easily confused with it. Dec 3, 2010 at 2:59
• The phrasing of the question is somewhat imprecise. For any particular string, its Kolmogorov complexity is trivially computable, because the complexity is just some particular natural number. In my answer I took the most reasonable non-trivial interpretation. Dec 3, 2010 at 3:04
• @Carl: thanks for that, I conflated those... Dec 3, 2010 at 5:41
• @CarlMummert 'For any particular string, its Kolmogorov complexity is trivially computable, because the complexity is just some particular natural number.' Does this mean that there exist strings with known Kolmogorov complexity?
– user93511
Jun 14, 2016 at 13:15
• @Aidan Rocke: well, whatever the Kolmogorov complexity of $\langle 0101 \rangle$ is, that is its complexity, which we can hard-code into a program. The issue with computation is when we want a program that can take arbitrary strings as inputs, not just one particular string. Separately, in some universal prefix free codes we do know the complexity of particular strings. For example we can arrange any particular string we want, $s$, to have complexity $1$ by setting up the universal prefix free code $\phi$ so that $\phi(\langle 0\rangle) = s$. That still leaves all strings of form $1\sigma$ Jun 14, 2016 at 13:44
I think you are asking this: is there an infinite r.e. set of pairs $(\sigma,n)$ where $\sigma \in 2^{<\omega}$ is a string of Kolmogorov complexity $n$. The answer to that is no.
For a contradiction, assume such a list is r.e. - then there are arbitrarily long strings in it, and thus strings of arbitrarily high Kolmogorov complexity. Define a function $P$ that takes input $\tau \in 2^{<\omega}$ and does the following. First, it effectively enumerates that list until it finds a pair $(\sigma, n)$ where $n > |\tau|$. Then it prints out $\sigma$.
The assumptions we have made ensure that $P$ is a total computable function. Therefore, applying Kleene's recursion theorem to $P$ gives a program $e_0$ that, when run with no input, computes $P(e_0)$. Thus the output of program $e_0$, run with no input, is a string of Kolmogorov complexity larger than $|e_0|$, which is impossible.
• @Ross Millikan: The problem is that the $\log(n)$ type bound is just an upper bound on the complexity. The question asks for an infinite set on which we can compute the complexity exactly, not just provide an upper bound. An exact computation like that is impossible, which is what my answer is intended to prove. Dec 9, 2010 at 15:48 | 2022-05-17 18:29:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.809429943561554, "perplexity": 395.1810814513469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00471.warc.gz"} |
http://mathhelpforum.com/pre-calculus/120150-log-equations.html | # Math Help - log equations?
1. ## log equations?
I got this as a question:
Simplify. State any restrictions on the variables.
$2 log w + (3/2) log w + (1/2) log w^2$
For this question I did this:
$2 log w + (3/2) log w + (1/2) log w^2$
$2 log w + (3/2) log w + (1/2) x 2 log w$
$2 log w + (3/2) log w + log w$
$2 log w + (3/2) log w$
$(9/2) log w$
The $2 log w + log w$ is still 2 log because it is multiplying by one correct and $(1/2) log w^2$ is 1 log because you are multiplying $(1/2)$ by two to get one correct?
2. Originally Posted by Barthayn
I got this as a question:
Simplify. State any restrictions on the variables.
$2 log w + (3/2) log w + (1/2) log w^2$
For this question I did this:
$2 log w + (3/2) log w + (1/2) log w^2$
$2 log w + (3/2) log w + (1/2) x 2 log w$
$2 log w + (3/2) log w + log w$
$2 log w + (3/2) log w$
$(9/2) log w$
The $2 log w + log w$ is still 2 log because it is multiplying by one correct and $(1/2) log w^2$ is 1 log because you are multiplying $(1/2)$ by two to get one correct?
HI
$\frac{9}{2}\log w$ is the correct simplification; you can't go beyond that.
For the log to be defined , $w\in Z^+$ or in other words w>0 , try log negative sth , ur calculator will return with math error .
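A quick numeric spot-check of the simplification (illustrative Python, not part of the thread; any positive `w` works):

```python
import math

w = 2.7  # any w > 0 works
lhs = 2 * math.log(w) + (3 / 2) * math.log(w) + (1 / 2) * math.log(w ** 2)
rhs = (9 / 2) * math.log(w)
print(abs(lhs - rhs) < 1e-12)  # True: the two sides agree up to rounding
```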
3. I know it is. However, I do not understand why it is like that. Could you explain my steps, or explain how you would do it?
4. Originally Posted by Barthayn
I know it is. However, I do not understand why it is like that. Could you explain my steps, or explain how you would do it?
you mean the steps to get (9/2)log w , or why w must be >0 ? which one ?
5. Originally Posted by mathaddict
you mean the steps to get (9/2)log w , or why w must be >0 ? which one ?
I wish to understand how to do the steps to get the final answer. I understand that a negative log will give you an error. I am confused with the 1/2 log w^2
6. Originally Posted by Barthayn
The $2 log w + log w$ is still 2 log because it is multiplying by one correct and $(1/2) log w^2$ is 1 log (correct)because you are multiplying $(1/2)$ by two to get one correct?
$2\log w+\log w=3\log w$
this part ?
7. ahh. I see my mistake. I guess I looked at my steps incorrectly. I see I did everything correct. Thank you for your help. | 2015-02-28 20:32:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8317550420761108, "perplexity": 578.5519064235739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462035.86/warc/CC-MAIN-20150226074102-00101-ip-10-28-5-156.ec2.internal.warc.gz"} |
http://joshkos.blogspot.com/2010/06/binary-search-differently-ordered.html | $\newcommand{\defeq}{\mathrel{\mathop:}=}$
2010/06/23
Binary search, differently ordered
Shin shared a variation of binary search problem with me yesterday, which was later discovered to be a variation of an exercise from van Gasteren and Feijen's note The Binary Search Revisited cited by Shin's blog post A Survey of Binary Search. The problem: if an array a0, a1, ..., an-1 is rotated from an ordered array ai+1, ai+2, ..., an-1, a0, a1, ..., ai for some i (so a "dip" is present in the array, namely the difference between ai and ai+1), can we still perform binary search on it? Here I try to record the path of thought I've taken.
As described by Shin, the general binary search algorithm assumes an invariant Φ(i, j) holds initially for the entire array, i.e., Φ(0, n-1) is true. And then the loop pulls i and j closer and closer, until j = i + 1. In the case of standard binary search, Φ(i, j) := ai ≤ k < aj where k is the key value to be searched for. Note that Φ(0, n-1) must hold initially but the key may well not be in the array. To fix this, we add -∞ and ∞ to both ends of the array. After the loop, we have ai equals to k if and only if k is present in the array.
Since it was hinted that binary search may be applicable, I looked at the original binary search and tried to find similarities, hoping to discover a suitable way of generalisation. In the case a0 < an-1, we know k is bounded by a0 and an-1, i.e., k is in the interval [a0, an-1]. For the other case a0 ≥ an-1, we know k can only be larger than a0 or smaller than an-1, i.e., k is in [a0, ∞) ∪ (-∞, an-1]. Also observe that performing the original binary search naively is not right. For example, when the median value am is less than k, it does not necessarily mean we should assign m to i --- consider the case when k is located at the left of the dip and am at the right. The way of comparison seems a lot more complicated.
And then it occurred to me that it would be great if [a0, ∞) ∪ (-∞, an-1] can be viewed in the same way as an ordinary interval, so we may still intuitively see the interval shrink as the loop progresses, like the standard binary search. This required gluing the two sides of the real line to an added point ∞, so a circle is formed. The interval in question is then contiguous on the circle, containing the added point ∞. I was introduced to this concept in the undergraduate algebra course taken in my fourth year, which is called the real projective line and amounts to the one-point compactification of the real line. The odd comparison rules all suddenly make sense under this view. While in general numbers on the real projective line do not have a natural ordering, in our case we can say informally that the magnitude of a number x is the minimum distance we travel counterclockwise on the circle from a0 to x, and that a number is smaller if its magnitude is smaller. This essentially cuts the real projective line at a0 and forms a closed ray roughly like [a0, a0-), a0- serving as the new infinity. Walking from a0 towards the infinity on [a0, a0-) is equivalent to walking counterclockwise on the real projective line from a0 and never reaching it again. We can say that the value domain is also rotated: rotating the array indices disrupts orderedness of the array, but we can rotate the value domain correspondingly to make the array ordered again. (I wish I could make this statement more topological! I believe it's something related to the torus.) Comparison under this ordering is simple: if two numbers are on the same side of the dip (which can be determined by comparing them with a0), then perform the usual comparison; otherwise, whichever on the right side of the dip is larger. The binary search algorithm doesn't have to be altered except for changing the way of comparison. 
We still have to insert a guard a0- as the rightmost element of the array, but there is only one guard instead of two. Notice that this works for binary search on an ordinarily-ordered array as well: values smaller than a0 are greater than all elements in the array under the new ordering, so searching for a value too small simply moves i towards n and -∞ is not needed to guard the left end of the array.
This ordering is my final version, though, which means there were some other versions. For example, I had used an ordering depending on both a0 and an-1 and treated all values from an-1 to a0 (both ends exclusive) as -∞. This resulted in much more complicated case analysis in the definition of the ordering. I had even made a mistake regarding the sign as essential for the comparison, not noticing that topologically 0 did not have a special role on the projective real line. This mistake made me temporarily think that modelling the situation with the real projective line was flawed. But later I discovered there is a cleaner and correct way to utilise the real projective line, which is, well, described above.
However, there is one last serious flaw. If an ordered array is rotated such that a0 = an-1, it should be considered legal input but does not count as ascending under the new ordering! I spotted this seemingly unfixable flaw at midnight, which deprived me of sleep for some two hours. And indeed it is not fixable but it is not a problem about the ordering! Say the two ends of the rotated array have value v. If the median value is also v, then we have no way to decide whether the key value is in the left segment or the right one --- the key can be in any one of them. This observation can even be developed into a full adversary argument, showing that no algorithm can correctly solve the search problem on these arrays in sub-linear time, by arguing any correct algorithm must examine the middle n-2 values of the array and therefore take Ω(n) time: Given an algorithm A and a key k, consider the "flat sequences" consisting of n copies of a number not equal to k. If A does not have to look at all n-2 values in the middle, then (for all but finitely many n's) A does not look at some value at index αn, which means changing the value at αn does not affect the output of A, namely A would still say the key is not found as it would for the flat sequences. Now change the value at αn to k for every flat sequence of length n and feed this set of inputs to A. Its output must be incorrect. Thus it is not possible to use binary search, which takes O(log n) time, to correctly determine whether a value is present in this kind of array, since the problem requires Ω(n) time. It is interesting to see that a naive-looking equality can dramatically increase a seemingly simple problem's complexity.
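For concreteness, the cut-at-a0 ordering can be sketched in code (an illustrative Python sketch, not from the post; `search_rotated` and `key` are made-up names, and the sketch assumes distinct elements, matching the lower-bound argument for equal endpoints):

```python
def search_rotated(a, k):
    """Binary search over an array rotated from ascending order.

    Uses the ordering from the post: cut the circle at a[0], so values
    x >= a[0] come before values x < a[0]; within a side, compare as usual.
    """
    a0 = a[0]

    def key(x):
        # (side of the dip, value): the side containing a0 sorts first.
        return (0 if x >= a0 else 1, x)

    lo, hi = 0, len(a) - 1
    target = key(k)
    while lo <= hi:
        mid = (lo + hi) // 2
        if key(a[mid]) == target:
            return mid
        elif key(a[mid]) < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))  # 4
print(search_rotated([4, 5, 6, 7, 0, 1, 2], 3))  # -1
```

With an index-based loop like this, the a0- sentinel discussed above appears unnecessary; it matters only in the invariant-based formulation.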
--
It's been quite a while since I wrote something this long last time, especially in English...
A sequel to this post has been posted, which is on whether implementing this new ordering gives a better algorithm than the usual two-pass algorithm in terms of number of comparisons.
Labels:
scm6/23/2010 10:52 am 說:
Oh, I should have mentioned that this problem was originally posed to me by Yu-Han Lyu. He said that it's a problem often given in interviews. A typical response is to first perform a binary search to find the dip, then somehow, perhaps virtually, "rotate" back the array.
We would like to have an algorithm that finds the key in one pass. I'm not sure we can do so using fewer comparisons than the two-pass algorithm, though.
You didn't spell out (in equation) what the ordering is in this article, have you?
Josh Ko6/23/2010 2:09 pm 說:
> You didn't spell out (in equation) what the ordering is in this article, have you?
No, I did not. Since I didn't actually do formal calculations, I thought it was not necessary for this post to be formal.
On the other hand it seems interesting to go into the details to see what the resulting algorithm looks like. (And make sure it is correct!) | 2018-06-20 11:38:09 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7540228962898254, "perplexity": 504.4807896156933}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863518.39/warc/CC-MAIN-20180620104904-20180620124904-00241.warc.gz"} |
https://math.stackexchange.com/questions/2782355/showing-ts-rs-1rs-is-stable-using-niquists-criterion-help | # Showing $T(s) = R(s)/(1+R(s))$ is stable using Nyquist's Criterion. Help!
$R(s) = \frac{1}{s^2+2s+5}$ and $R(i\omega) = \frac{5-\omega^2-2i\omega}{(5-\omega^2)^2 + 4\omega^2}$.
The Nyquist criterion states:
Let $R$ be strictly proper and stable. Suppose that the contour $R(i\omega)$ does not pass through or wind around -1. Then $T$ has all its poles in the open left half plane, so is also strictly proper and stable.
How can I show that $R(i\omega)$ does not cross or wind around -1?
Hint: We were meant to find the real part of $R(i\omega)$ which is: $$\frac{5-\omega^2}{(5-\omega^2)^2 + 4\omega^2}$$
I am not sure if this helps or not.
Thank you!
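One way to build confidence before the formal argument is a numeric sanity check (illustrative Python, not a proof; `R` here just samples the given transfer function):

```python
# Sample R(i*omega) for R(s) = 1/(s^2 + 2s + 5) and measure how close
# the Nyquist contour comes to the critical point -1.

def R(omega):
    s = 1j * omega
    return 1.0 / (s * s + 2.0 * s + 5.0)

samples = [R(w / 100.0) for w in range(-10000, 10001)]  # omega in [-100, 100]
closest = min(abs(z + 1.0) for z in samples)
print(closest)   # roughly 0.93: the contour never reaches -1

# The only real-axis crossing (Im R = 0) is at omega = 0:
print(R(0.0))    # (0.2+0j), i.e. 1/5, to the right of -1
```

For the formal route, one can bound $|R(i\omega)| \le 1/4$ (the denominator $(5-\omega^2)^2+4\omega^2$ is minimized at $\omega^2 = 3$, where it equals 16), so by the triangle inequality $|R(i\omega)+1| \ge 3/4$ and the contour can neither pass through nor wind around $-1$.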
• Niquist criterion is often shown by plotting. – Arash May 16 '18 at 1:56 | 2019-01-24 02:57:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9535359144210815, "perplexity": 343.43875744049694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584445118.99/warc/CC-MAIN-20190124014810-20190124040810-00533.warc.gz"} |
https://www.physicsforums.com/threads/prove-continuity.730063/ | # Homework Help: Prove continuity.
1. Dec 26, 2013
### PhysicsDude1
1. The problem statement, all variables and given/known data
Prove that $\sqrt{x}$ is continuous in R+ by using the epsilon-delta definition.
2. Relevant equations
A function f from R to R is continuous at a point a $\in$ R if :
Given ε> 0 there exists δ > 0 such that if |a - x| < δ then |f(a) - f(x)| < ε
3. The attempt at a solution
So I have to take an ε which is greater than 0 and prove that there exists a δ such that if the absolute value of (a - x) is smaller than that delta then the absolute values of the function values of a and x are smaller than ε.
I know I have to pick an ε which is greater than 0 but how do I know what value to pick for ε? 1,2,...n?
2. Dec 26, 2013
### Staff: Mentor
You don't get to pick the ε. The whole δ - ε thing should be viewed as a dialog between you (who are trying to prove that a certain limit exists) and an acquaintance who is skeptical of the process. The other person gives you an ε value, which by the way is usually small and close to zero. You respond by finding a number δ so that when x is within δ of a, then √x is within ε units of √a.
If your skeptical friend is not satisfied, he will say something like, "Well it works for that ε. How about if ε is smaller?" You respond by finding a different δ, and show your friend that when x is within δ of a, then √x is again within ε units of √a.
The process continues until your skeptical friend realizes that no matter how small an ε he gives you, you are able to come up with a δ that works, and the limit is established.
3. Dec 26, 2013
### PhysicsDude1
Thank you. This was very helpful intuition-wise! But how do I do this formally? Do I write δ in terms of ε?
I'm sorry but I'm really stuck here.
4. Dec 26, 2013
### Staff: Mentor
Yes. Also, you need to work in the actual function, not f(x).
Start with the inequality |√x - √a| < ε and work backwards to |x - a| < <some expression>. That <some expression> will be your delta.
5. Dec 26, 2013
### HallsofIvy
At any point, a, you want to prove that given some $\epsilon> 0$, there exist $\delta> 0$ such that "if $|x- a|< \delta$ then $|f(x)- f(a)|< \epsilon$.
So start with what you want to get: $|f(x)- f(a)|= |\sqrt{x}- \sqrt{a}|< \epsilon$ and try to manipulate that to get "$|x- a|<$ some number".
I recommend you start by separating this into two cases: x> a and x< a. Square root is an increasing function so that if x> a then $\sqrt{x}> \sqrt{a}$ and if x< a then $\sqrt{x}< \sqrt{a}$ so you can eliminate the absolute values.
6. Dec 26, 2013
### PhysicsDude1
|√x - √a| < ε
Sorry, accidentally clicked on post. I'm working on it :p
7. Dec 26, 2013
### PhysicsDude1
Ok, so I've, actually it was you guys, come up with this so far :
|√x - √a| < ε
⇔ |√x - √a| = |√x - √a| . $\frac{|\sqrt{x}+ \sqrt{a}|}{|\sqrt{x}+ \sqrt{a}|}$
⇔ |√x - √a| = $\frac{|x-a|}{|\sqrt{x}+ \sqrt{a}|}$
⇔ $\frac{|x-a|}{|\sqrt{x}+ \sqrt{a}|}$ < ε
⇔ |x-a| < ε . $|\sqrt{x}+ \sqrt{a}|$
Is this correct so far?
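For reference, the estimate being built here can be closed out as follows (a sketch, for the case $a>0$):

```latex
|\sqrt{x}-\sqrt{a}| \;=\; \frac{|x-a|}{\sqrt{x}+\sqrt{a}} \;\le\; \frac{|x-a|}{\sqrt{a}},
\qquad\text{so } \delta = \varepsilon\sqrt{a} \text{ works: }
|x-a|<\delta \implies |\sqrt{x}-\sqrt{a}|<\varepsilon.
```

At $a = 0$, take $\delta = \varepsilon^2$ instead, since $\sqrt{x} < \varepsilon$ exactly when $x < \varepsilon^2$.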
8. Dec 26, 2013
### PeroK
You've got the key equality, which is to relate √x - √a to x - a.
|√x - √a| = |x−a|/|√x +√a|
I would do this first for a = 0. I.e. prove √x is continuous at 0.
Then prove it for a > 1. If a > 1, then, |√x - √a| is smaller than |x−a|. So, it should be easy to find δ. But, if a < 1, then |√x - √a| could be larger than |x−a|. That's the tricky bit.
So, if you want to take it step by step, you could prove it for a >= 1. And, then finally prove it for a < 1. This might be easier until you get used to ε-δ.
For a < 1, there's a trick you'll need that is used a lot in ε-δ. It's not easy to spot first time you come across it. | 2018-06-19 07:38:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6948879361152649, "perplexity": 988.8869096797381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861980.33/warc/CC-MAIN-20180619060647-20180619080647-00601.warc.gz"} |
https://hal.archives-ouvertes.fr/hal-00602760v3 |
# A note on $H^p_w$-boundedness of Riesz transforms and $\theta$-Calderón-Zygmund operators through molecular characterization
Abstract : Let $0 < p \leq 1$ and $w$ in the Muckenhoupt class $A_1$. Recently, by using the weighted atomic decomposition and molecular characterization, Lee, Lin and Yang \cite{LLY} (J. Math. Anal. Appl. 301 (2005), 394--400) established that the Riesz transforms $R_j, j=1, 2,...,n$, are bounded on $H^p_w(\mathbb R^n)$. In this note we extend this to the general case of weight $w$ in the Muckenhoupt class $A_\infty$ through molecular characterization. One difficulty, which has not been taken care of in \cite{LLY}, consists in passing from atoms to all functions in $H^p_w(\mathbb R^n)$. Furthermore, the $H^p_w$-boundedness of $\theta$-Calderón-Zygmund operators is also given through molecular characterization and atomic decomposition.
Keywords :
Complete list of metadata
https://hal.archives-ouvertes.fr/hal-00602760
Contributor : Luong Dang Ky <>
Submitted on : Saturday, January 14, 2012 - 12:50:09 AM
Last modification on : Thursday, May 3, 2018 - 3:32:06 PM
Document(s) archived on : Sunday, April 15, 2012 - 2:21:02 AM
### Files
CZO.pdf
Files produced by the author(s)
### Identifiers
• HAL Id : hal-00602760, version 3
• ARXIV : 1106.4724
### Citation
Luong Dang Ky. A note on $H^p_w$-boundedness of Riesz transforms and $\theta$-Calderón-Zygmund operators through molecular characterization. Analysis in Theory and Applications, Springer Verlag (Germany), 2011, 14 p. ⟨hal-00602760v3⟩
Files downloads | 2020-06-05 00:07:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.394204705953598, "perplexity": 3795.491007496249}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492295.88/warc/CC-MAIN-20200604223445-20200605013445-00307.warc.gz"} |
https://www.jobilize.com/precalculus/section/using-graphs-to-find-instantaneous-rates-of-change-by-openstax?qcr=www.quizover.com | # 12.4 Derivatives (Page 4/18)
Page 4 / 18
## Finding instantaneous rates of change
Many applications of the derivative involve determining the rate of change at a given instant of a function with the independent variable time—which is why the term instantaneous is used. Consider the height of a ball tossed upward with an initial velocity of 64 feet per second, given by $s(t)=-16t^2+64t+6,$ where $t$ is measured in seconds and $s(t)$ is measured in feet. We know the path is that of a parabola. The derivative will tell us how the height is changing at any given point in time. The height of the ball is shown in [link] as a function of time. In physics, we call this the "s-t graph."
## Finding the instantaneous rate of change
Using the function above, $s(t)=-16t^2+64t+6,$ what is the instantaneous velocity of the ball at 1 second and 3 seconds into its flight?
The velocity at $t=1$ and $t=3$ is the instantaneous rate of change of distance per time, or velocity. Notice that the initial height is 6 feet. To find the instantaneous velocity, we find the derivative and evaluate it at $t=1$ and $t=3:$
$s'(t)=-32t+64.$ For any value of $t$, $s'(t)$ tells us the velocity at that value of $t.$
Evaluate $t=1$ and $t=3.$
$s'(1)=-32(1)+64=32$
$s'(3)=-32(3)+64=-32$
The velocity of the ball after 1 second is 32 feet per second, as it is on the way up.
The velocity of the ball after 3 seconds is $-32$ feet per second, as it is on the way down.
The position of the ball is given by $s(t)=-16t^2+64t+6.$ What is its velocity 2 seconds into flight?
0
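The velocities above can be checked numerically (a quick sketch; `velocity` is an illustrative helper using a central difference, not part of the lesson):

```python
def s(t):
    # Height of the ball after t seconds.
    return -16 * t**2 + 64 * t + 6

def velocity(t, h=1e-6):
    # Central-difference estimate of the derivative s'(t).
    return (s(t + h) - s(t - h)) / (2 * h)

print(round(velocity(1)))  # 32   (on the way up)
print(round(velocity(3)))  # -32  (on the way down)
print(round(velocity(2)))  # 0    (momentarily at rest at the top)
```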
## Using graphs to find instantaneous rates of change
We can estimate an instantaneous rate of change at $x=a$ by observing the slope of the curve of the function $f(x)$ at $x=a.$ We do this by drawing a line tangent to the function at $x=a$ and finding its slope.
Given a graph of a function $f(x),$ find the instantaneous rate of change of the function at $x=a.$
1. Locate $x=a$ on the graph of the function $f(x).$
2. Draw a tangent line, a line that goes through $x=a$ at $a$ and at no other point in that section of the curve. Extend the line far enough to calculate its slope as $\frac{\Delta y}{\Delta x}.$
## Estimating the derivative at a point on the graph of a function
From the graph of the function $y=f(x)$ presented in [link], estimate each of the following:
1. $f(0)$
2. $f(2)$
3. $f'(0)$
4. $f'(2)$
To find the functional value, $f(a),$ find the y-coordinate at $x=a.$
To find the derivative at $x=a,$ $f'(a),$ draw a tangent line at $x=a,$ and estimate the slope of that tangent line. See [link].
1. $f(0)$ is the y-coordinate at $x=0.$ The point has coordinates $(0,1),$ thus $f(0)=1.$
2. $f(2)$ is the y-coordinate at $x=2.$ The point has coordinates $(2,1),$ thus $f(2)=1.$
3. $f'(0)$ is found by estimating the slope of the tangent line to the curve at $x=0.$ The tangent line to the curve at $x=0$ appears horizontal. Horizontal lines have a slope of 0, thus $f'(0)=0.$
4. $f'(2)$ is found by estimating the slope of the tangent line to the curve at $x=2.$ Observe the path of the tangent line to the curve at $x=2.$ As the $x$ value moves one unit to the right, the $y$ value moves up four units to another point on the line. Thus, the slope is 4, so $f'(2)=4.$
The center is at (3,4), a focus is at (3,-1), and the length of the major axis is 26
The center is at (3,4), a focus is at (3,-1), and the length of the major axis is 26 what will be the answer?
Rima
I done know
Joe
What kind of answer is that😑?
Rima
I had just woken up when i got this message
Joe
Rima
i have a question.
Abdul
how do you find the real and complex roots of a polynomial?
Abdul
@abdul with delta maybe which is b(square)-4ac=result then the 1st root -b-radical delta over 2a and the 2nd root -b+radical delta over 2a. I am not sure if this was your question but check it up
Nare
This is the actual question: Find all roots(real and complex) of the polynomial f(x)=6x^3 + x^2 - 4x + 1
Abdul
@Nare please let me know if you can solve it.
Abdul
I have a question
juweeriya
hello guys I'm new here? will you happy with me
mustapha
The average annual population increase of a pack of wolves is 25.
how do you find the period of a sine graph
Period =2π if there is a coefficient (b), just divide the coefficient by 2π to get the new period
Am
if not then how would I find it from a graph
Imani
by looking at the graph, find the distance between two consecutive maximum points (the highest points of the wave). so if the top of one wave is at point A (1,2) and the next top of the wave is at point B (6,2), then the period is 5, the difference of the x-coordinates.
Am
you could also do it with two consecutive minimum points or x-intercepts
Am
I will try that thank u
Imani
Case of Equilateral Hyperbola
ok
Zander
ok
Shella
f(x)=4x+2, find f(3)
Benetta
f(3)=4(3)+2 f(3)=14
lamoussa
14
Vedant
pre calc teacher: "Plug in Plug in...smell's good" f(x)=14
Devante
8x=40
Chris
Explain why log a x is not defined for a < 0
the sum of any two linear polynomial is what
Momo
how can are find the domain and range of a relations
the range is twice of the natural number which is the domain
Morolake
A cell phone company offers two plans for minutes. Plan A: $15 per month and $2 for every 300 texts. Plan B: $25 per month and $0.50 for every 100 texts. How many texts would you need to send per month for plan B to save you money?
6000
Robert
more than 6000
Robert
can I see the picture
How would you find if a radical function is one to one?
how to understand calculus?
with doing calculus
SLIMANE
Thanks po.
Jenica
Hey I am new to precalculus, and wanted clarification please on what sine is as I am floored by the terms in this app? I don't mean to sound stupid but I have only completed up to college algebra.
I don't know if you are looking for a deeper answer or not, but the sine of an angle in a right triangle is the length of the opposite side to the angle in question divided by the length of the hypotenuse of said triangle.
Marco
can you give me sir tips to quickly understand precalculus. Im new too in that topic. Thanks
Jenica
if you remember sine, cosine, and tangent from geometry, all the relationships are the same but they use x y and r instead (x is adjacent, y is opposite, and r is hypotenuse).
Natalie
it is better to use unit circle than triangle .triangle is only used for acute angles but you can begin with. Download any application named"unit circle" you find in it all you need. unit circle is a circle centred at origine (0;0) with radius r= 1.
SLIMANE
What is domain
johnphilip
the standard equation of the ellipse that has vertices (0,-4)&(0,4) and foci (0, -15)&(0,15) it's standard equation is x^2 + y^2/16 =1 tell my why is it only x^2? why is there no a^2?
what is foci?
This term is plural for a focus, it is used for conic sections. For more detail or other math questions. I recommend researching on "Khan academy" or watching "The Organic Chemistry Tutor" YouTube channel.
Chris | 2019-02-18 01:52:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 54, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6302815675735474, "perplexity": 397.5402794128248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484020.33/warc/CC-MAIN-20190218013525-20190218035525-00355.warc.gz"} |
https://www.nature.com/articles/s41598-018-19665-8?error=cookies_not_supported&code=2af4a861-2e8c-48d3-99d2-4283b713030f | # Local structure, nucleation sites and crystallization behavior and their effects on magnetic properties of Fe81Si x B10P8−xCu1 (x = 0~8)
## Abstract
In this work, an attempt has been made to reveal critical factors dominating the crystallization and soft magnetic properties of Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloys. Both melt spun and annealed alloys are characterized by differential scanning calorimetry, X-ray diffractometry, Mössbauer spectroscopy, transmission electron microscopy, positron annihilation lifetime spectroscopy and magnetometry. The changes in magnetic interaction between Fe atoms and chemical homogeneity can well explain the variation of magnetic properties of Fe81Si x B10P8−xCu1 amorphous alloys. The density of nucleation sites in the amorphous precursors decreases in the substitution of P by Si. Meanwhile, the precipitated nanograins gradually coarsen, but the inhibiting effect of P on grain growth diminishes causing the increase of the crystallinity. Moreover, various site occupancies of Si are observed in the nanocrystallites and the Si occupancy in bcc Fe decreases the average magnetic moment of nanograins. Without sacrificing amorphous forming ability, we can obtain FeSiBPCu nanocrystalline alloy with excellent soft magnetic properties by optimizing the content of Si and P in the amorphous precursors.
## Introduction
Fe-based nanocrystalline soft magnetic alloys with amorphous/crystalline composite structures have become a hot topic in both research and application. These alloys usually possess outstanding magnetic properties involving high saturation magnetic flux density (BS) and low coercivity (HC)1,2,3,4. Current state-of-the-art nanocrystalline alloys mainly include FeSiBNbCu5, FeSiBPCu6, FeMBCu (M = Zr, Nb, Hf)7 and their derivatives, which contain a handful of early transition metal elements to promote their crystallization8. It is noteworthy that the soft magnetic properties of these alloys can vary considerably even if the atomic percent of magnetic Fe atoms remains the same. For example, the Fe82.65Cu1.35Si2B14 nanocrystalline alloy shows an HC of 6.5 A/m and a BS of 1.84 T, while the HC and BS of Fe82.65Cu1.35Si5B11 are 60 A/m and 1.81 T, respectively9. Magnetic properties of nanocrystalline alloys are closely associated with the amount, size and chemical composition of the nanograins embedded in the amorphous matrix.
Every metalloid and transition metal element plays an essential role in the nanocrystallization, and only with appropriate content can the magnetic properties be improved. It has been proven that moderate addition of Cu can provide nucleation sites for α-Fe and thus refine the primary nanocrystallites due to the formation of Cu clusters10,11. Moreover, differences in the kind and content of solute elements usually result in various structures and chemical compositions of the nanograins. For instance, α-Fe phase with bcc structure precipitates out in the Fe83.3Si4B8P4Cu0.7 alloy12, but Fe3Si phase with DO3 structure precipitates out in the Fe73.5Si13.5B9Nb3Cu1 alloy13. The composition of the precipitated nanograins varies with the content of Si even within the same system, such as in FeSiBCu14 and FeSiBNbCu alloys15, respectively. In addition, the co-addition of Cu and P can further refine the nanograins, which results from the interaction between solute atoms forming CuP clusters16. The crystallization process is closely tied to the microstructure of the amorphous precursors, including short range order, free volume, chemical homogeneity and particularly nucleation sites. Thus a comprehensive understanding of how the amorphous precursors transform during crystallization is important for advanced materials design.
For nanocrystalline alloys with heterogeneous nucleation, various methods have been proposed to increase nucleation sites for the refinement of nanograins, such as the addition of Cu into the FeZrNbB alloy17. When the amorphous precursors contain insoluble atoms or atom pairs, such as Cu, Nb and Cu-P, they tend to precipitate out and agglomerate into insoluble clusters at the early stage of the annealing process, which can serve as nucleation sites for nanograins. However, little is known about the actual density of nucleation sites during crystallization, which is usually inferred from the size of the nanograins. This inference is probably inaccurate in some cases because some solute atoms such as P can also restrain the growth of nanograins18. The difficulty lies in the facts that nucleation sites are too small to be characterized and that one can hardly discern which clusters can effectively serve as nucleation sites. From another point of view, however, crystallites precipitate out on nucleation sites at the very beginning of crystallization with the formation of nanovoids at grain boundaries19. Positron annihilation lifetime spectroscopy (PALS) is an efficient method to provide precise information about characteristic positron annihilation sites in nanocrystalline alloys, such as interstitial holes, vacancies and grain boundaries20,21. This technique can be used to characterize the formation and growth of crystallites, which effectively reflects the density of nucleation sites during the crystallization. Meanwhile, Mössbauer spectroscopy can precisely identify the local structure around atomic nuclei owing to its high energy resolution, with which information about the short-range order in the amorphous matrix and the chemical compositions of the nanograins can be obtained22.
In this work, we analyze the local structure differences in both melt spun and annealed Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloys to reveal the critical factors dominating the crystallization. Specifically, we focus on the microstructure of the amorphous precursors, the nucleation sites at the beginning of crystallization and the crystal phases after the nanocrystallization. Furthermore, we aim to understand the effects of local structure, nucleation sites and crystallization behavior of the amorphous precursors on the magnetic properties of the nanocrystalline alloys.
## Results and Discussion
### Thermal properties
As shown in Fig. 1, differential scanning calorimetry (DSC) curves are collected to investigate the thermal dynamic characteristics of Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloys. All these alloys have two conspicuous and separated exothermic peaks marked as TX1 and TX2, which indicates that the crystallization of Fe81Si x B10P8−xCu1 alloys occurs in two stages. For Fe81Si x B10P8−xCu1 alloys with x = 0, 2, 4, 6 and 8, TX1 is 713, 688, 697, 701 and 704 K, respectively. It is worth noting that 2 at.% of Si significantly advances the primary crystallization but postpones the secondary crystallization, and then both TX1 and TX2 increase gradually as Si further substitutes for P. The temperature interval (ΔTX = TX2 − TX1) between the two peaks shows a maximum of about 117 K for the Fe81Si4B10P4Cu1 alloy. A smaller ΔTX demands stricter control of the annealing treatment during the crystallization process, especially for the Fe81B10P8Cu1 alloy, in which the secondary crystallization readily tends to occur. Thus, an appropriate substitution of Si for P in Fe81Si x B10P8−xCu1 alloys is advantageous for the nanocrystallization.
### Local structure in amorphous precursors and crystal phases
Figure 2 shows the X-ray diffractometry (XRD) patterns for the melt spun and annealed Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloys. Each diffraction pattern of the melt spun ribbons in Fig. 2(a) exhibits a typical broad halo without any appreciable crystalline peak, indicating the formation of an amorphous structure. According to the results of DSC, all Fe81Si x B10P8−xCu1 alloy ribbons are annealed at their individual TX1 for 5 min. As shown in Fig. 2(b), a weak crystalline peak located at 2θ = 45° corresponding to bcc Fe suggests the beginning of primary crystallization. Figure 2(c) presents the diffraction patterns of the Fe81Si x B10P8−xCu1 nanocrystalline alloys obtained by annealing their amorphous precursors at 743 K for 5 minutes. In each pattern, three distinct diffraction peaks corresponding to the bcc Fe phase appear. The intensity of these peaks is enhanced as the Si content x increases, which implies that the crystallinity of Fe81Si x B10P8−xCu1 alloys increases as P is gradually replaced by Si. Meanwhile, according to the Scherrer equation, the mean grain size of the α-Fe phase is estimated to increase.
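The grain-size estimate mentioned here follows from the Scherrer equation, D = Kλ/(β·cosθ), with λ = 0.154056 nm for Cu Kα radiation (see Methods) and the bcc Fe peak near 2θ = 45°. The sketch below illustrates the relation; the FWHM values and the shape factor K = 0.9 are illustrative assumptions, not values fitted to the patterns in Fig. 2:

```python
import math

def scherrer_grain_size(fwhm_deg, two_theta_deg, wavelength_nm=0.154056, k=0.9):
    """Scherrer estimate of mean crystallite size (nm) from XRD peak broadening."""
    beta = math.radians(fwhm_deg)            # FWHM of the peak, in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Hypothetical FWHM values for the bcc Fe peak near 2-theta = 45 degrees:
print(round(scherrer_grain_size(0.45, 45.0), 1))  # broader peak -> smaller grains
print(round(scherrer_grain_size(0.30, 45.0), 1))  # narrower peak -> larger grains
```

A peak that sharpens as x increases therefore translates directly into a larger estimated grain size, consistent with the trend stated above.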
A further study with Mössbauer spectroscopy provides more detailed information about the microstructure of as-quenched and nanocrystalline Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloys. As shown in Fig. 3(a), the wide and asymmetrical sextets demonstrate the amorphous nature of all the melt spun alloys23. The hyperfine parameters of the fitted Mössbauer spectra are given in Table 1. Isomer shift (IS) refers to the observed shift of the resonance spectrum, quadrupole splitting (QS) results from the asymmetrical distribution of charge around the atomic nucleus, and the isomer shift change (DTI) is a coupling parameter that accounts for the observed asymmetry of the experimental spectrum. The average magnetic hyperfine field (Bhf,a) describes the average hyperfine interaction and is proportional to the spin-exchange interaction between magnetic atoms in each sample. It is noted that both Fe81B10P8Cu1 and Fe81Si4B10P4Cu1 show relatively high Bhf,a compared with the other amorphous alloys. As reported earlier, the strengthened magnetic interaction can be ascribed to an increased degree of order in the topological structure24.
Hyperfine field distributions obtained from the Mössbauer spectra of melt spun Fe81Si x B10P8−xCu1 alloys are depicted in Fig. 3(b). Magnetic hyperfine field distributions are usually decomposed into several Gaussian components to investigate the chemical short range orders during structural relaxation or the crystal phases during crystallization25,26,27. However, this is less accurate for melt spun metallic glasses, where excess free volume and residual stress distort the coordination environment of Fe atoms. Nevertheless, all the hyperfine field distributions distinctly separate into two regions in Fig. 3(b), which suggests the coexistence of two distinguishable zones in the amorphous structure of the melt spun ribbons, namely the low field region (8 T to 15 T) with weakened magnetic interactions and the high field region (15 T to 35 T) with enhanced magnetic interactions28. The low field region indicates Fe-deficient zones with plentiful B atoms and a few Cu atoms occupying the neighbor shell of Fe atoms, while the high field region implies Fe-enriched ones29,30. Therefore, the relative area of the low field region in Fig. 3(b) can be utilized to evaluate the chemical homogeneity of the amorphous structure. The maximum area ratio of the low field region in the Fe81Si2B10P6Cu1 alloy manifests a high degree of inhomogeneity in the amorphous structure, which will promote crystallization during annealing, while the minimum value in the Fe81B10P8Cu1 alloy indicates a high degree of homogeneity and leads to a high TX1, consistent with the results from DSC.
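The homogeneity measure used here, the relative area of the low-field region, can be sketched numerically. The model P(B) below is a hypothetical bimodal distribution (two Gaussian components with illustrative 0.2/0.8 weights, not a fit to the spectra in Fig. 3(b)), split at 15 T as in the text:

```python
import math

def gaussian(x, mu, sigma):
    """Normalized Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def low_field_fraction(split=15.0, b_min=8.0, b_max=35.0, n=2701):
    """Relative area of the low-field region (b_min..split) of a model P(B)."""

    def p(b):
        # Hypothetical bimodal hyperfine-field distribution: a minor low-field
        # component (Fe-deficient zones) plus a dominant high-field component
        # (Fe-enriched zones); the 0.2/0.8 weights are illustrative only.
        return 0.2 * gaussian(b, 12.0, 2.0) + 0.8 * gaussian(b, 24.0, 4.0)

    h = (b_max - b_min) / (n - 1)
    grid = [b_min + i * h for i in range(n)]
    total = sum(p(b) for b in grid) * h
    low = sum(p(b) for b in grid if b <= split) * h
    return low / total

print(round(low_field_fraction(), 3))
```

A larger fraction would indicate a higher degree of chemical inhomogeneity, as argued above for the x = 2 alloy.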
Figure 4 displays Mössbauer spectra and the corresponding hyperfine field distributions of Fe81Si x B10P8−xCu1 nanocrystalline alloys obtained by annealing their amorphous precursors at 743 K for 5 min. The split sextets superimposed upon the broadened spectra in Fig. 4(a) and the emergence of split peaks at about 33 T corresponding to bcc Fe in Fig. 4(b) characterize a microstructure with nanograins embedded in the residual amorphous matrix31. As reported previously32, crystal phases in FeSi alloys (<10 at.% Si) prefer to form the bcc Fe structure during crystallization, where the central Fe atom has 6~8 nearest-neighbor Fe atoms with Si atoms occupying the remaining sites. The bcc Fe structure with all eight nearest sites occupied by Fe atoms is defined as an A8 configuration. Similarly, the A7 and A6 configurations denote the bcc Fe structure with seven and six nearest-neighbor Fe atoms, respectively. Therefore, to further explore the specific site occupancy in the nanocrystallites of Fe81Si x B10P8−xCu1 alloys, the hyperfine field distributions are precisely decomposed into several Gaussian components. The gray histograms in Fig. 4(b) represent the actual hyperfine field distributions, while the dotted black lines represent the fitting results obtained from the corresponding colored components. The excellent match between them confirms that the Gaussian fitting results can provide accurate structural information on the FeSiBPCu nanocrystalline alloys. Peaks at hyperfine fields of about 33 T, 31.5 T and 28.5 T can be attributed to the A8, A7 and A6 configurations, respectively, as suggested by previous studies14,33. The other peaks in Fig. 4(b) can be assigned to Fe atoms with various coordination surroundings in the residual amorphous matrix.
It is noteworthy that only nanograins with the A8 configuration precipitate out in the Fe81B10P8Cu1 alloy because of the lack of Si, but for the remaining Si-containing alloys, both A8 and A7 configurations appear in the crystal phases, resulting in the broadening or splitting of peaks. When x reaches 6, all three configurations coexist in the bcc Fe crystal phases. The area ratio of each sub-peak and the sum of those corresponding to bcc Fe are shown in Table 2. As P is gradually replaced by Si in the Fe81Si x B10P8−xCu1 alloys, Si atoms tend to occupy more of the nearest sites of Fe atoms in the bcc Fe structure, which diversifies the nanocrystallites and increases the crystallinity of the alloy ribbons. This can be attributed to two aspects. On one hand, Si preferentially partitions into the bcc nanocrystallites rather than the amorphous matrix during the primary crystallization14,32. On the other hand, the decrease of P in the amorphous matrix weakens its inhibiting effect on grain growth34.
As shown in Fig. 5, the bright field images of transmission electron microscopy (TEM) and the corresponding selected area electron diffraction (SAED) patterns confirm the amorphous/nanocrystalline composite structures of the 743 K annealed Fe81B10P8Cu1, Fe81Si4B10P4Cu1 and Fe81Si8B10Cu1 ribbons, with average grain sizes of 20 nm, 25 nm and 36 nm, respectively. This suggests that, accompanied by the increasing crystallinity, the nanograins coarsen as P is replaced by Si, which agrees well with the XRD observation. It has been reported that CuP clusters are able to reduce atomic migration and pin down the growth of α-Fe35. Therefore, when P is replaced with Si in the Fe81Si x B10P8−xCu1 alloys, besides the diminishing inhibiting effect of P on grain growth, the density of CuP clusters may also decrease, which is expected to increase the grain size.
### Structural defects and nucleation sites
Structural defects of the Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloys at various states are analyzed by PALS, as shown in Fig. 6. Due to the formation of various free volume defects during the rapid solidification, the positron lifetime spectra of all the samples are best fitted with three lifetime components (τ1, τ2 and τ3) representing three different vacancy defects, whose respective intensities are defined as I1, I2 and I3. The relative errors of all the components and their intensities remain within ±0.5%. The shortest lifetime τ1 can be ascribed to the positron lifetime in interstitial holes in the polyhedral packing model, which best describes several metallic glasses36. The intermediate lifetime τ2 is related to positron annihilation in nanovoids in the amorphous matrix and at grain boundaries formed during the annealing37. The longest lifetime τ3, with values of more than 3 ns and intensities of less than 3%, is associated with ortho-positronium annihilation occurring at the surface between the source and the samples, as well as at the internal surfaces between the ribbon sheets in the sandwiched samples38, and thus it is not given here.
As shown in Fig. 6(a), for the melt spun alloy ribbons, τ1 values of about 0.127 ns are shorter than the 0.175 ns of pure iron containing crystal mono-vacancy defects20, while τ2 values ranging from 0.513 ns to 0.537 ns indicate free volume in nanovoids equivalent to at least five-atom vacancy clusters in the amorphous structures39. A. P. Srivastava et al.40 and Y. Z. Lu et al.41 have investigated the glass-forming ability of Co69Fe x Si21−xB10 and Fe48−xCo x Cr15Mo14C15B6Y2 amorphous alloys, respectively, by comparing conspicuous changes in the sizes and intensities of free volume. However, τ1, τ2 and their corresponding intensities in Fig. 6(a) show no significant variation with x, which implies that there is no distinct difference in the glass-forming ability of the Fe81Si x B10P8−xCu1 amorphous alloys.
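A common way to summarize such a multi-component fit is the intensity-weighted mean positron lifetime. The sketch below uses τ1 ≈ 0.127 ns and a τ2 within the range quoted above, but the intensities (and the τ3 value) are hypothetical placeholders, not the measured I1–I3:

```python
def mean_positron_lifetime(components):
    """Intensity-weighted mean lifetime (ns) from fitted (tau_ns, intensity_pct) pairs."""
    norm = sum(i for _, i in components)
    return sum(tau * i for tau, i in components) / norm

# Hypothetical decomposition: tau1 ~ interstitial holes, tau2 ~ nanovoids,
# tau3 ~ ortho-positronium at surfaces (intensity < 3%, as in the text).
melt_spun = [(0.127, 70.0), (0.525, 27.0), (3.2, 2.5)]
print(round(mean_positron_lifetime(melt_spun), 3))
```

Even a small τ3 intensity shifts the mean noticeably because its lifetime is an order of magnitude longer, which is one reason such surface components are reported separately rather than folded into the bulk analysis.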
When Fe81Si x B10P8−xCu1 alloys are annealed at their primary crystallization temperature (TX1) for 5 min, the τ1 and τ2 values in Fig. 6(b) decrease slightly compared with those in Fig. 6(a), which means the defect sizes in the amorphous matrix are reduced due to structural compaction during the annealing. I2, associated with the content of nanovoids, increases drastically after the annealing, which can be attributed to the formation of numerous crystallites. Therefore, the obvious downward trend of I2 versus x in Fig. 6(b) illustrates that the content of crystallites at the beginning of crystallization declines as P is gradually replaced by Si. It is reported that CuP clusters (Cu3P-like) separate from the amorphous matrix during the annealing and serve as heterogeneous nucleation sites for crystallites in FeSiBPCu amorphous alloys42. More crystallites will precipitate out when more nucleation sites are present at the very beginning of crystallization; thus the trend of I2 in Fig. 6(b) reflects the decrease in the density of heterogeneous nucleation sites in Fe81Si x B10P8−xCu1 alloys, which can be attributed to the reduction of beneficial CuP clusters in the amorphous matrix as the amount of P decreases.
Figure 6(c) shows τ1, τ2 and their corresponding intensities for the Fe81Si x B10P8−xCu1 nanocrystalline alloys. The value of τ1 still shows no obvious change after the crystallization, while τ2 increases visibly because of the grain growth. The upward trend of I2 in Fig. 6(c) is ascribed to the increase of grain boundaries with x, which suggests that the crystallinity increases as P is replaced by Si in the Fe81Si x B10P8−xCu1 nanocrystalline alloys. This agrees well with the results of XRD and Mössbauer spectroscopy.
### Magnetic properties
Figure 7 shows B-H loops and Si content dependence of BS and HC of the Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) melt spun alloys. It can be found that both Fe81B10P8Cu1 and Fe81Si4B10P4Cu1 melt spun alloys exhibit relatively high BS, which is consistent with the result of Bhf,a. On the other hand, HC initially increases to a maximum in the Fe81Si2B10P6Cu1 alloy and subsequently decreases, which can be attributed to the variation in magnetic anisotropy associated with the homogeneity of the amorphous matrix observed by Mössbauer spectra.
Figure 8 shows B-H loops as well as BS and HC versus Si content of the Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) nanocrystalline alloys. The value of BS of nanocrystalline alloys can be calculated with the following formula43:
$${B}_{S}={B}_{SC}R+{B}_{SA}(1-R)$$
where R is the crystallinity, and BSC and BSA are the saturation magnetic flux densities of the crystalline and amorphous phases, respectively. BS strongly depends on the crystallinity R because BSC is larger than BSA. With Si substituting for P, the average magnetic moment of the nanocrystallites inevitably decreases due to the replacement of Fe by nonmagnetic Si in the bcc structure. However, the increase of crystallinity dominates as Si substitutes for P in the Fe81Si x B10P8−xCu1 alloys, which results in the improvement of BS from 1.67 T to 1.75 T. As HC ∝ D6 (where D is the average grain size)44, the increase of HC from 9.13 A/m to 36.92 A/m with increasing x can be attributed to grain growth resulting from the diminishing inhibiting effect of P and CuP clusters.
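Both relations can be made concrete with a small numerical sketch. The phase values below (BSC for the bcc Fe(Si) grains, BSA for the residual amorphous matrix, the crystallinities R, and the reference grain size/coercivity pair) are illustrative assumptions, not quantities fitted to the data:

```python
def saturation_flux_density(r, b_sc, b_sa):
    """Rule of mixtures for a two-phase alloy: B_S = B_SC*R + B_SA*(1 - R)."""
    return b_sc * r + b_sa * (1 - r)

def herzer_coercivity(hc_ref, d_ref, d):
    """Scale coercivity with grain size via H_C proportional to D^6 (Herzer model)."""
    return hc_ref * (d / d_ref) ** 6

# A higher crystallinity R raises B_S because B_SC > B_SA (values hypothetical):
print(round(saturation_flux_density(0.45, 1.95, 1.45), 3))
print(round(saturation_flux_density(0.60, 1.95, 1.45), 3))
# Even modest grain coarsening raises H_C steeply under D^6 scaling:
print(round(herzer_coercivity(9.13, 20.0, 25.0), 1))
```

The D^6 term explains why keeping the nanograins fine is so decisive for soft magnetic behavior: a 25% increase in grain size alone multiplies the model coercivity almost fourfold.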
The Fe81Si4B10P4Cu1 alloy, whether melt spun or nanocrystallized, exhibits simultaneously high BS and low HC compared with the others. This implies that an appropriate content ratio of Si to P in the Fe81Si x B10P8−xCu1 alloys can not only improve the topological order and homogenize the amorphous matrix, but also provide adequate heterogeneous nucleation sites and promote the crystallinity, all of which contribute to the enhancement of the magnetic properties.
## Conclusions
The microstructure of Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloys and their crystallization mechanism are investigated by Mössbauer spectroscopy and the positron annihilation technique. The topological structure and chemical homogeneity vary considerably even though the Fe content remains the same. Meanwhile, the reduction of crystallites at the early stage of crystallization suggests that the density of heterogeneous nucleation sites during annealing decreases with increasing x, which reflects the variation in the content of beneficial CuP clusters in the amorphous matrix. Although the grains gradually coarsen as Si substitutes for P, the inhibiting effect of CuP clusters and P atoms on grain growth diminishes, leading to an increase of crystallinity. Furthermore, Si tends to occupy lattice positions in bcc Fe, which decreases the average magnetic moment of the nanograins, but this contribution to BS appears to be far smaller than that from the increase of crystallinity. Appropriately tuning the content of Si and P in the FeSiBPCu alloys can not only improve the topological structure to strengthen the magnetic interaction in the amorphous matrix and the chemical homogeneity to decrease the magnetic anisotropy, but also optimize the effective CuP clusters during crystallization to refine the nanograins and promote the crystallinity. This is an efficient route to improve the overall magnetic properties without sacrificing the glass-forming ability.
## Methods
The Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) alloy ingots were prepared by induction melting mixtures of industrial raw materials Fe (99.90%), Cu (99.99%), Si-Fe (Si: 99.59%, Fe: 0.27%), P-Fe (P: 26.11%, Fe: 73.80%) and B-Fe (B: 17%, Fe: 82.90%) in an argon atmosphere with a Ti deoxidant. To ensure the homogeneity of the chemical components, the ingots were remelted four times with electromagnetic stirring. Amorphous ribbons with a cross section of about 0.02 × 2.5 mm2 were then produced by the melt-spinning technique on a single-roller copper wheel under an argon atmosphere with a surface velocity of about 40 m/s. Thermal properties of the melt-spun ribbons were measured by DSC at a heating rate of 20 K/min under high purity argon flow. To investigate the density of heterogeneous nucleation sites at the beginning of primary crystallization, the Fe81Si x B10P8−xCu1 (x = 0, 2, 4, 6 and 8) amorphous ribbons, sandwiched between two quartz plates, were sealed in a high vacuum furnace and isothermally annealed for 5 min at their individual primary crystallization temperatures TX1 obtained from the DSC curves, followed by furnace cooling. Nanocrystalline ribbons were obtained by annealing the melt spun alloy ribbons at 743 K for 5 min. All annealing was carried out with a heating rate of 1 K/s.
The microstructure of the melt spun and annealed ribbons was characterized using both XRD with Cu Kα radiation (λ = 0.154056 nm) and room-temperature Mössbauer spectroscopy with 57Co as the γ-ray source. Velocity calibration of the Mössbauer spectrometer was accomplished with an α-Fe foil of 25 μm thickness. Assuming a distribution of hyperfine fields, the Mössbauer spectra were analyzed by a least-squares fitting procedure with the NORMOS program45. TEM specimens were prepared by ion thinning from both sides of the ribbons and then investigated with a Tecnai 12 at 120 kV. PALS was employed to obtain detailed information about the free volume in the melt spun and annealed ribbons. Specimens for the PALS measurement were composed of two stacks of 10 layers of the sample ribbon with the 22Na source sandwiched between them to ensure the annihilation of positrons within the volume of the samples. Positron annihilation lifetime spectra contained 10^6 counts and the time resolution of the spectrometer was 230 ps. Each positron lifetime spectrum was fitted with three components using LT software version 946. HC and BS of the samples were measured with a DC B-H loop tracer.
## References
1. Lopatina, E. et al. Surface crystallization and magnetic properties of Fe84.3Cu0.7Si4B8P3 soft magnetic ribbons. Acta Mater. 96, 10–17 (2015).
2. Taghvaei, A. H. et al. Microstructure and magnetic properties of amorphous/nanocrystalline Co40Fe22Ta8B30 alloy produced by mechanical alloying. Mater. Chem. Phys. 134, 1214–1224 (2012).
3. Herzer, G. Modern soft magnets: Amorphous, nanocrystalline materials. Acta Mater. 61, 718–734 (2013).
4. Naz, G. J., Dong, D. D., Geng, Y. X., Wang, Y. M. & Dong, C. Composition formulas of Fe-based transition metals-metalloid bulk metallic glasses derived from dualcluster model of binary eutectics. Sci. Rep. 7, 9150 (2017).
5. Yoshizawa, Y., Oguma, S. & Yamauchi, K. New Fe-based soft magnetic alloys composed of ultrafine grain structure. J. Appl. Phys. 64, 6044–6046 (1988).
6. Makino, A. et al. Low core losses and magnetic properties of Fe85-86Si1-2B8P4Cu1 nanocrystalline alloys with high B for power applications. J. Appl. Phys. 109, 07A302 (2011).
7. Miglierini, M. et al. Magnetic microstructure of NANOPERM-type nanocrystalline alloys. Phys. Status Solidi B 243, 57–64 (2006).
8. Lashgari, H. R. et al. Composition dependence of the microstructure and soft magnetic properties of Fe-based amorphous/nanocrystalline alloys: A review study. J. Non Cryst. Solids 391, 61–82 (2014).
9. Ohta, M. & Yoshizawa, Y. Magnetic properties of nanocrystalline Fe82.65Cu1.35Si x B16−x alloys (x = 0–7). J. Appl. Phys. 91, 062517 (2007).
10. Hono, K., Ping, D. H., Ohnuma, M. & Onodera, H. Cu clustering and Si partitioning in the early crystallization stage of an Fe73.5Si13.5B9Nb3Cu1 amorphous alloy. Acta Mater. 47, 997–1006 (1999).
11. Ayers, J. D., Harris, V. G., Sprague, J. A., Elam, W. T. & Jones, H. N. Acta Mater. 46, 1861–1874 (1998).
12. Makino, A., Men, H., Kubota, T., Yubuta, K. & Inoue, A. FeSiBPCu Nanocrystalline Soft Magnetic Alloys with High BS of 1.9 Tesla Produced by Crystallizing Hetero-Amorphous Phase. Mater. Trans. 50, 204–209 (2008).
13. Silveyra, J. M., Cremaschi, V. J., Janičkovič, D., Švec, P. & Arcondo, B. Structural and magnetic study of Mo-doped FINEMET. J. Magn. Magn. Mater. 323, 290–296 (2011).
14. Parsons, R. et al. Effect of Si on the field-induced anisotropy in Fe-rich nanocrystalline soft magnetic alloys. J. Alloy. Compd. 695, 3156–3162 (2017).
15. Srinivas, M., Majumdar, B., Bysakh, S., Raja, M. M. & Akhtar, D. Role of Si on structure and soft magnetic properties of Fe87−xSixB9Nb3Cu1 Ribbons. J. Alloy. Compd. 583, 427–433 (2014).
16. Makino, A., Men, H., Kubota, T., Yubuta, K. & Inoue, A. New Fe-metalloids based nanocrystalline alloys with high BS of 1.9T and excellent magnetic softness. J. Appl. Phys. 105, 07A308 (2009).
17. Wu, Y. Q., Bitoh, T., Hono, K., Makino, A. & Inoue, A. Microstructure and properties of nanocrystalline Fe-Zr-Nb-B soft magnetic alloys with low magnetostriction. Acta Mater. 49, 4069–4077 (2001).
18. Roy, R. K., Shen, S., Kernion, S. J. & McHenry, M. E. Effect of P addition on nanocrystallization and high temperature magnetic properties of low B and Nb containing FeCo nanocomposites. J. Appl. Phys. 111, 07A301 (2012).
19. Sawatzki, S., Kübel, C., Ener, S. & Gutfleisch, O. Grain boundary diffusion in nanocrystalline Nd-Fe-B permanent magnets with low-melting eutectics. Acta Mater. 115, 354–363 (2016).
20. Krištiaková, K. & Švec, P. Origin of cluster and void structure in melt-quenched Fe-Co-B metallic glasses determined by positron annihilation at low temperatures. Phys. Rev. B 64, 014204 (2001).
21. Lashgari, H. R., Cadogan, J. M., Chu, D. & Li, S. The effect of heat treatment and cyclic loading on nanoindentation behaviour of FeSiB amorphous alloy. Mater. Des. 92, 919–931 (2016).
22. Kohout, J. et al. Low temperature behavior of hyperfine fields in amorphous and nanocrystalline FeMoCuB. J. Appl. Phys. 117, 17B718 (2015).
23. Babilas, R., Mariola, K. G., Burian, A. & Temleitner, L. A short-range ordering in soft magnetic Fe-based metallic glasses studied by Mössbauer spectroscopy and Reverse Monte Carlo method. J. Magn. Magn. Mater. 406, 171–178 (2016).
24. Nabiałek, M. Influence of the Quenching Rate on the Structure and Magnetic Properties of the Fe-Based Amorphous Alloy. Arch. Metall. Mater. 61, 439–444 (2016).
25. Torrens-Serra, J., Bruna, P., Roth, S., Rodriguez-Viejo, J. & Clavaguera-Mora, M. T. Structural and magnetic characterization of FeNbBCu alloys as a function of Nb content. J. Phys. D: Appl. Phys. 42, 095010 (2009).
26. Cao, C. C. et al. Evolution of structural and magnetic properties of the FeCuBP amorphous alloy during annealing. J. Alloy. Compd. 722, 394–399 (2017).
27. Cesnek, M. et al. Hyperfine interactions in nanocrystallized NANOPERM-type metallic glass containing Mo. Hyperfine Interact. 237, 132 (2016).
28. Conde, C. F. et al. Magnetic and structural characterization of Mo-Hitperm alloys with different Fe/Co ratio. J. Alloy. Compd. 509, 1994–2000 (2011).
29. Borrego, J. M. et al. Crystallization behavior and magnetic properties of Cu-containing Fe-Cr-Mo-Ga-P-C-B alloys. J. Appl. Phys. 100, 043515 (2006).
30. Jiraskova, Y., Zabransky, K., Vujtek, M. & Zivotsky, O. Changes in the hyperfine interactions in the Fe80Nb3Cu1B16 metallic glass under tensile loading. J. Magn. Magn. Mater. 322, 1939–1946 (2010).
31. Xia, G. T., Wang, Y. G., Dai, J. & Dai, Y. D. Effects of Cu cluster evolution on soft magnetic properties of Fe83B10C6Cu1 metallic glass in two-step annealing. J. Alloy. Compd. 690, 281–286 (2017).
32. Stearns, M. B. Internal Magnetic Fields, Isomer Shifts, and Relative Abundances of the Various Fe Sites in FeSi Alloys. Phys. Rev. 129, 1136–1144 (1963).
33. Pulido, E., Navarro, I. & Hernando, A. Mössbauer spectroscopy in nanocrystalline materials. IEEE Trans. Magn. 28, 2424–2426 (1992).
34. Matsuura, M., Zhang, Y., Nishijima, M. & Makino, A. Role of P in Nanocrystallization of Fe85Si2B8P4Cu1. IEEE Trans. Magn. 50, 1–4 (2014).
35. Wang, Y. C., Takeuchi, A., Makino, A., Liang, Y. Y. & Kawazoe, Y. Nano-crystallization and magnetic mechanisms of Fe85Si2B8P4Cu1 amorphous alloy by ab initio molecular dynamics simulation. J. Appl. Phys. 115, 173910 (2014).
36. Gaskell, P. H. A new structural model for transition metal-metalloid glasses. Nature 276, 484–485 (1978).
37. Filipecka, K., Pawlik, P. & Filipecki, J. The effect of annealing on magnetic properties, phase structure and evolution of free volumes in Pr-Fe-B-W metallic glasses. J. Alloy. Compd. 694, 228–234 (2017).
38. Tong, H. Y. et al. Investigations on nanocrystalline Fe78B13Si9 alloys by positron annihilation spectroscopy. J. Appl. Phys. 72, 5124–5129 (1992).
39. Srivastava, A. P. et al. Correlation of soft magnetic properties with free volume and medium range ordering in metallic glasses probed by fluctuation microscopy and positron annihilation technique. J. Magn. Magn. Mater. 324, 2476–2482 (2012).
40. Srivastava, A. P. et al. Investigation of medium range order and glass forming ability of metallic glass Co69Fe x Si21−xB10 (x = 3, 5, and 7). J. Phys. D: Appl. Phys. 49, 225303 (2016).
41. Lu, Y. Z., Huang, Y. J., Zheng, W. & Shen, J. Free volume and viscosity of Fe-Co-Cr-Mo-C-B-Y bulk metallic glasses and their correlation with glass-forming ability. J. Non Cryst. Solids 358, 1274–1277 (2012).
42. Makino, A. Nanocrystalline soft magnetic Fe-Si-B-P-Cu alloys with high B of 1.8–1.9 T contributable to energy saving. IEEE Trans. Magn. 48, 1331–1335 (2012).
43. Ohata, M. & Yoshizawa, Y. Cu addition effect on soft magnetic properties in Fe-Si-B alloy system. J. Appl. Phys. 103, 07E722 (2008).
44. Herzer, G. Grain size dependence of coercivity and permeability in nanocrystalline ferromagnets. IEEE Trans. Magn. 26, 1397–1402 (1990).
45. Brand, R. A. Improving the validity of hyperfine field distributions from magnetic alloys: Part I: Unpolarized source. Nucl. Instrum. Meth. B 28, 398–416 (1987).
46. Kansy, J. Microcomputer program for analysis of positron annihilation lifetime spectra. Nucl. Instrum. Meth. A 374, 235–244 (1996).
## Acknowledgements
This work is supported by National Natural Science Foundation of China (No. 51571115), the Six Talent Peaks Project of Jiangsu Province, China (No. 2015-XCL-007), the Research and Innovation Plan of the Postgraduate Research in Jiangsu province (KYCX17-0254) and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.
## Author information
### Contributions
C.C.C. conceived and designed the research. Y.D.D., F.M.P. and J.K.C. gave guidance for experiments. C.C.C., L.Z., Y.M. and X.B.Z. wrote the manuscript. Y.G.W. commented on the manuscript writing and the result discussions. All authors discussed the results and revised the manuscript.
### Corresponding author
Correspondence to Y. G. Wang.
## Ethics declarations
### Competing Interests
The authors declare that they have no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cao, C.C., Wang, Y.G., Zhu, L. et al. Local structure, nucleation sites and crystallization behavior and their effects on magnetic properties of Fe81Si x B10P8−xCu1 (x = 0~8). Sci Rep 8, 1243 (2018). https://doi.org/10.1038/s41598-018-19665-8
• ### High Bs of FePBCCu nanocrystalline alloys with excellent soft-magnetic properties
Long Hou, Weiming Yang, Qiang Luo, Xingdu Fan, Haishun Liu & Baolong Shen. Journal of Non-Crystalline Solids (2020)
• ### Accelerated design of Fe-based soft magnetic materials using machine learning and stochastic optimization
Yuhao Wang, Yefan Tian, Tanner Kirk, Omar Laris, Joseph H. Ross, Ronald D. Noebe & Raymundo Arróyave. Acta Materialia (2020)
• ### Structural and magnetic characterization of Al microalloying nanocrystalline FeSiBNbCu alloys
Jiaxin Wu, Aina He, Yaqiang Dong, Jiawei Li & Yunzhuo Lu. Journal of Magnetism and Magnetic Materials (2020)
• ### Hyperfine interactions in Fe/Co-B-Sn amorphous alloys by Mössbauer spectrometry
Marcel B. Miglierini, Július Dekan, Martin Cesnek, Irena Janotová, Peter Švec, Marek Bujdoš & Jaroslav Kohout. Journal of Magnetism and Magnetic Materials (2020)
• ### Modulating the crystallization process of Fe82B12C6 amorphous alloy via rapid annealing
L. Zhu, H. Zheng, S.S. Jiang & Y.G. Wang. Journal of Alloys and Compounds (2019)
| 2020-10-31 08:25:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6370684504508972, "perplexity": 6471.732024325726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107916776.80/warc/CC-MAIN-20201031062721-20201031092721-00644.warc.gz"}
https://13-lab.com/nellore-mla-dszsxe/d79781-how-to-find-the-3db-bandwidth-of-a-signal | But it has a moderate spectral efficiency. What we discussed till now was with respect to analog signals. Rise time and 3 dB bandwidth are inversely proportional, with a proportionality constant of ~0.35 when the system's response resembles that of an RC low-pass filter. The bandwidth of an oscilloscope is the maximum frequency that can get through the front end with less than 70% attenuation, which is -3 dB of the signal at the oscilloscope input. That's the 3 dB bandwidth. First of all, the bandwidth of a periodic signal is usually defined as the difference between its highest and lowest frequency components. Identify the passband (this may be predetermined; for instance, for audio-stereo equipment the passband will be at least 20–20,000 Hz) and determine what the average passband gain is. I think this question is about the Fourier transform of the sinc function in time - it produces a rectangular spectrum that is f/2. Previously we looked at the classic relationship of rise time (t_r) and bandwidth (f_3dB) [Ref 1], captured by this equation: t_r ≈ 0.35 / f_3dB. Eric Bogatin also provided Rule of Thumb #2 for estimating the signal bandwidth from the clock frequency [Ref 2]. There are several ways to evaluate the bandwidth of a signal in the time domain and frequency domain. Sweep it with a frequency generator and record the output. A modified version of this … Find a signal's bandwidth from its harmonics. Use the marker() function in output equations to calculate the 3 dB bandwidth of a filter. Expressions were then used to find a relationship between rise time and 3 dB electrical bandwidth: t_r ≈ 0.35 / B_3dB.
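The ~0.35 rise-time relation quoted above can be checked numerically for a single-pole RC response. A small self-contained sketch (pure Python, standard library only; the time constant chosen is arbitrary since the product cancels it):

```python
import math

# For a first-order RC low-pass, the step response is 1 - exp(-t/tau).
# The 10%-90% rise time is t_r = tau*ln(9), and the -3 dB bandwidth
# is f_3dB = 1/(2*pi*tau).
tau = 1e-9  # any time constant works; the product below is independent of it

t10 = -tau * math.log(0.9)   # time to reach 10% of the final value
t90 = -tau * math.log(0.1)   # time to reach 90% of the final value
t_r = t90 - t10              # equals tau * ln(9)

f_3db = 1.0 / (2.0 * math.pi * tau)

# The product t_r * f_3dB = ln(9)/(2*pi) ~ 0.3497, i.e. t_r ~ 0.35 / f_3dB.
print(round(t_r * f_3db, 4))  # 0.3497
```

This is where the 0.35 comes from: it is exact only for a single-pole response; steeper (e.g. Gaussian) responses have a slightly different constant.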
If I am using a 1-pole filter to remove all the signal components above 2.5 GHz to prevent interference with a Bluetooth receiver, and my input signal is 1 V and the pole frequency is 2.5 GHz, I have a problem. Noise equivalent, zero to zero, -3dB, -6dB, -60dB? Bandwidth. Vote. It depends. 0. your example was for a 2nd order transfer function.that \$\omega\$ can be solved directly and exactly. Show Hide all comments. obw(x,Fs); ... Estimate the half-power bandwidth of each channel. Problem. In filters, optical filters, electronic amplifiers, the half-power point is a commonly used definition for the cutoff frequency.. bandpower | meanfreq | medfreq | obw | powerbw × Open Example. 3dB means the bandwidth when the signal power is 0.707 times of input signal. Show Hide all comments. Follow 260 views (last 30 days) raj on 9 Dec 2011. The â3 dB unity-gain bandwidth of an amplifier with a small signal applied usually 200 mV p-p. A low level signal is used to determine bandwith because this eliminates the effects of slew rate limit on the signal. In the above example the OBW for 2FSK is ~40 kHz, while for 2GFSK (B*T ⦠Often, the desired bandwidth is one of the determining parameters used to decide upon an antenna. And using modulation, the signal would be positioned at a frequency f = f 0 (where f 0 is the carrier wave), with a symmetrical shape like the picture around this frequency. I need to calculate the 3dB bandwidth from data containing Power in dB vs Frequency in Hz. An expression for the Gaussian Filter with 3dB Bandwidth is derived here. 
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies.It is typically measured in hertz, and depending on context, may specifically refer to passband bandwidth or baseband bandwidth.Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal ⦠signal s(t)= (2a) / (t2+a2) A) Determine the essential bandwidth B Hz of s(t) such that the energy contained in the spectral components of s(t) of frequency below B Hz is 99% of the signal energy. B) Determine the half-power or 3dB bandwidth of the signal With this, I ⦠The spectral amplitude in volts is 70.7% of that maximum. \$\begingroup\$ it's not a hint. In signal processing and control theory the bandwidth is the frequency at which the closed-loop system gain drops to â3 dB. For FSK modulation this formula approximately gives the real occupied bandwidth of the signal, for GFSK modulation the bandwidth also depends on the value of the B*T factor of the Gaussian filter. Here the spectral density is half its maximum value. Basically -3dB is 0.707 units and it is very commonly used with filters of all types (low pass, bandpass, high pass...). How do I obtain 3dB Bandwidth of a bandpass Filter? Commented: Walter Roberson on 25 Mar 2019 Accepted Answer: Daniel Shub. 0 Comments . 0. March 18, 2019 by Bob Witte Comments 3. Specifically, bandwidth is specified as the frequency at which a sinusoidal input signal is attenuated to 70.7% of its original amplitude, also known as the -3 dB point. Rise time and 3 dB bandwidth are two closely-related parameters used to describe the limit of a system's ability to respond to abrupt changes in an input signal. Bandwidth. 0 â® Vote. (actually you would first solve for \$\omega^2\$.) Estimate the 99% occupied bandwidth of the signal and annotate it on a plot of the power spectral density (PSD). 
Most scope companies design the scope/probe response to be as flat as possible throughout its specified frequency range, and most customers simply rely on the specified bandwidth of the oscilloscope or oscilloscope probes . fb = bandwidth(sys) returns the bandwidth of the SISO dynamic system model sys.The bandwidth is the first frequency where the gain drops below 70.79% (-3 dB) of its DC value. it is the essentially the only procedure for "how to find [the] -3 dB bandwidth for any [rational] transfer function". So for example if we have a 1Hz clock signal (keeping this simple) and we want a (flat bandwidth) rise time of 0.044444 seconds (for simple division) we would require a bandwidth of 9Hz (0.40/0.044444=9.0), but since bandwidth is usually specified as 3db down, we'd really need a bandwidth of around 13Hz. \$\begingroup\$ What bandwidth? i have a audio signal ,I want to know the bandwidth because its specgram has artefacts i think it may be due to the sample rate. The frequency response is symmetrical around f = 0, for this case. Add two Auto Search markers.Right click inside the graph > Add Auto Search Marker, select Value, and enter -3.Click somewhere on the left side of the trace to add the first 3dB point. Vote. Decibel values relate to a fixed reference level and for bandwidth calculations, the convention is 3 dB relative to the maximum signal amplitude, generally at the fundamental, or first harmonic. GMSK modulation is quite insensitive to non-linearities of power amplifier and is robust to fading effects. Bandwidth is typically measured from the two -3dB points on each end of the response curve. 0 Comments . Signal bandwidth Engineers will ask the question âhow much bandwidth do I need for that signal?â Typically, the question relates to making sure that the signal can propagate through a component or system and come out the other end without any degradation. What are Rise and Fall Times? 
The half-power point or half-power bandwidth is the point at which the output power has dropped to half of its peak value; that is, at a level of approximately -3 dB. The bandwidth is expressed in rad/TimeUnit, where TimeUnit is the TimeUnit property of sys. To find the number of narrowband signals, you could use the Waveform Peak Detection.vi to find the number of peaks in the FFT of your signal. Figure 1 shows a signal passing through a system with finite bandwidth. In a general case where you have a vector representing the magnitude of the transfer function, then you can use find to locate the peak and then find again to locate: a) the maximum frequency below the peak that has a value more than 3dB below the peak b) the minimum frequency above the peak that has a value more than 3dB below the peak The difference is the 3dB bandwidth. It is the frequency range in which the signal's spectral density is nonzero. Digital signals are in rectangular form, either on or off, ie 1 or 0. Commented: Walter Roberson on 25 Mar 2019 Accepted Answer: Daniel Shub. Follow 195 views (last 30 days) raj on 9 Dec 2011. is another fundamental antenna parameter.. Bandwidth describes the range of frequencies over which the antenna can properly radiate or receive energy. We all know that most signals are transmitted in terms of electromagnetic or radio waves. Re: 3-dB Bandwidth think you should calculate the poles, which is done by setting the numerator equal to zero. That could make a big difference in some applications. how to find the bandwidth of a signal. powerbw([x x2],Fs) ans = 1×2 10 4 × 4.4386 9.2208 See Also . i have a audio signal ,I want to know the bandwidth because its specgram has artefacts i think it may be due to the sample rate. Using B*T = 0.5 for 2GFSK modulation, the occupied bandwidth will be always smaller than for general 2FSK modulation. how to find the bandwidth of a signal. Annotate the 3-dB bandwidths on a plot of the PSDs. Solution. 
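Several of the fragments above describe the same recipe for measured data: find the peak, drop 3 dB, and locate the two crossing frequencies. A minimal NumPy sketch of that procedure (`half_power_bandwidth` and the triangular test data are illustrative inventions, not the MATLAB `bandwidth`/`powerbw` functions quoted above):

```python
import numpy as np

def half_power_bandwidth(freq, power_db):
    """Locate the two points where the response falls 3 dB below its peak
    and return (f_lo, f_hi, f_hi - f_lo), interpolating linearly between
    samples."""
    freq = np.asarray(freq, dtype=float)
    power_db = np.asarray(power_db, dtype=float)
    i_pk = int(np.argmax(power_db))
    thr = power_db[i_pk] - 3.0

    def crossing(step):
        i = i_pk
        while 0 <= i + step < len(freq) and power_db[i + step] >= thr:
            i += step
        j = i + step
        if j < 0 or j >= len(freq):
            return freq[i]  # never falls 3 dB below the peak on this side
        # interpolate between the last sample above thr and the first below it
        t = (thr - power_db[i]) / (power_db[j] - power_db[i])
        return freq[i] + t * (freq[j] - freq[i])

    f_lo, f_hi = crossing(-1), crossing(+1)
    return f_lo, f_hi, f_hi - f_lo

# Synthetic response: a triangular peak at 100 Hz falling 1 dB per Hz,
# so the -3 dB points sit at 97 Hz and 103 Hz (bandwidth 6 Hz).
f = np.linspace(90.0, 110.0, 201)
p_db = -np.abs(f - 100.0)
f_lo, f_hi, bw = half_power_bandwidth(f, p_db)
print(round(bw, 3))  # 6.0
```

For a low-pass response (peak at the first sample) the left crossing clamps to the edge, so the returned width reduces to the upper -3 dB frequency, as expected.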
Here, the result is that the signal to noise ratio actually peaks when the filtered bandwidth is only about one quarter of the bitrate.
| 2021-02-26 15:15:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7690990567207336, "perplexity": 1404.6244323008698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357929.4/warc/CC-MAIN-20210226145416-20210226175416-00439.warc.gz"}
http://mathoverflow.net/questions/39485/are-there-motives-which-do-not-or-should-not-show-up-in-the-cohomology-of-any?sort=oldest | # Are there motives which do not, or should not, show up in the cohomology of any Shimura variety?
Let $F$ be a real quadratic field and let $E/F$ be an elliptic curve with conductor 1 (i.e. with good reduction everywhere; these things can and do exist) (perhaps also I should assume E has no CM, even over F-bar, just to avoid some counterexamples to things I'll say later on). Let me assume that $E$ is modular. Then there will be some level 1 Hilbert modular form over $F$ corresponding to $E$. But my understanding is that the cohomology of $E$ will not show up in any of the "usual suspect" Shimura varieties associated to this situation (the level 1 Hilbert modular surface, or any Shimura curve [the reason it can't show up here is that a quaternion algebra ramified at precisely one infinite place must also ramify at one finite place]).
If you want a more concrete assertion, I am saying that the Tate module of $E$, or any twist of this, shouldn't show up as a subquotient of the etale cohomology of the Shimura varieties attached to $GL(2)$ or any of its inner forms over $F$ (my knowledge of the cohomology of Hilbert modular surfaces is poor though; I hope I have this right).
But here's the question. I have it in my head that someone once told me that $E$ (or perhaps more precisely the motive attached to $E$) should not show up in the cohomology of any Shimura variety. This is kind of interesting, because here is a programme for meromorphically continuing the L-function of an arbitrary smooth projective variety over a number field to the complex plane:
1) Observe that automorphic forms for GL_n have very well-behaved L-functions; prove that they extend to the whole complex plane. [standard stuff].
2) Prove the same for automorphic forms on any connected reductive algebraic group over a number field [i.e. prove Langlands functoriality]
3) Prove that the L-functions attached to the cohomology of Shimura varieties can be interpreted in terms of automorphic forms [i.e. prove conjectures of Langlands, known in many cases]
4) Prove that the cohomology of any algebraic variety at all (over a number field) shows up in the cohomology of a Shimura variety. [huge generalisation of Taniyama-Shimura-Weil modularity conjecture]
My understanding is that this programme, nice though it looks, is expected to fail because (4) is expected not to be true. And I believe I was once assured by an expert that the kind of variety for which problems might occur is the elliptic curve over $F$ mentioned above. At the time I did not understand the reasons given to me for why this should be the case, so of course now I can't reproduce them.
Have I got this right or have I got my wires crossed?
EDIT (more precisely, "addition"): Milne's comment below seems to indicate that I did misremember, and that in fact I was probably only told what Milne mentions below. So in fact I probably need to modify the question: the question I'd like to ask now is "is (4) a reasonable statement?".
-
The strategy of Blasius-Rogawski to construct a motive for Hilbert modular forms (base change to unitary then transfer to U(3) then to an inner form) is indeed not known to succeed in this case (because of the shape of the L-packets). I don't know if this strategy and its close cousin are known to fail (way back, Blasius and Rogawski were actually "cautiously optimistic" that it should succeed by transfer to U(4)). But you surely knew this, as you also surely know that the meromorphic continuation of the L-function of E as in your introduction is known anyway. – Olivier Sep 21 '10 at 12:31
I was pretty sure that one couldn't attach a motive to the level 1 eigenform. I know nothing about L-packets or U(4). I know the meromorphic continuation is known for general E/F---this is because E is potentially modular. But even proving E is potentially modular doesn't realise it in the cohomology of a Shimura variety. In fact in the situation above I assumed E was modular, so analytic continuation will be known in this setting. I wanted to emphasize that it wasn't the modularity that was the problem I was interested in, it was the Shimura variety issues. – Kevin Buzzard Sep 21 '10 at 12:46
[clarification: of course in the setting above one can attach a motive because I started with $E$; I mean that in general given a level 1 eigenform I agree that it might be hard to attach a geometric object, when $F$ has even degree] – Kevin Buzzard Sep 21 '10 at 12:56
@Kevin Buzzard: Thanks for this nice summary 1–4 of the Langlands programme! – Timo Keller Sep 21 '10 at 12:59
Blasius has pointed out that the naive generalization of the modularity conjecture fails --- there exist elliptic curves over number fields that are not quotients of the albanese of any Shimura variety --- but I don't know of any reason why the more general version (4) can't be true. (Blasius 2004 MR2058605). – JS Milne Sep 21 '10 at 13:09
Here is an example of an elliptic curve $E$ over $Q(\sqrt{997})$ of conductor 1: $[ 0, w, 1, -24w - 289, -144w - 2334 ]$, where $w=(1+\sqrt{997})/2$ (thanks to Lassina Dembélé; this curve even has rank 2!). Shimura's paper "Construction of class fields and zeta functions of algebraic curves" suggests (according to MathSciNet) how to construct a Shimura variety of dimension 2 that isn't a curve but is associated to the relevant quaternion algebra. Shimura lets the quaternion algebra act on the product of two copies of the upper half plane instead of one, and is able to show the relevant variety is defined over Q by using Siegel modular forms. Perhaps the cohomology of $E$ shows up there? I don't know.
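One can at least check directly that the displayed model has unit discriminant, i.e. good reduction everywhere. A small exact-arithmetic sketch (editorial, not part of the original answer): elements of $\mathbf{Z}[w]$ are encoded as pairs $(x,y)$ meaning $x+yw$, using $w^2 = w + 249$, which follows from $w=(1+\sqrt{997})/2$, and the usual quantities $b_2, b_4, b_6, b_8, \Delta$ are computed from the Weierstrass coefficients:

```python
# Z[w], w = (1+sqrt(997))/2: pairs (x, y) represent x + y*w.
# Since 997 = 1 mod 4, w satisfies w^2 = w + 249, and N(x + y*w) = x^2 + x*y - 249*y^2.

def mul(a, b):
    (x1, y1), (x2, y2) = a, b
    c = y1 * y2  # coefficient of w^2, folded back in via w^2 = w + 249
    return (x1 * x2 + 249 * c, x1 * y2 + y1 * x2 + c)

def add(*els):
    return (sum(e[0] for e in els), sum(e[1] for e in els))

def smul(n, a):
    return (n * a[0], n * a[1])

def norm(a):
    x, y = a
    return x * x + x * y - 249 * y * y

# Weierstrass coefficients [a1, a2, a3, a4, a6] from the answer above.
a1, a2, a3, a4, a6 = (0, 0), (0, 1), (1, 0), (-289, -24), (-2334, -144)

# Standard b-quantities and the discriminant.
b2 = add(mul(a1, a1), smul(4, a2))
b4 = add(smul(2, a4), mul(a1, a3))
b6 = add(mul(a3, a3), smul(4, a6))
b8 = add(mul(mul(a1, a1), a6), smul(4, mul(a2, a6)),
         smul(-1, mul(mul(a1, a3), a4)), mul(a2, mul(a3, a3)),
         smul(-1, mul(a4, a4)))
disc = add(smul(-1, mul(mul(b2, b2), b8)), smul(-8, mul(b4, mul(b4, b4))),
           smul(-27, mul(b6, b6)), smul(9, mul(b2, mul(b4, b6))))

print(disc, norm(disc))  # the discriminant is a unit: its norm is 1
```

The norm of the discriminant comes out to exactly 1, consistent with conductor 1 (this checks good reduction for this model; it says nothing about the rank or modularity claims).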
-
Hi William. This is a nice example to have here, but unfortunately I and probably many people don't know what the bracket notation means. I guess they're the coefficients of a Weierstrass equation, but in which order? – JBorger Sep 21 '10 at 23:23
@James Borger: A Weierstrass model $y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6$ for an elliptic curve is written $[a_1, a_2, a_3, a_4, a_6]$. – Jamie Weigandt Sep 21 '10 at 23:59
P.S.: I would upvote William, but he's currently at 389 reputation, and I have the feeling he wants to stay there. – Jamie Weigandt Sep 22 '10 at 0:00
Hey William. In fact there are lots of conductor 1 examples in the literature. The earliest one I know about is $y^2+xy+e^2y=x^3$ over $\mathbf{Q}(\sqrt{29})$ discovered by Tate, with $e=(5+\sqrt{29})/2$; this is mentioned in Serre's 1972 Inventiones article on the image of Galois in the Tate module. Richard Pinch's thesis has a bunch in, IIRC, and one of these was proved to be modular by an explicit computation, by Socrates and Whitehouse, in 2004. – Kevin Buzzard Sep 22 '10 at 9:13
As for the Shimura variety you mention, from what you write I think you are talking about the Hilbert modular surface attached to the data: these are built over the complexes precisely by acting on two copies of the upper half plane rather than one and have canonical models over $\mathbf{Q}$. I believe that one can check that the cohomology of $E$ does not show up in the cohomology of these surfaces. The interesting cohomology of the surface will be in the middle dimension, and the weights in the middle dimension exclude the possibility that the curve can show up there. – Kevin Buzzard Sep 22 '10 at 9:15
The question should be made precise depending on what you mean by "show up". More precisely:
1. Concerning the cohomology of Shimura varieties there are two points of view: intersection cohomology or ordinary cohomology (this is of course the same for compact Shimura varieties).
2. Concerning the fact that the cohomology shows up in something we can add an option: it may show up potentially.
3. Then we can add another option: to prove it shows up in the Tannakian subcategory of motives generated by the motives of Shimura varieties (even weaker (?): the class in the $K_0$ of motives of your variety is a virtual combination of the classes of motives showing up in Shimura varieties).
Let's first say we look at intersection cohomology. As stated, your hope 4) is false for trivial reasons: the only Artin motives showing up in the intersection cohomology of Shimura varieties are the abelian ones... In fact, by purity they show up only in the $H^0$, which has been computed by Deligne and is abelian.
Of course if you put option (2) in my list this counterexample disappears.
Now you may say: yes, but we can twist an Artin motive by a CM character and ask the same question. This is where I come to the following point: you can say that only because twisting, which is a particular case of Langlands functoriality, is a known case of Langlands functoriality. The point I want to reach is that, in fact, if you suppose Langlands functoriality known, then the fact that your variety shows up in the Tannakian category generated by motives of Shimura varieties implies its L-function is automorphic (tensor product functoriality).
If you suppose Langlands functoriality and your variety shows up potentially in the motive of a Shimura variety then its L-function is automorphic (existence of automorphic induction which implies for example Artin conjecture).
About the intersection cohomology of Shimura varieties: it is now pretty well understood, and I think there is no reason why an arbitrary variety would show up potentially in it. More precisely, the Langlands parameters of automorphic representations showing up in the intersection cohomology of Shimura varieties factor through some representation $r_\mu:\;^L G_E\rightarrow GL_n$, where $G$ is the group attached to the Shimura variety (well, to be more serious I would have to invoke Arthur's cohomological parameters, but it would take 5 hours to write this in detail). Thus I clearly think the class of varieties that show up potentially in the cohomology of Shimura varieties has some serious restrictions...
Now there is another thing I did not speak about: the cohomology of non-compact Shimura varieties, which may not be pure. Here little is known, and it may be that some interesting Galois representations that do not show up in the intersection cohomology of Shimura varieties do show up in this cohomology... I know some people are looking at this (I won't give any name, even if I'm tortured) but, as I said, up to now little is known.
Well, I will stop here, since this is an endless story and one could talk about it for hours...
I agree with Laurent that the question should be made precise before attempting any answer. One precision is, as he said: are we talking about the intersection cohomology (which is the same as the L^2 cohomology, and which will see the motives attached to the discrete automorphic representations of the group G defining the Shimura varieties), or the ordinary cohomology (which sees all the cuspidal automorphic representations, some of the discrete non-cuspidal ones, and some others, no one knows exactly which)? – Joël Sep 22 '10 at 4:04
That said, I'm pretty confused. What motives are supposed to appear (directly or potentially) in the cohomology of Shimura varieties? If we assume the motive regular, that is with distinct Hodge numbers, shouldn't this have a simple answer? Take F=Q, for example. Shouldn't any regular motive that is a twist of its dual appear in a Shimura variety (orthogonal, or symplectic, or unitary after restriction to a quadratic imaginary field)? What about the converse? Now, another question (that might be stupid): isn't any motive over Q (and with coefficients in Q) dual to a Tate twist of itself? – Joël Sep 22 '10 at 4:22
As I mentioned to Kevin, you two should also ask Clozel about my comment, and let us know here if my memory (or my understanding at the time) is (or was) completely stupid. I'd quite like to know myself. – Minhyong Kim Sep 22 '10 at 5:32
@Laurent: I agree I should make it precise. The reason I didn't make it precise initially was that I was "fishing" for a precise statement that I couldn't quite remember, so it was to my advantage to be as vague as possible! I already found the answer to that in Milne's comment, so then I had to change the question a bit and I just figured I would leave it to see if anyone could formulate a precise negative result (e.g. "the etale cohomology of a Shimura variety always has this property, hence this Galois representation can never be a subquotient"). – Kevin Buzzard Sep 22 '10 at 9:24
The reason I didn't mention the "potential" issue was because of the following construction: if $L/K$ is a finite Galois extension then I can consider something like $G=Res_{L/K}(GL(1))$ (or even $GL(0)$ if Deligne's axioms allow it; I forget) and get Shimura varieties whose $H^0$ is abelian _over $L$_ but which still give the Artin Galois representation over $K$ that I want as a subquotient. In general you're abelian over the reflex field but you can control the reflex field! – Kevin Buzzard Sep 22 '10 at 9:27
Let me expand and hopefully clarify my first comment about the more specific question of whether the cohomology of a modular elliptic curve with everywhere good reduction shows up in the cohomology of a Shimura variety.
My (very limited) understanding is that the first thing to check is whether or not it shows up in the cohomology of a Picard modular surface. This is not known to happen, but I don't know if this is known to not happen. Then, one could try the following strategy: base-change to a quadratic field, base-change to $U(2)$, extension to $U(2)\times U(n)$, endoscopic transfer to $U(2+n)$ (now available, I think) and then switch to an inner form. Or in other words, one could try to look for a motive on a Picard modular variety.
As far as I understand, and this is not far at all, this strategy works when one is able to 1) construct motives on Picard varieties attached to suitable Picard automorphic representations, and 2) show that the series of operations described in the previous paragraph can yield a suitable automorphic representation. When $n=1$, we know that one can obtain a suitable representation when starting with a Hilbert modular form provided it has weight greater than 2 or a finite place at which it is Steinberg, so our case of interest is excluded. The main obstruction for 2) to work is that the Galois representations arising in the cohomology of Picard varieties will have dimensions determined by (roughly) the degrees of freedom available for automorphic types at infinite places. We want this dimension to be 2, so there is a restriction there. Going to higher $n$ allows more flexibility to fiddle with these degrees of freedom, so it might help.
All of this is shamelessly stolen from the articles of Blasius-Rogawski, which are highly recommended reading.
http://aas.org/archives/BAAS/v35n5/aas203/936.htm
AAS 203rd Meeting, January 2004
Session 115 Dwarf, Irregular and Starburst Galaxies
Poster, Thursday, January 8, 2004, 9:20am-4:00pm, Grand Hall
## [115.06] The Large-Scale Environment of Metal-Poor Galaxies
L. Hao, M. A. Strauss (Princeton University), R. R. Rojas, M. S. Vogeley (Drexel University), Sloan Digital Sky Survey Collaboration
From the Sloan Digital Sky Survey (SDSS) spectroscopic data, we have obtained an emission line galaxy sample of 18,461 galaxies in approximately 1986 deg². We measure their oxygen abundances via three different methods. 430 galaxies have prominent [O III] λ4364 lines, allowing their temperatures and oxygen abundances to be measured directly. For the rest of the galaxies, an indirect method of using strong emission lines is adopted. In particular, we apply the oxygen abundance techniques developed by McGaugh (1991) and Pilyugin (2000) respectively. The resulting oxygen abundances measured by the three methods are compared. We find that there is a large amount of scatter among the different calibrations. Despite these uncertainties, we use the abundance measured via the McGaugh model to build the connection between the metallicity of galaxies and their environment, which is characterized by the distance to their nth nearest neighbor. We find that there is a clear deficiency of metal-poor galaxies in very dense environments. In moderately dense environments, metal-poor galaxies are systematically more isolated than are metal-rich galaxies, which are yet more isolated than pure absorption-line galaxies. However, for very isolated galaxies, such as those defined as void galaxies (Rojas et al. 2003), the normalized fractions of metal-poor and metal-rich galaxies are about the same, both exceeding the fraction of pure absorption line galaxies. The deficiency of metal-poor galaxies in dense environments is in agreement with the biased galaxy formation model (Dekel & Silk 1986), in which the majority of dwarf galaxies originate as low-mass primordial density fluctuations.
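As an aside, the environment statistic used here, the distance to a galaxy's nth nearest neighbor, is simple to compute. A toy 2D Python sketch follows; real survey work would use 3D comoving separations and a spatial index rather than brute force:

```python
import math

def nth_neighbor_distance(points, i, n):
    """Distance from points[i] to its n-th nearest neighbor
    (n=1 is the closest), by brute force."""
    x0, y0 = points[i]
    dists = sorted(math.hypot(x - x0, y - y0)
                   for j, (x, y) in enumerate(points) if j != i)
    return dists[n - 1]

# Toy example: the 2nd-nearest neighbor of the point at the origin.
pts = [(0, 0), (1, 0), (3, 0), (6, 0)]
print(nth_neighbor_distance(pts, 0, 2))  # 3.0
```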
Bulletin of the American Astronomical Society, 35#5
© 2003. The American Astronomical Society.
https://www.gamedev.net/topic/646876-player-movement-input-network-messages/?setlanguage=1&langurlbits=topic/646876-player-movement-input-network-messages/&langid=2
## Player Movement - Input + Network Messages
17 replies to this topic
### #1 Chumble (Members)
Posted 20 August 2013 - 01:18 PM
Preface: I am making a 2D networked game where each player controls a single character. Primary goal is getting more comfortable with Java, but I'm having fun making it too.
This topic has two parts. The first deals strictly with Java and implementing the key listeners to move a player.
The second I would like to pretty much detach from Java completely and deal strictly with pseudo-code or simply documentation.
1) Player Input in Java
Currently, my client GUI has a JPanel with addKeyListener(new ClientInput(this));
My ClientInput extends KeyAdapter and has functions for keyPressed, keyReleased, and keyTyped.
First problem is with continuous input. Just like when typing on here, if you hold a letter down, it is subject to your input delay. Press and hold "f" and you see it types one "f", you get a delay, then a steady stream of "f"s. At first I thought it was because I was using keyPressed instead of keyTyped, but it seems Typed does the same thing. How can I deal with this issue? I'm assuming I'll need to use something other than KeyAdapter.
2) Networked Movement Architecture
Preface note: I am using a TCP connection.
I have started everything with the most basic and straightforward implementation. As of now, this is how my client/server function:
a) Client presses "up" - sends "move up" message to server
b) Server moves the player up, sends "move up" message to ALL clients
c) All clients move the player up
As I said, this is the most "brute force" method I could come up with. That was the idea - start simple, optimize later (as I've been told on several occasions). Well, I had 2 friends connect and we all started running around... took about 5 seconds before all the clients crashed. It seems the server sent a null network message to everyone for some reason and everyone crashed. I am certain this is related to the high volume of messages, as when we all re-connected and moved slowly, it worked. But as we moved steadily faster over time... null message again. The server stays up and running, which was weird.
Firstly, does it make sense that 3 players connected to a central server, sending about 30 messages per second and the server with a thread for each client responding with 90 messages per second would cause problems? I know it seems like a lot but I kinda would have thought it could handle much more.
Secondly, I think it's time to look into a couple optimizations. I think if I can figure part 1 out, I can at least slow the player input down a little bit and lessen the number of messages sent. After reading through some other posts, I've gathered that this is a popular architecture:
a) Client presses "up"
b) Client performs client-side collision check. If it fails, end here. Else..
c) Client moves up (1). Sends "Moving up" message to server
d) Server gets "Moving up" message. Sets a "moving" flag to indicate this player is moving up.
e) ??? Update Thread ??? on server - every X ms, check for moving flags, update positions, broadcast updated coordinates to all clients
f) Clients get message from update thread. Update coordinates to what is broadcast.
g) Client releases "up"
h) Client stops locally, sends "Stopped" message to server
i) Server gets "Stopped" message. Clears movement flag.
This would involve adding a new thread to my server, the Update thread from step "e". I guess all it would do is store the "old" value of player positions and check the movement flags for each player (every X ms). If it is set, update the position of the player and send a message to all clients letting them know the player had moved.
(1)* I realize in this step (c), I would need to implement some prediction. Depending on the current latency, this may cause the player to stutter along as the server constantly corrects the player's position. I may end up needing to force a delay in here to make sure the client doesn't update the position too quickly.
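A rough single-threaded Python sketch of the server-side update thread from step (e), with hypothetical message strings and a made-up SPEED constant, might look like this:

```python
DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
SPEED = 5  # units moved per update tick (made-up value)

class Server:
    def __init__(self):
        self.positions = {}  # player id -> (x, y)
        self.moving = {}     # player id -> direction name, or None

    def on_message(self, player, msg):
        # Step (d): "Moving <dir>" sets the flag; step (i): "Stopped" clears it.
        if msg.startswith("Moving "):
            self.moving[player] = msg.split(" ", 1)[1]
        elif msg == "Stopped":
            self.moving[player] = None

    def tick(self):
        """Step (e): run every X ms; advance every flagged player and return
        the (player, position) pairs that would be broadcast to all clients."""
        updates = []
        for player, direction in self.moving.items():
            if direction is None:
                continue
            dx, dy = DIRECTIONS[direction]
            x, y = self.positions[player]
            self.positions[player] = (x + dx * SPEED, y + dy * SPEED)
            updates.append((player, self.positions[player]))
        return updates
```

In the real server, tick() would run on its own timer thread and the updates would be serialized onto each client socket.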
This adds three levels of optimization. The first is the client doesn't even bother sending a move message if client-side collision fails. I realize there is a chance for error here - if the client has bad information, the client could fail collision even if there's really nothing there.
The second is limiting the number of messages sent to the server. The client will only send a message to indicate they've started moving (in a direction) or stopped moving (in a direction). The client will probably be moving a lot and updating their direction constantly, but probably not 30 times a second, thus limiting the number of messages sent to the server.
The third is capping the number of messages sent to the client. The update thread will only broadcast a message every X ms instead of on every single client move message. While this might cause a slight delay, I don't think it will affect my game too badly.
Anyone have any thoughts on this? Again, please keep anything pertaining to point 2 as pseudo-code or just steps. Not only does it make it easier for me to read, but it allows me to figure out the syntax myself (and allows non-Java readers to make use of it too). If there's nothing wrong with this.. great! Hopefully this can serve as a resource for others with the same issue. If there is though, please don't hesitate to point it out.
Thanks,
Jeremy
### #2 morbik (Members)
Posted 20 August 2013 - 02:42 PM
Just going to take a crack at answering #1...
The way I got around this before is by keeping track of a boolean that is set to true on keyPressed, and false on keyReleased. In your "keyPressed" event handler, just set the variable to true, and in your "keyReleased" handler, set it to false.
Then, in your game's update loop, you simply check whether that key is down and if it is take the appropriate action.
When I implemented it, if I recall correctly, I had a 256 element boolean array, and used the Keycode to directly insert into the array.
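Since the OP asked to keep things language-agnostic: a minimal Python sketch of this key-state approach (in the Java client, the two handlers would live in the KeyAdapter's keyPressed/keyReleased, and the key codes would come from the toolkit):

```python
NUM_KEYS = 256
key_down = [False] * NUM_KEYS  # indexed directly by key code

def on_key_pressed(code):   # called from the keyPressed handler
    key_down[code] = True

def on_key_released(code):  # called from the keyReleased handler
    key_down[code] = False

KEY_W = 87  # example key code; real values come from the windowing toolkit

def update(player):
    # Polled once per game-loop frame, so held keys act every frame
    # instead of being throttled by the OS key-repeat delay.
    if key_down[KEY_W]:
        player["y"] -= 1
```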
### #3 NightCreature83 (Members)
Posted 20 August 2013 - 03:49 PM
For network data you do not send all the input messages to all clients; you send position updates and interpolate between the previous and current one to make a prediction for future movement. TCP connections have high overhead, as all messages need to arrive and be processed in order; miss one and the server has to resend it. UDP is a better solution: in the case of UDP, if you get a message that has an older id than the latest you have received, you ignore it instead.
Your second outline is more along the lines of how XBL and PSN games work.
Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max
### #4 Chumble (Members)
Posted 20 August 2013 - 04:02 PM
For network data you do not send all the input messages to all clients; you send position updates and interpolate between the previous and current one to make a prediction for future movement. TCP connections have high overhead, as all messages need to arrive and be processed in order; miss one and the server has to resend it. UDP is a better solution: in the case of UDP, if you get a message that has an older id than the latest you have received, you ignore it instead.
Your second outline is more along the lines of how XBL and PSN games work.
Sorry, I guess I wasn't clear. I am not sending input messages to the clients, I am sending updated coordinates.
Yeah, I know UDP is generally recommended... I went through lots and lots of "TCP vs UDP" threads before sticking with TCP... basically because TCP was easier. I'm starting with easy stuff, getting comfortable with it, then once I understand its limitations, I'm moving on to more complicated optimizations.
Just going to take a crack at answering #1...
The way I got around this before is by keeping track of a boolean that is set to true on keyPressed, and false on keyReleased. In your "keyPressed" event handler, just set the variable to true, and in your "keyReleased" handler, set it to false.
Then, in your game's update loop, you simply check whether that key is down and if it is take the appropriate action.
When I implemented it, if I recall correctly, I had a 256 element boolean array, and used the Keycode to directly insert into the array.
Hmm.. yeah, that could work, I'll give it a shot. Thanks!
Edited by Chumble, 20 August 2013 - 04:03 PM.
### #5 Glass_Knife (Moderators)
Posted 20 August 2013 - 04:09 PM
For the input part, check this out:
http://www.gamedev.net/page/resources/_/technical/general-programming/java-games-keyboard-and-mouse-r2439
I think, therefore I am. I think? - "George Carlin"
My Website: Indie Game Programming
My Book: http://amzn.com/1305076532
### #6 Satharis (Members)
Posted 20 August 2013 - 04:20 PM
Hmm.. yeah, that could work, I'll give it a shot. Thanks!
I can't speak with JPanel experience but language agnostic generality is that the OS sends keydown events with that delay.
Depending on what you're doing that can either be good or bad. For something like a GUI you'd want to forward the keypress events to your GUI and basically treat it normally. For something like character movement you usually want to create a keystate class with a class representing each key and its state, then update that based on OS events.
I.e. I press letter w down then set the keydown bool for w to true, then I keep it that way until a key up event is received from the listener, then you reverse it. Using that methodology you can just poll your keystate class to see if a key is still down at the moment you check it.
Essentially a long winded explanation of what neonic said.
Edited by Satharis, 20 August 2013 - 04:20 PM.
### #7 runnerjm87 (Members)
Posted 20 August 2013 - 04:33 PM
On the networking issue, I agree with NightCreature that UDP is a better protocol, but that has no real bearing on finding a "best" solution to your problem. It might throw up some roadblocks along the way, but that's all.
As far as how the client-server model should work, the most efficient way that I've seen to handle and dispatch client events to a server is to track each player object's states, depending on what type of game you're making this might only be a movement vector, and transmit those to the server only when they change. So, to break this down a little further:
(Assume only one player is actually playing and everyone else is watching, just for ease of typing this out)
*Player presses the left arrow key, movement vector is now (-5, 0).
*Vector (-5,0) is pushed to the server, stored player vector is updated.
*Player position is updated and transmitted to all connected clients.
*Player continues pressing the left arrow key, movement vector remains (-5,0)
*No data is transmitted to server
*Server uses stored player vector to update position and transmits to all connected clients.
*Player releases left arrow key, movement vector is now (0,0)
... and so on.
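The "transmit only on change" idea in the steps above can be sketched as follows (the names and message format are hypothetical):

```python
class MovementSender:
    """Ship the movement vector to the server only when it changes."""
    def __init__(self, send):
        self.send = send          # callable that transmits one message
        self.last_vector = (0, 0)

    def update(self, vector):
        # Called every frame with the current input vector; only a delta
        # actually generates network traffic.
        if vector != self.last_vector:
            self.send(("move", vector))
            self.last_vector = vector

sent = []
sender = MovementSender(sent.append)
sender.update((-5, 0))  # left arrow pressed: one message
sender.update((-5, 0))  # still held: nothing sent
sender.update((0, 0))   # released: one message
print(len(sent))  # 2
```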
Every now and then, you'll also need to do a full state check for each connected client, ensuring that the server's calculated position for that player is correct, since any dropped packets could result in a miscalculation.
And as for your first question, neonic's solution is absolutely the way to go. My input manager classes always have a private boolean array with 256 indexes. It uses minimal memory and means that you don't need any custom handlers for each key.
### #8 Chumble (Members)
Posted 21 August 2013 - 07:06 AM
Thanks everyone for the replies!
@Satharis, I accidentally pressed the down arrow on your comment instead of the up arrow and it doesn't seem to let me change it
*Player presses the left arrow key, movement vector is now (-5, 0).
*Vector (-5,0) is pushed to the server, stored player vector is updated.
Maybe I'm over-thinking it but I think I'd rather just say "Move + <Direction>" rather than pass in an actual value. This way I think it helps prevent cheating maybe?
*Player continues pressing the left arrow key, movement vector remains (-5,0)
*No data is transmitted to server
*Server uses stored player vector to update position and transmits to all connected clients.
If I do a check server side and confirm that the player did not change direction/speed, do I even need to send the updated position to the clients? If the clients all know a player is moving, I can use some client-side prediction to show their position.. I guess that could get out of sync pretty quickly. I wonder how often I would need to update the clients with the position?
### #9 runnerjm87 (Members)
Posted 21 August 2013 - 08:59 AM
Maybe I'm over-thinking it but I think I'd rather just say "Move + <Direction>" rather than pass in an actual value. This way I think it helps prevent cheating maybe?
This is a good idea. You should still pass data to the server only when there's a delta, but you're right, this should make it harder to cheat.
If I do a check server side and confirm that the player did not change direction/speed, do I even need to send the updated position to the clients? If the clients all know a player is moving, I can use some client-side prediction to show their position.. I guess that could get out of sync pretty quickly. I wonder how often I would need to update the clients with the position?
You'll want to do the server side check on a very limited basis. The example that I gave was probably a little too limited, but when there are multiple players at once, each client will send any input changes to the server while the server is busy doing all of the physics calculations and pushing updated positional "snapshots" of the scene at set intervals of a few milliseconds. The snapshots should contain all information related to position for each non-static object so that the clients are synced to the server's version of the game. The server side check is to ensure that the server's version is accurate and doesn't need to be done nearly as often, once every one to two seconds should keep any interpolation error correction from being very noticeable.
### #10 BeerNutts (Members)
Posted 21 August 2013 - 09:48 AM
Maybe I'm over-thinking it but I think I'd rather just say "Move + <Direction>" rather than pass in an actual value. This way I think it helps prevent cheating maybe?
Is your player, and all other animate objects in the game only going to move at one speed? Will you not have a speed buff, or a slow debuff for the players? Keep it simple, and use the actual velocities the object is moving at (bullets match this too).
Yes, your clients should also interpolate the other players' positions every frame based on the last known information (position and velocity), but your server should still send a full game state every "so often" (how often can be determined based on how frenetic your game will be, how many players there are, how much traffic there already will be, etc.).
This also means you should have time-stamped messages, so clients know, at "this point in time" player X was at Y location, moving at this Z velocity, so you can make sure the player is at the correct state at your clients time.
Good luck, and have fun!
Edited by BeerNutts, 21 August 2013 - 09:50 AM.
My Gamedev Journal: 2D Game Making, the Easy Way
---(Old Blog, still has good info): 2dGameMaking
-----
"No one ever posts on that message board; it's too crowded." - Yoga Berra (sorta)
### #11 Chumble (Members)
Posted 21 August 2013 - 09:59 AM
Is your player, and all other animate objects in the game only going to move at one speed? Will you not have a speed buff, or a slow debuff for the players?
No, but I figured I'd just have the server keep track of that. If the player gets some kind of speed buff, the server will know.. it will adjust the speed server-side so when the player requests to move, it uses the current server-side speed.
As this game is really just my own little training project, I realize it will never be flooded with people, much less flooded with people who feel the need to cheat, but I'm trying to incorporate some "best practices" so that when I do work on something legitimate, I'm used to doing it right.
your server should still send a full game state every "so often" (how often can be determined based on how frenetic your game will be, how many players there are, how much traffic there already will be, etc.).
That's gonna be a tough one. I can't imagine more than 10 or so objects will be moving on a player's screen at a time. I've already tried to limit how much traffic is sent to the players by only updating their surroundings rather than the entire game, but it needs more work. I guess this will be a trial + error effort.
This also means you should have time-stamped messages, so clients know, at "this point in time" player X was at Y location, moving at this Z velocity, so you can make sure the player is at the correct state at your clients time.
That.. is interesting. I'm having a little bit of a tough time wrapping my head around that. Why wouldn't I just use NOW as my time? As long as the server is sending the most recent information and the clients are updating with the most recent information, why do I need to check times? I'm not trying to contest you, I just don't understand it. Could you possibly give an example?
Thanks,
Jeremy
### #12 BeerNutts (Members)
Posted 21 August 2013 - 11:37 AM
That.. is interesting. I'm having a little bit of a tough time wrapping my head around that. Why wouldn't I just use NOW as my time? As long as the server is sending the most recent information and the clients are updating with the most recent information, why do I need to check times? I'm not trying to contest you, I just don't understand it. Could you possibly give an example?
What happens if the message takes 400 ms to get from the server to the client (it will happen, and it will take longer sometimes because of latency and packet retries), and a player has changed velocity/direction? When the client gets the message, it will place the player at the given location, and given velocity/direction, but it will be 400 ms behind where the actual player is.
If you don't simulate him moving velocity*0.4 distance immediately, then you'll be seeing him behind, and when you shoot at the player, it'll be behind where he actually is, causing issues where you believe you've hit him, but the server sees the bullet go behind him.
That's the idea behind having everything synced for a certain time. There is no such thing as NOW when you're talking about latency and possible packet loss/retries.
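The "simulate him moving velocity*0.4 immediately" step is just dead reckoning from a timestamped update; a minimal sketch (times in seconds, names hypothetical):

```python
def extrapolate(pos, vel, sent_at, now):
    """Dead-reckon a remote player's position from the last timestamped
    update: advance it by velocity * (message age)."""
    dt = now - sent_at
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

# An update stamped t=0.0 s says the player was at (100, 0) moving (5, 0)
# units/s; it arrives 400 ms late, so draw him where he should be now:
print(extrapolate((100, 0), (5, 0), sent_at=0.0, now=0.4))  # (102.0, 0.0)
```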
### #13 Chumble (Members)
Posted 21 August 2013 - 01:15 PM
Maybe I'm being a bit dense about this, but my point about NOW is that the server has the final say as to who is where. The best the server can do is try to relay this information as accurately as possible.
Yes, players will lag, and their position will be slightly off at times.. seems the best I can do is have the server update them at regular intervals to make sure they're on track.
You obviously know what you're talking about and again, I'm not trying to simply contest you. I accept that I am wrong, but in order to come to the correct understanding, I need to relay my thoughts.
I'm going to set up an example. There are two players, both are standing still.
--------------------
T=0
Player 1 starts moving right. Begins moving immediately client-side *(1)
T=1
Server receives the "Player Move Right" message. Broadcasts to all clients that Player 1 is moving right.
Player 2 still sees player 1 at the original location. Shoots at player 1 (Sends "Shoot" message to server).
T=2
Player 1 and player 2 get server's "Player 1 is moving right" message. Both player's screens are updated appropriately (Player 1's screen is adjusted, if needed).
Server gets the "Shoot" message from Player 2. Calculates that player 2 missed player 1.
Server sends the .. animate bullet message?.. to all clients to draw the bullet..?
T=3
Player 2 watches the bullet miss and cries in frustration
--------------------
*(1) - Using as accurate of prediction as I can come up with.
This seems like a very possible but understandable situation. This is a networked game and networks have lag. I don't see how adding a timestamp to each message helps. ...... unless you're saying that the CLIENT sends a timestamp to the server as well?
I don't think it would help the situation above because that is really an extreme case, but say Player 2 fires at player 1 at T=1 (and sends this message to the server). Even if the server didn't get the message until T=3, the server would remember that player 1 was at the fired-upon position at T=1 and generate a "hit" anyway? The more I type this, the more I think I still have this wrong.
:-/
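For what it's worth, that "server remembers where player 1 was at T=1" guess is essentially lag compensation; a toy sketch of it (all names invented):

```python
import bisect

class PositionHistory:
    """Per-player log of (timestamp, position) samples kept on the server,
    so a 'shoot' message stamped T can be resolved against where the
    target actually was at T, not where it is now."""

    def __init__(self):
        self.times = []
        self.positions = []

    def record(self, t, pos):
        self.times.append(t)
        self.positions.append(pos)

    def position_at(self, t):
        # Find the last sample taken at or before time t.
        i = bisect.bisect_right(self.times, t) - 1
        return self.positions[max(i, 0)]

h = PositionHistory()
h.record(0, (0, 0))
h.record(1, (5, 0))    # player started moving right
h.record(2, (10, 0))
# A shot fired at T=1 is checked against (5, 0), even if the
# message only reaches the server at T=3.
print(h.position_at(1))  # (5, 0)
```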
Edited by Chumble, 21 August 2013 - 01:16 PM.
### #14 Satharis
Posted 21 August 2013 - 01:42 PM
I think his point was more at using dead reckoning in your code, where the client will draw other players based on where it "assumes" their position is. By timestamping messages from the server, you can read them and go "Hmm, this player should be here at this point and moving this direction; if I compare that to his current position, then I know he should have been at that spot 400 ms ago, so now I can guess where he should be at this second and draw him there." This approach relies on the idea that the client still assumes it knows what is best and just modifies that based on concrete information received from the server.
To be honest I haven't done much realtime gameplay with networking, so I'm not entirely sure what the most common design patterns are for guessing movement in different games. Usually the client will assume that it is correct and in control and just relay its movements to the server, which checks them for validity and relays that information to other clients.
It's really a bit of a guessing game, and this network logic changes quite a bit depending on the type of game; for instance, an MMO assumes far less precision of location than an FPS or anything where getting a shot off at the right moment is the difference between terrible and good gameplay.
I'm sure we've all played a shooter that was coded horridly to where you were expected to shoot miles ahead of or behind players because their updating was so awfully designed. Gunz is a good example.
Edited by Satharis, 21 August 2013 - 01:45 PM.
### #15 Chumble
Posted 21 August 2013 - 01:57 PM
this player should be here at this point and moving this direction, well if I compare that to his current position then I know he should have been at that spot 400 ms ago, now I can guess where he should be at this second and draw him there.
o.o
.. seems much more complex than what I need. I don't plan on doing anything with projectiles and shooting bullets, etc. I think.. this is one optimization I will hold off on for now because I'm either going to go crazy trying to understand it or code it. I think I should be mostly OK without it.
### #16 BeerNutts
Posted 21 August 2013 - 01:57 PM
I realize you are trying to make a fun multiplayer game, so I suggest you don't worry about these inconsistencies I'm describing. You can do what you were planning, and you'll probably have a fun game with some noticeable inconsistencies, but as a first networking game, I bet it will be fun.
You can find information on the internet about client-side prediction and dead-reckoning if you want to try and improve accuracy of the objects in the multiplayer game to better understand what I'm trying to describe.
Good luck!
### #17 Chumble
Posted 23 August 2013 - 07:01 AM
Ok - I have added the above features to varying success.
When a player presses a directional arrow key
1) Check to see that the corresponding "keyDown" flag is not set.
2) Set the "keyDown" flag for that direction
3) If there is a "keyDown" flag already set for an opposite direction, clear it.
When a player releases a directional arrow key
1) Check to see that the corresponding "keyDown" flag is set.
2) Clear the "keyDown" flag for that direction
After either of the above occur, I "compile" the direction to a single int. 0 = no direction, 1 = Up, 2 = Up/Right (etc, clockwise until 8=Up/Left).
I create a new network packet with this int and send it to the server. The server gets this message and updates a new field on the player object called moveDirection.
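That "compile" step might look something like this (flag names assumed; opposite flags can never both be set because of the press/release logic above):

```python
# Direction codes: 0 = no direction, 1 = Up, 2 = Up/Right, 3 = Right,
# 4 = Down/Right, 5 = Down, 6 = Down/Left, 7 = Left, 8 = Up/Left
# (clockwise, matching the scheme described above).

def compile_direction(up, right, down, left):
    """Collapse the four keyDown flags into a single int for the wire."""
    if up and right:
        return 2
    if right and down:
        return 4
    if down and left:
        return 6
    if left and up:
        return 8
    if up:
        return 1
    if right:
        return 3
    if down:
        return 5
    if left:
        return 7
    return 0

print(compile_direction(up=True, right=True, down=False, left=False))  # 2
```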
1) Loop through all entities. If the entity has a "moveDirection" > 0, update their position server-side.
2) For each entity, if it is found that their moveDirection had changed since the last loop, broadcast this new moveDirection to all clients.
3) Increment a counter. If the counter is > 100, reset the counter and broadcast the actual entity coordinates to all clients (to ensure they are synced).
4) Sleep for 10ms
The client's main update loop performs a very similar process to the server's UpdateThread
1) Loop through all entities. If the entity has a "moveDirection" > 0, update their position client-side.
2) "Sleep" for 10ms (since it's not in a thread I needed to make my own sleep function .. just uses a while loop to pause execution for X ms)
So.. the result..
When connecting to my own server using 127.0.0.1, it seemed moderately OK. A little twitchy as the server corrected my position.. slightly more than I would have guessed but tolerable I suppose.
When connecting a second client to the server, suddenly performance was cut in half. I cannot imagine why as the loop running through two entities instead of just one should still only take a minuscule amount of time. But the end result was the players seemed to move at about 50% normal speed.
Speed was decreased by another 50% with a third client.
Also, when connecting to myself using my external IP address, performance right off the bat (even with a single client) was very sloppy. I could move in a direction but I was jumping around almost constantly.
The more I type, the more I think something has to be wrong client-side. If the client is just getting a message that "Player 1 is moving left", there should be no reason why that player would move slower between the server synchronization updates that only happen once a second.
Anyways, if anyone has any further thoughts, feel free to post them. If not, I will continue plugging away at this to see if I can improve it. I'm hoping I haven't already hit the limits of TCP.
### #18 Chumble
Posted 24 August 2013 - 12:05 PM
I have uploaded a video of this. In this video, I connect to the server, can run around... generally fine. Still not perfect.
Then I connect several simulated clients that just move in circles. As you can see, for each client connected, the performance just gets completely shot.
Edit: I'm an idiot. Had an extra sleep inside the loop on the client side. No idea why it was there. Got rid of it and I'm good.
Edited by Chumble, 29 August 2013 - 12:47 PM.
https://math.stackexchange.com/questions/911907/how-to-determine-the-key-matrix-of-a-hill-cipher-where-the-encrypted-message-mat | # How to determine the key-matrix of a Hill cipher where the encrypted-message-matrix is not invertible?
I am new to this subject and I have a homework problem based on the Hill cipher, where encryption is done on digraphs (pairs of letters, not single letters).
The alphabet domain is $\{A\dots Z\}$, numbered from $0$ to $25$, with all arithmetic done modulo $26$.
The encrypted message is $WKNCCHSSJ$, which can be written numerically as $\{22,10;13,2;\dots\}$ (the ; separates pairs).
The message has been intercepted and the first few characters of the plaintext are known to be $GIVE\dots$, i.e. $\{6,8;21,4;\dots\}$.
Assuming the original message to be $X$ and the encrypted message to be $Y$, the Hill cipher can be given as: $Y=A.X$
So, $A^{-1}=X.Y^{-1}$
The problem boils down to inverting the $Y$ matrix for first 4 characters.
$Y= \left( \begin{array}{ccc} W & N\\ K & C \\\end{array} \right) =\left( \begin{array}{ccc} 22 & 13\\ 10 & 2 \\\end{array} \right)$
$|Y| = 44 - 130 \equiv 18 \pmod{26}$
And since $\gcd(18,26)\neq 1$, the matrix $Y$ is not invertible modulo $26$; it seems we would have to divide out the common factor $2$ and work modulo $13$.
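The determinant arithmetic can be sanity-checked in a few lines (using the A=0, ..., Z=25 numbering above):

```python
from math import gcd

# Y holds the first two ciphertext digraphs WK, NC as columns:
# W=22, K=10, N=13, C=2.
Y = [[22, 13],
     [10, 2]]

det = (Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]) % 26
print(det)           # 18, matching 44 - 130 ≡ 18 (mod 26)
print(gcd(det, 26))  # 2, so Y is not invertible mod 26
```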
Please help me to understand how to proceed for this special case. As I am new to number theory and cryptography, it will be very helpful if you provide me a detailed solution/hint.
Many thanks !!
If the Hill cipher matrix (I assume you are using a 2x2 matrix?) is called $H$, then we know that $H \cdot \begin{pmatrix} 6 \\ 8 \end{pmatrix} =\begin{pmatrix} 22 \\ 10 \end{pmatrix}$ and $H \cdot \begin{pmatrix} 21 \\ 4 \end{pmatrix} =\begin{pmatrix} 13 \\ 2 \end{pmatrix}$. You could try to solve for $H$, e.g. by Gaussian elimination.
In this case we get the following equations in the first row of the encryption matrix, say $[x \,\, y]$:
$$6x + 8y = 22 \\ 21x+4y = 13$$
And multiplying the second equation by $30$ (equivalently by $4$, since $30 \equiv 4 \pmod{26}$; the inverse of $21$ is $5$, hence the $30 = 5 \times 6$) we get:
$$6x+ 16y = 0$$ (reducing modulo $26$ of course).
Subtracting the first equation, we get $$8y \equiv -22 \equiv 4 \pmod{26},$$ which, since $\gcd(8,26)=2$ divides $4$, has the two solutions $y \equiv 7$ and $y \equiv 20 \pmod{26}$; back-substituting (and checking against the second equation) gives $x = 3$ in both cases. So the crib GIVE does not determine the first row uniquely; more of the plaintext would be needed to decide between the candidates.
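Since elimination modulo a composite number is easy to slip on, a brute-force check of the first-row system is a useful cross-check (A=0, ..., Z=25 as above):

```python
# Solve 6x + 8y ≡ 22 and 21x + 4y ≡ 13 (mod 26) for the first
# row [x y] of the encryption matrix by exhaustive search.
solutions = [(x, y)
             for x in range(26)
             for y in range(26)
             if (6 * x + 8 * y) % 26 == 22
             and (21 * x + 4 * y) % 26 == 13]
print(solutions)  # [(3, 7), (3, 20)]
```

So the system is underdetermined rather than unsolvable; the rest of the ciphertext can be used to pick the right candidate.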
https://www.intechopen.com/books/recent-advances-in-carbon-capture-and-storage/membrane-separation-technology-in-carbon-capture | Open access peer-reviewed chapter
# Membrane Separation Technology in Carbon Capture
By Guozhao Ji and Ming Zhao
Submitted: April 26th 2016 | Reviewed: September 12th 2016 | Published: March 8th 2017
DOI: 10.5772/65723
## Abstract
This chapter introduces the basics of membrane technology and the application of membrane separation in carbon capture processes. A number of membranes applicable in pre-combustion, post-combustion or oxy-fuel combustion have been discussed. An economic comparison between conventional amine-based absorption and membrane separation demonstrates the great potential in membrane technology.
### Keywords
• membrane separation
• carbon dioxide capture
• pre-combustion
• post-combustion
• oxy-fuel combustion
## 1. Introduction
Gas separation by membrane is attractive among low-carbon-emission technologies because it can be operated as a continuous system, which industry prefers over conventional batch systems such as adsorption and absorption. Feeding of mixed gas and withdrawal of purified gas can happen at the same time. A membrane selectively permeates the desired components and retains the unwanted ones, thereby separating gas mixtures. In carbon capture and storage (CCS) processes, CO2 has to be separated from the exhaust gas streams before the subsequent transportation and storage, and membrane separation technology is one of the efficient solutions for this capture step.
There have been a number of books on membrane technology; however, most of them concern liquid separation and very few cover CCS. This chapter aims at introducing and demonstrating membrane technology in CCS. The application of membranes in carbon capture mainly includes H2/CO2 separation for pre-combustion, CO2/N2 separation for post-combustion and O2/N2 separation (air separation) for oxy-fuel combustion. There is a wide variety of membrane types, distinguished by their physical and chemical properties, and many of them have shown great potential to fulfill the needs of CCS.
## 2. Overview of membranes
A membrane performs as a filter: it allows certain molecules to permeate through, while blocking other specific molecules from entering the membrane, as demonstrated in Figure 1. Membranes are already widely used in liquid separations such as micro-filtration, ultra-filtration, reverse osmosis, forward osmosis, desalination and medical applications. Gas separation using membranes, however, is still developing and has attracted intensive research in the CCS field during recent years.
Gas permeation flux across unit membrane area, under unit pressure difference and through unit membrane thickness, is called the permeability (mol s−1 m−1 Pa−1), and the ratio of permeabilities of different gases through the same membrane is defined as the selectivity. The gas separation mechanism varies from membrane to membrane: the selectivity for different gases may result from differences in molecular size, affinity to the membrane material, molecular weight, etc., depending on the gases and the membrane of interest.
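As a sketch of these definitions (the numbers below are illustrative, not measured values):

```python
def permeance(flux, delta_p):
    """Molar flux per unit pressure difference (mol s^-1 m^-2 Pa^-1)."""
    return flux / delta_p

def permeability(flux, delta_p, thickness):
    """Permeance times membrane thickness (mol s^-1 m^-1 Pa^-1)."""
    return flux / delta_p * thickness

def selectivity(perm_a, perm_b):
    """Ideal selectivity of gas A over gas B through the same membrane."""
    return perm_a / perm_b

# Hypothetical single-gas measurements on one membrane:
p_h2 = permeance(flux=2.0e-1, delta_p=1.0e5)   # ~2e-6 mol s^-1 m^-2 Pa^-1
p_co2 = permeance(flux=2.0e-2, delta_p=1.0e5)  # ~2e-7
print(selectivity(p_h2, p_co2))                # ~10
```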
In order to achieve a high permeate flux, the feed gas is pressurized, while the permeate side is kept at atmospheric pressure or under vacuum to obtain a higher driving force. However, since the thickness of a membrane is only several hundred nanometers to several microns, the membrane alone cannot withstand this pressure difference, so it is normally coated onto a thick, porous substrate to achieve sufficient mechanical strength. The supporting substrate should offer minimum flow resistance and thus contains large pores, which allow free flow of the gas that has permeated the top layer. If the substrate pores are too large and its surface is highly rough, membrane defects such as cracking and peeling may occur; an interlayer with a much smaller pore size (than the substrate pore size) enables a smoother transition in between. This design is referred to as an asymmetric structure, as shown in Figure 2.
In current membrane research and development (R&D), the most popular separation mechanism is size sieving. A key parameter of a membrane is therefore its pore size, and by pore size, membranes are classified into the three categories listed in Table 1. In addition, one more type of membrane, which is nonporous and therefore called a dense membrane, is also discussed in this chapter.
Compared to conventional CO2 removal technologies, membranes have shown great potential in CCS owing to the characteristics listed below:

1. Low capital cost

A membrane requires little material to coat, and it needs no additional facilities such as large pretreatment vessels and solvent storage.

2. Low operating cost

The main operating cost of a membrane separation unit is membrane replacement. Owing to the small size and weight of a membrane, this cost is much lower than in conventional techniques, which must replace large amounts of solvent or sorbent.

3. Simplicity and reliability

Since a membrane does not show the fast performance decay that typically affects traditional solvents and sorbents, it can run unattended for long periods. Moreover, gas does not stay in and react with the membrane, so there is no saturation, which avoids frequent shut-downs and start-ups.

A membrane system is also designed and operated to remove a required percentage of CO2 rather than an absolute quantity of CO2; variations in the feed CO2 concentration can be handled by varying the space velocity to keep the product quality constant.

4. Design efficiency

A membrane system can integrate a number of processes into one unit, such as Hg vapor removal, H2S removal and dehydration. Traditional CO2 removal techniques have to operate these steps separately.

5. Easy for remote areas

Multiple membranes can be packed into one module to reduce size and weight, which not only increases the membrane area per unit volume but also eases transport to remote locations. Simple installation is feasible where spare parts are rare, labor is unskilled and additional facilities (such as solvent storage, water supply and power generation) are in short supply.
| Pore classification | Pore size range (nm) |
| --- | --- |
| Micropore | <2 |
| Mesopore | 2–50 |
| Macropore | >50 |
### Table 1.
Membrane classification by pore size.
### 2.2. Membrane fabrication
Membrane fabrication concerns how to coat the selective layer onto the porous substrate. The fabrication process has a significant influence on membrane properties such as uniformity and thickness. Membrane coating techniques include dip-coating, chemical vapor deposition (CVD), spinning and spraying; among them, the most popular and mature methods are dip-coating and CVD. This section demonstrates these two technologies.
#### 2.2.1. Dip-coating
Dip-coating involves dipping the macro-porous substrate in a solution and in turn, the solution is coated on the substrate, which is followed by a dehydration process at a lower temperature. It is the oldest and the simplest film deposition method. The dip-coating process can be separated into five stages: immersion, start-up, deposition, drainage and evaporation (Figure 3).
Immersion: The substrate is immersed in the solution of the coating material at a constant speed to avoid jitter.
Start-up: The entire substrate has remained inside the solution for a while and is starting to be pulled up.
Deposition: The thin layer of solution deposits itself on the surface of the substrate when it is pulled up. The withdrawing speed is constant to avoid any jitters. The speed determines the thickness of the coating. Faster withdrawal speed gives thicker layer and vice versa.
Drainage: Excess liquid will drain from the surface back to the solution due to the gravity.
Evaporation: The solvent evaporates from the liquid, forming the thin layer. Evaporation normally accompanies the start-up, deposition and drainage stages.
#### 2.2.2. Chemical vapor deposition (CVD)
Another common membrane coating technique is CVD. CVD modifies the properties of a substrate surface by depositing a thin layer of film via chemical reactions in a gaseous medium surrounding the substrate at elevated temperatures.
The process of CVD includes transporting the reactant gases and/or carrier gas into a reaction chamber, which is followed by a deposition process to form a film. The film coating could be performed by decomposition, oxidation, hydrolysis or compound formation. The reactions normally take place in the gaseous phase and the intermediate gases adsorb on the substrate followed by surface reactions. The detailed steps of CVD process are demonstrated in Figure 4.
1. Reactant feeding: Delivering the reactant gaseous species into the reaction chamber.
2. Reaction: Chemical reactions of the reactant gas species under heating condition to form intermediates.
3. Diffusion to substrate: Diffusion of gases through the boundary layer to the substrate surface.
4. Adsorption on the substrate: Adsorption of reactant species or intermediates on substrate surface.
5. Surface migration: Inclusion of coating atoms into the growing surface and formation of by-product species.
6. By-product desorption: Desorption of by-product species of the surface reaction.
7. By-product diffusion: Diffusion of by-product species to the bulk phase.
8. By-product exiting: Transport of by-product gaseous species away from substrate and exit the reaction chamber.
As illustrated above, CVD is a more complicated technique than dip-coating, so the manufacturing cost of a CVD membrane is relatively higher. The advantage of CVD over dip-coating is good reproducibility, whereas dip-coating may suffer from a lack of reproducibility.
### 2.3. Membrane separation mechanism
A membrane can separate a gas mixture because different gases have different permeabilities through the membrane. The permeate flux across unit membrane area, under unit pressure gradient and through unit thickness, is called the permeability, and the ratio between the permeability of gas A and that of gas B is defined as the selectivity of A over B. In order to achieve separation, a greater difference between the gas permeabilities is preferred. This difference comes from their physical and/or chemical properties as well as their interaction with the membrane.
#### 2.3.1. Size sieving
The most widely known separation mechanism is size sieving. The membrane pore size lies between the sizes of the smaller and the larger gas molecules, as depicted in Figure 5: the smaller gas molecule A passes through the pore channel freely, while the counterpart gas B is not able to enter the pore. As a result, pure component A is obtained in the permeate stream from the gas mixture A–B. This mechanism applies to separating gas mixtures with very different molecular sizes, such as H2 and CO2, or H2 and hydrocarbons. Some common gas kinetic diameters are given in Table 2. Size sieving basically operates in micro-porous membranes.
| Gas | σ (nm) |
| --- | --- |
| He | 0.26 |
| H2 | 0.289 |
| CO2 | 0.33 |
| Ar | 0.341 |
| O2 | 0.346 |
| N2 | 0.364 |
| CH4 | 0.38 |
### Table 2.
The kinetic diameter of different gases.
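Using the kinetic diameters in Table 2, the idealized size-sieving rule can be sketched as follows (a simplification: real membranes have a pore-size distribution and are never perfectly selective):

```python
# Kinetic diameters from Table 2 (nm).
KINETIC_DIAMETER = {
    "He": 0.26, "H2": 0.289, "CO2": 0.33, "Ar": 0.341,
    "O2": 0.346, "N2": 0.364, "CH4": 0.38,
}

def permeating_gases(pore_size_nm, feed):
    """Idealized size sieving: only molecules smaller than the
    membrane pore pass; everything else is retained."""
    return [g for g in feed if KINETIC_DIAMETER[g] < pore_size_nm]

# A ~0.3 nm pore passes H2 but blocks CO2 and N2 -- the basis of
# H2/CO2 separation for pre-combustion capture.
print(permeating_gases(0.3, ["H2", "CO2", "N2"]))  # ['H2']
```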
#### 2.3.2. Surface diffusion
When the membrane material has higher affinity to one particular component than the other, this affinitive component is preferentially adsorbed on the membrane surface and then the adsorbed gas molecules move along the pore surface to the permeate side until desorbing to the permeate gas. Since the membrane is occupied by the highly adsorbable component, the less adsorbed component has lower probability to access the pore, which results in a much lower permeability. In such a way, the more adsorbable gas is separated from the gas mixture (Figure 6). This type of mechanism is generally used to separate adsorbing gas with non-adsorbing gas such as CO2 with He, CO2 with H2. Surface diffusion generally acts in micro- and meso-porous membranes.
#### 2.3.3. Solution diffusion
Unlike the membranes discussed above, a dense membrane has no pore channels for gas transport; instead, it follows the solution-diffusion model. Gas separation using dense membranes occurs in a three-step process similar to surface diffusion. The dense membrane has no pores to accommodate gas molecules, but it can dissolve specific gas components. As shown in Figure 7, due to the difference in solubility or absorbability in the membrane material, gas A dissolves or absorbs in the membrane upon contacting the feed interface, while gas B remains in the gas phase at the interface. In the second step, the dissolved component A diffuses across the membrane, driven by its concentration gradient from the feed interface to the permeate interface. Finally, component A desorbs from the permeate interface under low pressure. This is the common mass-transfer mechanism in polymeric membranes.
#### 2.3.4. Facilitated transport
The solution-diffusion process is often constrained by low permeate flux rates due to a combination of low solubility and/or low diffusivity. In contrast, facilitated transport, which delivers the target component by a carrier, can increase the permeate flow rate. As demonstrated in Figure 8, the gas A and carrier C form a temporary product A–C via a reversible chemical reaction. The product diffuses across the membrane under the concentration gradient of A–C rather than that of A. At the permeate interface, the reverse reaction takes place and A is liberated: A is released to the permeate stream and C diffuses back to the feed interface to attach and deliver a new A. The facilitated transport mechanism normally operates in liquid membranes.
#### 2.3.5. Ion transport
Ion transport is usually applied in air separation (O2/N2). As Figure 9 shows, only the oxygen molecule (O2) can be converted into two oxygen ions (2O2−) by the surface-exchange reaction at the feed interface; nitrogen is retained on the feed side. The oxygen ions are transported across by hopping between oxygen vacancies in the membrane lattice structure. At the permeate interface, electrons are liberated as the oxygen ions recombine into oxygen molecules. To maintain electrical neutrality, a simultaneous electron flux flows back to the feed interface, neutralizing the charge carried by the oxygen flux.
## 3. Membranes for pre-combustion capture
Pre-combustion capture is a process that separates CO2 from the other fuel gases before the gas combustion. First, it involves the processes of converting solid, liquid or gaseous fuels into a mixture of syngas (H2 and CO) and CO2 by coal gasification or steam reforming. Afterwards water-gas shift (WGS) reaction is conducted to reduce the content of CO, thus more H2 and CO2 are generated. Membrane separation is then applied to separate H2 and CO2. Upon compression, the CO2 rich stream is transported to a storage or utilization site. Meanwhile, the nearly pure H2 stream enters the combustion chamber for power generation that emits mainly water vapor in the exhaust.
Coming from the upstream gasification, reforming and WGS, the feed gas of pre-combustion CO2 capture is hot with a temperature between 300 and 700°C. In addition, the pre-combustion separation can happen at high pressures up to 80 bar.
Pre-combustion membranes are basically classified into two categories: H2-selective membrane and CO2-selective membrane. The former favors H2 permeation but retains CO2 in the feed side, while the latter preferentially permeates CO2.
In principle, metallic membrane is the ideal candidate for separating H2/CO2 due to the infinite selectivity. H2 molecule dissociates as two H atoms at the membrane surface and then the atomic H diffuses to the permeate side of the membrane driven by the partial pressure drop, which is followed by the association and desorption at the permeate interface. The permeate flux is given by
J_H2 = (P_H2 / L) · (√p_feed − √p_permeate)    (1)
This mechanism is similar to solution diffusion and ion transport. The reason for the infinite selectivity of H2 over CO2 is that this dissociation-diffusion mechanism applies only to diatomic gases such as H2; CO2 cannot permeate by the same mechanism. For ultrathin membranes, the rate-limiting step is the dissociation of hydrogen on the membrane surface, and Pd performs best in hydrogen dissociation. Consequently, Pd membranes were intensively investigated over the past several decades. The H2 permeability through palladium membranes varies in the range between 10−8 and 10−7 mol s−1 m−1 Pa−0.5 (Table 3). However, the permeability does not yet satisfy industrial requirements, owing to the slow permeation of atomic H in the Pd lattice, which is one order of magnitude lower than in other metals. To promote the permeability, a number of palladium-based alloys have been examined; a list of reported permeability data is summarized in Table 4. The alloy membranes dramatically improve the H2 permeability by 2–3 orders of magnitude.
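Eq. (1) can be evaluated directly; the sketch below uses illustrative values (the permeability, thickness and pressures are assumptions for demonstration, not data taken from Table 3):

```python
from math import sqrt

def h2_flux(permeability, thickness, p_feed, p_perm):
    """Sieverts-type H2 flux through a Pd membrane, Eq. (1):
    J = (P/L) * (sqrt(p_feed) - sqrt(p_permeate)).

    permeability : mol s^-1 m^-1 Pa^-0.5
    thickness    : m
    pressures    : Pa
    returns      : mol s^-1 m^-2
    """
    return permeability / thickness * (sqrt(p_feed) - sqrt(p_perm))

# Illustrative case: P = 1e-8 mol s^-1 m^-1 Pa^-0.5, a 5-micron film,
# 20 bar feed and 1 bar permeate.
J = h2_flux(1e-8, 5e-6, 20e5, 1e5)
print(f"{J:.3f} mol s^-1 m^-2")
```

Note the square-root driving force: doubling the feed pressure raises the flux by only about √2, which is why feed compression is less effective here than in other permeation mechanisms.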
| Membrane | Permeability (mol s−1 m−1 Pa−0.5) | Temperature (°C) | Reference |
| --- | --- | --- | --- |
| Pd | 9 × 10−9 | 227 | [3] |
| Pd on silver disk | 1.47 × 10−7 | 407 | [4] |
| Pd disk | 1.08 × 10−7 | 300 | [5] |
| Pd disk | 1.06 × 10−7 | 350 | [6] |
| Pd disk from Pd sheet | 7.25 × 10−7 | 400 | [7] |
| Pd on Vycor support | 3.10 × 10−7 | 350 | [8] |
| Pd on Nickel | 2.00 × 10−12 | 200 | [9] |
| Pd on Vycor support | 1.18 × 10−7 | 500 | [10] |
| Pd on γ alumina | 1.47 × 10−7 | 480 | [11] |
| Pd on alumina | 6.27 × 10−8 | 300 | [12] |
| Pd on alumina | 3.75 × 10−8 | 400 | [13] |

### Table 3.
| Membrane | Permeability (mol s−1 m−1 Pa−0.5) | Temperature (°C) | Reference |
| --- | --- | --- | --- |
| Pd59Cu41 | 1.59 × 10−7 | 400 | [6] |
| Pd60Cu40 | 1.57 × 10−7 | 350 | [5] |
| Pd60Cu40 | 1.78 × 10−7 | 400 | [5] |
| Pd94Cu6 | 3.65 × 10−8 | 400 | [14] |
| Pd50Ni50 | 7.00 × 10−6 | 450 | [15] |
| Pd69Ag30Ru1 | 1.03 × 10−6 | 400 | [13] |
| Pd70Ag30 | 2.35 × 10−7 | 400 | [13] |
| Pd77Ag23 | 1.35 × 10−7 | 350 | [16] |
| Pd77Ag23 | 5.00 × 10−5 | 450 | [17] |
| Pd93Ag7 | 7.25 × 10−8 | 400 | [14] |

### Table 4.
Still, a few barriers need to be overcome for the commercialization of palladium-based membranes. First, the cost of palladium is around 18,000 US$/Ounce (in June 2016), roughly 150 times that of silica membranes. Second, the H2 permeation driving force is not the pressure itself but the square root of pressure (Eq. (1)), so compressing the feed gas is not as effective as in other permeation mechanisms. In addition, at temperatures lower than 300°C, hydrogen embrittlement causes catastrophic failure. Furthermore, contaminants such as CO, NH3 and sulfur compounds inhibit H2 permeation through palladium membranes. Currently, palladium membrane separation still remains at small laboratory scale.

Besides metal membranes, inorganic membranes also play an important role in separating H2/CO2 at elevated temperatures. Separation by inorganic membranes is generally achieved by the molecular size-sieving effect. Carbon molecular sieve membranes were demonstrated at pilot scale for separating H2 from refinery gas streams in the early 1990s; their disadvantage is that they are only feasible under non-oxidizing conditions. Another type of inorganic membrane is the alumina membrane. However, its majority pore size is not in the micropore range, so it cannot separate gases by the size-sieving mechanism, and due to the large pore size, the selectivity of alumina membranes is fairly low.

Silica membranes show great commercial potential for separating H2 and CO2. Silica is one of the most abundant materials on the planet, which significantly reduces the cost, and its good thermal and chemical stability makes long-term operation possible without frequent replacement or maintenance. The pore diameter can be controlled at around 0.3 nm by a proper coating-calcining process, which is the ideal size for separating H2 (σ = 0.289 nm) and CO2 (σ = 0.33 nm). The performance of some reported silica membranes is summarized in Table 5.
Due to the difficulty of measuring the membrane thickness on a porous substrate, the permeability of H2 divided by the thickness is lumped together as the permeance.

| Membrane | Permeance (mol s−1 m−2 Pa−1) | H2/CO2 selectivity | Temperature (°C) | Reference |
| --- | --- | --- | --- | --- |
| Silica (Si400) | 2.01 × 10−6 | 7 | 200 | [18] |
| Silica (hydrophobic) | 1.51 × 10−6 | 6 | 200 | [19] |
| Silica on zirconia | 1.34 × 10−6 | 4 | 300 | [20] |
| Silica | 1.34 × 10−6 | 8 | 300 | [20] |
| Silica (Si600) | 5.02 × 10−7 | – | 200 | [18] |
| Silica (hydrophilic) | 6.70 × 10−9 | 11 | 200 | [19] |
| Silica with Co | 5.00 × 10−9 | 1000 | 250 | [21] |
| Silica | 1.80 × 10−8 | 15–80 | 150 | [22] |
| Silica with Co&Pd | 6.00 × 10−6 | 200 | 500 | [23] |
| Silica (ES40) | 1.01 × 10−6 | 12 | 450 | [24] |
| AKP-30 tubular silica | 1.8 × 10−6 | 3.5 | 200 | [25] |
| Silica with C6 surfactant | 1.5 × 10−6 | 6 | 200 | [19] |
| Silica without C6 surfactant | 7.0 × 10−9 | 10 | 200 | [19] |

### Table 5.

H2/CO2 separation performance by silica-based membrane.

The H2 permeance of silica membranes can reach the order of 10−6 mol s−1 m−2 Pa−1, which strongly suggests that silica membranes are competitive for pre-combustion capture. However, exposure to high concentrations of water vapor leads to a decline in performance: such steady decay over long periods can decrease the H2 permeance by an order of magnitude, and this still inhibits the commercialization of silica membranes.

As a nonporous membrane, the polymeric membrane permeates gases via the solution-diffusion mechanism, so the permeability is a function of gas diffusivity and solubility. Hydrogen molecules diffuse faster than other gases due to their small molecular size; however, the lower solubility of hydrogen within the polymeric membrane reduces its permeability. For H2-selective polymeric membranes, the permeability is therefore limited by the low solubility of H2. There is a wide range of polymeric membranes available for separating H2 from CO2; the performance of some of them is shown in Table 6. High permeabilities are observed for polyimides such as 6FDA-durene.
Higher selectivities are reported for polybenzimidazole and poly(vinyl chloride), but H2 permeability is compromised.

| Membrane | Permeability (mol s−1 m−2 Pa−1) | H2/CO2 selectivity | Temperature (°C) | Reference |
|---|---|---|---|---|
| 6FDA-Durene | 1.89 × 10−9 | 1 | 35 | [26] |
| Polybenzimidazole | 3.15 × 10−12 | 45 | 35 | [27] |
| Poly(vinyl chloride) | 5.36 × 10−12 | 11 | 35 | [28] |
| Poly(vinyl chloride) | 6.30 × 10−12 | 11 | 30 | [29] |
| Polybenzimidazole | 2.89 × 10−13 | 9 | 20 | [30] |
| Polybenzimidazole | 4.10 × 10−11 | 20 | 270 | [30] |
| Polybenzimidazole | 3.41 × 10−11 | 3 | 300 | [30] |

### Table 6. H2/CO2 separation performance by polymeric membranes.

The main shortcoming of polymeric membranes is their poor thermal stability at operating temperatures above 100°C. Only polybenzimidazole has been examined over the temperature range (300–700°C) relevant to syngas purification. For polybenzimidazole membranes, the best combination of H2 permeability and H2/CO2 selectivity is observed between 200 and 270°C; this peak performance can be related to the increasing diffusivity of the smaller H2 molecule as temperature increases. More importantly, the performance of polymeric membranes depends on their stability in the real process environment: exposure to gases such as CO2, water vapor and H2S may result in plasticization and mechanical fouling. Due to their good thermal and hydrothermal stability, zeolite membranes are viewed as another possible candidate for separating H2 and CO2. Zeolites have an ordered pore structure, and if the pore channel size is appropriate, efficient size sieving can be achieved. Despite the relatively simple concept, only a few types of zeolite are workable, since the molecular sieve mechanism requires essentially defect-free membranes; this remains a challenge for zeolite membranes. The performance of a number of reported zeolite membranes for H2/CO2 separation is summarized in Table 7. In general, neither the H2 permeance nor the H2/CO2 selectivity reaches the ~10−6 mol s−1 m−2 Pa−1 and ~50 needed to meet industrial demands.
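Screening reported membranes against such industrial thresholds is straightforward to automate. The snippet below checks a few entries whose permeance/selectivity pairs are taken from the tables in this section; the targets are the indicative values quoted above:

```python
# Indicative industrial targets quoted above for pre-combustion H2/CO2 separation
PERMEANCE_TARGET = 1e-6    # mol s^-1 m^-2 Pa^-1
SELECTIVITY_TARGET = 50.0

# (name, H2 permeance or permeability, H2/CO2 selectivity) -- values from Tables 5-7
candidates = [
    ("Silica (Si400)", 2.01e-6, 7.0),
    ("Polybenzimidazole", 3.15e-12, 45.0),
    ("MFI zeolite", 2.82e-7, 42.6),
]

passing = [name for name, permeance, selectivity in candidates
           if permeance >= PERMEANCE_TARGET and selectivity >= SELECTIVITY_TARGET]
print(passing)  # [] -- no entry meets both targets at once
```

The empty result mirrors the point made in the text: individual membranes tend to excel in permeance or selectivity, but rarely in both.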
| Membrane | Permeance (a) (mol s−1 m−2 Pa−1) or Permeability (b) (mol s−1 m−1 Pa−1) | H2/CO2 selectivity | Temperature (°C) | Reference |
|---|---|---|---|---|
| MFI | 2.82 × 10−7 (a) | 42.6 | 500 | [31] |
| MFI | 1.50 × 10−7 (a) | 5 | 200 | [32] |
| MFI template free | 1.50 × 10−8 (a) | 3 | 500 | [33] |
| DDR | 5.00 × 10−8 (a) | 5 | 500 | [34] |
| DDR by CVD | 2.24 × 10−8 (a) | 5.9 | 500 | [35] |
| Zeolite-A | 9.45 × 10−10 (a) | 10 | 35 | [36] |
| MFI | 1.76 × 10−9 (a) | 18 | 450 | [37] |
| AIPO4-5 Zeolite | 3.15 × 10−9 (a) | 24 | 35 | [38] |
| ZSM-5 | 5.68 × 10−8 (a) | 1 | 10 | [39] |
| ZIF-69 | 6.60 × 10−8 (a) | 1.8 | 25 | [40] |
| 13X with PI | 6.93 × 10−11 (b) | 2.8 | 25 | [41] |

### Table 7. H2/CO2 separation performance by zeolite membranes. (a) Permeance. (b) Permeability.

Metal organic framework (MOF) membranes have emerged as candidates for H2/CO2 separation. In MOF materials, metal or metal-oxide cluster cations are interconnected by organic anions; the resulting coordination polymers form flexible frameworks, which is why such MOFs are called 'soft porous crystals'. Table 8 summarizes the H2 permeance and H2/CO2 selectivity of different MOF membranes. Despite relatively moderate permselectivity, attractively high permeances are observed. The operating temperature of MOF membranes is normally lower than pre-combustion temperatures, owing to the organic ligands. The synthesis of MOF membranes is relatively sophisticated, so the cost must be reduced notably before commercialization; there is still a long way to go for MOF membranes to fulfill the demands of industrial applications.

| Membrane | Permeance (mol s−1 m−2 Pa−1) | H2/CO2 selectivity | Temperature (°C) | Reference |
|---|---|---|---|---|
| MOF5 | 2.80 × 10−6 | 4.3 | 25 | [42] |
| MOF5 | 4.40 × 10−7 | 4.4 | 25 | [43] |
| MOF5 | 8.00 × 10−7 | 3.5 | 25 | [44] |
| Ni-MOF-74 | 1.27 × 10−5 | 9.1 | 25 | [45] |
| NH2-MIL-53 (Al) | 1.98 × 10−6 | 30.9 | 25 | [46] |
| MIL-53 | 5.00 × 10−7 | 4 | 25 | [47] |
| ZIF-7 | 7.40 × 10−7 | 6.7 | 200 | [48] |
| ZIF-7 | 4.55 × 10−7 | 13 | 220 | [49] |
| ZIF-7 | 4.57 × 10−6 | 9.6 | 25 | [50] |
| ZIF-7 | 3.05 × 10−6 | 18.3 | 170 | [50] |
| ZIF-8 | 5.00 × 10−8 | 3.5 | 25 | [51] |
| ZIF-8 | 1.80 × 10−7 | 3 | 25 | [52] |
| ZIF-8 | 2.66 × 10−5 | 8.8 | 100 | [53] |
| ZIF-22 | 2.00 × 10−7 | 7.2 | 25 | [54] |
| ZIF-90 | 2.95 × 10−7 | 16.9 | 225 | [55] |
| ZIF-95 | 1.90 × 10−6 | 25.7 | 52 | [56] |
| JUC-150 | 1.83 × 10−7 | 38.7 | 25 | [57] |
| HKUST-1 | 1.10 × 10−6 | 5.5 | 190 | [58] |
| MMOF | 2.00 × 10−9 | 5 | 190 | [59] |

### Table 8.
H2/CO2 separation performance by MOF membranes.

Unlike H2-selective membranes, CO2-selective membranes preferentially permeate CO2, and thus they also enable the separation of CO2 and H2. Separating CO2 from H2 can only be realized through surface diffusion or solution diffusion, driven by the difference in adsorbability or solubility between the gases. However, retaining the small H2 molecules while permeating the larger CO2 is truly challenging. To maximize the difference in adsorption or solution between the two gases, the temperature must be low; low temperatures, however, are not favored in pre-combustion processes. From this point of view, CO2-selective membranes are much less applicable than H2-selective ones.

## 4. Membranes for post-combustion capture

Another situation in which CO2 must be separated is after fuel combustion. The exhaust gas (flue gas) mainly contains CO2, H2O and N2. Water vapor is easily removed by condensation; more effort is required to separate CO2 and N2 prior to further treatments such as compression. Unlike pre-combustion capture, post-combustion capture separates CO2/N2 at moderate temperatures and ambient pressure. Such operating conditions are less severe than those of pre-combustion processes. As a result, post-combustion capture has encountered far fewer difficulties and is therefore rather closer to practical application. The major challenge for post-combustion capture is the low CO2 volumetric fraction in flue gas (~15%), which results in a low driving force for CO2 permeation. The separation of CO2/N2 relies mainly on surface diffusion and solution diffusion, driven by the difference in adsorbability and solubility between the gases. Fortunately, compared to N2, CO2 is favored by the majority of membrane materials via adsorption or absorption.
Furthermore, the kinetic diameter of CO2 is slightly smaller than that of N2, which also enhances the diffusion of CO2 (see Table 2). Therefore, CO2-selective membranes are generally used for post-combustion capture. To capture CO2 from flue gas, a membrane should satisfy a few requirements: high CO2 permeability, high CO2/N2 selectivity, high thermal and chemical stability, and acceptable cost. So far, polymer-based membranes are the only commercially viable type for CO2 removal from flue gas. The membrane materials include cellulose acetate, polyimides, polysulfone and polycarbonates. Table 9 shows the performance of several such membranes.

| Membrane | Permeance (a) (mol s−1 m−2 Pa−1) or Permeability (b) (mol s−1 m−1 Pa−1) | CO2/N2 selectivity | Temperature (°C) | Reference |
|---|---|---|---|---|
| Cellulose acetate | 2.48 × 10−7 (a) | 40.17 | Not reported | [60] |
| Polyimides-TMeCat | 6.30 × 10−10 (b) | 25 | 30 | [61] |
| Polyimides-TMMPD | 1.89 × 10−9 (b) | 17.1 | Not reported | [62] |
| Polyimides-IMDDM | 6.17 × 10−10 (b) | 18.1 | Not reported | [62] |
| Polysulfone-HFPSF-o-HBTMS | 3.31 × 10−10 (b) | 18.6 | 35 | [63] |
| Polysulfone-HFPSF-TMS | 3.47 × 10−10 (b) | 18 | 35 | [64] |
| Polysulfone-TMPSF-HBTMS | 2.27 × 10−10 (b) | 21.4 | 35 | [65] |
| Polycarbonates-TMHFPC | 3.50 × 10−10 (b) | 15 | 35 | [66] |
| Polycarbonates-FBPC | 4.76 × 10−11 (b) | 25.5 | 35 | [67] |

### Table 9. CO2/N2 separation performance by polymer-based membranes. (a) Permeance. (b) Permeability.

Selectivities of roughly 15–40 with decent permeability were observed for the polymer-based membranes. The high solubility of CO2 in polymers ensures sufficient CO2/N2 selectivity; furthermore, polymers with a high fractional free volume present excellent gas transport properties. Mixed-matrix membranes are a newer option for enhancing the properties of polymeric membranes: their microstructure consists of an inorganic material, in the form of micro- or nano-particles (the discrete phase), incorporated into a continuous polymeric matrix.
The addition of inorganic materials to a polymer matrix offers improved thermal and mechanical properties for aggressive environments and stabilizes the polymer against changes in the chemical and physical environment. Carbon molecular sieve membranes also show interesting performance for CO2 separation applications; polyimide is the most used precursor for carbon membranes. Carbon membranes offer improved gas transport properties for light gases (molecular size smaller than 4.0–4.5 Å) together with thermal and chemical stability. The major disadvantages of mixed-matrix and carbon membranes that hinder their commercialization are brittleness and a cost 1–3 orders of magnitude greater than that of polymeric membranes.

## 5. Membranes for oxy-fuel combustion

In oxy-fuel combustion, oxygen is supplied for combustion instead of air. This avoids the presence of nitrogen in the exhaust gas, the major issue to be solved by post-combustion CO2 capture technologies. With pure oxygen used for combustion, the flue gas consists mainly of CO2 and water vapor, plus other impurities such as SO2. Water vapor can be easily condensed, and SO2 can be removed by conventional desulphurization methods. The remaining CO2-rich gas (80–98 vol.% CO2, depending on the fuel used) can be compressed, transported and stored. This process is technically feasible but consumes a large amount of oxygen coming from an energy-intensive air separation (O2/N2) unit. The O2/N2 separation follows the ion transport mechanism depicted in Figure 9 for air separation membranes: oxygen molecules are converted to oxygen ions at the surface of the membrane and transported through it by an applied electric voltage or an oxygen partial pressure difference; the ions revert to oxygen molecules after passing through the membrane. These membranes are O2-selective in principle. Generally, fluorite-based and perovskite-based membranes are used to deliver oxygen through this mechanism.
Air separation is mostly carried out at atmospheric pressure, while the permeate side is connected to a high-speed sweep gas or a vacuum. For convenience, the membrane performance is therefore generally described as a permeate flux instead of a permeance. Table 10 lists oxygen permeation fluxes for fluorite membranes. The oxygen permeation flux of fluorite-based membranes ranges from 10−6 to 10−3 mol s−1 m−2 between 650 and 1527°C. The highest oxygen flux was observed for Bi1.5Y0.3Sm0.2O3 compounds.

| Membrane | O2 flux (mol s−1 m−2) | Thickness (mm) | Temperature (°C) | Reference |
|---|---|---|---|---|
| Bi0.75Y0.5Cu0.75O3 | 2.80 × 10−5–1.06 × 10−4 | 2 | 650–850 | [68] |
| Bi1.5Y0.3Sm0.2O3 | 4.40 × 10−3–6.36 × 10−3 | 1.2 | 825–875 | [69] |
| Ce0.8Pr0.2O2-δ | 1.33 × 10−4–3.35 × 10−4 | 1 | 850–950 | [70] |
| (ZrO2)0.85(CaO)0.15 | 1.70 × 10−4 | 1 | 870 | [71] |
| [(ZrO2)0.8(CeO2)0.2]0.9(CaO)0.1 | 1.36 × 10−6–9.44 × 10−5 | 2 | 1127–1527 | [72] |

### Table 10. Oxygen permeation flux data for fluorite membranes.

The performance of perovskite membranes is displayed in Table 11. Oxygen permeation fluxes of magnitude 10−5–10−2 mol s−1 m−2 between 700 and 1000°C have been reported. The overall oxygen flux through perovskite membranes is superior to that of fluorite membranes; SrCo0.8Fe0.2O3-δ exhibits the best oxygen flux.

| Membrane | O2 flux (mol s−1 m−2) | Thickness (mm) | Temperature (°C) | Reference |
|---|---|---|---|---|
| BaBi0.4Co0.2Fe0.4O3-δ | 3.064 × 10−3–5.985 × 10−3 | 1.5 | 800–925 | [73] |
| BaCo0.4Fe0.5Zr0.1O3-δ | 1.908 × 10−3–6.813 × 10−3 | 1 | 700–950 | [74] |
| CaTi0.8Fe0.2O3-δ | 7.976 × 10−5–2.185 × 10−4 | 1 | 800–1000 | [75] |
| Gd0.6Sr0.4CoO3-δ | 1.179 × 10−2 | 1.5 | 820 | [76] |
| LaCo0.8Fe0.2O3-δ | 1.786 × 10−4 | 1.5 | 860 | [76] |
| La0.6Sr0.4Co0.8Cu0.2O3-δ | 1.417 × 10−2 | 1.5 | 860 | [76] |
| SrCo0.8Fe0.2O3-δ | 2.485 × 10−2 | 1 | 870 | [77] |

### Table 11. Oxygen permeation flux data for perovskite membranes.

In spite of the great number of works attempting to separate air efficiently using membranes, membrane technology for oxy-fuel combustion is still at an early stage of development.
Compared to the conventional cryogenic air separation technique, the high-temperature requirement and the resulting high cost of air separation membranes are unfavorable for commercialization. Other issues, such as high-temperature sealing and chemical and mechanical stability, also need to be addressed prior to practical application. At present, no full-scale oxy-fuel membrane project has been reported.

## 6. Summary of membranes applied in CCS

The aforementioned membranes are compared in Table 12, which summarizes their application situations, advantages and disadvantages.

| Membrane type | Application | Advantages | Disadvantages |
|---|---|---|---|
| Metal membrane | Pre-combustion | Infinite H2/CO2 selectivity | High cost; poisoning; low driving force |
| Carbon membrane | Pre-combustion | Size sieving effect; high H2/CO2 selectivity | High cost; susceptible to oxygen; brittleness |
| Alumina membrane | Pre-combustion | Low cost; chemical and physical stability | Low H2/CO2 selectivity |
| Zeolite membrane | Pre- and post-combustion | Low cost; chemical and physical stability | Low H2/CO2 selectivity |
| MOF membrane | Pre- and post-combustion | Large pore volume and surface area | High cost |
| Silica membrane | Pre-combustion | Proper pore size; low cost; high thermal stability | Poor hydrothermal stability |
| Polymeric membrane | Post-combustion | Low cost; high CO2/N2 selectivity | Low chemical and physical stability; too thick |
| Fluorite membrane | Oxy-fuel combustion | High O2/N2 selectivity | Energy intensive; hard to seal |
| Perovskite membrane | Oxy-fuel combustion | High O2/N2 selectivity | Energy intensive; hard to seal; poisoning |

### Table 12. Summary of membranes applied in CCS.

## 7. Membrane mass transfer theory

Membrane separation techniques have been intensively developed with the growing needs of CCS. The two major targets of membrane development are high permeability and high selectivity. Understanding gas transport through membranes is of great importance in guiding membrane material design and synthesis improvement.
For all mass transfer problems, a general form is always expressed as a coefficient multiplied by a driving force:

$$J = C f \tag{2}$$

where J is the mass transfer flux, C is a general transfer coefficient and f is a general driving force. The driving force can be the gradient of pressure, concentration, chemical potential or even electrical potential, depending on the mass transfer mechanism; the coefficient can be a permeability, a diffusivity or another term, depending on the driving force. For membrane mass transfer, the pressure difference and permeate flux are generally determined from experimental measurements, so the most common form in the membrane industry is

$$J = \frac{P}{l}\,\Delta p \tag{3}$$

The membrane thickness l is lumped together with the permeability P into a term called permeance, P/l, which is a convenient way of describing permeation given the difficulty of measuring the exact thickness of thin films. Generally, membrane films interpenetrate into the pores of the interlayer or, for interlayer-free membranes, of the substrate; hence the thickness is not homogeneous.

### 7.1. Viscous flow model

When the pore size is large, gas molecule-molecule collisions dominate over gas molecule-wall collisions; that is, the mean free path is far smaller than the pore size:

$$\frac{\lambda}{d} \ll 1 \tag{4}$$

where λ is the mean free path and d is the pore diameter. In this situation, viscosity plays an important role in the mass transfer, and the permeate flux across the membrane is described by the viscous flow model:

$$J = \frac{\varepsilon_p}{\tau_T}\,\frac{r_p^2}{8\eta}\,\frac{p}{RT}\,\frac{dp}{dz} \tag{5}$$

where η is the viscosity, R the gas constant, T the temperature, p the pressure, ε_p the porosity, τ_T the tortuosity and r_p the pore radius. For gases, viscosity increases with temperature; from Eq. (5), it follows that if transport is in the viscous regime, the flux is a decreasing function of temperature.
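Whether a given pore operates in the viscous regime of Eq. (4) can be estimated from the kinetic-theory mean free path, λ = k_B T / (√2 π d² p). The molecular diameter and conditions below are assumed illustrative values, not data from this chapter:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T, p, d_molecule):
    # Kinetic-theory mean free path: kT / (sqrt(2) * pi * d^2 * p)
    return k_B * T / (math.sqrt(2) * math.pi * d_molecule**2 * p)

lam = mean_free_path(300.0, 1e5, 3.64e-10)  # ~7e-8 m for an N2-like gas at 1 bar
Kn_micropore = lam / 0.3e-9                 # Knudsen number for a 0.3 nm (silica-scale) pore
print(lam, Kn_micropore)

# Ideal Knudsen selectivity for H2 over CO2, sqrt(M_CO2 / M_H2):
knudsen_selectivity = math.sqrt(44.01 / 2.016)
print(knudsen_selectivity)  # ~4.7
```

With λ on the order of 70 nm at ambient conditions, any sub-nanometre pore is far from the viscous regime (λ/d ≫ 1), and the ideal H2/CO2 Knudsen selectivity of only ~4.7 explains why molecular-mass-based diffusion alone is rarely sufficient for real separations.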
Although viscosity differs from gas to gas, a well-mixed gas mixture shares a single viscosity value because of the intensive intermolecular collisions. Therefore, there is no selectivity between gases in the viscous regime, even though their pure-gas viscosities differ.

### 7.2. Knudsen diffusion model

When the pore size is reduced to a scale much smaller than the mean free path, molecule-wall collisions dominate over intermolecular collisions. Viscosity then no longer plays a role in the gas transport; instead, the pore geometry and the gas molecular velocity influence the mass transfer more significantly. This type of transport is called Knudsen diffusion. Molecule-wall collisions dominate over intermolecular collisions when the Knudsen number is much greater than 1:

$$Kn = \frac{\lambda}{d} \gg 1 \tag{6}$$

The permeate flux is described by the Knudsen diffusion model:

$$J = \frac{2}{3}\,\frac{\varepsilon_p r_p}{\tau_T}\sqrt{\frac{8}{\pi R T M}}\,\frac{dp}{dz} = \frac{2}{3}\,\frac{\varepsilon_p r_p}{\tau_T}\sqrt{\frac{8}{\pi R T M}}\,\frac{\Delta p}{l} \tag{7}$$

where M is the molecular weight. Based on Eq. (3), the permeance for Knudsen diffusion is

$$\frac{P}{l} = \frac{2}{3}\,\frac{\varepsilon_p r_p}{\tau_T\, l}\sqrt{\frac{8}{\pi R T M}} \tag{8}$$

For the same pore at a fixed temperature, the permeate flux is determined by the molar mass, and in principle the ideal selectivity is the square root of the inverse ratio of the molar masses. Because this selectivity is limited, however, Knudsen diffusion is rarely used in practice for separating real gas mixtures.

### 7.3. Surface diffusion model

For ultra-microporous (d_p < 5 Å) materials, the Lennard-Jones (L-J) potentials of the atoms forming the pore wall start to overlap inside the pore. Consequently, there is a very deep potential well near the wall, at a distance from the wall on the order of a gas molecule diameter. In this situation, the motion of gas molecules is significantly affected by the potential field.
Since gas molecules intrinsically seek lower potential, adsorption preferentially takes place near the pore wall, where the potential well lies; the corresponding transport model is called surface diffusion. A brief introduction was given in Section 2.3.2 of this chapter; here a more analytical and mathematical description of surface diffusion is provided. The original expression for mass transfer across the membrane is

$$J = qD\,\frac{1}{RT}\,\frac{d\mu}{dz} \tag{9}$$

where q is the molar concentration of gas in the pore, D is the diffusivity, μ is the chemical potential and z is the spatial coordinate in the membrane thickness direction. Assuming equilibrium between the membrane surface concentration and the bulk gas phase, the following relationship for the chemical potential applies:

$$\mu = \mu^0 + RT\ln p \tag{10}$$

where p is the absolute pressure. Using Eq. (10), Eq. (9) is converted to

$$J = D\,\frac{d\ln p}{d\ln q}\,\frac{dq}{dz} = D\,\Gamma\,\frac{dq}{dz} \tag{11}$$

where $\Gamma = \dfrac{d\ln p}{d\ln q}$ is defined as the thermodynamic factor. In microporous materials, the adsorbed gas concentration generally follows a Langmuir isotherm,

$$q = q_{sat}\,\frac{bp}{1+bp} \tag{12}$$

where b is the Langmuir equilibrium constant. Substituting Eq. (12) into Eq. (11) gives

$$J = q_{sat}\,D\,\frac{1}{1-\theta}\,\frac{d\theta}{dz} \tag{13}$$

where $\theta = q/q_{sat}$ is called the occupancy; the thermodynamic factor $\Gamma = \dfrac{1}{1-\theta}$ follows from the Langmuir isotherm. Surface diffusion is often applied to separate gas mixtures whose components have very different adsorption capacities in the same material. With elevated temperature, however, adsorption becomes weaker and the Langmuir isotherm approaches Henry's law,

$$q = Kp \tag{14}$$

where K is Henry's constant. Substituting Eq. (14) into Eq. (11) yields Fick's first law:

$$J = D\,\frac{dq}{dz} \tag{15}$$

The diffusivity D is a function of temperature; its temperature dependence usually obeys an Arrhenius relation,

$$D = D_0 \exp\!\left(-\frac{E_d}{RT}\right) \tag{16}$$

where D_0 is a pre-exponential coefficient depending on the average jump distance, frequency and average velocity of the gas, and E_d is the diffusion activation energy.
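The two dependences just introduced — the Langmuir-derived thermodynamic factor Γ = 1/(1 − θ) and the Arrhenius form of the diffusivity — can be sketched numerically. All parameter values below (b, q_sat, D0, Ed) are illustrative assumptions, not data from the chapter:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def langmuir_loading(p, b, q_sat):
    # Langmuir isotherm, Eq. (12): q = q_sat * b*p / (1 + b*p)
    return q_sat * b * p / (1.0 + b * p)

def thermodynamic_factor(theta):
    # Gamma = 1 / (1 - theta): diverges as the pore saturates (theta -> 1)
    return 1.0 / (1.0 - theta)

def arrhenius_diffusivity(T, D0, Ed):
    # Eq. (16): D = D0 * exp(-Ed / (R*T))
    return D0 * math.exp(-Ed / (R * T))

# Occupancies at a low and a high pressure (b = 1e-5 Pa^-1, q_sat normalized to 1):
theta_low = langmuir_loading(1e4, 1e-5, 1.0)     # b*p = 0.1  -> theta ~ 0.09
theta_high = langmuir_loading(1e6, 1e-5, 1.0)    # b*p = 10   -> theta ~ 0.91
gamma_dilute = thermodynamic_factor(theta_low)   # ~1.1: near-Henry's-law behaviour
gamma_crowded = thermodynamic_factor(theta_high) # ~11: strong enhancement near saturation

# Arrhenius temperature dependence of the diffusivity (Ed = 20 kJ/mol):
d_300K = arrhenius_diffusivity(300.0, 1e-8, 20e3)
d_600K = arrhenius_diffusivity(600.0, 1e-8, 20e3)
print(gamma_dilute, gamma_crowded, d_300K, d_600K)
```

At low occupancy Γ ≈ 1 and Eq. (13) collapses toward Fick's law, while near saturation the thermodynamic factor strongly amplifies the flux for a given loading gradient; the diffusivity itself rises steeply with temperature.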
Henry's constant is a function of temperature according to a van't Hoff relation:

$$K = K_0 \exp\!\left(\frac{Q}{RT}\right) \tag{17}$$

where K_0 is a pre-exponential coefficient and Q is the heat of adsorption. Eqs. (14)–(17) can be combined as

$$J = D_0 K_0 \exp\!\left(-\frac{E_d - Q}{RT}\right)\frac{dp}{dz} = D_0 K_0 \exp\!\left(-\frac{E_a}{RT}\right)\frac{dp}{dz} \tag{18}$$

E_a is called the apparent activation energy, defined as

$$E_a = E_d - Q \tag{19}$$

The apparent activation energy determines whether the permeate flux is an increasing function of temperature or not, so this type of diffusion is called activated transport. Assuming a uniform pressure gradient, Eq. (18) simplifies to

$$J = D_0 K_0 \exp\!\left(-\frac{E_a}{RT}\right)\frac{\Delta p}{l} \tag{20}$$

The permeance P/l is the coefficient between flux and pressure drop according to Eq. (3):

$$\frac{P}{l} = \frac{D_0 K_0}{l} \exp\!\left(-\frac{E_a}{RT}\right) \tag{21}$$

Activated transport is generally used to separate gas mixtures whose components have apparent activation energies of different sign; the separation performance is then enhanced at elevated temperatures.

### 7.4. Gas translation diffusion model

If the pore size is further reduced to the molecular level, there is no potential well inside the pore. Instead, the positive potentials overlap and form a potential barrier. Only gas molecules whose kinetic energy is higher than the potential barrier can make a successful jump and complete permeation. This model is called gas translation diffusion. The permeate flux of gas translation follows Fick's first law as derived in Eq. (15), with a different diffusion coefficient:

$$D_{GT} = \lambda Z_n \sqrt{\frac{8RT}{\pi M}}\exp\!\left(-\frac{E_{GT}}{RT}\right) \tag{22}$$

where λ is the jump length, Z_n is the number of available jump directions and E_GT is the potential barrier. Considering the ideal gas law,

$$p = cRT \tag{23}$$

the gas translation permeance can be written as

$$\frac{P}{l} = \frac{\lambda Z_n}{l}\sqrt{\frac{8}{\pi M R T}}\exp\!\left(-\frac{E_{GT}}{RT}\right) \tag{24}$$

### 7.5. Oscillator model

If we assume the pore is a cylinder, gas molecules hop within the pore cylinder from entrance to exit; the molecular trajectory looks like an oscillation across the pore cross section. The gas travels at constant speed between collisions and loses all its momentum when colliding with the wall.
This model is a more recent development in mass transfer theory by Bhatia et al. [78, 79]. From Newton's law,

$$\langle v_z\rangle = \frac{D}{k_B T}\,f = \frac{f}{m}\,\langle\tau\rangle \tag{25}$$

the gas diffusivity in the pore is derived as

$$D = \frac{k_B T}{m}\,\langle\tau\rangle \tag{26}$$

where ⟨v_z⟩ is the average velocity in the permeation direction, k_B the Boltzmann constant, f the force, m the molecular mass and ⟨τ⟩ the average hopping time. The hopping time of each molecule depends on the pore potential distribution and on its radial coordinate and momentum:

$$\tau(r,p_r,p_\theta) = 2m\int_{r_{c0}(r,p_r,p_\theta)}^{r_{c1}(r,p_r,p_\theta)} \frac{dr'}{p_r(r',r,p_r,p_\theta)} \tag{27}$$

Here p_r(r′, r, p_r, p_θ) is the radial momentum at r′ of a molecule that had radial momentum p_r at r, and r_{c1}(r, p_r, p_θ) and r_{c0}(r, p_r, p_θ) are the solutions r′ of p_r(r′, r, p_r, p_θ) = 0. The radial momentum is derived from conservation of the total energy (the Hamiltonian),

$$E_t(r,p_r,p_\theta) = \varphi(r) + \frac{p_r^2}{2m} + \frac{p_\theta^2}{2mr^2} \tag{28}$$

where φ(r) is the radial L-J potential, which can be derived from the pore structure and the gas properties. The force in the radial direction is the partial derivative of the total energy with respect to r:

$$\frac{dp_r}{dt} = -\frac{\partial E_t}{\partial r} \tag{29}$$

Combining Eqs. (28) and (29) gives the radial momentum

$$p_r(r',r,p_r,p_\theta) = \left[2m\left(\varphi(r)-\varphi(r')\right) + p_r^2(r) + p_\theta^2\left(\frac{1}{r^2}-\frac{1}{r'^2}\right)\right]^{1/2} \tag{30}$$

Considering a canonical distribution for p_r and p_θ, we have

$$\psi(r,p_r,p_\theta) = \psi_0 \exp\!\left[-\frac{1}{k_B T}\left(\varphi(r) + \frac{p_r^2}{2m} + \frac{p_\theta^2}{2mr^2}\right)\right] \tag{31}$$

The diffusion coefficient expression is obtained from Eqs. (26), (30) and (31):

$$D(r_p,T) = \frac{2}{\pi m}\;\frac{\displaystyle\int_0^{r_p}\! e^{-\varphi(r)/k_B T}\,dr \int_0^{\infty}\! e^{-p_r^2/2m k_B T}\,dp_r \int_0^{\infty}\! e^{-p_\theta^2/2m r^2 k_B T}\,dp_\theta \int_{r_{c0}(r,p_r,p_\theta)}^{r_{c1}(r,p_r,p_\theta)} \frac{dr'}{p_r(r',r,p_r,p_\theta)}}{\displaystyle\int_0^{r_p} e^{-\varphi(r)/k_B T}\, r\,dr} \tag{32}$$

The oscillator model is a purely theoretical and analytical approach without any empirical or semi-empirical factors. It takes the adsorption effect into account and applies to all pore sizes, pressures and temperatures. Besides the mass transfer models introduced above, there are other methods to study membrane gas transport from a theoretical perspective; Monte Carlo and molecular dynamics simulations are also major techniques for investigating micropore mass transfer. Because this chapter focuses on membrane CCS technology rather than transport phenomena, other sophisticated theories are not presented here.

## 8. Current status of membrane application

### 8.1. Membranes for pre-combustion

Membrane separation for pre-combustion is not yet a mature technology, and no industry-scale membrane system exists so far. However, a few pilot-scale pre-combustion membrane systems have demonstrated the potential for scale-up. Eltron Research & Development Inc. developed a pilot-scale pre-combustion membrane with 100 kg day−1 H2 production starting in 2005. They employed an alloy membrane to separate H2 according to Sieverts' law. This project successfully improved membrane-based integrated gasification combined cycle (IGCC) flow sheets, achieving carbon capture greater than 95%. Another pilot-scale pre-combustion membrane set-up was constructed by Worcester Polytechnic Institute (WPI) in 2010, producing more than 566 L of H2 per day. Stable H2 fluxes were achieved in actual syngas atmospheres at 450°C for more than 470 h under a 12 bar pressure difference. The implementation of MembraGuard™ (T3's technology) inhibited surface poisoning by hydrogen sulfide (H2S), and H2 permeation showed good stability for more than 250 h.

### 8.2. Membranes for post-combustion

Membrane separation for post-combustion is a relatively mature technique. In 1995, the largest membrane-based natural gas processing plant in the world was built in Kadanwari, Pakistan, applying cellulose acetate membranes to separate CO2. The Kadanwari system is a two-stage unit designed to treat 25 × 10⁵ m3 h−1 of feed gas at 90 bar; the CO2 content is reduced from 12% to less than 3%. The Qadirpur plant started in the same year and exceeded the Kadanwari plant's processing capacity with 31 × 10⁵ m3 h−1 of feed gas at 59 bar, reducing the CO2 content from 6.5 to 2%. The Qadirpur plant was upgraded to 64 × 10⁵ m3 h−1 of feed gas in 2003.

### 8.3. Membranes for oxy-fuel combustion

Air separation membranes are still at an early stage of development.
In view of the high energy requirement of the ion transport mechanism, air separation membranes can hardly challenge traditional cryogenic air separation for large-scale production. Air Products, which has been developing ion transport membrane technology since 1988, and the DOE (US Department of Energy) are collecting data from a pilot plant near Baltimore, Maryland, with a capacity of 5 tons of oxygen per day. This facility will lead to the next step of designing and building a larger membrane air separation unit (150 tons of oxygen per day).

## 9. Techno-economics of membranes

The conventional CO2 capture process is absorption with amines, the most common technology. However, the corrosion, degradation and high regeneration energy of amines significantly increase the electricity cost, so substantial technological improvements and alternative technologies are needed to lower the CO2 capture cost. The economic indicator CO2 avoided ($/ton) is an established term for measuring and comparing different CO2 capture strategies such as absorption, adsorption, cryogenic separation and membrane separation. It is the additional cost of establishing and running a CO2 capture facility for an industrial plant or power plant, compared to the respective plant without CO2 capture. The CO2 avoided is expressed as:
$$\mathrm{CO_2\ avoided} = \frac{LCOE_{\mathrm{capture}} - LCOE_{\mathrm{ref}}}{\mathrm{CO_2\ emission}_{\mathrm{ref}} - \mathrm{CO_2\ emission}_{\mathrm{capture}}} \tag{33}$$
where the subscripts ref. and capture denote the reference plant without capture and the respective plant with a CO2 capture facility. LCOE is the levelized cost of electricity.
A brief techno-economic comparison was made between two power plants using conventional amine scrubbers and a power plant using a polymer membrane (Table 13). The estimates are subject to uncertainty because not all input parameters, such as fuel price and operational and maintenance costs, can be predicted accurately. The aim of the comparison is not to give absolute costs, but to indicatively illustrate the cost per ton of CO2 avoided. The overall comparison indicates that the membrane separation case results in a slightly lower LCOE and CO2 avoided than traditional amine-based solvent scrubbing. Although this alone cannot establish an economic advantage for membranes, the comparison at least indicates that membrane separation is competitive with amine-based solvent scrubbing. However, significant efforts are still required to improve membrane properties so as to achieve higher stability, permeate purity and recovery.
| Organization | Carnegie Mellon University | Electric Power Research Institute | Membrane Technology and Research, Inc |
|---|---|---|---|
| CCS technology | Amine-based | Amine-based | Membrane-based |
| Location | USA | USA | USA |
| Coal type | Bituminous coal | Bituminous coal | Illinois #6 |
| Plant size (MW) | 575 | 600 | 580 |
| Designed CO2 capture rate (%) | 90 | 85 | 90 |
| CO2 emission, reference (kg/MWh) | 811 | 836 | 760 |
| CO2 emission, capture (kg/MWh) | 107 | 126 | 87 |
| Net power output, reference (MW) | 528 | 600 | 550 |
| Net power output, capture (MW) | 493 | 550 | 461 |
| Net plant efficiency, reference (LHV, %) | 41.4 | 40 | 41.4 |
| Net plant efficiency, capture (LHV, %) | 31.5 | 29.1 | 34.4 |
| Efficiency penalty (%) | 9.9 | 10.9 | 7 |
| Capital costs, reference ($/kW) | 1696 | 2104 | 1727 |
| Capital costs, capture ($/kW) | 2759 | 3516 | 2627 |
| LCOE, reference ($/MWh) | 62 | 77 | 62 |
| LCOE, capture ($/MWh) | 104 | 127 | 93 |
| CO2 avoided ($/ton) | 58 | 71 | 46 |

### Table 13. Techno-economic comparisons between amine-based CO2 removal and membrane separation.
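The CO2-avoided figures in Table 13 can be reproduced directly from Eq. (33). The helper below converts the emission columns from kg/MWh to tons/MWh so that the result comes out in $/ton:

```python
def co2_avoided(lcoe_capture, lcoe_ref, emission_ref_kg, emission_capture_kg):
    # Eq. (33): ($/MWh cost increase) / (tons of CO2 avoided per MWh)
    return (lcoe_capture - lcoe_ref) / ((emission_ref_kg - emission_capture_kg) / 1000.0)

# Values from Table 13:
membrane_case = co2_avoided(93, 62, 760, 87)    # MTR, membrane-based
amine_case = co2_avoided(127, 77, 836, 126)     # EPRI, amine-based
print(membrane_case)  # ~46 $/ton
print(amine_case)     # ~70 $/ton
```

Small differences from the tabulated values reflect rounding of the published inputs.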
© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter
Guozhao Ji and Ming Zhao (March 8th 2017). Membrane Separation Technology in Carbon Capture, Recent Advances in Carbon Capture and Storage, Yongseung Yun, IntechOpen, DOI: 10.5772/65723. Available from:
View all Books | 2020-07-13 09:27:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41174226999282837, "perplexity": 3772.325306056113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143354.77/warc/CC-MAIN-20200713064946-20200713094946-00250.warc.gz"} |
http://www.transtutors.com/questions/let-be-a-finite-measure-on-r-and-f-x-8-x-show-that-z-f-x-c-f-x-dx-c--825790.htm | # Let µ be a finite measure on R and F ( x) = µ ( (-8, x]). Show that Z (F ( x + c) - F ( x)) dx = cµ.
Let µ be a finite measure on R and F(x) = µ((−∞, x]). Show that

$$\int_{\mathbb{R}} \big(F(x+c) - F(x)\big)\,dx = c\,\mu(\mathbb{R})$$

1.7.5. Show that $e^{-xy}\sin x$ is integrable in the strip $0 < x < a$, $0 < y < \infty$. Perform the double integral in the two orders to get

$$\int_0^a \frac{\sin x}{x}\,dx = \arctan(a) - (\cos a)\int_0^\infty \frac{e^{-ay}}{1+y^2}\,dy - (\sin a)\int_0^\infty \frac{y\,e^{-ay}}{1+y^2}\,dy$$

and replace $1+y^2$ by 1 to conclude $\left|\int_0^a \frac{\sin x}{x}\,dx - \arctan(a)\right| \le \frac{2}{a}$ for $a \ge 1$.
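For the first identity, a short Fubini/Tonelli argument (a sketch, assuming c > 0) goes as follows:

```latex
F(x+c)-F(x)=\mu\big((x,\,x+c]\big)=\int_{\mathbb{R}} \mathbf{1}_{\{x< t\le x+c\}}\,\mu(dt),
\qquad\text{so}\qquad
\int_{\mathbb{R}}\big(F(x+c)-F(x)\big)\,dx
=\int_{\mathbb{R}}\Big(\underbrace{\int_{\mathbb{R}}\mathbf{1}_{\{t-c\le x< t\}}\,dx}_{=\,c}\Big)\,\mu(dt)
=c\,\mu(\mathbb{R}).
```

Tonelli applies because the integrand is nonnegative, and the inner Lebesgue integral is just the length of the interval [t − c, t), namely c.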
Related Questions in Theory of probability
• 1. If R X R Y |f(x,... May 13, 2015
1 . If R X R Y | f(x , y )|µ 2 ( dy )µ 1 ( dx ) 8 then Z X Z Y f(x , y )µ 2 ( dy )µ 1 ( dx ) = Z X×Y f d(µ 1 × µ 2 ) = Z Y Z X f(x , y )µ 1 ( dx )µ 2 ( dy ) Corollary. Let X = { 1 , 2 , ....
• Let X = (0, 1), Y = (1 ,... May 13, 2015
Let X = ( 0 , 1 ), Y = ( 1,8 ), both equipped with the Borel sets and Lebesgue measure. Let f(x , y ) = e - xy - 2 e - 2 xy . Z 1 0 Z 8 1 f(x , y ) dy dx = Z 1 0 x - 1 ( e - x - e - 2 x )...
• Exercise 5.3.5. Check that fX (x) = d dxFX (x) for all... May 12, 2015
is that this is indeed a density. The integrand cannot be integrated directly, but there is a nice trick from calculus which enables us to compute R 8 - 8 f(x)dx . The trick consists of...
• Heyde (1963) Consider the lognormal density f0(x) =... May 14, 2015
Heyde (1963) Consider the lognormal density $f_0(x) = (2\pi)^{-1/2} x^{-1} \exp(-(\log x)^2/2)$ for $x \ge 0$, and for $-1 \le a \le 1$ let $f_a(x) = f_0(x)\{1 + a \sin(2\pi \log x)\}$. To see that $f_a$...
• To continue making connection with definitions of conditional expectation... May 14, 2015
To continue making connection with definitions of conditional expectation from undergraduate probability, suppose X and Y have joint density f(x , y ), i. e ., P(( X , Y ) ? B) = Z B f(x ,... | 2017-10-18 02:03:23 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850513100624084, "perplexity": 1286.0099207833578}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822668.19/warc/CC-MAIN-20171018013719-20171018033719-00494.warc.gz"} |
https://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapters-1-9-cumulative-review-exercises-page-705/3 | ## Basic College Mathematics (9th Edition)
Published by Pearson
# Chapters 1-9 - Cumulative Review Exercises - Page 705: 3
#### Answer
Estimate: $4\div2=2$ Exact: 2.5
#### Work Step by Step
The estimate is obtained by rounding each fraction to the nearest whole number: 4 for the first fraction and 2 for the second.
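The rounding step can be reproduced with exact rational arithmetic. A sketch; the operands 25/6 and 5/3 are hypothetical stand-ins chosen to match the rounded values and the exact answer shown, since the exercise's actual fractions are not reproduced on this page:

```python
from fractions import Fraction

a = Fraction(25, 6)   # hypothetical first fraction, 4 1/6, rounds to 4
b = Fraction(5, 3)    # hypothetical second fraction, 1 2/3, rounds to 2
estimate = round(a) / round(b)   # the 4 / 2 estimate
exact = a / b
print(estimate, float(exact))
```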
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2019-01-16 09:45:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5592262148857117, "perplexity": 2807.2982721500084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657151.48/warc/CC-MAIN-20190116093643-20190116115643-00285.warc.gz"} |
https://examqa.com/forum/discussions/how-to-write-math-equations-and-symbols-on-the-forum-latex-tutorial/ |
# [Sticky] How to write math equations, symbols and chemical formulae on the forum [LaTeX Tutorial]
Posts: 715
Topic starter
(@zeeshan)
Honorable Member
Joined: 2 years ago
We have enabled LaTeX on the forum. This means you are able to write formulas and equations on our forum.
To do so, you have to put your LaTeX code in between latex tags, like the following:
If you want to write a quadratic equation, you would write the source `x^2 + 5x + 6 = 0`, which outputs this:
$x^2 + 5x + 6 = 0$
====================================================
You can also write fractions in LaTeX. The source `\frac{1}{2}` will give you this:
$\frac{1}{2}$
You can even write nested fractions like this, using the source `\frac{x}{2+\frac{x}{2-\frac{x}{3}}}`
$\frac{x}{2+\frac{x}{2-\frac{x}{3}}}$
====================================================
Subscripts also work, so the source `O_{2}` would give this
$O_{2}$
Which can eventually look like this
$6CO_{2(g)} + 6H_{2}O_{(l)} > C_{6}H_{12}O_{6(s)} + 6O_{2(g)}$
====================================================
Clicking preview before you post the latex code is also useful as you will see what your code looks like
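For readers who want to reproduce these snippets outside the forum, a minimal standalone document (a sketch assuming a standard LaTeX distribution; the forum's own tag syntax may differ) would be:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A quadratic equation: $x^2 + 5x + 6 = 0$.

A nested fraction:
\[ \frac{x}{2+\frac{x}{2-\frac{x}{3}}} \]

Photosynthesis, with subscripts for the chemical formulae:
\[ 6CO_{2(g)} + 6H_{2}O_{(l)} \rightarrow C_{6}H_{12}O_{6(s)} + 6O_{2(g)} \]
\end{document}
```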
4 Replies
Posts: 718
(@m-tawofi)
Honorable Member
Joined: 2 years ago
That's actually so helpful.
What about chemical formulae (e.g. O2)
(@zeeshan)
Joined: 2 years ago
Honorable Member
Posts: 715
@m-tawofi
Yeah, here's how subscripts work
The source `O_{2}` would give this
$O_{2}$
Which can eventually look like this
$6CO_{2(g)} + 6H_{2}O_{(l)} > C_{6}H_{12}O_{6(s)} + 6O_{2(g)}$
Posts: 15
(@alisamifarooq)
Active Member
Joined: 11 months ago
Wow, it's really amazing, thank you. It is the best option for describing things. ~ Ali Sami Farooq
Posts: 1
(@darikelwan)
New Member
Joined: 9 months ago
That is very helpful for every student, thanks for this facility. ~ Darik Elwan
View Post | 2021-12-04 00:57:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7719141244888306, "perplexity": 4208.691965467879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362923.11/warc/CC-MAIN-20211204003045-20211204033045-00575.warc.gz"} |
http://www.mathnet.ru/php/presentation.phtml?option_lang=eng&presentid=5899 |
International conference "Geometrical Methods in Mathematical Physics"
December 13, 2011 16:45–17:30, Moscow, Lomonosov Moscow State University
Integrable WZNW model and string model of WZNW model type with $SU(n)$, $SO(n)$, $SP(n)$ constant torsion and infinite hydrodynamic chains from $n$ to $\infty$
V. D. Gershun
National Science Centre Kharkov Institute of Physics and Technology
Abstract: The integrability of the WZNW model, and of a string model of WZNW type with constant $SU(2)$ torsion, is investigated. The closed boson string model in a background gravity and antisymmetric B-field is considered as an integrable system in terms of the initial chiral currents. The model is considered under the assumption that the internal torsion, related to the metric of the Riemann–Cartan space, and the external torsion, related to the antisymmetric B-field, (anti)coincide. A new equation of motion was obtained, and an exact solution of this equation was found, for the string model with constant $SU(2)$ torsion. New equations of motion and new Poisson brackets (PB) for infinite-dimensional hydrodynamic chains were obtained for the string model with constant $SU(n)$, $SO(n)$, $SP(n)$ torsion from $n$ to $\infty$.
http://math.stackexchange.com/questions/104433/ordinary-differential-equation-fz2-c-fz3-fz2 | # ordinary differential equation: $(f'(z))^2 = c\,f(z)^3 + f(z)^2$
I swear I have seen this type of ODE before, but I can't remember how to attack it. In general, I would like to know how to solve $$\left(f'(z)\right)^m = c\,G(z)^n$$ where $m,\;n \in \mathbb{N}$ and $G(z)$ is just a polynomial in $f(z)$.
This sounds hard, so I would be happy with, $$\left(f'(z)\right)^2 = c\,G(z)^n,$$ though this latter equation may be too difficult as well. For my homework, though, I need to know, $$\left(f'(z)\right)^2 = \left( c\,f^3 + f^2 \right).$$ It was also suggested in the homework question to utilize $$g^2 = 3\,c-f.$$ If I do this without thinking I get an equation $$\left(f'(z)\right)^2 = c\,f(z)^2,$$ which seems much easier, though I am still a little rattled by the plus-minus.
In case it matters, this is a related to a method for solving the Korteweg-deVries equation, $$u_t + u\,u_x + u_{xxx} = 0.$$ I have seen some solutions (but did not understand them) where the polynomial was "factored" into 3 roots... something like that. I just don't want to know the answer, but how to get it. Please keep in mind that this is my first class in PDEs.
Thanks for any help!
Yes, I was there too.... I was thinking for some reason there may be some complications due to the sign of the square root. Anyway, thanks! – nate Feb 1 '12 at 0:30
"I have seen some solutions (but did not understand them) where the polynomial was "factored" into 3 roots" - and that is precisely the Weierstrass approach. – J. M. Feb 8 '12 at 18:17
\begin{align*} \left(\frac{dy}{dz}\right)^2 &= cy^3+y^2\\ \frac{dy}{y \sqrt{cy+1}} &= dz\\ cy+1 &= u^2\\ 2 \frac{du}{u^2-1} &= dz\\ - 2 \tanh^{-1} u + C&= z\\ u &= \tanh \frac{-z+C}{2}\\ \sqrt{cy+1} &= -\tanh \frac{z-C}{2}\\ cy+1 &= \tanh^2 \frac{z-C}{2}\\ y &= \frac{\tanh^2 \frac{z-C}{2}-1}{c}\\ y &= -\left({c\times \cosh^2 \frac{z-C}{2}}\right)^{-1} \end{align*}
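As a quick sanity check (a sketch; the constants $c=1$, $C=0$ are arbitrary choices, and the derivative is approximated by a central difference), the closed form in the last line can be verified against the original ODE $(y')^2 = cy^3 + y^2$:

```python
import math

def y(z, c=1.0, C=0.0):
    # Candidate solution y(z) = -1 / (c * cosh^2((z - C) / 2))
    return -1.0 / (c * math.cosh((z - C) / 2.0) ** 2)

def residual(z, c=1.0, h=1e-6):
    # |(y')^2 - (c y^3 + y^2)|, with y' from a central difference
    dy = (y(z + h, c) - y(z - h, c)) / (2.0 * h)
    return abs(dy ** 2 - (c * y(z, c) ** 3 + y(z, c) ** 2))

print(max(residual(z) for z in (-1.7, 0.3, 1.0, 2.5)))
```

The residual is at the level of floating-point noise, as expected for an exact solution.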
Aaahhh, I think that the $g^2 = 3\,c-f$ must have been a hint towards a substitution in the integration. Turns out it was just confusing the way it was suggested in the problem. Now I see it as a straight-forward problem. Thanks all! – nate Feb 1 '12 at 1:16

There is this famous differential equation ... $$[\wp'(z)]^2 = 4[\wp(z)]^3 - g_2\wp(z) - g_3$$ Weierstrass Elliptic Functions

- Ahh. Thank you - I was also wondering how I was going to solve this pde if I wasn't able to get rid of 2 BC's - would lead to a third-order polynomial in p(z) just like you mentioned. Thanks! – nate Feb 1 '12 at 23:31

Here's how to complete GEdgar's Weierstrass solution. Start with the differential equation $$(f^\prime (z))^2 = c\,f(z)^3 + f(z)^2$$ and introduce a new function $g(z)$ satisfying the relation $$f(z)=\frac4{c}g(z)-\frac1{3c}$$ (This is essentially equivalent to "depressing" a cubic equation plus a rescaling.) We can then derive a differential equation for $g(z)$ through this substitution: $$(g^\prime (z))^2=4g(z)^3-\frac1{12}g(z)+\frac1{216}$$ and comparing this with the Weierstrass differential equation, we find that $$g(z)=\wp\left(z;\frac1{12},-\frac1{216}\right)$$ We then compute the discriminant $$\Delta=\left(\frac1{12}\right)^3-27\left(-\frac1{216}\right)^2=0$$ and find that the underlying cubic is degenerate.
Abramowitz and Stegun give appropriate formulae for the case of $\Delta=0$, $g_2 > 0$, and $g_3 < 0$ (see formula 18.12.3); applying the formula listed there yields $$\wp\left(z;\frac1{12},-\frac1{216}\right)=\frac1{12}+\frac14\mathrm{csch}^2\left(\frac{z}{2}\right)$$ and thus $$f(z)=\frac4{c}\left(\frac1{12}+\frac14\mathrm{csch}^2\left(\frac{z}{2}\right)\right)-\frac1{3c}=\frac1{c}\mathrm{csch}^2\left(\frac{z}{2}\right)$$ | 2015-01-30 18:35:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980546236038208, "perplexity": 616.5685597270701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115926735.70/warc/CC-MAIN-20150124161206-00216-ip-10-180-212-252.ec2.internal.warc.gz"}
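The final closed form $f(z) = \frac{1}{c}\mathrm{csch}^2(z/2)$ can likewise be checked numerically against the original ODE. A sketch; $c=2$ is an arbitrary choice, and `residual` is a helper name introduced here:

```python
import math

def f(z, c=2.0):
    # f(z) = (1/c) * csch^2(z/2), the closed form obtained above
    return 1.0 / (c * math.sinh(z / 2.0) ** 2)

def residual(z, c=2.0, h=1e-6):
    # |(f')^2 - (c f^3 + f^2)|, with f' from a central difference
    df = (f(z + h, c) - f(z - h, c)) / (2.0 * h)
    return abs(df ** 2 - (c * f(z, c) ** 3 + f(z, c) ** 2))

print(max(residual(z) for z in (0.8, 1.5, 3.0)))
```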
http://uylr.aqoi.pw/examples-of-mathematics-questions-for-form-one.html | Examples Of Mathematics Questions For Form One
1 is between 1 and 10, so we have successfully completed the multiplication in standard form. Includes standards-aligned tech-enhanced questions that mirror LEAP 2025 Assessment testing items. This is way overdue! Hopefully this post and future posts will answer any questions you have about merging Eureka / Engage NY math with Guided-Math. Provide a few more examples of numbers in standard form, and have students work together to create the numbers using base ten blocks before they write down the expanded form. Some give examples of how Mathematics is used in Tanzania every day. Splash Math is an award winning math learning program used by more than 30 Million kids for fun math practice. You’ve saved so many of us a vast amount I’d time. MATH LEARNING RESOURCES 1. 1) Identify the question to be answered. What fractional part of the bag of chips is green? Strategy: 1) UNDERSTAND: What do you need to find? You need to find how many chips are in all. Austin had been the Mules’ football coach since 1986. The drawing is the same but the questions are different for each level. 2) PLAN: How can you solve the problem?. Solve the problem. Exceptions will be made only if noted in an assessment accommodation document prepared for you by the Algonquin College Centre for Students with Disabilities. Questions for Math Class | 9 Neither well-written standards, nor tasks with high cognitive de-mand, nor questions by themselves guarantee that students will engage in high-level discussions or learn rigorous mathematics, weaving together conceptual understanding, procedural skill and fluency, and appropriate application to the world in which. Is open-ended; that is, it typically will not have a single, final, and correct answer. Download Free math Olympiad level 1 PDF Sample Papers for Classes 1 to 10. Grade 10 math Here is a list of all of the math skills students learn in grade 10! 
These skills are organized into categories, and you can move your mouse over any skill name to preview the skill. Improve your math knowledge with free questions in "Point-slope form: write an equation" and thousands of other math skills. Download the largest collection of free MCQs on Mathematics for Competitive Exams. So, in the examples that follow, I'll be demonstrating how to work with these sorts of expressions. Our sample exams require no registration and also include scoring and answer explanations. INTERNATIONAL GCSE Mathematics (Specifi cation A) (9-1) SAMPLE ASSESSMENT MATERIALS Pearson Edexcel International GCSE in Mathematics (Specifi cation A) (4MA1) For fi rst teaching September 2016 First examination June 2018 Issue 2. Module 5 Sample Lesson Plans in Mathematics 5 Lesson 1: Primary 6 Multiply a Fraction by a Fraction 1. You'll need to think a little to come up with the correct model/initial value problem. Find the roots (solve for x): Solution. Express the recurring decimals into fractions. Important Questions for CBSE Class 9 Mathematics Chapter 1 Linear Equations in Two Variables NCERT Solutions for Class 9 Maths Chapter 4 Linear Equations in Two Variables: Linear Equations in Two Variables Introduction Linear Equations Solution Of A Linear Equation Graph Of A Linear Equation In Two Variables Equations Of Lines Parallel To The X-Axis […]. A matatu charges sh. First, as you get to know people in the class you can form study groups. If I want to return an integer between zero and hundr. This quis can be taken by all students in Malaysia. Throughout the course students will be asked to write questions on critical thinking drawing from information the Preface section B2. Question 6 on this exam is among the trickiest application problems to appear on a Math 251 exam in the past decade It is not quite the mixing problem as given in the textbook, but with a slight twist. 
The closed questions start the conversation and summarize progress, whilst the open question gets the other person thinking and continuing to give you useful information about them. [2] Question 4. The first 1/5th of a mile is charged at the higher rate. Hence, in this case, | a + b | < | a − b |. Linear functions are those whose graph is a straight line. These are BAs which are not only atomic, but are such that each subalgebra and homomorphic image is atomic. Templates and Sample Files. The derivative of a function with a fractional power: f(x) = x 1/2; return to top. 2) PLAN: How can you solve the problem?. Currently, elementary school teachers have five weekly, one-hour prep periods, one of which the principal oversees. Some explain the origin of mathematical ideas, showing that Mathematics is truly international. Mensuration 10 4. Include as many details as possible. 4 cm AND < ABC =75°. Download the largest collection of free MCQs on Mathematics for Competitive Exams. You can start playing for free! Work Word Problems - Sample Math Practice Problems The math problems below can be generated by MathScore. Math your third grade students need to know. Example: Find the standard form value of (8\times10^{-5})\div(2\times10^6). Math Riddles. Stack Exchange network consists of 175 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Get NCERT solutions for Class 11 Maths Free with videos. Decimals, Fractions & Percentages in its simplest form. This test is one of the California Standards Tests administered as part of the Standardized Testing and Reporting (STAR). Our sample exams require no registration and also include scoring and answer explanations. If you have more questions about FOIL, as always feel free to ask for help on the math help message board, or check out the FOIL calculator below. 
Middle School math books will include material on solving for an unknown. Suppose that N(x) and D(x) are polynomials. Welcome to MathHomeworkAnswers. These files include the scoring rubrics and sample student responses at each score point attainable. So, stay tuned. You can find professional online math help at Assignment Expert. This subject can help to get a high overall score and hence requires dedicated preparation. Our site offers thousands of online maths practice skills covering reception through year 13 maths, with questions that adapt to a student's individual proficiency. Using Writing In Mathematics. 1/25/06; When printing web based files, pictures and punch-out tools will be slightly reduced in size. mathematics classes throughout a student's school career is considered an investment in human capital. Algebra 1 is the second math course in high school and will guide you through among other things expressions, systems of equations, functions, real numbers, inequalities, exponents, polynomials, radical and rational expressions. 50 + 5 × 11. Also explore topics in mathematics using html 5 apps. IXL will track your score, and the questions will automatically increase in difficulty as you improve!. SAMPLE QUESTIONS Solve the following problems and select your answer from the choices given. x + y + z = 180 0. Enduring Understandings Essential Questions. Interactive, animated maths dictionary for kids with over 600 common math terms explained in simple language. It should read "(3n)". Webmath is a math-help web site that generates answers to specific math questions and problems, as entered by a user, at any particular moment. After watching this video, you should be able to solve most of the matrices questions in paper 1. Express the recurring decimals into fractions. When you see the keyword "remainder", I recommend you to use this method called "Branching". LECTURE 1: DIFFERENTIAL FORMS 1. Population 3. Every year the value of the car depreciates by 10%. 
50 + 5 × 11. Use the space in your Mathematics Sample Questions booklet to do your work on the multiple-choice and gridded-response questions, but be sure to put your answers on the Sample Answer Sheet. Make sure you fill in the correct circle. which we use in base ten. We will learn how to find the equivalent form of rational numbers expressing a given rational number in different forms and the equivalent form of the rational numbers having a common denominator. If you have past papers which are not available on this website, please feel free to share by posting using the link below. To communicate mathematicallymeans to use mathematical language,. One way to differentiate in math class is creating open-ended tasks and questions (I talked about several differentiation strategies I use here - Mathematically Speaking). Form 1 Math. Emily Henthorn, 25, had hoped to move in with Andrew Hardman and they had gone house. Title: Microsoft Word - ReleasedFormMathI1 Author: getyson Created Date: 8/21/2013 9:10:37 AM. Teachers incorporate writing in math class to help students reflect on their learning, deepen their understanding of important concepts by explaining and providing examples of those concepts, and make important connections to real-life applications of the math they are learning. SAMPLE APPLICATION OF DIFFERENTIAL EQUATIONS 3 Sometimes in attempting to solve a de, we might perform an irreversible step. Join a study group. Geometry 22 6. The 13 Hardest SAT Math Questions. Difference Between the Mathematics Behind Machine Learning and Data Science. Middle School math books will include material on solving for an unknown. Directions for questions 1–11. we are now focusing that we want to provide all sample paper in one article and you an download them with ease. 
Questions for Math Class | 9 Neither well-written standards, nor tasks with high cognitive de-mand, nor questions by themselves guarantee that students will engage in high-level discussions or learn rigorous mathematics, weaving together conceptual understanding, procedural skill and fluency, and appropriate application to the world in which. On the following pages are multiple-choice questions for the Grade 3 Practice Test, a practice opportunity for the Nebraska State Accountability-Mathematics (NeSA-M). You will find the answers at the bottom of the page. Popular Survey Questions with Survey Examples and Sample Survey. 19 hours ago · Most of us view what Hunter did as clearly “wrong” but the media has adopted a narrow focus on whether criminal charges have been brought as opposed to whether the Ukraine deal was a raw and obvious form of influence peddling. The winning team will be the first to correctly answer 12 questions about the country of Xandar. 3 A left parenthesis: push it onto the operator stack. Receive details answers to tough questions from over 80,000 expert tutors available for 1-on-1 hire. A set of 25 Maths questions, with answers and detailed solutions similar to the questions in the SAT maths test. Register online for Maths tuition on Vedantu. Sample Survey Questions, Answers and Tips | Page 4 About these Sample Questions These sample questions are provided to help you determine what you should ask in a survey as well as what ques-tion type. In each blog, the solutions & explanations to the sample questions are at the ends of the articles. Calculate the dot product of $\vc{a}=(1,2,3)$ and $\vc{b}=(4,-5,6)$. ) A maximum of 62 terms. Sample 5th Grade Contest from 2004-2005 - Solutions. Click and several Number system problems and get acquainted with a wide range of question variations. For example, the ancient geometers looked at triangles and noticed that their angle sums were all 180 degrees. 
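The standard-form example above, $(8\times10^{-5})\div(2\times10^6)$, can be checked with a short script (a sketch; `to_standard_form` is a helper name introduced here):

```python
def to_standard_form(x):
    """Return (mantissa, exponent) with 1 <= |mantissa| < 10, for nonzero x."""
    e = 0
    while abs(x) >= 10:
        x /= 10.0
        e += 1
    while 0 < abs(x) < 1:
        x *= 10.0
        e -= 1
    return x, e

m, e = to_standard_form((8e-5) / (2e6))
print(f"{m:g} x 10^{e}")   # 4 x 10^-11
```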
1 is between 1 and 10, so we have successfully completed the multiplication in standard form. I usually print these questions as an A5 booklet and issue them in class or give them out as a homework. The subject codes so listed are used by the two major reviewing databases, Mathematical Reviews and Zentralblatt MATH. To create a new line after the answer, press Enter (instead of Spacebar) after the equal sign. Answers to multiple-choice questions are provided at the end of examination 2. 4 cm AND < ABC =75°. For example, many credit cards To illustrate what a high-interest form of debt credit cards tend to be, consider that the average APR on a 60-month new car loan is just 5. Math Riddles. Developing scientific understanding. You will learn more in the example below. Powerful databasing, integrated networking and a wide range of analysis, visualization and decision-making tools, including data mining, querying, clustering, identification, statistics, and much more. Be aware of questions with no underlined portions—that means you will be asked about a section of the passage or about the passage as a whole. Example 1: N(x) D(x) = x4 +5x3 +16x2 +26x+ 22 x3 +3x2 + 7x+5 Step 1. [2] Question 3. An actual ACT Mathematics Test contains 60 questions to be answered in 60 minutes. Revision exercise Each chapter ends with a ‘revision exercise’ with questions that test learning of the entire chapter content. For example, if you need to add 1/2 and 2/3, start by determining a common multiple. An actual ACT Mathematics Test contains 60 questions to be answered in 60 minutes. Math calculators and answers: elementary math, algebra, calculus, geometry, number theory, discrete and applied math, logic, functions, plotting and graphics. H) Determining the Sum of Angles in One Whole Turn 1. Click and several Number system problems and get acquainted with a wide range of question variations. 
The mathematics test has three sessions to be taken separately: Session 1 (pages 3 to 19) includes 30 multiple-choice questions—a calculator may not be used. Standard Form: Conversion Converting Normal Numbers to Standard Form To convert large normal numbers to standard form you have to put a decimal point after the first digit and ignore the zeros at the end (if there is another number that is not 0 after the zero(s), don’t ignore them). ) Keep It Going 1. Specific Options Fourth. Team members of recruitmentresult. READING BOOT CAMP is a highly effective RTI reading program! Building on the fundamental belief "ALL STUDENTS ARE GIFTED", the goal is to lift ALL students' ACADEMIC READING SKILLS by using evidence-based "Socratic" methods, teaching all students as adroit learners, having fun, setting S. Mathematics Number system 1. Work out 4 8 4 10 3. On the test, questions from the areas are mixed together, requiring you to solve different types of problems as you progress. You may want to work out the answer you feel is correct and look for it among the choices given. "x" is the variable or unknown (we don't know it yet). You can start playing for free! Work Word Problems - Sample Math Practice Problems The math problems below can be generated by MathScore. Lots of interactive Maths challenges for children of different ages and abilities (year 2 to year 6, key stage 1 and key stage 2). Kolecki National Aeronautics and Space Administration Glenn Research Center Cleveland, Ohio 44135 Tensor analysis is the type of subject that can make even the best of students shudder. And since one is in the ones place, it means one one, or 1. Math Sample Questions - The SAT® Suite of Assessments - College and Career Readiness - The College Board. Sample 8th Grade Contest from 2012-2013 - Solutions. The ultimate test is this: does it satisfy the equation?. Calculate the area of the shaded region below, given that AC is an arc of a circle centre B. 
Subtle wording differences can produce great differences in results. 0001 = 1 x 10-4 0. You will step by step familiarize yourself with the test formats and sample questions before taking a real test. Example essays The International Baccalaureate® (IB) programme resource centre, a key resource for educators at IB World Schools, includes several examples of extended essay titles. The NBTP does not make any NBT papers available for the public domain. which is bookended by. Other questions from the book under section two. The parallelogram rule for vector addition shows that when a and b are placed tail to tail, the diagonals of the parallelogram are a + b and a − b. Sample problems are under the links in the "Sample Problems" column and the corresponding review material is under the "Concepts" column. Jane spent $42 for shoes. Elementary literacy, elementary mathematics, and secondary mathematics may submit one clip up to 15 minutes, or two clips totaling up to 15 minutes. help students acquire a range of mathematical techniques and skills and to foster and maintain the awareness of the importance of accuracy; 3. Question: Laura has 3 green chips, 4 blue chips and 1 red chip in her bag. Practise maths online with IXL. MATHCOUNTS offers fun and engaging programs that get middle school students excited about math. CBSE Sample Papers for Class 9 Sample Papers for 2018-19. Enter key or OK button finalizes answer. This question is based on new content. 2017-2018 ISTEP+ Assessment Practice Tests and Sample Questions are available for grades 3 through 8 in Mathematics and English Language Arts. × Close Register here. Printable in convenient PDF format. discovers that the Taliban Government is in-volved in the terrorist attack, then it will retaliate against Afghanistan. Two arrows represent the same vector if they have the same length and are parallel (see figure 13. Work out 4 107 6 10 5. 
Calculate the area of the shaded region below, given that AC is an arc of a circle centre B. Oswaal Sample Question Paper (Mathematics) Class XI PHYSICS: 1. Mathematics CyberBoard. Students, teachers, parents, and everyone can find solutions to their math problems instantly. Standard Form In science lessons, you have no doubt used scientific notation, or standard form , to write numerical answers to questions that are either very large or very small. Simon is conducting a probability experiment. Finally, division of both sides of the equation by y gives: x =( Z + y )/ y. Sample Question Paper 349 Sample Question Paper Mathematics (211) Time : 3 Hours Maximum Marks : 100 Note :1. b Represent sample spaces for compound events using methods such as organized lists, tables and tree diagrams. majors, Summer Math, study abroad, and more read the AMS article about our program (PDF) People faculty, staff, & majors Alumni where our alumni have gone MuddMath Newsletter our departmental newsletter HMC Mathematics Conference our annual conference series The Michael E. The geometric sequence is sometimes called the geometric progression or GP, for short. A problem of legendary difficulty first created by Pierre de Fermat in 1637, it attained its first. Note that after the first term, the next term is obtained by multiplying the preceding element by 3. So, in the examples that follow, I'll be demonstrating how to work with these sorts of expressions. Back in 2000, two Marin County property owners, Dartmond and Esther Cherk. The 13 Hardest SAT Math Questions. , y = 2x− 3. A common question in math will be to write a number in index form using a different number as base. Express the following numbers as products of prime factors hence find their square roots. For K-12 kids, teachers and parents. Challenge your students with creative mathematics lessons, printable worksheets, activities, quizzes, and more during Math Education Month (April)—or anytime of the year! 
Focus on various mathematical themes, such as geometry, algebra, probability and statistics, money, measurement, and more!. He randomly selects a tag from a set of tags that are numbered from 1 to 100 and then returns the tag to the set. [/math] Real life examples are: Finding current consumed on day 1,2,3… Every day, an average amount of current will be consumed. Amelia buys a crate of 30 eggs for$20. Neatness is a number that represents organization and cleanliness. Percentage formula is used to find the amount or share of something in terms of 100. Algebra Readiness Test I hope that you've been able to find the Pre Algebra examples that you need in order to fully prepare yourself for Algebra 1. SOLID GEOMETRY (a) Area and perimeter Triangle A = 2 1 base height = 2 1 bh Trapezium A = (sum of two parallel sides) height = 2 1 (a + b) h Circle Area = r2 Circumference = 2 r Sector Area of sector = 360 r2 Length of arc = 2 r Cylinder. Demonstrates how to find the value of a term from a rule, how to expand a series, how to convert a series to sigma notation, and how to evaluate a recursive sequence. b Represent sample spaces for compound events using methods such as organized lists, tables and tree diagrams. A student can feel mathematically ready to attend College if he or she can get at least 33 out of the 36 problems correct. We will learn how to find the equivalent form of rational numbers expressing a given rational number in different forms and the equivalent form of the rational numbers having a common denominator. Sample Question Paper 351. The tasks for grades 3 through High School were developed by the Mathematics Assessment Resource Service (MARS) of the Shell Centre for Mathematical Education, University of Nottingham, England. Most of the chapters we will study in Class 11 forms a base of what we will study in Class 12. This might introduce extra solutions. 
Kids from pre-K to 8th grade can practice math skills recommended by the Common Core State Standards in exciting game formats. Which of the following numbers when rounded to the nearest hundred becomes 365 800?. (Deutsch: MathJax: LaTeX Basic Tutorial und Referenz) To see how any formula was written in any question or answer, including this one, right-click on the expression it and choose "Show Math As >. Instead, Norfar used the project to introduce strategies for asking good questions, analyzing errors, understanding systems, and communicating in the language of mathematics. Use your calculator wisely. For example, if m is 1/2, count 2 on the x axis, then 1 on the y axis to get to another point (1, b + 2) The equation for this line is y + 3 = 2x. Any Equation. Translated editions of the Math Sample Tests are now available. For example, that the number 1 is less than 2. Khan Academy's Algebra 1 course is built to deliver a comprehensive, illuminating, engaging, and Common Core aligned experience! Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more. 1- A Basic Java Calculator. The 13 Hardest SAT Math Questions. ) Keep It Going 1. a form and content appropriate to the task; use syntax and paragraphing to shape meaning; use punctuation correctly and expressively; use vocabulary creatively; spell accurately; use handwriting, layout and presentation effectively. Part One: General Marking Principles for National 5 Mathematics This information is provided to help you understand the general principles you must apply when marking candidate responses to questions in this Paper. An identity matrix is special because when multiplying any matrix by the identity matrix, the result is always the original matrix with no. random() returns a number between zero and one. General math text - Sixth grade math books will include material on whole numbers, fractions, decimals, and ratio and proportion. 
CSEC Mathematics Sample SBA. Grade 10 math Here is a list of all of the math skills students learn in grade 10! These skills are organized into categories, and you can move your mouse over any skill name to preview the skill. This booklet, Fundamentals of Mathematics for Nursing. com, a math practice program for schools and individual families. One of the most common questions I’m regularly asked by aspiring data scientists is – what’s the different between data science and machine learning? And more to the point, what’s the difference between the mathematics behind these two?. Please visit this page to get updates on more Math Shortcut Tricks and its uses. For some other questions click on the Next/Previous link shown above. These one, page art worksheets use fill in the blank questions to review order of operations and solving equations. It is attractive because it is simple and easy to handle mathematically. Use a ruler and a pair of compasses only to constract a triangle ABC in which AB= 4. The fuel for a certain two cycle engine is a mixture of 8 parts gasoline and 2 parts oil. Calculate the area of the shaded region below, given that AC is an arc of a circle centre B. Standard Form. Popular Survey Questions with Survey Examples and Sample Survey. discovers that the Taliban Government is in-volved in the terrorist attack, then it will retaliate against Afghanistan. Thank you for doing this. AB=BC=14cm CD=8cm and angle ABD = 75 o (4 mks); A student took the measurements of his classroom and gave the width as 7m and the length as 9m. Make sure you fill in the correct circle. In a funneling question pattern, the focus is often on information gathering with one or two higher-order questions at the end. Year IV Post 1 Student Math Survey Write your math teacher's name in the box below: Your Math Teacher's Name: _____ (Example: Mrs. Note: If a +1 button is dark blue, you have already +1'd it. 
If you choose to use a calculator, be sure it is permitted, is working on test day, and has reliable batteries. If you see a pattern when you look systematically at specific examples, you can use that pattern to generalize what you see into a broader solution to a problem. Midterm and Final Exam Examples. Multiplying the last expression by 2 yields the above quadratic. Select the best version of the underlined part of the sentence. Thank you for your support! (If you are not logged into your Google account (ex. To view the papers click on links. You may not use a calculator during this session. Otherwise, this PDF offers helpful practice and is a great option if you've already used up all official resources. F 3 G 5 H -3 J -5 K None of these 3. Then log 5 25 = 2. Brainstorming key words that indicate mathematical equations: Explain to the class that whether they notice it or not, they are constantly interpreting key words in word problems in order to determine which mathematical operations to use in solving the problems. Sample Math Questions: Multiple-Choice. To this the charge for waiting time must be added, which is simply 9 x 20 cents = 180 cents = $1. 1) Identify the question to be answered. Students must see relationships between knowledge and the problem, diagnose materials, situations, and environments, separate problems into components parts, and relate parts to one another and the whole. Here are some examples:. 1 - How to write numbers in standard form As mentioned in the topic overview, standard form is used to write very large or very small numbers. One thing worth getting used to is the fact that the test involves different pieces of paper: 1. These strategies are especially effective during the Mental Math part of an Everyday Mathematics lesson. AB=BC=14cm CD=8cm and angle ABD = 75 o (4 mks); A student took the measurements of his classroom and gave the width as 7m and the length as 9m. 
The student may choose to convert all measures to inches, all measures to feet, or use a combination of both inches and feet. Almost every student needs math homework help, because solving math problems requires wide analytical knowledge. For example, it could find evidence for supersymmetry, providing indirect support for superstring theory. MATH LEARNING RESOURCES 1. If you're a "C" student in math, then join a group that has 2 or 3 "A" or "B" students so that you can raise your level. You may use a personal experience or you may create an example. If the required solution can be represented by a symbol, write it as such. Add, Subtract, Multiple and Divide and then display the output on the console. The only other example was Walter Smith who having lost 4 straight games would win 4 of his next 5 including substantial victories at home to West Ham (6-0 ) and Charlton (4-1). Start studying Sample Math Questions: Multiple-Choice. Some explain the origin of mathematical ideas, showing that Mathematics is truly international. So 4000 can be written as 4 × 10³. A good essential question. edu's online reading room since 1999. If we can get a short list which contains all solutions, we can then test out each one and throw out the invalid ones. Key Pieces of Standard Form in Mathematics. BrainMass is a community of academic subject Experts that provides online tutoring, homework help and Solution Library services across all subjects, to students of all ages at the University, College and High School levels. You can choose the difficulty level and size of maze. This test is one of the California Standards Tests administered as part of the Standardized Testing and Reporting (STAR). A bit of integral calculus allows one to predict that the maximum value must be 62. 50 + 5 × 11. Back in 2000, two Marin County property owners, Dartmond and Esther Cherk. Example 2: Reading Assignments. "x" is the variable or unknown (we don't know it yet). 
Math calculators and answers: elementary math, algebra, calculus, geometry, number theory, discrete and applied math, logic, functions, plotting and graphics. An identity matrix is special because when multiplying any matrix by the identity matrix, the result is always the original matrix with no. A Statistical Question is one in which has a variety of answers. b Represent sample spaces for compound events using methods such as organized lists, tables and tree diagrams. Free Applied Mathematics Online Practice Tests 15 Tests found for Applied Mathematics Analyse mathematics 17 Questions | 694 Attempts topologie, Mathematics, Applied Mathematics, Algebra Contributed By: Rafik Zeraoulia. com Previous 5 year Question Papers for 1st year and 2nd year of M. The answer is "YES," the smallest example being 36 = 62 = 1+2+3+4+5+6+7+8: So we might ask whether there are more examples and, if so, are there in-. Free Algebra 1 worksheets created with Infinite Algebra 1. Level M - Form 1 - Mathematics Computation: Fractions Mathematics Computation: Fractions. To start practicing, just click on any link. edu's online reading room since 1999. WORD PROBLEMS. Basic Electrical Formulas. 3, but it can also be shown as 0. Click and several Number system problems and get acquainted with a wide range of question variations. This helped to remind them to write neatly throughout the lesson. Toggle navigation Menu Menu Open Notifications Open Search Form. Become a member to download all our math worksheets instantly. com is an online resource used every day by thousands of teachers, students and parents. Find Form 4 Mathematics past papers here. ABC Physics Class XI Part-1 2. For example, it could find evidence for supersymmetry, providing indirect support for superstring theory. Popular Survey Questions with Survey Examples and Sample Survey. This quis can be taken by all students in Malaysia. 
On the test, questions from the areas are mixed together, requiring you to solve different types of problems as you progress. 1 day ago · These are the questions you should ask yourself before you go through with a divorce. ax ± b = c. Learn math with free interactive flashcards. Lists of mathematics topics. As a blended learning school, Cranbrook also tries to deliver a broad, rich learning experience to inspire active learners - and it’s also one of a number of Ted Wragg Trust schools in the to have partnered with a local tech company called Sparx around the development and integration of the latter’s personalised Maths learning technology. net) is part of the Revision World group, giving maths students free GCSE and A Level maths revision resources and maths exam advice. Mathematics Number system 1. This is no coincidence. [/math] Real life examples are: Finding current consumed on day 1,2,3… Every day, an average amount of current will be consumed. Used by over 7 million students, IXL provides unlimited practice in more than 4,500 maths and English language topics. A new puzzle is featured each month - The goal of the AIMS Puzzle Corner is to provide teachers with a variety of interesting puzzles that can be used to create a learning environment where students engage in doing mathematics just for the fun of it!. A problem of legendary difficulty first created by Pierre de Fermat in 1637, it attained its first. Each of the following statements is an implication: (1) If you score 85% or above in this class, then you will get an A. For K-12 kids, teachers and parents. If you took the October 2007 SAT Reasoning Test, you would have been given one of the essay prompts below: Prompt 1. Math—Sessions 1, 2, and 3 GENERAL INSTRUCTIONS The Math test has three sessions, two with multiple-choice questions and one with constructed-response questions. Thus the fare for the distance traveled is computed as$5. Decimals, Fractions & Percentages in its simplest form. 
These files include the scoring rubrics and sample student responses at each score point attainable. true B's C's A T T F B T T T -> B tells the truth C T T I have to use propositional logic to make out who lies and who tells the truth. If you choose to use a calculator, be sure it is permitted, is working on test day, and has reliable batteries. Two arrows represent the same vector if they have the same length and are parallel (see figure 13. 1 2 sin 2 Ar= θ Area of Segment: 1 2 (sin) 2 Ar=−θ θ 09 Differentiation Differentiation of Algebraic Function Differentiation of a Constant Differentiation of a Function I Differentiation of a Function II 11 0 yax dy ax ax a dx − = = == Example 3 3 y x dy dx = = 1 n n yx dy nx dx − = = Example 3 3 2 yx dy x dx = = is a constant 0 ya a. | 2019-11-14 07:34:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3257126808166504, "perplexity": 1263.4607740100744}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668004.65/warc/CC-MAIN-20191114053752-20191114081752-00088.warc.gz"} |
https://math.stackexchange.com/questions/2908263/how-can-we-prove-that-xn-1-always-be-a-multiple-of-x-1 | # How can we prove that $x^n - 1$ is always a multiple of $x - 1$? [duplicate]
I find it remarkable that the finite series
$$y = x^0 + x^1 + x^2 + \dots + x^{n-1}$$
can be rewritten in the closed form $$\frac{x^n - 1}{x - 1}$$
But I don't understand why $x^n - 1$ can always be divided exactly by $x-1$, with an integer result. What is the relation between $x^n - 1$ and $x - 1$? It seems like a mystery.
Is there a proof, ideally a visual one, that would make this relation easy to understand?
For a visual proof, try thinking in the other direction.
$(x-1)\cdot (x^0+x^1+x^2+\dots+x^{n-1})$
$= x\cdot (x^0+x^1+x^2+\dots+x^{n-1})-1\cdot (x^0+x^1+x^2+\dots+x^{n-1})$
$= (\color{blue}{x^1}+\color{green}{x^2}+\color{red}{x^3}+\dots+\color{orange}{x^{n-1}}+x^n) - (x^0+\color{blue}{x^1}+\color{green}{x^2}+\dots+\color{orange}{x^{n-1}})$
$=-x^0 + (x^1-x^1)+(x^2-x^2)+(x^3-x^3)+\dots+(x^{n-1}-x^{n-1})+x^n$
$=x^n-1$
Perhaps the most algebraically direct way to see it is as follows:
$$\begin{eqnarray} y & = & x^0 + \color{blue}{x^1 + x^2 + ... + x^{n-1}} \\ x\cdot y & = & \color{blue}{x^1 + x^2 + ... + x^{n-1}} + x^n\end{eqnarray}$$ By subtracting the equations you get $$\Rightarrow xy-y = (x-1)y = x^n-1$$
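Both answers above boil down to the same telescoping identity. As a quick numerical sanity check, here is a small Python sketch (my addition, not part of either answer) that multiplies out $(x-1)(x^0+x^1+\dots+x^{n-1})$ for a range of integers and confirms the product equals $x^n-1$:

```python
def geometric_sum(x, n):
    """Return x^0 + x^1 + ... + x^(n-1)."""
    return sum(x**k for k in range(n))

# Verify (x - 1) * (x^0 + ... + x^(n-1)) == x^n - 1 for many integer cases.
for x in range(2, 12):
    for n in range(1, 10):
        assert (x - 1) * geometric_sum(x, n) == x**n - 1
        # Equivalently, x^n - 1 leaves no remainder when divided by x - 1.
        assert (x**n - 1) % (x - 1) == 0

print("identity verified for all tested (x, n)")
```

Of course this only checks finitely many cases; the algebraic cancellation shown above is what proves the identity for all $x$ and $n$.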
HINT
Consider
$$(x-1)(x^{n-1}+x^{n-2}+\ldots+x+1)$$ | 2020-07-09 11:26:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8961049318313599, "perplexity": 177.38046629769394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655899931.31/warc/CC-MAIN-20200709100539-20200709130539-00440.warc.gz"} |
http://agroreal911.sk/3a1raubi/4ed392-knitr-vs-rmarkdown | Be sure to put {r} after the 3 … If all fails. Close Visual Studio. Embed code with knitr syntaxDebug Mode learn more at rmarkdown.rstudio.com Rmd Reproducible Research At the click of a button, or the type of a command, you can rerun the code in an R Markdown file to reproduce your work and export the results as a finished report. which includes pandoc, and then include pandoc without your PATH. JSON but more human-readable. Also in 2012, R Markdown was created as a variant of Markdown that can embed R code chunks and that can be used with knitr to create reproducible web-based reports. The working directory in which to knit the document; uses knitr's root.dir knit option. Chunk output can be customized with knitr options, arguments set in the {} of a chunk header. cute little button, with a ball of yarn and a knitting needle.) The document is self contained and fully reproducible which makes it very easy to share. that our results are accompanied by the data and code needed to and you have access to that Markdown Quick Reference. source It will probably resonate most (if at all) with those who have some experience (mostly positive) generating reports from Rmarkdown files with knitr, but might have some gripes. code nor its output displayed. Use rmarkdown::render() to render/knit at cmd line. If NULL then the behavior will follow the knitr default, which is to use the parent directory of the document. You can download it here -> http://www.rstudio.com/products/rstudio/download/. After upgrading to R 3.5.0 and RStudio 1.1.453, chunk output with knitr::kable() is no longer rendered but kept as raw markdown. 
globally for a report to a scientific collaborator who wouldn’t want perhaps about Knitr with AsciiDoc or An important point: you need to be sure that these in-line bits of The other is the Official Swiss Army Knife™️ of document converters pandoc, which undergirds knitr and rmarkdown, with an optional layover in tex, where you can labor to get it "just right." The Markdown syntax has some You may be inclined to use largely the same set of chunk options to see all of the code. And I might want something like fig.width=12 It is based on pandoc. viewable by anyone). R includes a powerful and flexible system (Sweave) for creating dynamic reports and reproducible research using LaTeX. should they occur. Hi guys! your Path system environment variable. We’ll highlight a few common ones. Also note that, as in **knitr**, the root.dir chunk option applies only to chunks; relative paths in Markdown are still relative to the notebook's parent folder. that, using the purl command in knitr) and then First, we create.Rmd RMarkdown file. though it’s a bit ad-heavy. use fig.show="hide". Click … As we’ll discuss You can transform an R Markdown file in two ways. I’d set such options by having an initial code chunk like this: I snuck a few additional options in there: warning=FALSE and If I'm asking a question, I have already asked it on Stack Overflow or RStudio Community, waited for at least 24 hours, and included a … If I used RStudio, I’d (function () { Then if you want a particular chunk to have a different Note that the code chunk will still be evaluated and any outputs mirrored in the final document. r sprintf("%.2f", cor(x,y)). It’s a transparent engine for dynamic report generation in R. Knitr allows any input languages and any output markup languages. a bit longer. would still be displayed). Echo. this page, This will then be converted to html, It’s a transparent engine for dynamic report generation in R. Knitr allows any input languages and any output markup languages. 
As Click Restart Visual Studio, which should pick up the pandoc installation. fig.height. (The resulting .html file will be Markdown website almost install.packages("rmarkdown") R Markdown files are the source code for rich, reproducible documents. Use multiple languages including R, Python, and SQL. document.getElementsByTagName("head")[0].appendChild(script); Converting knitr/LaTeX to PDF RStudio. After upgrading to R 3.5.0 and RStudio 1.1.453, chunk output with knitr::kable() is no longer rendered but kept as raw markdown. The name is optional; if included, each code In R Studio, you simply create a new rmarkdown file, write your text/code, and select “Knit” (which runs knitr) to turn your raw text into a … discussed in R Markdown: The Definitive evaluated, gives the number of individuals. convenient “Markdown Quick Reference” document: a bit of R code that is initiated by a line like this: After the code, there’ll be a line with just three backticks. (By default, they are not the document, first with knitr and then with Technical aside: In setting the global chunk options with Thus, your report should never explicitly include numbers that are LaTeX ca… knitr provides self-contained HTML code that calls a Mathjax script to display formulas. Note that online sources are allowed. pandoc. At this point, I’d recommend going off and playing with R Markdown for the document to html. convert an R Markdown document to html is to open the document within In essence, you write a mixture of plain english with some different “code wrappers” to tell Rmarkdown how you want something to be interpreted. Write it using RStudio, where have easier access to this information. RStudio is especially useful when you’re first learning knitr and R Dont forget to load knitr previously. ... Rmarkdown - Introduction and Basics - Duration: 19:40. Let me start with WOO-HOO! files get placed in the Figs subdirectory. 
Specifically a data-science workflow, although it should be relevant for others. will be suppressed. RStudio. The bit of R code between them is evaluated and the result inserted. 9.2 Knitr: Rmarkdown Rmarkdown is slightly more complicated to produce but the code is simpler. That’s the default, but you could also use R Markdown is a figures. 1.5 R Markdown vs. Markdown. use R Markdown, a To install the rmarkdown package, use install.packages(rmarkdown). cderv February 18, 2019, 6:49am #3. For example, I might use include=FALSE or at least echo=FALSE To create a Slidy presentation from R Markdown, you specify the slidy_presentation output format in the YAML metadata of your document. If For figures, you’ll want to use options like fig.width and My solution to this problem is the This workflow saves time and … We can use the knitr function include_graphics which is convenient, as it takes care for the different output formats and provides some more features (see here the help file).. We’ll use R-Studio to create our R-Markdown document. You can click When you create a new post, you have to decide whether you want to use R Markdown or plain Markdown, as you can see from Figure 1.2.. Table 1.2 summarizes the main differences between the three options, followed by detailed explanations below. bunch of other packages), as well as CSS will only apply to HTML output. - **Warnings**: Inside a notebook chunk, warnings are always displayed immediately rather than being held until the end, as in options(warn = 1). This entire blogpost was generated by using a combination of R, knitr and markdown. Markdown to a Markdown document. Equations in R Markdown). If I’m writing a report for a collaborator, I’ll often use Thus, you want to set some chunk options defined via opts_chunk$set. Guide, it is best Jalayer Academy 57,861 views. 
We can use the knitr function include_graphics which is convenient, as it takes care for the different output formats and provides some more features (see here the help file).. Then, read a bit about figures and tables, my At the start of my R Markdown document, I’d include: And then later I could write r myround(cor(x,y), 2) Here’s an example R Markdown document, The simplest way to install pandoc is to just install the knitr will run each chunk of R code in the document and append the results of the code to the document next to the code chunk. Consider how authors typically include graphs (or tables, or numbers) in a report. 4.2 Slidy presentation. It’s as if you’d pulled out all of the R code as a single file behavior, for example, to have a different figure height, you’d will produce -0.00. your document will be converted to a PDF or Word .docx file, Also, any figures that are created will be given a correlation coefficient with 1000 data points, I don’t want to see You could use the R function round, like this: r round(cor(x,y), 2) The first official book authored by the core R Markdown developers that provides a comprehensive and accurate reference to the R Markdown ecosystem. as a global option, and then use include=TRUE for the chunks that Dont forget to load knitr previously. chunks will be evaluated, and then the code and/or output will be the knitr package. Display the current knitr engine. ), Rather than actually type that line, I include it within a This comment has been minimized. Use a productive notebook interface to weave together narrative text and code to produce elegantly formatted output. (and you can do Othewise you’ll just ; I have provided the necessary information about my issue. If you are not familiar with R Markdown, please see Appendix A for a quick tutorial. 
the knitting process is easy and you have easy access to that The first official book authored by the core R Markdown developers that provides a comprehensive and accurate reference to the R Markdown ecosystem. rmarkdown::render() will use knitr::knit() but won’t load When you process the R Markdown document with knitr, each of the code Note: the ending slash in Figs/ is important. in my R/broman package. rmarkdown package to process By filing an issue to this repo, I promise that. set with something like: I was confused about this at first: I’d use opts_knit$set when I Now that our R-Markdown document is complete with text, code and graphs, we can go ahead and click the little ‘Knit HTML’ button to generate a HTML file. you’re getting fancy you may need these package options, but initially no other special characters. If you use RStudio, the simplest way to However, in order to include the script in my blog posts I [took the script] ... You should ask Rmarkdown to post this on their page! Guide, lots of different possible “chunk options”. Next we’ll install a few packages just to ensure we have everything we need to get started. To embed R code into the document, code needs to be inserted as shown below. you can include LaTeX equations (see Close Visual Studio. names based on the name of the code chunk that produced them. Please refer to the following resources for more details– R-Markdown cheatsheet– Plotly R Library– Knitr. The problem is that Pandoc’s great power comes with a lot of command line options (more than 70), and knitr has the same problem of too many options. Sign in to view. lots of different possible “chunk options”. Note that online sources are allowed. document, you’ll see a little question mark button, with links to turn will make use of the The following Rmarkdown chunk shows the commands to see what are your current knitr engine settings. placed in the same directory as your .Rmd file.) 
Take a look at the one linked to by @mattwarkentin. I'm away from LaTeX for a while, but the idea would be that a custom template exposes your commands for use in the same way that the default pandoc template operates. Posted on December 28, 2015 by Riddhiman in R bloggers. If you're using R to statistically explore data sets, and you need to write reports detailing your findings, you can benefit from using R Markdown. This post is really about workflow; specifically a data-science workflow, although it should be relevant for others. It doesn't teach you the syntax of R Markdown. R Markdown is a flavor of Markdown which allows R users to embed R code into a Markdown document, and the Markdown syntax has some enhancements: for example, you can include LaTeX equations. R Markdown supports a reproducible workflow for dozens of static and dynamic output formats including HTML, PDF, and MS Word. Use a productive notebook interface to weave together narrative text and code to produce elegantly formatted output, and turn your analyses into high-quality documents, reports, presentations, and dashboards.

(One time only) Install pandoc from pandoc.org, then install the knitr and rmarkdown packages, which you can do from the interactive window: install.packages("knitr"); install.packages("rmarkdown"). Restart Visual Studio, which should pick up the pandoc installation; in VSCode, Restart or Reload Window instead. (The editor script only works with the environment variable TERM_PROGRAM=vscode, and if the workspace folder you open in VSCode already has a .Rprofile, you need to append the setup code to this file too, because ~/.Rprofile will not be executed when a local .Rprofile is found.) On a Mac, you'd put pandoc on your PATH; in Windows, you'd include "c:\Program Files\RStudio\bin\pandoc", and it's important to use double-quotes on the command line. To use Sweave and knitr to create PDF reports, you will need to have LaTeX installed on your system. And really, you probably want to create the document in RStudio, where the knitting process is easy: click File -> New File -> R Markdown, select Document in the left panel, fill in the title and author fields, and hit OK. You should now see a "Knit HTML" button (a particularly cute little button, with a ball of yarn and a knitting needle). Click that, and another window will open, and you'll see knitr in action, followed by a preview of the HTML document it produces; "Open in browser" opens the document in your web browser.

An R Markdown document starts with a YAML header, which holds the title, author, and date, plus output: html_document to say what should be produced. You can leave off the author and date if you want; you can leave off the title, too, and you can include hyperlinks in there. The key things to focus on are the code chunks: in the midst of an otherwise plain Markdown document, you'll have fenced blocks of R code, and you need to be sure to put {r} after the three backticks. The R code between the fences is evaluated and the result inserted. An R Markdown document will often have many code chunks, and it's usually best to give each code chunk a distinct name, like simulate-data; the advantage of giving each chunk a name is that it will be easier to understand where to look for errors. For the chunk labels, use just letters, numbers, and dashes. Variables created in one code chunk are preserved in future chunks, and the results can be used by other chunks.

knitr includes a lot of options; there are lots of different possible "chunk options". For example, echo=FALSE indicates that the code will not be shown in the final document, though any results/output would still be displayed. With results="hide", the results are hidden, but the chunk is still evaluated and figures will still be shown; fig.show="hide" hides the figures. Use include=FALSE to have the chunk evaluated but with neither the code nor its output displayed; R Markdown still runs the code in the chunk, and the results can be used by other chunks. You'll also want to use options like fig.width and fig.height; I use something like fig.width=12 and fig.height=6 if I generally want those sizes for my figures. If you set fig.path='Figs/', the figures go in a Figs subdirectory, with Figs as the initial part of each figure file name. It would be a pain to retype those options in every chunk, so set global chunk options at the top of your document; the older way of setting these package options was deprecated in favor of opts_chunk$set(). The global chunk options become the defaults for the rest of the document, and to override them, specify a different option within a chunk. Mostly you'll just be using the chunk options and, particularly, the global chunk options. (Inside a notebook chunk, warnings are always displayed immediately rather than being held until the end, as with options(warn = 1).)

Your report should never explicitly include numbers that are derived from the data. Rather, insert a bit of code that, when evaluated, gives the number; that's the point of the in-line code. Be sure that these in-line bits of code aren't split across lines in your document. I'm very particular about the rounding of results, and you should be too. round(0.9032738, 2) would produce 0.9 instead of 0.90; one solution is to use the sprintf function, like sprintf("%.2f", x). Well, it is a solution if you're a C programmer, but sprintf("%.2f", -0.001) will produce -0.00, which is the motivation for a helper function like myround. Figures (or tables, or numbers) that appear in a report should be accompanied by the data and code needed to produce them, even if it takes you a bit longer; to place a pre-made image, use the knitr function include_graphics.

knitr is an engine for dynamic report generation in R; it allows any input languages and any output markup languages, and a key motivation for knitr is reproducible research. Sweave was the earlier system for creating dynamic reports and reproducible research using LaTeX, and you can use RStudio to convert a .Rnw file to PDF and preview the result, in the same way you work with R Markdown. With R Markdown, knitr will automatically convert your report to a LaTeX file which it will use to create a PDF file for you, and knitr provides self-contained HTML code that calls a MathJax script to display formulas. In 2012, R Markdown was created as a variant of Markdown that can embed R code chunks and that can be used with knitr to create reproducible web-based reports. The second generation of R Markdown is represented by the rmarkdown package, created to provide reasonably good defaults and an R-friendly interface to customize Pandoc options; the rmarkdown package will call the knitr package, and over time rmarkdown::render() got some new features that are very similar to features of ezknitr. These topics are discussed at length in R Markdown: The Definitive Guide.

To create a Slidy presentation from R Markdown, you specify the slidy_presentation output format in the YAML metadata of your document; you can create a slide show broken up into sections by using the ## heading tag (you can also create a new slide without a header using a horizontal rule, ---). This post will be the first in a multi-part series on how to embed Plotly graphs in R Markdown documents as well as presentations: inserting an interactive Plotly chart is as simple as printing out the plotly object in a code chunk. Please refer to the following resources for more details: the R-Markdown cheatsheet, the Plotly R Library, and knitr. You should now have a nicely formatted HTML file; hopefully you can see how useful R Markdown can be. This entire blog post was generated using it. | 2021-03-06 08:06:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38910651206970215, "perplexity": 3656.7730987996088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374616.70/warc/CC-MAIN-20210306070129-20210306100129-00633.warc.gz"}
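The rounding discussion above notes that printf-style formatting keeps trailing zeros (0.90 rather than 0.9) but can emit "-0.00" for small negative values. A minimal sketch of the pitfall and one possible guard, shown in Python rather than R; the helper name here is my own illustration, not the R function of the same name:

```python
def myround(x, digits=2):
    """Format x to `digits` places for reporting, avoiding '-0.00'.

    Hypothetical helper illustrating the guard described above.
    """
    s = "%.*f" % (digits, x)
    # printf-style formatting turns tiny negatives into "-0.00";
    # drop the sign when the rounded magnitude is zero.
    if float(s) == 0.0:
        s = s.lstrip("-")
    return s

naive = "%.2f" % -0.001      # the pitfall: yields "-0.00"
fixed = myround(-0.001)      # yields "0.00"
kept = myround(0.9032738)    # yields "0.90", keeping the trailing zero
```

The same fix could be written in R by wrapping sprintf in a small function that tests whether the formatted value is zero.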
https://matheducators.stackexchange.com/tags/notation/hot?filter=year | # Tag Info
Common knowledge The formula $a^2+b^2 = c^2$ is common knowledge and the words for hypotenuse and leg (is "cathetus" not used in English?) are basic mathematical vocabulary. Including these seems a good idea. Connections to other mathematics The notation with AB, CA and BC might be something the students have used or will use in less analytical ...
It is important to become comfortable with disambiguation. For example, in listening we must use context to distinguish between the homophones “reed” and “read” as in “I will read a book tonight.” In reading, we must use context to know when to pronounce “read” with a long e sound as in “reed” or with a short e sound as in “red.” The brain is an amazing ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2021-09-21 23:33:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577213525772095, "perplexity": 512.9498244101394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057274.97/warc/CC-MAIN-20210921221605-20210922011605-00082.warc.gz"} |
http://www.zentralblatt-math.org/ioport/en/?q=ut:line%20search | Result 1 to 20 of 633 total
A note on the implementation of an interior-point algorithm for nonlinear optimization with inexact step computations. (English)
Math. Program. 136, No. 1 (B), 209-227 (2012).
1
An inexact secant algorithm for large scale nonlinear systems of equalities and inequalities. (English)
Appl. Math. Modelling 36, No. 8, 3612-3620 (2012).
2
The onion diagram: a Voronoi-like tessellation of a planar line space and its applications. (English)
Int. J. Comput. Geom. Appl. 22, No. 1, 3-26 (2012).
3
An accelerated conjugate gradient algorithm with guaranteed descent and conjugacy conditions for unconstrained optimization. (English)
Optim. Methods Softw. 27, No. 4-5, 583-604 (2012).
4
A nonmonotone spectral projected gradient method for large-scale topology optimization problems. (English)
Numer. Algebra Control Optim. 2, No. 2, 395-412 (2012).
5
A note on “A new iteration method for the matrix equation $AX = B$”. (English)
Appl. Math. Comput. 218, No. 21, 10639-10641 (2012).
6
A sequential quadratic programming algorithm for nonconvex, nonsmooth constrained optimization. (English)
SIAM J. Optim. 22, No. 2, 474-500 (2012).
7
On the convergence of an active-set method for $\ell_1$ minimization. (English)
Optim. Methods Softw. 27, No. 6, 1127-1146 (2012).
8
Globally convergent modified Perry’s conjugate gradient method. (English)
Appl. Math. Comput. 218, No. 18, 9197-9207 (2012).
9
Two modifications of the method of the multiplicative parameters in descent gradient methods. (English)
Appl. Math. Comput. 218, No. 17, 8672-8683 (2012).
10
The global convergence of a descent PRP conjugate gradient method. (English)
Comput. Appl. Math. 31, No. 1, 59-83 (2012).
11
An approach for real-time recognition of online Chinese handwritten sentences. (English)
Pattern Recognition 45, No. 10, 3661-3675 (2012).
12
Another improved Wei-Yao-Liu nonlinear conjugate gradient method with sufficient descent property. (English)
Appl. Math. Comput. 218, No. 14, 7421-7430 (2012).
13
Some remarks on Newton’s algorithm. II. (English)
Int. J. Pure Appl. Math. 74, No. 3, 373-391 (2012).
14
A line search exact penalty method using steering rules. (English)
Math. Program. 133, No. 1-2(A), 39-73 (2012).
15
A new general form of conjugate gradient methods with guaranteed descent and strong global convergence properties. (English)
Numer. Algorithms 60, No. 1, 135-152 (2012).
16
Generalizing a theorem of Wilber on rotations in binary search trees to encompass unordered binary trees. (English)
Algorithmica 62, No. 3-4, 863-878 (2012).
17
A line search filter algorithm with inexact step computations for equality constrained optimization. (English)
Appl. Numer. Math. 62, No. 3, 212-223 (2012).
18
Multicriteria optimization with a multiobjective golden section line search. (English)
Math. Program. 131, No. 1-2(A), 131-161 (2012).
19
Global convergence of a modified Hestenes-Stiefel nonlinear conjugate gradient method with Armijo line search. (English)
Numer. Algorithms 59, No. 1, 79-93 (2012).
20
Result 1 to 20 of 633 total | 2013-05-21 03:11:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24366764724254608, "perplexity": 2373.997605859143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699675907/warc/CC-MAIN-20130516102115-00026-ip-10-60-113-184.ec2.internal.warc.gz"} |
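Several entries in the listing above concern gradient methods with an Armijo line search. A minimal backtracking sketch under the standard sufficient-decrease condition f(x + t*d) <= f(x) + c*t*grad(x)'d; this is my own illustration, and the parameter values c and beta are conventional defaults, not taken from any listed paper:

```python
def armijo_backtracking(f, grad, x, d, c=1e-4, beta=0.5, t0=1.0, max_iter=50):
    """Shrink the step size t until the Armijo sufficient-decrease condition holds."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))  # directional derivative grad(x)'d
    t = t0
    for _ in range(max_iter):
        x_new = [xi + t * di for xi, di in zip(x, d)]
        if f(x_new) <= fx + c * t * slope:
            return t
        t *= beta
    return t

# One steepest-descent step on f(x, y) = x^2 + y^2 from (2, 2).
f = lambda p: p[0] ** 2 + p[1] ** 2
grad = lambda p: [2 * p[0], 2 * p[1]]
x0 = [2.0, 2.0]
d = [-g for g in grad(x0)]  # steepest-descent direction
t = armijo_backtracking(f, grad, x0, d)
x1 = [xi + t * di for xi, di in zip(x0, d)]
```

With the full step t = 1 the iterate overshoots to (-2, -2) and the condition fails, so the step is halved once and the accepted step lands exactly at the minimizer.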
https://drserendipity.com/notes/notes_by_subjects/artificial_intelligence/computer-vision/4-object-tracking-and-localization/4-6-matrices-and-transformation-of-state/2-quiz-kalman-filter-prediction-01-render-v2/ | # 2 – QUIZ Kalman Filter Prediction 01 RENDER V2
In the Kalman filter, then, we're going to build a two-dimensional estimate: one dimension for the location and one for the velocity, denoted x-dot. The velocity can be 0, it can be negative, or it can be positive. If initially I know my location but not my velocity, then I represent my belief as a Gaussian that is elongated around the correct location but really, really broad in the space of velocities. Now let's look at the prediction step. In the prediction step, I don't know my velocity, so I can't possibly predict what location I will end up at. But miraculously, location and velocity have some interesting correlation. Just pick a point on this distribution over here, and let me assume my velocity is 0. Of course in practice I don't know the velocity, but let me assume for a moment the velocity is 0. Where would my posterior be after the prediction? Well, we know we started at location 1 with a velocity of 0, so my location would likely still be here. Now let's change my belief about the velocity and pick a different one. Let's say the velocity is 1. Where would my prediction be one timestep later, starting at location 1 and velocity 1? I give you three choices: here, here, or here? Please pick the one that makes the most sense.
| 2023-02-03 03:33:05 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9011269807815552, "perplexity": 572.1076403712236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00063.warc.gz"} |
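The prediction step described in the transcript follows the constant-velocity model x' = x + v*dt, v' = v. A minimal sketch of that logic (my own illustration, not the course's actual code): with velocity 0 you stay at location 1, and with velocity 1 you land at location 2 one timestep later.

```python
def predict(state, dt=1.0):
    """Kalman prediction step for a [position, velocity] state,
    using the constant-velocity model x' = x + v*dt, v' = v."""
    x, v = state
    return [x + v * dt, v]

after_v0 = predict([1.0, 0.0])  # velocity 0: position stays at 1
after_v1 = predict([1.0, 1.0])  # velocity 1: position moves to 2
```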
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-2-section-2-4-signed-fractions-exercises-page-118/26 | ## Elementary Technical Mathematics
$-2\dfrac{3}{10}$
Write the mixed number in improper fraction form to obtain: $=-\dfrac{4}{5} + \left(-\dfrac{2(1)+1}{2}\right) \\=-\dfrac{4}{5} + \left(-\dfrac{3}{2}\right)$ Make the fractions similar using their LCD, which is $10$, to obtain: $=-\dfrac{4(2)}{5(2)}+\left(-\dfrac{3(5)}{2(5)}\right) \\=-\dfrac{8}{10} + \left(-\dfrac{15}{10}\right) \\=\dfrac{-8+(-15)}{10} \\=\dfrac{-23}{10} \\=-2\dfrac{3}{10}$ | 2020-06-04 22:06:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6949238777160645, "perplexity": 2198.2548110443263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347458095.68/warc/CC-MAIN-20200604192256-20200604222256-00036.warc.gz"}
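The sum worked out above, $-\dfrac{4}{5} + \left(-\dfrac{3}{2}\right) = -\dfrac{23}{10} = -2\dfrac{3}{10}$, can be checked with exact rational arithmetic:

```python
from fractions import Fraction

# LCD is 10: -8/10 + (-15/10) = -23/10
total = Fraction(-4, 5) + Fraction(-3, 2)

# Mixed-number parts of -23/10: whole part 2, remainder 3 (with a minus sign overall)
whole, rem = divmod(abs(total.numerator), total.denominator)
```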
http://yglf.manukshop.de/propensity-model-vs-response-model.html | Managing transitions in care, especially among elderly patients, enhances patient experiences, improves health and quality-of-life outcomes, and. From 2005 to 2013, 1179 patients with T3 HCC who underwent HR or TACE were divided into two groups, HR group (n = 280) or TACE group (n = 899). We determined a propensity score for receipt of a total hip arthroplasty from a surgeon with ≤35 procedures using a logistic regression model. Coping with Unobservable Heterogeneity Time-Invariant Individual Fixed Effects Endogenous Switching Framework 3. Future Chrysler, Dodge, and Jeep Powertrain / Engines. WHAT IS A CUSTOMER PROPENSITY MODEL? A Customer Propensity Model is an equation that predicts the odds a customer will behave in a specific way. oFit a model to expected loss cost to produce loss cost. DOT's goal is to place an ERG in every public emergency service vehicle nationwide. They use it to measure the response that their articles are receiving, as a form of market research. Form an LLC, incorporate a business, make a will, register a trademark, get legal advice, and more online. In the statistical analysis of observational data, propensity score matching (PSM) is a statistical matching technique that attempts to estimate the effect of a treatment, policy, or other intervention by accounting for the covariates that predict receiving the treatment. Describe the complex dynamics of your plant using a variety of supported modeling approaches, and use the most appropriate approach for each component in your plant to create the system-level plant model. Subjects characterized as cortisol high responders (HRs) consume more calories after stress, but it is unknown whether cortisol responsiveness predicts a propensity for obesity. This leads to the development of all other characteristics and properties of these living organisms. Call in for the RADIO SHOW at 877-207-2276. 
can be used both to model and forecast the response series and to analyze the impact of the intervention. Propensity scores were calculated using the significant vari-. propensity synonyms, propensity pronunciation, propensity translation, English dictionary definition of propensity. ABC offers parents, psychologists, and educators a systematic way in which to look at the antecedent or precipitating event or occurrence. Intercepts. i + εi (1) where Wi is the wage, X. I have also been reading and viewing other related materials. How To Get Published. Not a member yet? Register if you are a: Model, Photographer, Stylist, Makeup or Hair Stylist, Casting Director, Agent, Magazine, PR or Ad agency, Production Company, Brand or just a Fan!. Locking and unlocking Model S is convenient. Start studying STRENGTHS AND WEAKNESSES of TRANSACTIONAL MODEL. with Love and Logic. So, how do we actually go about estimating the propensity score? Here, what we'll do is we'll treat the treatment itself A as if it was the outcome. exercise analogies for emotional work. Residuals are essentially the difference between the actual observed response values (distance to stop dist in our case) and the response values that the model predicted. Differences Between Predictive Modeling vs Predictive Analytics. The iPod Special Edition U2 is a standard iPod model with some differences, including: Black plastic exterior, red Click Wheel, signatures of the U2 band members engraved on the back, and "iPod Special Edition U2" engraved on the back. Cournot Model Assumptions: All firms produce an homogenous product The market price is therefore the result of the total supply (same price for all firms) Industrial Economics-Matilde Machado 3. Roy Spencer says, Additional evidence for lower climate sensitivity in the above plot is the observed response to the 1991 Pinatubo eruption: the temporary temperature dip in 1992-93, and subsequent recovery, is weaker in the observations than in the models. 
The benefit of the RTI model is that teachers do not wait until a child fails to give extra help, like they often do under the discrepancy model. For 150 years economic theory was built on the foundation laid with the publication of Scottish economist Adam Smith's book, An Inquiry into the Nature and Causes of the Wealth of Nations, in 1776. (2) The Solow-Swan Growth Model. The authors conducted a meta-analysis of 150 studies in which the risk-taking tendencies of male and female participants were compared. In my first loyalty segmentation article, I wrote about how to segment frequent flyer loyalty members based on demographics, account profile data and status levels. But by doing so, it raises a question of whether its analytics are relevant in an era. Quasi-experimental designs identify a comparison group that is as similar as possible to the treatment group in terms of baseline (pre-intervention) characteristics. For each of the decile a pricing test is conducted with three discrete price points and a control group. , 2012) - as well as on choosing predictors that are also associated with the survey outcomes of interest (Little and. Adjust for the propensity score in a logistic regression model. A model policy. Translation - Create and implement an action plan, evaluate outcomes, disseminate findings. Linear Probability Model Logit (probit looks similar) This is the main feature of a logit/probit that distinguishes it from the LPM – predicted probability of =1 is never below 0 or above 1, and the shape is always like the one on the right rather than a straight line. Welcome to the National Institute for Direct Instruction Print Email The National Institute for Direct Instruction (NIFDI) is the world's foremost Direct Instruction (DI) support provider. 
Choline acetyltransferase gene-overexpressing transgenic mice (ChAT tgm), a model of the non-neuronal cardiac cholinergic system (NNCCS), protects the heart from ischaemic insults, shows specific central phenotypes compatible with vagus nerve stimulation (VS), and consequently re-educate the central nervous system (CNS) to reset stress responses. M = 1 / MPS is commonly used to calculate the expenditure multiplier. Public Health Service in order to understand the failure of people to adopt disease prevention strategies or screening tests for the early detection of disease. , probability) to be treated as a function of the observed variables. Propensity model Predicts a customer's purchase behavior for a future time period Based on everything we know about the customer as of the cutoff date It is predictive in nature In the training. The parameters are estimated using maximum likelihood (OLS, WLS, and GLS are versions of maximum. For many years, the standard tool for propensity score matching in Stata has been the psmatch2 command, written by Edwin Leuven and Barbara Sianesi. Introducing Charge Hydration Asymmetry into the Generalized Born Model Abhishek Mukhopadhyay,† Boris H. Model E+ (typically dichotomous) as a function of covariates using entire cohort: −E+ is outcome for propensity score estimation. The cognitive model describes how people’s perceptions of, or spontaneous thoughts about, situations influence their emotional, behavioral (and often physiological) reactions. The propensity scores, which are the probabilities of receiving VATS given potential confounders of treatment assignment, were estimated by a multiple logistic regression. Preparatory-response theory - Compensatory response model - Rescorla-Wagner model • Practical applications of Pavlovian conditioning - Understanding the nature of phobias - Treating phobias - Aversion therapy. January 2015 2. And that’s not quite the right model for these kinds of practices. 
The use of propensity score methods (Rosenbaum & Rubin, 1983) for estimating causal effects in observational studies or certain kinds of quasi-experiments has been increasing in the social sciences (Thoemmes & Kim, 2011) and in medical research (Austin, 2008) in the last decade. Propensity scores are a tool for helping to control for bias due to heterogeneity and imbalance in comparative clinical studies. Whether one adjusts directly for covariates or conditions on the propensity score, in spirit one is doing the same thing, but some people feel that the latter method better highlights the causal task at hand. Variables that can be included in a propensity score model include age, gender, geographic location, and variables that reflect health status at the time of group assignment. Propensity: a natural inclination or tendency (e.g., a propensity to drink too much).
Propensity score matching is a popular way to make causal inferences about a binary treatment in observational data. Can we use propensity score techniques here? Yes — remember that the first property of the propensity score is balancing: W ⊥ X | e(X), which has nothing to do with potential outcomes. So how do we actually go about estimating the propensity score? We treat the treatment indicator A itself as if it were the outcome. In one application, the regression model was created using sixteen predictor variables chosen for their association with SSI. For marketing applications, you need to analyze customers based on their recent behavior and long-term habits.
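Treating the treatment indicator A as the outcome of a logistic regression on the covariates is the standard way to estimate e(X). Below is a minimal pure-Python sketch on hypothetical toy data (a real analysis would use a statistics package such as statsmodels or Stata's logit):

```python
import math

def fit_propensity(X, a, lr=0.1, steps=2000):
    """Fit a logistic regression of treatment A on covariates X by plain
    gradient descent, and return the estimated propensity e(x) per unit."""
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(steps):
        gw, gb = [0.0] * p, 0.0
        for xi, ai in zip(X, a):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            e = 1.0 / (1.0 + math.exp(-z))   # predicted P(A = 1 | x)
            for j in range(p):
                gw[j] += (e - ai) * xi[j]
            gb += e - ai
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return [1.0 / (1.0 + math.exp(-(b + sum(wj * xj for wj, xj in zip(w, xi)))))
            for xi in X]

# Hypothetical toy data: one covariate; treatment is more likely when x is high.
X = [[0.0], [0.1], [0.2], [0.8], [0.9], [1.0]]
A = [0, 0, 0, 1, 1, 1]
scores = fit_propensity(X, A)
```

The fitted scores are valid probabilities and increase with the covariate that drives treatment assignment, which is exactly what a propensity model should capture.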
After matching, outcome models (e.g., Poisson, logit, linear, probit) can be fit to estimate different effect measures, while taking into account the fact that matching took place. The goal of the propensity score model is covariate balance; the most popular method for estimating the propensity score is logistic regression, though others exist. It always holds that ln L̂(M_Full) ≥ ln L̂(M_Intercept): the full model's log-likelihood is at least that of the intercept-only model. Scored users can be grouped into fixed-width buckets — one bucket covers users with a 0–0.1 propensity to take the drink, a second bucket covers users with a 0.1–0.2 propensity, and so on. The threshold parameterization is historically most common, as it represents the score where there is a 50% probability of choosing that response. As an example of a propensity-matched study: we identified 10,868 patients, of whom 8,553 had spinal anesthesia and 2,315 had general anesthesia.
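The fixed-width bucketing of scored users can be written as a one-liner. A small sketch (the 0.1 bucket width follows the example above; any width works):

```python
def propensity_bucket(score, width=0.1):
    """Assign a score in [0, 1] to a fixed-width bucket:
    bucket 0 covers [0, 0.1), bucket 1 covers [0.1, 0.2), ...
    A score of exactly 1.0 falls into the top bucket."""
    return min(int(score / width), int(1 / width) - 1)

scores = [0.05, 0.15, 0.95, 1.0]
print([propensity_bucket(s) for s in scores])  # -> [0, 1, 9, 9]
```

With ten buckets this is exactly the decile segmentation used for the pricing tests described earlier.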
One limitation of standardized differences is the lack of consensus as to what value of a standardized difference denotes important residual imbalance between treated and untreated subjects. The propensity score itself is a predicted probability that a student (or other unit) receives a treatment, given their observed characteristics. In the fractional root model Y = a + bX^c, c can be interpreted as an elasticity when a = 0. Beyond causal inference, propensity models also help identify the need for a discount to encourage full-price shoppers; similarly, a propensity model can identify those customers who need extra attention. The Gosset (t) link model enables us to account for symmetrically distributed heavy tails in the latent-variable model for a binary response.
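The standardized difference used to assess balance is straightforward to compute. A minimal sketch (a common, though by no means universal, rule of thumb treats |d| < 0.1 as acceptable balance — consistent with the lack-of-consensus point above):

```python
import math

def standardized_difference(treated, control):
    """Standardized mean difference between two samples:
    d = (mean_t - mean_c) / sqrt((s_t^2 + s_c^2) / 2)."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    pooled_sd = math.sqrt((var(treated) + var(control)) / 2)
    return (mean(treated) - mean(control)) / pooled_sd

# Hypothetical ages in matched treated/control groups.
d = standardized_difference([60, 62, 64, 66], [59, 61, 65, 67])
```

Unlike a t-test p-value, d does not depend on sample size, which is why it is preferred for checking covariate balance after matching or weighting.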
In the Keynesian model it can be shown that the multiplier is simply the reciprocal of the marginal propensity to save. A biological population with plenty of food, space to grow, and no threat from predators tends to grow at a rate proportional to the population — in each unit of time, a certain percentage of the individuals produce new individuals. Because the performance of propensity score matching hinges on how well we can predict the propensity scores, one Stata example uses factor-variable notation to include both linear and quadratic terms for mage, the only continuous variable in the model. The IPTW method can remove systematic differences between FFX and GnP on observed characteristics to a degree comparable to propensity score matching, without having to reduce the sample size used to estimate the average treatment effect.
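Inverse probability of treatment weighting (IPTW), mentioned above, weights each unit by the inverse of the probability of the treatment it actually received. A minimal sketch of the weight formula:

```python
def iptw_weight(treated, e):
    """Inverse probability of treatment weight given propensity score e(x):
    1 / e(x) for treated units, 1 / (1 - e(x)) for control units."""
    return 1.0 / e if treated else 1.0 / (1.0 - e)

# A treated unit that had only a 25% chance of treatment gets weight 4:
# it "stands in" for four similar units who mostly went untreated.
print(iptw_weight(True, 0.25))   # -> 4.0
print(iptw_weight(False, 0.25))
```

Note how weights blow up as e(x) approaches 0 or 1, which is why extreme propensity scores are often trimmed or stabilized in practice.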
Restriction to more tractable spaces such as $\cal{F}_\textrm{logistic}$ is well known to lead to the issue of model misspecification, which occurs when the space considered does not contain the true propensity score. Using the forward-selection method, the two covariates Dis and Mult were entered into the model and contribute significantly to the prediction of time. The rate at which forecast confidence intervals widen is not a reliable guide to model quality: what is important is that the model makes the correct assumptions about how uncertain the future is. In one study, the degree of hypoxia was the most significant contributor to the propensity score model. In the philosophical sense, probability can be thought of as a physical propensity, disposition, or tendency of a given type of physical situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such outcomes. On the marketing side, staff might use the model to identify those members who don't require a brochure and would simply renew after receiving an invoice.
Propensity score estimation begins by identifying potential confounders. For continuous treatments, one can estimate a generalized propensity score (GPS) and then estimate the conditional expectation of the outcome given the observed treatment and the estimated GPS — in one implementation, by calling the routine doseresponse_model.
Researchers first estimate a propensity score for each student (or other unit) in the sample (Rosenbaum and Rubin, 1983). One can then run a generalized linear model with participation and the propensity score as covariates; see SAS Global Forum Paper 314-2012, "Propensity Score Analysis and Assessment of Propensity Score Approaches using SAS Procedures." Prediction differs from simulation in that it uses both measured input and measured output when computing the system response.
Common market response models include aggregate response models and individual response models. An aggregate example is the fractional root model, Y = a + bX^c: c = 1/2 gives the square-root model and c = −1 the reciprocal model (Y approaches a as X gets large). With a marginal propensity to consume of 0.9, the multiplier is 1/(1 − 0.9) = 10, so an injection of 75 raises income by 10 × 75 = 750; according to the multiplier model, recessions occur because declines in spending are amplified in the same way. For system identification, suppose you have measured the response of your system to a step input and saved the resulting response data in a vector y of response values at the times stored in another vector, t. To build propensity scores, let's prepare a logistic regression model to estimate them. When I have the propensity, how does it help me with the marketing, you ask? The main use is to increase the ROI of marketing campaigns: from the modelling data set, a predictive response model is built which ranks customers on their propensity to buy the product.
In a causal analysis, the independent variables are regarded as causes of the dependent variable. To estimate the propensity score, one study used a logistic regression model in which treatment status (receipt of smoking cessation counseling or not) was the outcome. The example stream for determining response propensity is named ResponsePropensity. Once the model is built, it is scored using data from the test or validation partition, and a new model that delivers adjusted propensity scores is constructed by analyzing the original model's performance on that partition. Note the distinction: propensity score methods estimate the odds ratio given the propensity score categories, whereas logistic regression estimates the odds ratio given the confounders included in the model.
Propensity score methods generally allow many more variables to be included in the propensity score model than could be incorporated directly into a multivariable analysis of the study outcome, which increases their ability to adjust effectively for confounding. To accomplish this goal, a model is created that includes all predictor variables that are useful in predicting the response variable; the aim is prediction — to predict a future response based on known values of the predictor variables. On the economics side, there are two versions of the tax multiplier — the simple tax multiplier and the complex tax multiplier — depending on whether the change in taxes affects only the consumption component of GDP or all the components of GDP.
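The multiplier arithmetic that recurs in the economics fragments above can be collected in a few lines. A minimal sketch of the simple Keynesian formulas (no crowding-out or tax feedback):

```python
def expenditure_multiplier(mps):
    """Simple expenditure multiplier: M = 1 / MPS = 1 / (1 - MPC)."""
    return 1.0 / mps

def simple_tax_multiplier(mpc):
    """Simple tax multiplier: -MPC / MPS (a tax cut works only through
    the consumption component of GDP, hence the extra MPC factor)."""
    return -mpc / (1.0 - mpc)

# With MPC = 0.9 (so MPS = 0.1): multiplier ~ 10, so an injection of 75
# raises income by ~ 750, matching the worked example in the text.
m = expenditure_multiplier(0.1)
print(m, m * 75)
print(simple_tax_multiplier(0.9))
```

The tax multiplier is one MPC factor smaller in magnitude (and opposite in sign) than the expenditure multiplier, because the first round of a tax change is partly saved.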
A propensity/response model is not by itself necessary to drive campaign lift or ROI, and it will not tell marketers which customers are most likely to contribute to the incremental campaign response; an alternative statistical model is needed that targets the customers whose propensities the campaign can actually change. Propensity scores can also be estimated using generalized additive models. Such models let you categorize customers by their propensity to respond to a sales campaign. In the absence of a biologically based model, dose-response modeling is largely a curve-fitting exercise. Grilli and Rampichini's training sessions, "Propensity scores for the estimation of average treatment effects in observational studies" (Bristol, June 2011), are a useful introduction. In one application, pruning to maximize model accuracy was applied to a classification tree model developed in S-PLUS to create propensity scores, improving causal inference in comparing hospitalized vs. ambulatory patients with community-acquired pneumonia. A well-fitted model may not necessarily produce good enough propensity scores to balance the distributions of covariates across conditions (Shadish, Luellen, & Clark, 2006).
Propensity score matching is often used in observational studies to create treatment and control groups with similar distributions of observed covariates. The 'personalized' package is designed for the analysis of data where the effect of a treatment or intervention may vary for different patients. Armed with propensity information, you can decide not to send an email to certain customers at all. On the economics side, the multiplier effect refers to the increase in final income arising from any new injection of spending: suppose a factory with a payroll of $500,000 locates in Lemmingville, a typical suburban community — that payroll is spent and re-spent locally, so final local income rises by more than $500,000.
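The pairing of treatment and control units on the propensity score can be illustrated with a greedy nearest-neighbour matcher. This is a simplified sketch on hypothetical unit IDs and scores — production tools such as psmatch2 or MatchIt offer many more options (calipers, replacement, optimal matching):

```python
def greedy_match(treated, controls, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.
    `treated` and `controls` are dicts mapping unit id -> propensity score.
    Each control is used at most once; pairs farther apart than `caliper`
    are discarded. Returns {treated_id: matched_control_id}."""
    available = dict(controls)
    pairs = {}
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs[t_id] = c_id
            del available[c_id]
    return pairs

# Hypothetical scores: t1 pairs with c1, t2 with c2; c3 is too far away.
treated = {"t1": 0.30, "t2": 0.70}
controls = {"c1": 0.28, "c2": 0.65, "c3": 0.90}
print(greedy_match(treated, controls))  # -> {'t1': 'c1', 't2': 'c2'}
```

Greedy matching is order-dependent, which is one reason balance diagnostics (such as standardized differences) should always be checked afterwards.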
Propensity score matching (PSM) refers to the pairing of treatment and control units with similar values of the propensity score. The goal of the underlying logistic regression is to correctly predict the category of outcome for individual cases using the most parsimonious model. Propensity scores may also be calculated by the Ensemble node, depending on the ensemble method used. Comparing raw groups would be unfair on the program, so we use matchit to construct matched samples. Survival analysis, by contrast, is used to analyze data in which the time to an event is the response.
Researchers have found that slight misspecification of the propensity score model can result in substantial bias of estimated treatment effects, so the type of regression model used to estimate the propensity scores matters. In one analysis, the estimated propensity score model included only seven main effects and excluded interactions and quadratic terms. You could stick with a regression on covariates rather than the propensity score, or you could compare the response within similar groups, where similarity is defined by the propensity score; performing a regression (rather than simple cross-tabs) after the weighting or matching is a good idea to handle inevitable imperfections. Keep in mind that an overfit model can cause the regression coefficients, p-values, and R-squared to be misleading.
We performed propensity-score analyses of the Japan Septic Disseminated Intravascular Coagulation (JSEPTIC DIC) study database; see also "A Practical Guide to Getting Started with Propensity Scores." Propensity modelling is the collective name for a group of statistical techniques that provide an objective view of the likely behaviour of an individual customer. In deterministic models, the output of the model is fully determined by the parameter values and the initial conditions; in stochastic models, the same parameter values and initial conditions lead to an ensemble of different outputs. As an exercise: using the response model P(x) = 100 − AGE(x) for customer x and a data table of customer ages and responses, construct the cumulative gains and lift charts.
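The gains-and-lift exercise above can be sketched directly. Since the original data table is not reproduced here, the ages and responses below are hypothetical stand-ins:

```python
def gains_and_lift(ages, responded):
    """Rank customers by the score P(x) = 100 - AGE(x), then compute
    cumulative gains (share of all responders captured in the top k) and
    lift (gains divided by the share of the file contacted)."""
    score = lambda name: 100 - ages[name]                # P(x) = 100 - AGE(x)
    ranked = sorted(ages, key=score, reverse=True)        # youngest first
    total = sum(responded[name] for name in ages)
    captured, gains, lift = 0, [], []
    for k, name in enumerate(ranked, start=1):
        captured += responded[name]
        gains.append(captured / total)
        lift.append((captured / total) / (k / len(ranked)))
    return ranked, gains, lift

# Hypothetical data table: two of four customers responded.
ages = {"A": 25, "B": 40, "C": 55, "D": 70}
resp = {"A": 1, "B": 1, "C": 0, "D": 0}
ranked, gains, lift = gains_and_lift(ages, resp)
```

Here the top half of the ranked file captures all responders (gains = 1.0 at 50% contacted), so the model achieves a lift of 2 in the first decile-equivalent — twice as many responders as random targeting would find.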
In the first chapter of my 1999 book Multiple Regression, I wrote: "There are two main uses of multiple regression: prediction and causal analysis." In one study, propensity scores were calculated using the significant variables and used to model propensity to buy. In the presence of model misspecification, the estimator $\hat\psi$ is inconsistent. The audit risk model, by analogy, determines the total amount of risk associated with an audit and describes how this risk can be managed.
The ODE portion of this model was calibrated to data related to blood flow following experimental pressure responses in non-injured human subjects or to data from people with SCI. Model AH42 and AH56 Attic Sprinklers. The propensity score is the probability of receiving the active treatment (Z = 1 vs. A simple guide to IRT and Rasch 3 Table 1 5X5 person by item matrix (with highlighted average) Perso 0 We can also make a tentative assessment of the item attribute based on this ideal-case matrix. com: Apple AirPods with Charging Case (Previous Model) Skip to main content. The multiplier model is an idea developed by Keynes which demonstrates that the additional economic activity generated by injecting a certain amount of money into a system exceeds the original sum. have contributed to their propensity to cover up disasters," Wang. innovation and industrial competitiveness by advancing measurement science, standards, and technology for engineered systems in ways that enhance economic security and improve quality of life. 
| 2020-02-18 00:58:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2775893211364746, "perplexity": 2420.859464758306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143455.25/warc/CC-MAIN-20200217235417-20200218025417-00373.warc.gz"} |
https://indico.nucleares.unam.mx/event/1541/session/67/contribution/275 | # 19th International Conference on Hadron Spectroscopy and Structure in memoriam Simon Eidelman
26-31 July 2021
Mexico City
HADRON 2021 is over. Thanks for making it a success!
# Transition Form Factor calculation for $B_s$-meson in Covariant Confined Quark Model
## Speakers
We study the semileptonic $B_s$-meson decay $B_s\rightarrow K l^+\nu_l$ using the Covariant Confined Quark Model (CCQM), which has built-in infrared confinement. The transition form factors are calculated within the CCQM over the entire range of physical momentum transfer; they can be further utilized to compute the branching fractions of the $B_s$ meson. Our preliminary results for the form factors $f_+ (0)$ and $A_+(0)$ are 0.25 and 0.21, respectively, which are consistent with available QCD results. | 2022-05-22 11:27:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8304053544998169, "perplexity": 5638.20453661508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00490.warc.gz"}
http://altreason.com/t/crossover/ | ## The Value of Crossover
In my last (real) post I wrote about the effects of elitism on a population, and concluded that a small number of elites, who cloned themselves, produced the greatest increase in population fitness. Just like in real life.
I also wrote:
I also don’t have the time to run everything multiple times (takes days at the moment).
I am pleased to say that I rewrote the program a weekend ago using the CUDA runtime library, allowing me to use my GPU and speed things up quite a bit. Which is good, because I screwed up running this test 3 times and had to restart it – if I was on the old version I would not have the time for such second chances.
This program is mostly similar to the first, with a few differences:
#### No Elitism
Using the findings of the previous post, I decided to set this one up using the "Only the Best" method. As a reminder, in this heuristic parents and kids are both evaluated and only the top N are passed on to the next generation. Flexible elitism.
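In pseudocode terms, "Only the Best" is a (μ+λ)-style truncation selection. Here is a minimal Python sketch of the idea (the function and variable names are mine, not from the actual CUDA program):

```python
def only_the_best(parents, children, fitness, n):
    """Evaluate parents and children together and keep the top n.

    'Flexible elitism': a parent survives only as long as it outscores
    the new children, rather than being guaranteed a slot.
    """
    pool = parents + children
    pool.sort(key=fitness, reverse=True)  # higher fitness is better
    return pool[:n]

# Toy run: genomes are just numbers, and fitness is the value itself.
survivors = only_the_best([3, 1, 4], [1, 5, 9], fitness=lambda g: g, n=3)
print(survivors)  # [9, 5, 4]
```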
#### Pixel Fitness
The fitness metric was changed slightly – it's still RMSE – but computed at the pixel level instead of the color channel level. Conceptually there is almost no difference; what's important to know is that fitness still increases without bound the more similar the images are, and that the fitness levels in this post cannot be directly compared to the CPU version.
#### Haploid Chromosomes
In the previous model, the "chromosomes" had two genomes each, a "dominant" one and a "recessive" one. During crossover, the dominant and recessive genomes would cross over. This all happens within a single artist. Babies are made by combining the dominant genome from one parent with the recessive genome of the other.
In the haploid model of crossover, the parents combine their genomes at the crossover point to make a new, child genome. i.e. the child might get the first 20% of its genome from parent 1 and the remaining 80% from parent 2.
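To make the haploid scheme concrete, here is a small single-point crossover sketch (purely illustrative, not the program's actual CUDA code – genomes are strings here instead of triangle lists):

```python
import random

def crossover(parent1, parent2, rng=random.Random(0)):
    """Single-point haploid crossover: the child takes the head of one
    genome and the tail of the other, split at a random point."""
    assert len(parent1) == len(parent2)
    point = rng.randrange(1, len(parent1))  # at least one gene from each parent
    return parent1[:point] + parent2[point:]

child = crossover("AAAAAAAAAA", "BBBBBBBBBB")
print(child)  # head from parent 1, tail from parent 2
```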
I also changed genome initialization so that all triangles are set to invisible in the beginning (previously the first triangle was made visible).
#### No Bit Level Crossover
I didn’t have time to implement it and I don’t think it makes a difference. Bit level crossover is basically byte level crossover with an extra point mutation.
#### No Location Simulation
In this version genomes are free to mate with any other genome (in stochastic proportion to their fitness).
#### Randomness Fixed
The random seed actually works in this version – I can guarantee that all runs started with the same set of genomes and all runs are repeatable.
#### Effort
Another difference is that you no longer specify the number of generations you want to run, but instead the effort you want the algorithm to put into searching for the best image.
$$effort = \text{population size} + \left(\text{number of children} \times \text{number of generations}\right)$$
This allows me to compare settings with different numbers of population and children on even footing.
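Solving the effort formula for the number of generations gives a quick sanity check of a run's budget (a hypothetical helper of mine, not a function from the program):

```python
def generations_for(effort, population, children):
    """Invert the effort formula above:
    effort = population + children * generations."""
    return (effort - population) // children

# The settings used below: 100,000 effort, 100 population, 100 children.
print(generations_for(100_000, 100, 100))  # 999
```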
## Methodology
This time I ran the program on three different types of images: a picture, a drawing and a drawing with a transparent background. I wanted to see if a different type of image would affect the results.
I ran the program with 100,000 effort and a population of 100 that produced 100 children each round, with 100 triangles in their genomes, and adjusted the crossover chance over [0,1] in 0.1 increments. I ran the program 10 times per image per increment to produce the averages used for the graph below:
The error bars represent the 95% confidence interval.
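The post doesn't say exactly how the intervals were computed; one common choice for error bars over 10 runs is a normal-approximation interval of 1.96 standard errors (a t-based interval would be slightly wider at this sample size). A sketch:

```python
import statistics

def mean_ci95(samples):
    """Mean and half-width of a normal-approximation 95% confidence
    interval (1.96 standard errors of the mean)."""
    m = statistics.mean(samples)
    se = statistics.stdev(samples) / len(samples) ** 0.5
    return m, 1.96 * se

# Hypothetical final-fitness values from 10 runs of one setting.
runs = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 10.3, 9.7, 10.0, 10.1]
m, h = mean_ci95(runs)
print(f"{m:.2f} ± {h:.2f}")
```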
From this light testing it seems that crossover does indeed improve fitness. This is a relief because despite there being theory on why it should improve fitness, I had not personally discerned a difference. This is probably because the difference is only dramatic between 0 and 0.25. It seems that it's more important to have some crossover than to worry about how much crossover.
I’ll be using a 100% crossover chance going forward and I’ll try to tease out the difference between crossover at the byte level and crossover at the triangle level. | 2018-12-14 20:26:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28855258226394653, "perplexity": 1221.5024885201447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826306.47/warc/CC-MAIN-20181214184754-20181214210754-00310.warc.gz"} |
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-11-infinite-sequences-and-series-review-exercises-page-826/43 | ## Calculus 8th Edition
The series has radius of convergence $R=\frac{1}{2}$ and interval of convergence $I=[\frac{5}{2},\frac{7}{2})$.
Ratio test: $\lim\limits_{n \to \infty}\left|\frac{a_{n+1}}{a_{n}}\right|=\lim\limits_{n \to \infty}\left|\frac{\frac{2^{n+1}(x-3)^{n+1}}{\sqrt {n+4}}}{\frac{2^{n}(x-3)^{n}}{\sqrt {n+3}}}\right| =|2(x-3)|\lim\limits_{n \to \infty}\sqrt {\frac{n+3}{n+4}} =|2(x-3)|\sqrt {\frac{1+0}{1+0}} =|2(x-3)|$. The series converges when $|2(x-3)|\lt 1$, that is, $|x-3|\lt \frac{1}{2}$, so $-\frac{1}{2}\lt x-3 \lt \frac{1}{2}$ and $\frac{5}{2} \lt x \lt \frac{7}{2}$. At $x=\frac{5}{2}$ the series becomes $\sum \frac{(-1)^{n}}{\sqrt{n+3}}$, which converges by the alternating series test; at $x=\frac{7}{2}$ it becomes $\sum \frac{1}{\sqrt{n+3}}$, a divergent $p$-series with $p=\frac{1}{2}$. Thus, the series has radius of convergence $R=\frac{1}{2}$ and interval of convergence $I=[\frac{5}{2},\frac{7}{2})$ | 2019-12-15 08:49:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8270578384399414, "perplexity": 190.08018678110687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541307797.77/warc/CC-MAIN-20191215070636-20191215094636-00452.warc.gz"}
https://yos.io/2013/07/10/learn-vim-in-5-minutes/ | Quick! Let’s learn some minimal vi commands!
In a Vim editor:
• press i to go into insert mode, start typing your text
• press Esc to go back to normal mode
• then write the file and quit with :wq + <Enter>
Congratulations. You have joined the great fraternity of people who know this ancient, revered text editor.
These are the essential commands you will need to get started with git commit messages. | 2020-01-29 06:26:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41583186388015747, "perplexity": 11009.231970319253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251788528.85/warc/CC-MAIN-20200129041149-20200129071149-00309.warc.gz"} |
https://bestforall.tnedu.gov/resource/rising-grade-8-summer-learning-resources-math-week-4?book=5409&binder_id=5406 | ## Weekly Overview
Weekly Topics
The focus of this week’s instruction is to deepen students’ understanding of:
• Generating Equivalent Expressions (both lessons 1 and 2)
• Understanding Equations
• Using If-Then Moves in Solving Equations (both lessons 4 and 5)
Materials Needed
• Manila Envelopes (enough for each student in the class to have one)
• Expressions
• Equations
• Tape Diagram
• Pencil and Paper
Standard(s) Covered
7.EE.B.3
Solve multi-step real-world and mathematical problems posed with positive and negative rational numbers presented in any form (whole numbers, fractions, and decimals).
1. Apply properties of operations to calculate with numbers in any form; convert between forms as appropriate.
2. Assess the reasonableness of answers using mental computation and estimation strategies.
Representations
• Tape Diagram
• Numerical Sentences
• Variable: A variable is a symbol (such as a letter) that represents a number (i.e., it is a placeholder for a number). A variable is actually quite a simple idea: it is a placeholder—a blank—in an expression or an equation where a number can be inserted. A variable holds a place for a single number throughout all calculations done with the variable—it does not vary. It is the user of the variable who has the ultimate power to change or vary what number is inserted, as he/she desires. The power to vary rests in the will of the student, not in the variable itself.
• Numerical expression: A numerical expression is a number, or it is any combination of sums, differences, products, or divisions of numbers that evaluates to a number.
• Value of a numerical expression: The value of a numerical expression is the number found by evaluating the expression. For example, $${1 \over 3}$$ ∙ (2 + 4) - 7 is a numerical expression, and its value is -5.
• Expression: An expression is a numerical expression, or it is the result of replacing some (or all) of the numbers in a numerical expression with variables. There are two ways to build expressions: We can start out with a numerical expression, such as $${1 \over 3}$$ ∙ (2 + 4) - 7 and replace some of the numbers with letters to get $${1 \over 3}$$ ∙ (x + y) - z. We can build such expressions from scratch, as in x + x(y-z) , and note that if numbers were placed in the expression for the variables x, y, and z, the result would be a numerical expression. The key is to strongly link expressions back to computations with numbers through building and evaluating them. Building an expression often occurs in the context of a word problem by thinking about examples of numerical expressions first and then replacing some of the numbers with letters in a numerical expression. The act of evaluating an expression means to replace each of the variables with specific numbers to get a numerical expression, and then finding the value of that numerical expression.
• Equivalent expressions: Two expressions are equivalent if both expressions evaluate to the same number for every substitution of numbers into all the letters in both expressions.
• An expression in standard form: An expression that is in expanded form where all like terms have been collected is said to be in standard form. Expanded form is where the like terms are not collected (3x + 2y + 5x – y is an example of expanded form; written in standard form it would be 8x + y).
• Term (description): Each summand of an expression is called a term.
• Coefficient of a term: The coefficient of a term is the number multiplying a variable. For example, 3x + 2y – c + 2. 3x, 2y, -c, and 2 are all terms. 3 is the coefficient of x, 2 is the coefficient of y, -1 is the coefficient of c, and 2 is the constant term and will therefore have no coefficient.
• Equation: An equation is a statement of equality between two expressions. If A and B are two expressions in the variable x, then A=B is an equation in the variable x. Students sometimes have trouble keeping track of what is an expression and what is an equation. An expression never includes an equal sign (=) and can be thought of as part of a sentence. The expression 3+4 read aloud is, “Three plus four,” which is only a phrase in a possible sentence. Equations, on the other hand, always have an equal sign, which is a symbol for the verb is. The equation 3+4=7 read aloud is, “Three plus four is seven,” which expresses a complete thought (i.e., a sentence). Number sentences—equations with numbers only—are special among all equations.
• Number sentence: A number sentence is a statement of equality (or inequality) between two numerical expressions. A number sentence is by far the most concrete version of an equation. It also has the very important property that it is always true or always false, and it is this property that distinguishes it from a generic equation. Examples include 3+4=7 (true) and 3+3=7 (false). This important property guarantees the ability to check whether or not a number is a solution to an equation with a variable: just substitute a number into the variable. The resulting number sentence is either true or it is false. If the number sentence is true, the number is a solution to the equation. For that reason, number sentences are the first and most important type of equation that students need to understand.
• Solution: A solution to an equation with one variable is a number that, when substituted for all instances of the variable in both expressions, makes the equation a true number sentence.
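The last few definitions lend themselves to a quick computational illustration. This small Python sketch (mine, not part of the lesson materials) spot-checks equivalence of expressions by substitution, and tests candidate solutions of an equation in the spirit of the number-sentence definition above:

```python
def equivalent(expr1, expr2, samples):
    """Spot-check equivalence: evaluate both expressions at every sample
    value and compare. Agreement suggests equivalence (and for polynomials,
    agreement at enough distinct values actually proves it)."""
    return all(expr1(x) == expr2(x) for x in samples)

def is_solution(value, lhs, rhs):
    """Substitute value into both sides of an equation; the resulting
    number sentence is true exactly when value is a solution."""
    return lhs(value) == rhs(value)

xs = range(-10, 11)
# 3x + 2 + 5x collects to 8x + 2 (standard form), so these agree everywhere:
print(equivalent(lambda x: 3*x + 2 + 5*x, lambda x: 8*x + 2, xs))  # True
print(equivalent(lambda x: 3*x + 2 + 5*x, lambda x: 8*x + 1, xs))  # False
# Check candidate solutions of the equation 2x + 3 = 11:
print(is_solution(4, lambda x: 2*x + 3, lambda x: 11))  # True
print(is_solution(5, lambda x: 2*x + 3, lambda x: 11))  # False
```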
## Materials List
The following materials list will be used for the entire four weeks: Materials List. | 2021-09-24 03:18:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5812419652938843, "perplexity": 746.368177945014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00314.warc.gz"} |
http://isabelle.in.tum.de/repos/isabelle/file/b7ca64c8fa64/doc-src/IsarRef/generic.tex | doc-src/IsarRef/generic.tex
author wenzelm Mon Aug 30 14:11:47 1999 +0200 (1999-08-30) changeset 7391 b7ca64c8fa64 parent 7356 1714c91b8729 child 7396 d3f231fe725c permissions -rw-r--r--
'iff' attribute;
\chapter{Generic Tools and Packages}\label{ch:gen-tools}

\section{Basic proof methods}\label{sec:pure-meth}

\indexisarmeth{fail}\indexisarmeth{succeed}\indexisarmeth{$-$}\indexisarmeth{assumption}
\indexisarmeth{finish}\indexisarmeth{fold}\indexisarmeth{unfold}
\indexisarmeth{rule}\indexisarmeth{erule}
\begin{matharray}{rcl}
- & : & \isarmeth \\
assumption & : & \isarmeth \\
finish & : & \isarmeth \\[0.5ex]
rule & : & \isarmeth \\
erule^* & : & \isarmeth \\[0.5ex]
fold & : & \isarmeth \\
unfold & : & \isarmeth \\[0.5ex]
succeed & : & \isarmeth \\
fail & : & \isarmeth \\
\end{matharray}

\begin{rail}
('fold' | 'unfold' | 'rule' | 'erule') thmrefs
;
\end{rail}

\begin{descr}
\item [``$-$''] does nothing but insert the forward chaining facts as premises
into the goal. Note that command $\PROOFNAME$ without any method actually
performs a single reduction step using the $rule$ method (see below); thus a
plain \emph{do-nothing} proof step would be ``$\PROOF{-}$'' rather than
$\PROOFNAME$ alone.
\item [$assumption$] solves some goal by assumption, after inserting the
goal's facts.
\item [$finish$] solves all remaining goals by assumption; this is the default
terminal proof method for $\QEDNAME$, i.e.\ it usually does not have to be
spelled out explicitly.
\item [$rule~thms$] applies some rule given as argument in backward manner;
facts are used to reduce the rule before applying it to the goal. Thus
$rule$ without facts is plain \emph{introduction}, while with facts it
becomes an \emph{elimination}.

Note that the classical reasoner introduces another version of $rule$ that
is able to pick appropriate rules automatically, whenever explicit $thms$
are omitted (see \S\ref{sec:classical-basic}); that method is the default
one for initial proof steps, such as $\PROOFNAME$ and ``$\DDOT$'' (two
dots).
\item [$erule~thms$] is similar to $rule$, but applies rules by
elim-resolution. This is an improper method, mainly for experimentation and
porting of old scripts. Actual elimination proofs are usually done with
$rule$ (single step, involving facts) or $elim$ (multiple steps, see
\S\ref{sec:classical-basic}).
\item [$unfold~thms$ and $fold~thms$] expand and fold back again the given
meta-level definitions throughout all goals; facts may not be involved.
\item [$succeed$] yields a single (unchanged) result; it is the identity of
the ``\texttt{,}'' method combinator.
\item [$fail$] yields an empty result sequence; it is the identity of the
``\texttt{|}'' method combinator.
\end{descr}

\section{Miscellaneous attributes}

\indexisaratt{tag}\indexisaratt{untag}\indexisaratt{COMP}\indexisaratt{RS}
\indexisaratt{OF}\indexisaratt{where}\indexisaratt{of}\indexisaratt{standard}
\indexisaratt{elimify}\indexisaratt{transfer}\indexisaratt{export}
\begin{matharray}{rcl}
tag & : & \isaratt \\
untag & : & \isaratt \\[0.5ex]
OF & : & \isaratt \\
RS & : & \isaratt \\
COMP & : & \isaratt \\[0.5ex]
of & : & \isaratt \\
where & : & \isaratt \\[0.5ex]
standard & : & \isaratt \\
elimify & : & \isaratt \\
export^* & : & \isaratt \\
transfer & : & \isaratt \\
\end{matharray}

\begin{rail}
('tag' | 'untag') (nameref+)
;
'OF' thmrefs
;
('RS' | 'COMP') nat? thmref
;
'of' (inst * ) ('concl' ':' (inst * ))?
;
'where' (name '=' term * 'and')
;

inst: underscore | term
;
\end{rail}

\begin{descr}
\item [$tag~tags$ and $untag~tags$] add and remove $tags$ to the theorem,
respectively. Tags may be any list of strings that serve as comment for
some tools (e.g.\ $\LEMMANAME$ causes tag ``$lemma$'' to be added to the
result).
\item [$OF~thms$, $RS~n~thm$, and $COMP~n~thm$] compose rules. $OF$ applies
$thms$ in parallel (cf.\ \texttt{MRS} in \cite[\S5]{isabelle-ref}, but note
the reversed order). $RS$ resolves with the $n$-th premise of $thm$; $COMP$
is a version of $RS$ that does not include the automatic lifting process
that is normally intended (see also \texttt{RS} and \texttt{COMP} in
\cite[\S5]{isabelle-ref}).

\item [$of~ts$ and $where~\vec x = \vec t$] perform positional and named
instantiation, respectively. The terms given in $of$ are substituted for
any schematic variables occurring in a theorem from left to right;
``\texttt{_}'' (underscore) indicates to skip a position.

\item [$standard$] puts a theorem into the standard form of object-rules, just
as the ML function \texttt{standard} (see \cite[\S5]{isabelle-ref}).

\item [$elimify$] turns a destruction rule into an elimination rule.

\item [$export$] lifts a local result out of the current proof context,
generalizing all fixed variables and discharging all assumptions. Note that
(partial) export is usually done automatically behind the scenes. This
attribute is mainly for experimentation.

\item [$transfer$] promotes a theorem to the current theory context, which has
to enclose the former one. Normally, this is done automatically when rules
are joined by inference.

\end{descr}

\section{Calculational proof}\label{sec:calculation}

\indexisarcmd{also}\indexisarcmd{finally}\indexisaratt{trans}
\begin{matharray}{rcl}
\isarcmd{also} & : & \isartrans{proof(state)}{proof(state)} \\
\isarcmd{finally} & : & \isartrans{proof(state)}{proof(chain)} \\
trans & : & \isaratt \\
\end{matharray}

Calculational proof is forward reasoning with implicit application of
transitivity rules (such as those of $=$, $\le$, $<$). Isabelle/Isar maintains
an auxiliary register $calculation$\indexisarthm{calculation} for accumulating
results obtained by transitivity together with the current facts.
Command $\ALSO$ updates $calculation$ from the most recent result, while
$\FINALLY$ exhibits the final result by forward chaining towards the next goal
statement. Both commands require valid current facts, i.e.\ may occur only
after commands that produce theorems such as $\ASSUMENAME$, $\NOTENAME$, or
some finished $\HAVENAME$ or $\SHOWNAME$.

Also note that the automatic term abbreviation ``$\dots$'' has its canonical
application with calculational proofs. It automatically refers to the
argument\footnote{The argument of a curried infix expression is its right-hand
side.} of the preceding statement.

Isabelle/Isar calculations are implicitly subject to block structure in the
sense that new threads of calculational reasoning are commenced for any new
block (as opened by a local goal, for example). This means that, apart from
being able to nest calculations, there is no separate \emph{begin-calculation}
command required.

\begin{rail}
('also' | 'finally') transrules? comment?
;
'trans' (() | 'add' ':' | 'del' ':') thmrefs
;

transrules: '(' thmrefs ')' interest?
;
\end{rail}

\begin{descr}
\item [$\ALSO~(thms)$] maintains the auxiliary $calculation$ register as
follows. The first occurrence of $\ALSO$ in some calculational thread
initialises $calculation$ by $facts$. Any subsequent $\ALSO$ on the same
level of block-structure updates $calculation$ by some transitivity rule
applied to $calculation$ and $facts$ (in that order). Transitivity rules
are picked from the current context plus those given as $thms$ (the latter
have precedence).

\item [$\FINALLY~(thms)$] maintains $calculation$ in the same way as
$\ALSO$, and concludes the current calculational thread. The final result
is exhibited as fact for forward chaining towards the next goal. Basically,
$\FINALLY$ just abbreviates $\ALSO~\FROM{calculation}$. A typical proof
idiom is ``$\FINALLY~\SHOW{}{\VVar{thesis}}~\DOT$''.

\item [$trans$] maintains the set of transitivity rules of the theory or proof
context, by adding or deleting theorems (the default is to add).
\end{descr}

See theory \texttt{HOL/Isar_examples/Group} for a simple application of
calculations for basic equational reasoning.
\texttt{HOL/Isar_examples/KnasterTarski} involves a few more advanced
calculational steps in combination with natural deduction.

\section{Axiomatic Type Classes}\label{sec:axclass}

\indexisarcmd{axclass}\indexisarcmd{instance}\indexisarmeth{intro-classes}
\begin{matharray}{rcl}
\isarcmd{axclass} & : & \isartrans{theory}{theory} \\
\isarcmd{instance} & : & \isartrans{theory}{proof(prove)} \\
intro_classes & : & \isarmeth \\
\end{matharray}

Axiomatic type classes are provided by Isabelle/Pure as a purely
\emph{definitional} interface to type classes (cf.~\S\ref{sec:classes}). Thus
any object logic may make use of this light-weight mechanism for abstract
theories. See \cite{Wenzel:1997:TPHOL} for more information. There is also a
tutorial on \emph{Using Axiomatic Type Classes in Isabelle} that is part of
the standard Isabelle documentation.
%FIXME cite

\begin{rail}
'axclass' classdecl (axmdecl prop comment? +)
;
'instance' (nameref '<' nameref | nameref '::' simplearity) comment?
;
\end{rail}

\begin{descr}
\item [$\isarkeyword{axclass}~c < \vec c~axms$] defines an axiomatic type
class as the intersection of existing classes, with additional axioms
holding. Class axioms may not contain more than one type variable. The
class axioms (with implicit sort constraints added) are bound to the given
names. Furthermore a class introduction rule is generated, which is
employed by method $intro_classes$ to support instantiation proofs of this
class.

\item [$\isarkeyword{instance}~c@1 < c@2$ and $\isarkeyword{instance}~t ::
(\vec s)c$] set up a goal stating the class relation or type arity. The
proof would usually proceed by the $intro_classes$ method, and then
establish the characteristic theorems of the type classes involved. After
finishing the proof the theory will be augmented by a type signature
declaration corresponding to the resulting theorem.
\item [Method $intro_classes$] iteratively expands the class introduction
rules.
\end{descr}

See theory \texttt{HOL/Isar_examples/Group} for a simple example of using
axiomatic type classes, including instantiation proofs.

242 \section{The Simplifier}
243
244 \subsection{Simplification methods}\label{sec:simp}
245
246 \indexisarmeth{simp}\indexisarmeth{asm-simp}
247 \begin{matharray}{rcl}
248 simp & : & \isarmeth \\
249 asm_simp & : & \isarmeth \\
250 \end{matharray}
251
252 \railalias{asmsimp}{asm\_simp}
253 \railterm{asmsimp}
254
255 \begin{rail}
256 ('simp' | asmsimp) (simpmod * )
257 ;
258
259 simpmod: ('add' | 'del' | 'only' | 'other') ':' thmrefs
260 ;
261 \end{rail}
262
263 \begin{descr}
264 \item [Methods $simp$ and $asm_simp$] invoke Isabelle's simplifier, after
265 modifying the context by adding or deleting given rules. The
266 \railtoken{only} modifier first removes all other rewrite rules and
267 congruences, and then is like \railtoken{add}. In contrast,
268 \railtoken{other} ignores its arguments; nevertheless there may be
269 side-effects on the context via attributes. This provides a back door for
270 arbitrary context manipulation.
271
272 Both of these methods are based on \texttt{asm_full_simp_tac}, see
273 \cite[\S10]{isabelle-ref}; $simp$ removes any exisiting premises of the
274 goal, before inserting the goal facts; $asm_simp$ leaves the premises.
275 \end{descr}

\subsection{Modifying the context}

\indexisaratt{simp}
\begin{matharray}{rcl}
  simp & : & \isaratt \\
\end{matharray}

\begin{rail}
  'simp' (() | 'add' | 'del')
  ;
\end{rail}

\begin{descr}
\item [Attribute $simp$] adds or deletes rules from the theory or proof
  context (the default is to add).
\end{descr}
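
For instance, a theorem may be declared a default rewrite rule directly in
its statement (a hypothetical lemma, for illustration only; its proof is
omitted):
\begin{ttbox}
lemma my_rule [simp]: "f (f x) = f x"
\end{ttbox}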


\subsection{Forward simplification}

\indexisaratt{simplify}\indexisaratt{asm-simplify}
\indexisaratt{full-simplify}\indexisaratt{asm-full-simplify}
\begin{matharray}{rcl}
  simplify & : & \isaratt \\
  asm_simplify & : & \isaratt \\
  full_simplify & : & \isaratt \\
  asm_full_simplify & : & \isaratt \\
\end{matharray}

These attributes provide forward rules for simplification, which should be
used only very rarely.  See the ML functions of the same name in
\cite[\S10]{isabelle-ref} for more information.

\section{The Classical Reasoner}

\subsection{Basic methods}\label{sec:classical-basic}

\indexisarmeth{rule}\indexisarmeth{default}\indexisarmeth{contradiction}
\begin{matharray}{rcl}
  rule & : & \isarmeth \\
  intro & : & \isarmeth \\
  elim & : & \isarmeth \\
  contradiction & : & \isarmeth \\
\end{matharray}

\begin{rail}
  ('rule' | 'intro' | 'elim') thmrefs
  ;
\end{rail}

\begin{descr}
\item [Method $rule$] as offered by the classical reasoner is a refinement
  over the primitive one (see \S\ref{sec:pure-meth}).  In the case that no
  rules are provided as arguments, it automatically determines elimination
  and introduction rules from the context (see also
  \S\ref{sec:classical-mod}).  In that form it is the default method for
  basic proof steps, such as $\PROOFNAME$ and ``$\DDOT$'' (two dots).

\item [Methods $intro$ and $elim$] repeatedly refine some goal by intro- or
  elim-resolution, after having inserted the facts.  Omitting the arguments
  refers to any suitable rules from the context, otherwise only the
  explicitly given ones may be applied.  The latter form admits better
  control of what actually happens, so it is very appropriate as an initial
  method for $\PROOFNAME$ that splits up certain connectives of the goal
  before entering the sub-proof.

\item [Method $contradiction$] solves some goal by contradiction: both $A$
  and $\neg A$ have to be present in the assumptions.
\end{descr}
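
As an illustration of the default method, the initial $\PROOFNAME$ step
below implicitly applies a single canonical introduction rule for the
outermost connective of the goal:
\begin{ttbox}
lemma "A --> A"
proof
  assume A
  then show A .
qed
\end{ttbox}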


\subsection{Automatic methods}\label{sec:classical-auto}

\indexisarmeth{blast}
\indexisarmeth{fast}\indexisarmeth{best}\indexisarmeth{slow}\indexisarmeth{slow-best}
\begin{matharray}{rcl}
  blast & : & \isarmeth \\
  fast & : & \isarmeth \\
  best & : & \isarmeth \\
  slow & : & \isarmeth \\
  slow_best & : & \isarmeth \\
\end{matharray}

\railalias{slowbest}{slow\_best}
\railterm{slowbest}

\begin{rail}
  'blast' nat? (clamod * )
  ;
  ('fast' | 'best' | 'slow' | slowbest) (clamod * )
  ;

  clamod: (('intro' | 'elim' | 'dest') (() | '!' | '!!') | 'del') ':' thmrefs
  ;
\end{rail}

\begin{descr}
\item [$blast$] refers to the classical tableau prover (see
  \texttt{blast_tac} in \cite[\S11]{isabelle-ref}).  The optional argument
  specifies a user-supplied search bound (default 20).
\item [$fast$, $best$, $slow$, $slow_best$] refer to the generic classical
  reasoner (see \cite[\S11]{isabelle-ref}, tactic \texttt{fast_tac} etc).
\end{descr}
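
A typical use of $blast$ is to dispose of tautologies of predicate logic in
a single step:
\begin{ttbox}
lemma "(EX x. ALL y. P x y) --> (ALL y. EX x. P x y)"
  by blast
\end{ttbox}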

Any of the above methods support additional modifiers of the context of
classical rules.  Their semantics is analogous to the attributes given in
\S\ref{sec:classical-mod}.


\subsection{Combined automatic methods}

\indexisarmeth{auto}\indexisarmeth{force}
\begin{matharray}{rcl}
  force & : & \isarmeth \\
  auto & : & \isarmeth \\
\end{matharray}

\begin{rail}
  ('force' | 'auto') (clasimpmod * )
  ;

  clasimpmod: ('simp' ('add' | 'del' | 'only') | 'other' |
    (('intro' | 'elim' | 'dest') (() | '!' | '!!') | 'del')) ':' thmrefs
  ;
\end{rail}

\begin{descr}
\item [$force$ and $auto$] provide access to Isabelle's combined
  simplification and classical reasoning tactics.  See \texttt{force_tac}
  and \texttt{auto_tac} in \cite[\S11]{isabelle-ref} for more information.
  The modifier arguments correspond to those given in \S\ref{sec:simp} and
  \S\ref{sec:classical-auto}.  Note that the ones related to the Simplifier
  are prefixed by \railtoken{simp} here.
\end{descr}
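
Both kinds of modifiers may be mixed freely; with hypothetical rules $foo$
and $bar$ from the context, a terminal proof step might read:
\begin{ttbox}
  by (auto simp add: foo intro!: bar)
\end{ttbox}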

\subsection{Modifying the context}\label{sec:classical-mod}

\indexisaratt{intro}\indexisaratt{elim}\indexisaratt{dest}
\indexisaratt{iff}\indexisaratt{delrule}
\begin{matharray}{rcl}
  intro & : & \isaratt \\
  elim & : & \isaratt \\
  dest & : & \isaratt \\
  iff & : & \isaratt \\
  delrule & : & \isaratt \\
\end{matharray}

\begin{rail}
  ('intro' | 'elim' | 'dest') (() | '!' | '!!')
  ;
\end{rail}

\begin{descr}
\item [$intro$, $elim$, $dest$] add introduction, elimination, and destruct
  rules, respectively.  By default, rules are considered \emph{safe}, while
  a single ``!'' classifies them as \emph{unsafe}, and ``!!'' as
  \emph{extra} (i.e.\ not applied in the search-oriented automatic methods).

\item [$iff$] declares equations both as rewrite rules for the simplifier
  and as classical reasoning rules.

\item [$delrule$] deletes introduction or elimination rules from the
  context.  Note that destruction rules would have to be turned into
  elimination rules first, e.g.\ by using the $elimify$ attribute.
\end{descr}
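
For example, the standard conjunction introduction rule may be declared
safe for the context as follows (assuming the usual Isabelle/HOL rule
$conjI$):
\begin{ttbox}
lemmas [intro!] = conjI
\end{ttbox}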


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "isar-ref"
%%% End:
https://computergraphics.stackexchange.com/questions/8765/learning-light-transport-using-q-learning

# Learning light transport using Q-Learning
I am trying to reproduce the results obtained by Dahm et al. in the paper Learning Light Transport the Reinforced Way.
This method takes advantage of the similarity between the Bellman equation (Q-Learning) and the rendering equation.
Here's the overview of the code the authors inserted in the first version of their paper:
## Question 1
In the first function, what is idxPrev? Intuition suggests it is the index of the previous cell (the scene space needs to be discretized, using for example the Voronoi algorithm). But in the function, the authors pass ray.o, which I believe is the outgoing ray. So, that would not make sense.
EDIT: ray.o is probably the origin.
Moreover, in the same function, I understand that getLastAttenuation() returns the color of idxCurr, but I don't understand what qmax[idxCurr] returns (intuitively, the action, i.e. the stratum of the hemisphere, that maximizes the Q-value).
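To make my current reading of this update concrete, here is a sketch of how I interpret it: the standard tabular Q-learning rule, with the radiance emitted at the hit point as the reward and the path attenuation in place of the discount factor. This is only my own guess (the names idx_prev and idx_curr mirror the listing; everything else is assumed), not the paper's code:

```python
def q_update(q, idx_prev, idx_curr, action, emission, attenuation, alpha=0.3):
    """One Bellman-style backup on a tabular Q-function:
    Q(s, a) <- (1 - alpha) * Q(s, a) + alpha * (reward + value of next state).
    Here the 'reward' is the radiance emitted at the hit point, and the BRDF
    attenuation along the path plays the role of the discount factor."""
    target = emission + attenuation * max(q[idx_curr])
    q[idx_prev][action] = (1.0 - alpha) * q[idx_prev][action] + alpha * target
```

If this reading is right, qmax[idxCurr] would simply be max(q[idx_curr]) above, i.e. the value of the best action at the current cell.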
## Question 2
The action space is discretized, apparently, by dividing the hemisphere built on the hitting point into different strata. From what I understand, the Q-Learning algorithm finds the best action (so the best stratum) for each cell in the space, and then a ray is scattered randomly in that stratum. But a stratum covers 360 degrees, so I don't see how this is helpful to scatter a ray towards the light.
In the function sampleScatteringDIrFromQtable(), the PDF is calculated based on the ps (not clear to me) multiplied by the number of patches. So, is the action related to circular strata of the hemisphere, or to the number of patches?
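For what it's worth, here is how I would picture the discretisation if each action addressed a patch (a cos-theta band times a phi sector) rather than a full 360-degree ring. The bin counts and the function are my own guess at the scheme, not the paper's code:

```python
import math

def sample_dir_in_patch(i_theta, j_phi, u1, u2, n_theta=4, n_phi=8):
    """Sample a direction uniformly (in solid angle) inside one hemisphere
    patch.  Splitting cos(theta) into n_theta bands and phi into n_phi
    sectors gives n_theta * n_phi patches of equal solid angle, since
    d(omega) = -d(cos theta) d(phi)."""
    cos_lo = 1.0 - (i_theta + 1) / n_theta   # uniform in cos(theta)
    cos_hi = 1.0 - i_theta / n_theta         # <=> uniform in solid angle
    cos_t = cos_lo + (cos_hi - cos_lo) * u1
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * (j_phi + u2) / n_phi
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
```

Under this guess, the PDF of a sampled direction would be the probability of picking the patch times (number of patches) / (2 pi), which would explain the multiplication by the number of patches.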
• ray.o almost always denotes the origin, for what it's worth. – Hubble Apr 13 at 17:20
• @Hubble Thank you, so the first point is clear. – maurocomi yesterday