https://brilliant.org/problems/inequality-in-7-variables/
# Geometry? Geometry Level 5

For all 7-tuples of real numbers $(x_1, x_2, \ldots, x_7)$, the following inequality holds, where the minimum is taken over all $1 \leq i < j \leq 7$:

$\large \min{\left|\frac{x_i-x_j}{1+x_ix_j}\right|} \leq C$

Let $D$ be the smallest possible value of $C$. If we can have $\min{\left|\dfrac{x_i-x_j}{1+x_ix_j}\right|} = D$, submit your answer as $\dfrac{1}{D^2} + 2017$. If not, submit your answer as $\dfrac{1}{D^2} + 1$.
2017-12-16 03:41:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8312469720840454, "perplexity": 169.87615941355114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00721.warc.gz"}
https://xyz5261941.wordpress.com/2015/07/27/notes-on-divisors/
Notes on divisors-1

Introduction

Divisors are a very important tool in algebraic geometry. Let's quote an example from Hartshorne's 'Algebraic Geometry': let $C$ be a non-singular projective curve in the projective plane $X=\mathbb{P}^2_k$ over a field $k$. Now let $L$ be any line in $X$, and write $L\cap C$ for their intersection, which consists of finitely many points counted with multiplicity. If we vary the line $L$, we obtain a family of finite sets $L\cap C$. It is not hard to see that we can reconstruct the embedding of $C$ in $X$ from this family of sets $L\cap C$. This is a very powerful tool for studying this kind of embedding.

Weil divisors

The simplest kind of divisor is perhaps the Weil divisor. Suppose that $X$ is a scheme; we say that $X$ is regular in co-dimension one if all its local rings $\mathfrak{O}_{X,x}$ of dimension $1$ are regular. Let me explain these terms. Let $R$ be a ring. For any prime ideal $\mathfrak{p}$ of $R$, its height $h(\mathfrak{p})$ is defined to be the largest integer $n$ such that there exists a strictly increasing sequence of prime ideals $\mathfrak{p}_0\subset \mathfrak{p}_1\subset\ldots\subset\mathfrak{p}_n=\mathfrak{p}$, and the dimension $\dim(R)$ of $R$ is defined to be the maximum of the heights of its prime ideals. This is the so-called Krull dimension of $R$. For some rings $R$, $\dim(R)$ can be infinite; even if $R$ is a noetherian ring, $\dim(R)$ can still be infinite, and the classical examples are perhaps due to Nagata. Let $k$ be a field and consider $R=k[x_1,x_2,\ldots,x_n,\ldots]$, the polynomial ring in infinitely many variables. Set $\mathfrak{p}_n=(x_{n^2+1},\ldots,x_{(n+1)^2})$, the prime ideal generated by that block of variables, and set $S=R-\bigcup_n \mathfrak{p}_n$, a multiplicatively closed subset of $R$; then take the localization $R'=S^{-1}R$. It can be shown that the maximal ideals of $R'$ are of the form $\mathfrak{p}_nR'$, so we have $h(\mathfrak{p}_nR')\geq (n+1)^2-n^2$, and thus $\dim(R')=\infty$ (cf. Eisenbud's 'Commutative algebra with a view towards algebraic geometry', Exercise 9.6). Yet the good news for algebraic geometers is that the dimension of a noetherian local ring is always finite; this result can be found in Atiyah-Macdonald's 'Commutative algebra', in the last chapter on dimension theory. One use of the dimension of a ring is that it gives a good criterion for a variety to be non-singular at a point. For that, we need the concept of regular rings. Let $R$ be a noetherian local ring with maximal ideal $\mathfrak{m}$ and residue field $k=R/\mathfrak{m}$; we say that $R$ is regular if $\dim_k(\mathfrak{m}/\mathfrak{m}^2)=\dim(R)$. Let $Y\subset\mathbb{A}^n_k$ be an affine variety and $P$ a point in $Y$; then $Y$ is non-singular at $P$ if and only if $\mathfrak{O}_{Y,P}$ is regular. The importance of this result is that it gives an intrinsic description of the singular points of a variety. Of course, this criterion can be generalized to other situations since it is a local criterion. Now let's return to our starting point: we say that a scheme $X$ is regular in co-dimension one if all its local rings $\mathfrak{O}_{X,x}$ of dimension $1$ are regular rings. So it is clear that nonsingular varieties are regular in co-dimension $1$ (considering their associated schemes). Another important class of examples consists of noetherian normal schemes. Recall that a normal scheme $X$ is a scheme all of whose local rings $\mathfrak{O}_{X,x}$ are integrally closed.
We can use Proposition 9.2 of Atiyah-Macdonald's 'Commutative algebra' to show that if a noetherian local ring of dimension one is integrally closed, then it is regular. Note that the word 'co-dimension' refers to the dimension of the local rings. In this post we shall assume that a scheme $X$ is noetherian, integral, separated, and regular in co-dimension one. We will see why we make these assumptions on $X$.

A prime divisor of $X$ is a closed integral subscheme $Y$ of $X$ of co-dimension $1$ (here integral schemes correspond to algebraic varieties: 'integral' means reduced and irreducible). A Weil divisor of $X$ is then defined to be an element of the free abelian group generated by the prime divisors of $X$. If $D=\sum_i n_iY_i$ is a Weil divisor on $X$, we say that $D$ is effective if $n_i\geq 0$ for all $i$. Note that if $Y$ is a prime divisor of $X$, then by definition $Y$ is irreducible, thus it has a (unique) generic point, say $y_0\in Y$. The local ring $\mathfrak{O}_{X,y_0}$ is a discrete valuation ring (this also comes from Atiyah-Macdonald's book, Prop. 9.2) with quotient field $K$, and we write $v_Y$ for this valuation. We can show that, under our assumptions, $K$ is equal to the function field of $X$. Now for any $f\in K^*$, which is in particular an element of the quotient field of $\mathfrak{O}_{X,y_0}$ with $y_0$ the generic point of a prime divisor $Y$, we set $(f)=\sum_Y v_Y(f)Y$, which is thus a Weil divisor on $X$ (if we admit the result that $v_Y(f)=0$ for all but finitely many such $Y$). We call $(f)$ a principal Weil divisor on $X$, and we say that two Weil divisors are linearly equivalent if their difference is a principal Weil divisor. A concrete example of this construction is written out at the end of this post. We will say more about the properties of Weil divisors later on. For the present, we turn to the definition of Cartier divisors.

Cartier divisors

Suppose again that $X$ is a scheme and $U=Spec(A)\subset X$ is an affine open subset of $X$. We write $S$ for the subset of $A$ consisting of non-zero-divisors and set $K(U)=S^{-1}A$, which we call the total quotient ring of $A$ (or the total fraction ring of $A$). For a general open subset $U\subset X$, we set $S$ to be the subset of $\Gamma(U,\mathfrak{O}_X)$ consisting of elements $f\in \Gamma(U,\mathfrak{O}_X)$ such that $f_x$ is not a zero-divisor for each $x\in U$. We then define a presheaf of rings on $X$ by assigning to each $U$ the ring $S^{-1}\Gamma(U,\mathfrak{O}_X)$, and we call the associated sheaf $\mathfrak{K}$ the sheaf of total quotient rings of $\mathfrak{O}_X$. Then $\mathfrak{K}^*$ is the sheaf of multiplicative groups consisting of the invertible elements in the sheaf $\mathfrak{K}$; similarly, we have $\mathfrak{O}_X^*$. Now we define a Cartier divisor to be a global section of the sheaf $\mathfrak{K}^*/\mathfrak{O}_X^*$ of multiplicative groups (since these groups are abelian, we shall write them additively if there is no ambiguity). The set of Cartier divisors on $X$ clearly forms an abelian group. A Cartier divisor is principal if it is in the image of the map $\Gamma(X,\mathfrak{K}^*)\rightarrow\Gamma(X,\mathfrak{K}^*/\mathfrak{O}_X^*)$. Two Cartier divisors are called linearly equivalent if they differ by a principal Cartier divisor.

Weil divisors vs. Cartier divisors

At first sight there is no relation between Weil divisors and Cartier divisors, yet the following proposition tells us that in special cases the two are in fact the same thing. Suppose $X$ is an integral, separated noetherian scheme.
Suppose also that all its local rings are UFDs. Then the abelian group $Div(X)$ of Weil divisors on $X$ and the group of Cartier divisors $\Gamma(X,\mathfrak{K}^*/\mathfrak{O}_X^*)$ are isomorphic; moreover, the principal Weil divisors correspond to the principal Cartier divisors. We shall prove this proposition in the next post.
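As a concrete illustration of the principal-divisor construction $(f)=\sum_Y v_Y(f)Y$ above (this worked example is not from the original post; it is just the standard first example on $\mathbb{P}^1$), take $X=\mathbb{P}^1_k$ with affine coordinate $t$, and let $f=\dfrac{(t-a)^2}{t-b}$ with $a\neq b$ in $k$. The prime divisors of $X$ are the closed points, and $v_P(f)$ is the order of vanishing of $f$ at $P$. Then

$(f) = 2[a] - [b] - [\infty],$

since $v_{[a]}(f)=2$, $v_{[b]}(f)=-1$, and, writing $s=1/t$ near infinity, $f=\dfrac{(1-as)^2}{s(1-bs)}$ gives $v_{[\infty]}(f)=-1$. In particular $\deg (f)=2-1-1=0$, as must happen for any principal divisor on a complete non-singular curve.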
2018-02-20 20:57:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 116, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9536140561103821, "perplexity": 86.57339867256856}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00555.warc.gz"}
https://mathoverflow.net/questions/337379/global-sections-in-proper-flat-families
# Global sections in proper flat families

I was trying to unravel comments under the question When is it true that the ring of global regular functions on a projective variety is just the base ring?. It is apparently being claimed that if a proper morphism of finite presentation between schemes $$X\rightarrow S$$ is flat and has geometrically connected & reduced fibers, then the natural map $$\mathcal{O}_S(S)\rightarrow \mathcal{O}_X(X)$$ is an isomorphism. Is it true? By Grothendieck's coherence theorem the map is finite, but I am not sure what to do next. Maybe I should also mention that in my book the empty space is connected (this probably does not change anything). Actually, a flat morphism locally of finite presentation is open (https://stacks.math.columbia.edu/tag/01UA), so as long as we assume that $$S$$ is connected the fibers will be non-empty.

• Sketch: 1) For $S = \operatorname{Spec} k$ with $k$ a field, use flat base change to $\bar k$ and the assumption on the geometric fiber. 2) In general, the flatness assumption + cohomology and base change/Grauert tell you that the formation of $f_* \mathcal{O}_X$ commutes with arbitrary base change $S'\to S$. 3) Combine 1+2 and see that $\mathcal{O}_S\to f_* \mathcal{O}_X$ is an isomorphism on all fibers, and hence an isomorphism. – Piotr Achinger Jul 31 at 20:38

One only has to observe that if the fibers are geometrically connected & reduced, then in the Stein factorization $$X\to S'\to S$$, where $$S'\to S$$ is finite, that morphism also has to have geometrically connected & reduced fibers, which means that it must be an isomorphism (each scheme-theoretic fiber is a geometrically connected & reduced zero-dimensional scheme, i.e., a reduced point).

• In fact, it seems to me that the example in the linked page starting with "Let $M=R\oplus\dots$" is a counterexample to the statement in the absence of the fin. pres. assumption (it has at least two $R$-direct summands in the global sections). – user143832 Jul 31 at 21:05

• I am not sure what example you are referring to, I did not find a line with "$M=R\oplus\dots$" there. However, two things: i) on the stacks project, the proof actually considers this issue by writing $f$ as a limit of finitely presented morphisms, and ii) as I mentioned I am not sure what your example is, but direct sums tend to have non-connected fibers... – Sándor Kovács Aug 1 at 7:29
2019-08-18 00:01:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9101552963256836, "perplexity": 153.7892267655878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313501.0/warc/CC-MAIN-20190817222907-20190818004907-00531.warc.gz"}
http://math.stackexchange.com/questions/63932/complex-torus-and-integral-of-forms
# Complex torus and integral of forms

Start with a complex torus $X$ of dimension $n$ and a basis $e_1,\ldots,e_{2n}$ for its first integral cohomology. Let $f_1,\ldots,f_{2n}$ be the dual basis vectors, so they are in $H^{1}(X,\mathbb Z)^*$. But this is isomorphic to $H^{1}(X^{*},\mathbb Z)$, with $X^{*}$ the dual torus. Let $I$ denote a subset of $\{1,\ldots,2n\}$ and denote by $e_I$ the wedge product of the basis vectors $e_i$ with $i\in I$ (taken in increasing order of index). Let $K$ be the complement of $I$ in $\{1,\ldots,2n\}$. Now my question: is the integral $\int_X e_I\wedge e_K$ the same as $\int_{X^*} f_K \wedge f_I$ (observe the order)? Of course one considers the integrand as a $2n$-form on the torus resp. its dual by regarding the cohomology with coefficients in $\mathbb C$.
2016-02-07 22:39:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995490312576294, "perplexity": 179.3339928067069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701151789.24/warc/CC-MAIN-20160205193911-00124-ip-10-236-182-209.ec2.internal.warc.gz"}
http://pari.math.u-bordeaux.fr/dochtml/html.stable/Functions_related_to_general_number_fields.html
### Functions related to general number fields #### Number field structures Let K = Q[X] / (T) a number field, Z_K its ring of integers, T belongs to Z[X] is monic. Three basic number field structures can be associated to K in GP: * nf denotes a number field, i.e. a data structure output by nfinit. This contains the basic arithmetic data associated to the number field: signature, maximal order (given by a basis nf.zk), discriminant, defining polynomial T, etc. * bnf denotes a "Buchmann's number field", i.e. a data structure output by bnfinit. This contains nf and the deeper invariants of the field: units U(K), class group Cl(K), as well as technical data required to solve the two associated discrete logarithm problems. * bnr denotes a "ray number field", i.e. a data structure output by bnrinit, corresponding to the ray class group structure of the field, for some modulus f. It contains a bnf, the modulus f, the ray class group Cl_f(K) and data associated to the discrete logarithm problem therein. #### Algebraic numbers and ideals An algebraic number belonging to K = Q[X]/(T) is given as * a t_INT, t_FRAC or t_POL (implicitly modulo T), or * a t_POLMOD (modulo T), or * a t_COL v of dimension N = [K:Q], representing the element in terms of the computed integral basis, as sum(i = 1, N, v[i] * nf.zk[i]). Note that a t_VEC will not be recognized. An ideal is given in any of the following ways: * an algebraic number in one of the above forms, defining a principal ideal. * a prime ideal, i.e. a 5-component vector in the format output by idealprimedec or idealfactor. * a t_MAT, square and in Hermite Normal Form (or at least upper triangular with non-negative coefficients), whose columns represent a Z-basis of the ideal. One may use idealhnf to convert any ideal to the last (preferred) format. * an extended ideal is a 2-component vector [I, t], where I is an ideal as above and t is an algebraic number, representing the ideal (t)I. This is useful whenever idealred is involved, implicitly working in the ideal class group, while keeping track of principal ideals. Ideal operations suitably update the principal part when it makes sense (in a multiplicative context), e.g. using idealmul on [I,t], [J,u], we obtain [IJ, tu]. When it does not make sense, the extended part is silently discarded, e.g. using idealadd with the above input produces I+J. The "principal part" t in an extended ideal may be represented in any of the above forms, and also as a factorization matrix (in terms of number field elements, not ideals!), possibly the empty matrix [;] representing 1. In the latter case, elements stay in factored form, or famat for factorization matrix, which is a convenient way to avoid coefficient explosion. To recover the conventional expanded form, try nffactorback; but many functions already accept famats as input, for instance ideallog, so expanding huge elements should never be necessary. #### Finite abelian groups A finite abelian group G in user-readable format is given by its Smith Normal Form as a pair [h,d] or triple [h,d,g]. Here h is the cardinality of G, (d_i) is the vector of elementary divisors, and (g_i) is a vector of generators. In short, G = oplus_{i <= n} (Z/d_iZ) g_i, with d_n | ... | d_2 | d_1 and prod d_i = h. This information can also be retrieved as G.no, G.cyc and G.gen. * a character on the abelian group oplus (Z/d_iZ) g_i is given by a row vector chi = [a_1,...,a_n] such that chi(prod g_i^{n_i}) = exp(2iPisum a_i n_i / d_i). 
* given such a structure, a subgroup H is input as a square matrix in HNF, whose columns express generators of H on the given generators g_i. Note that the determinant of that matrix is equal to the index (G:H). #### Relative extensions We now have a look at data structures associated to relative extensions of number fields L/K, and to projective Z_K-modules. When defining a relative extension L/K, the nf associated to the base field K must be defined by a variable having a lower priority (see Section [Label: se:priority]) than the variable defining the extension. For example, you may use the variable name y to define the base field K, and x to define the relative extension L/K. #### Class field theory A modulus, in the sense of class field theory, is a divisor supported on the non-complex places of K. In PARI terms, this means either an ordinary ideal I as above (no Archimedean component), or a pair [I,a], where a is a vector with r_1 {0,1}-components, corresponding to the infinite part of the divisor. More precisely, the i-th component of a corresponds to the real embedding associated to the i-th real root of K.roots. (That ordering is not canonical, but well defined once a defining polynomial for K is chosen.) For instance, [1, [1,1]] is a modulus for a real quadratic field, allowing ramification at any of the two places at infinity, and nowhere else. A bid or "big ideal" is a structure output by idealstar needed to compute in (Z_K/I)^*, where I is a modulus in the above sense. It is a finite abelian group as described above, supplemented by technical data needed to solve discrete log problems. Finally we explain how to input ray number fields (or bnr), using class field theory. These are defined by a triple A, B, C, where the defining set [A,B,C] can have any of the following forms: [bnr], [bnr,subgroup], [bnf,mod], [bnf,mod,subgroup]. The last two forms are kept for backward compatibility, but no longer serve any real purpose (see example below); no newly written function will accept them. * bnf is as output by bnfinit, where units are mandatory unless the modulus is trivial; bnr is as output by bnrinit. This is the ground field K. * mod is a modulus f, as described above. * subgroup a subgroup of the ray class group modulo f of K. As described above, this is input as a square matrix expressing generators of a subgroup of the ray class group bnr.clgp on the given generators. The corresponding bnr is the subfield of the ray class field of K modulo f, fixed by the given subgroup. ? K = bnfinit(y^2+1); ? bnr = bnrinit(K, 13) ? %.clgp %3 = [36, [12, 3]] ? bnrdisc(bnr); \\ discriminant of the full ray class field ? bnrdisc(bnr, [3,1;0,1]); \\ discriminant of cyclic cubic extension of K We could have written directly ? bnrdisc(K, 13); ? bnrdisc(K, 13, [3,1;0,1]); avoiding one bnrinit, but this would actually be slower since the bnrinit is called internally anyway. And now twice! #### General use All the functions which are specific to relative extensions, number fields, Buchmann's number fields, Buchmann's number rays, share the prefix rnf, nf, bnf, bnr respectively. They take as first argument a number field of that precise type, respectively output by rnfinit, nfinit, bnfinit, and bnrinit. However, and even though it may not be specified in the descriptions of the functions below, it is permissible, if the function expects a nf, to use a bnf instead, which contains much more information. 
On the other hand, if the function requires a bnf, it will not launch bnfinit for you, which is a costly operation. Instead, it will give you a specific error message. In short, the types nf <= bnf <= bnr are ordered, each function requires a minimal type to work properly, but you may always substitute a larger type. The data types corresponding to the structures described above are rather complicated. Thus, as we already have seen it with elliptic curves, GP provides "member functions" to retrieve data from these structures (once they have been initialized of course). The relevant types of number fields are indicated between parentheses: bid (bnr ) : bid ideal structure. bnf (bnr, bnf ) : Buchmann's number field. clgp (bnr, bnf ) : classgroup. This one admits the following three subclasses: cyc : cyclic decomposition (SNF). gen : generators. no : number of elements. diff (bnr, bnf, nf ) : the different ideal. codiff (bnr, bnf, nf ) : the codifferent (inverse of the different in the ideal group). disc (bnr, bnf, nf ) : discriminant. fu (bnr, bnf ) : fundamental units. index (bnr, bnf, nf ) : index of the power order in the ring of integers. mod (bnr ) : modulus. nf (bnr, bnf, nf ) : number field. pol (bnr, bnf, nf ) : defining polynomial. r1 (bnr, bnf, nf ) : the number of real embeddings. r2 (bnr, bnf, nf ) : the number of pairs of complex embeddings. reg (bnr, bnf ) : regulator. roots (bnr, bnf, nf ) : roots of the polynomial generating the field. sign (bnr, bnf, nf ) : signature [r1,r2]. t2 (bnr, bnf, nf ) : the T_2 matrix (see nfinit). tu (bnr, bnf ) : a generator for the torsion units. zk (bnr, bnf, nf ) : integral basis, i.e. a Z-basis of the maximal order. zkst (bnr ) : structure of (Z_K/m)^*. Deprecated. The following member functions are still available, but deprecated and should not be used in new scripts : futu (bnr, bnf, ) : [u_1,...,u_r,w], (u_i) is a vector of fundamental units, w generates the torsion units. tufu (bnr, bnf, ) : [w,u_1,...,u_r], (u_i) is a vector of fundamental units, w generates the torsion units. For instance, assume that bnf = bnfinit(pol), for some polynomial. Then bnf.clgp retrieves the class group, and bnf.clgp.no the class number. If we had set bnf = nfinit(pol), both would have output an error message. All these functions are completely recursive, thus for instance bnr.bnf.nf.zk will yield the maximal order of bnr, which you could get directly with a simple bnr.zk. #### Class group, units, and the GRH Some of the functions starting with bnf are implementations of the sub-exponential algorithms for finding class and unit groups under GRH, due to Hafner-McCurley, Buchmann and Cohen-Diaz-Olivier. The general call to the functions concerning class groups of general number fields (i.e. excluding quadclassunit) involves a polynomial P and a technical vector tech = [c_1, c_2, nrpid ], where the parameters are to be understood as follows: P is the defining polynomial for the number field, which must be in Z[X], irreducible and monic. In fact, if you supply a non-monic polynomial at this point, gp issues a warning, then transforms your polynomial so that it becomes monic. The nfinit routine will return a different result in this case: instead of res, you get a vector [res,Mod(a,Q)], where Mod(a,Q) = Mod(X,P) gives the change of variables. In all other routines, the variable change is simply lost. The tech interface is obsolete and you should not tamper with these parameters. 
Indeed, from version 2.4.0 on, * the results are always rigorous under GRH (before that version, they relied on a heuristic strengthening, hence the need for overrides). * the influence of these parameters on execution time and stack size is marginal. They can be useful to fine-tune and experiment with the bnfinit code, but you will be better off modifying all tuning parameters in the C code (there are many more than just those three). We nevertheless describe it for completeness. The numbers c_1 <= c_2 are non-negative real numbers. By default they are chosen so that the result is correct under GRH. For i = 1,2, let B_i = c_i(log |d_K|)^2, and denote by S(B) the set of maximal ideals of K whose norm is less than B. We want S(B_1) to generate Cl(K) and hope that S(B_2) can be proven to generate Cl(K). More precisely, S(B_1) is a factorbase used to compute a tentative Cl(K) by generators and relations. We then check explicitly, using essentially bnfisprincipal, that the elements of S(B_2) belong to the span of S(B_1). Under the assumption that S(B_2) generates Cl(K), we are done. User-supplied c_i are only used to compute initial guesses for the bounds B_i, and the algorithm increases them until one can prove under GRH that S(B_2) generates Cl(K). A uniform result of Bach says that c_2 = 12 is always suitable, but this bound is very pessimistic and a direct algorithm due to Belabas-Diaz-Friedman is used to check the condition, assuming GRH. The default values are c_1 = c_2 = 0. When c_1 is equal to 0 the algorithm takes it equal to c_2. nrpid is the maximal number of small norm relations associated to each ideal in the factor base. Set it to 0 to disable the search for small norm relations. Otherwise, reasonable values are between 4 and 20. The default is 4. Warning. Make sure you understand the above! By default, most of the bnf routines depend on the correctness of the GRH. In particular, any of the class number, class group structure, class group generators, regulator and fundamental units may be wrong, independently of each other. Any result computed from such a bnf may be wrong. The only guarantee is that the units given generate a subgroup of finite index in the full unit group. You must use bnfcertify to certify the computations unconditionally. Remarks. You do not need to supply the technical parameters (under the library you still need to send at least an empty vector, coded as NULL). However, should you choose to set some of them, they must be given in the requested order. For example, if you want to specify a given value of nrpid, you must give some values as well for c_1 and c_2, and provide a vector [c_1,c_2,nrpid]. Note also that you can use an nf instead of P, which avoids recomputing the integral basis and analogous quantities. #### bnfcertify(bnf,{flag = 0}) bnf being as output by bnfinit, checks whether the result is correct, i.e. whether it is possible to remove the assumption of the Generalized Riemann Hypothesis. It is correct if and only if the answer is 1. If it is incorrect, the program may output some error message, or loop indefinitely. You can check its progress by increasing the debug level. The bnf structure must contain the fundamental units: ? K = bnfinit(x^3+2^2^3+1); bnfcertify(K) *** at top-level: K=bnfinit(x^3+2^2^3+1);bnfcertify(K) *** ^------------- *** bnfcertify: missing units in bnf. ? K = bnfinit(x^3+2^2^3+1, 1); \\ include units ? 
bnfcertify(K) %3 = 1 If flag is present, only certify that the class group is a quotient of the one computed in bnf (much simpler in general); likewise, the computed units may form a subgroup of the full unit group. In this variant, the units are no longer needed: ? K = bnfinit(x^3+2^2^3+1); bnfcertify(K, 1) %4 = 1 The library syntax is long bnfcertify0(GEN bnf, long flag ). Also available is GEN bnfcertify(GEN bnf) (flag = 0). #### bnfcompress(bnf) Computes a compressed version of bnf (from bnfinit), a "small Buchmann's number field" (or sbnf for short) which contains enough information to recover a full bnf vector very rapidly, but which is much smaller and hence easy to store and print. Calling bnfinit on the result recovers a true bnf, in general different from the original. Note that an snbf is useless for almost all purposes besides storage, and must be converted back to bnf form before use; for instance, no nf*, bnf* or member function accepts them. An sbnf is a 12 component vector v, as follows. Let bnf be the result of a full bnfinit, complete with units. Then v[1] is bnf.pol, v[2] is the number of real embeddings bnf.sign[1], v[3] is bnf.disc, v[4] is bnf.zk, v[5] is the list of roots bnf.roots, v[7] is the matrix W = bnf[1], v[8] is the matrix matalpha = bnf[2], v[9] is the prime ideal factor base bnf[5] coded in a compact way, and ordered according to the permutation bnf[6], v[10] is the 2-component vector giving the number of roots of unity and a generator, expressed on the integral basis, v[11] is the list of fundamental units, expressed on the integral basis, v[12] is a vector containing the algebraic numbers alpha corresponding to the columns of the matrix matalpha, expressed on the integral basis. All the components are exact (integral or rational), except for the roots in v[5]. The library syntax is GEN bnfcompress(GEN bnf). #### bnfdecodemodule(nf,m) If m is a module as output in the first component of an extension given by bnrdisclist, outputs the true module. ? K = bnfinit(x^2+23); L = bnrdisclist(K, 10); s = L[1][2] %1 = [[Mat([8, 1]), [[0, 0, 0]]], [Mat([9, 1]), [[0, 0, 0]]]] ? bnfdecodemodule(K, s[1][1]) %2 = [2 0] [0 1] The library syntax is GEN decodemodule(GEN nf, GEN m). #### bnfinit(P,{flag = 0},{tech = []}) Initializes a bnf structure. Used in programs such as bnfisprincipal, bnfisunit or bnfnarrow. By default, the results are conditional on the GRH, see [Label: se:GRHbnf]. The result is a 10-component vector bnf. This implements Buchmann's sub-exponential algorithm for computing the class group, the regulator and a system of fundamental units of the general algebraic number field K defined by the irreducible polynomial P with integer coefficients. If the precision becomes insufficient, gp does not strive to compute the units by default (flag = 0). When flag = 1, we insist on finding the fundamental units exactly. Be warned that this can take a very long time when the coefficients of the fundamental units on the integral basis are very large. If the fundamental units are simply too large to be represented in this form, an error message is issued. They could be obtained using the so-called compact representation of algebraic numbers as a formal product of algebraic integers. The latter is implemented internally but not publicly accessible yet. tech is a technical vector (empty by default, see [Label: se:GRHbnf]). Careful use of this parameter may speed up your computations, but it is mostly obsolete and you should leave it alone. 
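For orientation, here is a minimal sketch of a typical bnfinit call together with member-function access (an illustrative session, not part of the original manual; exact output and result labels depend on your session and precision):

? K = bnfinit(x^2 + 23);   \\ imaginary quadratic field; result conditional on GRH
? K.no                     \\ class number -- this field has class number 3
? K.cyc                    \\ cyclic decomposition of the class group, here [3]
? K.zk                     \\ integral basis of the maximal order
? bnfcertify(K)            \\ returns 1, removing the GRH assumption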
The components of a bnf or sbnf are technical and never used by the casual user. In fact: never access a component directly, always use a proper member function. However, for the sake of completeness and internal documentation, their description is as follows. We use the notations explained in the book by H. Cohen, A Course in Computational Algebraic Number Theory, Graduate Texts in Maths 138, Springer-Verlag, 1993, Section 6.5, and subsection 6.5.5 in particular. bnf[1] contains the matrix W, i.e. the matrix in Hermite normal form giving relations for the class group on prime ideal generators (p_i)_{1 <= i <= r}. bnf[2] contains the matrix B, i.e. the matrix containing the expressions of the prime ideal factorbase in terms of the p_i. It is an r x c matrix. bnf[3] contains the complex logarithmic embeddings of the system of fundamental units which has been found. It is an (r_1+r_2) x (r_1+r_2-1) matrix. bnf[4] contains the matrix M"_C of Archimedean components of the relations of the matrix (W|B). bnf[5] contains the prime factor base, i.e. the list of prime ideals used in finding the relations. bnf[6] used to contain a permutation of the prime factor base, but has been obsoleted. It contains a dummy 0. bnf[7] or bnf.nf is equal to the number field data nf as would be given by nfinit. bnf[8] is a vector containing the classgroup bnf.clgp as a finite abelian group, the regulator bnf.reg, a 1 (used to contain an obsolete "check number"), the number of roots of unity and a generator bnf.tu, the fundamental units bnf.fu. bnf[9] is a 3-element row vector used in bnfisprincipal only and obtained as follows. Let D = U W V obtained by applying the Smith normal form algorithm to the matrix W ( = bnf[1]) and let U_r be the reduction of U modulo D. The first elements of the factorbase are given (in terms of bnf.gen) by the columns of U_r, with Archimedean component g_a; let also GD_a be the Archimedean components of the generators of the (principal) ideals defined by the bnf.gen[i]^bnf.cyc[i]. Then bnf[9] = [U_r, g_a, GD_a]. bnf[10] is by default unused and set equal to 0. This field is used to store further information about the field as it becomes available, which is rarely needed, hence would be too expensive to compute during the initial bnfinit call. For instance, the generators of the principal ideals bnf.gen[i]^bnf.cyc[i] (during a call to bnrisprincipal), or those corresponding to the relations in W and B (when the bnf internal precision needs to be increased). The library syntax is GEN bnfinit0(GEN P, long flag, GEN tech = NULL, long prec). Also available is GEN Buchall(GEN P, long flag, long prec), corresponding to tech = NULL, where flag is either 0 (default) or nf_FORCE (insist on finding fundamental units). The function GEN Buchall_param(GEN P, double c1, double c2, long nrpid, long flag, long prec) gives direct access to the technical parameters. #### bnfisintnorm(bnf,x) Computes a complete system of solutions (modulo units of positive norm) of the absolute norm equation Norm(a) = x, where a is an integer in bnf. If bnf has not been certified, the correctness of the result depends on the validity of GRH. See also bnfisnorm. The library syntax is GEN bnfisintnorm(GEN bnf, GEN x). The function GEN bnfisintnormabs(GEN bnf, GEN a) returns a complete system of solutions modulo units of the absolute norm equation |Norm(x) |= |a|. As fast as bnfisintnorm, but solves the two equations Norm(x) = ± a simultaneously. 
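A short illustrative sketch (not from the original manual; the exact representatives chosen may differ): in Z[i], i.e. bnf = bnfinit(x^2+1), the elements of norm 25 up to units are 5, 3 + 4i and 3 - 4i, and bnfisintnorm recovers exactly these classes:

? K = bnfinit(x^2 + 1);
? bnfisintnorm(K, 25)   \\ three classes of solutions: up to units they are 5, 3 + 4*x and 3 - 4*x
? bnfisintnorm(K, 7)    \\ 7 is not a norm from Z[i], so expect the empty vector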
#### bnfisnorm(bnf,x,{flag = 1}) Tries to tell whether the rational number x is the norm of some element y in bnf. Returns a vector [a,b] where x = Norm(a)*b. Looks for a solution which is an S-unit, with S a certain set of prime ideals containing (among others) all primes dividing x. If bnf is known to be Galois, set flag = 0 (in this case, x is a norm iff b = 1). If flag is non zero the program adds to S the following prime ideals, depending on the sign of flag. If flag > 0, the ideals of norm less than flag. And if flag < 0 the ideals dividing flag. Assuming GRH, the answer is guaranteed (i.e. x is a norm iff b = 1), if S contains all primes less than 12log(disc(Bnf))^2, where Bnf is the Galois closure of bnf. See also bnfisintnorm. The library syntax is GEN bnfisnorm(GEN bnf, GEN x, long flag). #### bnfisprincipal(bnf,x,{flag = 1}) bnf being the number field data output by bnfinit, and x being an ideal, this function tests whether the ideal is principal or not. The result is more complete than a simple true/false answer and solves general discrete logarithm problem. Assume the class group is oplus (Z/d_iZ)g_i (where the generators g_i and their orders d_i are respectively given by bnf.gen and bnf.cyc). The routine returns a row vector [e,t], where e is a vector of exponents 0 <= e_i < d_i, and t is a number field element such that x = (t) prod_i g_i^{e_i}. For given g_i (i.e. for a given bnf), the e_i are unique, and t is unique modulo units. In particular, x is principal if and only if e is the zero vector. Note that the empty vector, which is returned when the class number is 1, is considered to be a zero vector (of dimension 0). ? K = bnfinit(y^2+23); ? K.cyc %2 = [3] ? K.gen %3 = [[2, 0; 0, 1]] \\ a prime ideal above 2 ? P = idealprimedec(K,3)[1]; \\ a prime ideal above 3 ? v = bnfisprincipal(K, P) %5 = [[2]~, [3/4, 1/4]~] ? idealmul(K, v[2], idealfactorback(K, K.gen, v[1])) %6 = [3 0] [0 1] ? % == idealhnf(K, P) %7 = 1 The binary digits of flag mean: * 1: If set, outputs [e,t] as explained above, otherwise returns only e, which is much easier to compute. The following idiom only tests whether an ideal is principal: is_principal(bnf, x) = !bnfisprincipal(bnf,x,0); * 2: It may not be possible to recover t, given the initial accuracy to which bnf was computed. In that case, a warning is printed and t is set equal to the empty vector []~. If this bit is set, increase the precision and recompute needed quantities until t can be computed. Warning: setting this may induce very lengthy computations. The library syntax is GEN bnfisprincipal0(GEN bnf, GEN x, long flag). Instead of the above hardcoded numerical flags, one should rather use an or-ed combination of the symbolic flags nf_GEN (include generators, possibly a place holder if too difficult) and nf_FORCE (insist on finding the generators). #### bnfissunit(bnf,sfu,x) bnf being output by bnfinit, sfu by bnfsunit, gives the column vector of exponents of x on the fundamental S-units and the roots of unity. If x is not a unit, outputs an empty vector. The library syntax is GEN bnfissunit(GEN bnf, GEN sfu, GEN x). #### bnfisunit(bnf,x) bnf being the number field data output by bnfinit and x being an algebraic number (type integer, rational or polmod), this outputs the decomposition of x on the fundamental units and the roots of unity if x is a unit, the empty vector otherwise. 
More precisely, if u_1,...,u_r are the fundamental units, and zeta is the generator of the group of roots of unity (bnf.tu), the output is a vector [x_1,...,x_r,x_{r+1}] such that x = u_1^{x_1}... u_r^{x_r}.zeta^{x_{r+1}}. The x_i are integers for i <= r and is an integer modulo the order of zeta for i = r+1. Note that bnf need not contain the fundamental unit explicitly: ? setrand(1); bnf = bnfinit(x^2-x-100000); ? bnf.fu *** at top-level: bnf.fu *** ^-- *** _.fu: missing units in .fu. ? u = [119836165644250789990462835950022871665178127611316131167, \ 379554884019013781006303254896369154068336082609238336]~; ? bnfisunit(bnf, u) %3 = [-1, Mod(0, 2)]~ The given u is the inverse of the fundamental unit implicitly stored in bnf. In this case, the fundamental unit was not computed and stored in algebraic form since the default accuracy was too low. (Re-run the command at \g1 or higher to see such diagnostics.) The library syntax is GEN bnfisunit(GEN bnf, GEN x). #### bnfnarrow(bnf) bnf being as output by bnfinit, computes the narrow class group of bnf. The output is a 3-component row vector v analogous to the corresponding class group component bnf.clgp (bnf[8][1]): the first component is the narrow class number v.no, the second component is a vector containing the SNF cyclic components v.cyc of the narrow class group, and the third is a vector giving the generators of the corresponding v.gen cyclic groups. Note that this function is a special case of bnrinit. The library syntax is GEN buchnarrow(GEN bnf). #### bnfsignunit(bnf) bnf being as output by bnfinit, this computes an r_1 x (r_1+r_2-1) matrix having ±1 components, giving the signs of the real embeddings of the fundamental units. The following functions compute generators for the totally positive units: /* exponents of totally positive units generators on bnf.tufu */ tpuexpo(bnf)= { my(S,d,K); S = bnfsignunit(bnf); d = matsize(S); S = matrix(d[1],d[2], i,j, if (S[i,j] < 0, 1,0)); S = concat(vectorv(d[1],i,1), S); \\ add sign(-1) K = lift(matker(S * Mod(1,2))); if (K, mathnfmodid(K, 2), 2*matid(d[1])) } /* totally positive units */ tpu(bnf)= { my(vu = bnf.tufu, ex = tpuexpo(bnf)); vector(#ex-1, i, factorback(vu, ex[,i+1])) \\ ex[,1] is 1 } The library syntax is GEN signunits(GEN bnf). #### bnfsunit(bnf,S) Computes the fundamental S-units of the number field bnf (output by bnfinit), where S is a list of prime ideals (output by idealprimedec). The output is a vector v with 6 components. v[1] gives a minimal system of (integral) generators of the S-unit group modulo the unit group. v[2] contains technical data needed by bnfissunit. v[3] is an empty vector (used to give the logarithmic embeddings of the generators in v[1] in version 2.0.16). v[4] is the S-regulator (this is the product of the regulator, the determinant of v[2] and the natural logarithms of the norms of the ideals in S). v[5] gives the S-class group structure, in the usual format (a row vector whose three components give in order the S-class number, the cyclic components and the generators). v[6] is a copy of S. The library syntax is GEN bnfsunit(GEN bnf, GEN S, long prec). #### bnrL1(bnr, {H}, {flag = 0}) Let bnr be the number field data output by bnrinit(,,1) and H be a square matrix defining a congruence subgroup of the ray class group corresponding to bnr (the trivial congruence subgroup if omitted). This function returns, for each character chi of the ray class group which is trivial on H, the value at s = 1 (or s = 0) of the abelian L-function associated to chi. 
For the value at s = 0, the function returns in fact for each chi a vector [r_chi, c_chi] where L(s, chi) = c.s^r + O(s^{r + 1}) near 0. The argument flag is optional, its binary digits mean 1: compute at s = 0 if unset or s = 1 if set, 2: compute the primitive L-function associated to chi if unset or the L-function with Euler factors at prime ideals dividing the modulus of bnr removed if set (that is L_S(s, chi), where S is the set of infinite places of the number field together with the finite prime ideals dividing the modulus of bnr), 3: return also the character if set. K = bnfinit(x^2-229); bnr = bnrinit(K,1,1); bnrL1(bnr) returns the order and the first non-zero term of L(s, chi) at s = 0 where chi runs through the characters of the class group of K = Q(sqrt{229}). Then bnr2 = bnrinit(K,2,1); bnrL1(bnr2,,2) returns the order and the first non-zero terms of L_S(s, chi) at s = 0 where chi runs through the characters of the class group of K and S is the set of infinite places of K together with the finite prime 2. Note that the ray class group modulo 2 is in fact the class group, so bnrL1(bnr2,0) returns the same answer as bnrL1(bnr,0). This function will fail with the message *** bnrL1: overflow in zeta_get_N0 [need too many primes]. if the approximate functional equation requires us to sum too many terms (if the discriminant of K is too large). The library syntax is GEN bnrL1(GEN bnr, GEN H = NULL, long flag, long prec). #### bnrclassno(A,{B},{C}) Let A, B, C define a class field L over a ground field K (of type [bnr], [bnr, subgroup], or [bnf, modulus], or [bnf, modulus,subgroup], Section [Label: se:CFT]); this function returns the relative degree [L:K]. In particular if A is a bnf (with units), and B a modulus, this function returns the corresponding ray class number modulo B. One can input the associated bid (with generators if the subgroup C is non trivial) for B instead of the module itself, saving some time. This function is faster than bnrinit and should be used if only the ray class number is desired. See bnrclassnolist if you need ray class numbers for all moduli less than some bound. The library syntax is GEN bnrclassno0(GEN A, GEN B = NULL, GEN C = NULL). Also available is GEN bnrclassno(GEN bnf,GEN f) to compute the ray class number modulo f. #### bnrclassnolist(bnf,list) bnf being as output by bnfinit, and list being a list of moduli (with units) as output by ideallist or ideallistarch, outputs the list of the class numbers of the corresponding ray class groups. To compute a single class number, bnrclassno is more efficient. ? bnf = bnfinit(x^2 - 2); ? L = ideallist(bnf, 100, 2); ? H = bnrclassnolist(bnf, L); ? H[98] %4 = [1, 3, 1] ? l = L[1][98]; ids = vector(#l, i, l[i].mod[1]) %5 = [[98, 88; 0, 1], [14, 0; 0, 7], [98, 10; 0, 1]] The weird l[i].mod[1], is the first component of l[i].mod, i.e. the finite part of the conductor. (This is cosmetic: since by construction the Archimedean part is trivial, I do not want to see it). This tells us that the ray class groups modulo the ideals of norm 98 (printed as %5) have respectively order 1, 3 and 1. Indeed, we may check directly: ? bnrclassno(bnf, ids[2]) %6 = 3 The library syntax is GEN bnrclassnolist(GEN bnf, GEN list). #### bnrconductor(A,{B},{C},{flag = 0}) Conductor f of the subfield of a ray class field as defined by [A,B,C] (of type [bnr], [bnr, subgroup], [bnf, modulus] or [bnf, modulus, subgroup], Section [Label: se:CFT]) If flag = 0, returns f. 
If flag = 1, returns [f, Cl_f, H], where Cl_f is the ray class group modulo f, as a finite abelian group; finally H is the subgroup of Cl_f defining the extension. If flag = 2, returns [f, bnr(f), H], as above except Cl_f is replaced by a bnr structure, as output by bnrinit(,f,1). The library syntax is GEN bnrconductor0(GEN A, GEN B = NULL, GEN C = NULL, long flag). Also available is GEN bnrconductor(GEN bnr, GEN H, long flag) #### bnrconductorofchar(bnr,chi) bnr being a big ray number field as output by bnrinit, and chi being a row vector representing a character as expressed on the generators of the ray class group, gives the conductor of this character as a modulus. The library syntax is GEN bnrconductorofchar(GEN bnr, GEN chi). #### bnrdisc(A,{B},{C},{flag = 0}) A, B, C defining a class field L over a ground field K (of type [bnr], [bnr, subgroup], [bnf, modulus] or [bnf, modulus, subgroup], Section [Label: se:CFT]), outputs data [N,r_1,D] giving the discriminant and signature of L, depending on the binary digits of flag: * 1: if this bit is unset, output absolute data related to L/Q: N is the absolute degree [L:Q], r_1 the number of real places of L, and D the discriminant of L/Q. Otherwise, output relative data for L/K: N is the relative degree [L:K], r_1 is the number of real places of K unramified in L (so that the number of real places of L is equal to r_1 times N), and D is the relative discriminant ideal of L/K. * 2: if this bit is set and if the modulus is not the conductor of L, only return 0. The library syntax is GEN bnrdisc0(GEN A, GEN B = NULL, GEN C = NULL, long flag). #### bnrdisclist(bnf,bound,{arch}) bnf being as output by bnfinit (with units), computes a list of discriminants of Abelian extensions of the number field by increasing modulus norm up to bound bound. The ramified Archimedean places are given by arch; all possible values are taken if arch is omitted. The alternative syntax bnrdisclist(bnf,list) is supported, where list is as output by ideallist or ideallistarch (with units), in which case arch is disregarded. The output v is a vector of vectors, where v[i][j] is understood to be in fact V[2^{15}(i-1)+j] of a unique big vector V. (This awkward scheme allows for larger vectors than could be otherwise represented.) V[k] is itself a vector W, whose length is the number of ideals of norm k. We consider first the case where arch was specified. Each component of W corresponds to an ideal m of norm k, and gives invariants associated to the ray class field L of bnf of conductor [m, arch]. Namely, each contains a vector [m,d,r,D] with the following meaning: m is the prime ideal factorization of the modulus, d = [L:Q] is the absolute degree of L, r is the number of real places of L, and D is the factorization of its absolute discriminant. We set d = r = D = 0 if m is not the finite part of a conductor. If arch was omitted, all t = 2^{r_1} possible values are taken and a component of W has the form [m, [[d_1,r_1,D_1],..., [d_t,r_t,D_t]]], where m is the finite part of the conductor as above, and [d_i,r_i,D_i] are the invariants of the ray class field of conductor [m,v_i], where v_i is the i-th Archimedean component, ordered by inverse lexicographic order; so v_1 = [0,...,0], v_2 = [1,0...,0], etc. Again, we set d_i = r_i = D_i = 0 if [m,v_i] is not a conductor. Finally, each prime ideal pr = [p,alpha,e,f,beta] in the prime factorization m is coded as the integer p.n^2+(f-1).n+(j-1), where n is the degree of the base field and j is such that pr = idealprimedec(nf,p)[j]. 
m can be decoded using bnfdecodemodule. Note that to compute such data for a single field, either bnrclassno or bnrdisc is more efficient. The library syntax is GEN bnrdisclist0(GEN bnf, GEN bound, GEN arch = NULL). #### bnrinit(bnf,f,{flag = 0}) bnf is as output by bnfinit, f is a modulus, initializes data linked to the ray class group structure corresponding to this module, a so-called bnr structure. One can input the associated bid with generators for f instead of the module itself, saving some time. (As in idealstar, the finite part of the conductor may be given by a factorization into prime ideals, as produced by idealfactor.) The following member functions are available on the result: .bnf is the underlying bnf, .mod the modulus, .bid the bid structure associated to the modulus; finally, .clgp, .no, .cyc, .gen refer to the ray class group (as a finite abelian group), its cardinality, its elementary divisors, its generators (only computed if flag = 1). The last group of functions are different from the members of the underlying bnf, which refer to the class group; use bnr.bnf.xxx to access these, e.g. bnr.bnf.cyc to get the cyclic decomposition of the class group. They are also different from the members of the underlying bid, which refer to (Z_K/f)^*; use bnr.bid.xxx to access these, e.g. bnr.bid.no to get phi(f). If flag = 0 (default), the generators of the ray class group are not computed, which saves time. Hence bnr.gen would produce an error. If flag = 1, as the default, except that generators are computed. The library syntax is GEN bnrinit0(GEN bnf, GEN f, long flag). Instead the above hardcoded numerical flags, one should rather use GEN Buchray(GEN bnf, GEN module, long flag) where flag is an or-ed combination of nf_GEN (include generators) and nf_INIT (if omitted, return just the cardinal of the ray class group and its structure), possibly 0. #### bnrisconductor(A,{B},{C}) A, B, C represent an extension of the base field, given by class field theory (see Section [Label: se:CFT]). Outputs 1 if this modulus is the conductor, and 0 otherwise. This is slightly faster than bnrconductor. The library syntax is long bnrisconductor0(GEN A, GEN B = NULL, GEN C = NULL). #### bnrisprincipal(bnr,x,{flag = 1}) bnr being the number field data which is output by bnrinit(,,1) and x being an ideal in any form, outputs the components of x on the ray class group generators in a way similar to bnfisprincipal. That is a 2-component vector v where v[1] is the vector of components of x on the ray class group generators, v[2] gives on the integral basis an element alpha such that x = alphaprod_ig_i^{x_i}. If flag = 0, outputs only v_1. In that case, bnr need not contain the ray class group generators, i.e. it may be created with bnrinit(,,0) If x is not coprime to the modulus of bnr the result is undefined. The library syntax is GEN bnrisprincipal(GEN bnr, GEN x, long flag). Instead of hardcoded numerical flags, one should rather use GEN isprincipalray(GEN bnr, GEN x) for flag = 0, and if you want generators: bnrisprincipal(bnr, x, nf_GEN) #### bnrrootnumber(bnr,chi,{flag = 0}) If chi = chi is a character over bnr, not necessarily primitive, let L(s,chi) = sum_{id} chi(id) N(id)^{-s} be the associated Artin L-function. Returns the so-called Artin root number, i.e. the complex number W(chi) of modulus 1 such that Lambda(1-s,chi) = W(chi) Lambda(s,\overline{chi}) where Lambda(s,chi) = A(chi)^{s/2}gamma_chi(s) L(s,chi) is the enlarged L-function associated to L. 
The generators of the ray class group are needed, and you can set flag = 1 if the character is known to be primitive. Example: bnf = bnfinit(x^2 - x - 57); bnr = bnrinit(bnf, [7,[1,1]], 1); bnrrootnumber(bnr, [2,1]) returns the root number of the character chi of Cl_{7 oo _1 oo _2}(Q(sqrt{229})) defined by chi(g_1^ag_2^b) = zeta_1^{2a}zeta_2^b. Here g_1, g_2 are the generators of the ray-class group given by bnr.gen and zeta_1 = e^{2iPi/N_1}, zeta_2 = e^{2iPi/N_2} where N_1, N_2 are the orders of g_1 and g_2 respectively (N_1 = 6 and N_2 = 3 as bnr.cyc readily tells us). The library syntax is GEN bnrrootnumber(GEN bnr, GEN chi, long flag, long prec). #### bnrstark(bnr,{subgroup}) bnr being as output by bnrinit(,,1), finds a relative equation for the class field corresponding to the modulus in bnr and the given congruence subgroup (as usual, omit subgroup if you want the whole ray class group). The main variable of bnr must not be x, and the ground field and the class field must be totally real. When the base field is Q, the vastly simpler galoissubcyclo is used instead. Here is an example: bnf = bnfinit(y^2 - 3); bnr = bnrinit(bnf, 5, 1); bnrstark(bnr) returns the ray class field of Q(sqrt{3}) modulo 5. Usually, one wants to apply to the result one of rnfpolredabs(bnf, pol, 16) \\ compute a reduced relative polynomial rnfpolredabs(bnf, pol, 16 + 2) \\ compute a reduced absolute polynomial The routine uses Stark units and needs to find a suitable auxiliary conductor, which may not exist when the class field is not cyclic over the base. In this case bnrstark is allowed to return a vector of polynomials defining independent relative extensions, whose compositum is the requested class field. It was decided that it was more useful to keep the extra information thus made available, hence the user has to take the compositum herself. Even if it exists, the auxiliary conductor may be so large that later computations become unfeasible. (And of course, Stark's conjecture may simply be wrong.) In case of difficulties, try rnfkummer: ? bnr = bnrinit(bnfinit(y^8-12*y^6+36*y^4-36*y^2+9,1), 2, 1); ? bnrstark(bnr) *** at top-level: bnrstark(bnr) *** ^------------- *** bnrstark: need 3919350809720744 coefficients in initzeta. *** Computation impossible. ? lift( rnfkummer(bnr) ) time = 24 ms. %2 = x^2 + (1/3*y^6 - 11/3*y^4 + 8*y^2 - 5) The library syntax is GEN bnrstark(GEN bnr, GEN subgroup = NULL, long prec). #### dirzetak(nf,b) Gives as a vector the first b coefficients of the Dedekind zeta function of the number field nf considered as a Dirichlet series. The library syntax is GEN dirzetak(GEN nf, GEN b). #### factornf(x,t) Factorization of the univariate polynomial x over the number field defined by the (univariate) polynomial t. x may have coefficients in Q or in the number field. The algorithm reduces to factorization over Q (Trager's trick). The direct approach of nffactor, which uses van Hoeij's method in a relative setting, is in general faster. The main variable of t must be of lower priority than that of x (see Section [Label: se:priority]). However if non-rational number field elements occur (as polmods or polynomials) as coefficients of x, the variable of these polmods must be the same as the main variable of t. For example ? factornf(x^2 + Mod(y, y^2+1), y^2+1); ? factornf(x^2 + y, y^2+1); \\ these two are OK ? factornf(x^2 + Mod(z,z^2+1), y^2+1) *** at top-level: factornf(x^2+Mod(z,z *** ^-------------------- *** factornf: inconsistent data in rnf function. ? 
factornf(x^2 + z, y^2+1) *** at top-level: factornf(x^2+z,y^2+1 *** ^-------------------- *** factornf: incorrect variable in rnf function. The library syntax is GEN polfnf(GEN x, GEN t). #### galoisexport(gal,{flag}) gal being be a Galois group as output by galoisinit, export the underlying permutation group as a string suitable for (no flags or flag = 0) GAP or (flag = 1) Magma. The following example compute the index of the underlying abstract group in the GAP library: ? G = galoisinit(x^6+108); ? s = galoisexport(G) %2 = "Group((1, 2, 3)(4, 5, 6), (1, 4)(2, 6)(3, 5))" ? extern("echo \"IdGroup("s");\" | gap -q") %3 = [6, 1] ? galoisidentify(G) %4 = [6, 1] This command also accepts subgroups returned by galoissubgroups. To import a GAP permutation into gp (for galoissubfields for instance), the following GAP function may be useful: PermToGP := function(p, n) return Permuted([1..n],p); end; gap> p:= (1,26)(2,5)(3,17)(4,32)(6,9)(7,11)(8,24)(10,13)(12,15)(14,27) (16,22)(18,28)(19,20)(21,29)(23,31)(25,30) gap> PermToGP(p,32); [ 26, 5, 17, 32, 2, 9, 11, 24, 6, 13, 7, 15, 10, 27, 12, 22, 3, 28, 20, 19, 29, 16, 31, 8, 30, 1, 14, 18, 21, 25, 23, 4 ] The library syntax is GEN galoisexport(GEN gal, long flag). #### galoisfixedfield(gal,perm,{flag},{v = y}) gal being be a Galois group as output by galoisinit and perm an element of gal.group, a vector of such elements or a subgroup of gal as returned by galoissubgroups, computes the fixed field of gal by the automorphism defined by the permutations perm of the roots gal.roots. P is guaranteed to be squarefree modulo gal.p. If no flags or flag = 0, output format is the same as for nfsubfield, returning [P,x] such that P is a polynomial defining the fixed field, and x is a root of P expressed as a polmod in gal.pol. If flag = 1 return only the polynomial P. If flag = 2 return [P,x,F] where P and x are as above and F is the factorization of gal.pol over the field defined by P, where variable v (y by default) stands for a root of P. The priority of v must be less than the priority of the variable of gal.pol (see Section [Label: se:priority]). Example: ? G = galoisinit(x^4+1); ? galoisfixedfield(G,G.group[2],2) %2 = [x^2 + 2, Mod(x^3 + x, x^4 + 1), [x^2 - y*x - 1, x^2 + y*x - 1]] computes the factorization x^4+1 = (x^2-sqrt{-2}x-1)(x^2+sqrt{-2}x-1) The library syntax is GEN galoisfixedfield(GEN gal, GEN perm, long flag, long v = -1), where v is a variable number. #### galoisgetpol(a,{b},{s}) Query the galpol package for a polynomial with Galois group isomorphic to GAP4(a,b), totally real if s = 1 (default) and totally complex if s = 2. The output is a vector [pol, den] where * pol is the polynomial of degree a * den is the denominator of nfgaloisconj(pol). Pass it as an optional argument to galoisinit or nfgaloisconj to speed them up: ? [pol,den] = galoisgetpol(64,4,1); ? G = galoisinit(pol); time = 352ms ? galoisinit(pol, den); \\ passing 'den' speeds up the computation time = 264ms ? % == % %4 = 1 \\ same answer If b and s are omitted, return the number of isomorphism classes of groups of order a. The library syntax is GEN galoisgetpol(long a, long b, long s). Also available is GEN galoisnbpol(long a) when b and s are omitted. #### galoisidentify(gal) gal being be a Galois group as output by galoisinit, output the isomorphism class of the underlying abstract group as a two-components vector [o,i], where o is the group order, and i is the group index in the GAP4 Small Group library, by Hans Ulrich Besche, Bettina Eick and Eamonn O'Brien. 
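For instance (a small illustrative check, not taken from the manual's own examples; Q(zeta_8) has Galois group (Z/2Z)^2, which is SmallGroup(4,2) in the GAP4 library):
? G = galoisinit(x^4 + 1);
? galoisidentify(G)
%2 = [4, 2]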
This command also accepts subgroups returned by galoissubgroups. The current implementation is limited to degree less than or equal to 127. Some larger "easy" orders are also supported. The output is similar to the output of the function IdGroup in GAP4. Note that GAP4 IdGroup handles all groups of order less than 2000 except 1024, so you can use galoisexport and GAP4 to identify large Galois groups. The library syntax is GEN galoisidentify(GEN gal). #### galoisinit(pol,{den}) Computes the Galois group and all necessary information for computing the fixed fields of the Galois extension K/Q where K is the number field defined by pol (monic irreducible polynomial in Z[X] or a number field as output by nfinit). The extension K/Q must be Galois with Galois group "weakly" super-solvable, see below; returns 0 otherwise. Hence this makes it possible to quickly check whether a polynomial of degree strictly less than 36 is Galois or not. The algorithm used is an improved version of the paper "An efficient algorithm for the computation of Galois automorphisms", Bill Allombert, Math. Comp., vol. 73, 245, 2001, pp. 359--375. A group G is said to be "weakly" super-solvable if there exists a normal series {1} = H_0 \triangleleft H_1 \triangleleft...\triangleleft H_{n-1} \triangleleft H_n such that each H_i is normal in G and for i < n, each quotient group H_{i+1}/H_i is cyclic, and either H_n = G (then G is super-solvable) or G/H_n is isomorphic to either A_4 or S_4. In practice, almost all small groups are WKSS, the exceptions having order 36 (1 exception), 48 (2), 56 (1), 60 (1), 72 (5), 75 (1), 80 (1), 96 (10) and >= 108. This function is a prerequisite for most of the galoisxxx routines. For instance: P = x^6 + 108; G = galoisinit(P); L = galoissubgroups(G); vector(#L, i, galoisisabelian(L[i],1)) vector(#L, i, galoisidentify(L[i])) The output is an 8-component vector gal. gal[1] contains the polynomial pol (gal.pol). gal[2] is a three-component vector [p,e,q] where p is a prime number (gal.p) such that pol splits totally modulo p, e is an integer and q = p^e (gal.mod) is the modulus of the roots in gal.roots. gal[3] is a vector L containing the p-adic roots of pol as integers, implicitly modulo gal.mod (gal.roots). gal[4] is the inverse of the Vandermonde matrix of the p-adic roots of pol, multiplied by gal[5]. gal[5] is a multiple of the least common denominator of the automorphisms expressed as polynomial in a root of pol. gal[6] is the Galois group G expressed as a vector of permutations of L (gal.group). gal[7] is a generating subset S = [s_1,...,s_g] of G expressed as a vector of permutations of L (gal.gen). gal[8] contains the relative orders [o_1,...,o_g] of the generators of S (gal.orders). Let H_n be as above; we have the following properties: * if G/H_n ~ A_4 then [o_1,...,o_g] ends with [2,2,3]. * if G/H_n ~ S_4 then [o_1,...,o_g] ends with [2,2,3,2]. * for 1 <= i <= g the subgroup of G generated by [s_1,...,s_i] is normal, with the exception of i = g-2 in the A_4 case and of i = g-3 in the S_4 case. * the relative order o_i of s_i is its order in the quotient group G/<s_1,...,s_{i-1}>, with the same exceptions. * for any x belongs to G there exists a unique family [e_1,...,e_g] such that (no exceptions): -- for 1 <= i <= g we have 0 <= e_i < o_i -- x = s_1^{e_1}s_2^{e_2}...s_g^{e_g} If present, den must be a suitable value for gal[5]. The library syntax is GEN galoisinit(GEN pol, GEN den = NULL).
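As a quick illustration of these conventions (a minimal sketch; the second field also appears in the galoisexport example above):
? galoisinit(x^5 - 2)      \\ Q(2^(1/5)) is not Galois over Q
%1 = 0
? G = galoisinit(x^6 + 108);
? #G.group                 \\ order of the Galois group
%3 = 6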
#### galoisisabelian(gal,{flag = 0}) gal being as output by galoisinit, return 0 if gal is not an abelian group, and the HNF matrix of gal over gal.gen if fl = 0, 1 if fl = 1. This command also accepts subgroups returned by galoissubgroups. The library syntax is GEN galoisisabelian(GEN gal, long flag). #### galoisisnormal(gal,subgrp) gal being as output by galoisinit, and subgrp a subgroup of gal as output by galoissubgroups,return 1 if subgrp is a normal subgroup of gal, else return 0. This command also accepts subgroups returned by galoissubgroups. The library syntax is long galoisisnormal(GEN gal, GEN subgrp). #### galoispermtopol(gal,perm) gal being a Galois group as output by galoisinit and perm a element of gal.group, return the polynomial defining the Galois automorphism, as output by nfgaloisconj, associated with the permutation perm of the roots gal.roots. perm can also be a vector or matrix, in this case, galoispermtopol is applied to all components recursively. Note that G = galoisinit(pol); galoispermtopol(G, G[6])~ is equivalent to nfgaloisconj(pol), if degree of pol is greater or equal to 2. The library syntax is GEN galoispermtopol(GEN gal, GEN perm). #### galoissubcyclo(N,H,{fl = 0},{v}) Computes the subextension of Q(zeta_n) fixed by the subgroup H \subset (Z/nZ)^*. By the Kronecker-Weber theorem, all abelian number fields can be generated in this way (uniquely if n is taken to be minimal). The pair (n, H) is deduced from the parameters (N, H) as follows * N an integer: then n = N; H is a generator, i.e. an integer or an integer modulo n; or a vector of generators. * N the output of znstar(n). H as in the first case above, or a matrix, taken to be a HNF left divisor of the SNF for (Z/nZ)^* (of type N.cyc), giving the generators of H in terms of N.gen. * N the output of bnrinit(bnfinit(y), m, 1) where m is a module. H as in the first case, or a matrix taken to be a HNF left divisor of the SNF for the ray class group modulo m (of type N.cyc), giving the generators of H in terms of N.gen. In this last case, beware that H is understood relatively to N; in particular, if the infinite place does not divide the module, e.g if m is an integer, then it is not a subgroup of (Z/nZ)^*, but of its quotient by {± 1}. If fl = 0, compute a polynomial (in the variable v) defining the the subfield of Q(zeta_n) fixed by the subgroup H of (Z/nZ)^*. If fl = 1, compute only the conductor of the abelian extension, as a module. If fl = 2, output [pol, N], where pol is the polynomial as output when fl = 0 and N the conductor as output when fl = 1. The following function can be used to compute all subfields of Q(zeta_n) (of exact degree d, if d is set): polsubcyclo(n, d = -1)= { my(bnr,L,IndexBound); IndexBound = if (d < 0, n, [d]); bnr = bnrinit(bnfinit(y), [n,[1]], 1); L = subgrouplist(bnr, IndexBound, 1); vector(#L,i, galoissubcyclo(bnr,L[i])); } Setting L = subgrouplist(bnr, IndexBound) would produce subfields of exact conductor n oo . The library syntax is GEN galoissubcyclo(GEN N, GEN H = NULL, long fl, long v = -1), where v is a variable number. #### galoissubfields(G,{flags = 0},{v}) Outputs all the subfields of the Galois group G, as a vector. This works by applying galoisfixedfield to all subgroups. The meaning of the flag fl is the same as for galoisfixedfield. The library syntax is GEN galoissubfields(GEN G, long flags, long v = -1), where v is a variable number. #### galoissubgroups(G) Outputs all the subgroups of the Galois group gal. 
A subgroup is a vector [gen, orders], with the same meaning as for gal.gen and gal.orders. Hence gen is a vector of permutations generating the subgroup, and orders is the vector of relative orders of the generators. The cardinality of a subgroup is the product of the relative orders. Such a subgroup can be used instead of a Galois group in the following commands: galoisisabelian, galoissubgroups, galoisexport and galoisidentify. To get the subfield fixed by a subgroup sub of gal, use galoisfixedfield(gal,sub[1]). The library syntax is GEN galoissubgroups(GEN G). #### idealadd(nf,x,y) Sum of the two ideals x and y in the number field nf. The result is given in HNF. ? K = nfinit(x^2 + 1); ? a = idealadd(K, 2, x + 1) \\ ideal generated by 2 and 1+I %2 = [2 1] [0 1] ? pr = idealprimedec(K, 5)[1]; \\ a prime ideal above 5 ? idealadd(K, a, pr) \\ coprime, as expected %4 = [1 0] [0 1] This function cannot be used to add arbitrary Z-modules, since it assumes that its arguments are ideals: ? b = Mat([1,0]~); ? idealadd(K, b, b) \\ only square t_MATs represent ideals *** idealadd: non-square t_MAT in idealtyp. ? c = [2, 0; 2, 0]; idealadd(K, c, c) \\ non-sense %6 = [2 0] [0 2] ? d = [1, 0; 0, 2]; idealadd(K, d, d) \\ non-sense %7 = [1 0] [0 1] In the last two examples, we get wrong results since the matrices c and d do not correspond to an ideal: the Z-span of their columns (as usual interpreted as coordinates with respect to the integer basis K.zk) is not an O_K-module. To add arbitrary Z-modules generated by the columns of matrices A and B, use mathnf(concat(A,B)). The library syntax is GEN idealadd(GEN nf, GEN x, GEN y). #### idealaddtoone(nf,x,{y}) x and y being two coprime integral ideals (given in any form), this gives a two-component row vector [a,b] such that a belongs to x, b belongs to y and a+b = 1. The alternative syntax idealaddtoone(nf,v) is supported, where v is a k-component vector of ideals (given in any form) which sum to Z_K. This outputs a k-component vector e such that e[i] belongs to x[i] for 1 <= i <= k and sum_{1 <= i <= k} e[i] = 1. The library syntax is GEN idealaddtoone0(GEN nf, GEN x, GEN y = NULL). #### idealappr(nf,x,{flag = 0}) If x is a fractional ideal (given in any form), gives an element alpha in nf such that for all prime ideals p such that the valuation of x at p is non-zero, we have v_{p}(alpha) = v_{p}(x), and v_{p}(alpha) >= 0 for all other p. If flag is non-zero, x must be given as a prime ideal factorization, as output by idealfactor, but possibly with zero or negative exponents. This yields an element alpha such that for all prime ideals p occurring in x, v_{p}(alpha) is equal to the exponent of p in x, and for all other prime ideals, v_{p}(alpha) >= 0. This generalizes idealappr(nf,x,0) since zero exponents are allowed. Note that the algorithm used is slightly different, so that idealappr(nf, idealfactor(nf,x)) may not be the same as idealappr(nf,x,1). The library syntax is GEN idealappr0(GEN nf, GEN x, long flag). #### idealchinese(nf,x,y) x being a prime ideal factorization (i.e. a two-column matrix whose first column contains prime ideals and whose second column contains integral exponents), y a vector of elements in nf indexed by the ideals in x, computes an element b such that v_{p}(b - y_{p}) >= v_{p}(x) for all prime ideals in x and v_{p}(b) >= 0 for all other p. The library syntax is GEN idealchinese(GEN nf, GEN x, GEN y). #### idealcoprime(nf,x,y) Given two integral ideals x and y in the number field nf, returns an element beta of the field such that beta*x is an integral ideal coprime to y.
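A minimal illustration of the defining property (a sketch; the element actually returned may differ between versions):
? K = nfinit(x^2 + 1);
? P = idealprimedec(K, 5)[1];
? b = idealcoprime(K, P, 3);               \\ b*P integral and coprime to 3
? idealadd(K, idealmul(K, b, P), 3) == matid(2)
%4 = 1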
The library syntax is GEN idealcoprime(GEN nf, GEN x, GEN y). #### idealdiv(nf,x,y,{flag = 0}) Quotient x.y^{-1} of the two ideals x and y in the number field nf. The result is given in HNF. If flag is non-zero, the quotient x.y^{-1} is assumed to be an integral ideal. This can be much faster when the norm of the quotient is small even though the norms of x and y are large. The library syntax is GEN idealdiv0(GEN nf, GEN x, GEN y, long flag). Also available are GEN idealdiv(GEN nf, GEN x, GEN y) (flag = 0) and GEN idealdivexact(GEN nf, GEN x, GEN y) (flag = 1). #### idealfactor(nf,x) Factors into prime ideal powers the ideal x in the number field nf. The output format is similar to the factor function, and the prime ideals are represented in the form output by the idealprimedec function, i.e. as 5-element vectors. The library syntax is GEN idealfactor(GEN nf, GEN x). #### idealfactorback(nf,f,{e},{flag = 0}) Gives back the ideal corresponding to a factorization. The integer 1 corresponds to the empty factorization. If e is present, e and f must be vectors of the same length (e being integral), and the corresponding factorization is the product of the f[i]^{e[i]}. If not, and f is vector, it is understood as in the preceding case with e a vector of 1s: we return the product of the f[i]. Finally, f can be a regular factorization, as produced by idealfactor. ? nf = nfinit(y^2+1); idealfactor(nf, 4 + 2*y) %1 = [[2, [1, 1]~, 2, 1, [1, 1]~] 2] [[5, [2, 1]~, 1, 1, [-2, 1]~] 1] ? idealfactorback(nf, %) %2 = [10 4] [0 2] ? f = %1[,1]; e = %1[,2]; idealfactorback(nf, f, e) %3 = [10 4] [0 2] ? % == idealhnf(nf, 4 + 2*y) %4 = 1 If flag is non-zero, perform ideal reductions (idealred) along the way. This is most useful if the ideals involved are all extended ideals (for instance with trivial principal part), so that the principal parts extracted by idealred are not lost. Here is an example: ? f = vector(#f, i, [f[i], [;]]); \\ transform to extended ideals ? idealfactorback(nf, f, e, 1) %6 = [[1, 0; 0, 1], [2, 1; [2, 1]~, 1]] ? nffactorback(nf, %[2]) %7 = [4, 2]~ The extended ideal returned in %6 is the trivial ideal 1, extended with a principal generator given in factored form. We use nffactorback to recover it in standard form. The library syntax is GEN idealfactorback(GEN nf, GEN f, GEN e = NULL, long flag ). #### idealfrobenius(nf,gal,pr) Let K be the number field defined by nf and assume K/Q be a Galois extension with Galois group given gal = galoisinit(nf), and that pr is the prime ideal P in prid format, and that P is unramified. This function returns a permutation of gal.group which defines the automorphism sigma = (P\over K/Q ), i.e the Frobenius element associated to P. If p is the unique prime number in P, then sigma(x) = x^p mod \P for all x belongs to Z_K. ? nf = nfinit(polcyclo(31)); ? gal = galoisinit(nf); ? pr = idealprimedec(nf,101)[1]; ? g = idealfrobenius(nf,gal,pr); ? galoispermtopol(gal,g) %5 = x^8 This is correct since 101 = 8 mod 31. The library syntax is GEN idealfrobenius(GEN nf, GEN gal, GEN pr). #### idealhnf(nf,u,{v}) Gives the Hermite normal form of the ideal uZ_K+vZ_K, where u and v are elements of the number field K defined by nf. ? nf = nfinit(y^3 - 2); ? idealhnf(nf, 2, y+1) %2 = [1 0 0] [0 1 0] [0 0 1] ? 
idealhnf(nf, y/2, [0,0,1/3]~) %3 = [1/3 0 0] [0 1/6 0] [0 0 1/6] If v is omitted, returns the HNF of the ideal defined by u: u may be an algebraic number (defining a principal ideal), a maximal ideal (as given by idealprimedec or idealfactor), or a matrix whose columns give generators for the ideal. This last format is a little complicated, but useful to reduce general modules to the canonical form once in a while: * if strictly less than N = [K:Q] generators are given, u is the Z_K-module they generate, * if N or more are given, it is assumed that they form a Z-basis of the ideal, in particular that the matrix has maximal rank N. This acts as mathnf since the Z_K-module structure is taken for granted, hence not taken into account in this case. ? idealhnf(nf, idealprimedec(nf,2)[1]) %4 = [2 0 0] [0 1 0] [0 0 1] ? idealhnf(nf, [1,2;2,3;3,4]) %5 = [1 0 0] [0 1 0] [0 0 1] Finally, when K is quadratic with discriminant D_K, we allow u = Qfb(a,b,c), provided b^2 - 4ac = D_K. As usual, this represents the ideal a Z + (1/2)(-b + sqrt{D_K}) Z. ? K = nfinit(x^2 - 60); K.disc %1 = 60 ? idealhnf(K, qfbprimeform(60,2)) %2 = [2 1] [0 1] ? idealhnf(K, Qfb(1,2,3)) *** at top-level: idealhnf(K,Qfb(1,2,3 *** ^-------------------- *** idealhnf: Qfb(1, 2, 3) has discriminant != 60 in idealhnf. The library syntax is GEN idealhnf0(GEN nf, GEN u, GEN v = NULL). Also available is GEN idealhnf(GEN nf, GEN a). #### idealintersect(nf,A,B) Intersection of the two ideals A and B in the number field nf. The result is given in HNF. ? nf = nfinit(x^2+1); ? idealintersect(nf, 2, x+1) %2 = [2 0] [0 2] This function does not apply to general Z-modules, e.g. orders, since its arguments are replaced by the ideals they generate. The following script intersects Z-modules A and B given by matrices of compatible dimensions with integer coefficients: ZM_intersect(A,B) = { my(Ker = matkerint(concat(A,B))); mathnf( A * Ker[1..#A,] ) } The library syntax is GEN idealintersect(GEN nf, GEN A, GEN B). #### idealinv(nf,x) Inverse of the ideal x in the number field nf, given in HNF. If x is an extended ideal, its principal part is suitably updated: i.e. inverting [I,t] yields [I^{-1}, 1/t]. The library syntax is GEN idealinv(GEN nf, GEN x). #### ideallist(nf,bound,{flag = 4}) Computes the list of all ideals of norm less than or equal to bound in the number field nf. The result is a row vector with exactly bound components. Each component is itself a row vector containing the information about ideals of a given norm, in no specific order, depending on the value of flag. The possible values of flag are: 0: give the bid associated to the ideals, without generators. 1: as 0, but include the generators in the bid. 2: in this case, nf must be a bnf with units. Each component is of the form [bid,U], where bid is as in case 0 and U is a vector of discrete logarithms of the units. More precisely, it gives the ideallogs with respect to bid of bnf.tufu. This structure is technical, and only meant to be used in conjunction with bnrclassnolist or bnrdisclist. 3: as 2, but include the generators in the bid. 4: give only the HNF of the ideal. ? nf = nfinit(x^2+1); ? L = ideallist(nf, 100); ? L[1] %3 = [[1, 0; 0, 1]] \\ A single ideal of norm 1 ? #L[65] %4 = 4 \\ There are 4 ideals of norm 65 in Z[i] ? nf = nfinit(x^2+1); ? L = ideallist(nf, 100, 0); ? l = L[25]; vector(#l, i, l[i].clgp) %3 = [[20, [20]], [16, [4, 4]], [20, [20]]] ? l[1].mod %4 = [[25, 18; 0, 1], []] ? l[2].mod %5 = [[5, 0; 0, 5], []] ?
l[3].mod %6 = [[25, 7; 0, 1], []] where we ask for the structures of the (Z[i]/I)^* for all three ideals of norm 25. In fact, for all moduli with finite part of norm 25 and trivial Archimedean part, as the last 3 commands show. See ideallistarch to treat general moduli. The library syntax is GEN ideallist0(GEN nf, long bound, long flag). #### ideallistarch(nf,list,arch) list is a vector of vectors of bid's, as output by ideallist with flag 0 to 3. Return a vector of vectors with the same number of components as the original list. The leaves give information about moduli whose finite part is as in original list, in the same order, and Archimedean part is now arch (it was originally trivial). The information contained is of the same kind as was present in the input; see ideallist, in particular the meaning of flag. ? bnf = bnfinit(x^2-2); ? bnf.sign %2 = [2, 0] \\ two places at infinity ? L = ideallist(bnf, 100, 0); ? l = L[98]; vector(#l, i, l[i].clgp) %4 = [[42, [42]], [36, [6, 6]], [42, [42]]] ? La = ideallistarch(bnf, L, [1,1]); \\ add them to the modulus ? l = La[98]; vector(#l, i, l[i].clgp) %6 = [[168, [42, 2, 2]], [144, [6, 6, 2, 2]], [168, [42, 2, 2]]] Of course, the results above are obvious: adding t places at infinity will add t copies of Z/2Z to the ray class group. The following application is more typical: ? L = ideallist(bnf, 100, 2); \\ units are required now ? La = ideallistarch(bnf, L, [1,1]); ? H = bnrclassnolist(bnf, La); ? H[98]; %6 = [2, 12, 2] The library syntax is GEN ideallistarch(GEN nf, GEN list, GEN arch). #### ideallog(nf,x,bid) nf is a number field, bid is as output by idealstar(nf, D,...) and x a non-necessarily integral element of nf which must have valuation equal to 0 at all prime ideals in the support of D. This function computes the discrete logarithm of x on the generators given in bid.gen. In other words, if g_i are these generators, of orders d_i respectively, the result is a column vector of integers (x_i) such that 0 <= x_i < d_i and x = prod_i g_i^{x_i} (mod ^*D) . Note that when the support of D contains places at infinity, this congruence implies also sign conditions on the associated real embeddings. See znlog for the limitations of the underlying discrete log algorithms. The library syntax is GEN ideallog(GEN nf, GEN x, GEN bid). #### idealmin(nf,ix,{vdir}) This function is useless and kept for backward compatibility only, use idealred. Computes a pseudo-minimum of the ideal x in the direction vdir in the number field nf. The library syntax is GEN idealmin(GEN nf, GEN ix, GEN vdir = NULL). #### idealmul(nf,x,y,{flag = 0}) Ideal multiplication of the ideals x and y in the number field nf; the result is the ideal product in HNF. If either x or y are extended ideals, their principal part is suitably updated: i.e. multiplying [I,t], [J,u] yields [IJ, tu]; multiplying I and [J, u] yields [IJ, u]. ? nf = nfinit(x^2 + 1); ? idealmul(nf, 2, x+1) %2 = [4 2] [0 2] ? idealmul(nf, [2, x], x+1) \\ extended ideal * ideal %4 = [[4, 2; 0, 2], x] ? idealmul(nf, [2, x], [x+1, x]) \\ two extended ideals %5 = [[4, 2; 0, 2], [-1, 0]~] If flag is non-zero, reduce the result using idealred. The library syntax is GEN idealmul0(GEN nf, GEN x, GEN y, long flag). See also GEN idealmul(GEN nf, GEN x, GEN y) (flag = 0) and GEN idealmulred(GEN nf, GEN x, GEN y) (flag != 0). #### idealnorm(nf,x) Computes the norm of the ideal x in the number field nf. The library syntax is GEN idealnorm(GEN nf, GEN x). 
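For instance, a small check in Q(i) (an illustrative sketch):
? K = nfinit(x^2 + 1);
? idealnorm(K, x + 2)                  \\ norm of the principal ideal (2 + I)
%2 = 5
? idealnorm(K, idealprimedec(K, 5)[1]) \\ a degree 1 prime above the split prime 5
%3 = 5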
#### idealnumden(nf,x) Returns [A,B], where A,B are coprime integral ideals such that x = A/B, in the number field nf. ? nf = nfinit(x^2+1); ? idealnumden(nf, (x+1)/2) %2 = [[1, 0; 0, 1], [2, 1; 0, 1]] The library syntax is GEN idealnumden(GEN nf, GEN x). #### idealpow(nf,x,k,{flag = 0}) Computes the k-th power of the ideal x in the number field nf; k belongs to Z. If x is an extended ideal, its principal part is suitably updated: i.e. raising [I,t] to the k-th power yields [I^k, t^k]. If flag is non-zero, reduce the result using idealred, throughout the (binary) powering process; in particular, this is not the same as idealpow(nf,x,k) followed by reduction. The library syntax is GEN idealpow0(GEN nf, GEN x, GEN k, long flag). See also GEN idealpow(GEN nf, GEN x, GEN k) and GEN idealpows(GEN nf, GEN x, long k) (flag = 0). Corresponding to flag = 1 is GEN idealpowred(GEN nf, GEN vp, GEN k). #### idealprimedec(nf,p) Computes the prime ideal decomposition of the (positive) prime number p in the number field K represented by nf. If a non-prime p is given the result is undefined. The result is a vector of prid structures, each representing one of the prime ideals above p in the number field nf. The representation pr = [p,a,e,f,mb] of a prime ideal means the following: a is an algebraic integer in the maximal order Z_K and the prime ideal is equal to p = pZ_K + aZ_K; e is the ramification index; f is the residual index; finally, mb is the multiplication table associated to an algebraic integer b such that p^{-1} = Z_K + (b/p)Z_K, which is used internally to compute valuations. In other words, if p is inert, then mb is the integer 1, and otherwise it is a square t_MAT whose j-th column is b*nf.zk[j]. The algebraic number a is guaranteed to have a valuation equal to 1 at the prime ideal (this is automatic if e > 1). The components of pr should be accessed by member functions: pr.p, pr.e, pr.f, and pr.gen (returns the vector [p,a]): ? K = nfinit(x^3-2); ? L = idealprimedec(K, 5); ? #L \\ 2 primes above 5 in Q(2^(1/3)) %3 = 2 ? p1 = L[1]; p2 = L[2]; ? [p1.e, p1.f] \\ the first is unramified of degree 1 %4 = [1, 1] ? [p2.e, p2.f] \\ the second is unramified of degree 2 %5 = [1, 2] ? p1.gen %6 = [5, [2, 1, 0]~] ? nfbasistoalg(K, %[2]) \\ a uniformizer for p1 %7 = Mod(x + 2, x^3 - 2) The library syntax is GEN idealprimedec(GEN nf, GEN p). #### idealprincipalunits(nf,pr,k) Given a prime ideal in idealprimedec format, returns the multiplicative group (1 + pr) / (1 + pr^k) as an abelian group. This function is much faster than idealstar when the norm of pr is large, since it avoids (useless) work in the multiplicative group of the residue field. ? K = nfinit(y^2+1); ? P = idealprimedec(K,2)[1]; ? G = idealprincipalunits(K, P, 20); ? G.cyc %4 = [512, 256, 4] \\ Z/512 x Z/256 x Z/4 ? G.gen %5 = [[-1, -2]~, 1021, [0, -1]~] \\ minimal generators of given order The library syntax is GEN idealprincipalunits(GEN nf, GEN pr, long k). #### idealramgroups(nf,gal,pr) Let K be the number field defined by nf and assume that K/Q is Galois with Galois group G given by gal = galoisinit(nf). Let pr be the prime ideal P in prid format. This function returns a vector g of subgroups of gal as follows: * g[1] is the decomposition group of P, * g[2] is G_0(P), the inertia group of P, and for i >= 2, * g[i] is G_{i-2}(P), the (i-2)-th ramification group of P. The length of g is the number of non-trivial groups in the sequence, thus is 0 if e = 1 and f = 1, and 1 if f > 1 and e = 1.
The following function computes the cardinality of a subgroup of G, as given by the components of g: card(H) =my(o=H[2]); prod(i=1,#o,o[i]); ? nf=nfinit(x^6+3); gal=galoisinit(nf); pr=idealprimedec(nf,3)[1]; ? g = idealramgroups(nf, gal, pr); ? apply(card,g) %4 = [6, 6, 3, 3, 3] \\ cardinalities of the G_i ? nf=nfinit(x^6+108); gal=galoisinit(nf); pr=idealprimedec(nf,2)[1]; ? iso=idealramgroups(nf,gal,pr)[2] %4 = [[Vecsmall([2, 3, 1, 5, 6, 4])], Vecsmall([3])] ? nfdisc(galoisfixedfield(gal,iso,1)) %5 = -3 The field fixed by the inertia group of 2 is not ramified at 2. The library syntax is GEN idealramgroups(GEN nf, GEN gal, GEN pr). #### idealred(nf,I,{v = 0}) LLL reduction of the ideal I in the number field nf, along the direction v. The v parameter is best left omitted, but if it is present, it must be an nf.r1 + nf.r2-component vector of non-negative integers. (What counts is the relative magnitude of the entries: if all entries are equal, the effect is the same as if the vector had been omitted.) This function finds a "small" a in I (see the end for technical details). The result is the Hermite normal form of the "reduced" ideal J = r I/a, where r is the unique rational number such that J is integral and primitive. (This is usually not a reduced ideal in the sense of Buchmann.) ? K = nfinit(y^2+1); ? P = idealprimedec(K,5)[1]; ? idealred(K, P) %3 = [1 0] [0 1] More often than not, a principal ideal yields the unit ideal as above. This is a quick and dirty way to check if ideals are principal, but it is not a necessary condition: a non-trivial result does not prove that the ideal is non-principal. For guaranteed results, see bnfisprincipal, which requires the computation of a full bnf structure. If the input is an extended ideal [I,s], the output is [J,sa/r]; this way, one can keep track of the principal ideal part: ? idealred(K, [P, 1]) %5 = [[1, 0; 0, 1], [-2, 1]~] meaning that P is generated by [-2, 1] . The number field element in the extended part is an algebraic number in any form or a factorization matrix (in terms of number field elements, not ideals!). In the latter case, elements stay in factored form, which is a convenient way to avoid coefficient explosion; see also idealpow. Technical note. The routine computes an LLL-reduced basis for the lattice I equipped with the quadratic form || x ||_v^2 = sum_{i = 1}^{r_1+r_2} 2^{v_i}varepsilon_i|sigma_i(x)|^2, where as usual the sigma_i are the (real and) complex embeddings and varepsilon_i = 1, resp. 2, for a real, resp. complex place. The element a is simply the first vector in the LLL basis. The only reason you may want to try to change some directions and set some v_i != 0 is to randomize the elements found for a fixed ideal, which is heuristically useful in index calculus algorithms like bnfinit and bnfisprincipal. Even more technical note. In fact, the above is a white lie. We do not use ||.||_v exactly but a rescaled rounded variant which gets us faster and simpler LLLs. There's no harm since we are not using any theoretical property of a after all, except that it belongs to I and is "expected to be small". The library syntax is GEN idealred0(GEN nf, GEN I, GEN v = NULL). #### idealstar(nf,I,{flag = 1}) Outputs a bid structure, necessary for computing in the finite abelian group G = (Z_K/I)^*. Here, nf is a number field and I is a modulus: either an ideal in any form, or a row vector whose first component is an ideal and whose second component is a row vector of r_1 0 or 1. 
Ideals can also be given by a factorization into prime ideals, as produced by idealfactor. This bid is used in ideallog to compute discrete logarithms. It also contains useful information which can be conveniently retrieved as bid.mod (the modulus), bid.clgp (G as a finite abelian group), bid.no (the cardinality of G), bid.cyc (elementary divisors) and bid.gen (generators). If flag = 1 (default), the result is a bid structure without generators. If flag = 2, as flag = 1, but including generators, which wastes some time. If flag = 0, only outputs (Z_K/I)^* as an abelian group, i.e as a 3-component vector [h,d,g]: h is the order, d is the vector of SNF cyclic components and g the corresponding generators. The library syntax is GEN idealstar0(GEN nf, GEN I, long flag). Instead the above hardcoded numerical flags, one should rather use GEN Idealstar(GEN nf, GEN ideal, long flag), where flag is an or-ed combination of nf_GEN (include generators) and nf_INIT (return a full bid, not a group), possibly 0. This offers one more combination: gen, but no init. #### idealtwoelt(nf,x,{a}) Computes a two-element representation of the ideal x in the number field nf, combining a random search and an approximation theorem; x is an ideal in any form (possibly an extended ideal, whose principal part is ignored) * When called as idealtwoelt(nf,x), the result is a row vector [a,alpha] with two components such that x = aZ_K+alphaZ_K and a is chosen to be the positive generator of xcapZ, unless x was given as a principal ideal (in which case we may choose a = 0). The algorithm uses a fast lazy factorization of xcap Z and runs in randomized polynomial time. * When called as idealtwoelt(nf,x,a) with an explicit non-zero a supplied as third argument, the function assumes that a belongs to x and returns alpha belongs to x such that x = aZ_K + alphaZ_K. Note that we must factor a in this case, and the algorithm is generally much slower than the default variant. The library syntax is GEN idealtwoelt0(GEN nf, GEN x, GEN a = NULL). Also available are GEN idealtwoelt(GEN nf, GEN x) and GEN idealtwoelt2(GEN nf, GEN x, GEN a). #### idealval(nf,x,pr) Gives the valuation of the ideal x at the prime ideal pr in the number field nf, where pr is in idealprimedec format. The library syntax is long idealval(GEN nf, GEN x, GEN pr). #### matalgtobasis(nf,x) nf being a number field in nfinit format, and x a (row or column) vector or matrix, apply nfalgtobasis to each entry of x. The library syntax is GEN matalgtobasis(GEN nf, GEN x). #### matbasistoalg(nf,x) nf being a number field in nfinit format, and x a (row or column) vector or matrix, apply nfbasistoalg to each entry of x. The library syntax is GEN matbasistoalg(GEN nf, GEN x). #### modreverse(z) Let z = Mod(A, T) be a polmod, and Q be its minimal polynomial, which must satisfy {deg}(Q) = {deg}(T). Returns a "reverse polmod" Mod(B, Q), which is a root of T. This is quite useful when one changes the generating element in algebraic extensions: ? u = Mod(x, x^3 - x -1); v = u^5; ? w = modreverse(v) %2 = Mod(x^2 - 4*x + 1, x^3 - 5*x^2 + 4*x - 1) which means that x^3 - 5x^2 + 4x -1 is another defining polynomial for the cubic field Q(u) = Q[x]/(x^3 - x - 1) = Q[x]/(x^3 - 5x^2 + 4x - 1) = Q(v), and that u \to v^2 - 4v + 1 gives an explicit isomorphism. From this, it is easy to convert elements between the A(u) belongs to Q(u) and B(v) belongs to Q(v) representations: ? A = u^2 + 2*u + 3; subst(lift(A), 'x, w) %3 = Mod(x^2 - 3*x + 3, x^3 - 5*x^2 + 4*x - 1) ? 
B = v^2 + v + 1; subst(lift(B), 'x, v) %4 = Mod(26*x^2 + 31*x + 26, x^3 - x - 1) If the minimal polynomial of z has lower degree than expected, the routine fails ? u = Mod(-x^3 + 9*x, x^4 - 10*x^2 + 1) ? modreverse(u) *** modreverse: domain error in modreverse: deg(minpoly(z)) < 4 *** Break loop: type 'break' to go back to GP prompt ["e_DOMAIN", "modreverse", "deg(minpoly(z))", "<", 4, Mod(-x^3 + 9*x, x^4 - 10*x^2 + 1)] break> minpoly(u) x^2 - 8 The library syntax is GEN modreverse(GEN z). #### newtonpoly(x,p) Gives the vector of the slopes of the Newton polygon of the polynomial x with respect to the prime number p. The n components of the vector are in decreasing order, where n is equal to the degree of x. Vertical slopes occur iff the constant coefficient of x is zero and are denoted by LONG_MAX, the biggest single precision integer representable on the machine (2^{31}-1 (resp. 2^{63}-1) on 32-bit (resp. 64-bit) machines), see Section [Label: se:valuation]. The library syntax is GEN newtonpoly(GEN x, GEN p). #### nfalgtobasis(nf,x) Given an algebraic number x in the number field nf, transforms it to a column vector on the integral basis nf.zk. ? nf = nfinit(y^2 + 4); ? nf.zk %2 = [1, 1/2*y] ? nfalgtobasis(nf, [1,1]~) %3 = [1, 1]~ ? nfalgtobasis(nf, y) %4 = [0, 2]~ ? nfalgtobasis(nf, Mod(y, y^2+4)) %4 = [0, 2]~ This is the inverse function of nfbasistoalg. The library syntax is GEN algtobasis(GEN nf, GEN x). #### nfbasis(T) Let T(X) be an irreducible polynomial with integral coefficients. This function returns an integral basis of the number field defined by T, that is a Z-basis of its maximal order. The basis elements are given as elements in Q[X]/(T): ? nfbasis(x^2 + 1) %1 = [1, x] This function uses a modified version of the round 4 algorithm, due to David Ford, Sebastian Pauli and Xavier Roblot. Local basis, orders maximal at certain primes. Obtaining the maximal order is hard: it requires factoring the discriminant D of T. Obtaining an order which is maximal at a finite explicit set of primes is easy, but if may then be a strict suborder of the maximal order. To specify that we are interested in a given set of places only, we can replace the argument T by an argument [T,listP], where listP encodes the primes we are interested in: it must be a factorization matrix, a vector of integers or a single integer. * Vector: we assume that it contains distinct prime numbers. * Matrix: we assume that it is a two-column matrix of a (partial) factorization of D; namely the first column contains primes and the second one the valuation of D at each of these primes. * Integer B: this is replaced by the vector of primes up to B. Note that the function will use at least O(B) time: a small value, about 10^5, should be enough for most applications. Values larger than 2^{32} are not supported. In all these cases, the primes may or may not divide the discriminant D of T. The function then returns a Z-basis of an order whose index is not divisible by any of these prime numbers. The result is actually a global integral basis if all prime divisors of the field discriminant are included! Note that nfinit has built-in support for such a check: ? K = nfinit([T, listP]); ? nfcertify(K) \\ we computed an actual maximal order %2 = []; The first line initializes a number field structure incorporating nfbasis([T, listP] in place of a proven integral basis. The second line certifies that the resulting structure is correct. 
This allows one to create an nf structure associated to the number field K = Q[X]/(T) when the discriminant of T cannot be factored completely, whereas the prime divisors of disc K are known. Of course, if listP contains a single prime number p, the function returns a local integral basis for Z_p[X]/(T): ? nfbasis(x^2+x-1001) %1 = [1, 1/3*x - 1/3] ? nfbasis( [x^2+x-1001, [2]] ) %2 = [1, x] The Buchmann-Lenstra algorithm. We now complicate the picture: it is in fact allowed to include composite numbers instead of primes in listP (Vector or Matrix case), provided they are pairwise coprime. The result will still be a correct integral basis if the field discriminant factors completely over the actual primes in the list. Adding a composite C such that C^2 divides D may help because when we consider C as a prime and run the algorithm, two good things can happen: either we succeed in proving that no prime dividing C can divide the index (without actually needing to find those primes), or the computation exhibits a non-trivial zero divisor, thereby factoring C, and we go on with the refined factorization. (Note that including a C such that C^2 does not divide D is useless.) If neither happens, then the computed basis need not generate the maximal order. Here is an example: ? B = 10^5; ? listP = factor(poldisc(T), B)[,1]; \\ primes <= B dividing D + cofactor ? basis = nfbasis([T, listP]) ? disc = nfdisc([T, listP]) We obtain the maximal order and its discriminant if the field discriminant factors completely over the primes less than B (together with the primes contained in the addprimes table). This can be tested as follows: check = factor(disc, B); lastp = check[-1..-1,1]; if (lastp > B && !setsearch(addprimes(), lastp), warning("nf may be incorrect!")) This is a sufficient but not a necessary condition, hence the warning, instead of an error. N.B. lastp is the last entry in the first column of the check matrix, i.e. the largest prime dividing nf.disc if <= B or if it belongs to the prime table. The function nfcertify speeds up and automates the above process: ? B = 10^5; ? nf = nfinit([T, B]); ? nfcertify(nf) %3 = [] \\ nf is unconditionally correct ? basis = nf.zk; ? disc = nf.disc; The library syntax is nfbasis(GEN T, GEN *d, GEN listP = NULL), which returns the order basis, and where *d receives the order discriminant. #### nfbasistoalg(nf,x) Given an algebraic number x in the number field nf, transforms it into t_POLMOD form. ? nf = nfinit(y^2 + 4); ? nf.zk %2 = [1, 1/2*y] ? nfbasistoalg(nf, [1,1]~) %3 = Mod(1/2*y + 1, y^2 + 4) ? nfbasistoalg(nf, y) %4 = Mod(y, y^2 + 4) ? nfbasistoalg(nf, Mod(y, y^2+4)) %5 = Mod(y, y^2 + 4) This is the inverse function of nfalgtobasis. The library syntax is GEN basistoalg(GEN nf, GEN x). #### nfcertify(nf) nf being as output by nfinit, checks whether the integer basis is known unconditionally. This is in particular useful when the argument to nfinit was of the form [T, listP], specifying a finite list of primes when p-maximality had to be proven. The function returns a vector of composite integers. If this vector is empty, then nf.zk and nf.disc are correct. Otherwise, the result is dubious. In order to obtain a certified result, one must completely factor each of the given integers, then addprime each of them, then check whether nfdisc(nf.pol) is equal to nf.disc. The library syntax is GEN nfcertify(GEN nf). #### nfdetint(nf,x) Given a pseudo-matrix x, computes a non-zero ideal contained in (i.e. multiple of) the determinant of x.
This is particularly useful in conjunction with nfhnfmod. The library syntax is GEN nfdetint(GEN nf, GEN x). #### nfdisc(T) Field discriminant of the number field defined by the integral, preferably monic, irreducible polynomial T(X). Returns the discriminant of the number field Q[X]/(T), using the Round 4 algorithm. Local discriminants, valuations at certain primes. As in nfbasis, the argument T can be replaced by [T,listP], where listP is as in nfbasis: a vector of pairwise coprime integers (usually distinct primes), a factorization matrix, or a single integer. In that case, the function returns the discriminant of an order whose basis is given by nfbasis(T,listP), which need not be the maximal order, and whose valuation at a prime entry in listP is the same as the valuation of the field discriminant. In particular, if listP is [p] for a prime p, we can return the p-adic discriminant of the maximal order of Z_p[X]/(T), as a power of p, as follows: ? padicdisc(T,p) = p^valuation(nfdisc(T,[p]), p); ? nfdisc(x^2 + 6) %1 = -24 ? padicdisc(x^2 + 6, 2) %2 = 8 ? padicdisc(x^2 + 6, 3) %3 = 3 The library syntax is nfdisc(GEN T) (listP = NULL). Also available is GEN nfbasis(GEN T, GEN *d, GEN listP = NULL), which returns the order basis, and where *d receives the order discriminant. #### nfeltadd(nf,x,y) Given two elements x and y in nf, computes their sum x+y in the number field nf. The library syntax is GEN nfadd(GEN nf, GEN x, GEN y). #### nfeltdiv(nf,x,y) Given two elements x and y in nf, computes their quotient x/y in the number field nf. The library syntax is GEN nfdiv(GEN nf, GEN x, GEN y). #### nfeltdiveuc(nf,x,y) Given two elements x and y in nf, computes an algebraic integer q in the number field nf such that the components of x-qy are reasonably small. In fact, this is functionally identical to round(nfdiv(nf,x,y)). The library syntax is GEN nfdiveuc(GEN nf, GEN x, GEN y). #### nfeltdivmodpr(nf,x,y,pr) Given two elements x and y in nf and pr a prime ideal in modpr format (see nfmodprinit), computes their quotient x / y modulo the prime ideal pr. The library syntax is GEN nfdivmodpr(GEN nf, GEN x, GEN y, GEN pr). This function is normally useless in library mode. Project your inputs to the residue field using nf_to_Fq, then work there. #### nfeltdivrem(nf,x,y) Given two elements x and y in nf, gives a two-element row vector [q,r] such that x = qy+r, q is an algebraic integer in nf, and the components of r are reasonably small. The library syntax is GEN nfdivrem(GEN nf, GEN x, GEN y). #### nfeltmod(nf,x,y) Given two elements x and y in nf, computes an element r of nf of the form r = x-qy with q an algebraic integer, and such that r is small. This is functionally identical to x - nfmul(nf,round(nfdiv(nf,x,y)),y). The library syntax is GEN nfmod(GEN nf, GEN x, GEN y). #### nfeltmul(nf,x,y) Given two elements x and y in nf, computes their product x*y in the number field nf. The library syntax is GEN nfmul(GEN nf, GEN x, GEN y). #### nfeltmulmodpr(nf,x,y,pr) Given two elements x and y in nf and pr a prime ideal in modpr format (see nfmodprinit), computes their product x*y modulo the prime ideal pr. The library syntax is GEN nfmulmodpr(GEN nf, GEN x, GEN y, GEN pr). This function is normally useless in library mode. Project your inputs to the residue field using nf_to_Fq, then work there. #### nfeltnorm(nf,x) Returns the absolute norm of x. The library syntax is GEN nfnorm(GEN nf, GEN x). #### nfeltpow(nf,x,k) Given an element x in nf, and a positive or negative integer k, computes x^k in the number field nf.
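For example, in Q(i) (a small sketch; results are given on the integral basis nf.zk = [1, y]):
? nf = nfinit(y^2 + 1);
? nfeltpow(nf, y, 3)          \\ I^3 = -I
%2 = [0, -1]~
? nfeltpow(nf, [2, 1]~, -1)   \\ 1/(2 + I) = (2 - I)/5
%3 = [2/5, -1/5]~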
The library syntax is GEN nfpow(GEN nf, GEN x, GEN k). GEN nfinv(GEN nf, GEN x) correspond to k = -1, and GEN nfsqr(GEN nf,GEN x) to k = 2. #### nfeltpowmodpr(nf,x,k,pr) Given an element x in nf, an integer k and a prime ideal pr in modpr format (see nfmodprinit), computes x^k modulo the prime ideal pr. The library syntax is GEN nfpowmodpr(GEN nf, GEN x, GEN k, GEN pr). This function is normally useless in library mode. Project your inputs to the residue field using nf_to_Fq, then work there. #### nfeltreduce(nf,a,id) Given an ideal id in Hermite normal form and an element a of the number field nf, finds an element r in nf such that a-r belongs to the ideal and r is small. The library syntax is GEN nfreduce(GEN nf, GEN a, GEN id). #### nfeltreducemodpr(nf,x,pr) Given an element x of the number field nf and a prime ideal pr in modpr format compute a canonical representative for the class of x modulo pr. The library syntax is GEN nfreducemodpr(GEN nf, GEN x, GEN pr). This function is normally useless in library mode. Project your inputs to the residue field using nf_to_Fq, then work there. #### nfelttrace(nf,x) Returns the absolute trace of x. The library syntax is GEN nftrace(GEN nf, GEN x). #### nfeltval(nf,x,pr) Given an element x in nf and a prime ideal pr in the format output by idealprimedec, computes their the valuation at pr of the element x. The same result could be obtained using idealval(nf,x,pr) (since x would then be converted to a principal ideal), but it would be less efficient. The library syntax is long nfval(GEN nf, GEN x, GEN pr). #### nffactor(nf,T) Factorization of the univariate polynomial T over the number field nf given by nfinit; T has coefficients in nf (i.e. either scalar, polmod, polynomial or column vector). The factors are sorted by increasing degree. The main variable of nf must be of lower priority than that of T, see Section [Label: se:priority]. However if the polynomial defining the number field occurs explicitly in the coefficients of T as modulus of a t_POLMOD or as a t_POL coefficient, its main variable must be the same as the main variable of T. For example, ? nf = nfinit(y^2 + 1); ? nffactor(nf, x^2 + y); \\ OK ? nffactor(nf, x^2 + Mod(y, y^2+1)); \\ OK ? nffactor(nf, x^2 + Mod(z, z^2+1)); \\ WRONG It is possible to input a defining polynomial for nf instead, but this is in general less efficient since parts of an nf structure will then be computed internally. This is useful in two situations: when you do not need the nf elsewhere, or when you cannot compute the field discriminant due to integer factorization difficulties. In the latter case, if you must use a partial discriminant factorization (as allowed by both nfdisc or nfbasis) to build a partially correct nf structure, always input nf.pol to nffactor, and not your makeshift nf: otherwise factors could be missed. The library syntax is GEN nffactor(GEN nf, GEN T). #### nffactorback(nf,f,{e}) Gives back the nf element corresponding to a factorization. The integer 1 corresponds to the empty factorization. If e is present, e and f must be vectors of the same length (e being integral), and the corresponding factorization is the product of the f[i]^{e[i]}. If not, and f is vector, it is understood as in the preceding case with e a vector of 1s: we return the product of the f[i]. Finally, f can be a regular factorization matrix. ? nf = nfinit(y^2+1); ? nffactorback(nf, [3, y+1, [1,2]~], [1, 2, 3]) %2 = [12, -66]~ ? 
3 * (I+1)^2 * (1+2*I)^3 %3 = 12 - 66*I The library syntax is GEN nffactorback(GEN nf, GEN f, GEN e = NULL). #### nffactormod(nf,Q,pr) Factors the univariate polynomial Q modulo the prime ideal pr in the number field nf. The coefficients of Q belong to the number field (scalar, polmod, polynomial, even column vector) and the main variable of nf must be of lower priority than that of Q (see Section [Label: se:priority]). The prime ideal pr is either in idealprimedec or (preferred) nfmodprinit format. The coefficients of the polynomial factors are lifted to elements of nf: ? K = nfinit(y^2+1); ? P = idealprimedec(K, 3)[1]; ? nffactormod(K, x^2 + y*x + 18*y+1, P) %3 = [x + (2*y + 1) 1] [x + (2*y + 2) 1] ? P = nfmodprinit(K, P); \\ convert to nfmodprinit format ? nffactormod(K, x^2 + y*x + 18*y+1, P) %5 = [x + (2*y + 1) 1] [x + (2*y + 2) 1] Same result, of course, here about 10% faster due to the precomputation. The library syntax is GEN nffactormod(GEN nf, GEN Q, GEN pr). #### nfgaloisapply(nf,aut,x) Let nf be a number field as output by nfinit, and let aut be a Galois automorphism of nf expressed by its image on the field generator (such automorphisms can be found using nfgaloisconj). The function computes the action of the automorphism aut on the object x in the number field; x can be a number field element, or an ideal (possibly extended). Because of possible confusion with elements and ideals, other vector or matrix arguments are forbidden. ? nf = nfinit(x^2+1); ? L = nfgaloisconj(nf) %2 = [-x, x]~ ? aut = L[1]; /* the non-trivial automorphism */ ? nfgaloisapply(nf, aut, x) %4 = Mod(-x, x^2 + 1) ? P = idealprimedec(nf,5); /* prime ideals above 5 */ ? nfgaloisapply(nf, aut, P[2]) == P[1] %7 = 0 \\ !!!! ? idealval(nf, nfgaloisapply(nf, aut, P[2]), P[1]) %8 = 1 The surprising failure of the equality test (%7) is due to the fact that although the corresponding prime ideals are equal, their representations are not. (A prime ideal is specified by a uniformizer, and there is no guarantee that applying automorphisms yields the same elements as a direct idealprimedec call.) The automorphism can also be given as a column vector, representing the image of Mod(x, nf.pol) as an algebraic number. This last representation is more efficient and should be preferred if a given automorphism must be used in many such calls. ? nf = nfinit(x^3 - 37*x^2 + 74*x - 37); ? l = nfgaloisconj(nf); aut = l[2] \\ automorphisms in basistoalg form %2 = -31/11*x^2 + 1109/11*x - 925/11 ? L = matalgtobasis(nf, l); AUT = L[2] \\ same in algtobasis form %3 = [16, -6, 5]~ ? v = [1, 2, 3]~; nfgaloisapply(nf, aut, v) == nfgaloisapply(nf, AUT, v) %4 = 1 \\ same result... ? for (i=1,10^5, nfgaloisapply(nf, aut, v)) time = 1,451 ms. ? for (i=1,10^5, nfgaloisapply(nf, AUT, v)) time = 1,045 ms. \\ but the latter is faster The library syntax is GEN galoisapply(GEN nf, GEN aut, GEN x). #### nfgaloisconj(nf,{flag = 0},{d}) nf being a number field as output by nfinit, computes the conjugates of a root r of the non-constant polynomial x = nf[1] expressed as polynomials in r. This also makes sense when the number field is not Galois since some conjugates may lie in the field. nf can simply be a polynomial. If no flags or flag = 0, use a combination of flag 4 and 1 and the result is always complete. There is no point whatsoever in using the other flags. If flag = 1, use nfroots: a little slow, but guaranteed to work in polynomial time. If flag = 2 (OBSOLETE), use complex approximations to the roots and an integral LLL.
The result is not guaranteed to be complete: some conjugates may be missing (a warning is issued if the result is not proved complete), especially so if the corresponding polynomial has a huge index, and increasing the default precision may help. This variant is slow and unreliable: don't use it. If flag = 4, use galoisinit: very fast, but only applies to (most) Galois fields. If the field is Galois with weakly super-solvable Galois group (see galoisinit), return the complete list of automorphisms, else only the identity element. If present, d is assumed to be a multiple of the least common denominator of the conjugates expressed as polynomial in a root of pol. This routine can only compute Q-automorphisms, but it may be used to get K-automorphism for any base field K as follows: rnfgaloisconj(nfK, R) = \\ K-automorphisms of L = K[X] / (R) { my(polabs, N); R *= Mod(1, nfK.pol); \\ convert coeffs to polmod elts of K polabs = rnfequation(nfK, R); N = nfgaloisconj(polabs) % R; \\ Q-automorphisms of L \\ select the ones that fix K select(s->subst(R, variable(R), Mod(s,R)) == 0, N); } K = nfinit(y^2 + 7); rnfgaloisconj(K, x^4 - y*x^3 - 3*x^2 + y*x + 1) \\ K-automorphisms of L The library syntax is GEN galoisconj0(GEN nf, long flag, GEN d = NULL, long prec). Use directly GEN galoisconj(GEN nf, GEN d), corresponding to flag = 0, the others only have historical interest. #### nfhilbert(nf,a,b,{pr}) If pr is omitted, compute the global quadratic Hilbert symbol (a,b) in nf, that is 1 if x^2 - a y^2 - b z^2 has a non trivial solution (x,y,z) in nf, and -1 otherwise. Otherwise compute the local symbol modulo the prime ideal pr, as output by idealprimedec. The library syntax is long nfhilbert0(GEN nf, GEN a, GEN b, GEN pr = NULL). Also available is long nfhilbert(GEN bnf,GEN a,GEN b) (global quadratic Hilbert symbol). #### nfhnf(nf,x) Given a pseudo-matrix (A,I), finds a pseudo-basis in Hermite normal form of the module it generates. The library syntax is GEN nfhnf(GEN nf, GEN x). Also available: GEN rnfsimplifybasis(GEN bnf, GEN x) simplifies the pseudo-basis given by x = (A,I). The ideals in the list I are integral, primitive and either trivial (equal to the full ring of integer) or non-principal. #### nfhnfmod(nf,x,detx) Given a pseudo-matrix (A,I) and an ideal detx which is contained in (read integral multiple of) the determinant of (A,I), finds a pseudo-basis in Hermite normal form of the module generated by (A,I). This avoids coefficient explosion. detx can be computed using the function nfdetint. The library syntax is GEN nfhnfmod(GEN nf, GEN x, GEN detx). #### nfinit(pol,{flag = 0}) pol being a non-constant, preferably monic, irreducible polynomial in Z[X], initializes a number field structure (nf) associated to the field K defined by pol. As such, it's a technical object passed as the first argument to most nfxxx functions, but it contains some information which may be directly useful. Access to this information via member functions is preferred since the specific data organization specified below may change in the future. Currently, nf is a row vector with 9 components: nf[1] contains the polynomial pol (nf.pol). nf[2] contains [r1,r2] (nf.sign, nf.r1, nf.r2), the number of real and complex places of K. nf[3] contains the discriminant d(K) (nf.disc) of K. nf[4] contains the index of nf[1] (nf.index), i.e. [Z_K : Z[theta]], where theta is any root of nf[1]. nf[5] is a vector containing 7 matrices M, G, roundG, T, MD, TI, MDI useful for certain computations in the number field K. 
* M is the (r1+r2) x n matrix whose columns represent the numerical values of the conjugates of the elements of the integral basis. * G is an n x n matrix such that T2 = ^t G G, where T2 is the quadratic form T_2(x) = sum |sigma(x)|^2, sigma running over the embeddings of K into C. * roundG is a rescaled copy of G, rounded to nearest integers. * T is the n x n matrix whose coefficients are {Tr}(omega_iomega_j) where the omega_i are the elements of the integral basis. Note also that det(T) is equal to the discriminant of the field K. Also, when understood as an ideal, the matrix T^{-1} generates the codifferent ideal. * The columns of MD (nf.diff) express a Z-basis of the different of K on the integral basis. * TI is equal to the primitive part of T^{-1}, which has integral coefficients. * Finally, MDI is a two-element representation (for faster ideal product) of d(K) times the codifferent ideal (nf.disc*nf.codiff, which is an integral ideal). MDI is only used in idealinv. nf[6] is the vector containing the r1+r2 roots (nf.roots) of nf[1] corresponding to the r1+r2 embeddings of the number field into C (the first r1 components are real, the next r2 have positive imaginary part). nf[7] is an integral basis for Z_K (nf.zk) expressed on the powers of theta. Its first element is guaranteed to be 1. This basis is LLL-reduced with respect to T_2 (strictly speaking, it is a permutation of such a basis, due to the condition that the first element be 1). nf[8] is the n x n integral matrix expressing the power basis in terms of the integral basis, and finally nf[9] is the n x n^2 matrix giving the multiplication table of the integral basis. If a non monic polynomial is input, nfinit will transform it into a monic one, then reduce it (see flag = 3). It is allowed, though not very useful given the existence of nfnewprec, to input a nf or a bnf instead of a polynomial. ? nf = nfinit(x^3 - 12); \\ initialize number field Q[X] / (X^3 - 12) ? nf.pol \\ defining polynomial %2 = x^3 - 12 ? nf.disc \\ field discriminant %3 = -972 ? nf.index \\ index of power basis order in maximal order %4 = 2 ? nf.zk \\ integer basis, lifted to Q[X] %5 = [1, x, 1/2*x^2] ? nf.sign \\ signature %6 = [1, 1] ? factor(abs(nf.disc )) \\ determines ramified primes %7 = [2 2] [3 5] ? idealfactor(nf, 2) %8 = [[2, [0, 0, -1]~, 3, 1, [0, 1, 0]~] 3] \\ p_2^3 Huge discriminants, helping nfdisc. In case pol has a huge discriminant which is difficult to factor, it is hard to compute from scratch the maximal order. The special input format [pol, B] is also accepted where pol is a polynomial as above and B has one of the following forms * an integer basis, as would be computed by nfbasis: a vector of polynomials with first element 1. This is useful if the maximal order is known in advance. * an argument listP which specifies a list of primes (see nfbasis). Instead of the maximal order, nfinit then computes an order which is maximal at these particular primes as well as the primes contained in the private prime table (see addprimes). The result is unconditionnaly correct when the discriminant nf.disc factors completely over this set of primes. The function nfcertify automates this: ? pol = polcompositum(x^5 - 101, polcyclo(7))[1]; ? nf = nfinit( [pol, 10^3] ); ? nfcertify(nf) %3 = [] A priori, nf.zk defines an order which is only known to be maximal at all primes <= 10^3 (no prime <= 10^3 divides nf.index). The certification step proves the correctness of the computation. 
If flag = 2: pol is changed into another polynomial P defining the same number field, which is as simple as can easily be found using the polredbest algorithm, and all the subsequent computations are done using this new polynomial. In particular, the first component of the result is the modified polynomial. If flag = 3, apply polredbest as in case 2, but outputs [nf,Mod(a,P)], where nf is as before and Mod(a,P) = Mod(x,pol) gives the change of variables. This is implicit when pol is not monic: first a linear change of variables is performed, to get a monic polynomial, then polredbest. The library syntax is GEN nfinit0(GEN pol, long flag, long prec). Also available are GEN nfinit(GEN x, long prec) (flag = 0), GEN nfinitred(GEN x, long prec) (flag = 2), GEN nfinitred2(GEN x, long prec) (flag = 3). Instead of the above hardcoded numerical flags in nfinit0, one should rather use GEN nfinitall(GEN x, long flag, long prec), where flag is an or-ed combination of * nf_RED: find a simpler defining polynomial, * nf_ORIG: if nf_RED set, also return the change of variable, * nf_ROUND2: Deprecated. Slow down the routine by using an obsolete normalization algorithm (do not use this one!), * nf_PARTIALFACT: Deprecated. Lazy factorization of the polynomial discriminant. Result is conditional unless nfcertify can certify it. #### nfisideal(nf,x) Returns 1 if x is an ideal in the number field nf, 0 otherwise. The library syntax is long isideal(GEN nf, GEN x). #### nfisincl(x,y) Tests whether the number field K defined by the polynomial x is conjugate to a subfield of the field L defined by y (where x and y must be in Q[X]). If they are not, the output is the number 0. If they are, the output is a vector of polynomials, each polynomial a representing an embedding of K into L, i.e. being such that y | x o a. If y is a number field (nf), a much faster algorithm is used (factoring x over y using nffactor). Before version 2.0.14, this wasn't guaranteed to return all the embeddings, hence was triggered by a special flag. This is no more the case. The library syntax is GEN nfisincl(GEN x, GEN y). #### nfisisom(x,y) As nfisincl, but tests for isomorphism. If either x or y is a number field, a much faster algorithm will be used. The library syntax is GEN nfisisom(GEN x, GEN y). #### nfkermodpr(nf,x,pr) Kernel of the matrix a in Z_K/pr, where pr is in modpr format (see nfmodprinit). The library syntax is GEN nfkermodpr(GEN nf, GEN x, GEN pr). This function is normally useless in library mode. Project your inputs to the residue field using nfM_to_FqM, then work there. #### nfmodprinit(nf,pr) Transforms the prime ideal pr into modpr format necessary for all operations modulo pr in the number field nf. The library syntax is GEN nfmodprinit(GEN nf, GEN pr). #### nfnewprec(nf) Transforms the number field nf into the corresponding data using current (usually larger) precision. This function works as expected if nf is in fact a bnf (update bnf to current precision) but may be quite slow (many generators of principal ideals have to be computed). The library syntax is GEN nfnewprec(GEN nf, long prec). See also GEN bnfnewprec(GEN bnf, long prec) and GEN bnrnewprec(GEN bnr, long prec). #### nfroots({nf},x) Roots of the polynomial x in the number field nf given by nfinit without multiplicity (in Q if nf is omitted). x has coefficients in the number field (scalar, polmod, polynomial, column vector). The main variable of nf must be of lower priority than that of x (see Section [Label: se:priority]). 
However if the coefficients of the number field occur explicitly (as polmods) as coefficients of x, the variable of these polmods must be the same as the main variable of t (see nffactor). It is possible to input a defining polynomial for nf instead, but this is in general less efficient since parts of an nf structure will be computed internally. This is useful in two situations: when you don't need the nf, or when you can't compute its discriminant due to integer factorization difficulties. In the latter case, addprimes is a possibility but a dangerous one: roots will probably be missed if the (true) field discriminant and an addprimes entry are strictly divisible by some prime. If you have such an unsafe nf, it is safer to input nf.pol. The library syntax is GEN nfroots(GEN nf = NULL, GEN x). See also GEN nfrootsQ(GEN x), corresponding to nf = NULL. #### nfrootsof1(nf) Returns a two-component vector [w,z] where w is the number of roots of unity in the number field nf, and z is a primitive w-th root of unity. ? K = nfinit(polcyclo(11)); ? nfrootsof1(K) %2 = [22, [0, 0, 0, 0, 0, -1, 0, 0, 0, 0]~] ? z = nfbasistoalg(K, %[2]) \\ in algebraic form %3 = Mod(-x^5, x^10 + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1) ? [lift(z^11), lift(z^2)] \\ proves that the order of z is 22 %4 = [-1, -x^9 - x^8 - x^7 - x^6 - x^5 - x^4 - x^3 - x^2 - x - 1] This function guesses the number w as the gcd of the #k(v)^* for unramified v above odd primes, then computes the roots in nf of the w-th cyclotomic polynomial: the algorithm is polynomial time with respect to the field degree and the bitsize of the multiplication table in nf (both of them polynomially bounded in terms of the size of the discriminant). Fields of degree up to 100 or so should require less than one minute. The library syntax is GEN rootsof1(GEN nf). Also available is GEN rootsof1_kannan(GEN nf), that computes all algebraic integers of T_2 norm equal to the field degree (all roots of 1, by Kronecker's theorem). This is in general a little faster than the default when there are roots of 1 in the field (say twice faster), but can be much slower (say, days slower), since the algorithm is a priori exponential in the field degree. #### nfsnf(nf,x) Given a Z_K-module x associated to the integral pseudo-matrix (A,I,J), returns an ideal list d_1,...,d_n which is the \idx{Smith normal form} of x. In other words, x is isomorphic to Z_K/d_1oplus...oplusZ_K/d_n and d_i divides d_{i-1} for i >= 2. See Section [Label: se:ZKmodules] for the definition of integral pseudo-matrix; briefly, it is input as a 3-component row vector [A,I,J] where I = [b_1,...,b_n] and J = [a_1,...,a_n] are two ideal lists, and A is a square n x n matrix with columns (A_1,...,A_n), seen as elements in K^n (with canonical basis (e_1,...,e_n)). This data defines the Z_K module x given by (b_1e_1oplus...oplus b_ne_n) / (a_1A_1oplus...oplus a_nA_n) , The integrality condition is a_{i,j} belongs to b_i a_j^{-1} for all i,j. If it is not satisfied, then the d_i will not be integral. Note that every finitely generated torsion module is isomorphic to a module of this form and even with b_i = Z_K for all i. The library syntax is GEN nfsnf(GEN nf, GEN x). #### nfsolvemodpr(nf,a,b,P) Let P be a prime ideal in modpr format (see nfmodprinit), let a be a matrix, invertible over the residue field, and let b be a column vector or matrix. This function returns a solution of a.x = b; the coefficients of x are lifted to nf elements. ? K = nfinit(y^2+1); ? P = idealprimedec(K, 3)[1]; ? 
P = nfmodprinit(K, P); ? a = [y+1, y; y, 0]; b = [1, y]~ ? nfsolvemodpr(K, a,b, P) %5 = [1, 2]~ The library syntax is GEN nfsolvemodpr(GEN nf, GEN a, GEN b, GEN P). This function is normally useless in library mode. Project your inputs to the residue field using nfM_to_FqM, then work there. #### nfsubfields(pol,{d = 0}) Finds all subfields of degree d of the number field defined by the (monic, integral) polynomial pol (all subfields if d is null or omitted). The result is a vector of subfields, each being given by [g,h], where g is an absolute equation and h expresses one of the roots of g in terms of the root x of the polynomial defining nf. This routine uses J. Klüners's algorithm in the general case, and B. Allombert's galoissubfields when nf is Galois (with weakly supersolvable Galois group). The library syntax is GEN nfsubfields(GEN pol, long d). #### polcompositum(P,Q,{flag = 0}) P and Q being squarefree polynomials in Z[X] in the same variable, outputs the simple factors of the étale Q-algebra A = Q(X, Y) / (P(X), Q(Y)). The factors are given by a list of polynomials R in Z[X], associated to the number field Q(X)/ (R), and sorted by increasing degree (with respect to lexicographic ordering for factors of equal degrees). Returns an error if one of the polynomials is not squarefree. Note that it is more efficient to reduce to the case where P and Q are irreducible first. The routine will not perform this for you, since it may be expensive, and the inputs are irreducible in most applications anyway. In this case, there will be a single factor R if and only if the number fields defined by P and Q are disjoint. Assuming P is irreducible (of smaller degree than Q for efficiency), it is in general much faster to proceed as follows nf = nfinit(P); L = nffactor(nf, Q)[,1]; vector(#L, i, rnfequation(nf, L[i])) to obtain the same result. If you are only interested in the degrees of the simple factors, the rnfequation instruction can be replaced by a trivial poldegree(P) * poldegree(L[i]). If flag = 1, outputs a vector of 4-component vectors [R,a,b,k], where R ranges through the list of all possible compositums as above, and a (resp. b) expresses the root of P (resp. Q) as an element of Q(X)/(R). Finally, k is a small integer such that b + ka = X modulo R. A compositum is often defined by a complicated polynomial, which it is advisable to reduce before further work. Here is an example involving the field Q(zeta_5, 5^{1/5}): ? L = polcompositum(x^5 - 5, polcyclo(5), 1); \\ list of [R,a,b,k] ? [R, a] = L[1]; \\ pick the single factor, extract R,a (ignore b,k) ? R \\ defines the compositum %3 = x^20 + 5*x^19 + 15*x^18 + 35*x^17 + 70*x^16 + 141*x^15 + 260*x^14\ + 355*x^13 + 95*x^12 - 1460*x^11 - 3279*x^10 - 3660*x^9 - 2005*x^8 \ + 705*x^7 + 9210*x^6 + 13506*x^5 + 7145*x^4 - 2740*x^3 + 1040*x^2 \ - 320*x + 256 ? a^5 - 5 \\ a fifth root of 5 %4 = 0 ? [T, X] = polredbest(R, 1); ? T \\ simpler defining polynomial for Q[x]/(R) %6 = x^20 + 25*x^10 + 5 ? X \\ root of R in Q[y]/(T(y)) %7 = Mod(-1/11*x^15 - 1/11*x^14 + 1/22*x^10 - 47/22*x^5 - 29/11*x^4 + 7/22,\ x^20 + 25*x^10 + 5) ? a = subst(a.pol, 'x, X) \\ a in the new coordinates %8 = Mod(1/11*x^14 + 29/11*x^4, x^20 + 25*x^10 + 5) ? a^5 - 5 %9 = 0 The library syntax is GEN polcompositum0(GEN P, GEN Q, long flag). Also available are GEN compositum(GEN P, GEN Q) (flag = 0) and GEN compositum2(GEN P, GEN Q) (flag = 1). #### polgalois(T) Galois group of the non-constant polynomial T belongs to Q[X]. 
In the present version 2.7.0, T must be irreducible and the degree d of T must be less than or equal to 7. If the galdata package has been installed, degrees 8, 9, 10 and 11 are also implemented. By definition, if K = Q[x]/(T), this computes the action of the Galois group of the Galois closure of K on the d distinct roots of T, up to conjugacy (corresponding to different root orderings). The output is a 4-component vector [n,s,k,name] with the following meaning: n is the cardinality of the group, s is its signature (s = 1 if the group is a subgroup of the alternating group A_d, s = -1 otherwise) and name is a character string containing name of the transitive group according to the GAP 4 transitive groups library by Alexander Hulpke. k is more arbitrary and the choice made up to version 2.2.3 of PARI is rather unfortunate: for d > 7, k is the numbering of the group among all transitive subgroups of S_d, as given in "The transitive groups of degree up to eleven", G. Butler and J. McKay, Communications in Algebra, vol. 11, 1983, pp. 863--911 (group k is denoted T_k there). And for d <= 7, it was ad hoc, so as to ensure that a given triple would denote a unique group. Specifically, for polynomials of degree d <= 7, the groups are coded as follows, using standard notations In degree 1: S_1 = [1,1,1]. In degree 2: S_2 = [2,-1,1]. In degree 3: A_3 = C_3 = [3,1,1], S_3 = [6,-1,1]. In degree 4: C_4 = [4,-1,1], V_4 = [4,1,1], D_4 = [8,-1,1], A_4 = [12,1,1], S_4 = [24,-1,1]. In degree 5: C_5 = [5,1,1], D_5 = [10,1,1], M_{20} = [20,-1,1], A_5 = [60,1,1], S_5 = [120,-1,1]. In degree 6: C_6 = [6,-1,1], S_3 = [6,-1,2], D_6 = [12,-1,1], A_4 = [12,1,1], G_{18} = [18,-1,1], S_4^ -= [24,-1,1], A_4 x C_2 = [24,-1,2], S_4^ += [24,1,1], G_{36}^ -= [36,-1,1], G_{36}^ += [36,1,1], S_4 x C_2 = [48,-1,1], A_5 = PSL_2(5) = [60,1,1], G_{72} = [72,-1,1], S_5 = PGL_2(5) = [120,-1,1], A_6 = [360,1,1], S_6 = [720,-1,1]. In degree 7: C_7 = [7,1,1], D_7 = [14,-1,1], M_{21} = [21,1,1], M_{42} = [42,-1,1], PSL_2(7) = PSL_3(2) = [168,1,1], A_7 = [2520,1,1], S_7 = [5040,-1,1]. This is deprecated and obsolete, but for reasons of backward compatibility, we cannot change this behavior yet. So you can use the default new_galois_format to switch to a consistent naming scheme, namely k is always the standard numbering of the group among all transitive subgroups of S_n. If this default is in effect, the above groups will be coded as: In degree 1: S_1 = [1,1,1]. In degree 2: S_2 = [2,-1,1]. In degree 3: A_3 = C_3 = [3,1,1], S_3 = [6,-1,2]. In degree 4: C_4 = [4,-1,1], V_4 = [4,1,2], D_4 = [8,-1,3], A_4 = [12,1,4], S_4 = [24,-1,5]. In degree 5: C_5 = [5,1,1], D_5 = [10,1,2], M_{20} = [20,-1,3], A_5 = [60,1,4], S_5 = [120,-1,5]. In degree 6: C_6 = [6,-1,1], S_3 = [6,-1,2], D_6 = [12,-1,3], A_4 = [12,1,4], G_{18} = [18,-1,5], A_4 x C_2 = [24,-1,6], S_4^ += [24,1,7], S_4^ -= [24,-1,8], G_{36}^ -= [36,-1,9], G_{36}^ += [36,1,10], S_4 x C_2 = [48,-1,11], A_5 = PSL_2(5) = [60,1,12], G_{72} = [72,-1,13], S_5 = PGL_2(5) = [120,-1,14], A_6 = [360,1,15], S_6 = [720,-1,16]. In degree 7: C_7 = [7,1,1], D_7 = [14,-1,2], M_{21} = [21,1,3], M_{42} = [42,-1,4], PSL_2(7) = PSL_3(2) = [168,1,5], A_7 = [2520,1,6], S_7 = [5040,-1,7]. Warning. The method used is that of resolvent polynomials and is sensitive to the current precision. The precision is updated internally but, in very rare cases, a wrong result may be returned if the initial precision was not sufficient. The library syntax is GEN polgalois(GEN T, long prec). 
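For instance, a small usage sketch (only the group order and signature are indicated in the comments; the exact k and name components depend on the naming convention discussed above):

? polgalois(x^3 - 2)                    \\ [6, -1, ...]: the full symmetric group S_3
? polgalois(x^4 + x^3 + x^2 + x + 1)    \\ [4, -1, ...]: cyclic of order 4 (the field Q(zeta_5))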
To enable the new format in library mode, set the global variable new_galois_format to 1. #### polred(T,{flag = 0}) This function is deprecated, use polredbest instead. Finds polynomials with reasonably small coefficients defining subfields of the number field defined by T. One of the polynomials always defines Q (hence is equal to x-1), and another always defines the same number field as T if T is irreducible. All T accepted by nfinit are also allowed here; in particular, the format [T, listP] is recommended, e.g. with listP = 10^5 or a vector containing all ramified primes. Otherwise, the maximal order of Q[x]/(T) must be computed. The following binary digits of flag are significant: 1: Possibly use a suborder of the maximal order. The primes dividing the index of the order chosen are larger than primelimit or divide integers stored in the addprimes table. This flag is deprecated, the [T, listP] format is more flexible. 2: gives also elements. The result is a two-column matrix, the first column giving primitive elements defining these subfields, the second giving the corresponding minimal polynomials. ? M = polred(x^4 + 8, 2) %1 = [1 x - 1] [1/2*x^2 x^2 + 2] [1/4*x^3 x^4 + 2] [x x^4 + 8] ? minpoly(Mod(M[2,1], x^4+8)) %2 = x^2 + 2 The library syntax is polred(GEN T) (flag = 0). Also available is GEN polred2(GEN T) (flag = 2). The function polred0 is deprecated, provided for backward compatibility. #### polredabs(T,{flag = 0}) Returns a canonical defining polynomial P for the number field Q[X]/(T) defined by T, such that the sum of the squares of the modulus of the roots (i.e. the T_2-norm) is minimal. Different T defining isomorphic number fields will yield the same P. All T accepted by nfinit are also allowed here, e.g. non-monic polynomials, or pairs [T, listP] specifying that a non-maximal order may be used. Warning 1. Using a t_POL T requires fully factoring the discriminant of T, which may be very hard. The format [T, listP] computes only a suborder of the maximal order and replaces this part of the algorithm by a polynomial time computation. In that case the polynomial P is a priori no longer canonical, and it may happen that it does not have minimal T_2 norm. The routine attempts to certify the result independently of this order computation (as per nfcertify: we try to prove that the order is maximal); if it fails, the routine returns 0 instead of P. In order to force an output in that case as well, you may either use polredbest, or polredabs(,16), or polredabs([T, nfbasis([T, listP])]) (In all three cases, the result is no longer canonical.) Warning 2. Apart from the factorization of the discriminant of T, this routine runs in polynomial time for a fixed degree. But the complexity is exponential in the degree: this routine may be exceedingly slow when the number field has many subfields, hence a lot of elements of small T_2-norm. If you do not need a canonical polynomial, the function polredbest is in general much faster (it runs in polynomial time), and tends to return polynomials with smaller discriminants. The binary digits of flag mean 1: outputs a two-component row vector [P,a], where P is the default output and Mod(a, P) is a root of the original T. 4: gives all polynomials of minimal T_2 norm; of the two polynomials P(x) and ± P(-x), only one is given. 16: Possibly use a suborder of the maximal order, without attempting to certify the result as in Warning 1: we always return a polynomial and never 0. The result is a priori not canonical. ? 
T = x^16 - 136*x^14 + 6476*x^12 - 141912*x^10 + 1513334*x^8 \ - 7453176*x^6 + 13950764*x^4 - 5596840*x^2 + 46225 ? T1 = polredabs(T); T2 = polredbest(T); ? [ norml2(polroots(T1)), norml2(polroots(T2)) ] %3 = [88.0000000, 120.000000] ? [ sizedigit(poldisc(T1)), sizedigit(poldisc(T2)) ] %4 = [75, 67] The library syntax is GEN polredabs0(GEN T, long flag). Instead of the above hardcoded numerical flags, one should use an or-ed combination of * nf_PARTIALFACT: possibly use a suborder of the maximal order, without attempting to certify the result. * nf_ORIG: return [P, a], where Mod(a, P) is a root of T. * nf_RAW: return [P, b], where Mod(b, T) is a root of P. The algebraic integer b is the raw result produced by the small vectors enumeration in the maximal order; P was computed as the characteristic polynomial of Mod(b, T). Mod(a, P) as in nf_ORIG is obtained with modreverse. * nf_ADDZK: if r is the result produced with some of the above flags (of the form P or [P,c]), return [r,zk], where zk is a Z-basis for the maximal order of Q[X]/(P). * nf_ALL: return a vector of results of the above form, for all polynomials of minimal T_2-norm. #### polredbest(T,{flag = 0}) Finds a polynomial with reasonably small coefficients defining the same number field as T. All T accepted by nfinit are also allowed here (e.g. non-monic polynomials, nf, bnf, [T,Z_K_basis]). Contrary to polredabs, this routine runs in polynomial time, but it offers no guarantee as to the minimality of its result. This routine computes an LLL-reduced basis for the ring of integers of Q[X]/(T), then examines small linear combinations of the basis vectors, computing their characteristic polynomials. It returns the separable P polynomial of smallest discriminant (the one with lexicographically smallest abs(Vec(P)) in case of ties). This is a good candidate for subsequent number field computations, since it guarantees that the denominators of algebraic integers, when expressed in the power basis, are reasonably small. With no claim of minimality, though. It can happen that iterating this functions yields better and better polynomials, until it stabilizes: ? \p5 ? P = X^12+8*X^8-50*X^6+16*X^4-3069*X^2+625; ? poldisc(P)*1. %2 = 1.2622 E55 ? P = polredbest(P); ? poldisc(P)*1. %4 = 2.9012 E51 ? P = polredbest(P); ? poldisc(P)*1. %6 = 8.8704 E44 In this example, the initial polynomial P is the one returned by polredabs, and the last one is stable. If flag = 1: outputs a two-component row vector [P,a], where P is the default output and Mod(a, P) is a root of the original T. ? [P,a] = polredbest(x^4 + 8, 1) %1 = [x^4 + 2, Mod(x^3, x^4 + 2)] ? charpoly(a) %2 = x^4 + 8 In particular, the map Q[x]/(T) \to Q[x]/(P), x|--->Mod(a,P) defines an isomorphism of number fields, which can be computed as subst(lift(Q), 'x, a) if Q is a t_POLMOD modulo T; b = modreverse(a) returns a t_POLMOD giving the inverse of the above map (which should be useless since Q[x]/(P) is a priori a better representation for the number field and its elements). The library syntax is GEN polredbest(GEN T, long flag). #### polredord(x) Finds polynomials with reasonably small coefficients and of the same degree as that of x defining suborders of the order defined by x. One of the polynomials always defines Q (hence is equal to (x-1)^n, where n is the degree), and another always defines the same order as x if x is irreducible. Useless function: try polredbest. The library syntax is GEN polredord(GEN x). 
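Returning to polredbest: the following small sketch (the element Q below is an arbitrary choice, used only for illustration) checks that the map x :---> Mod(a,P) described above preserves characteristic polynomials, as an isomorphism of number fields must:

? [P, a] = polredbest(x^4 + 8, 1);
? Q  = Mod(x^2 + 3, x^4 + 8);      \\ an arbitrary element of Q[x]/(x^4 + 8)
? Qp = subst(lift(Q), 'x, a);      \\ its image in Q[x]/(P)
? charpoly(Q) == charpoly(Qp)      \\ should return 1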
#### poltschirnhaus(x) Applies a random Tschirnhausen transformation to the polynomial x, which is assumed to be non-constant and separable, so as to obtain a new equation for the étale algebra defined by x. This is for instance useful when computing resolvents, hence is used by the polgalois function. The library syntax is GEN tschirnhaus(GEN x). #### rnfalgtobasis(rnf,x) Expresses x on the relative integral basis. Here, rnf is a relative number field extension L/K as output by rnfinit, and x an element of L in absolute form, i.e. expressed as a polynomial or polmod with polmod coefficients, not on the relative integral basis. The library syntax is GEN rnfalgtobasis(GEN rnf, GEN x). #### rnfbasis(bnf,M) Let K the field represented by bnf, as output by bnfinit. M is a projective Z_K-module of rank n (M\otimes K is an n-dimensional K-vector space), given by a pseudo-basis of size n. The routine returns either a true Z_K-basis of M (of size n) if it exists, or an n+1-element generating set of M if not. It is allowed to use an irreducible polynomial P in K[X] instead of M, in which case, M is defined as the ring of integers of K[X]/(P), viewed as a Z_K-module. The library syntax is GEN rnfbasis(GEN bnf, GEN M). #### rnfbasistoalg(rnf,x) Computes the representation of x as a polmod with polmods coefficients. Here, rnf is a relative number field extension L/K as output by rnfinit, and x an element of L expressed on the relative integral basis. The library syntax is GEN rnfbasistoalg(GEN rnf, GEN x). #### rnfcharpoly(nf,T,a,{var = 'x}) Characteristic polynomial of a over nf, where a belongs to the algebra defined by T over nf, i.e. nf[X]/(T). Returns a polynomial in variable v (x by default). ? nf = nfinit(y^2+1); ? rnfcharpoly(nf, x^2+y*x+1, x+y) %2 = x^2 + Mod(-y, y^2 + 1)*x + 1 The library syntax is GEN rnfcharpoly(GEN nf, GEN T, GEN a, long var = -1), where var is a variable number. #### rnfconductor(bnf,pol) Given bnf as output by bnfinit, and pol a relative polynomial defining an Abelian extension, computes the class field theory conductor of this Abelian extension. The result is a 3-component vector [conductor,rayclgp,subgroup], where conductor is the conductor of the extension given as a 2-component row vector [f_0,f_ oo ], rayclgp is the full ray class group corresponding to the conductor given as a 3-component vector [h,cyc,gen] as usual for a group, and subgroup is a matrix in HNF defining the subgroup of the ray class group on the given generators gen. The library syntax is GEN rnfconductor(GEN bnf, GEN pol). #### rnfdedekind(nf,pol,{pr},{flag = 0}) Given a number field K coded by nf and a monic polynomial P belongs to Z_K[X], irreducible over K and thus defining a relative extension L of K, applies Dedekind's criterion to the order Z_K[X]/(P), at the prime ideal pr. It is possible to set pr to a vector of prime ideals (test maximality at all primes in the vector), or to omit altogether, in which case maximality at all primes is tested; in this situation flag is automatically set to 1. The default historic behavior (flag is 0 or omitted and pr is a single prime ideal) is not so useful since rnfpseudobasis gives more information and is generally not that much slower. It returns a 3-component vector [max, basis, v]: * basis is a pseudo-basis of an enlarged order O produced by Dedekind's criterion, containing the original order Z_K[X]/(P) with index a power of pr. Possibly equal to the original order. 
* max is a flag equal to 1 if the enlarged order O could be proven to be pr-maximal and to 0 otherwise; it may still be maximal in the latter case if pr is ramified in L, * v is the valuation at pr of the order discriminant. If flag is non-zero, on the other hand, we just return 1 if the order Z_K[X]/(P) is pr-maximal (resp. maximal at all relevant primes, as described above), and 0 if not. This is much faster than the default, since the enlarged order is not computed. ? nf = nfinit(y^2-3); P = x^3 - 2*y; ? pr3 = idealprimedec(nf,3)[1]; ? rnfdedekind(nf, P, pr3) %2 = [1, [[1, 0, 0; 0, 1, 0; 0, 0, 1], [1, 1, 1]], 8] ? rnfdedekind(nf, P, pr3, 1) %3 = 1 In this example, pr3 is the ramified ideal above 3, and the order generated by the cube roots of y is already pr3-maximal. The order-discriminant has valuation 8. On the other hand, the order is not maximal at the prime above 2: ? pr2 = idealprimedec(nf,2)[1]; ? rnfdedekind(nf, P, pr2, 1) %5 = 0 ? rnfdedekind(nf, P, pr2) %6 = [0, [[2, 0, 0; 0, 1, 0; 0, 0, 1], [[1, 0; 0, 1], [1, 0; 0, 1], [1, 1/2; 0, 1/2]]], 2] The enlarged order is not proven to be pr2-maximal yet. In fact, it is; it is in fact the maximal order: ? B = rnfpseudobasis(nf, P) %7 = [[1, 0, 0; 0, 1, 0; 0, 0, 1], [1, 1, [1, 1/2; 0, 1/2]], [162, 0; 0, 162], -1] ? idealval(nf,B[3], pr2) %4 = 2 It is possible to use this routine with non-monic P = sum_{i <= n} a_i X^i belongs to Z_K[X] if flag = 1; in this case, we test maximality of Dedekind's order generated by 1, a_n alpha, a_nalpha^2 + a_{n-1}alpha,..., a_nalpha^{n-1} + a_{n-1}alpha^{n-2} +...+ a_1alpha. The routine will fail if P is 0 on the projective line over the residue field Z_K/pr (FIXME). The library syntax is GEN rnfdedekind(GEN nf, GEN pol, GEN pr = NULL, long flag). #### rnfdet(nf,M) Given a pseudo-matrix M over the maximal order of nf, computes its determinant. The library syntax is GEN rnfdet(GEN nf, GEN M). #### rnfdisc(nf,pol) Given a number field nf as output by nfinit and a polynomial pol with coefficients in nf defining a relative extension L of nf, computes the relative discriminant of L. This is a two-element row vector [D,d], where D is the relative ideal discriminant and d is the relative discriminant considered as an element of nf^*/{nf^*}^2. The main variable of nf must be of lower priority than that of pol, see Section [Label: se:priority]. The library syntax is GEN rnfdiscf(GEN nf, GEN pol). #### rnfeltabstorel(rnf,x) rnf being a relative number field extension L/K as output by rnfinit and x being an element of L expressed as a polynomial modulo the absolute equation rnf.pol, computes x as an element of the relative extension L/K as a polmod with polmod coefficients. ? K = nfinit(y^2+1); L = rnfinit(K, x^2-y); ? L.pol %2 = x^4 + 1 ? rnfeltabstorel(L, Mod(x, L.pol)) %3 = Mod(x, x^2 + Mod(-y, y^2 + 1)) ? rnfeltabstorel(L, Mod(2, L.pol)) %4 = 2 ? rnfeltabstorel(L, Mod(x, x^2-y)) *** at top-level: rnfeltabstorel(L,Mod *** ^-------------------- *** rnfeltabstorel: inconsistent moduli in rnfeltabstorel: x^2-y != x^4+1 The library syntax is GEN rnfeltabstorel(GEN rnf, GEN x). #### rnfeltdown(rnf,x) rnf being a relative number field extension L/K as output by rnfinit and x being an element of L expressed as a polynomial or polmod with polmod coefficients, computes x as an element of K as a polmod, assuming x is in K (otherwise a domain error occurs). ? K = nfinit(y^2+1); L = rnfinit(K, x^2-y); ? L.pol %2 = x^4 + 1 ? rnfeltdown(L, Mod(x^2, L.pol)) %3 = Mod(y, y^2 + 1) ? 
rnfeltdown(L, Mod(y, x^2-y)) %4 = Mod(y, y^2 + 1) ? rnfeltdown(L, Mod(y,K.pol)) %5 = Mod(y, y^2 + 1) ? rnfeltdown(L, Mod(x, L.pol)) *** at top-level: rnfeltdown(L,Mod(x,x *** ^-------------------- *** rnfeltdown: domain error in rnfeltdown: element not in the base field The library syntax is GEN rnfeltdown(GEN rnf, GEN x). #### rnfeltnorm(rnf,x) rnf being a relative number field extension L/K as output by rnfinit and x being an element of L, returns the relative norm N_{L/K}(x) as an element of K. ? K = nfinit(y^2+1); L = rnfinit(K, x^2-y); ? rnfeltnorm(L, Mod(x, L.pol)) %2 = Mod(x, x^2 + Mod(-y, y^2 + 1)) ? rnfeltnorm(L, 2) %3 = 4 ? rnfeltnorm(L, Mod(x, x^2-y)) The library syntax is GEN rnfeltnorm(GEN rnf, GEN x). #### rnfeltreltoabs(rnf,x) rnf being a relative number field extension L/K as output by rnfinit and x being an element of L expressed as a polynomial or polmod with polmod coefficients, computes x as an element of the absolute extension L/Q as a polynomial modulo the absolute equation rnf.pol. ? K = nfinit(y^2+1); L = rnfinit(K, x^2-y); ? L.pol %2 = x^4 + 1 ? rnfeltreltoabs(L, Mod(x, L.pol)) %3 = Mod(x, x^4 + 1) ? rnfeltreltoabs(L, Mod(y, x^2-y)) %4 = Mod(x^2, x^4 + 1) ? rnfeltreltoabs(L, Mod(y,K.pol)) %5 = Mod(x^2, x^4 + 1) The library syntax is GEN rnfeltreltoabs(GEN rnf, GEN x). #### rnfelttrace(rnf,x) rnf being a relative number field extension L/K as output by rnfinit and x being an element of L, returns the relative trace N_{L/K}(x) as an element of K. ? K = nfinit(y^2+1); L = rnfinit(K, x^2-y); ? rnfelttrace(L, Mod(x, L.pol)) %2 = 0 ? rnfelttrace(L, 2) %3 = 4 ? rnfelttrace(L, Mod(x, x^2-y)) The library syntax is GEN rnfelttrace(GEN rnf, GEN x). #### rnfeltup(rnf,x) rnf being a relative number field extension L/K as output by rnfinit and x being an element of K, computes x as an element of the absolute extension L/Q as a polynomial modulo the absolute equation rnf.pol. ? K = nfinit(y^2+1); L = rnfinit(K, x^2-y); ? L.pol %2 = x^4 + 1 ? rnfeltup(L, Mod(y, K.pol)) %4 = Mod(x^2, x^4 + 1) ? rnfeltup(L, y) %5 = Mod(x^2, x^4 + 1) ? rnfeltup(L, [1,2]~) \\ in terms of K.zk %6 = Mod(2*x^2 + 1, x^4 + 1) The library syntax is GEN rnfeltup(GEN rnf, GEN x). #### rnfequation(nf,pol,{flag = 0}) Given a number field nf as output by nfinit (or simply a polynomial) and a polynomial pol with coefficients in nf defining a relative extension L of nf, computes an absolute equation of L over Q. The main variable of nf must be of lower priority than that of pol (see Section [Label: se:priority]). Note that for efficiency, this does not check whether the relative equation is irreducible over nf, but only if it is squarefree. If it is reducible but squarefree, the result will be the absolute equation of the étale algebra defined by pol. If pol is not squarefree, raise an e_DOMAIN exception. ? rnfequation(y^2+1, x^2 - y) %1 = x^4 + 1 ? T = y^3-2; rnfequation(nfinit(T), (x^3-2)/(x-Mod(y,T))) %2 = x^6 + 108 \\ Galois closure of Q(2^(1/3)) If flag is non-zero, outputs a 3-component row vector [z,a,k], where * z is the absolute equation of L over Q, as in the default behavior, * a expresses as a t_POLMOD modulo z a root alpha of the polynomial defining the base field nf, * k is a small integer such that theta = beta+kalpha is a root of z, where beta is a root of pol. ? T = y^3-2; pol = x^2 +x*y + y^2; ? [z,a,k] = rnfequation(T, pol, 1); ? z %4 = x^6 + 108 ? subst(T, y, a) %5 = 0 ? alpha= Mod(y, T); ? beta = Mod(x*Mod(1,T), pol); ? 
subst(z, x, beta + k*alpha) %8 = 0 The library syntax is GEN rnfequation0(GEN nf, GEN pol, long flag). Also available are GEN rnfequation(GEN nf, GEN pol) (flag = 0) and GEN rnfequation2(GEN nf, GEN pol) (flag = 1). #### rnfhnfbasis(bnf,x) Given bnf as output by bnfinit, and either a polynomial x with coefficients in bnf defining a relative extension L of bnf, or a pseudo-basis x of such an extension, gives either a true bnf-basis of L in upper triangular Hermite normal form, if it exists, and returns 0 otherwise. The library syntax is GEN rnfhnfbasis(GEN bnf, GEN x). #### rnfidealabstorel(rnf,x) Let rnf be a relative number field extension L/K as output by rnfinit and x be an ideal of the absolute extension L/Q given by a Z-basis of elements of L. Returns the relative pseudo-matrix in HNF giving the ideal x considered as an ideal of the relative extension L/K, i.e. as a Z_K-module. The reason why the input does not use the customary HNF in terms of a fixed Z-basis for Z_L is precisely that no such basis has been explicitly specified. On the other hand, if you already computed an (absolute) nf structure Labs associated to L, and m is in HNF, defining an (absolute) ideal with respect to the Z-basis Labs.zk, then Labs.zk * m is a suitable Z-basis for the ideal, and rnfidealabstorel(rnf, Labs.zk * m) converts m to a relative ideal. ? K = nfinit(y^2+1); L = rnfinit(K, x^2-y); Labs = nfinit(L.pol); ? m = idealhnf(Labs, 17, x^3+2); ? B = rnfidealabstorel(L, Labs.zk * m) %3 = [[1, 8; 0, 1], [[17, 4; 0, 1], 1]] \\ pseudo-basis for m as Z_K-module ? A = rnfidealreltoabs(L, B) %4 = [17, x^2 + 4, x + 8, x^3 + 8*x^2] \\ Z-basis for m in Q[x]/(L.pol) ? mathnf(matalgtobasis(Labs, A)) %5 = [17 8 4 2] [ 0 1 0 0] [ 0 0 1 0] [ 0 0 0 1] ? % == m %6 = 1 The library syntax is GEN rnfidealabstorel(GEN rnf, GEN x). #### rnfidealdown(rnf,x) Let rnf be a relative number field extension L/K as output by rnfinit, and x an ideal of L, given either in relative form or by a Z-basis of elements of L (see Section [Label: se:rnfidealabstorel]). This function returns the ideal of K below x, i.e. the intersection of x with K. The library syntax is GEN rnfidealdown(GEN rnf, GEN x). #### rnfidealhnf(rnf,x) rnf being a relative number field extension L/K as output by rnfinit and x being a relative ideal (which can be, as in the absolute case, of many different types, including of course elements), computes the HNF pseudo-matrix associated to x, viewed as a Z_K-module. The library syntax is GEN rnfidealhnf(GEN rnf, GEN x). #### rnfidealmul(rnf,x,y) rnf being a relative number field extension L/K as output by rnfinit and x and y being ideals of the relative extension L/K given by pseudo-matrices, outputs the ideal product, again as a relative ideal. The library syntax is GEN rnfidealmul(GEN rnf, GEN x, GEN y). #### rnfidealnormabs(rnf,x) Let rnf be a relative number field extension L/K as output by rnfinit and let x be a relative ideal (which can be, as in the absolute case, of many different types, including of course elements). This function computes the norm of the x considered as an ideal of the absolute extension L/Q. This is identical to idealnorm(rnf, rnfidealnormrel(rnf,x)) but faster. The library syntax is GEN rnfidealnormabs(GEN rnf, GEN x). #### rnfidealnormrel(rnf,x) Let rnf be a relative number field extension L/K as output by rnfinit and let x be a relative ideal (which can be, as in the absolute case, of many different types, including of course elements). 
This function computes the relative norm of x as an ideal of K in HNF. The library syntax is GEN rnfidealnormrel(GEN rnf, GEN x).

#### rnfidealreltoabs(rnf,x)

Let rnf be a relative number field extension L/K as output by rnfinit and let x be a relative ideal, given as a Z_K-module by a pseudo-matrix [A,I]. This function returns the ideal x as an absolute ideal of L/Q in the form of a Z-basis, given by a vector of polynomials (modulo rnf.pol). The reason why we do not return the customary HNF in terms of a fixed Z-basis for Z_L is precisely that no such basis has been explicitly specified. On the other hand, if you already computed an (absolute) nf structure Labs associated to L, then

xabs = rnfidealreltoabs(L, x);
xLabs = mathnf(matalgtobasis(Labs, xabs));

computes a traditional HNF xLabs for x in terms of the fixed Z-basis Labs.zk. The library syntax is GEN rnfidealreltoabs(GEN rnf, GEN x).

#### rnfidealtwoelt(rnf,x)

rnf being a relative number field extension L/K as output by rnfinit and x being an ideal of the relative extension L/K given by a pseudo-matrix, gives a vector of two generators of x over Z_L expressed as polmods with polmod coefficients. The library syntax is GEN rnfidealtwoelement(GEN rnf, GEN x).

#### rnfidealup(rnf,x)

Let rnf be a relative number field extension L/K as output by rnfinit and let x be an ideal of K. This function returns the ideal xZ_L as an absolute ideal of L/Q, in the form of a Z-basis, given by a vector of polynomials (modulo rnf.pol). The reason why we do not return the customary HNF in terms of a fixed Z-basis for Z_L is precisely that no such basis has been explicitly specified. On the other hand, if you already computed an (absolute) nf structure Labs associated to L, then

xabs = rnfidealup(L, x);
xLabs = mathnf(matalgtobasis(Labs, xabs));

computes a traditional HNF xLabs for x in terms of the fixed Z-basis Labs.zk. The library syntax is GEN rnfidealup(GEN rnf, GEN x).

#### rnfinit(nf,pol)

nf being a number field in nfinit format considered as base field, and pol a polynomial defining a relative extension over nf, this computes data to work in the relative extension. The main variable of pol must be of higher priority (see Section [Label: se:priority]) than that of nf, and the coefficients of pol must be in nf. The result is a row vector, whose components are technical. In the following description, we let K be the base field defined by nf and L/K the large field associated to the rnf. Furthermore, we let m = [K:Q] be the degree of the base field, n = [L:K] the relative degree, and r_1 and r_2 the number of real and complex places of K. Access to this information via member functions is preferred since the specific data organization specified below will change in the future.

rnf[1] (rnf.pol) contains the relative polynomial pol.

rnf[2] contains the integer basis [A,d] of K, as (integral) elements of L/Q. More precisely, A is a vector of polynomials with integer coefficients, d is a denominator, and the integer basis is given by A/d.

rnf[3] (rnf.disc) is a two-component row vector [d(L/K),s] where d(L/K) is the relative ideal discriminant of L/K and s is the discriminant of L/K viewed as an element of K^*/(K^*)^2; in other words it is the output of rnfdisc.

rnf[4] (rnf.index) is the ideal index f, i.e. such that d(pol)Z_K = f^2 d(L/K).

rnf[5] is currently unused.

rnf[6] is currently unused.
rnf[7] (rnf.zk) is the pseudo-basis (A,I) for the maximal order Z_L as a Z_K-module: A is the relative integral pseudo basis expressed as polynomials (in the variable of pol) with polmod coefficients in nf, and the second component I is the ideal list of the pseudobasis in HNF. rnf[8] is the inverse matrix of the integral basis matrix, with coefficients polmods in nf. rnf[9] is currently unused. rnf[10] (rnf.nf) is nf. rnf[11] is the output of rnfequation(K, pol, 1). Namely, a vector [P, a, k] describing the absolute extension L/Q: P is an absolute equation, more conveniently obtained as rnf.polabs; a expresses the generator alpha = y mod K.pol of the number field K as an element of L, i.e. a polynomial modulo the absolute equation P; k is a small integer such that, if beta is an abstract root of pol and alpha the generator of K given above, then P(beta + kalpha) = 0. Caveat.. Be careful if k != 0 when dealing simultaneously with absolute and relative quantities since L = Q(beta + kalpha) = K(alpha), and the generator chosen for the absolute extension is not the same as for the relative one. If this happens, one can of course go on working, but we advise to change the relative polynomial so that its root becomes beta + k alpha. Typical GP instructions would be [P,a,k] = rnfequation(K, pol, 1); if (k, pol = subst(pol, x, x - k*Mod(y, K.pol))); L = rnfinit(K, pol); rnf[12] is by default unused and set equal to 0. This field is used to store further information about the field as it becomes available (which is rarely needed, hence would be too expensive to compute during the initial rnfinit call). The library syntax is GEN rnfinit(GEN nf, GEN pol). #### rnfisabelian(nf,T) T being a relative polynomial with coefficients in nf, return 1 if it defines an abelian extension, and 0 otherwise. ? K = nfinit(y^2 + 23); ? rnfisabelian(K, x^3 - 3*x - y) %2 = 1 The library syntax is long rnfisabelian(GEN nf, GEN T). #### rnfisfree(bnf,x) Given bnf as output by bnfinit, and either a polynomial x with coefficients in bnf defining a relative extension L of bnf, or a pseudo-basis x of such an extension, returns true (1) if L/bnf is free, false (0) if not. The library syntax is long rnfisfree(GEN bnf, GEN x). #### rnfisnorm(T,a,{flag = 0}) Similar to bnfisnorm but in the relative case. T is as output by rnfisnorminit applied to the extension L/K. This tries to decide whether the element a in K is the norm of some x in the extension L/K. The output is a vector [x,q], where a = Norm(x)*q. The algorithm looks for a solution x which is an S-integer, with S a list of places of K containing at least the ramified primes, the generators of the class group of L, as well as those primes dividing a. If L/K is Galois, then this is enough; otherwise, flag is used to add more primes to S: all the places above the primes p <= flag (resp. p|flag) if flag > 0 (resp. flag < 0). The answer is guaranteed (i.e. a is a norm iff q = 1) if the field is Galois, or, under GRH, if S contains all primes less than 12log^2|disc(M)|, where M is the normal closure of L/K. If rnfisnorminit has determined (or was told) that L/K is Galois, and flag != 0, a Warning is issued (so that you can set flag = 1 to check whether L/K is known to be Galois, according to T). 
Example: bnf = bnfinit(y^3 + y^2 - 2*y - 1); p = x^2 + Mod(y^2 + 2*y + 1, bnf.pol); T = rnfisnorminit(bnf, p); rnfisnorm(T, 17) checks whether 17 is a norm in the Galois extension Q(beta) / Q(alpha), where alpha^3 + alpha^2 - 2alpha - 1 = 0 and beta^2 + alpha^2 + 2alpha + 1 = 0 (it is). The library syntax is GEN rnfisnorm(GEN T, GEN a, long flag). #### rnfisnorminit(pol,polrel,{flag = 2}) Let K be defined by a root of pol, and L/K the extension defined by the polynomial polrel. As usual, pol can in fact be an nf, or bnf, etc; if pol has degree 1 (the base field is Q), polrel is also allowed to be an nf, etc. Computes technical data needed by rnfisnorm to solve norm equations Nx = a, for x in L, and a in K. If flag = 0, do not care whether L/K is Galois or not. If flag = 1, L/K is assumed to be Galois (unchecked), which speeds up rnfisnorm. If flag = 2, let the routine determine whether L/K is Galois. The library syntax is GEN rnfisnorminit(GEN pol, GEN polrel, long flag). #### rnfkummer(bnr,{subgp},{d = 0}) bnr being as output by bnrinit, finds a relative equation for the class field corresponding to the module in bnr and the given congruence subgroup (the full ray class field if subgp is omitted). If d is positive, outputs the list of all relative equations of degree d contained in the ray class field defined by bnr, with the same conductor as (bnr, subgp). Warning. This routine only works for subgroups of prime index. It uses Kummer theory, adjoining necessary roots of unity (it needs to compute a tough bnfinit here), and finds a generator via Hecke's characterization of ramification in Kummer extensions of prime degree. If your extension does not have prime degree, for the time being, you have to split it by hand as a tower / compositum of such extensions. The library syntax is GEN rnfkummer(GEN bnr, GEN subgp = NULL, long d, long prec). #### rnflllgram(nf,pol,order) Given a polynomial pol with coefficients in nf defining a relative extension L and a suborder order of L (of maximal rank), as output by rnfpseudobasis(nf,pol) or similar, gives [[neworder],U], where neworder is a reduced order and U is the unimodular transformation matrix. The library syntax is GEN rnflllgram(GEN nf, GEN pol, GEN order, long prec). #### rnfnormgroup(bnr,pol) bnr being a big ray class field as output by bnrinit and pol a relative polynomial defining an Abelian extension, computes the norm group (alias Artin or Takagi group) corresponding to the Abelian extension of bnf = bnr.bnf defined by pol, where the module corresponding to bnr is assumed to be a multiple of the conductor (i.e. pol defines a subextension of bnr). The result is the HNF defining the norm group on the given generators of bnr.gen. Note that neither the fact that pol defines an Abelian extension nor the fact that the module is a multiple of the conductor is checked. The result is undefined if the assumption is not correct. The library syntax is GEN rnfnormgroup(GEN bnr, GEN pol). #### rnfpolred(nf,pol) THIS FUNCTION IS OBSOLETE: use rnfpolredbest instead. Relative version of polred. Given a monic polynomial pol with coefficients in nf, finds a list of relative polynomials defining some subfields, hopefully simpler and containing the original field. In the present version 2.7.0, this is slower and less efficient than rnfpolredbest. Remark. this function is based on an incomplete reduction theory of lattices over number fields, implemented by rnflllgram, which deserves to be improved. 
The library syntax is GEN rnfpolred(GEN nf, GEN pol, long prec).

#### rnfpolredabs(nf,pol,{flag = 0})

THIS FUNCTION IS OBSOLETE: use rnfpolredbest instead. Relative version of polredabs. Given a monic polynomial pol with coefficients in nf, finds a simpler relative polynomial defining the same field.

The binary digits of flag correspond to 1: add information to convert elements to the new representation, 2: absolute polynomial, instead of relative, 16: possibly use a suborder of the maximal order. More precisely:

0: default, return P.

1: returns [P,a] where P is the default output and a, a t_POLMOD modulo P, is a root of pol.

2: returns Pabs, an absolute, instead of a relative, polynomial. Same as but faster than rnfequation(nf, rnfpolredabs(nf,pol)).

3: returns [Pabs,a,b], where Pabs is an absolute polynomial as above, a, b are t_POLMOD modulo Pabs, roots of nf.pol and pol respectively.

16: possibly use a suborder of the maximal order. This is slower than the default when the relative discriminant is smooth, and much faster otherwise. See Section [Label: se:polredabs].

Warning. In the present implementation, rnfpolredabs produces smaller polynomials than rnfpolred and is usually faster, but its complexity is still exponential in the absolute degree. The function rnfpolredbest runs in polynomial time, and tends to return polynomials with smaller discriminants.

The library syntax is GEN rnfpolredabs(GEN nf, GEN pol, long flag).

#### rnfpolredbest(nf,pol,{flag = 0})

Relative version of polredbest. Given a monic polynomial pol with coefficients in nf, finds a simpler relative polynomial P defining the same field. As opposed to rnfpolredabs this function does not return a smallest (canonical) polynomial with respect to some measure, but it does run in polynomial time.

The binary digits of flag correspond to 1: add information to convert elements to the new representation, 2: absolute polynomial, instead of relative. More precisely:

0: default, return P.

1: returns [P,a] where P is the default output and a, a t_POLMOD modulo P, is a root of pol.

2: returns Pabs, an absolute, instead of a relative, polynomial. Same as but faster than rnfequation(nf, rnfpolredbest(nf,pol)).

3: returns [Pabs,a,b], where Pabs is an absolute polynomial as above, a, b are t_POLMOD modulo Pabs, roots of nf.pol and pol respectively.

? K = nfinit(y^3-2); pol = x^2 + x*y + y^2;
? [P, a] = rnfpolredbest(K,pol,1);
? P
%3 = x^2 - x + Mod(y - 1, y^3 - 2)
? a
%4 = Mod(Mod(2*y^2+3*y+4,y^3-2)*x + Mod(-y^2-2*y-2,y^3-2), x^2 - x + Mod(y-1,y^3-2))
? subst(K.pol,y,a)
%5 = 0
? [Pabs, a, b] = rnfpolredbest(K,pol,3);
? Pabs
%7 = x^6 - 3*x^5 + 5*x^3 - 3*x + 1
? a
%8 = Mod(-x^2+x+1, x^6-3*x^5+5*x^3-3*x+1)
? b
%9 = Mod(2*x^5-5*x^4-3*x^3+10*x^2+5*x-5, x^6-3*x^5+5*x^3-3*x+1)
? subst(K.pol,y,a)
%10 = 0
? substvec(pol,[x,y],[a,b])
%11 = 0

The library syntax is GEN rnfpolredbest(GEN nf, GEN pol, long flag).

#### rnfpseudobasis(nf,pol)

Given a number field nf as output by nfinit and a polynomial pol with coefficients in nf defining a relative extension L of nf, computes a pseudo-basis (A,I) for the maximal order Z_L viewed as a Z_K-module, and the relative discriminant of L. This is output as a four-element row vector [A,I,D,d], where D is the relative ideal discriminant and d is the relative discriminant considered as an element of nf^*/{nf^*}^2. The library syntax is GEN rnfpseudobasis(GEN nf, GEN pol).
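For instance, a minimal usage sketch revisiting the extension from the rnfdedekind example above (the components can be read off the output %7 displayed there):

? K = nfinit(y^2 - 3);
? B = rnfpseudobasis(K, x^3 - 2*y);   \\ the four-element vector [A, I, D, d]
? D = B[3]    \\ relative ideal discriminant; [162, 0; 0, 162] in the example above
? d = B[4]    \\ relative discriminant modulo squares; -1 above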
#### rnfsteinitz(nf,x) Given a number field nf as output by nfinit and either a polynomial x with coefficients in nf defining a relative extension L of nf, or a pseudo-basis x of such an extension as output for example by rnfpseudobasis, computes another pseudo-basis (A,I) (not in HNF in general) such that all the ideals of I except perhaps the last one are equal to the ring of integers of nf, and outputs the four-component row vector [A,I,D,d] as in rnfpseudobasis. The name of this function comes from the fact that the ideal class of the last ideal of I, which is well defined, is the Steinitz class of the Z_K-module Z_L (its image in SK_0(Z_K)). The library syntax is GEN rnfsteinitz(GEN nf, GEN x). #### subgrouplist(bnr,{bound},{flag = 0}) bnr being as output by bnrinit or a list of cyclic components of a finite Abelian group G, outputs the list of subgroups of G. Subgroups are given as HNF left divisors of the SNF matrix corresponding to G. If flag = 0 (default) and bnr is as output by bnrinit, gives only the subgroups whose modulus is the conductor. Otherwise, the modulus is not taken into account. If bound is present, and is a positive integer, restrict the output to subgroups of index less than bound. If bound is a vector containing a single positive integer B, then only subgroups of index exactly equal to B are computed. For instance ? subgrouplist([6,2]) %1 = [[6, 0; 0, 2], [2, 0; 0, 2], [6, 3; 0, 1], [2, 1; 0, 1], [3, 0; 0, 2], [1, 0; 0, 2], [6, 0; 0, 1], [2, 0; 0, 1], [3, 0; 0, 1], [1, 0; 0, 1]] ? subgrouplist([6,2],3) \\ index less than 3 %2 = [[2, 1; 0, 1], [1, 0; 0, 2], [2, 0; 0, 1], [3, 0; 0, 1], [1, 0; 0, 1]] ? subgrouplist([6,2],[3]) \\ index 3 %3 = [[3, 0; 0, 1]] ? bnr = bnrinit(bnfinit(x), [120,[1]], 1); ? L = subgrouplist(bnr, [8]); In the last example, L corresponds to the 24 subfields of Q(zeta_{120}), of degree 8 and conductor 120 oo (by setting flag, we see there are a total of 43 subgroups of degree 8). ? vector(#L, i, galoissubcyclo(bnr, L[i])) will produce their equations. (For a general base field, you would have to rely on bnrstark, or rnfkummer.) The library syntax is GEN subgrouplist0(GEN bnr, GEN bound = NULL, long flag). #### zetak(nfz,x,{flag = 0}) znf being a number field initialized by zetakinit (not by nfinit), computes the value of the Dedekind zeta function of the number field at the complex number x. If flag = 1 computes Dedekind Lambda function instead (i.e. the product of the Dedekind zeta function by its gamma and exponential factors). CAVEAT. This implementation is not satisfactory and must be rewritten. In particular * The accuracy of the result depends in an essential way on the accuracy of both the zetakinit program and the current accuracy. Be wary in particular that x of large imaginary part or, on the contrary, very close to an ordinary integer will suffer from precision loss, yielding fewer significant digits than expected. Computing with 28 digits of relative accuracy, we have ? zeta(3) %1 = 1.202056903159594285399738161 ? zeta(3-1e-20) %2 = 1.202056903159594285401719424 ? zetak(zetakinit(x), 3-1e-20) %3 = 1.2020569031595952919 \\ 5 digits are wrong ? zetak(zetakinit(x), 3-1e-28) %4 = -25.33411749 \\ junk * As the precision increases, results become unexpectedly completely wrong: ? \p100 ? zetak(zetakinit(x^2-5), -1) - 1/30 %1 = 7.26691813 E-108 \\ perfect ? \p150 ? zetak(zetakinit(x^2-5), -1) - 1/30 %2 = -2.486113578 E-156 \\ perfect ? \p200 ? zetak(zetakinit(x^2-5), -1) - 1/30 %3 = 4.47... E-75 \\ more than half of the digits are wrong ? 
\p250 ? zetak(zetakinit(x^2-5), -1) - 1/30 %4 = 1.6 E43 \\ junk

The library syntax is GEN gzetakall(GEN nfz, GEN x, long flag, long prec). See also GEN glambdak(GEN znf, GEN x, long prec) or GEN gzetak(GEN znf, GEN x, long prec).

#### zetakinit(bnf)

Computes the initialization data concerning the number field associated to bnf so as to be able to compute the Dedekind zeta and lambda functions, respectively zetak(x) and zetak(x,1), at the current real precision. If you do not need the bnfinit data somewhere else, you may call it with an irreducible polynomial instead of a bnf: it will call bnfinit itself. The result is a 9-component vector v whose components are very technical and cannot really be used except through the zetak function.

This function is very inefficient and should be rewritten. It needs to compute millions of coefficients of the corresponding Dirichlet series if the precision is big. Unless the discriminant is small it will not be able to handle more than 9 digits of relative precision. For instance, zetakinit(x^8 - 2) needs 440MB of memory at default precision.

This function will fail with the message *** bnrL1: overflow in zeta_get_N0 [need too many primes]. if the approximate functional equation requires us to sum too many terms (if the discriminant of the number field is too large).

The library syntax is GEN initzeta(GEN bnf, long prec).
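For instance, a minimal usage sketch at default precision (output values omitted; the precision caveats of zetak above apply):

? z = zetakinit(x^2 - 5);   \\ Dedekind zeta data for the real quadratic field Q(sqrt(5))
? zetak(z, 2)               \\ zeta_K(2)
? zetak(z, 2, 1)            \\ the completed Lambda function at the same point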
2015-05-07 01:36:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872675359249115, "perplexity": 2701.6762807096043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430460084453.65/warc/CC-MAIN-20150501060124-00036-ip-10-235-10-82.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/44697/why-do-poles-in-the-left-half-of-the-s-plane-make-a-system-stable
# Why do poles in the left half of the S plane make a system stable?

A point on the S-plane (where $s=\sigma+j\omega$) represents a signal with a given frequency (given by the imaginary component) which either decays, grows or stays constant in amplitude (depending on the value of the real component).

Doing the maths, by converting $e^{-st}$ to a cosine and sine pair, I can see that points on the left-hand side of the plane describe signals that increase without bound and points on the right-hand side of the plane describe signals that decay away.

Given this, why is it true that having the poles of a transfer function (the frequency values which make the gain of the system infinite) lying in the left-hand side of the S-plane (the side which makes signals increase to infinity) makes a system stable?

• H(s) versus H(-s) – user28715 Oct 26 '17 at 15:29

An $n$th order linear system is asymptotically stable only if all of the components in the homogeneous response from a finite set of initial conditions decay to zero as time increases, or mathematically written: $\displaystyle{\lim_{t \to \infty}}\sum_{i=1}^{n}C_ie^{p_it}=0$ (where the $p_i$ are the system poles). As time increases (in a stable system) all components of the homogeneous response must decay to zero. If any pole has a positive real part there is a component in the output that increases without bound, causing the system to be unstable. So, in order for a linear system to be stable, all of its poles must have negative real parts (they must all lie within the left half of the s-plane). An "unstable" pole, lying in the right half of the s-plane, generates a component in the system's homogeneous response that increases without bound from any finite initial conditions.

I think I have answered this: if the points on the S plane that make the system respond infinitely lie in the left-hand side of the S plane then, because points in the left-hand side of the plane increase in amplitude as time progresses, a larger and larger signal is required as time progresses to make the system respond infinitely. Conversely, if a pole exists in the right-hand side of the plane, then as time increases the system will respond infinitely to smaller and smaller amplitudes of this frequency, meaning that as time proceeds an infinitesimally small amount of this frequency will cause the system to saturate.

• Also see bounded-input bounded-output (BIBO) stability for a good definition of stability. Oct 26 '17 at 15:31
• Ok, thanks Olli, I had had a look but there are a lot of terms in there that I don't understand. Was my answer essentially true? Oct 26 '17 at 15:45

On the left-hand side, as t approaches infinity the output approaches infinity (BIBO). But on the right-hand side of the s-plane, apparently as t approaches 0 the output approaches infinity, which is not an ideal case for real-life signals and ultimately makes the control system unstable.
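A short worked example may make this concrete (using the standard convention that the mode attached to a pole at $s = p$ is $e^{pt}$). Take the first-order transfer function $H(s) = \frac{1}{s-p}$ with a single pole at $p = \sigma + j\omega$. Its impulse response is $h(t) = e^{pt}u(t) = e^{\sigma t}(\cos\omega t + j\sin\omega t)\,u(t)$, so $|h(t)| = e^{\sigma t}$. If $\sigma < 0$ (pole in the left half-plane) this decays to zero and $\int_0^\infty |h(t)|\,dt = -1/\sigma$ is finite, which is exactly the BIBO condition; if $\sigma > 0$ it grows without bound. The sign flip in the question comes from writing $e^{-st}$ instead of $e^{st}$: the natural response associated with a pole at $s = p$ is $e^{+pt}$, which is why poles with negative real part give decaying responses.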
2021-10-19 22:17:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8556157946586609, "perplexity": 396.3084055629432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585281.35/warc/CC-MAIN-20211019202148-20211019232148-00023.warc.gz"}
https://pypi.org/project/celestine/
A simple image viewer.

## Project description

View, tag, and organize your photos.

Work in progress. Not currently functional.

By default this is a command line application. It should work anywhere Python 3 code can be run.

The code has several folders dedicated to specific plugins. If a package is not found, that code is not run, and the folder can actually be deleted if desired.

Older description: Decided that the main reason behind this was an attempt to use HTML as my GUI. Most files in this project are dedicated to viewing and modifying local images. Will try to update the project to reflect that.

## Primary Goals

• Is an offline-only application. (The only internet used is when you use pip.)
• Has no required dependencies. (All you need to run this package is Python.)
• Can be customized to use different packages. (See the list below for supported packages.)

### Offline Only

The only internet you need is when you install with pip.

### Custom Install

Choose which packages to use and swap packages as desired.

Secondary goals: - Advanced tag searching. (Most websites have really lousy topic filters.)

What this is not: - This is not a photo editor. - This is not a photo downloader. - This is not a mobile application.

## Requirements

Python | Need | Optional Features | Why
--- | --- | --- | ---
Python | REQUIRED | | This project is written in Python.
tkinter | OPTIONAL | tcl/tk and IDLE | Use this if you don't trust DearPyGui.
unittest | OPTIONAL | Python test suite | Use this to run the tests yourself.

Package | Need | PyPi | Why
--- | --- | --- | ---
Celestine Image Viewer | REQUIRED | | This is the project you are trying to install.
DearPyGui | RECOMMENDED | pip install dearpygui | Without this, you need tkinter or the command line.
Pillow | RECOMMENDED | pip install Pillow | Without this most images won't load.

(Old notes. Still useful. Need to add to table above somehow.)

Recommended: Python >= 3.9 (PEP 584), Pillow >= 7.2.0 (TIFF BYTE tags format)

Recommended Packages: Tkinter: Included in most Python distributions, which means no installation or setup required. Adds a GUI to the application and makes it easier for the average user to use. (Plus now you can actually see the images.)

Optional Packages: Additional GUI libraries and features may be added in the future. Some configuration may be needed to toggle these additional features. Regex parser: Adds support for wildcard searches using '*'.

command core: only the basics here; use on a web server or as an external library

## Inspiration

Safebooru - And the thousands of other booru sites. Board Game Geek - Epic advanced search.
2022-10-04 11:20:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17329490184783936, "perplexity": 6920.831423222357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00424.warc.gz"}
https://sportyr.sportsdataverse.org/articles/sportyR.html
Welcome to sportyR! I’m Ross Drucker, the author of the sportyR package. My aim with this package is to provide high-quality, reliable, baseline plots to use for geospatial analysis of sports data. I’m excited to showcase some of the main functionalities of the package here, as well as continue to develop the package to meet the needs of the sports analytics community. ### Installing R, RStudio, and sportyR (This section courtesy of Saiem Gilani. Give him a follow!) 2. Select the appropriate link for your operating system (Windows, Mac OS X, or Linux) • Mac OS X - Select Latest Release, but check to make sure your OS is the correct version. Look through Binaries for Legacy OS X Systems if you are on an older release • Linux - Select the appropriate distro and follow the installation instructions 3. Start peering over the RStudio IDE Cheatsheet. An IDE is an integrated development environment. 4. For Windows users: I recommend you install Rtools. This is not an R package! It is “a collection of resources for building packages for R under Microsoft Windows, or for building R itself”. Go to https://cran.r-project.org/bin/windows/Rtools/ and follow the directions for installation. sportyR is live on CRAN, and the most recent release can be installed by running: # Install released version from CRAN install.packages("sportyR") If you’re more into the development version of the package, try this: # Install development version from GitHub devtools::install_github("sportsdataverse/sportyR") Once the library is installed, be sure to load it into the working environment. # Required to use package library(sportyR) ### Understanding and Exploring the Package The package itself is really an extension of ggplot2, but the aim is to focus specifically on a sports playing surface. So that begs the question: what sports can we plot using sportyR? You’re in luck: these kinds of questions are natively answered by what I’ve called the cani_{question}() family of functions. They’re designed to answer questions like Can I plot a soccer pitch? or Can I plot a PHF ice rink? and that’s the exact syntax you can follow to have the package answer those questions. Here’s an example: # Find out if you can plot a soccer pitch cani_plot_sport("soccer") #> geom_soccer() can be used to plot for the following leagues: EPL, FIFA, MLS, NCAA, NWSL or # See if a league comes pre-packaged with sportyR cani_plot_league("PHF") #> A plot for PHF can be created via the geom_hockey() function I’ll highlight the fact that these are case-insensitive searches. Ask away to your heart’s content! There’s one other cani_{question}() function I’ll highlight more in a bit, but first let’s start acting on the answers to these kinds of questions. ### The geom_{sport}() Functions Now that we can ask questions to the package and get answers, let’s start using this information to make plots. Say for example we’re interested in drawing a regulation NBA basketball court. sportyR seeks to make this as easy as possible: # Draw a regulation NBA basketball court geom_basketball("nba") Easy as that to get started. Here’s a quick overview of the arguments (which are included for all of the geom_{sport}() functions): • league: This is a required parameter, but custom is a viable value for any sport. As a quick note, using this custom option will require you to specify all parameters of the surface you’re looking to create. This is case-insensitive • display_range: This automatically “zooms” in on the area of the plot you’re interested in. 
Valid ranges here vary by sport, but can be found by calling ?geom_{sport} and reading about the display ranges
• x_trans and y_trans: By default, the origin of the coordinate system always lies at the center of the plot. For example, (0, 0) on a basketball court lies along the division line and on the line that connects the center of each basket. If you want to shift the origin (and therefore the entire plot), use x_trans and y_trans to do so
• {surface_type}_updates: A list of updates to the parameters that define the surface. I'll demo how to use this to change a hockey rink in a different vignette, but I'll call this out here
• color_updates: A list that contains updates to the features' colors on the plot. These are named by what the feature is, using snake_case to specify the names. To get the list of color names you can change, try running cani_color_league_features() with your desired league
• rotation: An angle (in degrees) that you'd like to rotate the plot by, where + is counterclockwise
• xlims and ylims: Any limits you'd like to put on the plot in the x and y direction. These will overwrite anything set by the display_range parameter
• {surface}_units: If your data is in units that are different than how the rule book of the league specifies the units (e.g. you've got NHL data in inches, but the rule book describes the rink in feet), change this parameter to match the units you've got your data in. You're welcome to change the units of the data as well, but this is provided for convenience
2022-10-05 02:24:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3148599863052368, "perplexity": 1909.7076784464914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00256.warc.gz"}
https://scioly.org/forums/viewtopic.php?f=297&t=12385
## Circuit Lab B/C UTF-8 U+6211 U+662F Exalted Member Posts: 1491 Joined: January 18th, 2015, 7:42 am Division: C State: PA ### Circuit Lab B/C Reposting my question from General Chat Copper has one valence electron, with a density of 8.94 grams per cubic centimeter and an atomic weight of approximately 64 grams per mole. Suppose a copper wire has a current density of 18.8 amperes per square millimeter. Find the drift velocity inside the wire. mdv2o5 Member Posts: 19 Joined: July 9th, 2018, 7:24 am State: IN Location: MA ### Re: Circuit Lab B/C Reposting my question from General Chat Copper has one valence electron, with a density of 8.94 grams per cubic centimeter and an atomic weight of approximately 64 grams per mole. Suppose a copper wire has a current density of 18.8 amperes per square millimeter. Find the drift velocity inside the wire. If the answer above is right, then here's the next question: (this one is for Div C according to the new rules) Determine the output voltage as a function of the source voltage, R1, and R2. A note on studying for op amps and also a bit of a hint: Ideally, inverting and non-inverting op amp configurations should be memorized/included on your notes, but it's always good to be able to derive these formulas using basic circuit laws and the ideal op amp assumptions since it's really easy to create an op amp circuit that does not fall neatly into a standard configuration. UTF-8 U+6211 U+662F Exalted Member Posts: 1491 Joined: January 18th, 2015, 7:42 am Division: C State: PA ### Re: Circuit Lab B/C Determine the output voltage as a function of the source voltage, R1, and R2. A note on studying for op amps and also a bit of a hint: Ideally, inverting and non-inverting op amp configurations should be memorized/included on your notes, but it's always good to be able to derive these formulas using basic circuit laws and the ideal op amp assumptions since it's really easy to create an op amp circuit that does not fall neatly into a standard configuration. Interesting introduction problem to op-amps for me! Edit: Small error. See next post for corrected work and answer. Last edited by UTF-8 U+6211 U+662F on September 5th, 2018, 6:05 pm, edited 1 time in total. mdv2o5 Member Posts: 19 Joined: July 9th, 2018, 7:24 am State: IN Location: MA ### Re: Circuit Lab B/C Determine the output voltage as a function of the source voltage, R1, and R2. A note on studying for op amps and also a bit of a hint: Ideally, inverting and non-inverting op amp configurations should be memorized/included on your notes, but it's always good to be able to derive these formulas using basic circuit laws and the ideal op amp assumptions since it's really easy to create an op amp circuit that does not fall neatly into a standard configuration. Interesting introduction problem to op-amps for me! Close! Check the KCL expression again (and remember that I = V/R) UTF-8 U+6211 U+662F Exalted Member Posts: 1491 Joined: January 18th, 2015, 7:42 am Division: C State: PA ### Re: Circuit Lab B/C Close! Check the KCL expression again (and remember that I = V/R) Whoops. I always mess up Ohm's law for some reason (I should really make sure to double check). Take two mdv2o5 Member Posts: 19 Joined: July 9th, 2018, 7:24 am State: IN Location: MA ### Re: Circuit Lab B/C Close! Check the KCL expression again (and remember that I = V/R) Whoops. I always mess up Ohm's law for some reason (I should really make sure to double check). Take two Looks good! A neat little trick... 
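Stepping back to the drift-velocity warm-up at the top of the thread, here is one way to grind through the numbers (an illustrative Python sketch of my own, not the spoilered answers): the drift velocity is $v_d = J/(ne)$, where $n$ is the free-electron density obtained from the mass density, molar mass, and Avogadro's number, assuming one conduction electron per copper atom.

```python
# Drift velocity v_d = J / (n * e) for copper, one free electron per atom assumed.
N_A = 6.022e23                 # atoms per mole
rho = 8.94                     # g per cm^3
M   = 64.0                     # g per mole
e   = 1.602e-19                # C per electron

n = (rho / M) * N_A * 1e6      # electrons per m^3 (1e6 cm^3 in a m^3)
J = 18.8 * 1e6                 # A per m^2 (18.8 A per mm^2)

v_d = J / (n * e)
print(f"n   = {n:.2e} electrons per m^3")
print(f"v_d = {v_d * 1e3:.2f} mm/s")     # works out to roughly 1.4 mm/s
```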
UTF-8 U+6211 U+662F Exalted Member Posts: 1491 Joined: January 18th, 2015, 7:42 am Division: C State: PA ### Re: Circuit Lab B/C A neat little trick... All right, thanks! Onto basic circuit safety: Touching two nodes of a DC voltage of above approximately what voltage can be lethal? What kind of factors affect whether a shock might be lethal? What current range is usually lethal? What are some harmful effects of having a current flow through you that is below the lethal limit (i.e. not enough to get you killed)? Jacobi Exalted Member Posts: 137 Joined: September 4th, 2018, 7:47 am State: - ### Re: Circuit Lab B/C All right, thanks! Onto basic circuit safety: Touching two nodes of a DC voltage of above approximately what voltage can be lethal? What kind of factors affect whether a shock might be lethal? What current range is usually lethal? What are some harmful effects of having a current flow through you that is below the lethal limit (i.e. not enough to get you killed)? UTF-8 U+6211 U+662F Exalted Member Posts: 1491 Joined: January 18th, 2015, 7:42 am Division: C State: PA ### Re: Circuit Lab B/C All right, thanks! Onto basic circuit safety: Touching two nodes of a DC voltage of above approximately what voltage can be lethal? What kind of factors affect whether a shock might be lethal? What current range is usually lethal? What are some harmful effects of having a current flow through you that is below the lethal limit (i.e. not enough to get you killed)? Correct! Although, it's important to note that for current to happen, you do need a voltage. That's why you see signs like "High Voltage" on fences. The rule that current not voltage kills isn't entirely true because of this and is only a guideline. Be very wary of voltage, as it's impossible to know the resistance of your body at any given moment. Jacobi Exalted Member Posts: 137 Joined: September 4th, 2018, 7:47 am State: - ### Re: Circuit Lab B/C Write the voltage drop equations for a capacitor and a resistor. UTF-8 U+6211 U+662F Exalted Member Posts: 1491 Joined: January 18th, 2015, 7:42 am Division: C State: PA ### Re: Circuit Lab B/C Write the voltage drop equations for a capacitor and a resistor. $V_C = \frac{Q}{C}$ $V_R = IR$ Jacobi Exalted Member Posts: 137 Joined: September 4th, 2018, 7:47 am State: - ### Re: Circuit Lab B/C Write the voltage drop equations for a capacitor and a resistor. $V_C = \frac{Q}{C}$ $V_R = IR$ Awesome! You next. Jacobi Exalted Member Posts: 137 Joined: September 4th, 2018, 7:47 am State: - ### Re: Circuit Lab B/C Write the voltage drop equations for a capacitor and a resistor. $V_C = \frac{Q}{C}$ $V_R = IR$ Awesome! You next. UTF-8 U+6211 U+662F Exalted Member Posts: 1491 Joined: January 18th, 2015, 7:42 am Division: C State: PA ### Re: Circuit Lab B/C Write the voltage drop equations for a capacitor and a resistor. $V_C = \frac{Q}{C}$ $V_R = IR$ Awesome! You next. All right! Describe the behavior of an RC circuit which is a) charging and b) discharging. Jacobi Exalted Member Posts: 137 Joined: September 4th, 2018, 7:47 am State: - ### Re: Circuit Lab B/C All right! Describe the behavior of an RC circuit which is a) charging and b) discharging. a) The voltage through the resistor starts high and decays exponentially as the capacitor becomes saturated. The charged capacitor has an extremely high voltage drop. b) The voltage through the resistor starts high and decays exponentially as the capacitor discharges. The capacitor acts as a voltage source in the absence of other sources.
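To make the charging/discharging description in the last post concrete, here is a small sketch (my own, with made-up component values) of the standard first-order expressions: while charging, $V_C(t) = V_s(1 - e^{-t/RC})$ and $V_R(t) = V_s - V_C(t)$; while discharging a fully charged capacitor through the resistor, $V_C(t) = V_s e^{-t/RC}$, and the resistor sees that same decaying voltage.

```python
import numpy as np

# Illustrative values only: 5 V source, R = 1 kOhm, C = 100 uF, so tau = RC = 0.1 s.
Vs, R, C = 5.0, 1e3, 100e-6
tau = R * C
t = np.linspace(0.0, 5 * tau, 6)

V_cap_charging  = Vs * (1 - np.exp(-t / tau))   # capacitor voltage builds up toward Vs
V_res_charging  = Vs - V_cap_charging           # resistor voltage decays from Vs to 0

V_cap_discharge = Vs * np.exp(-t / tau)         # charged capacitor draining through R

print(np.round(V_cap_charging, 2))
print(np.round(V_res_charging, 2))
print(np.round(V_cap_discharge, 2))
```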
2019-09-21 06:48:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6520504355430603, "perplexity": 3654.2667719565247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574286.12/warc/CC-MAIN-20190921063658-20190921085658-00071.warc.gz"}
https://math.stackexchange.com/questions/2892440/off-diagonal-part-of-psd-matrix-eigenvalue-bound
# Off-diagonal part of PSD matrix Eigenvalue Bound

Let $A\succeq 0$ and let $M = A - \operatorname{diag}(A)$ be $A$ modified by setting the diagonal terms to zero. While $\operatorname{diag}(A)\succeq 0$, $M$ need not be positive or negative definite (consider $A$ to be the all-ones matrix, for which $M$ has characteristic polynomial $(-1)^n\bigl(x - (n-1)\bigr)(x+1)^{n-1}$).

Can we bound $\|M\|$ in terms of $\|A\|$? At least in the $A = \mathbf{1}\mathbf{1}^*$ example $\|M\|\leq \|A\|$ holds.

• Related question here. – cdipaolo Aug 23 '18 at 18:24

$\newcommand\diag{\operatorname{diag}}$This is true. Decompose $M = A - \diag A$ and pick $x$ of unit length so that $x^* M x = \|M\|$. Then $$\|M\| = x^* M x = x^* A x - x^* \diag(A) x \leq \|A\| - x^*\diag(A) x \leq \|A\|$$ since $\diag(A)\succeq 0$. The more specific bound $$\|M\| \leq \|A\| - \min\{a_{ii}\}$$ holds similarly.

• How do we know that $x^TAx\leq||A||$? – MrYouMath Aug 24 '18 at 18:18
• @MrYouMath This. For self-adjoint $B$, $\|B\| = \max_{\|x\|=1}x^* B x$ and $x$ here had unit length. That's also how we can come up with $x^*\operatorname{diag}(A)x \geq \min\{a_{ii}\}$. – cdipaolo Aug 24 '18 at 18:19
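A quick numerical spot-check of the two bounds (a numpy sketch of my own, not from the original exchange): draw a random PSD matrix $A = BB^*$, zero out its diagonal to get $M$, and compare spectral norms.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    B = rng.standard_normal((6, 6))
    A = B @ B.T                          # positive semidefinite by construction
    M = A - np.diag(np.diag(A))          # off-diagonal part
    lhs = np.linalg.norm(M, 2)           # spectral norm ||M||
    rhs = np.linalg.norm(A, 2) - np.diag(A).min()
    print(f"||M|| = {lhs:7.3f}  <=  ||A|| - min a_ii = {rhs:7.3f}  ->  {lhs <= rhs + 1e-10}")
```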
2019-11-22 21:00:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8950026035308838, "perplexity": 392.7470661546269}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00359.warc.gz"}
https://www.physicsforums.com/threads/showing-that-a-factor-group-is-abelian.395428/
# Homework Help: Showing that a factor group is abelian

1. Apr 14, 2010

### valexodamium

This problem is from Seth Warner's Modern Algebra; problem number 12.21 (so Google can find it.) It's actually in the free preview, find it http://books.google.com/books?id=jd... Algebra "12.21"&pg=PA90#v=onepage&q&f=false"

1. The problem is stated as:

If $$h$$ is an endomorphism of a group $$G$$ such that $$\kappa_a \circ h = h \circ \kappa_a$$ for every $$a \in G$$, then the set $$H = \{x \in G : h(x) = h(h(x))\}$$ is a normal subgroup of $$G$$, and $$G/H$$ is an abelian group.

2. $$\kappa_a$$ is the inner automorphism defined by $$a$$, and is defined as $$\kappa_a(x) = a \Delta x \Delta a^*$$ where $$\Delta$$ is the group's binary operator and $$a^*$$ is the inverse of $$a$$.

We are also given a theorem that states that $$G/H$$ is abelian iff $$x \Delta y \Delta x^* \Delta y^* \in H$$ for all $$x, y \in G$$ (which I also had to prove, but I'll spare you.)

3. Now, I managed to prove that $$H$$ is a subgroup and that $$H$$ is normal. I am at a loss, however, as to how to prove that $$G/H$$ is abelian. I understand that I'm supposed to show that $$x \Delta y \Delta x^* \Delta y^* \in H$$ for all $$x, y$$, but I can only see a way to do that if either $$x$$ or $$y$$ is in $$H$$, for example: if $$x$$ is in $$H$$, then $$x \Delta H = H$$, and since $$y \Delta H \Delta y^* = H$$ for all $$y$$ because $$H$$ is normal, $$x \Delta (y \Delta H \Delta y^*) = H$$, which contains $$x \Delta y \Delta x^* \Delta y^*$$ because $$H$$ contains $$x^*$$. But the challenge is to show that this is true for all $$x, y \in G$$.

Last edited by a moderator: Apr 25, 2017

2. Apr 15, 2010

### ystael

First of all, Seth Warner should be beaten with a rubber hose for inflicting this notation on you. The operation in a group should be written $$xy$$, not $$x\triangle y$$, and the inversion is $$x^{-1}$$, not $$x^*$$. (Unless the group is abelian and written in additive notation, in which case it's $$x + y$$ and $$-x$$, respectively.) Incidentally, the triangle symbol is \triangle, not \Delta which is a Greek letter.

Now that I've got that off my chest ...

You need to show that $$xyx^{-1}y^{-1} \in H$$ for every $$x, y \in G$$. The condition for $$z \in H$$ is that $$h(h(z)) = h(z)$$. Why not just play around with $$h(h(xyx^{-1}y^{-1}))$$ and see what happens? The condition $$\kappa_a \circ h = h \circ \kappa_a$$ for every $$a \in G$$ gives you some clever tricks to manipulate this with; if you don't see how, write out the equation $$(\kappa_a \circ h)(z) = (h \circ \kappa_a)(z)$$ for a given $$z\in G$$.

Last edited: Apr 15, 2010

3. Apr 15, 2010

### valexodamium

Thanks! That and h(xy) = h(x)h(y) did the trick.

(And yeah, the book's notation is weird. It seems that it starts using more conventional notation more often eventually, though.)
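For anyone reading along who wants to see how those two hints combine, here is one way the computation can go (a sketch in the standard notation of the second post, not necessarily the exact argument the original poster found). The hypothesis $$\kappa_a \circ h = h \circ \kappa_a$$ says $$h(aza^{-1}) = a\,h(z)\,a^{-1}$$ for all $$a, z \in G$$. Writing the commutator as $$[x,y] = xyx^{-1}y^{-1}$$, one grouping gives

$$h([x,y]) = h\big(\kappa_x(y)\,y^{-1}\big) = \kappa_x(h(y))\,h(y)^{-1} = [x,\,h(y)]$$

and grouping the other way gives, for any $$a, b$$,

$$h([a,b]) = h\big(a\,\kappa_b(a^{-1})\big) = h(a)\,\kappa_b(h(a)^{-1}) = [h(a),\,b].$$

Combining these with the homomorphism property $$h(xy) = h(x)h(y)$$:

$$h(h([x,y])) = h([x,\,h(y)]) = [h(x),\,h(y)] = h(x)h(y)h(x)^{-1}h(y)^{-1} = h([x,y]),$$

so every commutator satisfies $$h(h(z)) = h(z)$$, i.e. lies in $$H$$, and the quoted theorem then gives that $$G/H$$ is abelian.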
2018-06-20 12:14:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886649489402771, "perplexity": 632.5220786424127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863518.39/warc/CC-MAIN-20180620104904-20180620124904-00211.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=150&t=30270&p=94324
## Stoichiometric coefficients $K = \frac{k_{forward}}{k_{reverse}}$ Rebecca Doan 2L Posts: 51 Joined: Thu Jul 27, 2017 3:01 am ### Stoichiometric coefficients Are stoichiometric coefficients used for elementary reactions? In the book, it states that they are not. However, in lecture, Lavelle has provided examples where there are coefficients in the elementary reactions of a mechanism. Scott Chin_1E Posts: 55 Joined: Sat Jul 22, 2017 3:00 am ### Re: Stoichiometric coefficients I think the stoichiometric coefficients are there to help us determine the rate law of the elementary step. It would give us the order of the reactant in the rate law. Sarah Maraach 2K Posts: 31 Joined: Thu Jul 27, 2017 3:00 am ### Re: Stoichiometric coefficients Yes, stoichiometric coefficients are used in the elementary steps' rate laws, but you can't use it for the overall rate law as the coefficients may not align with the order. Phillip Tran Posts: 54 Joined: Thu Jul 13, 2017 3:00 am ### Re: Stoichiometric coefficients only for elementary steps sallina_yehdego 2E Posts: 75 Joined: Sun Apr 29, 2018 3:00 am ### Re: Stoichiometric coefficients Yes, they are used for elementary step rate laws, but always keep in mind you cannot use it when you're trying to solve for the rate law of the overall reaction because the coefficients may not always line up.
2019-10-17 00:41:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6680508255958557, "perplexity": 2872.4318121279157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00002.warc.gz"}
http://calculator.tutorvista.com/dividing-polynomials-calculator.html
Dividing Polynomials Calculator

The Dividing Polynomials Calculator helps to divide two polynomials. The polynomials may be monomials, binomials, trinomials, etc., so the tool can also be used for dividing polynomials by monomials, binomials, or trinomials. It is sometimes known as a polynomial division calculator or a divide polynomials calculator, and it also acts as a dividing-polynomials-by-monomials calculator. You just have to enter the numerator and denominator polynomials and get the answer instantly. You can see a default polynomial given below. Click on "Divide", and it will divide the dividend by the divisor and simplify.

Steps for Dividing Polynomials

Step 1 : First, separate the dividend and divisor. Divide each term of the dividend by the divisor.
Step 2 : Now add those answers together, and simplify further.

Example Problems on Dividing Polynomials

1. Divide: $\frac{x^{2} + 3x + 5}{x + 1}$
Step 1 : We need to divide $\frac{x^{2} + 3x + 5}{x + 1}$
Step 2 : $x^{2} + 3x + 5 = (x + 2)(x + 1) + 3$, which is in the form Dividend = (Quotient)(Divisor) + Remainder.
Answer : Quotient = $x + 2$, Remainder = $3$

2. Divide $\frac{x^4+3x^2-6}{x^2+x}$
Step 1 : We need to divide $\frac{x^4+3x^2-6}{x^2+x}$
Step 2 : $x^{4} + 3x^{2} - 6 = (x^{2} - x + 4)(x^{2} + x) + (-4x - 6)$, which is in the form Dividend = (Quotient)(Divisor) + Remainder.
Answer : Quotient = $x^{2} - x + 4$, Remainder = $-4x - 6$
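If you would rather check such divisions programmatically, the two worked examples can be reproduced with the sympy library (a separate illustration, not part of the calculator itself):

```python
from sympy import symbols, div

x = symbols('x')

# Example 1: (x^2 + 3x + 5) / (x + 1)
q1, r1 = div(x**2 + 3*x + 5, x + 1, x)
print(q1, r1)          # x + 2, remainder 3

# Example 2: (x^4 + 3x^2 - 6) / (x^2 + x)
q2, r2 = div(x**4 + 3*x**2 - 6, x**2 + x, x)
print(q2, r2)          # x**2 - x + 4, remainder -4*x - 6
```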
2017-04-29 05:30:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7915066480636597, "perplexity": 2497.5853709693793}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123276.44/warc/CC-MAIN-20170423031203-00605-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.jobilize.com/online/course/1-3-energy-in-the-simple-harmonic-oscillator-by-openstax?qcr=www.quizover.com
1.3 Energy in the simple harmonic oscillator

How to calculate the energy in a simple harmonic oscillator.

Energy in SHO

Recall that the total energy of a system is: $E=KE+PE=K+U$ We also know that the kinetic energy is $K=\frac{1}{2}mv^{2}$ But what is $U$? For a conservative force ($\oint \vec{F}\cdot d\vec{x}=0$) - e.g. gravity, electrical... (no friction) we know that the work done by an external force is stored as $U$. For the case of a mass on a spring, the external force is opposite the spring force (that is, it has the opposite sign from the spring force): $F_{ext}=kx$ (i.e. this is the force you use to pull the mass and stretch the spring before letting go and making it oscillate.) Thus $U=\int_{0}^{x}kx\,dx=\frac{1}{2}kx^{2}$ This gives: $E=\frac{1}{2}mv^{2}+\frac{1}{2}kx^{2}=\frac{1}{2}m\left(\frac{dx}{dt}\right)^{2}+\frac{1}{2}kx^{2}$ It is important to realize that any system that is represented by either of the two equations below is an oscillating system: $m\frac{d^{2}x}{dt^{2}}+kx=0$ $\frac{1}{2}m\left(\frac{dx}{dt}\right)^{2}+\frac{1}{2}kx^{2}=E$ To calculate the energy in the system it is helpful to take advantage of the fact that we can calculate the energy at any point in $x$. For example, in the case of the simple harmonic oscillator we have that: $x=Ae^{i\left(\omega t+\alpha \right)}$ We can choose $t$ such that $x=A$. Now remember that when I write $x=Ae^{i\left(\omega t+\alpha \right)}$ I "really" (pun intended) mean $x=\mathrm{Re}\left[Ae^{i\left(\omega t+\alpha \right)}\right]$ Likewise then $\dot{x}=\mathrm{Re}\left[i\omega Ae^{i\left(\omega t+\alpha \right)}\right]$ At the point in time where $x=A$ this gives us $\dot{x}=\mathrm{Re}\left[i\omega A\right]=0$ Thus at that point in time we have $\dot{x}=0$. We can now substitute that and $x=A$ into $E=\frac{1}{2}m\left(\frac{dx}{dt}\right)^{2}+\frac{1}{2}kx^{2}$ and we obtain $E=\frac{1}{2}kA^{2}$ This is an important point. The energy in the oscillator is proportional to the amplitude squared!
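As a cross-check of the result $E=\frac{1}{2}kA^{2}$ (my own sketch, not part of the original module), one can take the real solution $x(t)=A\cos(\omega t+\alpha)$ with $\omega^{2}=k/m$ and let sympy confirm that the energy expression is constant in time:

```python
from sympy import symbols, cos, diff, simplify, sqrt

t, A, alpha, k, m = symbols('t A alpha k m', positive=True)
omega = sqrt(k / m)

x = A * cos(omega * t + alpha)      # real part of A e^{i(omega t + alpha)}
v = diff(x, t)

E = m * v**2 / 2 + k * x**2 / 2     # kinetic + potential energy
print(simplify(E))                  # -> A**2*k/2, independent of t
```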
2020-12-01 15:27:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6656195521354675, "perplexity": 1478.5119104094229}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141674594.59/warc/CC-MAIN-20201201135627-20201201165627-00625.warc.gz"}
https://www.bakpax.com/assignment-library/assignments/using-the-pythagorean-theorem-and-similarity-goboj-jorih
# Using the Pythagorean Theorem and Similarity ID: goboj-jorih Illustrative Math Subject: Geometry # Using the Pythagorean Theorem and Similarity Classroom: Due: Student Name: Date Submitted: ##### Problem 1 1) In right triangle $ABC$, altitude $CD$ is drawn to its hypotenuse. Select all triangles which must be similar to triangle $ABC$. Write each corresponding letter in the answer box and separate letters with commas. a) $ABC$ $\quad\quad$ b) $ACD$ $\quad\quad$ c) $BCD$ $\quad\quad$ d) $BDC$ $\quad\quad$ e) $CAD$ $\quad\quad$ f) $CBD$ ##### Problem 2 2) In right triangle $ABC$, altitude $CD$ with length $h$ is drawn to its hypotenuse. We also know $AD = 12$ and $DB = 3$. What is the value of $h$? ##### Problem 3 3) In triangle $ABC$ (not a right triangle), altitude $CD$ is drawn to side $AB$. The length of $AB$ is $c$. Which of the following statements must be true? a) $\text{The measure of angle } ACB \text{ is the same measure as angle } B \text{.}$b) $b^2 \ = \ c^2 \ +\ a^2$c) $\text{Triangle } ADC \text{ is similar to triangle } ACB \text{.}$d) $\text{The area of triangle } ABC \text{ equals } \frac{1}{2}h\cdot c$ ##### Problem 4 4) Quadrilateral $ABCD$ is similar to quadrilateral $A'B'C'D'$. Select all equations that could be used to solve for missing lengths. Write each corresponding letter in the answer box and separate letters with commas. a) $\frac{A'B'}{AB} = \frac{A'C'}{AC}$ $\quad\quad$ b) $\frac{A'B'}{AB} = \frac{AC}{A'C'}$ $\quad\quad$ c) $\frac{A'B'}{C'D'} = \frac{AB}{CD}$ $\quad\quad$ d) $\frac{AD}{A'D'} = \frac{BC}{B'C'}$ $\quad\quad$ e) $\frac{AB}{A'D'} = \frac{AD}{A'B'}$ ##### Problem 5 Segment $A'B'$ is parallel to segment $AB$. 5) What is the length of segment $A'A$? 6) What is the length of segment $B'B$? ##### Problem 6 7) Lines $BC$ and $DE$ are both vertical. What is the length of $AD$? a) $\text{4.5}$b) $\text{5}$c) $\text{7.5}$d) $\text{10}$ ##### Problem 7 8) Triangle $DEF$ is formed by connecting the midpoints of the sides of triangle $ABC$. Select all true statements. Write each corresponding letter in the answer box and separate letters with commas. a) Triangle $BDE$ is congruent to triangle $FCE$ $\quad\quad$ b) Triangle $BDE$ is congruent to triangle $FDA$ c) $BD$ is congruent to $FE$ $\quad\quad$ d) The length of $BC$ is 8 $\quad\quad$ e) The length of $BC$ is 6
2020-07-07 17:54:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 64, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4777333736419678, "perplexity": 6468.808121814853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655894904.17/warc/CC-MAIN-20200707173839-20200707203839-00274.warc.gz"}
http://andrew-algorithm.blogspot.com/2014/10/uva-problem-679.html
## Friday, October 10, 2014 ### UVa Problem 679 - Dropping Balls Problem: Solution: After taking a brief break - I started to code again. Now I am on divide and conquer chapter. This problem has a straightforward simulation algorithm, which I implemented. In the worst case it takes $Q \log n$ time. That's not good since $Q$ can be large. Off my head I don't have idea, so I try to spot patterns. Spotting pattern is a good way for solving ProjectEuler problems, so I used it. Here is my blog of my solved Project Euler problems! First of all, I dumped the leaf numbers for each drop, they look like this. 8 12 10 14 9 13 11 15 I don't know if you can feel it or not, but I do feel there is something going on with its binary representation, so I looked at their binary representations. 00001000 00001100 00001010 00001110 00001001 00001101 00001011 00001111 Now the pattern is very obvious to me now. All I need to do is to implement the pattern, and I did, it got accepted. Code: #include "stdafx.h" // http://uva.onlinejudge.org/external/6/679.html #include "UVa679.h" #include <iostream> #include <bitset> using namespace std; // #define simulation int UVa679() { int number_of_cases; cin >> number_of_cases; for (int c = 0; c < number_of_cases; c++) { int depth; int drops; int number_of_internal_nodes = 1; cin >> depth; cin >> drops; #ifdef simulation // We don't need to keep the flag for the leaves for (int d = 0; d < (depth - 1); d++) { number_of_internal_nodes *= 2; } number_of_internal_nodes--; bool* tree = new bool[number_of_internal_nodes]; for (int i = 0; i < number_of_internal_nodes; i++) { tree[i] = false; } for (int i = 0; i < drops; i++) { int ball_position = 1; // The ball only go down (depth - 1) times for (int j = 0; j < depth - 1; j++) { if (tree[ball_position - 1] == false) { tree[ball_position - 1] = true; // Go to left ball_position = ball_position * 2; } else { tree[ball_position - 1] = false; ball_position = ball_position * 2 + 1; } } // Showing the leaf number for all drops // cout << ball_position << endl; // Showing the binary representation of the solution cout << bitset<8>(ball_position) << endl; } delete[] tree; #else /* * The key to this smart algorithm is based on the observation of the sequence in binary form. * Running the simulation and shows the sequence in binary can easily reveal the pattern. */ // The binary algorithm starts with 0 drops drops--; int solution = 0; for (int i = 0; i < (depth - 1); i++) { // Computing the ith bit if (drops >= 1 << (depth - 2 - i)) { drops -= 1 << (depth - 2 - i); solution += 1 << i; } } solution += 1 << (depth - 1); cout << solution << endl; #endif } return 0; }
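To state the spotted pattern explicitly: with $i = \text{drops} - 1$, the loop above writes the bit-reversal of $i$ (over $D-1$ bits, where $D$ is the depth) into the low bits of the answer and then adds $2^{D-1}$. Here is a Python sketch of my own, assuming the same flip-flop behaviour of the internal nodes, that checks this closed form against a brute-force simulation:

```python
def simulate(depth, drops):
    """Brute force: drop the balls one by one through the flip-flop tree."""
    flag = [False] * (2 ** depth)      # flags for internal nodes 1 .. 2^(depth-1) - 1
    leaf = 1
    for _ in range(drops):
        leaf = 1
        for _ in range(depth - 1):
            if not flag[leaf]:
                flag[leaf] = True
                leaf = 2 * leaf        # first, third, ... arrival goes left
            else:
                flag[leaf] = False
                leaf = 2 * leaf + 1    # second, fourth, ... arrival goes right
    return leaf

def closed_form(depth, drops):
    """Leaf = 2^(depth-1) + bit-reversal of (drops - 1) over depth-1 bits."""
    bits = format(drops - 1, f'0{depth - 1}b')
    return (1 << (depth - 1)) + int(bits[::-1], 2)

for depth in (2, 3, 4, 5):
    for drops in range(1, 2 ** (depth - 1) + 1):
        assert simulate(depth, drops) == closed_form(depth, drops)
print("closed form matches the simulation")
```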
2018-05-21 22:49:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4073666036128998, "perplexity": 2815.125546198533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864558.8/warc/CC-MAIN-20180521220041-20180522000041-00168.warc.gz"}
https://ai.stackexchange.com/tags/terminology/new
# Tag Info 1 In machine learning, a tensor is a multi-dimensional array (i.e. a generalization of a matrix to more than 2 dimensions), which has some properties, such as the number of dimensions or the shape, and to which you can apply operations (for example, you can take the mean of all elements across all dimensions). So, a scalar is a 0-d tensor (no dimensions), a ... 1 Your interpretation is definitely correct. As you correctly pointed out, the derivative of softplus is continuous and $n$-times differentiable, that makes the function smooth, which is not the case for ReLU. What is quite interesting here is why softplus can be called an approximation to ReLU. If we break down the definition of softplus, we note that the ... 0 Channels can be thought of as alternate numbers in the same space. As an example, the three colour channels of a typical image are often values for amount of red, green or blue light received from each position within the picture. Your 1D convolution example has one input channel and one output channel. Depending on what the input represents, you might have ... 6 Have a look at these graphics showing popular linear units (image taken from Clevert et al. 2016): You can see that these functions are linear functions for $x > 0$, that's why they are called Linear Units. For example, the ELU is defined as ELU(x) = \begin{cases} x &\text{if } x > 0\\ \alpha (\exp(x)-1) & \text{if } x \leq 0. \end{cases} $... 1 Imagine the tensor as a some generalized$n$-dimensional hyperrectangle sliced into$n$-dimensional hypercubes. Each element of the tensor is labeled by the position along the given axis, say$(x_1, x_2, \ldots)$. Axis is not a property of tensor, rather the tensor is embedded in a$n$-dimensional space, where the axes are chosen along the sides of the ... 2 With this link I could read the paper. Thanks. So there is this discipline called sensor fusion. It is very sounded in the field of Autonomous Vehicles where in order to take one decision (whether to break or not) you have to take into account information for multiple sources: car mounted cameras, LIDAR, ultrasound, radar... So the term "fusion" ... 0 Also, keep in mind that not just any augmentation of the loss function is a regularization. For example, you can add terms to a loss function that enforce constraints on the solution but do not prevent overfitting nor facilitate generalization. 3 Regularization is not limited to methods like L1/L2 regularization which are specific versions of what you showed. Regularization is any technique that would prevent network from overfitting and help network to be more generalizable to unseen data. Some other techniques are Dropout, Early Stopping, Data Augmentation, limiting the capacity of network by ... 1 Updating model for each training example means batch size of 1, aka stochastic gradient descent(SGD). 1 iteration is defined as forward propagate, calculate loss, backpropagate and finally update weights. Since batch size is 1, running 5 epochs on 100 training examples with SGD means you will do 500 iterations, yes. 4 XPU is a device abstraction for Intel heterogeneous computation architectures, which can be mapped to CPU, GPU, FPGA and other accelerators. The "X" from XPU is just like a variable, like in maths, so you can do X=C and you get CPU accceleration, or X=G and you get GPU acceleration... That's the intuition behind that abstract name. In order to ... 2 Yes, it is a bit misleading. 
What it really means is input channels, so it would be: nn.Conv2d: Applies a 2D convolution over an input signal composed of several input channels. So, why don't just use channels instead of input planes? Well, initially the major deep learning applications were used for computer vision or image processing approaches. In CV or ... 2 The fact is you can always express an affine transformation as a linear transformation (more convenient because it is just a matrix/dot product). For instance, given an input$\textbf{x}=[x_1, ..., x_n]$, some weights$\textbf{a} = [a_1, a_2, ..., a_n]$and a bias$b \in \mathbb{R}$, you can express the affine operation$y = \textbf{a}\cdot \textbf{x} + b$... 2 In linear algebra, a linear transformation (aka linear map or linear transform)$f: \mathcal{V} \rightarrow \mathcal{W}$is a function that satisfies the following two conditions$f(u + v)=f(u)+f(v)$(additivity)$f(\alpha u) = \alpha f(u)$(scalar multiplication), where$u$and$v$vectors (i.e. elements of a vector space, which can also be$\mathbb{R}$[... 0 Without the specific context, I cannot give a definitive answer, but it's very likely that a "differentiable architecture" refers to a neural network that represents/computes a differentiable function (so you need to use differentiable activation functions, such as the sigmoid), i.e. you can take the partial derivatives of the loss function with ... 3 A smooth function is usually defined to be a function that is$n$-times continuously differentiable, which means that$f$,$f'$,$\dots$,$f^{(n - 1)}$are all differentiable and$f^{(n)}$is continuous. Such functions are also called$C^n$functions. It can be a bit of a vague term; some people might even stretch the definition and say any continuous ... 0 We need to compute the gradients in-order to train the deep neural networks. Deep neural network consists of many layers. Weight parameters are present between the layers. Since we need to compute the gradients of loss function for each weight, we use an algorithm called backprop. It is an abbreviation for backpropagation, which is also called as error ... 3 Essentially, any data you use to train or develop the model shouldn't be used as test data. In principle, "unseen" data gives a good estimate for the generalisation performance of the model; but this is only valid if the data really is unseen and hasn't been used in the model development process. If you've been tuning a model to increase its ... 0 I would like to add "The Master Algorithm" by Pedro Domingos. I would say it's more philosophical but still provides high level discussions about differences between algorithms. He also has a sense of humor which makes it a lighter read. 0 The famous book Artificial Intelligence: A Modern Approach (by Stuart Russell and Peter Norvig) covers all or most of the theoretical aspects of artificial intelligence (such as deep learning) and it also dedicates one chapter to the common philosophical topics that you mention. 1 I think that these terms may be used inconsistently across sources. If someone says held-out dataset, I would immediately think of a dataset that is not used for training, but can be used for anything else, validation (hyper-parameter tuning or early stopping) or testing; so, to determine what they are referring to, I would probably take into account the ... 1 I know at least one example where the rank of the dataset (more specifically, the rank of a matrix that is computed from the design matrix, i.e. 
the matrix with your data, which I will describe more in detail below) can have an impact on the number of solutions that you can have or how you find those solutions. I am thinking of linear regression. So, in ...
3 First of all, I don't know of any textbook that clarifies these terms, but, although I am not a statistician, in addition to the other answer, one possible way to look at it is as follows. You use probability theory to model your problem. For example, if it's a classification problem, you could define the conditional probability distribution $p(y \mid x)$, ...
2021-07-31 12:12:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9810308218002319, "perplexity": 779.8053959823337}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154089.6/warc/CC-MAIN-20210731105716-20210731135716-00245.warc.gz"}
https://chemistry.stackexchange.com/questions/34490/relationship-between-the-first-and-the-second-quantum-number
# Relationship between the first and the second quantum number Does the secondary quantum number tell how many subshells a specific principal quantum number shell has? E.g., if the principal quantum number is $n$, there are ($n-1$) subshells. No, it does not. It is the principal quantum number $n$ which does it (as you actually mentioned yourself later in the question). The second quantum number $l$ describes the shape of the orbitals in a particular subshell. Say, $l=0$ for $\mathrm{s}$-subshell of any shell and all $\mathrm{s}$-orbitals have a spherical shape. If the principal quantum number is $n$, there are $(n−1)$ subshells. Why $(n−1)$? Think about the most trivial example: how many subshells the first shell ($n=1$) have? One, not zero, so $(n−1)$ is obviously wrong formula. The situation is in fact simpler: $n$-th shell has $n$ subshells. • A small detail to add to Wildcat's excellent answer: the $\ell$ quantum number does not tell you the total number of subshells, but you may infer the minimum value of the principal quantum number $n$ and therefore the minimum number of subshells. For example, if $\ell=3$ then $n\ge 4$. – Paul Nov 5 '19 at 18:58
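As a tiny illustration of the counting in the answer (a sketch of mine, not from the original exchange): for shell $n$ the allowed values of $l$ run from $0$ to $n-1$, so the $n$-th shell has exactly $n$ subshells.

```python
subshell_letters = "spdfghik"   # conventional letters for l = 0, 1, 2, ... (j is skipped)

for n in range(1, 5):
    subshells = [f"{n}{subshell_letters[l]}" for l in range(n)]   # l = 0 .. n-1
    print(f"n = {n}: {len(subshells)} subshell(s): {', '.join(subshells)}")

# n = 1: 1 subshell(s): 1s
# n = 2: 2 subshell(s): 2s, 2p
# n = 3: 3 subshell(s): 3s, 3p, 3d
# n = 4: 4 subshell(s): 4s, 4p, 4d, 4f
```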
2021-08-06 00:50:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905624270439148, "perplexity": 471.03580987723143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152085.13/warc/CC-MAIN-20210805224801-20210806014801-00625.warc.gz"}
https://math.stackexchange.com/questions/1918427/how-is-an-empty-set-a-member-of-this-power-set
# How is an empty set a member of this power set? I'm reading the section on power sets in Book of Proof, and the chapter includes this statement (Example 1.4 #13) of what isn't included in the power set: $$P(\{1,\{1,2\}\})=\{\emptyset,\{\{1\}\},\{\{1,2\}\},\{\emptyset,\{1,2\}\}\}...\text{wrong because }\{\{1\}\}\subsetneq\{1,\{1,2\}\}$$ I understand $\{\{1\}\}\subsetneq\{1,\{1,2\}\}$, but why is the last element, $\{\emptyset,\{1,2\}\}$, in the power set if the empty set is not an element of the original set? • Why do you think $\{\emptyset,\{1,2\}\} \in P(\{1,\{1,2\}\})$? The statement absolutely never makes this claim. It isn't true. So why do you think the statement is claiming such? – fleablood Sep 7 '16 at 20:29 • The empty set is a subset of every set. The power set is the set of all subsets. So the empty set is a member of every power set. – Doug M Sep 7 '16 at 20:31 • @DougM That's not what the OP asked. The OP asked about $\{\emptyset, \{1,2\}\}$. That is not an element of the power set because (as the OP correctly argued) the empty set is not a member of the original set (and hence can not be a member of a subset). In short, the OP is 100% correct. But the author of the book never claimed it was. – fleablood Sep 7 '16 at 20:38 • To be thorough. $\emptyset \in P$. $\{\{1\}\} \not \in P$. $\{\{1,2\}\} \in P$. $\{\emptyset,\{1,2\}\} \not \in P$. And finally $\{1,\{1,2\}\} \in P$ but $\{1,\{1,2\}\}$ wasn't listed in the set claimed to be the power set. So that set is not the Power set for three reasons. The book gave one. The op gave another. I gave a third.... and maybe I missed a 4th.... who knows.... – fleablood Sep 7 '16 at 20:43 • Yes, a fourth would be $\{1\} \in P$ which wasn't listed in the set. Basically the given set over bracketed almost consistently. – fleablood Sep 7 '16 at 20:46 Suppose that $A$ is the set $\{0,1,2\}$, then $A=\{3,4,5\}$ is wrong because $3\notin A$. It's true that neither $4$ nor $5$ are elements of $A$ as well, but one counterexample is enough to disprove a statement. In a nutshell, you're right, but so is the book. Both statements are valid counterexamples, but one is enough. It isn't: $$\mathscr P(\{1,\{1,2\}\})=\{\emptyset,\{1\},\{\{1,2\}\},\{1,\{1,2\}\}\}.$$ If the author didn't comment on why $\{\emptyset,\{1,2\}\}$ is not in the set, it is likely because the author intended to comment on a particular reason why the proposed power set wasn't the correct power set. That is, he decided to be explicit about why $\{\{1\}\}$ isn't in the power set. Perhaps to be more complete, the author could have commented on why $\{\emptyset,\{1,2\}\}$ is also not in the power set, but it only takes one counterexample to do the job. • I disagree that it is necessarily a typo. – Asaf Karagila Sep 7 '16 at 20:30 • @AsafKaragila good point. – Alex Ortiz Sep 7 '16 at 20:30 • It's not a typo. In showing {{1}} was not a subset the author didn't have to say anything about {emptyset,{1,2}} and the author didn't say anything about {emptyset,{1,2}}. If they author had said anything it'd be that it is not in the power set either. – fleablood Sep 7 '16 at 20:31 • @fleablood noted. Thanks. – Alex Ortiz Sep 7 '16 at 20:32 • It's not a "typo" but it is an omission. However it is an omission that the author was allowed to omit. – fleablood Sep 7 '16 at 20:47
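To make the bookkeeping concrete, here is a small illustration (mine, not from the thread) that enumerates the actual power set of $\{1,\{1,2\}\}$ with Python's itertools, using a frozenset so the inner set is hashable:

```python
from itertools import combinations

S = {1, frozenset({1, 2})}          # the set {1, {1, 2}}

# All subsets of S, by taking combinations of every possible size.
power_set = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]
for subset in power_set:
    print(subset)

# Output (in some order): set(), {1}, {frozenset({1, 2})}, {1, frozenset({1, 2})}
# Neither {{1}} nor {emptyset, {1, 2}} appears, matching the answers above.
```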
2020-01-27 09:27:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5930467844009399, "perplexity": 369.09086479828113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00253.warc.gz"}
http://mathoverflow.net/questions/163011/homology-of-the-fixed-points-of-the-singular-complex-of-a-g-space
# Homology of the fixed points of the singular complex of a G-space I posted the following to stackexchange a while ago [1], without any answers. Maybe the question is too unmotivated, but it seems very natural to me. Suppose $X$ is a topological space and $G$ a finite group acting on it. We can form the singular complex $C_\bullet(X),$ and then taking homology gives singular homology: $H_*(X) = h_* C_\bullet(X).$ Since $X$ comes with a $G$-action, we could look at the fixed points of this action: $H_*(X^G) = H_* C_\bullet(X^G).$ By functoriality of the singular complex, $C_\bullet(X)$ also affords a $G$-action, and we can take its fixed point homology: $H'_* = h_* (C_\bullet(X)^G).$ Finally, we could also take the fixed points in homology: $H_*(X)^G.$ My question is: how does $H'_*$ (homology of the fixed points of the singular complex) relate to other more typical invariants of $X,$ such as e.g. the ones displayed above ($H_*(X),$ $H_*(X^G),$ $H_*(X)^G$)? - I guess one should write $C_\bullet(X)^G=Hom_{{\mathbb Z }G}({\mathbb Z},C_\bullet(X))$, use a projective resolution of $\mathbb Z$, analyse the resulting double complex. –  nsrt Apr 10 '14 at 12:42
2015-03-03 17:04:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9233143925666809, "perplexity": 225.58917392287302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463340.29/warc/CC-MAIN-20150226074103-00255-ip-10-28-5-156.ec2.internal.warc.gz"}
https://elibm.org/article/10012141
Reprint: On lattice points in $n$-dimensional star bodies (1946) Doc. Math. Extra Vol. Mahler Selecta, 483-520 (2019) DOI: 10.25537/dm.2019.SB-483-520

Summary: Let $F(X)=F(x_1,\ldots,x_n)$ be a continuous non-negative function of $X=(x_1,\ldots,x_n)$ that satisfies $F(tX)=|t|F(X)$ for all real numbers $t$. The set $K$ in $n$-dimensional Euclidean space $\mathbb{R}^n$ defined by $F(X)\leqslant 1$ is called a star body. In this paper, Mahler studies the lattices $\Lambda$ in $\mathbb{R}^n$ which are of minimum determinant and have no point except $(0,\ldots,0)$ inside $K$. He investigates how many points of such lattices lie on, or near to, the boundary of $K$, and considers in detail the case when $K$ admits an infinite group of linear transformations into itself.

Reprint of the author's paper [Proc. R. Soc. Lond., Ser. A 187, 151--187 (1946; Zbl 0060.11710)].

11-03, 11H16
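For context (this gloss is not part of the summary above): in the usual terminology of the geometry of numbers, lattices with no point other than the origin inside $K$ are the admissible lattices, the extremal ones studied here are Mahler's critical lattices, and their common determinant is the critical determinant
$$\Delta(K)\;=\;\inf\bigl\{\,d(\Lambda)\;:\;\Lambda\ \text{a lattice whose only point inside }K\text{ is the origin}\,\bigr\}.$$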
2023-02-01 16:16:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8264106512069702, "perplexity": 518.4929374457806}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00705.warc.gz"}
http://en.wikipedia.org/wiki/Average_treatment_effect
# Average treatment effect

The average treatment effect (ATE) is a measure used to compare treatments (or interventions) in randomized experiments, evaluation of policy interventions, and medical trials. The ATE measures the difference in mean (average) outcomes between units assigned to the treatment and units assigned to the control. In a randomized trial (i.e., an experimental study), the average treatment effect can be estimated from a sample using a comparison in mean outcomes for treated and untreated units. However, the ATE is generally understood as a causal parameter (i.e., an estimand or property of a population) that a researcher desires to know, defined without reference to the study design or estimation procedure. Both observational and experimental study designs may enable one to estimate an ATE in a variety of ways.

## General definition

Originating from early statistical analysis in the fields of agriculture and medicine, the term "treatment" is now applied, more generally, to other fields of natural and social science, especially psychology, political science, and economics, for example in the evaluation of the impact of public policies. The nature of a treatment or outcome is relatively unimportant in the estimation of the ATE—that is to say, calculation of the ATE requires that a treatment be applied to some units and not others, but the nature of that treatment (e.g., a pharmaceutical, an incentive payment, a political advertisement) is irrelevant to the definition and estimation of the ATE.

The expression "treatment effect" refers to the causal effect of a given treatment or intervention (for example, the administering of a drug) on an outcome variable of interest (for example, the health of the patient). In the Neyman-Rubin "Potential Outcomes Framework" of causality a treatment effect is defined for each individual unit in terms of two "potential outcomes." Each unit has one outcome that would manifest if the unit were exposed to the treatment and another outcome that would manifest if the unit were exposed to the control. The "treatment effect" is the difference between these two potential outcomes. However, this individual-level treatment effect is unobservable because individual units can only receive the treatment or the control, but not both.

Random assignment to treatment ensures that units assigned to the treatment and units assigned to the control are identical (over a large number of iterations of the experiment). Indeed, units in both groups have identical distributions of covariates and potential outcomes. Thus the average outcome among the treatment units serves as a counterfactual for the average outcome among the control units. The difference between these two averages is the ATE, which is an estimate of the central tendency of the distribution of unobservable individual-level treatment effects.[1] If a sample is randomly constituted from a population, the ATE from the sample (the SATE) is also an estimate of the population ATE (or PATE).[2]

While an experiment ensures, in expectation, that potential outcomes (and all covariates) are equivalently distributed in the treatment and control groups, this is not the case in an observational study. In an observational study, units are not assigned to treatment and control randomly, so their assignment to treatment may depend on unobserved or unobservable factors.
Observed factors can be statistically controlled (e.g., through regression or matching), but any estimate of the ATE could be confounded by unobservable factors that influenced which units received the treatment versus the control.

## Formal definition

In order to define the ATE formally, we define two potential outcomes: $y_{0i}$ is the value of the outcome variable for individual $i$ if he is not treated, and $y_{1i}$ is the value of the outcome variable for individual $i$ if he is treated. For example, $y_{0i}$ is the health status of the individual if he is not administered the drug under study and $y_{1i}$ is the health status if he is administered the drug. The treatment effect for individual $i$ is given by $y_{1i}-y_{0i}=\beta_{i}$. In the general case, there is no reason to expect this effect to be constant across individuals.

Let $E[.]$ denote the expectation operator for any given variable (that is, the average value of the variable across the whole population of interest). The average treatment effect is given by: $E[y_{1i}-y_{0i}]$. If we could observe, for each individual, $y_{1i}$ and $y_{0i}$ among a large representative sample of the population, we could estimate the ATE simply by taking the average value of $y_{1i}-y_{0i}$ for the sample: $\frac{1}{N} \cdot \sum_{i=1}^N (y_{1i}-y_{0i})$ (where $N$ is the size of the sample).

The problem is that we cannot observe both $y_{1i}$ and $y_{0i}$ for each individual. For example, in the drug example, we can only observe $y_{1i}$ for individuals who have received the drug and $y_{0i}$ for those who did not receive it; we do not observe $y_{0i}$ for treated individuals and $y_{1i}$ for untreated ones. This fact is the main problem faced by scientists in the evaluation of treatment effects and has triggered a large body of estimation techniques.

## Estimation

Depending on the data and its underlying circumstances, many methods can be used to estimate the ATE. A common one is the difference-in-differences approach: once a policy change occurs on a population, a regression can be run controlling for the treatment. The resulting equation would be $y = \beta_{0} + \delta_{0}\,d2 + \beta_{1}\,dT + \delta_{1}\,(d2 \cdot dT),$ where $y$ is the response variable and $\delta_{1}$ measures the effects of the policy change on the population. The difference in differences equation would be $\hat \delta_{1} = (\bar y_{2,T} - \bar y_{1,T}) - (\bar y_{2,C} - \bar y_{1,C}),$ where $T$ is the treatment group and $C$ is the control group. In this case $\delta_{1}$ measures the effects of the treatment on the average outcome and is the average treatment effect.

From the diffs-in-diffs example we can see the main problems of estimating treatment effects. As we cannot observe the same individual as treated and non-treated at the same time, we have to come up with a measure of counterfactuals to estimate the average treatment effect.

## An example

Consider an example where all units are unemployed individuals, and some experience a policy intervention (the treatment group), while others do not (the control group). The causal effect of interest is the impact a job search monitoring policy (the treatment) has on the length of an unemployment spell: On average, how much shorter would one's unemployment be if they experienced the intervention? The ATE, in this case, is the difference in expected values (means) of the treatment and control groups' length of unemployment. A positive ATE, in this example, would suggest that the job policy increased the length of unemployment.
A negative ATE would suggest that the job policy decreased the length of unemployment. An ATE estimate equal to zero would suggest that there was no advantage or disadvantage to providing the treatment in terms of the length of unemployment. Determining whether an ATE estimate is distinguishable from zero (either positively or negatively) requires statistical inference. Because the ATE is an estimate of the average effect of the treatment, a positive or negative ATE does not indicate that any particular individual would benefit or be harmed by the treatment. ## References 1. ^ Holland, Paul W. (1986). "Statistics and Causal Inference". J. Amer. Statist. Assoc. 81 (396): 945–960. doi:10.1080/01621459.1986.10478354. JSTOR 2289064. 2. ^ Imai, Kosuke; King, Gary; Stuart, Elizabeth A. (2008). "Misunderstandings Between Experimentalists and Observationalists About Causal Inference". J. R. Stat. Soc. Series A 171 (2): 481–502. doi:10.1111/j.1467-985X.2007.00527.x.
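To make the difference-in-differences estimator described above concrete, here is a minimal illustrative R sketch. It is not part of the original article; the data frame df and its column names are hypothetical, chosen only to mirror the regression notation used above.

# df: hypothetical data with outcome y, post-period dummy d2, treatment-group dummy dT
fit <- lm(y ~ d2 * dT, data = df)    # d2 * dT expands to d2 + dT + d2:dT
coef(fit)["d2:dT"]                   # estimated delta_1, the difference-in-differences ATE

# The same number computed directly as a difference of group-mean differences:
with(df, (mean(y[d2 == 1 & dT == 1]) - mean(y[d2 == 0 & dT == 1])) -
         (mean(y[d2 == 1 & dT == 0]) - mean(y[d2 == 0 & dT == 0])))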
2014-09-17 00:17:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6215060949325562, "perplexity": 818.5705467245697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657120057.96/warc/CC-MAIN-20140914011200-00063-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://eventuallyalmosteverywhere.wordpress.com/2018/11/21/random-graphs-lecture-3/
Lecture 3 – Couplings, comparing distributions

I am aiming to write a short post about each lecture in my ongoing course on Random Graphs. Details and logistics for the course can be found here.

In this third lecture, we made our first foray into the scaling regime for G(n,p) which will be the main focus of the course, namely the sparse regime when $p=\frac{\lambda}{n}$. The goal for today was to give a self-contained proof of the result that in the subcritical setting $\lambda<1$, there is no giant component, that is, a component supported on a positive proportion of the vertices, with high probability as $n\rightarrow\infty$. More formally, we showed that the proportion of vertices contained within the largest component of $G(n,\frac{\lambda}{n})$ vanishes in probability: $\frac{1}{n} \left| L_1\left(G\left(n,\frac{\lambda}{n}\right)\right) \right| \stackrel{\mathbb{P}}\longrightarrow 0.$

The argument for this result involves an exploration process of a component of the graph. This notion will be developed more formally in future lectures, aiming for good approximation rather than bounding arguments. But for now, the key observation is that when we ‘explore’ the component of a uniformly chosen vertex $v\in[n]$ outwards from v, at all times the number of ‘children’ of v which haven’t already been considered is ‘at most’ $\mathrm{Bin}(n-1,\frac{\lambda}{n})$. For example, if we already know that eleven vertices, including the current one w, are in C(v), then the number of new vertices to be added to consideration because they are directly connected to w has conditional distribution $\mathrm{Bin}(n-11,\frac{\lambda}{n})$. Firstly, we want to formalise the notion that this is ‘less than’ $\mathrm{Bin}(n,\frac{\lambda}{n})$, and also that, so long as we don’t replace 11 by a linear function of n, that $\mathrm{Bin}(n-11,\frac{\lambda}{n})\stackrel{d}\approx \mathrm{Poisson}(\lambda)$.

Couplings to compare distributions

A coupling of two random variables (or distributions) X and Y is a realisation $(\hat X,\hat Y)$ on the same probability space with correct marginals, that is $\hat X\stackrel{d}=X,\quad \hat Y\stackrel{d}=Y.$ We saw earlier in the course that we could couple G(n,p) and G(n,q) by simulating both from the same family of uniform random variables, and it’s helpful to think of this in general: ‘constructing the distributions from the same source of randomness’.

Couplings are a useful notion to digest at this point, as they embody a general trend in discrete probability theory. Wherever possible, we try to do as much as we can with the random objects, before starting any calculations. Think about the connectivity property of G(n,p) as discussed in the previous lecture. This can be expressed directly as a function of p in terms of a large sum, but showing it is an increasing function of p is essentially impossible by computation, whereas this is very straightforward using the coupling.

We will now review how to use couplings to compare distributions. For a real-valued random variable X, with distribution function $F_X$, we always have the option to couple with a uniform U(0,1) random variable. That is, when $U\sim U[0,1]$, we have $F_X^{-1}(U)\stackrel{d}= X$, where the inverse of the distribution function is defined (in the non-obvious case of atoms) as $F_X^{-1}(u)=\inf\left\{ x\in\mathbb{R}\,:\, F(x)\ge u\right\}.$ Note that when the value taken by U increases, so does the value taken by $F_X^{-1}(U)$.
This coupling can be used simultaneously on two random variables X and Y, as $(F_X^{-1}(U),F_Y^{-1}(U))$, to generate a coupling of X and Y.

The total variation distance between two probability measures is $d_{\mathrm{TV}}(\mu,\nu):= \sup_{A}|\mu(A)-\nu(A)|$, with supremum taken over all events in the joint support S of $\mu,\nu$. This is particularly clear in the case of discrete measures, as then $d_{\mathrm{TV}}(\mu,\nu)=\frac12 \sum_{x\in S} \left| \mu\left(\{x\}\right) - \nu\left(\{x\}\right) \right|.$ (Think of the difference in heights between the bars, when you plot $\mu,\nu$ simultaneously as a bar graph…)

The total variation distance records how well we can couple two distributions, if we want them to be equal as often as possible. It is therefore a bad measure of distributions with different support. For example, the distributions $\delta_0$ and $\delta_{1/n}$ are distance 1 apart (the maximum) for all values of n. Similarly, the uniform distribution on [0,1] and the uniform distribution on $\{0,1/n,2/n,\ldots, (n-1)/n, 1\}$ are also distance 1 apart. When there is more overlap, the following result is useful.

Proposition: Any coupling $(\hat X,\hat Y)$ of $X\sim \mu,\,Y\sim \nu$ satisfies $\mathbb{P}(X=Y)\le 1-d_{\mathrm{TV}}(\mu,\nu)$, and there exists a coupling such that equality is achieved.

Proof: The first part of the result is shown by $\mathbb{P}\left(\hat X=\hat Y = x\right)\le \mathbb{P}\left(\hat X=x\right) = \mu\left(\{x\}\right),$ and so we may also bound by $\nu\left(\{x\}\right)$. Thus $\mathbb{P}\left(\hat X=\hat Y\right) \le \sum_x \min\left( \mu\left(\{x\}\right),\, \nu\left(\{x\}\right) \right),$ so $\mathbb{P}\left(\hat X \ne\hat Y\right) \ge 1 - \sum_x \min\left(\mu\left(\{x\}\right),\, \nu\left(\{x\}\right) \right)$ $= \sum_x \left[ \frac12 \mu(\{x\}) + \frac12 \nu(\{x\}) - \min\left( \mu(\{x\}), \,\nu(\{x\})\right) \right] = \frac12 \sum_x \left| \mu(\{x\}) - \nu(\{x\}) \right|.$ The second part of the proposition is more of an exercise in notation, and less suitable for this blog article.

Crucially, observe that the canonical coupling discussed above is not necessarily the best coupling for total variation distance. For example, the uniform distribution on $\{0,1,2,\ldots, n-1\}$ and the uniform distribution on $\{1,2,\ldots, n-1, n\}$ are very close in the total variation distance, but under the canonical coupling involving the distribution function, they are never actually equal!

Stochastic domination

Given two random variables X and Y, we say that Y stochastically dominates X if $\mathbb{P}(X\ge x)\le \mathbb{P}(Y\ge x),\quad \forall x\in\mathbb{R}$, and write $X\le_{st}Y$. If we are aiming for results in probability, then it is easy to read results in probability for X off from results in probability for Y, at least in one direction. For this relation, it turns out that the canonical coupling is particularly useful, unsurprisingly so since it is defined in terms of the distribution function, which precisely describes the tail probabilities. In fact the reverse implication is true too, as stated in the following theorem.

Theorem (Strassen): $X\le_{st} Y$ if and only if there exists a coupling $(\hat X,\hat Y)$ such that $\mathbb{P}(\hat X\le \hat Y)=1$.

Proof: The reverse implication is clear since $\mathbb{P}\left(\hat X \ge x\right) = \mathbb{P}\left( \hat Y\ge \hat X\ge x\right) \le \mathbb{P}\left( \hat Y \ge x\right),$ and $\hat X,\,\hat Y$ have the correct marginals.
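Before the examples, here is a quick numerical illustration (an R sketch added here, not part of the original notes) of how the canonical coupling $(F_X^{-1}(U), F_Y^{-1}(U))$ realises a stochastic domination with the two coordinates ordered almost surely:

# Canonical (inverse-CDF) coupling of two stochastically ordered binomials
set.seed(1)
u <- runif(1e5)                        # one shared source of randomness
x <- qbinom(u, size = 10, prob = 0.3)  # F_X^{-1}(U), with X ~ Bin(10, 0.3)
y <- qbinom(u, size = 12, prob = 0.5)  # F_Y^{-1}(U), with Y ~ Bin(12, 0.5)
mean(x <= y)                           # equals 1: the coupled pair is ordered pointwise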
Examples: In many situations, the coupling description makes it much easier to verify the stochastic domination relation.

• $\lambda\le \mu$ implies $\mathrm{Poisson}(\lambda)\le_{st} \mathrm{Poisson}(\mu)$. Checking the tail of the Poisson mass function would be a bit of a nightmare, but since we have the superposition coupling $\mathrm{Po}(\mu)\stackrel{d}= \mathrm{Po}(\lambda) + \mathrm{Po}(\mu-\lambda)$, for independent RVs on the RHS, we can read it off.
• Similarly $\mathrm{Bin}(n,p)\le_{st} \mathrm{Bin}(m,q)$ when $n\le m,\, p\le q$. Again, checking the tail would be computationally annoying.
• One of the most useful examples is the size-biased distribution, obtained from a distribution X by $\mathbb{P}(X_{sb}=x) \propto x\mathbb{P}(X=x)$, when X is a non-negative RV. As discussed on one of the exercises, |C(v)|, which we study repeatedly, can be expressed as the size-biased version of a uniform choice from amongst the list of component sizes in the graph. We have $X\le_{st} X_{sb}$, which is intuitively reasonable since the size-biased distribution ‘biases in favour of larger values’.

1) The fact that we study the component $C(v)$ containing a uniformly chosen vertex $v\in[n]$ is another example of ‘revealing’ the total randomness in a helpful order. Note that $|C(v)|$ requires extra randomness on top of the randomness of the graph itself. When analysing $C(v)$ it was convenient to sample the graph section-by-section having settled on v. However, for the final comparison of $|C(v)|$ and $|L_1(G)|$, the opposite order is helpful. This is because, whenever there is a component of size at least $\epsilon n$, the probability that we pick v within that component is at least $\frac{\epsilon n}{n}$. Thus we can conclude that $\mathbb{P}\left(|C(v)|\ge \epsilon n\right) \ge \frac{\epsilon n}{n}\mathbb{P}\left( \left|L_1\left(G\left(n,\frac{\lambda}{n}\right)\right) \right| \ge \epsilon n\right).$ Here, since we are aiming for convergence in probability, $\epsilon>0$ is fixed, and $\frac{1}{n}\left|L_1\left(G\left(n,\frac{\lambda}{n}\right)\right) \right|\stackrel{\mathbb{P}}\longrightarrow 0,$ follows from $\frac{1}{n}\left| C(v) \right| \stackrel{\mathbb{P}}\longrightarrow 0,$ which we have shown using the exploration process.

2) In fact, the final step of the argument can be strengthened to show that $\lim_{K\rightarrow\infty}\limsup_{n\rightarrow\infty}\mathbb{P}\left(\left|L_1\left(G\left(n,\frac{\lambda}{n}\right)\right) \right|\ge K\log n\right)=0,$ though this requires familiarity with Chernoff bounds / Large Deviations estimates, which would have been a distraction to have introduced at exactly this point in the course. Showing that the size of the largest component is concentrated in probability on the scale of $\log n$ is also possible, though for this we obviously need to track exact errors, rather than just use bounding arguments.

3) There are many more ways to compare distributions, with and without couplings, some of which have been of central importance in studying the age distributions corresponding to mean field forest fires, as introduced in a recent paper with Ed Crane and Balazs Rath, which can, as of this morning, be found on Arxiv. In the interests of space, further discussion of more exotic measures on distributions is postponed to a separate post.
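To see the size-biasing remark and the subcritical behaviour in action, here is a small simulation sketch (added here, not part of the original post; it assumes the igraph package is available):

# |C(v)| for a uniform vertex v is the size-biased pick from the list of component sizes
library(igraph)
set.seed(2)
n <- 1000
g <- sample_gnp(n, 0.8 / n)       # subcritical G(n, lambda/n) with lambda = 0.8
cmp <- components(g)
mean(cmp$csize[cmp$membership])   # E|C(v)| averaged over a uniformly chosen vertex v
sum(cmp$csize^2) / n              # the same quantity via the size-biased formula
max(cmp$csize) / n                # the largest component is only a small fraction of [n]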
2019-09-22 01:35:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 63, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8897401690483093, "perplexity": 279.1636828965834}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574765.55/warc/CC-MAIN-20190922012344-20190922034344-00320.warc.gz"}
https://zbmath.org/?q=an:06871371
Recovery of the Schrödinger operator on the half-line from a particular set of eigenvalues. (English) Zbl 1391.34040

Let $$\lambda_j(q,h_n),\; n\geq 1,$$ be an eigenvalue of the Sturm-Liouville operator $-y''(x)+q(x)y(x)=\lambda y(x),\; x>0,$ $(1+x)q(x)\in L(0,\infty),\quad y'(0)-h_ny(0)=0,$ with fixed $$j$$. It is proved that if the sequence $$\{h_n\}$$ has a limit point, then the specification of $$\lambda_j(q,h_n)$$, $$n\geq 1$$, uniquely determines $$q$$.

##### MSC:
34A55 Inverse problems involving ordinary differential equations
34B24 Sturm-Liouville theory
34L40 Particular ordinary differential operators (Dirac, one-dimensional Schrödinger, etc.)
47E05 General theory of ordinary differential operators (should also be assigned at least one other classification number in Section 47-XX)

##### Keywords:
Sturm-Liouville operators; inverse spectral problem
2021-06-15 17:17:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.742637038230896, "perplexity": 1746.7692083155237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621450.29/warc/CC-MAIN-20210615145601-20210615175601-00284.warc.gz"}
https://www.gamedev.net/forums/topic/343157-i-am-hapy-look-at-the-code-i-made-today/
# I am happy, look at the code I made today.

## Recommended Posts

Try using a switch instead. Much pweetier! [grin]

How long have you been learning C++? So, are you creating a Text Based game of some sort? Just wondering. Well, I better get back to work on my code. I am creating a Pong Clone in SDL right now. Later. Keep up the good work. EDIT: OH YEAH, I MEANT TO AGREE WITH THE OTHER GUY AND SAY USE A SWITCH. IT WILL LOOK ALOT BETTER AND WELL, IT WILL BE SHORTER.

Hi, welcome to the forums! Nice looking code u got there.... Looks like code for an RPG game if im correct. Keep at it, ur doin great!! - Oh yea, can u please giv me a positive rating?? People have been giving me negative ratings for no apparent reason! How unfair. Im a nice guy.

Quote: Original post by xor
Try using a switch instead. Much pweetier! [grin]

Yes, please. switch/case statements are so much more elegant and easier to read than sifting through if statement upon if statement. Also, indent more as you dive deeper and deeper through the if statements. It lets the reader know when the statement ends and what code is part of what statement.

I have been learning c++ for about a week or so. Maybe a week ½. Right now I am not making a text based game, but what I do is that when I read a chapter or a couple of pages in my Learn to Program C++ book, I take what I just read and try to make stuff related to games because my Dream in life is to be a Game programmer. I still have a far way to go I know, But I feel I am making pretty good progress.

Oh yeah, you can use the source tags. like this. I will put your code in the source tag.

// StatsUpgrade shop made by Crosis On September 3 2005
#include <iostream>
using namespace std;

int main ()
{
    int Hp;           // This is the Player's Base Hp.
    int Defense;      // This is the Players base defense.
    int Intellegence; // This is the Players base Inteleegence.
    int Gold;         // This is the Players Base Gold.
    Hp = 20;
    Defense = 15;
    Intellegence = 10;
    Gold = 1000;
    cout << " Hello and Welcome to My Shop.\n";
    cout << " Is there Any Services I can Intrest you in.\n";
    int choice;
    cout << " 1 - HpUpgrade.\n";
    cout << " 2 - DefenseUpgrade.\n";
    cout << " 3 - Intellegence.\n";
    cin >> choice;
    if ( choice == 1 )
    {
        cout << " I see that You are Intrested in the HpUpgrade.\n";
        cout << " It Will cost 500 gold, for 1 Point of Hp.\n";
        cout << " Do you want to do this?\n";
        int choice;
        cout << " 1 - yes.\n";
        cout << " 2 - no.\n";
        cin >> choice;
        if ( choice ==1 )
        {
            cout << " That will be 500 gold.\n";
            Gold = Gold - 500;
            cout << " Your gold left is " << Gold << endl;
            Hp = Hp + 1;
            cout << " After the UpGrade You'r Hp is " << Hp << endl;
        }
        else
            cout << " Please come back When you want the Upgrade.\n";
    }
    else
    if ( choice == 2 )
    {
        cout << " I see that You are intrested in the Defense upgrade.\n";
        cout << " It will cost 400 gold . for 1 point of Defense upgrade.\n";
        cout << " Do you want to do this.\n";
        int choice;
        cout << " 1 - Yes.\n";
        cout << " 2 - No.\n";
        cin >> choice;
        if ( choice == 1 )
        {
            cout << " That will be 400 gold for the Upgrade.\n";
            Gold = Gold - 400;
            cout << " Your gold left after Paying him is " << Gold << endl;
            Defense = Defense + 1;
            cout << " Your defesne after the upgrade is " << Defense << endl;
        }
        else
            cout << " Please Come back when you have the Money.\n";
    }
    else
    if ( choice == 3 )
    {
        cout << " I see that you are Intrested in the Intellegence Upgrade.\n";
        cout << " It will cost 100 for 1 point of Intellegence upgrade.\n";
        cout << " Do you want to do this?\n";
        int choice;
        cout << " 1 - Yes.\n";
        cout << " 2 - No.\n";
        cin >> choice;
        if ( choice == 1 )
        {
            cout << " That Will Be 100 gold for the Upgrade.\n";
            Gold = Gold - 100;
            cout << " Your gold left after paying him is " << Gold << endl;
            Intellegence = Intellegence + 1;
            cout << " Your Intellegence after the Upgrade is " << Intellegence << endl;
        }
        else
            cout << " Please come back When you have the Money.\n";
    }

I did that by....[ source] and [/ source](without the spaces) It makes it ALOT easier to read.

It looks good. [smile] Two suggestions:
- Use the switch construct rather than a long string of ifs. It's easier to read and often more intuitive.
- It's unnecessary to write "Intelligent = Intelligent + 1". C++ provides two syntactic mechanisms for doing this concisely. They both ultimately add 1 to the variable:
Pre-increment: "++Intelligence;" This adds 1 to Intelligence immediately.
Post-increment: "Intelligence++;" This adds 1 to Intelligence after the current line is finished executing.
There are also the decrement equivalents, notated with a "--" as opposed to "++".

Good job. Personally, I would move all the code in the if sections into functions, and then call them when you want to use them instead of writing the code every time. Example:

if ( choice == 1 )
{
    cout << " I see that You are Intrested in the HpUpgrade.\n";
    cout << " It Will cost 500 gold, for 1 Point of Hp.\n";
    cout << " Do you want to do this?\n";
    int choice;
    cout << " 1 - yes.\n";
    cout << " 2 - no.\n";
    cin >> choice;
    if ( choice ==1 )
    {
        cout << " That will be 500 gold.\n";
        Gold = Gold - 500;
        cout << " Your gold left is " << Gold << endl;
        Hp = Hp + 1;
        cout << " After the UpGrade You'r Hp is " << Hp << endl;
    }
    else
        cout << " Please come back When you want the Upgrade.\n";
}

Perhaps have:

if (choice==1){some_function_that_does_everything_above();}

That way you can reuse it later on in the game, and can make it much easier to work with. Just a suggestion [grin] Other than that, great job. It's always nice to see people's code [smile]

Look around for using shorthand operators. They reduce how much you have to type by a lot. Instead of Gold = Gold - 500; you can use Gold -= 500; This works with the four basic math operators (+ - * /)
2018-06-18 12:23:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2548517882823944, "perplexity": 4777.625234678285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00139.warc.gz"}
https://stats.stackexchange.com/questions/473231/how-do-i-interpret-the-results-of-a-paired-t-test
# How do I interpret the results of a paired t-test?

I'm new to R and statistical data analysis in general. Here is what I am trying to do: I have data from patients before and after a therapy. I want to compare the means of the two measures to see if there is a significant difference to see if the therapy had any effect. First I checked whether the data is normal distributed by doing a shapiro-wilk test:

differences <- data$SymptomsBefore - data$SymptomsAfter #calculate the differences
shapiro.test(differences) #do the test

Output: Shapiro-Wilk normality test data: differences W = 0.92445, p-value = 0.2878

--> Since the p-value is bigger than 0.05, I can assume that the data is normal distributed. So now I can do a paired samples t-test. Since I don't know if the effect is positive or negative I choose the two-tailed option.

t.test(data$SymptomsBefore, data$SymptomsAfter, paired = TRUE, alternative = "two")

Here is the result: Paired t-test data: data$SymptomsBefore and data$SymptomsAfter t = -2.8939, df = 12, p-value = 0.01348 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -1.5506404 -0.2185903 sample estimates: mean of the differences -0.8846154

So, if I interpret the results correctly, this means there was a significant difference in the means (because the p-value was less than 0.05), meaning the therapy had an effect. Also, on average the symptoms were -0.8846154 lower than before the therapy. In a paper I would report that like this: "The results indicate that the therapy resulted in a decrease in Symptoms, t(12) = -2.8939, p = .01348." Is my interpretation of the R-output correct? Am I missing something? Is there something I still need to check that I didn't think of?

• --> Since the p-value is bigger than 0.05, I can assume that the data is normal distributed. ... this is false. Since the p-value is bigger than 0.05, you can not reject the H0. (... and H0 is normality) – jogo Jun 20 at 20:16
• 12 degrees of freedom, so an N of 13 in each group? That's not enough for your Shapiro test to tell you anything. (Nobody can tell whether data is normal from 12 samples.) Run a non-parametric test, e.g. wilcox.test(). – dash2 Jun 20 at 21:01
• Also, without a control group, you cannot ascribe the difference to the therapy. All you can say is that there was a difference. (The patients might have improved even if they hadn't received the therapy.) – Limey Jun 20 at 21:26
• @dash2 How powerful would you expect wilcox.test to be with so few observations? – Dave Jun 20 at 22:16
• Not very powerful! But the t test is only going to be more powerful because it makes assumptions that cannot be validated (and if SymptomsBefore is a count, are very unlikely to be true....) For example, here's what happens if you run shapiro tests with very non-normal data: tmp <- replicate(1000, {x <- runif(13); shapiro.test(x)$p.value}) ; table(tmp < 0.05). I get power of about 10% to reject normality. – dash2 Jun 22 at 10:18

## 1 Answer

I will consider your Question along with several of the accumulated Comments:

Are data normal? A Shapiro-Wilk test on a sample of size 13 can give you some idea whether the population from which the sample was chosen was normal. Below we look at 100,000 simulated samples of size 13, from each of standard uniform, exponential, and normal populations.

• As is to be expected, the Shapiro-Wilk test at the 5% level did reject about 5% of truly normal samples as not consistent with normal data.
• However, only about 10% of samples from a uniform population (mainly symmetrical and almost never with outliers) were rejected as not normal, and
• About 59% of samples from an exponential population (highly skewed and typically with outliers) were rejected as not normal.

The S-W test is one of the best tests of normality, but hardly definitive for samples as small as yours.

set.seed(2020)
pv.u = replicate(10^5, shapiro.test(runif(13))$p.val )
mean(pv.u <= .05)
[1] 0.10436
pv.e = replicate(10^5, shapiro.test(rexp(13))$p.val )
mean(pv.e <= .05)
[1] 0.59062
pv.n = replicate(10^5, shapiro.test(rnorm(13))$p.val )
mean(pv.n <= .05)
[1] 0.04858

Paired t test or Wilcoxon signed rank test?

Moreover, it is not the individual observations that need to be nearly normal in order to get reliable results from a paired t test. It is the sample mean that needs to be nearly normal. If a sample of about a dozen differences is roughly symmetrical and without extreme outliers, then one can usually rely on a paired t test to give useful results.

I have only your results from a paired t test. Without your data, I can't look to see for sure what results you would have gotten from a Wilcoxon signed rank test on the differences. Especially with such a small sample, the Wilcoxon test is less powerful (likely to reject when $$H_0$$ untrue) than a t test. Working backward from your printout, I deduce that you must have had $$\bar D \approx 0.88$$ with standard deviation $$S_D \approx 1.1.$$ Symmetrical data centered at 0.88 and with SD 1.1 would lead the Wilcoxon test to reject with probability about 73%. So using a Wilcoxon test might not have been a bad choice. Without seeing your actual data, I can only guess.

pv.wilcox = replicate(10^5, wilcox.test(rnorm(13, .88, 1.1))$p.val)
mean(pv.wilcox <= .05)
[1] 0.73003

Your summary. In your summary you can only claim that patients reported lower symptom scores after the period of treatment than before. The improvement might have been either from the treatment or the passage of time. I think it is reasonable to give the mean of the differences, the SD of differences, and the number of differences in treatment scores along with the P-value. Whether this is for a local tech report or to be submitted to a journal for publication, an editor may ask you for a slightly different summary, perhaps including Cohen's D (which can be obtained from sample size, mean and SD). If you feel that a decrease of about 1 (0.88) on your symptom scale amounts to an important improvement for patients, then somewhere in your report you need to explain your scale and say why the average observed difference is of practical importance. If you have no control group, you must say so explicitly.

Note: If you have any data on decrease in symptoms for untreated subjects, it would be useful to mention that, commenting on similarities and differences (demographic and seriousness of condition) of those patients from your treated ones.

• There's a small error in the code. You have mean(pv.u < 0.5) instead of mean(pv.u < 0.05). So you overestimate the Shapiro test's power to reject normality at 5% significance. – dash2 Jun 22 at 10:22
• You might want to mention that the test on runif now only rejects 10% of the time not 74% :-) – dash2 Jun 23 at 18:26
• @dash2: Thanks for letting me know about the errors. I hope I have fixed everything now. – BruceET Jun 23 at 18:32
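To complement the answer above, here is a small R sketch (added here, not part of the original answer; the column names simply mirror those in the question) showing the Wilcoxon signed-rank alternative and a paired Cohen's d computed from the differences:

differences <- data$SymptomsBefore - data$SymptomsAfter   # as in the question
wilcox.test(differences)              # Wilcoxon signed-rank test that the median difference is 0
mean(differences) / sd(differences)   # Cohen's d for paired data: mean over SD of the differences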
2020-09-24 23:49:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49059736728668213, "perplexity": 889.8851499985534}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400221382.33/warc/CC-MAIN-20200924230319-20200925020319-00316.warc.gz"}
https://math.stackexchange.com/questions/1154483/show-that-lvert-a-rvert-22-leq-lvert-a-rvert-1-lvert-a-rvert-infty?noredirect=1
Show that $\lVert A \rVert_2^2 \leq \lVert A \rVert _1 \lVert A \rVert _ \infty$

With the definition of $\lVert A \rVert_2$ and $\lVert A \rVert_1$ and $\lVert A \rVert_ \infty$ that is: \begin{gather} \lVert A\rVert_1 = \max_{j} \sum_{i=1}^m \lvert a_{ij}\rvert\\ \lVert A\rVert_2 = \sqrt{\rho(A^HA)}\\ \lVert A\rVert_\infty = \max_{i} \sum_{j=1}^n \lvert a_{ij}\rvert \end{gather} prove that: $$\lVert A \rVert_2^2 \leq \lVert A \rVert_1 \lVert A \rVert_\infty$$

• Rather than \parallel, use \lVert and \rVert for the norms, that gives proper spacing when rendered. – Daniel Fischer Feb 18 '15 at 13:11

Let $\|\cdot\|$ be any matrix norm induced by a vector norm. Then we have $$\|A\|_2^2= \rho(AA^H) \leq \|AA^H\| \leq \|A\|\|A^H\|.$$ Here the first inequality follows from a "famous theorem" (see e.g. Proposition 4.4) and the second inequality follows from the fact that $\|\cdot\|$ is a matrix norm induced by a vector norm and thus is submultiplicative. Finally note that $\|A\|_1 =\|A^H\|_\infty$, so taking $\|\cdot\|=\|\cdot\|_\infty$ above gives $\|A\|_2^2 \leq \|A\|_\infty\|A^H\|_\infty = \|A\|_\infty\|A\|_1$, as required.
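As an informal numerical sanity check (added here, not part of the original answer), the inequality can be verified on a random matrix in base R, where norm(A, "1"), norm(A, "I"), and norm(A, "2") give the maximum column sum, maximum row sum, and spectral norm respectively:

set.seed(1)
A <- matrix(rnorm(12), nrow = 3)                 # a random 3 x 4 real matrix
norm(A, "2")^2 <= norm(A, "1") * norm(A, "I")    # TRUE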
2019-10-18 16:16:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.992724597454071, "perplexity": 645.6334328874317}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684226.55/warc/CC-MAIN-20191018154409-20191018181909-00400.warc.gz"}
https://nt246.github.io/NTRES-6100-data-science/lesson7-data-wrangling1.html
#### Required: Chapter 5.1-5.5 in R for Data Science by Hadley Wickham & Garrett Grolemund

## Learning objectives for today's class

Now that we have explored some of the powerful ways ggplot lets us visualize data, let's take a step back and discuss tools to get data into the right format we need for downstream analysis. Often you'll need to create some new variables or summaries, or maybe you just want to rename the variables or reorder the observations in order to make the data a little easier to work with.

Data scientists, according to interviews and expert estimates, spend from 50 percent to 80 percent of their time mired in the mundane labor of collecting and preparing data, before it can be explored for useful information. - NYTimes (2014)

By the end of today's class, you should be able to:
• Subset and rearrange data with key dplyr functions
• Pick observations by their values filter()
• Pick variables by their names select()
• Use piping (%>%) to implement function chains
• Make sure your RStudio working files are sync'ed to GitHub

Acknowledgements: Today's lecture is adapted (with permission) from the excellent Ocean Health Index Data Science Training and Jenny Bryan's lectures from STAT545 at UBC: Introduction to dplyr.

## What is data wrangling?

What are some common things you like to do with your data? Maybe remove rows or columns, do calculations and maybe add new columns? This is called data wrangling ("the process of cleaning, structuring and enriching raw data into a desired format for better analysis in less time" by one definition). It's not data management or data manipulation: you keep the raw data raw and do these things programmatically in R with the tidyverse.

We are going to introduce you to data wrangling in R first with the tidyverse. The tidyverse is a suite of packages that match a philosophy of data science developed by Hadley Wickham and the RStudio team. I find it to be a more straight-forward way to learn R. We will also show you by comparison what code will look like in "Base R", which means, in R without any additional packages (like the "tidyverse" package) installed. I like David Robinson's blog post on the topic of teaching the tidyverse first. For some things, base-R is more straightforward, and we'll show you that too.

### Taking notes

Just like last class, we'll keep practicing our GitHub/version control integration by pushing our notes onto our course repo. Please open the R Project that is associated with your class repository (the name should be ntres-6100-YOUR_USERNAME; replace YOUR_USERNAME with your GitHub user ID). Open an R-script or an RMarkdown file and save it within your course-notes subdirectory as data_wrangling.R [or whatever you want to call it]. Use this R-script to type along with the examples we'll go through together today (if that works for you), and for your exercises.

### Load tidyverse (which has dplyr inside)

In your R Markdown file, let's make sure we've got our libraries loaded. Write the following:

library(tidyverse) ## install.packages("tidyverse")

This is becoming standard practice for how to load a library in a file, and if you get an error that the library doesn't exist, you can install the package easily by running the code within the comment (highlight install.packages("tidyverse") and run it).
## Coronavirus data set

As the COVID-19 crisis is at the forefront of everyone's minds at the moment, let's use a dataset tallying daily developments in recorded Coronavirus cases across the world, so that we can develop our data wrangling skills by exploring global patterns in the pandemic. We will use a dataset compiled in the coronavirus R package developed by Rami Krispin. This package (hosted on GitHub) pulls raw data from the Johns Hopkins University Center for Systems Science and Engineering (JHU CCSE) Coronavirus repository and is frequently updated. The compiled package is available to install like any other package from CRAN with install.packages. However, it keeps getting updated, so to make sure that we're always working with the most current version, we will import the dataset directly from GitHub.

Let's first check when the coronavirus.csv file on the coronavirus package GitHub page was last updated. If we click on the coronavirus.csv file, we'll see that it's too large to display on GitHub in data-view mode. We can read this data into R directly from GitHub, without downloading it. But we can't read this data in view-mode. We have to click on the View raw link in the view window. This displays it as the raw csv file, without formatting.

Copy the url for raw data: https://raw.githubusercontent.com/RamiKrispin/coronavirus/master/csv/coronavirus.csv

Now, let's go back to RStudio. In our R Markdown, let's read this csv file and name the variable "coronavirus". We will use the read_csv() function from the readr package (part of the tidyverse, so it's already installed!).

# read in corona .csv
coronavirus <- read_csv('https://raw.githubusercontent.com/RamiKrispin/coronavirus/master/csv/coronavirus.csv')

For today, don't worry about how the read_csv() function works - we will cover the details of data import functions in a few weeks. Once we have the data loaded, let's start getting familiar with its content and format. Let's inspect:

## explore the coronavirus dataset
coronavirus # this is super long! Let's inspect in different ways

Let's use head and tail:

head(coronavirus) # shows first 6
tail(coronavirus) # shows last 6
head(coronavirus, 10) # shows first X that you indicate
tail(coronavirus, 12) # guess what this does!

We can also see the coronavirus variable in RStudio's Environment pane (top right)

More ways to learn basic info on a data.frame.

names(coronavirus)
dim(coronavirus) # ?dim dimension
ncol(coronavirus) # ?ncol number of columns
nrow(coronavirus) # ?nrow number of rows

A statistical overview can be obtained with summary(), or with skimr::skim()

summary(coronavirus)
# If we don't already have skimr installed, we will need to install it
# install.packages('skimr')
library(skimr) # install.packages("skimr")
skim(coronavirus)

### Look at the variables inside a data.frame

To specify a single variable from a data.frame, use the dollar sign $. The $ operator is a way to extract or replace parts of an object — check out the help menu for $. It's a common operator you'll see in R.

coronavirus$cases # very long! hard to make sense of...
head(coronavirus$cases) # can do the same tests we tried before
str(coronavirus$cases) # it is a single numeric vector
summary(coronavirus$cases) # same information, formatted slightly differently

## dplyr basics

OK, so let's start wrangling with dplyr.
There are five dplyr functions that you will use to do the vast majority of data manipulations: • filter(): pick observations by their values • select(): pick variables by their names • mutate(): create new variables with functions of existing variables • arrange(): reorder the rows • summarise(): collapse many values down to a single summary These can all be used in conjunction with group_by() which changes the scope of each function from operating on the entire dataset to operating on it group-by-group. These six functions provide the verbs for a language of data manipulation. We will cover at least the first two today and continue with the rest on Wednesday. All verbs work similarly: 1. The first argument is a data frame. 2. The subsequent arguments describe what to do with the data frame. You can refer to columns in the data frame directly without using $. 3. The result is a new data frame. Together these properties make it easy to chain together multiple simple steps to achieve a complex result. ## filter() subsets data row-wise (observations). You will want to isolate bits of your data; maybe you want to only look at a single country or specific days or months. R calls this subsetting. filter() is a function in dplyr that takes logical expressions and returns the rows for which all are TRUE. Visually, we are doing this (thanks RStudio for your cheatsheet): Are you familiar with how to invoke logical expressions in R? If not, here is an overview here. We’ll use > and == here. filter(coronavirus, cases > 0) You can say this out loud: “Filter the coronavirus data for cases greater than 0”. Notice that when we do this, all the columns are returned, but only the rows that have the a non-zero case count. We’ve subsetted by row. Let’s try another: “Filter the coronavirus data for the country US”. filter(coronavirus, country == "US") Note that when you run that line of code, dplyr executes the filtering operation and returns a new data frame. dplyr functions never modify their inputs, so if you want to save the result, you’ll need to use the assignment operator, <-: coronavirus_us <- filter(coronavirus, country == "US") How about if we want two country names? We can’t use a single instance of the == operator here, because it can only operate on one thing at a time. We can use Boolean operators for this: & is “and”, | is “or”, and ! is “not”. So if we want records from both the US and Canada, we can type filter(coronavirus, country == "US" | country == "Canada") A useful short-hand for this problem is x %in% y. This will select every row where x is one of the values in y. We could use it to rewrite the code above: filter(coronavirus, country %in% c("US", "Canada")) How about if we want only the death counts in the US? You can pass filter different criteria: # We can use either of these notations: filter(coronavirus, country == "US", type == "death") filter(coronavirus, country == "US" & type == "death") ## Your turn - Exercise 1 1a: What is the total number of deaths in the US reported in this dataset up to now? Hint: You can do this in 2 steps by assigning a variable and then using the sum() function. 1b: Subset the data to only show the death counts in three European countries yesterday. Then, sync to Github.com (pull, stage, commit, push). 
click to see one approach

This is one way to do it based on what we have learned so far:

Question 1a:

x <- filter(coronavirus, country == "US", type == "death")
sum(x$cases)
# Also, remember that the output from filter() is a dataframe, so you can use the $ operator on the called function directly:
sum(filter(coronavirus, country == "US", type == "death")$cases)

Question 1b:

# Example:
filter(coronavirus, country %in% c("Denmark", "Italy", "Spain"), type == "death", date == "2021-02-28")

## select() subsets data column-wise (variables)

We use select() to subset the data on variables or columns. Visually, we are doing this (thanks RStudio for your cheatsheet):

We can select multiple columns with a comma, after we specify the data frame (coronavirus).

select(coronavirus, date, country, type, cases)

Note how the order of the columns has also been rearranged to match the order they are listed in the select() function. We can also use - to deselect columns:

select(coronavirus, -lat, -long) # you can use - to deselect columns

### Your turn - Exercise 2

Create a new dataframe including only the country, lat, and long variables, listed in this order. Now make one listed in order lat, long, country.

click to see one approach

In this case, we have very few variables that we can easily select one by one, but for datasets with lots of variables with standardized names, some of the built-in helper functions may be helpful, e.g.:

select(coronavirus, country:long)
select(coronavirus, contains('o'))
select(coronavirus, ends_with('e'))
# Also, compare the output of these:
select(coronavirus, casetype = type)
select(coronavirus, casetype = type, everything())
rename(coronavirus, casetype = type)

## Use select() and filter() together

We’ve explored the functions select() and filter() separately. Now let’s combine them and filter to retain only records for the US and remove the lat, long and province columns (because this dataset doesn’t currently have data broken down by US state). We’ll save this subsetted data as a variable. Actually, as two temporary variables, which means that for the second one we need to operate on coronavirus_us, not coronavirus.

coronavirus_us <- filter(coronavirus, country == "US")
coronavirus_us2 <- select(coronavirus_us, -lat, -long, -province)

We also could have called them both coronavirus_us and overwritten the first assignment. Either way, naming them and keeping track of them gets super cumbersome, which means more time to understand what’s going on and opportunities for confusion or error. Good thing there is an awesome alternative.

## Meet the new pipe %>% operator

Before we go any further, we should explore the new pipe operator that dplyr imports from the magrittr package by Stefan Bache. If you have not used this before, this is going to change your life (at least your coding life…). You no longer need to enact multi-operation commands by nesting them inside each other. And we won’t need to make temporary variables like we did in the US example above. This new syntax leads to code that is much easier to write and to read: it actually tells the story of your analysis.

Here’s what it looks like: %>%. The RStudio keyboard shortcut: Ctrl + Shift + M (Windows), Cmd + Shift + M (Mac).

Let’s demo, then I’ll explain:

coronavirus %>% head()

This is equivalent to head(coronavirus). This pipe operator takes the thing on the left-hand-side and pipes it into the function call on the right-hand-side. It literally drops it in as the first argument.
Never fear, you can still specify other arguments to this function! To see the first 3 rows of coronavirus, we could say head(coronavirus, 3) or this:

coronavirus %>% head(3)

I’ve advised you to think “gets” whenever you see the assignment operator, <-. Similarly, you should think “and then” whenever you see the pipe operator, %>%. One of the most awesome things about this is that you START with the data before you say what you’re going to DO to it. So above: “take the coronavirus data, and then give me the first three entries”.

This means that instead of this:

## instead of this...
coronavirus_us <- filter(coronavirus, country == "US")
coronavirus_us2 <- select(coronavirus_us, -lat, -long, -province)

## ...we can do this
coronavirus_us <- coronavirus %>% filter(country == "US")
coronavirus_us2 <- coronavirus_us %>% select(-lat, -long, -province)

So you can see that we’ll start with coronavirus in the first example line, and then coronavirus_us in the second. This makes it a bit easier to see what data we are starting with and what we are doing to it. …But we still have those temporary variables, so we’re not truly better off. But get ready to be majorly impressed:

### Revel in the convenience

We can use the pipe to chain those two operations together:

coronavirus_us <- coronavirus %>%
  filter(country == "US") %>%
  select(-lat, -long, -province)

What’s happening here? In the second line, we were able to delete coronavirus_us2 <- coronavirus_us, and put the pipe operator above. This is possible since we wanted to operate on the coronavirus_us data. And we weren’t truly excited about having a second variable named coronavirus_us2 anyway, so we can get rid of it. This is huge, because most of your data wrangling will have many more than 2 steps, and we don’t want a coronavirus_us17! By using multiple lines I can actually read this like a story and there aren’t temporary variables that get super confusing. In my head: “start with the coronavirus data, and then filter for the US, and then drop the variables lat, long, and province.” Being able to read a story out of code like this is really game-changing. We’ll continue using this syntax as we learn the other dplyr verbs.

Compare with some base R code to accomplish the same things. Base R requires subsetting with the [rows, columns] notation. This notation is something you’ll see a lot in base R. The brackets [ ] allow you to extract parts of an object. Within the brackets, the comma separates rows from columns. If we don’t write anything after the comma, that means “all columns”. And if we don’t write anything before the comma, that means “all rows”. Also, the $ operator is how you access specific columns of your dataframe.

# There are many ways we could subset columns, here's one way:
coronavirus[coronavirus$country == "US", colnames(coronavirus) %in% c("lat", "long", "province") == FALSE]
## repeating coronavirus inside the brackets and the [i, j] indexing is distracting.

##### Never index by blind numbers!

# There are many ways we could subset columns, here's another (bad choice)
head(coronavirus)
coronavirus[coronavirus$country == "US", c(2, 5:7)]

Why is this a terrible idea?

• It is not self-documenting. What are the columns we’re retaining here?
• It is fragile. This line of code will produce different results if someone changes the organization of the dataset, e.g. adds new variables. This is especially risky if we index rows by numbers, as a sorting action earlier in the script would then give unexpected results.

This call explains itself and is fairly robust.
coronavirus_us <- coronavirus %>%
  filter(country == "US") %>%
  select(-lat, -long, -province)

Here’s a caricature slide Nicolas posted on Slack to help build intuition about the differences between the tidyverse and base R syntax for data wrangling.

## Your turn - Exercise 3

Use the %>% piping operator to subset the coronavirus dataset to only include the daily death counts in the US, Canada, and Mexico, and include only the following variables in this order: country, date, cases. Then combine your new data wrangling skills with the ggplot skills covered last week to visualize how the daily death counts have changed over time in those three countries. Yes! You can pipe data into ggplot - try it! (One possible approach is sketched at the end of this page.)

If you have more time, try exploring other patterns in the data. Pick a different set of countries to display or show how the daily counts of confirmed cases, deaths and recoveries compare.

Save your R script (knit if you’ve been working in an RMarkdown file), and sync it to GitHub (pull, stage, commit, push).

## More ways to select columns

If we have time, we’ll explore some additional options for select(). We will continue in the next class with learning more useful dplyr functions.
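For reference, here is one way Exercise 3 could be approached (a sketch, not the only solution). It assumes the column names used throughout this lesson (country, type, date, cases) and that the tidyverse, including ggplot2, is loaded.

# Sketch of one approach to Exercise 3
daily_deaths <- coronavirus %>%
  filter(country %in% c("US", "Canada", "Mexico"), type == "death") %>%
  select(country, date, cases)

# Pipe the wrangled data straight into ggplot
daily_deaths %>%
  ggplot(aes(x = date, y = cases, color = country)) +
  geom_line() +
  labs(x = "Date", y = "Daily death count",
       title = "Daily reported COVID-19 deaths in the US, Canada, and Mexico")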
2021-06-14 16:04:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2097940444946289, "perplexity": 3173.908094456329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612537.23/warc/CC-MAIN-20210614135913-20210614165913-00581.warc.gz"}
https://mathematica.stackexchange.com/tags/kernel/hot
# Tag Info ## Hot answers tagged kernel 32 Short answer The local variables of the form varname$... are used by the system, and it is unwise to use symbols with such names as local variables. With, like many other lexical scoping constructs, performs excessive renamings, often even in cases where it isn't strictly necessary. This probably has to do with efficiency - full analysis may be more costly. ... 31 You can select which kernel is used by your notebook from the menu item Evaluation -> Notebook's Kernel. By default you will probably only have one kernel called Local available. If your Mathematica license allows for it (typically licenses allow for two simultaneous kernels on a machine), you can add new kernels by selecting the Evaluation -> Kernel ... 29 What was saved was the content of sol, which happens to contain the solution to your equation (you explicitly set it to that), and therefore is certainly sufficient for your plot. Saving Kernel state however would involve saving things like the random seed, so the following would give the same output twice (using a hypothetical function SaveKernelState and ... 29 There is an easy way to keep your data in the notebook itself and NOT to save them in external file - using Compress. As @Leonid says here and I already mentioned this before in this answer for similar case with Interpolation function. Start from some output you need: sol = NDSolve[{D[u[t, x], t] == D[u[t, x], x, x], u[0, x] == 0, u[t, 0] == Sin[t], u[t,... 29 You should consider using the sandbox functionality. You can create a subkernel and put it in sandbox mode this way: link = LinkLaunch[First[$CommandLine]<> " -wstp -noicon"]; LinkWrite[link, Unevaluated@EvaluatePacket[Developer`StartProtectedMode[]]]; You can then interact with this subkernel using the standard LinkWrite and LinkRead functions. If ... 27 There are two processes running. The first process is the FrontEnd. The FrontEnd receives your keypresses and renders text and plots. The second process is the Kernel. The Kernel receives commands to perform calculations, stores the states of variables, and does pretty much all the calculating. When you press Alt-., the FrontEnd immediately receives ... 26 Since one may not always accurately predict when MemoryContrained is needed, I recommend setting up a watch-dog task. Belisarius described how to do this here in answer to my question. I will reproduce it below as answers that are merely links are discouraged. In Mathematica 8 you could start a memory watchdog, something along the lines of: ... 25 As people have figured out in the comments, this was a quite deliberate decision on our part. One which I can take a significant amount of credit/responsibility/blame for. First a little bit about the extra kernel. The kernel is enabled using a password which causes it to run in Wolfram Player mode. It runs using the same binary as the regular kernel, ... 21 To access the errors, you need to invoke the Front End directly from the kernel. In effect, you end up telling the kernel to tell the FE to tell the kernel to do something, so that the FE can report any errors it finds. The method I use is ClearAll[getFrontEndErrors]; SetAttributes[getFrontEndErrors, HoldAllComplete]; getFrontEndErrors[expr_] := Block[{... 20 Here is the method I outlined. I'll illustrate on a small example where we split matrix into top and bottom halves. 
In[794]:= SeedRandom[1111]; halfsize = 3; mat = RandomInteger[{-4, 4}, {2*halfsize, 10}] Out[796]= {{-3, -1, 3, -3, 3, 3, 3, 3, 4, 2}, {3, 3, -3, 0, 0, 1, -2, -4, 0, -1}, {-3, 4, 3, 0, -2, 4, 3, -2, -2, -2}, {2, 2, 4, 0, -4, 4, -1, -4, ... 20 You could launch a different kernel and use that to run the computation. You will be controlling this "slave kernel" from another Mathematica session. This will allow you to script even quitting and restarting the slave kernel. Using parallel tools This is simpler and I recommend trying this approach first. Launch a single kernel: kernel = ... 19 The option to save a variable, a value, in a notebook, that I find simple and deserves a chance is to store them in the notebook's tagging rules. You can compress it if you want, or you can autoload it through an initialization cell or through the NotebookDynamicExpression too. The core is this: r = RandomReal[{-1, 1}, 1000000]; CurrentValue[InputNotebook[]... 19 You need to daemonize your script: nohup math -script test.txt 0<&- &>/dev/null & Now this will run as a background process with no output captured. If your script does indeed produce output, just replace /dev/null with the filename. In order to daemonize something you need to disconnect all the automatically connected streams (stdin, ... 16 This can be relatively easily done using extremely useful $FrontEnd option "ClearEvaluationQueueOnKernelQuit" introduced by Chris Degnen. Usage Print @$SessionID quitAndEvaluate[ Print @ $SessionID ] 25183094379509806957 25183094575602627552 quitAndEvaluate[] will restart kernel without aby additional tasks. It may be useful if you want to ... 16 I have been solving exactly the same problem about 2 years ago (http://community.wolfram.com/groups/-/m/t/125587?p_p_auth=aZGMz5bs). Students are uploading piece of Mathematica (Wolfram Language) code which is run by a testing script (in Mathematica) and the results are compared with a reference solution. To prevent the students to run potentially dangerous ... 16 The kernel crashes due to stack overflow. It is not safe to recurse too deeply. Increasing$RecursionLimit to values that are too great (and actually recursing that deep) risks a crash. (So yes, in a way it's due to insufficient memory, but it has nothing to do with memoization. It is due to insufficient stack space.) From the documentation: On most ... 15 Does running Quit[] do what you want? 15 In addition to Mr.Wizard's answer. In many cases it is very practical to stop the evaluation when the actual amount of free physical memory in your system becomes less than specified threshold. You can get the amount of free physical memory very efficiently via NETLink call to GlobalMemoryStatusEx function of kernel32.dll (which is available both under 32 ... 15 In addition to assigning to In, the Mathematica main loop assigns the input to InString before it is parsed as an expression. You can then retrieve InString[1] and parse the result with ToExpression, wrapping it in Defer to prevent it from evaluating immediately: In[5]:= ToExpression[InString[1], StandardForm, Defer] Out[5]= Round[SessionTime[]] You can ... 15 There is a setting in Mathematica that controls whether it can access the internet. Go to Preferences -> Internet Connectivity and uncheck "Allow the Wolfram System to access the Internet". Disabling this will disable some features that depend on internet access, such as Wolfram|Alpha queries. This setting can also be controlled by the $AllowInternet ... 
15 One approach would be to run the evaluation in a second kernel which is controlled from a main kernel through MathLink/WSTP. Then your main kernel can detect if the MathLink connection dies. You can implement this manually (a lot of work), or you can try to do it using the parallel computing tools, where much of the groundwork is already laid down. In ... 14 You can use GNU screen to make a sort of persistent terminal that allows you to resume work wherever you left off. Take a look at the many tutorials available. It's not completely clear from your question whether the better solution is this, or nohup (see Stefan's answer). Use nohup if your workflow is non-interactive: log in, start a batch job that ... 14 After some spelunking, I found a file which contains a lot of initialization code, including reading the kernel init.m file, loading Autoload packages, loading anything set with the -initfile option, starting the paclet manager (which may autoload packages), and many other things. It is SystemFiles/Kernel/SystemResources/$SystemID/sysinit.m Towards the ... 14 Apparently, Throw is deactivated during kernel initialization. The following function can determine if Throw is inoperative: throwInoperativeQ[] := CheckAll[Catch[Throw[False]], # /. Null -> True &] The undocumented function CheckAll is used here because Check also appears to be unreliable when Throw is inoperative. If we make the assumption that ... 13 As an alternative to DumpSave, what I've done in the past was to Compress the relevant results and store them within the same notebook. You can optionally set things up so that your data/results would self-uncompress themselves upon being called the first time. For one example of such use, see this answer. In any case, the advantage of this approach is ... 13 Yes, the Mathematica application on Mac OS contains a few external binaries, which are mostly used for importing and exporting. These files have suffix .exe: $find "/Applications/Mathematica 8.app" -name '*.exe'|wc -l 49 But even though .exe is a prefix common for Windows executables, it doesn’t mean that it can’t be used for other things. In fact, Mac OS ... 13 Updated This happens because your DynamicModule returns a dynamic object of which x is passed on to the front-end before the scheduled task starts, so the front-end-x cannot be modified anymore by any process (more details at the end). The problem can be further simplified. This works: RemoveScheduledTask@ScheduledTasks[]; DynamicModule[{x = 0}, ... 13 Assuming FrontEnd survives, prepare 3 cells: (*init cell, won't be needed later*) state = CurrentValue[EvaluationNotebook[], {"TaggingRules", "state"}] = 0; SetOptions[ #, {CellTags -> {"Procedure"}, ShowCellTags -> True} ]& /@ {NextCell[], NextCell @ NextCell[]}; CurrentValue[$FrontEndSession, "ClearEvaluationQueueOnKernelQuit"] = False; ... 12 You can put your calculation inside TimeConstrained. However in your case, probably the better idea is to restrict the used memory. That's done with MemoryConstrained. If you don't want to figure out the available memory yourself, see here for how to do it automatically. For example this terminates a calculation if the calculation needs more than 1 GB of ... 12 You can make use of either TimeConstrained or MemoryConstrained to terminate evaluation when it runs out of time or memory respectively. 
For example, if you have a function that has a reasonable memory footprint, but takes time to evaluate, you can abort evaluation after a certain amount of time (in seconds) has elapsed, as: TimeConstrained[Eigenvalues@...
2019-12-08 12:33:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28678375482559204, "perplexity": 1623.0340857726496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00246.warc.gz"}
https://search.test.datacite.org/works/10.17889/e111307v1
### Replication data for: Why Don't Households Smooth Consumption? Evidence from a $25 Million Experiment

Jonathan A. Parker

This paper evaluates theoretical explanations for the propensity of households to increase spending in response to the arrival of predictable, lump-sum payments, using households in the Nielsen Consumer Panel who received $25 million in randomly distributed stimulus payments. The pattern of spending is inconsistent with models in which identical households cycle rapidly through high and low-response states as they manage liquidity, but is instead highly predictable by income years before the payment. Spending responses are...
2020-01-27 00:02:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28142961859703064, "perplexity": 8086.245572031131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694071.63/warc/CC-MAIN-20200126230255-20200127020255-00397.warc.gz"}
http://library.cirm-math.fr/ListRecord.htm?List=folder&folder=131
# Algebraic and Complex Geometry | records found: 133

## Post-edited  Braids and Galois groups Matzat, B. Heinrich (Auteur de la Conférence) | CIRM (Editeur ) arithmetic fundamental group - Galois theory - braid groups - rigid analytic geometry - rigidity of finite groups

## Post-edited  Unirational varieties - Part 1 Mella, Massimiliano (Auteur de la Conférence) | CIRM (Editeur ) The aim of these talks is to give an overview of unirationality problems. I will discuss the behaviour of unirationality in families and its relation with rational connectedness. Then I will concentrate on hypersurfaces and conic bundles. These special classes of varieties are a good place where to test different techniques and try to approach the unirationality problem via rational connectedness.

## Post-edited  The Weil algebra of a Hopf algebra Dubois-Violette, Michel (Auteur de la Conférence) | CIRM (Editeur ) We give a summary of a joint work with Giovanni Landi (Trieste University) on a non commutative generalization of Henri Cartan's theory of operations, algebraic connections and Weil algebra.

## Post-edited  Commutative algebra for Artin approximation - Part 1 Hauser, Herwig (Auteur de la Conférence) | CIRM (Editeur ) In this series of four lectures we develop the necessary background from commutative algebra to study solution sets of algebraic equations in power series rings. A good comprehension of the geometry of such sets should then yield in particular a "geometric" proof of the Artin approximation theorem. In the first lecture, we review various power series rings (formal, convergent, algebraic), their topology ($m$-adic, resp. inductive limit of Banach spaces), and give a conceptual proof of the Weierstrass division theorem. Lecture two covers smooth, unramified and étale morphisms between noetherian rings. The relation of these notions with the concepts of submersion, immersion and diffeomorphism from differential geometry is given. In the third lecture, we investigate ring extensions between the three power series rings and describe the respective flatness properties. This allows us to prove approximation in the linear case. The last lecture is devoted to the geometry of solution sets in power series spaces. We construct in the case of one $x$-variable an isomorphism of an $m$-adic neighborhood of a solution with the cartesian product of a (singular) scheme of finite type with an (infinite dimensional) smooth space, thus extending the factorization theorem of Grinberg-Kazhdan-Drinfeld. 13J05

## Post-edited  Zeta functions and monodromy Veys, Wim (Auteur de la Conférence) | CIRM (Editeur ) The $p$-adic Igusa zeta function, topological and motivic zeta function are (related) invariants of a polynomial $f$, reflecting the singularities of the hypersurface $f = 0$. The first one has a number theoretical flavor and is related to counting numbers of solutions of $f = 0$ over finite rings; the other two are more geometric in nature.
The monodromy conjecture relates in a mysterious way these invariants to another singularity invariant of $f$, its local monodromy. We will discuss in this survey talk rationality issues for these zeta functions and the origins of the conjecture. The $p$-adic Igusa zeta function, topological and motivic zeta function are (related) invariants of a polynomial $f$, reflecting the singularities of the hypersurface $f = 0$. The first one has a number theoretical flavor and is related to counting numbers of solutions of $f = 0$ over finite rings; the other two are more geometric in nature. The monodromy conjecture relates in a mysterious way these invariants to another singularity invariant of ... ## Post-edited  Arc spaces and singularities in the minimal model program - Lecture 1 de Fernex, Tommaso (Auteur de la Conférence) | CIRM (Editeur ) The space of formal arcs of an algebraic variety carries part of the information encoded in a resolution of singularities. This series of lectures addresses this fact from two perspectives. In the first two lectures, we focus on the topology of the space of arcs, proving Kolchin's irreducibility theorem and discussing the Nash problem on families of arcs through the singularities of the variety; recent results on this problem are proved in the second lecture. The last two lectures are devoted to some applications of arc spaces toward a conjecture on minimal log discrepancies known as inversion of adjunction. Minimal log discrepancies are invariants of singularities appearing in the minimal model program, a quick overview of which is given in the third lecture. The space of formal arcs of an algebraic variety carries part of the information encoded in a resolution of singularities. This series of lectures addresses this fact from two perspectives. In the first two lectures, we focus on the topology of the space of arcs, proving Kolchin's irreducibility theorem and discussing the Nash problem on families of arcs through the singularities of the variety; recent results on this problem are proved in the ... ## Post-edited  Caustics of world sheets in Lorentz-Minkowski $3$-space Izumiya, Shyuichi (Auteur de la Conférence) | CIRM (Editeur ) Caustics appear in several areas in Physics (i.e., geometrical optics [10], the theory of underwater acoustic [2] and the theory of gravitational lensings [11], and so on) and Mathematics (i.e., classical differential geometry [12, 13] and the theory of differential equations [6, 7, 15], and so on). Originally the notion of caustics belongs to geometrical optics, which has strongly stimulated the study of singularities [14]. Their singularities are now understood as a special class of singularities, so called Lagrangian singularities [1, 16]. In this talk we start to describe the classical notion of evolutes (i.e., focal sets) in Euclidean plane (or, space) as caustics for understanding what are the caustics. The evolute is defined to be the envelope of the family of normal lines to a curve (or, a surface). The basic idea is that we may regard the normal line as a ray emanate from the curve (or, the surface), so that the evolute can be considered as a caustic in geometrical optics. Then we consider surfaces in Lorentz-Minkowski $3$-space and explain the direct analogy of the evolute (the Lorentzian evolute) of a timelike surface, whose singularities are the same as those of the evolute of a surface in Euclidean space generically. 
This case the normal lines of a timelike surface are spacelike, so these are not corresponding to rays in the physical sense. Therefore, the Lorentz evolute is not a caustic in the sense of geometric optics. In Lorentz-Minkowski $3$-space, the ray emanate from a spacelike curve is a normal line of the curve whose directer vector is lightlike, so the family of rays forms a lightlike surface (i.e., a light sheet). The set of critical values of the light sheet is called a lightlike focal curve along a spacelike curve. Actually, the notion of light sheets is important in Physics which provides models of several kinds of horizons in space-times [5]. On the other hand, a world sheet in a Lorentz-Minkowski $3$-space is a timelike surface consisting of a one-parameter family of spacelike curves. Each spacelike curve is called a momentary curve. We consider the family of lightlike surfaces along momentary curves in the world sheet. The locus of the singularities (the lightlike focal curves) of lightlike surfaces along momentary curves form a caustic. This construction is originally from the theoretical physics (the string theory, the brane world scenario, the cosmology, and so on) [3, 4]. Moreover, we have no notion of the time constant in the relativity theory. Hence everything that is moving depends on the time. Therefore, we consider world sheets in the relativity theory. In order to understand the situation easily, we only consider 2-dimensional world sheets in Lorentz-Minkowski $3$-space. We remark that we have results for higher dimensional cases and for other Lorentz space-forms similar to this special case [8, 9]. Caustics appear in several areas in Physics (i.e., geometrical optics [10], the theory of underwater acoustic [2] and the theory of gravitational lensings [11], and so on) and Mathematics (i.e., classical differential geometry [12, 13] and the theory of differential equations [6, 7, 15], and so on). Originally the notion of caustics belongs to geometrical optics, which has strongly stimulated the study of singularities [14]. Their singularities ... ## Post-edited  Indices of vector fields on singular varieties and the Milnor number Seade, José (Auteur de la Conférence) | CIRM (Editeur ) Let $(V,p)$ be a complex isolated complete intersection singularity germ (an ICIS). It is well-known that its Milnor number $\mu$ can be expressed as the difference: $$\mu = (-1)^n ({\rm Ind}_{GSV}(v;V) - {\rm Ind}_{rad}(v;V)) \;,$$ where $v$ is a continuous vector field on $V$ with an isolated singularity at $p$, the first of these indices is the GSV index and the latter is the Schwartz (or radial) index. This is independent of the choice of $v$. In this talk we will review how this formula extends to compact varieties with non-isolated singularities. This depends on two different ways of extending the notion of Chern classes to singular varieties. On elf these are the Fulton-Johnson classes, whose 0-degree term coincides with the total GSV-Index, while the others are the Schwartz-McPherson classes, whose 0-degree term is the total radial index, and it coincides with the Euler characteristic. This yields to the well known notion of Milnor classes, which extend the Milnor number. We will discuss some geometric facts about the Milnor classes. Let $(V,p)$ be a complex isolated complete intersection singularity germ (an ICIS). 
It is well-known that its Milnor number $\mu$ can be expressed as the difference: $$\mu = (-1)^n ({\rm Ind}_{GSV}(v;V) - {\rm Ind}_{rad}(v;V)) \;,$$ where $v$ is a continuous vector field on $V$ with an isolated singularity at $p$, the first of these indices is the GSV index and the latter is the Schwartz (or radial) index. This is independent of the choice ... ## Post-edited  Invariants of determinantal varieties Ruas, Maria Aparecida Soares (Auteur de la Conférence) | CIRM (Editeur ) We review basic results on determinantal varieties and show how to apply methods of singularity theory of matrices to study their invariants and geometry. The Nash transformation and the Euler obstruction of Essentially Isolated Determinantal Singularities (EIDS) are discussed. To illustrate the results we compute the Euler obstruction of corank one EIDS with non isolated singularities. ## Post-edited  Geometric Langlands correspondence and topological field theory - Part 1 Ben-Zvi, David (Auteur de la Conférence) | CIRM (Editeur ) Kapustin and Witten introduced a powerful perspective on the geometric Langlands correspondence as an aspect of electric-magnetic duality in four dimensional gauge theory. While the familiar (de Rham) correspondence is best seen as a statement in conformal field theory, much of the structure can be seen in the simpler (Betti) setting of topological field theory using Lurie's proof of the Cobordism Hypothesis. In these lectures I will explain this perspective and illustrate its applications to representation theory following joint work with Nadler as well as Brochier, Gunningham, Jordan and Preygel. Kapustin and Witten introduced a powerful perspective on the geometric Langlands correspondence as an aspect of electric-magnetic duality in four dimensional gauge theory. While the familiar (de Rham) correspondence is best seen as a statement in conformal field theory, much of the structure can be seen in the simpler (Betti) setting of topological field theory using Lurie's proof of the Cobordism Hypothesis. In these lectures I will explain ... ## Post-edited  Algebraic cycles on varieties over finite fields Pirutka, Alena (Auteur de la Conférence) | CIRM (Editeur ) Let $X$ be a projective variety over a field $k$. Chow groups are defined as the quotient of a free group generated by irreducible subvarieties (of fixed dimension) by some equivalence relation (called rational equivalence). These groups carry many information on $X$ but are in general very difficult to study. On the other hand, one can associate to $X$ several cohomology groups which are "linear" objects and hence are rather simple to understand. One then construct maps called "cycle class maps" from Chow groups to several cohomological theories. In this talk, we focus on the case of a variety $X$ over a finite field. In this case, Tate conjecture claims the surjectivity of the cycle class map with rational coefficients; this conjecture is still widely open. In case of integral coefficients, we speak about the integral version of the conjecture and we know several counterexamples for the surjectivity. In this talk, we present a survey of some well-known results on this subject and discuss other properties of algebraic cycles which are either proved or expected to be true. We also discuss several involved methods. Let $X$ be a projective variety over a field $k$. 
Chow groups are defined as the quotient of a free group generated by irreducible subvarieties (of fixed dimension) by some equivalence relation (called rational equivalence). These groups carry many information on $X$ but are in general very difficult to study. On the other hand, one can associate to $X$ several cohomology groups which are "linear" objects and hence are rather simple to ...

## Post-edited  Local Weyl equivalence of Fuchsian equations Yakovenko, Sergei (Auteur de la Conférence) | CIRM (Editeur ) Classifying regular systems of first order linear ordinary equations is a classical subject going back to Poincare and Dulac. There is a gauge group whose action can be described and an integrable normal form produced. A similar problem for higher order differential equations was never addressed, perhaps because the corresponding equivalence relationship is not induced by any group action. Still one can develop a reasonable classification theory, largely parallel to the classical theory. This is a joint work with Shira Tanny from the Weizmann Institute, see http://arxiv.org/abs/1412.7830.

## Post-edited  The category MF in the semistable case Faltings, Gerd (Auteur de la Conférence) | CIRM (Editeur ) For smooth schemes the category $MF$ (defined by Fontaine for DVR's) realises the "mysterious functor", and provides natural systems of coefficients for crystalline cohomology. We generalise it to schemes with semistable singularities. The new technical features consist mainly of different methods in commutative algebra. 14F30

## Post-edited  Coupled rotations and snow falling on cedars McMullen, Curtis T. (Auteur de la Conférence) | CIRM (Editeur ) We study cascades of bifurcations in a simple family of maps on the circle, and connect this behavior to the geometry of an absolute period leaf in genus $2$. The presentation includes pictures of an exotic foliation of the upper half plane, computed with the aid of the Möller-Zagier formula.

## Post-edited  $H^{3}$ non ramifié et cycles de codimension 2 Colliot-Thélène, Jean-Louis (Auteur de la Conférence) | CIRM (Editeur ) The third unramified cohomology group of a smooth variety, with coefficients in the roots of unity twisted twice, appears in several recent articles, in particular in connection with the codimension 2 Chow group. We will give an overview: homogeneous spaces of linear algebraic groups; rationally connected varieties over the complex numbers; images of cycle class maps over the complex numbers, over a finite field, over a number field.
## Post-edited  Whitney problems and real algebraic geometry Fefferman, Charles (Auteur de la Conférence) | CIRM (Editeur ) This talk sketches connections between Whitney problems and e.g. the problem of deciding whether a given rational function on $\mathbb{R}^n$ belongs to $C^m$. ## Post-edited  On the remodeling conjecture for toric Calabi-Yau 3-orbifolds Liu, Chiu-Chu Melissa (Auteur de la Conférence) | CIRM (Editeur ) The remodeling conjecture proposed by Bouchard-Klemm-Marino-Pasquetti relates Gromov-Witten invariants of a semi-projective toric Calabi-Yau 3-orbifold to Eynard-Orantin invariants of the mirror curve of the toric Calabi-Yau 3-fold. It can be viewed as a version of all genus open-closed mirror symmetry. In this talk, I will describe results on this conjecture based on joint work with Bohan Fang and Zhengyu Zong. ## Post-edited  Stability and applications to birational and hyperkaehler geometry - Lecture 1 Bayer, Arend (Auteur de la Conférence) | CIRM (Editeur ) This lecture series will be an introduction to stability conditions on derived categories, wall-crossing, and its applications to birational geometry of moduli spaces of sheaves. I will assume a passing familiarity with derived categories. - Introduction to stability conditions. I will start with a gentle review of aspects of derived categories. Then an informal introduction to Bridgeland's notion of stability conditions on derived categories [2, 5, 6]. I will then proceed to explain the concept of wall-crossing, both in theory, and in examples [1, 2, 4, 6]. - Wall-crossing and birational geometry. Every moduli space of Bridgeland-stable objects comes equipped with a canonically defined nef line bundle. This systematically explains the connection between wall-crossing and birational geometry of moduli spaces. I will explain and illustrate the underlying construction [7]. - Applications : Moduli spaces of sheaves on $K3$ surfaces. I will explain how one can use the theory explained in the previous talk in order to systematically study the birational geometry of moduli spaces of sheaves, focussing on $K3$ surfaces [1, 8]. This lecture series will be an introduction to stability conditions on derived categories, wall-crossing, and its applications to birational geometry of moduli spaces of sheaves. I will assume a passing familiarity with derived categories. - Introduction to stability conditions. I will start with a gentle review of aspects of derived categories. Then an informal introduction to Bridgeland's notion of stability conditions on derived categories ... ## Post-edited  Rank 3 rigid representations of projective fundamental groups Simpson, Carlos (Auteur de la Conférence) | CIRM (Editeur ) This is joint with Adrian Langer. Let $X$ be a smooth complex projective variety. We show that every rigid integral irreducible representation $\pi_1(X,x) \to SL(3,\mathbb{C})$ is of geometric origin, i.e. it comes from a family of smooth projective varieties. The underlying theorem is a classification of VHS of type $(1,1,1)$ using some ideas from birational geometry. ## Post-edited  The non-archimedean SYZ fibration and Igusa zeta functions - Part 1 Nicaise, Johannes (Auteur de la Conférence) | CIRM (Editeur ) The SYZ fibration is a conjectural geometric explanation for the phenomenon of mirror symmetry for maximal degenerations of complex Calabi-Yau varieties. 
I will explain Kontsevich and Soibelman's construction of the SYZ fibration in the world of non-archimedean geometry, and its relations with the Minimal Model Program and Igusa's p-adic zeta functions. No prior knowledge of non-archimedean geometry is assumed. These lectures are based on joint work with Mircea Mustata and Chenyang Xu.
2017-07-28 14:53:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7224985361099243, "perplexity": 844.2265512371337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500550969387.94/warc/CC-MAIN-20170728143709-20170728163709-00689.warc.gz"}
https://stats.stackexchange.com/questions/69144/calculating-prediction-interval
Calculating Prediction Interval I have the following data located here. I am attempting to calculate the 95% confidence interval on the mean purity when the hydrocarbon percentage is 1.0. In R, I enter the following. > predict(purity.lm, newdata=list(hydro=1.0), interval="confidence", level=.95) fit lwr upr 1 89.66431 87.51017 91.81845 However, how can I derive this result myself? I attempted to use the following equation. $$s_{new}=\sqrt{s^2\left(1+\frac{1}{N}+\frac{(x_{new}-\bar x)^2}{\sum(x_i-\bar x)^2}\right)}$$ And I enter the following in R. > SSE_line = sum((purity - (77.863 + 11.801*hydro))^2) > MSE = SSE_line/18 > t.quantiles <- qt(c(.025, .975), 18) > prediction = B0 + B1*1 > SE_predict = sqrt(MSE)*sqrt(1+1/20+(mean(hydro)-1)^2/sum((hydro - mean(hydro))^2)) > prediction + SE_predict*t.quantiles [1] 81.80716 97.52146 My results are different from R's predict function. What am I misunderstanding about prediction intervals? • How are you calculating the MSE in your code? – user25658 Sep 4 '13 at 5:25 • I added the calculation to the post. – idealistikz Sep 4 '13 at 5:29 • as MMJ suggested you should try predict(purity.lm, newdata=list(hydro=1.0), interval="prediction", level=.95) – vinux Sep 4 '13 at 9:30 Confidence interval $$s_{new}=\sqrt{s^2\left(\frac{1}{N}+\frac{(x_{new}-\bar x)^2}{\sum(x_i-\bar x)^2}\right)}$$ Prediction interval $$s_{new}=\sqrt{s^2\left(1+\frac{1}{N}+\frac{(x_{new}-\bar x)^2}{\sum(x_i-\bar x)^2}\right)}$$
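The gap between the two results above is exactly the extra 1 under the square root: predict(..., interval = "confidence") uses the confidence-interval formula, while the hand calculation in the question uses the prediction-interval formula. A minimal sketch of the hand calculation, assuming the same purity.lm fit (purity regressed on hydro, n = 20 observations) as in the question; computed this way, the first interval should reproduce predict()'s confidence-interval output:

n   <- length(hydro)
mse <- sum(resid(purity.lm)^2) / (n - 2)   # avoids re-typing rounded coefficients
fit <- sum(coef(purity.lm) * c(1, 1.0))    # fitted mean purity at hydro = 1.0

# Confidence interval for the mean response (no leading 1 inside the root)
se_fit <- sqrt(mse * (1/n + (1 - mean(hydro))^2 / sum((hydro - mean(hydro))^2)))
fit + qt(c(.025, .975), df = n - 2) * se_fit

# Prediction interval for a new observation (add the 1), matching interval = "prediction"
se_new <- sqrt(mse * (1 + 1/n + (1 - mean(hydro))^2 / sum((hydro - mean(hydro))^2)))
fit + qt(c(.025, .975), df = n - 2) * se_new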
2019-10-19 14:52:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8596435189247131, "perplexity": 7512.575012835156}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986696339.42/warc/CC-MAIN-20191019141654-20191019165154-00438.warc.gz"}
https://www.albert.io/ie/act-science/correct-plot-of-data
# Correct Plot of Data (ACTSCI-JKTOTR)

The length of the day changes from place to place and from season to season. The table below shows the times of sunrise and sunset on a certain day in August in certain cities of the world. Latitude is the angular distance of a place north or south of the earth's equator, usually expressed in degrees and minutes. The table also shows the latitudinal positions of these cities.

Table 2.1

City | Sunrise time (in hours) | Sunset time (in hours) | Latitude
--- | --- | --- | ---
Tokyo | 4:50 | 18:45 | 35° 41' N
New Delhi | 5:53 | 18:55 | 28° 36' N
Johannesburg | 6:45 | 17:40 | 26° 12' S
London | 5:30 | 20:45 | 51° 30' N
New York | 5:55 | 19:59 | 40° 42' N
Los Angeles | 6:05 | 19:50 | 34° 3' N
Adelaide | 6:45 | 17:55 | 34° 56' S

Figure 2.1 (the four candidate plots A-D are not reproduced here)

Which of these is the correct plot based on the data given in Table 2.1?

A B C D
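The reasoning behind the question is arithmetic on Table 2.1: the day length in each city is sunset minus sunrise, and the correct plot is the one consistent with those values. A minimal sketch of that calculation (the hh:mm strings are copied from the table; everything else is illustrative):

# Day length (hours of daylight) for each city in Table 2.1
sunrise <- c(Tokyo = "4:50", "New Delhi" = "5:53", Johannesburg = "6:45",
             London = "5:30", "New York" = "5:55", "Los Angeles" = "6:05",
             Adelaide = "6:45")
sunset  <- c("18:45", "18:55", "17:40", "20:45", "19:59", "19:50", "17:55")

to_hours <- function(t) {            # convert an "hh:mm" string to decimal hours
  hm <- as.numeric(strsplit(t, ":")[[1]])
  hm[1] + hm[2] / 60
}

day_length <- sapply(sunset, to_hours) - sapply(sunrise, to_hours)
names(day_length) <- names(sunrise)
round(day_length, 2)                 # hours of daylight per city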
2016-12-10 01:19:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20936638116836548, "perplexity": 6946.749143725642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542932.99/warc/CC-MAIN-20161202170902-00286-ip-10-31-129-80.ec2.internal.warc.gz"}
http://maps.thefullwiki.org/Multiplication
# Multiplication

3 × 4 = 12, so twelve dots can be arranged in three rows of four (or four columns of three).

Multiplication is the mathematical operation of scaling one number by another. It is one of the four basic operations in elementary arithmetic (the others being addition, subtraction and division). Multiplication is defined for whole numbers in terms of repeated addition; for example, 3 multiplied by 4 (often said as "3 times 4") can be calculated by adding 4 copies of 3 together:

$3 \times 4 = 3 + 3 + 3 + 3 = 12.$

Multiplication of rational numbers (fractions) and real numbers is defined by systematic generalization of this basic idea. Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have given lengths (for numbers generally). The inverse of multiplication is division: as 3 times 4 is equal to 12, so 12 divided by 3 is equal to 4. Multiplication is generalized further to other types of numbers (such as complex numbers) and to more abstract constructs such as matrices.

## Notation and terminology

The multiplication sign.

Multiplication is often written using the multiplication sign "×" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example,

$2\times 3 = 6$ (verbally, "two times three equals six")
$3\times 4 = 12$
$2\times 3\times 5 = 6\times 5 = 30$
$2\times 2\times 2\times 2\times 2 = 32$

There are several other common notations for multiplication:

• Multiplication is sometimes denoted by either a middle dot or a period. The middle dot is standard in the United States, the United Kingdom, and other countries where the period is used as a decimal point. In other countries that use a comma as a decimal point, either the period or a middle dot is used for multiplication.
• The asterisk (as in 5*2) is often used in programming languages because it appears on every keyboard and is easier to see on older monitors. This usage originated in the FORTRAN programming language.
• In algebra, multiplication involving variables is often written as a juxtaposition (e.g. xy for x times y or 5x for five times x). This notation can also be used for quantities that are surrounded by parentheses (e.g. 5(2) or (5)(2) for five times two).
• In matrix multiplication, there is actually a distinction between the cross and the dot symbols. The cross symbol generally denotes a vector multiplication, while the dot denotes a scalar multiplication. A like convention distinguishes between the cross product and the dot product of two vectors.

The numbers to be multiplied are generally called the "factors" or "multiplicands". When thinking of multiplication as repeated addition, the number to be multiplied is called the "multiplicand", while the number of multiples is called the "multiplier". In algebra, a number that is the multiplier of a variable or expression (e.g. the 3 in $3xy^2$) is called a coefficient.

The result of a multiplication is called a product, and is a multiple of each factor that is an integer. For example 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5.

## Computation

The common methods for multiplying numbers using pencil and paper require a multiplication table of memorized or consulted products of small numbers (typically any two numbers from 0 to 9); however, one method, the peasant multiplication algorithm, does not.
Multiplying numbers to more than a couple of decimal places by hand is tedious and error prone. Common logarithms were invented to simplify such calculations. The slide rule allowed numbers to be quickly multiplied to about three places of accuracy. Beginning in the early twentieth century, mechanical calculators, such as the Marchant, automated multiplication of up to 10 digit numbers. Modern electronic computers and calculators have greatly reduced the need for multiplication by hand. ### Historical algorithms Methods of multiplication were documented in the Egyptian, Greek, Babylonian, Indus valley, and Chinese civilizations. The Ishango bone, dated to about 18,000 to 20,000 BC, hints at a knowledge of multiplication in the Upper Paleolithic era in Central Africa. #### Egyptians The Egyptian method of multiplication of integers and fractions, documented in the Ahmes Papyrus, was by successive additions and doubling. For instance, to find the product of 13 and 21 one had to double 21 three times, obtaining 1 × 21 = 21, 2 × 21 = 42, 4 × 21 = 84, 8 × 21 = 168. The full product could then be found by adding the appropriate terms found in the doubling sequence: 13 × 21 = (1 + 4 + 8) × 21 = (1 × 21) + (4 × 21) + (8 × 21) = 21 + 84 + 168 = 273. #### Babylonians The Babylonians used a sexagesimal positional number system, analogous to the modern day decimal system. Thus, Babylonian multiplication was very similar to modern decimal multiplication. Because of the relative difficulty of remembering 60 × 60 different products, Babylonian mathematicians employed multiplication tables. These tables consisted of a list of the first twenty multiples of a certain principal number n: n, 2n, ..., 20n; followed by the multiples of 10n: 30n 40n, and 50n. Then to compute any sexagesimal product, say 53n, one only needed to add 50n and 3n computed from the table. #### Chinese In the mathematical text Zhou Bi Suan Jing, dated prior to 300 B.C., and the Nine Chapters on the Mathematical Art, multiplication calculations were written out in words, although the early Chinese mathematicians employed an abacus in hand calculations involving addition and multiplication. #### Indus Valley The early Indian mathematicians of the Indus Valley Civilization used a variety of intuitive tricks to perform multiplication. Most calculations were performed on small slate hand tablets, using chalk tables. One technique was that of lattice multiplication (or gelosia multiplication). Here a table was drawn up with the rows and columns labelled by the multiplicands. Each box of the table was divided diagonally into two, as a triangular lattice. The entries of the table held the partial products, written as decimal numbers. The product could then be formed by summing down the diagonals of the lattice. #### Modern method The modern method of multiplication based on the Hindu-Arabic numeral system was first described by Brahmagupta. Brahmagupta gave rules for addition, subtraction, multiplication and division. Henry Burchard Fine, then professor of Mathematics at Princeton University, wrote the following: The Indians are the inventors not only of the positional decimal system itself, but of most of the processes involved in elementary reckoning with the system. Addition and subtraction they performed quite as they are performed nowadays; multiplication they effected in many ways, ours among them, but division they did cumbrously. 
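The Egyptian doubling method above is easy to express as a short program, and it is essentially the same idea as the peasant multiplication algorithm mentioned in the Computation section: repeatedly halve one factor while doubling the other, and add up the doublings that correspond to the binary digits of the first factor. A small sketch (the function name is just illustrative):

# Egyptian / peasant multiplication: repeatedly halve a and double b,
# summing the doublings that correspond to a 1 bit of a.
egyptian_multiply <- function(a, b) {
  total <- 0
  doubling <- b
  while (a > 0) {
    if (a %% 2 == 1) total <- total + doubling  # this doubling is part of the sum
    a <- a %/% 2                                # halve a, discarding the remainder
    doubling <- 2 * doubling                    # double b
  }
  total
}

egyptian_multiply(13, 21)   # 273, i.e. 21 + 84 + 168 as in the example above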
## Products of sequences ### Capital pi notation The product of a sequence of terms can be written with the product symbol, which derives from the capital letter Π in the Greek alphabet. Unicode position U+220F (∏) contains a glyph for denoting such a product, distinct from U+03A0 (Π), the letter.The meaning of this notation is given by: \prod_{i=m}^{n} x_{i} = x_{m} \cdot x_{m+1} \cdot x_{m+2} \cdot \,\,\cdots\,\, \cdot x_{n-1} \cdot x_{n}. The subscript gives the symbol for a dummy variable (i in this case), called the "index of multiplication" together with its lower bound (m), whereas the superscript (here n) gives its upper bound. The lower and upper bound are expressions denoting integers. The factors of the product are obtained by taking the expression following the product operator, with successive integer values substituted for the index of multiplication, starting from the lower bound and incremented by 1 up to and including the upper bound. So, for example: \prod_{i=2}^{6} \left(1 + {1\over i}\right) = \left(1 + {1\over 2}\right) \cdot \left(1 + {1\over 3}\right) \cdot \left(1 + {1\over 4}\right) \cdot \left(1 + {1\over 5}\right) \cdot \left(1 + {1\over 6}\right) = {7\over 2}. In case m = n, the value of the product is the same as that of the single factor xm. If m > n, the product is the empty product, with the value 1. ### Infinite products One may also consider products of infinitely many terms; these are called infinite products. Notationally, we would replace n above by the lemniscate . In the reals, the product of such a series is defined as the limit of the product of the first n terms, as n grows without bound. That is, by definition, \prod_{i=m}^{\infty} x_{i} = \lim_{n\to\infty} \prod_{i=m}^{n} x_{i}. One can similarly replace m with negative infinity, and define: \prod_{i=-\infty}^\infty x_i = \left(\lim_{m\to-\infty}\prod_{i=m}^0 x_i\right) \cdot \left(\lim_{n\to\infty}\prod_{i=1}^n x_i\right), provided both limits exist. ## Interpretation ### Cartesian product The definition of multiplication as repeated addition provides a way to arrive at a set-theoretic interpretation of multiplication of cardinal numbers. In the expression \displaystyle n \cdot a = \underbrace{a + \cdots + a}_{n}, if the n copies of a are to be combined in disjoint union then clearly they must be made disjoint; an obvious way to do this is to use either a or n as the indexing set for the other. Then, the members of n \cdot a\, are exactly those of the Cartesian product n \times a\,. The properties of the multiplicative operation as applying to natural numbers then follow trivially from the corresponding properties of the Cartesian product. ## Properties For integers, fractions, and real and complex numbers, multiplication has certain properties: Commutative property The order in which two numbers are multiplied does not matter: :x\cdot y = y\cdot x. Associative property Expressions solely involving multiplication are invariant with respect to order of operations: :(x\cdot y)\cdot z = x\cdot(y\cdot z) Distributive property Holds with respect to addition over multiplication. This identity is of prime importance in simplifying algebraic expressions: :x\cdot(y + z) = x\cdot y + x\cdot z Identity element The multiplicative identity is 1; anything multiplied by one is itself. This is known as the identity property: :x\cdot 1 = x Zero element Anything multiplied by zero is zero. 
This is known as the zero property of multiplication: :x\cdot 0 = 0 Inverse property Every number x, except zero, has a multiplicative inverse, \frac{1}{x}, such that x\cdot\left(\frac{1}{x}\right) = 1. Order preservation Multiplication by a positive number preserves order: if a > 0, then if b > c then ab > ac. Multiplication by a negative number reverses order: if a < 0 and b > c then ab < ac.
• Negative one times any number is equal to the opposite of that number. :(-1)\cdot x = (-x)
• Negative one times negative one is positive one. :(-1)\cdot (-1) = 1
Other mathematical systems that include a multiplication operation may not have all these properties. For example, multiplication is not, in general, commutative for matrices and quaternions.
### Proofs
Not all of these properties are independent; some are a consequence of the others. A property that can be proven from the others is the zero property of multiplication. It is proven by means of the distributive property. We assume all the usual properties of addition and subtraction, and −x means the same as 0 − x.
\begin{align}& {} \qquad x\cdot 0 \\& {} = (x\cdot 0) + x - x \\& {} = (x\cdot 0) + (x\cdot 1) - x \\& {} = x\cdot (0 + 1) - x \\& {} = (x\cdot 1) - x \\& {} = x - x \\& {}= 0\end{align}
So we have proven: x\cdot 0 = 0
The identity (−1) · x = −x can also be proven using the distributive property:
\begin{align}& {} \qquad(-1)\cdot x \\& {} = (-1)\cdot x + x - x \\& {} = (-1)\cdot x + 1\cdot x - x \\& {} = (-1 + 1)\cdot x - x \\& {} = 0\cdot x - x \\& {} = 0 - x \\& {} = -x\end{align}
The proof that (−1) · (−1) = 1 is now easy:
\begin{align}& {} \qquad (-1)\cdot (-1) \\& {} = -(-1) \\& {} = 1\end{align}
## Multiplication with Peano's axioms
In the book Arithmetices principia, nova methodo exposita, Giuseppe Peano proposed a new definition for multiplication based on his axioms for natural numbers. a\times 1=a a\times b'=(a\times b)+a Here, b′ represents the successor of b, or the natural number which follows b. With his other nine axioms, it is possible to prove common rules of multiplication, such as the distributive or associative properties.
## Multiplication with set theory
It is possible, though difficult, to create a recursive definition of multiplication with set theory. Such a system usually relies on the Peano definition of multiplication.
## Multiplication in group theory
There are many sets that, under the operation of multiplication, satisfy the axioms that define group structure. These axioms are closure, associativity, and the inclusion of an identity element and inverses. A simple example is the set of non-zero rational numbers. Here we have identity 1, as opposed to groups under addition where the identity is typically 0. Note that with the rationals, we must exclude zero because, under multiplication, it does not have an inverse: there is no rational number that can be multiplied by zero to result in 1. In this example we have an abelian group, but that is not always the case. To see this, look at the set of invertible square matrices of a given dimension, over a given field. Now it is straightforward to verify closure, associativity, and inclusion of identity (the identity matrix) and inverses. However, matrix multiplication is not commutative, therefore this group is nonabelian. Another fact of note is that the integers under multiplication do not form a group, even if we exclude zero. This is easily seen by the nonexistence of an inverse for all elements other than 1 and -1.
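The group-theoretic observations above are easy to check numerically. The following Python sketch (examples chosen for illustration, not taken from the article) confirms that a non-zero rational has a multiplicative inverse while two invertible matrices need not commute:

```python
from fractions import Fraction
import numpy as np

# Non-zero rationals: 1 is the identity and every element has an inverse.
q = Fraction(-7, 3)
assert q * (1 / q) == 1

# Invertible square matrices form a group under multiplication, but not an abelian one.
A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])
assert not np.array_equal(A @ B, B @ A)   # AB != BA

# The integers (even excluding zero) are not a group: 2 has no integer inverse,
# since no integer n satisfies 2 * n == 1.
```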
Multiplication in group theory is typically notated either by a dot, or by juxtaposition (the omission of an operation symbol between elements). So multiplying element a by element b could be notated a \cdot b or ab. When referring to a group via the indication of the set and operation, the dot is used, e.g. our first example could be indicated by \left( \mathbb{Q}\smallsetminus \{ 0 \} ,\cdot \right) ## Multiplication of different kinds of numbers Numbers can count (3 apples), order (the 3rd apple), or measure (3.5 feet high); as the history of mathematics has progressed from counting on our fingers to modelling quantum mechanics, multiplication has been generalized to more complicated and abstract types of numbers, and to things that are not numbers (such as matrices) or do not look much like numbers (such as quaternions). Integers N\times M is the sum of M copies of N when N and M are positive whole numbers. This gives the number of things in an array N wide and M high. Generalization to negative numbers can be done by N\times (-M) = (-N)\times M = - (N\times M) and (-N)\times (-M) = N\times M. The same sign rules apply to rational and real numbers. Rational numbers Generalization to fractions \frac{A}{B}\times \frac{C}{D} is by multiplying the numerators and denominators respectively: \frac{A}{B}\times \frac{C}{D} = \frac{(A\times C)}{(B\times D)}. This gives the area of a rectangle \frac{A}{B} high and \frac{C}{D} wide, and is the same as the number of things in an array when the rational numbers happen to be whole numbers. Real numbers (x)(y) is the limit of the products of the corresponding terms in certain sequences of rationals that converge to x and y, respectively, and is significant in calculus. This gives the area of a rectangle x high and y wide. See Products of sequences, above. Complex numbers Considering complex numbers z_1 and z_2 as ordered pairs of real numbers (a_1, b_1) and (a_2, b_2), the product z_1\times z_2 is (a_1\times a_2 - b_1\times b_2, a_1\times b_2 + a_2\times b_1). This is the same as for reals, a_1\times a_2, when the imaginary parts b_1 and b_2 are zero. Further generalizations See Multiplication in group theory, above, and Multiplicative Group, which for example includes matrix multiplication. A very general, and abstract, concept of multiplication is as the "multiplicatively denoted" (second) binary operation in a ring. An example of a ring which is not any of the above number systems is a polynomial ring (you can add and multiply polynomials, but polynomials are not numbers in any usual sense.) Division Often division, \frac{x}{y}, is the same as multiplication by an inverse, x\left(\frac{1}{y}\right). Multiplication for some types of "numbers" may have corresponding division, without inverses; in an integral domain x may have no inverse "\frac{1}{x}" but \frac{x}{y} may be defined. In a division ring there are inverses but they are not commutative (since \left(\frac{1}{x}\right)\left(\frac{1}{y}\right) is not the same as \left(\frac{1}{y}\right)\left(\frac{1}{x}\right), \frac{x}{y} may be ambiguous). ## Exponentiation When multiplication is repeated, the resulting operation is known as exponentiation. For instance, the product 2×2×2 of three factors of two is "two raised to the third power", and is denoted by 23, a two with a superscript three. In this example, the number two is the base, and three is the exponent. 
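Continuing the 2³ example, exponentiation with a non-negative integer exponent can be computed as repeated multiplication. A tiny Python sketch (the function name `power` is mine):

```python
def power(base, exponent):
    """Exponentiation as repeated multiplication: base * base * ... (exponent times).
    Only non-negative integer exponents are handled in this sketch."""
    result = 1
    for _ in range(exponent):
        result *= base
    return result

assert power(2, 3) == 8      # the 2 x 2 x 2 example: "two raised to the third power"
assert power(5, 0) == 1      # the empty product convention gives 1
```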
In general, the exponent (or superscript) indicates how many times to multiply base by itself, so that the expression a^n = \underbrace{a\times a \times \cdots \times a}_n indicates that the base a to be multiplied by itself n times. ## Notes 1. Henry B. Fine. The Number System of Algebra – Treated Theoretically and Historically, (2nd edition, with corrections, 1907), page 90, http://www.archive.org/download/numbersystemofal00fineuoft/numbersystemofal00fineuoft.pdf 2. PlanetMath: Peano arithmetic
2019-11-18 05:49:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643757939338684, "perplexity": 751.3633432845155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00389.warc.gz"}
http://nrich.maths.org/2684/solution
# Golden Construction

##### Stage: 5 Challenge Level:

Thank you to Shaun from Nottingham High School and Andrei from Tudor Vianu National College, Bucharest, Romania for these solutions.

(1) Drawing the figure, I observe that ratios $AE/AD$ and $BC/BE$ are approximately equal, having a value of 1.6.

(2) From Pythagoras' Theorem I calculate $MC$ (in the right-angled triangle $MBC$): \eqalign{ MC^2 &= BC^2 + MB^2 = 1 + 1/4 \cr MC &= \sqrt5 /2 }. So $AE=(\sqrt 5 + 1)/2$ and $BE=(\sqrt 5 - 1)/2$. The ratios are: $${AE\over AD}= {\sqrt 5 + 1\over 2}$$ and $${BC\over BE}= {1\over (\sqrt 5 - 1)/2} = {\sqrt 5 + 1\over 2}.$$ So, $AE/AD = BC/BE.$

(3) From this equality of ratios, I find out that $$BE = {AD.BC\over AE} = {1\over \phi}$$ But $AE = AB + BE$ so $$\phi = 1 + {1\over \phi}.$$

(4) Substituting $\phi = 1$ the left hand side of this expression is less than the right hand side. If we increase the value given to $\phi$ the left hand side increases and the right hand side decreases continuously. Substituting $\phi = 2$ the left hand side is greater than the right hand side, so the value of $\phi$ which satisfies this equation must lie between $1$ and $2$. The two solutions of the equation can be found at the intersection of the cyan curve ($y=1 + 1/x$) and the magenta curve ($y=x$). Only the positive value is considered and it is approximately 1.618.

(5) Now, I solve the equation. It is equivalent to $\phi^2 - \phi -1 = 0$ so the solutions are $$\phi_1 = {1-\sqrt 5\over 2}$$ and $$\phi_2 = {1+\sqrt 5 \over 2}.$$ Only the second solution is valid because $\phi > 0$.
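The values appearing in the posted solution can be checked numerically. A short Python sketch (taking AD = BC = 1, as the solution does; everything else is standard library and the variable names are mine):

```python
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2          # positive root of x**2 - x - 1 = 0

AE = (sqrt(5) + 1) / 2           # with AD = 1, this is the ratio AE/AD
BE = (sqrt(5) - 1) / 2           # with BC = 1, BC/BE = 1/BE
assert isclose(AE, 1 / BE)       # the two ratios agree
assert isclose(phi, 1 + 1 / phi) # the defining equation phi = 1 + 1/phi

# Fixed-point iteration of x -> 1 + 1/x converges to the same value from x = 1.
x = 1.0
for _ in range(60):
    x = 1 + 1 / x
assert isclose(x, phi)
```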
2015-07-01 20:22:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.981968343257904, "perplexity": 406.9120880815769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095184.64/warc/CC-MAIN-20150627031815-00021-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.itl.nist.gov/div898/handbook/prc/section2/prc231.htm
7. Product and Process Comparisons 7.2. Comparisons based on data from one process 7.2.3. Are the data consistent with a nominal standard deviation? ## Confidence interval approach Confidence intervals for the standard deviation Confidence intervals for the true standard deviation can be constructed using the chi-square distribution. The $$100(1-\alpha)$$ % confidence intervals that correspond to the tests of hypothesis on the previous page are given by 1. Two-sided confidence interval for $$\sigma$$ $$\frac{s\sqrt{N-1}}{\sqrt{ \chi^2_{1-\alpha/2, N-1} }} \le \sigma \le \frac{s\sqrt{N-1}}{\sqrt{ \chi^2_{\alpha/2, N-1} }} \, ,$$ 2. Lower one-sided confidence interval for $$\sigma$$ $$\sigma \ge \frac{s\sqrt{N-1}}{\sqrt{ \chi^2_{1-\alpha, N-1} }} \, ,$$ 3. Upper one-sided confidence interval for $$\sigma$$ $$0 \le \sigma \le \frac{s\sqrt{N-1}}{\sqrt{ \chi^2_{\alpha, N-1} }} \, .$$ For case (1), $$\chi_{\alpha/2}^2$$ is the $$\alpha/2$$ critical value from the chi-square distribution with $$N -1$$ degrees of freedom and similarly for cases (2) and (3). Critical values can be found in the chi-square table in Chapter 1. Choice of risk level $$\alpha$$ can change the conclusion Confidence interval (1) is equivalent to a two-sided test for the standard deviation. That is, if the hypothesized or nominal value, $$\sigma_0$$, is not contained within these limits, then the hypothesis that the standard deviation is equal to the nominal value is rejected. A dilemma of hypothesis testing A change in $$\alpha$$ can lead to a change in the conclusion. This poses a dilemma. What should $$\alpha$$ be? Unfortunately, there is no clear-cut answer that will work in all situations. The usual strategy is to set $$\alpha$$ small so as to guarantee that the null hypothesis is wrongly rejected in only a small number of cases. The risk, $$\beta$$, of failing to reject the null hypothesis when it is false depends on the size of the discrepancy, and also depends on $$\alpha$$. The discussion on the next page shows how to choose the sample size so that this risk is kept small for specific discrepancies.
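Reading the subscripts in the formulas above as lower-tail probabilities (quantiles) of the chi-square distribution, the two-sided interval can be computed directly. A small Python sketch using scipy's `chi2.ppf`; the sample values s = 0.6, N = 25 and the function name are invented for illustration and do not come from the handbook:

```python
from math import sqrt
from scipy.stats import chi2

def two_sided_ci_for_sigma(s, n, alpha=0.05):
    """Two-sided 100(1 - alpha)% confidence interval for sigma, given the
    sample standard deviation s computed from n observations."""
    df = n - 1
    lower = s * sqrt(df) / sqrt(chi2.ppf(1 - alpha / 2, df))
    upper = s * sqrt(df) / sqrt(chi2.ppf(alpha / 2, df))
    return lower, upper

low, high = two_sided_ci_for_sigma(0.6, 25, 0.05)
print(f"sigma is in [{low:.3f}, {high:.3f}] with 95% confidence")
```

The hypothesized value sigma_0 is rejected at level alpha exactly when it falls outside the printed interval, matching the equivalence described above.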
2019-11-20 22:44:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8574397563934326, "perplexity": 199.40719229994065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670635.48/warc/CC-MAIN-20191120213017-20191121001017-00185.warc.gz"}
https://techwhiff.com/learn/fifteen-items-or-less-the-number-of-customers-in/175636
# Fifteen items or less: The number of customers in line at a supermarket express checkout counter...

###### Question:

Fifteen items or less: The number of customers in line at a supermarket express checkout counter is a random variable with the following probability distribution.

x    | 0    | 1    | 2    | 3    | 4    | 5
P(x) | 0.20 | 0.15 | 0.30 | 0.20 | 0.10 | 0.05

Part 1 of 7 (a) Find P(5).
Part 2 of 7 (b) Find P(no less than 4).
Part 3 of 7 (c) Find the probability that no one is in line.
Part 4 of 7 (d) Find the probability that at least three people are in line.
Part 5 of 7 (e) Compute the mean μ. Round the answer to two decimal places.
Part 6 of 7 (f) Compute the standard deviation σ. Round the answer to three decimal places.
Part 7 of 7 (g) If each customer takes 3 minutes to check out, what is the probability that it will take more than 6 minutes for all the customers currently in line to check out?
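Assuming the probability table reconstructs as above, the requested quantities follow directly from the definition of a discrete distribution. A short Python sketch (not an answer key; it simply evaluates the standard formulas, and the variable names are mine):

```python
from math import sqrt

# Reconstructed distribution: number of customers x -> P(x)
pmf = {0: 0.20, 1: 0.15, 2: 0.30, 3: 0.20, 4: 0.10, 5: 0.05}

p_no_less_than_4 = sum(p for x, p in pmf.items() if x >= 4)
p_none = pmf[0]
p_at_least_3 = sum(p for x, p in pmf.items() if x >= 3)

mean = sum(x * p for x, p in pmf.items())
variance = sum((x - mean) ** 2 * p for x, p in pmf.items())
sd = sqrt(variance)

# Part (g): at 3 minutes per customer, "more than 6 minutes" means at least 3 customers.
p_more_than_6_minutes = p_at_least_3

print(pmf[5], p_no_less_than_4, p_none, p_at_least_3, round(mean, 2), round(sd, 3), p_more_than_6_minutes)
```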
2023-03-28 20:36:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2821233868598938, "perplexity": 2032.0995812446417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00658.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-3-derivatives-section-3-3-differentiation-rules-exercises-3-3-page-125/5
## Thomas' Calculus 13th Edition

First Derivative: $y'=4x^{2}-1$ Second Derivative: $y''=8x$

Using the power rule:

First Derivative
$y=\frac{4x^{3}}{3}-x$
$y'=\frac{(4)(3)x^{3-1}}{3}-(1)x^{1-1}$
$y'=4x^{2}-1$

Second Derivative
$y'=4x^{2}-1$
$y''=(4)(2)x^{2-1}-0$
$y''=8x$

(The derivative of the constant $-1$ is $0$ by the constant rule.)
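The same derivatives can be checked symbolically. A quick sketch using sympy (assumed to be available; it is not part of the original solution):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Rational(4, 3) * x**3 - x

y1 = sp.diff(y, x)        # first derivative
y2 = sp.diff(y, x, 2)     # second derivative

assert sp.simplify(y1 - (4 * x**2 - 1)) == 0
assert sp.simplify(y2 - 8 * x) == 0
```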
2019-04-24 14:31:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9803789258003235, "perplexity": 365.1974250651858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578643556.86/warc/CC-MAIN-20190424134457-20190424160457-00174.warc.gz"}
https://blog.csdn.net/wr132/article/details/51548485
## colorfulshark Linux Kernel Developer(WindRiver System)

# CodeForces 148D Bag of mice: probability DP

At each step one of four things can happen:

1. The princess draws a white mouse;
2. The princess draws a black mouse and the dragon draws a white mouse (a loss; the recursion cannot be advanced from this state);
3. The princess draws a black mouse, the dragon draws a black mouse, and a black mouse jumps out of the bag;
4. The princess draws a black mouse, the dragon draws a black mouse, and a white mouse jumps out of the bag.

D. Bag of mice
time limit per test 2 seconds
memory limit per test 256 megabytes
input standard input
output standard output

The dragon and the princess are arguing about what to do on the New Year's Eve. The dragon suggests flying to the mountains to watch fairies dancing in the moonlight, while the princess thinks they should just go to bed early. They are desperate to come to an amicable agreement, so they decide to leave this up to chance. They take turns drawing a mouse from a bag which initially contains w white and b black mice. The person who is the first to draw a white mouse wins. After each mouse drawn by the dragon the rest of mice in the bag panic, and one of them jumps out of the bag itself (the princess draws her mice carefully and doesn't scare other mice). Princess draws first. What is the probability of the princess winning? If there are no more mice in the bag and nobody has drawn a white mouse, the dragon wins. Mice which jump out of the bag themselves are not considered to be drawn (do not define the winner). Once a mouse has left the bag, it never returns to it. Every mouse is drawn from the bag with the same probability as every other one, and every mouse jumps out of the bag with the same probability as every other one.

Input
The only line of input data contains two integers w and b (0 ≤ w, b ≤ 1000).

Output
Output the probability of the princess winning. The answer is considered to be correct if its absolute or relative error does not exceed 10^-9.

Examples
input
1 3
output
0.500000000
input
5 5
output
0.658730159

Note
Let's go through the first sample. The probability of the princess drawing a white mouse on her first turn and winning right away is 1/4. The probability of the dragon drawing a black mouse and not winning on his first turn is 3/4 * 2/3 = 1/2. After this there are two mice left in the bag — one black and one white; one of them jumps out, and the other is drawn by the princess on her second turn. If the princess' mouse is white, she wins (probability is 1/2 * 1/2 = 1/4), otherwise nobody gets the white mouse, so according to the rule the dragon wins.
#include <iostream>
#include <cstdio>
using namespace std;

const int MAX_N = 1000 + 10;
double dp[MAX_N][MAX_N];
int w, b;

int main()
{
    while(scanf("%d%d", &w, &b) != EOF)
    {
        for(int i = 0; i <= b; i++) dp[0][i] = 0.0;
        for(int i = 1; i <= w; i++) dp[i][0] = 1.0;
        for(int i = 1; i <= w; i++)
        {
            for(int j = 1; j <= b; j++)
            {
                dp[i][j] = 0;
                // case 1: the princess draws a white mouse
                dp[i][j] += ((double)(i)) / (i + j);
                // case 2: princess draws black, dragon draws white -- contributes nothing
                dp[i][j] += 0.0;
                // case 3: princess black, dragon black, a black mouse jumps out
                if(j >= 3)
                    dp[i][j] += ((double)(j)) / (i + j) * ((double)(j - 1)) / (i + j - 1) * ((double)(j - 2)) / (i + j - 2) * dp[i][j - 3];
                // case 4: princess black, dragon black, a white mouse jumps out
                if(j >= 2)
                    dp[i][j] += ((double)(j)) / (i + j) * ((double)(j - 1)) / (i + j - 1) * ((double)(i) / (i + j - 2)) * dp[i - 1][j - 2];
            }
        }
        printf("%.9lf\n", dp[w][b]);
    }
    return 0;
}
2018-07-18 11:44:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17600014805793762, "perplexity": 3444.824399032647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590127.2/warc/CC-MAIN-20180718095959-20180718115959-00318.warc.gz"}
http://mathhelpforum.com/algebra/218529-solve-system-equations-using-row-operations-augmented-matrix.html
# Thread: solve system of equations using row operations on augmented matrix

1. ## solve system of equations using row operations on augmented matrix

Hey guys, I really need some help on solving 2 algebra problems. The question states: Solve the system of equations using row operations on the augmented matrix.

Problem 1.
x+y-z=5
x+2y-3z=9
x-y+3z=3

Problem 2.
x-y+z=14
3x+2y+z=19
-2x+y-z=-21

I just want to know the steps on how to solve them one by one so I could work the problems myself, but as of now I am clueless on how to start. If you guys could help me out I would really appreciate it. Thank you in advance.

2. ## Re: solve system of equations using row operations on augmented matrix

[1 1 -1 : 5]
[1 2 -3 : 9]
[1 -1 3 : 3]

The left side of the : holds the coefficients of x, y, z in our system and the numbers on the right side are our values. We need to put this in row echelon form. First replace row 2 by (-row 2 + row 1). Then

[1 1 -1 : 5]
[0 -1 2 : -4]
[1 -1 3 : 3]

Now, replace row 3 by (-row 3 + row 1). Then

[1 1 -1 : 5]
[0 -1 2 : -4]
[0 2 -4 : 2]

Now multiply row 2 by -1. Then

[1 1 -1 : 5]
[0 1 -2 : 4]
[0 2 -4 : 2]

Now, replace row 3 by (-1/2 row 3 + row 2). Then

[1 1 -1 : 5]
[0 1 -2 : 4]
[0 0 0 : 3]

This third row implies that 0x + 0y + 0z = 3 i.e. 0 = 3 which is impossible. No solution!

3. ## Re: solve system of equations using row operations on augmented matrix

Hello, irv1234! Here's the second one.

$\begin{array}{ccc}x-y+z&=&14 \\3x+2y+z&=&19 \\ \text{-}2x+y-z&=&\text{-}21 \end{array}$

$\text{We have: }\:\left|\begin{array}{ccc|c}1&\text{-}1&1&14 \\ 3&2&1&19 \\ \text{-}2&1&\text{-}1 & \text{-}21 \end{array}\right|$

$\begin{array}{c}\text{Switch} \\ R_2\:\&\,R_3 \\ \end{array}\left|\begin{array}{ccc|c}1&\text{-}1&1&14 \\ \text{-}2&1&\text{-}1 & \text{-}21 \\ 3&2&1&19 \end{array}\right|$

$\begin{array}{c} \\ R_2+2R_1 \\ R_3-3R_1 \end{array}\left|\begin{array}{ccc|c}1&\text{-}1&1&14 \\ 0&\text{-}1&1&7 \\ 0&5&\text{-}2&\text{-}23\end{array}\right|$

$\begin{array}{c}R_1-R_2 \\ \\ R_3+5R_2\end{array}\left|\begin{array}{ccc|c}1&0&0 &7 \\ 0&\text{-}1&1&7 \\ 0&0&3&12\end{array}\right|$

$\begin{array}{c} \\ \text{-}1R_2 \\ \frac{1}{3}R_3 \end{array} \left|\begin{array}{ccc|c}1&0&0&7 \\ 0&1&\text{-}1 & \text{-}7 \\ 0&0&1&4 \end{array}\right|$

$\begin{array}{c} \\ R_2+R_3 \\ \\ \end{array} \left|\begin{array}{ccc|c}1&0&0&7 \\ 0&1&0&\text{-}3 \\ 0&0&1&4 \end{array}\right|$

$\text{Therefore: }\:\begin{Bmatrix}x &=& 7 \\ y &=& \text{-}3 \\ z &=& 4 \end{Bmatrix}$
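Both systems can be cross-checked numerically. A short Python sketch using numpy (assumed available): comparing the rank of the coefficient matrix with the rank of the augmented matrix detects the inconsistent first system, and np.linalg.solve recovers the unique solution of the second:

```python
import numpy as np

# Problem 1: x + y - z = 5, x + 2y - 3z = 9, x - y + 3z = 3
A1 = np.array([[1, 1, -1], [1, 2, -3], [1, -1, 3]], dtype=float)
b1 = np.array([5, 9, 3], dtype=float)
aug1 = np.column_stack([A1, b1])
# rank(A) < rank([A|b]) signals an inconsistent system (no solution)
print(np.linalg.matrix_rank(A1), np.linalg.matrix_rank(aug1))   # 2 and 3

# Problem 2: x - y + z = 14, 3x + 2y + z = 19, -2x + y - z = -21
A2 = np.array([[1, -1, 1], [3, 2, 1], [-2, 1, -1]], dtype=float)
b2 = np.array([14, 19, -21], dtype=float)
print(np.linalg.solve(A2, b2))   # approximately [ 7. -3.  4.]
```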
2017-07-21 05:10:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6534088850021362, "perplexity": 790.9619587021064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423716.66/warc/CC-MAIN-20170721042214-20170721062214-00462.warc.gz"}
http://www.quantwolf.com/doc/simplestrategies/ch03.html
# Chapter 3BSP and BOP Strategy The most difficult situation to deal with is when the probabilities are both unknown and changing in time. This is the problem you are faced with in the financial markets. There is no doubt that the market experiences trends. There are periods of time when the probability of an upward move is high and periods when it is low. The trends can also change quickly and unexpectedly. In a situation like this, using a large amount of historical data to estimate probabilities is probably not very useful and may even be counterproductive. The simplest way to deal with this situation is to assume that today's probabilities are the same as yesterday's. In other words, you bet that the market will do the same today as it did yesterday. If it went up yesterday then bet that it will go up today. If it went down then bet that it will go down. If a probability bias or trend lasts at least a few days then this simple strategy can be effective but errors will occur when the bias suddenly reverses. Let's now analyze this strategy for the case of the stock market. The $$A$$ outcome is that the market closes above the previous day's close and the $$B$$ outcome is that it closes below. The analysis will be simplified by assuming that the size of the returns for up and down days are the same but just different in sign. This is of course unrealistic but not an unreasonable first approximation. It means that we are treating the stock market as being similar to the coin flip game discussed above with the only difference being a scale factor for the expectation that is equal to the assumed size of the returns. The analysis will be further simplified if we define the $$A$$ and $$B$$ probabilities as follows \begin{eqnarray}\label{eq50} P(A) & = & p = \frac{1}{2} + b\\ P(B) & = & 1-p = \frac{1}{2} - b\nonumber \end{eqnarray} The $$b$$ parameter explicitly shows the degree of bias. It can range from -1/2 to +1/2. For $$b=-1/2$$ a down move is guaranteed and for $$b=+1/2$$ an up move is guaranteed. When $$b=0$$ there is no bias and up or down moves are equally likely (see the Single Coin Model, section 15.2, for more discussion on bias). Now if you bet that things will go the same as the previous day then you will win if the outcome for the two days is $$AA$$ or $$BB$$. The probability of this happening is $$\label{eq60} P(AA \mathrm{or} BB) = (\frac{1}{2} + b)^2 + (\frac{1}{2} - b)^2$$ You will lose if the outcome for the two days is $$AB$$ or $$BA$$. The probability of this happening is $$\label{eq70} P(AB \mathrm{or} BA) = 2(\frac{1}{2} + b)(\frac{1}{2} - b)$$ Assuming the returns are $$\pm r$$, the expected return is \begin{eqnarray}\label{eq80} E & = & r((\frac{1}{2} + b)^2 + (\frac{1}{2} - b)^2) - r(2(\frac{1}{2} + b)(\frac{1}{2} - b))\nonumber\\ & = & r4b^2 \end{eqnarray} The extraordinary thing about this result is that since the bias parameter is squared, you will get a positive expectation if it is positive or negative. It makes no difference, and you don't have to know or guess, what the direction of the bias is. Just bet the same as the previous day and you will have a positive expectation as long as there is some bias, however small. Of course this is not the whole story. The assumption in the above analysis is that the bias today is the same as yesterday. This will not be true when the bias switches. If the bias switches very often then the errors will probably erase the positive expectation. Another thing to consider is the variance of this strategy. 
To get the variance just add the win and lose probabilities and subtract the square of the expectation. This gives: $$\label{eq90} \mathrm{Var} = r^2(1 - 16b^4)$$ If the bias is small the variance will be very large. This means that periods of large losses are possible even though the expectation is positive. With a small bias you can expect a lot of volatility. There is a complement to the "bet the same as previous" (BSP) strategy and that is the "bet the opposite of previous" (BOP) strategy. If the market went up yesterday then bet that it will go down today and vice versa. The assumption here is that the bias switches from day to day. This means that the sign of the $$b$$ parameter in the $$P(A)$$ and $$P(B)$$ formulas switches from one day to the next. Using this strategy you will now lose on $$AA$$ or $$BB$$ and win on $$AB$$ or $$BA$$. Due to the switching of the sign on $$b$$, the probabilities for these two outcomes are: \begin{eqnarray}\label{eq100} P(AA \mathrm{or} BB) & = & 2(\frac{1}{2} + b)(\frac{1}{2} - b)\\ P(AB \mathrm{or} BA) & = & (\frac{1}{2} + b)^2 + (\frac{1}{2} - b)^2\nonumber \end{eqnarray} The win and lose probabilities are the same as for BSP strategy and so the expectation and variance must be exactly the same. The BSP and BOP strategies complement each other and every day one of the strategies will be successful. The market today will either move in the same direction as yesterday or the opposite. There are no other possibilities (remember we count no movement as a down day). The market will go through periods when the BSP strategy is dominant and periods when the BOP strategy dominates. The holy grail of trading systems is to come up with a way of knowing when to switch between them. One way of trying to deal with this trend switching process is by using Markov models. We will discuss some simple trading systems based on Markov models further on in the book but first we look at some examples of using a pure BSP or BOP strategy.
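Under the stated assumptions (a constant bias b and symmetric returns ±r), the BSP expectation 4rb² is easy to verify by simulation. A hedged Python sketch, with the function name and the particular values of b and r chosen here only for illustration:

```python
import random

def simulate_bsp(bias, r, days=200_000, seed=1):
    """Monte Carlo check of the BSP expectation under a constant bias.
    Each day is up (+r) with probability 1/2 + bias; we bet that today
    repeats yesterday's direction and collect +r on a hit, -r on a miss."""
    rng = random.Random(seed)
    p_up = 0.5 + bias
    prev_up = rng.random() < p_up
    total = 0.0
    for _ in range(days):
        up = rng.random() < p_up
        total += r if up == prev_up else -r
        prev_up = up
    return total / days

b, r = 0.1, 0.01
print(simulate_bsp(b, r))      # empirical expectation per day
print(4 * r * b * b)           # theoretical value 4 r b^2 = 0.0004
```

A negative bias of the same magnitude gives the same expectation, since only b squared enters the formula.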
2018-01-23 19:18:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.6726154685020447, "perplexity": 334.4286636625425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892238.78/warc/CC-MAIN-20180123191341-20180123211341-00171.warc.gz"}
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;ae175715.0102&FT=P&P=2913193&H=N&S=a
## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE Options: Use Forum View Use Monospaced Font Show Text Part by Default Show All Mail Headers Message: [<< First] [< Prev] [Next >] [Last >>] Topic: [<< First] [< Prev] [Next >] [Last >>] Author: [<< First] [< Prev] [Next >] [Last >>] Subject: Re: xparse From: Lars Hellström <[log in to unmask]> Reply To: Mailing list for the LaTeX3 project <[log in to unmask]> Date: Tue, 1 Sep 2009 17:10:30 +0200 Content-Type: text/plain Parts/Attachments: text/plain (33 lines) Joseph Wright skrev: > Lars Hellström wrote: >> I was thinking more about single spaces, as in >> >> \moveto 0 0 \curveto 47 0 100 53 100 100 >> >> (the idea being to express a bunch of graphic data compactly while still >> allowing the code to survive reflowing in a text editor), but this is of >> course on the boundary of what can be considered LaTeX2e-ish syntax. > > Personally, I'm not a fan of that input syntax: I prefer something like > the pgf approach. The idea was indeed that these should boil down to \pgfpathqmoveto and \pgfpathqcurveto respectively; I just wanted something more compact at FMi-level -2 (or thereabout). (So everything would be in a special environment, and instead of \moveto the command name might really be \M.) > However, I did a quick test and as I hoped you can do > this with > > \DeclareDocumentCommand \moveto { u{~} u{~} } { ... } > > Not sure how robust this is, but if you really want to do it you can at > least have a go. I'll hopefully do it using \def fairly soon (need to generate the curve data first), but my concern was rather for someone in the far future who might not have \def readily available and thus wanting to go via
2022-12-06 20:41:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9645646214485168, "perplexity": 14801.346029176117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711114.3/warc/CC-MAIN-20221206192947-20221206222947-00104.warc.gz"}
https://2011.igem.org/Team:DTU-Denmark/Technical_stuff_math
# Math ## Simplification and construction of ODEs The general reaction scheme is \begin{align} \color{blue} m + \color{red}s &\mathop{\rightleftharpoons}_{k_{-1,s}}^{k_{1,s}} c_{ms} \mathop{\rightarrow}^{k_{2,s}} (1 - p_s) \color{red} s \\ \color{red} s + \color{green} r &\mathop{\rightleftharpoons}_{k_{-1,r}}^{k_{1,r}} c_{sr} \mathop{\rightarrow}^{k_{2,r}} (1-p_r) \color{green} r \end{align} According to mass action kinetics the temporal behavior of the state variables $(m, s, r, c_{ms}, c_{sr})$ can be described by the following set of ordinary differential equations. Background degradation is assumed to be proportional to concentration and production rates are included as separate terms. \begin{align} \frac{dm}{dt} &= \alpha_m - \beta_m m - k_{1,s}ms + k_{-1,s} c_{ms} \\ \frac{ds}{dt} &= \alpha_s - \beta_s s - k_{1,s}ms + (k_{-1,s}+(1-p_s)k_{2,s}) c_{ms} - k_{1,r}sr + k_{-1,r} c_{sr} \\ \frac{dr}{dt} &= \alpha_r - \beta_r r - k_{1,r} sr + (k_{-1,r} + (1-p_r)k_{2,r}) c_{sr} \\ \frac{d c_{ms}}{dt} &= k_{1,s}ms -(k_{-1,s}+k_{2,s})c_{ms} \\ \frac{d c_{sr}}{dt} &= k_{1,r}sr - (k_{-1,r} + k_{2,r}) c_{sr} \end{align} Assuming $\frac{d c_{ms}}{dt} = 0$ and $\frac{d c_{sr}}{dt} = 0$ it follows that \begin{align} c_{ms} &= \frac{k_{1,s}ms}{k_{-1,s} + k_{2,s}} \\ c_{sr} &= \frac{k_{1,r}sr}{k_{-1,r}+k_{2,r}} \end{align} which is inserted into the ODEs to elliminate $c_{ms}$ and $c_{sr}$ \begin{align} \frac{dm}{dt} &= \alpha_m - \beta_m m - (k_{1,s} - \frac{k_{1,s}k_{-1,s}}{k_{-1,s} + k_{2,s}}) ms \\ \frac{ds}{dt} &= \alpha_s - \beta_s s - \bigg( k_{1,s} - \frac{k_{1,s}k_{-1,s}}{k_{-1,s} + k_{2,s}} - (1-p_s)\frac{k_{1,s}k_{2,s}}{k_{-1,s} + k_{2,s}} \bigg) ms - \bigg( k_{1,r} - \frac{k_{1,r}k_{-1,r}}{k_{-1,r}+k_{2,r}} \bigg) sr \\ \frac{dr}{dt} &= \alpha_r - \beta_r r - \bigg( k_{1,r} - \frac{k_{1,r} k_{-1,r}}{k_{-1,r} + k_{2,r}} - (1- p_r) \frac{k_{1,r} k_{2,r}}{k_{-1,r} + k_{2,r}} \bigg) sr \end{align} For both $ms$ and $sr$ \begin{align} k_1-\frac{k_1 k_{-1}}{k_{-1} + k_2} = \frac{k_1(k_{-1} + k_2)}{k_{-1} + k_2} - \frac{k_1 k_{-1}}{k_{-1} + k_2} = \frac{k_1 k_2}{k_{-1} + k_2} \end{align} and the equations simplify to \begin{align} \label{system} \frac{dm}{dt} &= \alpha_m - \beta_m m - k_s ms \\ \frac{ds}{dt} &= \alpha_s - \beta_s s - p_s k_s ms - k_r sr \\ \frac{dr}{dt} &= \alpha_r - \beta_r r - p_r k_r sr \end{align} where $k_s = \frac{k_{1,s} k_{2,s}}{k_{-1,s} + k_{2,s}}$ and $k_r = \frac{k_{1,r} k_{2,r}}{k_{-1,r} + k_{2,r}}$. To simplify the notation on steady-state analysis we define a vector of variables ${\bf x} = (m, s, r)$ and a hovering dot as ${\bf \dot{x}} = \frac{d \bf x}{dt}$. Thus the steady-state problem can be compactly specified as ${\bf \dot{x}} = 0$ with some solution $\bf x^*$ satisfying the condition. In this notation the system of differential equations are given by $\dot{\bf x} = {\bf f}({\bf x})$ where ${\bf f}({\bf x}) = (f_1({\bf x}), f_2({\bf x}), f_3({\bf x}))$. 
For Model 1 the steady state problem $${\bf \dot{x}} = \begin{bmatrix} \alpha_m - \beta_m m - k_s m s \\ \alpha_s - \beta_s s - k_r s r \\ \alpha_r - \beta_r r \end{bmatrix} = 0$$ is solvable by back substitution and $${\bf x^*} = \begin{bmatrix} \frac{\alpha_m (\beta_r \beta_s + \alpha_r k_r)}{\beta_r \alpha_s k_s + \beta_m \beta_r \beta_s + \beta_m \alpha_r k_r} \\ \frac{\alpha_s}{\beta_s + \frac{k_r \alpha_r}{\beta_r}} \\ \frac{\alpha_r}{\beta_r} \end{bmatrix}$$ The Jacobian given by $J_{ij} = \frac{\partial f_i ({\bf x})}{\partial x_j}$ for the simplified system of equations is calculated $$\renewcommand{\arraystretch}{1.4} \frac{\partial f_i ({\bf x})}{\partial x_j} = \begin{bmatrix} \frac{\partial f_1}{\partial m} & \frac{\partial f_1}{\partial s} & \frac{\partial f_1}{\partial r} \\ \frac{\partial f_2}{\partial m} & \frac{\partial f_2}{\partial s} & \frac{\partial f_2}{\partial r} \\ \frac{\partial f_3}{\partial m} & \frac{\partial f_3}{\partial s} & \frac{\partial f_3}{\partial r} \end{bmatrix} = \begin{bmatrix} - \beta_m - k_s s & - k_s m & 0 \\ - p_s k_s s & - \beta_s - p_s k_s m - k_r r & - k_r s \\ 0 & - p_r k_r r & - \beta_r - p_r k_r s \end{bmatrix}$$ For Model 1 $p_s = 0$ and $p_r = 0$ the Jacobian is triangular and the eigenvalues are the diagonal entries $\lambda = (- \beta_m - k_s s, \ -\beta_s - k_r r, \ - \beta_r)$. Because all eigenvalues have negative real parts any steady-state is stable. For model 2 $p_s = 0$ and $p_r = 1$ the eigenvalues of the Jacobian are more complicated $$\lambda = \begin{bmatrix} \frac{1}{2} \bigg( -\beta_s - \beta_r - k_r s - k_r r \pm \sqrt{(\beta_s + k_r r - \beta_r - k_r s)^2 + 4 k_r^2 r s} \bigg) \\ - \beta_m - k_s s \end{bmatrix}$$ but fortunately every eigenvalue has negative real parts if $$\beta_s + \beta_r + k_r s + k_r r > \sqrt{(\beta_s + k_r r - \beta_r - k_r s)^2 + 4 k_r^2 r s}$$ equivalent to $$(\beta_s + \beta_r + k_r s + k_r r)^2 > (\beta_s + k_r r - \beta_r - k_r s)^2 + 4 k_r^2 r s$$ or simply $$\beta_s \beta_r + \beta_s k_r s + \beta_r k_r r > 0$$ which is true for any steady-state. In conclusion any analytical steady-state is stable.

## Parameter estimation

First order background decay of RNA is modeled by assuming that the decay is a stochastic process. $$\label{decay} \frac{dN}{dt} = - \beta N$$ where N is the amount of RNA and $\beta$ is a rate constant. The expression is analytically solvable by separation of variables $$\int \frac{1}{N} dN = - \beta \int dt$$ equivalent to \begin{align} N(t) &= N_0 e^{-\beta t} \\ \beta &= \frac{\ln 2}{t_{1/2}} \end{align} Biological measurements of degradation often assume first order decay for an experimental setup. One way to experimentally determine the decay rate is to inhibit transcription at $t_0$ and measure the amount of RNA at different time points (Ross, 1995). Background degradation half-lives of both chiP and chiX are determined to be $\sim$ 27 $min$ (Overgaard, 2009). Hence $$\beta_s = 0.0257 min^{-1}$$ Our m consists of chiP fused to lacZ. lacZ mRNA is reported to have a half-life of $1.7 \ min$ (Liang, 1999). The actual value of the fusion RNA depends on the particular set of cis-RNA stability determinants but might not be easily inferable from the primary structure. The rate limiting step in mRNA degradation is thought to be endonucleolytic cleavage in the 5' untranslated region (Bechhofer, 1993).
In our chiP-lacZ RNA the 5' UTR consists of the 5' UTR of chiP and we might argue that the stability determinants of lacZ have no effect. $$\beta_m = 0.0257 min ^{-1}$$ A wide range of sRNA stabilities has been reported (Vogel, 2003), with half-lives ranging from 2-32 min. Based on this limited dataset we could restrict $\beta_r = (0.02 - 0.35) min^{-1}$. Second order degradation is the coupled RNA degradation and is represented by $$\frac{dm}{dt} = - \beta_m m - k_s m s$$ This representation assumes unchanged background degradation. Separation of variables leads to $$\label{second_order} \int \frac{1}{m} dm = - \beta_m \int dt - k_s \int s dt$$ Assuming that s is constant at the conditions at which the data is measured, the expression simplifies to $$k_s = \frac{1}{[s]} \bigg( \frac{\ln 2}{t_{1/2}} - \beta_m \bigg)$$ Overgaard et al measured the s-coupled half-life of m to be $\sim 4 \ min$ at $[s] = 180 nmol$, read by eye from fig 1B @MicM(pBAD-micM/pNDM-ybfM). Hence $$k_s = 0.000820 \ (nmol \ min)^{-1}$$

## Dimensionless analysis

For $p_s = 0$ the governing equations are \begin{align} \frac{dm}{dt} &= \alpha_m - \beta_m m - k_s ms \\ \frac{ds}{dt} &= \alpha_s - \beta_s s - k_r sr \\ \frac{dr}{dt} &= \alpha_r - \beta_r r - p_r k_r s r \end{align} We restate the equations into a dimensionless form by rescaling, in effect introducing the new variables: $\tau = t \beta_m$, $M = \frac{\beta_m}{\alpha_m}m$, $S = \frac{\beta_s}{\alpha_m}s$, $R = \frac{\beta_r}{\alpha_m}r$. By substitution the equations are algebraically manipulated into \begin{align} \frac{dM}{d \tau} &= 1 - M - \frac{k_s \alpha_m}{\beta_m \beta_s} MS \\ \frac{dS}{d \tau} &= \frac{\beta_s}{\beta_m} \bigg(\frac{\alpha_s}{\alpha_m} - S - \frac{k_r \alpha_m}{\beta_s \beta_r} SR \bigg) \\ \frac{dR}{d \tau} &= \frac{\beta_r}{\beta_m} \bigg(\frac{\alpha_r}{\alpha_m} - R - p_r \frac{k_r \alpha_m}{\beta_s \beta_r} S R\bigg) \end{align} At first this seems like an unnecessarily complicated way of rewriting the equations, but the dimensionless parameter groups $p = (\frac{k_s \alpha_m}{\beta_m \beta_s}, \frac{k_r \alpha_m}{\beta_s \beta_r}, \frac{\beta_s}{\beta_m}, \frac{\beta_r}{\beta_m}, \frac{\alpha_s}{\alpha_m}, \frac{\alpha_r}{\alpha_m} )$ contain only 6 effective parameters fully determining the dynamics and steady-state of the system, whereas the original equations had 8 parameters. $p_r$ is assumed to be either one or zero.
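The Model 1 equations can be integrated numerically and compared against the analytic steady state derived above. A sketch using scipy's solve_ivp; beta_m, beta_s and k_s take the estimates quoted in the text, while the remaining rate constants, production rates and initial conditions are placeholders chosen only to produce a concrete run:

```python
import numpy as np
from scipy.integrate import solve_ivp

# beta_m, beta_s and k_s follow the estimates above; everything else is a placeholder.
beta_m, beta_s, beta_r = 0.0257, 0.0257, 0.1
k_s, k_r = 0.000820, 0.000820
alpha_m, alpha_s, alpha_r = 1.0, 1.0, 1.0
p_s, p_r = 0.0, 0.0          # Model 1

def rhs(t, y):
    m, s, r = y
    dm = alpha_m - beta_m * m - k_s * m * s
    ds = alpha_s - beta_s * s - p_s * k_s * m * s - k_r * s * r
    dr = alpha_r - beta_r * r - p_r * k_r * s * r
    return [dm, ds, dr]

sol = solve_ivp(rhs, (0, 5000), [0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)

# Analytic Model 1 steady state for comparison
r_star = alpha_r / beta_r
s_star = alpha_s / (beta_s + k_r * alpha_r / beta_r)
m_star = alpha_m / (beta_m + k_s * s_star)
print(sol.y[:, -1])
print([m_star, s_star, r_star])
```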
2023-02-09 05:45:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 9, "equation": 15, "x-ck12": 0, "texerror": 0, "math_score": 1.0000072717666626, "perplexity": 2805.277654014896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764501407.6/warc/CC-MAIN-20230209045525-20230209075525-00828.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-11th-edition/chapter-4-section-4-2-exponential-functions-4-2-exercises-page-409/37
## College Algebra (11th Edition)

The parent function is $f(x)=2^x$ (shown in red); the given function is $g(x)=2^{x-1}+2$ (shown in blue). The parent function can be graphed by calculating a few coordinates and connecting them with a smooth curve: $f(-2)=2^{-2}=\frac{1}{4}$ $f(-1)=2^{-1}=\frac{1}{2}$ $f(0)=2^0=1$ $f(1)=2^1=2$ $f(2)=2^2=4$ For every corresponding x-value the following equation is true: $f(x-1)+2=g(x)$ This means that the graph is translated 1 unit right and 2 units up ($g(x)$ involves a horizontal shift of 1 to the right and also a vertical shift of 2 upwards). First, the horizontal shift. We only consider the function $g'(x)=2^{x-1}$. For example, if $f(0)=1$ in the original $f(x)$, this will be equal to $g'(1)=f(1-1)=f(0)=1$. Here $f(0)=g'(1)$ and also $f(1)=g'(2)$. We can see that each point of the parent function was moved to the right by 1 unit. Second, we translate this $g'(x)$ function to get the originally given function. Now the following equation is true: $g'(x)+2=g(x)$ Every $g'(x)$ value will be increased by 2. For example, if $g'(1)=1$, this will be translated as $g'(1)+2=1+2=3$. We can see that $g(x)$ is greater than $g'(x)$ by 2 for every corresponding x-value.
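The shift can also be seen numerically by tabulating a few values. A tiny Python sketch (the variable names are mine):

```python
# Tabulate f(x) = 2**x and g(x) = 2**(x - 1) + 2 to see the translation numerically.
f = lambda x: 2 ** x
g = lambda x: 2 ** (x - 1) + 2

for x in range(-2, 4):
    # g at x equals f at (x - 1) raised by 2: a shift right by 1 and up by 2.
    assert g(x) == f(x - 1) + 2
    print(x, f(x), g(x))
```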
2018-12-18 18:57:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8906112909317017, "perplexity": 192.92720043740988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829568.86/warc/CC-MAIN-20181218184418-20181218210418-00117.warc.gz"}
https://www.functori.com/docs/mitten/scenarios/delegate-selection.html
# delegate_selection

The selection strategy for delegates (i.e. validators in the committee) at each level and round. It can be one of:

##### random

#![allow(unused)] fn main() { Random }

This follows what is implemented in the Tezos node. Delegates are chosen at random depending on their stake and a sampling strategy. Use this if you want to be as close as possible to a mainnet Tezos node and if your scenario does not depend on the delegates and the committee.

##### round_robin

#![allow(unused)] fn main() { Round_robin }

Delegates are chosen in a round robin manner over the list of nodes.

##### round_robin_over

#![allow(unused)] fn main() { Round_robin_over [ [N "b1"; N "b2"; N "b3"]; [N "b2"]; [N "b3"; N "b1"; N "b2"] ] }

The explicit list of who is in the committee and who is the baker for each level and round. The last item of the list is iterated over in a round robin manner for the later levels. In the example above, the first level (of the protocol) will have b1 be the proposer for round 0, b2 be the proposer for round 1, b3 be the proposer for round 2, b1 be the proposer for round 3, and so on. In the second level, b2 is the proposer for all the rounds. In subsequent levels, the proposer is chosen in the list [b3, b1, b2] by an index i which is computed (by round robin) by: or more generally, if the delegate selection strategy is Round_robin_over l (and we note by the element of the list modulo its size): • When the level is smaller than : • When the level is greater or equal than :
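Since the exact index formulas are not spelled out above, the following sketch only mirrors the behaviour of the worked example; the 0-indexed levels and rounds, and the reuse of the last entry for later levels, are assumptions.

```python
# Sketch of the Round_robin_over behaviour described in the example above (assumptions:
# 0-indexed levels/rounds; levels beyond the explicit list reuse its last entry;
# the proposer for a round is taken round-robin from that level's list).
RULES = [["b1", "b2", "b3"], ["b2"], ["b3", "b1", "b2"]]

def proposer(level, rnd, rules=RULES):
    lst = rules[level] if level < len(rules) else rules[-1]
    return lst[rnd % len(lst)]

print([proposer(0, r) for r in range(4)])  # ['b1', 'b2', 'b3', 'b1']
print([proposer(1, r) for r in range(3)])  # ['b2', 'b2', 'b2']
print([proposer(5, r) for r in range(3)])  # ['b3', 'b1', 'b2']
```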
2023-03-29 01:19:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5759581327438354, "perplexity": 1761.155219117066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00020.warc.gz"}
https://paperswithcode.com/paper/statistical-models-for-degree-distributions
# Statistical Models for Degree Distributions of Networks

We define and study the statistical models in exponential family form whose sufficient statistics are the degree distributions and the bi-degree distributions of undirected labelled simple graphs. Graphs that are constrained by the joint degree distributions are called $dK$-graphs in the computer science literature and this paper attempts to provide the first statistically grounded analysis of this type of models...
2020-09-24 13:50:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3504268527030945, "perplexity": 1123.216797827987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219221.53/warc/CC-MAIN-20200924132241-20200924162241-00078.warc.gz"}
https://styler.r-lib.org/dev/reference/parse_transform_serialize_r.html
Wrapper function for the three common operations.

parse_transform_serialize_r(text, transformers, base_indention, warn_empty = TRUE)

Arguments

text: The text to parse. Passed to cache_make_key() to generate a key.

base_indention: Integer scalar indicating by how many spaces the whole output text should be indented. Note that this is not the same as splitting by line and adding base_indention spaces before the code in the case where multi-line strings are present. See 'Examples'.

warn_empty: Whether or not a warning should be displayed when text does not contain any tokens.

See also: parse_transform_serialize_roxygen()
2021-10-26 12:30:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4022887051105499, "perplexity": 1579.7241216899808}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587877.85/warc/CC-MAIN-20211026103840-20211026133840-00159.warc.gz"}
https://repository.nwu.ac.za/handle/10394/11187?show=full
dc.contributor.author Barnard, Etienne en_US
dc.date.accessioned 2014-08-15T12:56:30Z
dc.date.available 2014-08-15T12:56:30Z
dc.date.issued 2011 en_US
dc.identifier.citation Barnard, E. 2011. Determination and the no-free-lunch paradox. Neural Computation, 23(7):1899-1909. [https://doi.org/10.1162/NECO_a_00137] en_US
dc.identifier.issn 0899-7667
dc.identifier.issn 1530-888X
dc.identifier.uri http://hdl.handle.net/10394/11187
dc.identifier.uri https://doi.org/10.1162/NECO_a_00137
dc.description.abstract We discuss the no-free-lunch (NFL) theorem for supervised learning as a logical paradox—that is, as a counterintuitive result that is correctly proven from apparently incontestable assumptions. We show that the uniform prior that is used in the proof of the theorem has a number of unpalatable consequences besides the NFL theorem, and propose a simple definition of determination (by a learning set of given size) that casts additional suspicion on the utility of this assumption for the prior. Whereas others have suggested that the assumptions of the NFL theorem are not practically realistic, we show these assumptions to be at odds with supervised learning in principle. This analysis suggests a route toward the establishment of a more realistic prior probability for use in the extended Bayesian framework.
dc.language en
dc.publisher MIT Press
dc.title Determination and the no-free-lunch paradox en_US
dc.type Article
dc.contributor.researchID 21021287 - Barnard, Etienne
2019-09-20 03:10:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8148229718208313, "perplexity": 4818.6793321128835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573827.2/warc/CC-MAIN-20190920030357-20190920052357-00075.warc.gz"}
https://community.wolfram.com/groups/-/m/t/1747131
# Sparse Ruler Conjecture

Posted 7 months ago 2423 Views | 2 Replies | 8 Total Likes |

What is the smallest set so that differences of members of the set give all values from $1$ to $n$? This is a famous problem worked on by Paul Erdős, Marcel J. E. Golay, John Leech, Alfréd Rényi, László Rédei, Solomon W. Golomb, Alfred Brauer and C. Brian Haselgrove, among many others. For example, {0, 1, 6, 9, 11, 13} covers 1 to 13: 1-0, 13-11, 9-6, 13-9, 6-1, 6-0, 13-6, 9-1, 9-0, 11-1, 11-0, 13-1, 13-0. For 13, the smallest set has 6 members.

John Leech ("On the Representation of 1,2,...,n by Differences", J. of London Math Soc, April 1956) gave the definitive answer: a lower bound of $\sqrt{2.434 n}$ (blue) and an "asymptotic" upper bound of $\sqrt{3.348 n}$ (green). The 63-year-old Leech bounds are still widely cited and used. Even today the Wikipedia article on sparse rulers cites the bound as a disproof for the Optimal Ruler Conjecture of Peter Luschny. The Leech bound is cited by thousands of papers, some published just a few weeks ago. We can compare the Leech bounds to the best known actual values (yellow) to 1750.

ListPlot[{Table[Sqrt[2.434 n] - ( Sqrt[3 n + 9/4]), {n, 1, 1750}], Table[minimalSparse[[n]] - ( Sqrt[3 n + 9/4]), {n, 1, 1750}], Table[ Sqrt[3.348 n] - ( Sqrt[3 n + 9/4]), {n, 1, 1750}]}, AspectRatio -> 1/5]

See all those lines of yellow dots? There is a deeper pattern to those lines. Now that I've extended results to 1750 and collected thousands of Wichmann-like ruler constructions, it seems the Leech bounds are wrong. This seems to be a computational problem rather than an analysis problem. I can also strengthen Peter's conjecture.

Sparse Ruler Conjecture. If a minimal sparse ruler of length $n$ has $m$ marks, easy (true to at least 1790): $m-\lceil \sqrt{3n +9/4} \rfloor \in (0,1)$; hard (true to at least 473): $m+\frac{1}{2} \ge \sqrt{3 n +9/4} \ge m-1$.

At Sparse Rulers I have a Wolfram Demonstration on sparse rulers up to length 1750. A sparse ruler of length $n$ has $\lceil \sqrt{3n +9/4} \rfloor + k$ marks, with $\lceil x \rfloor$ intended as the round function and $k$ as the excess. The excess $k$ is 0 until 51, 59, 69, ... (A308766) where $k=1$, the black square values in the pattern below to 1750. Minimal marks $m$ are listed in A046693 and verified to length 213. Values increment down, then across. Bottom row values are Wichmann rulers A289761. The columns correspond to lines in the pattern above. This grid with 1750 Tooltips is part of the Sparse Rulers Demonstration.

With more optimal sparse rulers the $(0,1)$ pattern for $m - \lceil \sqrt{3 n +9/4} \rfloor$ might look like the following if the hard form of the conjecture is true. Some of the 0-excess sparse rulers are really hard to find. It took trillions of computations to get to these patterns. These sparse rulers are vital for finding various high-valence graceful graphs, such as the one below.

I was greatly helped by counts and code from Parallel Computation of Sparse Rulers by Arch Robison. I asked him if he still had his data, but he hadn't kept a copy of it. The data is lost. But Tomas Sirgedas was able to recreate some of the data to value 150. You can compare the 63-year-old mathematical bound to the data listed in the Sparse Rulers Wolfram Demonstration. On the 1st of August I identified about two thousand Wichmann-like ruler constructions that are infinite and built a few more algorithms to extend the pattern of A046693(n) - A309407(n) to n=4443.
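As a quick sanity check of these definitions (in Python rather than the Mathematica used in this thread), a few lines confirm the length-13 example and its excess:

```python
# Check that a mark set is a complete sparse ruler of length n (every distance 1..n
# appears as a difference) and report its excess, marks - round(sqrt(3 n + 9/4)).
from itertools import combinations
from math import sqrt, floor

def is_complete(marks, n):
    diffs = {abs(a - b) for a, b in combinations(marks, 2)}
    return all(d in diffs for d in range(1, n + 1))

def excess(marks, n):
    return len(marks) - floor(sqrt(3 * n + 9 / 4) + 0.5)  # round to nearest integer

marks = [0, 1, 6, 9, 11, 13]   # the length-13 example above
print(is_complete(marks, 13))  # True
print(excess(marks, 13))       # 0
```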
A lot of the black and green squares are likely white and black instead, but there's a definite weird pattern to this. "Dark Satanic Mills on a cloudy day" -- NJA Sloane.

With another overnight run I cleaned up the pattern more. Those green areas have excess 2 and are at positions {1792, 2096, 2097, 2098, 2099, 2429, 2782, 2783, 3161}. The attached file (sparsedata.nb) has my sparse ruler collection as of Thu 1 Aug 2019. Can anyone find sparse rulers with fewer marks for a given length? I have nine improvements so far that aren't included in the notebook.

{0, 1966, {{1,14,27,55,28,55,28,1},{13,1,13,11,1,10,14,13}}}
{0, 2286, {{1,15,29,59,30,59,30,1},{14,1,14,12,1,11,15,14}}}
{0, 2630, {{1,16,31,63,32,63,32,1},{15,1,15,13,1,12,16,15}}}
{0, 2931, {{1,17,33,67,34,67,34,1},{16,1,16,13,1,13,17,16}}}
{0, 3319, {{1,18,35,71,36,71,36,1},{17,1,17,14,1,14,18,17}}}
{0, 3390, {{1,18,35,71,36,71,36,1},{17,1,17,15,1,14,18,17}}}
{0, 3806, {{1,19,37,75,38,75,38,1},{18,1,18,16,1,15,19,18}}}
{0, 4167, {{1,20,39,79,40,79,40,1},{19,1,19,16,1,16,20,19}}}
{0, 4246, {{1,20,39,79,40,79,40,1},{19,1,19,17,1,16,20,19}}}

If anyone has trouble using the notebook (sparsedata.nb) in the above post, let me know. The SparseRuleDemo might also be handy.

I submitted a new Wolfram Demonstration on Wichmann-like rulers (see attached WichmannLikeRulers.nb). You can get a sneak peek of it here. The 2216 Wichmann rules presented each produce an infinite number of sparse rulers. For lengths under 4444, I have about half a million sparse rulers at the moment. A full third of them comes from this set of Wichmann-like rules. If you'd like to produce sparse rulers with any integer length, this set of variants will provide a good start. Are there any infinite rules I'm missing here?

The attached notebook, FindWichmann.nb, takes advantage of the 886 infinite Wichmann-like sparse ruler constructions given in the Wolfram Demonstration above. For any positive integer it will find the best sparse ruler that can be made from these Wichmann-like constructions within a few seconds. For example, let's make a sparse ruler of length 10 million.

FindWichmann[10000000]
{{{500, 22, {835, 2151}}, {501, 22, {835, 2151}}}, {{828, 24, {832, 2165}}, {829, 24, {832, 2165}}, {830, 24, {832, 2163}}}}
WichmannRulerPlain[wichmannrules[[500]], {835, 2151}]

A sparseness value of 22 is really awful compared to what I usually see. Sparseness is marks - round(sqrt(3 length + 9/4)). The Leech bound is floor(sqrt(3.348 length) - round(sqrt(3 length + 9/4))). So how bad is a sparseness of 22 compared to the Leech bound?

LeechBound[10000000]
309

My instant direct construction is considerably better than the upper bound given by Leech. Let's try a higher value, 24! (620448401733239439360000).

LeechBound[24!]
76959449642
FindWichmann[24!]
{{{605, 852645032, {235567620692, 422893418966}}, ....
WichmannRulerPlain[wichmannrules[[605]], {235567620692, 422893418966}]

That is a sparse ruler with an excess of 852645032, considerably better than the Leech bound of 76959449642. That very likely isn't the optimal sparse ruler for length 24!, but it's reasonably minimal for a fast construction.

Does this construction method ever exceed the Leech bound? It's not so good up to 117, where it exceeds the Leech bound at lengths {1, 2, 3, 4, 5, 6, 13, 17, 23, 27, 34, 40, 41, 48, 51, 58, 59, 66, 69, 75, 76, 77, 86, 87, 88, 95, 96, 97, 98, 99, 109, 110, 113, 117}. I've found one case so far after that: 4211 has a Leech bound sparseness of 6 and an instantaneous construction sparseness of 8.
ListPlot[{Table[values[[n]], {n, 1, 5813}], Table[ Sqrt[3.348 n] - ( Sqrt[3 n + 9/4]), {n, 1, 5813}]}, AspectRatio -> 1/5]

LeechBound[4211]
6
FindWichmann[4211]
{{46, 8, {13, 12}}, {258, 8, {11, 66}}, {272, 8, {11, 65}}, ...

Through much, much slower methods I have 10 sparse rulers of length 4211 with an excess of 1. With instant constructions here's what a sparseness diagram looks like to length 5763, with gray=0, black=1, red=2, orange=3, yellow=4, green=5, green=6, purple=8.

For tedious slow constructions, here are the best known sparseness values to length 4443. Doing things the hard way gives much cleaner results. Some of the more optimal sparse rulers took months for me to find. No sparse rulers past length 213 have been proven optimal.

Looks like I need more of these infinite Wichmann-like ruler constructions and perhaps some hard-coded rulers for under 120. Still, it is handy to have near instant constructions.

Unsolved:
1. Does the FindWichmann construction ever exceed the Leech bound past 4211?
2. What are other Wichmann-like constructions that work infinitely for all input values?

EDIT: I was told I was lemon-picking, finding the worst possible outcomes. Here's some cherry-picking for likely optimal sparse rulers of some arbitrary large values.

FindWichmann[#][[1, 1]] & /@ {7^7, 2^27, 3^27}
{{166, 1, {268, 490}}, {1, 0, {3326, 6759}}, {255, 1, {398580, 1594322}}}
Row[{WichmannDisplay[wichmannrules[[166]], {268, 490}], WichmannDisplay[wichmannrules[[1]], {3326, 6759}], WichmannDisplay[wichmannrules[[255]], {398580, 1594322}]}, " "]

A046693[n]-A309407[n] -- Dark Mills pattern, 10501 terms. Gray = 0, Black = 1, Red = 2. Bottom row n values are https://oeis.org/A289761. On August 13, I managed to clean up a lot of dust on this sequence. Some of the 1's and 2's might be 1 lower.

Got to take a look at a three day run on another computer. Values for length 1313, 1358, 1448, 1583, 1673 are +1 dust in the image below due to the following sparse rulers:

{{{1,11,1,21,1,21,22,45,23,1},{10,1,1,1,1,1,9,18,10,10}},
{{1,11,1,21,1,21,22,45,23,1},{10,1,1,1,1,1,9,19,10,10}},
{{1,11,1,21,1,21,22,45,23,1},{10,1,1,1,1,1,9,21,10,10}},
{{1,11,1,21,1,21,22,45,23,1},{10,1,1,1,1,1,9,24,10,10}},
{{1,11,1,21,1,21,22,45,23,1},{10,1,1,1,1,1,9,26,10,10}}}

Attachments:

Posted 7 months ago

Hi. I was doing some work you may find useful. I was just recently introduced to p-adic... stuff? So, I'm sorry in advance if I get my terminology off. I was introduced to these because of something I was working on in binary that begged for more convenient notation (infinite divergent series in binary).... anyway, for clarity everything below is in base 2 besides the rationals, which are in standard base 10. I.e. ...111.0 = 1+2+4... etc.

So I let x=...1010.0
x/2=...10101.0
Then added 1 to x to get:
x+1=...101011.0
Then x+(x/2) to get = (-1/3)+(-2/3)=-1

What I found out was that the results I found were contrary to p-adic notation. I "should" have gotten plus or minus 1/3, but instead got -1/3 and -2/3. I also got that -...111.0 = 0 (note the negative). I should have gotten that -...111.0 = 1. I suspect that these differences may be due to the "first prime" in p-adic being considered a zero, and not a 1, but I have no idea. Like I said, I was just recently led to this, and have basically no undergraduate mathematical training, but I hope it helps. I suspect it may be related.
2020-02-28 06:47:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6471135020256042, "perplexity": 2284.6432316951227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147054.34/warc/CC-MAIN-20200228043124-20200228073124-00049.warc.gz"}
http://math.stackexchange.com/questions/201168/projection-onto-closed-convex-set
# Projection onto closed convex set

Show that the function defined by $f(t)=|P_{D}(x+td)-x|$ is nondecreasing, where $D$ is closed convex, $x\in D$, $t\geq 0$, $d\in \mathbb{R}^{n}$ and $P_{D}$ is the projection onto $D$.

I tried to solve this question in a lot of ways; for example, if we use the polarization identity, we get $$\frac{1}{2}(|P_{D}(x+td)-x|^{2}-|P_{D}(x+sd)-x|^{2})= \\ \langle P_{D}(x+td)-x+P_{D}(x+sd)-x,P_{D}(x+td)-x-(P_{D}(x+sd)-x)\rangle$$ where $\langle,\rangle$ stands for the usual scalar product on $\mathbb R^{n}$. I tried to explore this equality, but I got nothing. Here are some inequalities that may help: if $D$ is closed and convex then: $$\langle x-P_{D}(x),v-P_{D}(x)\rangle \ \leq \ 0 \ \forall v\in D$$ $$\langle x-v,v-P_{D}(x)\rangle \ \leq \ 0 \ \forall v\in D$$ I would appreciate some help; this is a problem from my homework. Thanks

- Eer...what is $\,P_D\,$?? – DonAntonio Sep 23 '12 at 14:40

@DonAntonio I guess from the title that $P_D$ is the orthogonal projection onto $D$. – Siminore Sep 23 '12 at 14:53

Suppose $v, w \in \mathbb{R}^n$ and let $z = P_D(w) - P_D(v)$. Then (using your first inequality) we have $$\langle v, z \rangle \leq \langle P_D(v), z \rangle \leq \langle P_D(w), z \rangle \leq \langle w, z \rangle.$$ Now take $v = x + t_1d$ and $w = x + t_2d$ for some $t_2 > t_1 > 0$. Then these inequalities can be rewritten as $$t_1 \langle d, z \rangle \leq \langle P_D(v) - x, z \rangle \leq \langle P_D(w) - x, z \rangle \leq t_2 \langle d, z \rangle.$$ This implies that $\langle d, z \rangle \geq 0$ and in particular $$\langle P_D(v) - x, z \rangle \geq 0.$$ Then $$||P_D(w) - x||^2 = ||P_D(v) - x + z||^2 = ||P_D(v)-x||^2 + 2 \langle P_D(v) - x, z\rangle + ||z||^2 \geq ||P_D(v)-x||^2.$$

- Nice answer, thanks WimC. – Tomás Oct 1 '12 at 12:37

Simplifications: we can assume that $x=0$ (since we could shift the whole picture by $-x$), so $0\in D$. Denote $P_t:=P_D(td)$ and $P_s:=P_D(sd)$, $\ 0<t<s$. We want to prove that $|P_s|^2 > |P_t|^2$, i.e. $$\langle P_s - P_t, P_s+P_t\rangle \ge 0$$ That's as far as I could get; also $t=1$ can be a simplification, but it doesn't matter much. Now we can use that $0$, $P_s$, $P_t$ and $\displaystyle\frac{P_s+P_t}2$ are all in $D$. I think we are very near...

- Did you get some result with this, because I tried this one too with no results. I'm gonna edit my post and put some inequalities that are true for this case. The inequality you used can be taken as a definition of projection in this case (closed convex). – Tomás Sep 24 '12 at 12:23

Not yet. I saw at first glance that it is similar to your polarization, but then I also realized that it is not exactly the same. Anyway, is it clear why those properties hold for $P_D$? Maybe the same method is useful. If I find something, I'll post it, but I think we're near.. – Berci Sep 24 '12 at 15:34

The first property is an immediate consequence of the definition of projection. You can find it, for example, in the book of Brezis, Functional Analysis. The other one is an exercise (a consequence of the first one). In fact they are equivalent. – Tomás Sep 24 '12 at 22:39
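A numerical illustration (not part of the original thread) makes the claim plausible before proving it; here $D$ is taken to be the closed unit ball, for which the projection has a simple formula.

```python
# Check numerically that f(t) = |P_D(x + t d) - x| is nondecreasing for the unit ball.
import numpy as np

def proj_ball(y, radius=1.0):
    norm = np.linalg.norm(y)
    return y if norm <= radius else y * (radius / norm)

rng = np.random.default_rng(0)
x = proj_ball(rng.normal(size=3))   # x must lie in D
d = rng.normal(size=3)

ts = np.linspace(0.0, 5.0, 200)
f = np.array([np.linalg.norm(proj_ball(x + t * d) - x) for t in ts])
print(bool(np.all(np.diff(f) >= -1e-12)))  # True: f is (numerically) nondecreasing
```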
2015-08-28 07:37:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748215675354004, "perplexity": 229.41674812711062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062327.4/warc/CC-MAIN-20150827025422-00001-ip-10-171-96-226.ec2.internal.warc.gz"}
http://www.leedsmathstuition.co.uk/2013/06/calculus-of-residues/
# Calculus of Residues

After blowing off the cobwebs after a couple of years I have been looking at some of the notes that I made some years back on some courses that I took at Warwick on Complex Analysis and Vector Analysis. Integration has always been one of my favourite areas of mathematics. At A-Level I learned lots of different techniques for calculating some interesting integrals – but A-Level only just skims the surface when it comes to integration and it can be difficult for A-Level students (through no fault of their own) to appreciate the significance of integration. Integration by parts, integration by substitution and reduction formulae are all great, but there are still many integrals which require more advanced techniques to calculate. Contour integrals and the calculus of residues can often come to the rescue.

Contour integrals are a way of passing difficult integrals over a real interval such as $$\int_{-\infty}^{\infty}\!{\dfrac{x^{2}}{1+x^{4}}\,\mathrm{d}x}$$ into the complex plane and taking advantage of the Cauchy integral theorem and the calculus of residues. I remember how it felt when I first learned the formula for integration by parts because it meant that I was able to find integrals that were previously impossible for me to calculate – even though I have done contour integrals before, it has been very exciting for me to re-discover them.

Looking through one of my books I came across this problem – show that for $a>1$ $$\int_{0}^{2\pi}\!\frac{\sin 2\theta}{(a+\cos\theta)(a-\sin\theta)}\;\mathrm{d}\theta = -4\pi\left(1-\frac{2a\sqrt{a^{2}-1}}{2a^{2}-1}\right)$$ After spending a good deal of one of my afternoons wrestling with the algebra I managed to arrive at a solution, which you can download here as a pdf.

Here is a graph of the integrand in the case when $a=2$. As you can see from the diagram, the area bounded by the curve and the $x$-axis certainly exists, but trying to calculate this integral using A-Level techniques is going to be incredibly difficult if not impossible (if anyone can do it then I would love to see the solution). Unfortunately there are and always will be integrals that cannot be calculated analytically – this is just the way it is and there is no getting around it, but contour integrals certainly allow you to calculate a huge range of integrals that previously would have been seemingly impossible.
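For anyone who wants to check the stated closed form without doing the contour integral, ordinary numerical quadrature will do; the following is only a verification sketch, not the residue calculation itself.

```python
# Numerically check the stated identity at a = 2 with ordinary quadrature.
import numpy as np
from scipy.integrate import quad

a = 2.0
integrand = lambda t: np.sin(2 * t) / ((a + np.cos(t)) * (a - np.sin(t)))
numeric, _ = quad(integrand, 0, 2 * np.pi)

closed_form = -4 * np.pi * (1 - 2 * a * np.sqrt(a**2 - 1) / (2 * a**2 - 1))
print(numeric, closed_form)  # the two values should agree to quadrature accuracy
```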
2018-08-20 03:01:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8981060981750488, "perplexity": 184.17143274930964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215523.56/warc/CC-MAIN-20180820023141-20180820043141-00665.warc.gz"}
http://math.stackexchange.com/questions/92110/hyperbolic-uniformization-metrics
# Hyperbolic Uniformization Metrics

I have been working with Euclidean Ricci flow but have been having considerable trouble trying to implement the same discrete gradient descent functionality in hyperbolic space. I am following the proposed algorithm at the end of "Computing Teichmuller Shape Space", but the algorithm fails very quickly due to the $[-1, +1]$ bounds of the $\operatorname{acos}$ function. If anyone could confirm that I am using the proper equations it would be very helpful.

Pre-processing (circle packing generation):

1) compute the vertex radius $$\gamma_i = \frac{l_{ij} + l_{ki} - l_{jk}}{2}$$ where $l_{xy}$ is the Euclidean distance between two points

2) compute the edge weight $\Phi_{ij}$ via the hyperbolic cosine law $$\cosh l_{ij} = \cosh \gamma_i \cosh \gamma_j + \sinh \gamma_i \sinh \gamma_j \cos\Phi_{ij}$$ as such I am using: $$\Phi_{ij} = \operatorname{acos}\frac{\cosh \gamma_i \cosh \gamma_j - \cosh l_{ij}}{-\sinh \gamma_i \sinh \gamma_j}$$

Main Loop (gradient descent):

1) compute the new edge length $l_{ij}$ using the inverse of the hyperbolic cosine law: $$l_{ij} = \operatorname{arccosh}(\cosh \gamma_i \cosh \gamma_j + \sinh \gamma_i \sinh \gamma_j \cos\Phi_{ij})$$

2) compute the corner angles $\theta_i^{jk}$ of every face using the hyperbolic cosine law: $$\theta_i^{jk} = \operatorname{acos}\left(\frac{\cosh l_{ij} \cosh l_{ki} - \cosh l_{jk}}{\sinh l_{ij} \sinh l_{ki}}\right)$$

3) compute the angle deficit at every vertex (easy)

4) update $y_i$ based on the difference of target and current Gaussian curvature (easy)

5) loop while max error > allowed error (easy)
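For what it's worth, here is a small sketch of steps 1 and 2 of the main loop on toy values; the radii and weights are made up, and the clamp is only a guard against round-off, not a fix for a genuinely invalid configuration.

```python
# Steps 1-2 above on a toy triangle: edge lengths from radii and edge weights, then a
# corner angle, both via the hyperbolic law of cosines. Values are illustrative only.
from math import cos, cosh, sinh, acosh, acos

def edge_length(g_i, g_j, phi_ij):
    # inverse hyperbolic cosine law: l_ij from the two radii and the edge weight
    return acosh(cosh(g_i) * cosh(g_j) + sinh(g_i) * sinh(g_j) * cos(phi_ij))

def corner_angle(l_ij, l_ki, l_jk):
    # hyperbolic cosine law for the angle at vertex i, clamped to acos's [-1, 1] domain
    c = (cosh(l_ij) * cosh(l_ki) - cosh(l_jk)) / (sinh(l_ij) * sinh(l_ki))
    return acos(max(-1.0, min(1.0, c)))

g = [0.3, 0.4, 0.5]        # vertex radii (assumed)
phi = [1.0, 1.2, 0.9]      # edge weights phi_01, phi_12, phi_20 (assumed)
l01 = edge_length(g[0], g[1], phi[0])
l12 = edge_length(g[1], g[2], phi[1])
l20 = edge_length(g[2], g[0], phi[2])
print(corner_angle(l01, l20, l12))  # corner angle at vertex 0
```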
2015-09-02 15:22:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7853924036026001, "perplexity": 808.839315349835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645265792.74/warc/CC-MAIN-20150827031425-00241-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.quantumofgravity.com/blog/tag/hemmingway/
# Some things … cannot be learned quickly There are some things which cannot be learned quickly, and time, which is all we have, must be paid heavily for their acquiring. They are the very simplest things, and because it takes a man’s life to know them the little new that each man gets from life is very costly and the only heritage he has to leave. – Ernest Hemingway (From A. E. Hotchner, Papa Hemingway, Random House, NY, 1966)
2020-04-10 13:42:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.4268440902233124, "perplexity": 9129.742813767554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371896913.98/warc/CC-MAIN-20200410110538-20200410141038-00492.warc.gz"}
https://rpg.stackexchange.com/questions/183906/evaluation-of-homebrew-powerattack-feat-for-one-handed-attacks
# Evaluation of homebrew powerattack feat for one-handed attacks

All questions of flavor aside, how balanced do you think this feat is compared to Great Weapon Master and Sharpshooter?

Adept Weapon User (AWU)

• Before you make a melee attack using the Attack action with either a weapon that does not have the heavy property that you are proficient with, an unarmed strike, or an improvised weapon that you are proficient with, you can choose to take a -5 penalty to the attack roll. If the attack hits, you add +10 to the attack's damage.

• When you score a critical hit with a melee attack or reduce a creature to 0 hit points with one, you have advantage on the next attack you make until the end of your next turn.

The idea behind the design is simply to give all martial classes access to a powerattack feat and to bring dual wielding, one-handed weapons and monks DPR-wise in line with two-handed and ranged weapons. I deliberately chose to make the power attack only applicable to the Attack action, because otherwise Two-Weapon Fighting and Flurry of Blows would become too strong. Furthermore, I wanted to avoid making two-handed weapons strictly worse than sword and board (or worse, polearm master spear and shield), and so this powerattack cannot be used with opportunity attacks, the Ready action, bonus action attacks, or attacks from magic items (e.g. Scimitar of Speed, Dancing Sword). Additionally, the secondary feature of the feat is worse than that of Great Weapon Master. I also avoided clogging the bonus action, because monks and dual wielders already use this.

I have made some rough DPR calculations, but this is anything but rigorous. The biggest hurdle in a comparison is probably the fact that a player can choose to not use powerattack if the target's AC is too high (or its HP too low). One would need to have a distribution of AC targets and weight the values for each AC accordingly. To make this simpler, I have assumed a baseline hit chance of 65%, which is reduced to 40% when power attacking (the GWM and AWU builds in the first comparison have +3 STR instead of +4, so their baseline there is 60%, or 35% when power attacking).

Comparing one hit of a two-handed fighter (+4 STR, Defense Fighting Style), a GWM fighter (+3 STR, Defense Fighting Style) to a baseline sword-and-board fighter (+4 STR and Dueling) and one with the new feat (+3 STR and Dueling):

2H-base: (2d6 + 4) * 0.65 = 7.150
2H-GWM: (2d6 + 13) * 0.35 = 7.000
1H-base: (1d8 + 6) * 0.65 = 6.825
1H-AWU: (1d8 + 15) * 0.35 = 6.825 (this is a coincidence)

Note that these numbers do not reflect the powerattacker's option to not powerattack. When the characters are high enough to get a +1 weapon and max out their STR:

2H-base: (2d6 + 6) * 0.70 = 9.100
2H-GWM: (2d6 + 16) * 0.45 = 10.350
1H-base: (1d8 + 8) * 0.70 = 8.750
1H-AWU: (1d8 + 18) * 0.45 = 10.125

Essentially, in this comparison an AWU-Fighter does slightly less damage than a GWM-Fighter per hit and has no additional chance for a bonus attack, but has +1 AC. Dual wielding is harder to calculate, because the number of mainhand attacks also plays a role.

Any thoughts? The only loophole I can find is the Path of the Beast Barbarian's Claw attack, but even that is not excessively strong, in my opinion. Another downside of the current design is that the secondary effect is fairly weak for dual-wielding or sword and board barbarians. Technically speaking it is possible to use both AWU and SS together when using a non-heavy ranged weapon as improvised weapon, similar to GWM and SS with a heavy ranged weapon (at least for a certain interpretation of the rules). I do not see this as a problem.
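For reference, the per-hit figures above can be reproduced with a tiny script (illustrative only; it uses average rolls of 7 for 2d6 and 4.5 for 1d8, and the flat bonus already includes Dueling where relevant):

```python
# Reproduce the per-hit expected damage numbers from the question.
def dpr_per_hit(avg_weapon, flat_bonus, hit_chance, power_attack=False):
    """Expected damage of one attack; the power attack trades -5 to hit for +10 damage."""
    if power_attack:
        flat_bonus += 10
        hit_chance -= 0.25        # -5 on a d20 is a 25 percentage-point penalty
    return (avg_weapon + flat_bonus) * hit_chance

print(dpr_per_hit(7.0, 4, 0.65))                     # 2H-base -> 7.15
print(dpr_per_hit(7.0, 3, 0.60, power_attack=True))  # 2H-GWM  -> 7.00
print(dpr_per_hit(4.5, 6, 0.65))                     # 1H-base -> 6.825
print(dpr_per_hit(4.5, 5, 0.60, power_attack=True))  # 1H-AWU  -> 6.825
```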
• Have you stopped to consider that one handed weapon wielders shouldn't be doing damage comparable to someone with a two handed weapon? A sword and board character has chosen to forgo damage output for a higher AC. That's part of the game balance. – Steve Apr 14 at 1:50 • Yes, I have. That's why I said "all questions of flavor aside" and deliberately made the feat weaker than GWM. The numbers I presented do not reflect reality well enough to show how much better GWM DPR is than sword and board (because you can choose to not powerattack), and without magic items the difference in AC is +1 (or +2). I just wanted to close the gap a little bit, but not fully. Two-handed weapon users should still do more damage. – StefanS Apr 14 at 7:26 • As written, this can be used with Haste. Haste allows a second Attack action (1 attack only). – Dale M Apr 14 at 13:32 • You are right, I'll edit this. Do you think this is a problem? I fear the mechanic is clunky enough already... – StefanS Apr 15 at 10:09 • Welcome to RPG.SE! Take the tour if you haven't already, and check out the help center for more guidance. – V2Blast Apr 17 at 0:29
2021-08-01 03:46:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4106920659542084, "perplexity": 3016.457722463322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00047.warc.gz"}
http://tex.stackexchange.com/tags/typography/hot
# Tag Info

## Hot answers tagged typography

7 The normal interword space for the current font is available as \fontdimen2\font You're mistaken in considering \hspace* as “non breaking space”: it's a “non disappearing space”. Here are two pretty similar definitions for your \Dash: \documentclass{article} \newcommand\Dash{% \leavevmode \unskip\nobreak\hspace{.5\fontdimen2\font}% \textemdash ...

7 I don’t know whether this is a group or a solo effort, but there are resources for typesetting Project Gutenberg books at www.pmonta.com/etext/, which links to the resources at www.sandroid.org/GutenMark/. And, as michal-h21 commented, Project Gutenberg itself often presents mathematical and scientific works as LaTeX files with the PDF output (e.g., Euclid). ...

6 There can be differences: consider \documentclass{article} \begin{document} $\det+\det\in\mathbf{VP}$ ${\det}+{\det}\in\mathbf{VP}$ \end{document} In the first case the spacing is wrong, because the + is interpreted as an ordinary symbol, because it doesn't make sense between two operators. In the second case the spacing is the same as if we said ...

5 Your question isn't very clear and lacks an example document, but this shows four different settings of the text, standard justified, sloppy, ragged right and RaggedRight from the ragged2e package. As the text is so short it doesn't really show the differences well so I repeat the settings with a longer paragraph with the text repeated. It still doesn't ...

5 This answer will NOT wrap. Nonetheless, proceeding... Here, I introduce \nunderline[]{}{}. The optional argument is the under-level for rule placement (relative to the prior placement). The first argument is the text, and the second argument is the color. The rule thickness is set with \rulethick and the relative spacing with \lunderset. Nesting is used ...

4 4 use the ngerman shorthands, then you can use "" to define a break point without a hyphen: \documentclass[ngerman,english]{article} \usepackage{babel} \useshorthands{"} \addto\extrasenglish{\languageshorthands{ngerman}} \begin{document} I have a name (blabla\_rest) that contains \_ sign. What is the correct way to break it? blabla\_""(new line)rest ...

3 We can exploit the fact that \em is a switch: \documentclass[a4paper, 12pt,twoside, openright]{memoir} \usepackage{polyglossia} \setdefaultlanguage{french} \setotherlanguage{english} \usepackage{fontspec} \usepackage[autostyle=true,french=guillemets,maxlevel=3]{csquotes} \DeclareQuoteStyle{english} {\mkfrenchopenquote{\guillemotleft}\em} ...

2 How about this? It creates a new (cs-)quote style named fquotes and a corresponding command named \myfquote. (I don't know if one can define csquote command immediately, however.) \documentclass{article} \usepackage{csquotes} \DeclareQuoteStyle{fquotes} {\em}{}{\em}{} \newcommand*{\myfquote}[1]{% \begingroup% \setquotestyle{fquotes}% ...

2 A most simple way to produce a latex looking doc in word is simplified into steps. Use Century font Refer an original latex document(e: if you are working on a project report, then collect a latex made project report)--- and then type the words exactly in same place as in latex doc... with almost same font size, orientation , position, margin etc. Save it ...

2 Here's a completely different approach that DOES allow line-wrapping, but not paragraph or page breaks. It is also forced to turn off hyphenation.
It uses the censor package to create the underlining (by setting \censorruleheight and \censorruledepth), and it uses a \Longunderstack to stack the different threads at the proper spacing (based on result at ... 1 As long as you don't need mathematical typesetting, you actually can find better than TeX with Heirloom Documentation Tools. Not only does it provide Knuth's algorithm for formatting paragraphs; it also allows to compute spacing by mixing three systems (interletters spacing, interwords spacing, imperceptible change in the shapes of the glyphs). Thus you can ... 1 As egreg says, it can't be done in a generalized way. And in this answer, I don't have merriweather font, so I just demonstrate the technique (and exaggerate the effect) on the standard CM font. One can locally achieve what you ask by making - active; however, that breaks many, many things, because it means the negative sign has been redefined. So the ... 1 You could make the _ character active and define it to be \textunderscore in text mode, with a possible line break point after it, while keeping it to be the subscript character in math mode: \documentclass{article} \begingroup\lccode~=_ \lowercase{\endgroup \protected\def~{% \ifmmode \sb \else \nolinebreak\hspace{0pt}% allow ... Only top voted, non community-wiki answers of a minimum length are eligible
2014-04-16 11:08:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9415649771690369, "perplexity": 4242.0254542428065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
http://inference-review.com/article/equal-by-catastrophe
In his new book, Walter Scheidel offers a simple, though jarring, story of how past societies struggled with inequality. The Great Leveler is a cautionary tale to policy makers who believe in economic redistribution as a means to level the playing field. A professor of ancient history at Stanford University, Scheidel’s thesis unfolds in the following way: economic inequality is always terrible and disruptive. We should be worried that vastly disproportionate increases in wealth in twenty-first-century America approach something analogous to the high Roman empire or the unchecked roaring twenties in the United States. According to Scheidel, inequality is worse than the Gini coefficient might suggest, given the difficulty in measuring the demographic, sociological, and technological factors affecting the distribution of wealth.1 Scheidel argues that history offers few peaceful antidotes to the accumulation of property, money, and leverage in the hands of the few. “For thousands of years,” Scheidel observes, “civilization did not lend itself to peaceful equalization. Across a wide range of societies and different levels of development, stability favored economic inequality.”2 Civilization is the culprit; its absence, the cure. Scheidel wrote the book as a warning to progressives: “If we seek to rebalance the current distribution of income and wealth in favor of greater equality, we cannot simply close our eyes to what it took to accomplish this goal in the past.”3 Scheidel notes that large estates and monopolies, and the wealthy classes that control them, collapsed only during relatively rare times of chaos. The “massive and violent disruptions of the established order” encompass the four horsemen of the redistributive apocalypse, which Scheidel collectively describes as the “the Great Leveler.”4 The first horseman embodies existential conflicts, and specifically the conflagrations of the twentieth century’s two world wars, which collectively killed more than eighty million civilians and soldiers and reduced much of Europe and Asia to rubble. Workers, given an obvious manpower shortage, briefly gained leverage over capital. Elite powers of accumulation were temporarily interrupted. War did through carnage what peace never achieved under the square, new, and fair deals of Theodore Roosevelt, Franklin Roosevelt, and Harry Truman. The poor and working classes glimpsed, at least for a moment, a great leveling of wealth, albeit by bringing the rich down rather than lifting the poor up. The Europe of 1945 was a far more egalitarian place than in 1935; 1920 had also seen more equality than 1913. In Scheidel’s view, there is a direct current in human affairs leading from what is bad to what is good. Even Hitler’s Wehrmacht, which conquered the continent from the English Channel to the Volga, and the forces needed to defeat it played a role in dismantling entrenched wealth. By the same reductive reasoning, are we to imagine that the mass confiscation of Jewish wealth during 1939–1945 fell under the rubric of wealth redistribution? Ian Morris, Scheidel’s colleague at Stanford, in his recent book War: What Is It Good For? has also argued for the utility of war.5 The bigger the war, the better things become in the long run. War gives the state the power to make their citizens’ lives safer and richer. This is a thesis as counterintuitive as anything found in quantum mechanics. George Orwell’s 1984 argues against just such themes. 
In the aftermath of a catastrophic British civil war following World War II, three governments have divided up the world. The threat of perpetual conflict brings about a relative form of enforced equality. Orwell saw nothing encouraging in either the consolidation of states through conflict or the achievement of equality brought about through the disasters of past wars. Scheidel’s second horseman may be seen galloping after the first: violent revolution on the scale of Lenin, Stalin, and Mao. In the more lawful Western democracies, strikes, protests, or periodic riots never proved prolonged or forceful enough to erode the plutocracy and redistribute its money. This proved true of the Wobblies, Red Brigades, the Los Angeles riots, and Occupy Wall Street. In narrow terms of revolutionary equalization, Scheidel notes, the more death, the better. But he is reluctant emphatically to acknowledge that revolutionary equalization succeeds only by impoverishing the vast majority of the population. Twentieth-century Russia and China are cases in point. Take also his summary note on Cuba: Development in Cuba has followed the same pattern: after the market income Gini dropped from 0.55 or 0.57 in 1959, the year of the communist revolution, to 0.22 in 1986, it appears to have risen to 0.41 in 1999 and 0.42 in 2004, although one estimate put it already as high as 0.55 by 1995. In the majority of these cases, nominally communist regimes remain in power, but economic liberalization has driven up inequality.6 Scheidel slides over the reality that committed communists, such as the Castro brothers and the successors to Mao, were no fans of liberalization. They turned to it in desperation, and only after failed releveling attempts. Might this suggest that the greater the presence of free markets, the greater the general well-being of the population? “Whether communism’s sacrifice,” Scheidel writes, “of a hundred million lives bought anything of value is well beyond the scope of this study to contemplate.” This seems odd given his efforts to quantify egalitarianism, assess the cost of achieving it, and emphasize the steep price exacted by the levelers of war, revolution, plague, and collapse. These are the very themes of his book.7 The third horseman is the implosion and utter ruin of the state. Collapse on this scale brings an end to law and its protocols. For a rare moment, class becomes fluid. The rather mysterious and abrupt end of the hierarchical Mycenaean world in the thirteenth century BCE led to depopulation. The impoverished tribes and herdsmen of Dark Age Greece, if they were equal in their impoverishment were, at least, equal in their misery. A few traces of this diminished world survive in Homer’s Odyssey and Hesiod’s Works and Days. In the fifth century CE, the collapse of the Roman Empire, for all its disastrous consequences, did away with a great deal of accumulated wealth. The ensuing void ruined the entrenched commercial and aristocratic classes in the western empire. The chaos of this second Dark Age left everyone poorer, and in Scheidel’s ironic sense, more equal. And the more hierarchical a former society, whether Mycenaean or Roman, the greater the impoverishment of the Dark Ages that follow. Scheidel’s fourth horseman of equality rides on the back of disease. Plagues and pandemics lay waste both to governments and the social and economic networks upon which privileged elites rely. Rules, regulations, and insider ways of making and holding onto wealth perish with them. 
For Scheidel, this brutal shake-up allows the survivors of epidemics to recalibrate economic roles until things settle down. In time, the elites naturally regroup and inequality returns. In Scheidel’s schema, the rich are an insidious lot, like toadstools that sprout on the lawn after rain. After every upheaval, the economic elite spontaneously reappears. The great plagues at Athens and Constantinople were natural levelers. Justinian I’s ambitious building program and his systematic efforts to refashion Byzantine law were short-circuited by a disastrous outbreak of the bubonic plague in the sixth century. Forty percent of Constantinople’s population and perhaps a quarter of the people in the Eastern empire perished. The poor gained from the spoils of empire and lost from the ensuing pandemics. The perennial crisis of the Byzantine Empire was not so much inequality as depopulation—one of the supposed cures of the former. Classical catastrophes were inferior only to the fourteenth-century Black Death that killed at least a quarter of Europe’s population. Food became scarce, labor difficult to obtain, and many political institutions broke down. The plague hit the elites hard. One wonders how Scheidel would sort out these cumulative forces of war and plague, given that they qualify his notion of equality by defining it as omnipresent and shared poverty.8 Scheidel has given us the scholarly version of what Hollywood has expected its audiences intuitively to grasp. After aliens invade, nuclear war erupts, or a deadly virus escapes the lab, rules no longer apply amid the detritus. This benefits those with nothing to lose. Hollywood fleshes out themes that Scheidel implies, but does not elaborate upon: mass violence and death reduce life to the survival of the fittest. Muscular strength, practical know-how, familiarity with weapons, and animal cunning acquire a premium they would not otherwise possess. Lest the reader become uneasy with these macabre remedies for inequality, Scheidel confesses at the outset that he finds both inequality and its cures unpalatable. Would that history could offer hope that, in times of stability, progressive leaders might enact more schemes for redistribution. One of Scheidel’s most sobering observations is that democracies have not been very successful in addressing inequality.9 Scheidel’s concluding peroration about the excess accumulation of wealth can be read as a plea for enlightened redistribution. His calls for new and innovative methods, as he admits, have no real precedents in the past. Otherwise the four grim horsemen will do it for us: “If history is anything to go by, peaceful policy reform may well prove unequal to the growing challenges ahead.”10 Scheidel’s argument depends on a number of propositions that he does not fully explore. Economic inequality is always relative, not absolute. Scheidel assumes that throughout history disproportionate wealth leads to permanent class envy and inevitable strife. In contemporary terms, the American poor resent not just their poverty, but feel insult added to their injury in seeing others who have far more. Really? The story of the twenty-first century may be that high-tech breakthroughs, the inclusion of three billion workers from the former Third World into the global capitalist labor force, and the internet have redefined poverty and wealth. Does history provide no non-violent curatives for economic inequality? From 500 to 300 BCE, the Greek city-state supported a substantial class of small landholders. 
The Roman Republic had a similar agrarian patchwork. There are many such examples of agrarian equalitarianism in the ancient world. The emergence of some 1,500 Greek city-states from the depopulated Dark Age was marked by the rise of hoi mesoi, whose egalitarianism was the catalyst for constitutionally enshrined property rights. The Greek city-state found a number of ways in which to keep wealth roughly even. These included imposing burdensome religious rites on the rich, funding generous entitlements, and limiting the size of farms. These remedies were not necessarily dependent on wars, plagues, revolution, or the collapse of the state. The ancients had a rough idea that wealthy people could be dangerous. “Anyone for whom seven acres are not enough,” Pliny the Elder remarked, “is a dangerous citizen.”11 Their riches resulted in inordinate political power and control. As an antidote, the ancient Greek hoi mesoi are present throughout classical literature as champions of roughly equal property holding, a point of view commonly attributed to early lawgivers and statesmen like Phaleas, Philolaus, and Solon. There is not really any archeological, epigraphic, or literary evidence that many farms in Attica, even at the height of the Athenian imperial boom, were larger than one hundred acres. This contradicts somewhat Scheidel’s ominous generalization that “[f]or most of the agrarian period the state enriched the few at the expense of the many.”12 Scheidel seems to miss the pan-Hellenic nature of landed egalitarianism evident in Aristotle and clear from the archaeological evidence throughout the Aegean and Black Seas. Scheidel is right to note of Athens in the fifth and fourth centuries BCE that, “direct democracy and a culture of military mass mobilization… helped contain economic inequality.”13 He then overstates the case that this is the “only reasonably well documented” exception, given that the Greek evidence suggests non-Athenian broad-based timocracies were quite successful until the advent of the Hellenistic age. Horse-raising and meat production, which required an inordinate amount of land, were reflections of inequality—which perhaps helps to explain why cavalry was never a dominant arm of most Greek armies or Roman Republican legions. It also may well have been true that some non-Athenian and non-democratic Greek city-states had even lower property qualifications than classical Athens.14 Given that somewhere between eighty to ninety percent of the preindustrial population was probably connected to agriculture, it is understandable that Scheidel would remark that there “can be no doubt that most of the inequality we observe in the following millennia was made possible by farming.”15 The transition from depopulated nomadism and tribalism to prosperous civilization was, of course, also “made possible by farming.” Annual income or net worth itself, as Scheidel reminds us, is not a holistic method of measuring inequality. Forty-five percent of eligible American taxpayers pay no federal income tax; the top one percent of tax returns account for almost forty percent of all income tax federal revenue. In California, one percent of its population of forty million pays forty-five percent of all state income tax revenue.16 Entitlement spending, pensions, and social services are at all-time highs as percentages of federal budget outlays. Prices for everything from televisions to tennis shoes have declined in real dollars over the last thirty years. 
Fuels are relatively cheap, as fracking and horizontal drilling force down energy costs. Millions of Americans who occasionally turn up in Scheidel’s tables and graphs as suffering from income inequality nonetheless have access to an abundance of material goods once considered the sole domain of the wealthy.17 What drives inequality during civilization’s calm? Scheidel describes rather than analyses the relative roles of luck, health, inheritance, education, or talent in wealth creation. Ancient lawgivers assumed that periodic adjustments in land tenure were necessary given that quite different innate abilities, pure chance, and the varied interests of the citizenry would always eventually kick in to distort their best laid plans to force people to remain about the same. But, the luck of the draw, or natural differences in talent, in many ways confounded the most ingenious remedies of the Gracchi brothers in the second century BCE, and even the New Dealers of the administration of Franklin Roosevelt. Inequality seems often to rest on paradoxes. Near zero interest rates favor investments that make a few rich and more poor. Higher taxes often discourage investors and entrepreneurs while encouraging talented money-makers to hoard and hide assets rather than to risk them. At some point, expanding entitlements can encourage consumption and dependence rather than industry and thrift. One reason why the United States claims that it is exceptional in producing vast goods and services is its cultural attitude to wealth creation and inequality, emphasizing emulation as much as envy. In popular American lore, the owner of a Chevrolet is the owner of a Cadillac in prospect.18 The transition from local to global markets, Scheidel notes, is often disruptive. The ability of a Corinthian farmer to profit from selling raisins in 440 BCE was vastly increased by 200 BCE, when the Hellenistic world connected parts of Greece to the wealth of north Africa, the Middle East, and the greater Balkans. What in Roman times explains the depopulation of the Greek countryside was not just war but the increasingly corporate character of agriculture, which was dominated by large landholdings, slave labor, and the export of luxury products. The result was an enormous inequality in wealth, as Scheidel demonstrates in an especially striking table.19 Herodes Atticus lived in the second century CE and prospered as both an Athenian aristocrat and Roman senator. Herodes may have owned a great deal of the ancient deme of Marathon—the former haunt of hundreds of hoplite small farmers. What made his wealth possible was the Pax Romana that redefined classical economies as Mediterranean in scope. Such new men got fabulously rich by trading from Sicily to the Alps, Gibraltar to the Persian Gulf, and from the Rhine to the Atlas Mountains. And why not? They now had seventy million Roman consumers in connected markets to exploit. Five centuries earlier, the Greeks had only four million. When postclassical observers of the Roman age, such as Pausanias, Polybius, and Strabo, remarked that large parts of Greece, especially Arcadia, Boeotia, and Thessaly, had become depopulated, with swaths of land under single ownership, they offered the common exegesis that peace and quiet, not war and disease, were at fault. 
Scheidel would appreciate the famous quote of Polybius, “In our times some cities have become deserted and agricultural production has declined, although neither wars nor epidemics were taking place continuously.”20 Throughout his book, Scheidel takes pains to emphasize his tragic view of inequality, and repeatedly notes that he does not like the very solutions that he has found: “All of us who prize greater economic equality would do well to remember that with the rarest of exceptions, it was only ever brought forth in sorrow. Be careful what you wish for.”21 For all Scheidel’s good sense and prodigious research, he still does not manage to reconcile his assumption that economic equality is a very good thing with his demonstration that it occurs only under the most barbaric of conditions. Why, then, should it remain a desirable goal? If the human desire for equality is embedded within chaos, perhaps there is something wrong with mere envy as a social force? Equality through catastrophe makes the wealthy poor, but it does not necessarily make the poor wealthy. Scheidel’s book contains 456 pages, and 54 complex tables, graphs, charts, and figures. He combines an impressive array of examples from the ancient Middle East to contemporary China, and draws on the theoretical work of Anthony B. Atkinson, Branko Milanović, and Thomas Piketty concerning the pernicious nature of inequality. Yet for all that industry, the prose and arguments of The Great Leveler are not easy to follow, and are a world away from the lively writing and engaging narratives of popular classical historians such as M.I. Finley, Adrian Goldsworthy, and Peter Green. 1. On the use of the Gini coefficient, see Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 11–4. 2. Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 6. 3. Ibid., 22. 4. Ibid., 443. 5. Ian Morris, War: What is it Good For?: The Role of Conflict in Civilization, from Primates to Robots (New York: Farrar, Straus and Giroux, 2014); cf. the review of Victor Davis Hanson, “The Aztec Road,” Times Literary Supplement, September 24, 2014. 6. Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 254. 7. Ibid., 254. 8. For the strategic effect of the plague on Byzantine resources and strategies, see Edward Luttwak, The Grand Strategy of the Byzantine Empire (Cambridge, MA: Harvard University Press, 2009), 86–94; on the losses themselves, William Rosen, Justinian’s Flea: Plague, Empire, and the Birth of Europe (New York: Viking, 2007), 3–4. 9. In an anecdotal and casual sense, we can see that observation born out near the Stanford University campus, where the greatest accumulation of wealth in the history of civilization has occurred in the last decade in the surrounding Silicon Valley—at a time when the most progressive democratically elected administration in recent history was not able to remedy declining real incomes of the middle classes or increasing numbers of the poor reliant on government entitlements. 10. Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 444. 11. Pliny, Natural History, 18.4. 12. 
Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 5. On early Greek philosophical support for equal property regimes and the egalitarian ethos in founding Greek colonies, see Victor Davis Hanson, The Other Greeks: The Family Farm and the Agrarian Roots of Western Civilization (Berkeley: University of California Press, 1999), 190–4. For some ancient references to the often obscure Phaleas, Philolaus, and Solon, see Aristotle, Politics, 2.1266a40-1266b6; 2.127a23-30; 2.1266b14-20; 5.1307a29-31; 6.1319a6-10; cf. Solon, Fragments, 5, 24; and Plutarch, Solon, 14–6. On the likelihood of small properties at Athens, see two often overlooked classic essays: G.E.M. de Ste. Croix, “The Estate of Phaenippos (Ps. Demosthenes XLII),” in Ancient Society and Institutions: Studies Presented to Victor Ehrenberg on His 75th Birthday, ed. Ernst Badian (Oxford: Blackwell, 1966), 100–14; M. H. Jameson, “Agricultural Labor in Ancient Greece,” in Agriculture in Ancient Greece: Proceedings of the Seventh International Symposium at the Swedish Institute of Athens, 16-17 May 1990, ed. Berit Wells (Stockholm: P. Åströms, 1992), 135–46. 13. Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 84. 14. Ibid., 84. Compare, for example, the property qualification in the cities of the oligarchic Boeotian Confederation (Victor Davis Hanson, Other Greeks: The Family Farm and the Agrarian Roots of Western Civilization (Berkeley: University of California Press, 1999), 207–8). My own small farm is a result of my great-great-grandmother’s efforts to come west from Missouri on promises of the nineteenth-century federal Homestead Act. Land-grant colleges, granges, and credit associations until the rise of modern corporate farming in the 1950s were quite effective in promoting an agrarianism that has now for the most part vanished from the United States. 15. Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 36. 16. On contemporary American income tax data, see variously: Tax Foundation, “Summary of Latest Federal Income Tax Data,” December 22, 2014; Catey Hill, “45% of Americans Pay No Federal Income Tax,” MarketWatch, April 18, 2016; Jim Miller, “The Tax Man Cometh, and California Rich—Getting Richer—Pay Most,” The Sacramento Bee, April 14, 2016. 17. My used $20,000 Honda seems to have all the appurtenances (e.g., air conditioning, stereo, electric seats) of a one-percenter’s new$100,000 Mercedes sports car that I recently parked next to in the Stanford University parking lot, and is certainly superior mechanically to the car of any Goldman Sachs buccaneer of the 1990s. 18. Scheidel does not argue for particular political agendas, but his emphasis on war as the more effective leveler has some commonalities with skeptics of the New Deal, who attribute widescale ensuing prosperity and the rise of the consumer middle class more to World War II than to prior government social engineering; see for example, Burton Folsom, Jr., New Deal or Raw Deal?: How FDR’s Economic Legacy Has Damaged America (New York: Threshold Editions, 2009); Amity Shlaes, The Forgotten Man: A New History of the Great Depression (New York: Harper Collins, 2007). 19. 
Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 71–2. 20. Polybius, 36.17.5. 21. Walter Scheidel, The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century (Princeton: Princeton University Press, 2017), 444.
https://mammothmemory.net/maths/algebra/powers-roots-and-indices/further-roots.html
# Further roots

## Further roots to explain $x^{a/b}$

Always split the power into a root and a power:

$x^{a/b} = x^{a \cdot \frac{1}{b}} = (x^a)^{1/b} = \sqrt[b]{x^a}$

Or

$x^{a/b} = x^{\frac{1}{b} \cdot a} = (x^{1/b})^a = (\sqrt[b]{x})^a$

Just remember:

Example 1

What is $10^{2/10}$?

$10^{2/10} = (\sqrt[10]{10})^2$, which reads as: work out what we multiply by itself 10 times to get 10, then multiply the answer by itself 2 times.

In this example, the number we multiply by itself 10 times to get 10 is $10^{1/10} \approx 1.259$ (to find this, see logarithms). Then multiply 1.259 by itself 2 times: $1.259 \times 1.259 \approx 1.585$. So $10^{2/10} \approx 1.585$.

Example 2

Break any fraction up and use simple numbers: $8^{2/3} = 8^{2 \times \frac{1}{3}}$.

This can be rewritten as $8^{2 \times \frac{1}{3}} = (8^2)^{1/3} = 64^{1/3} = \sqrt[3]{64} = 4$. This is worded as: multiply 8 by itself 2 times, then work out what we multiply by itself 3 times to get this answer. The answer is 4.

Or (working the other way) $8^{\frac{1}{3} \times 2} = (8^{1/3})^2 = (\sqrt[3]{8})^2 = 2^2 = 4$. This is worded as: work out what we multiply by itself 3 times to get 8, then multiply the answer by itself 2 times.

So $8^{2/3} = (\sqrt[3]{8})^2$.

Example 3

Simplify $\sqrt[4]{x^4}$. Since $\sqrt[4]{x^4}$ is the same as $x^{4/4} = x^1$, the simplification of $\sqrt[4]{x^4}$ is $x$.

Example 4

What is $27^{4/3}$?

$27^{4/3} = 27^{4 \times \frac{1}{3}} = \sqrt[3]{27^4} = \sqrt[3]{531441} = 81$

Or we could do $27^{4/3} = 27^{\frac{1}{3} \times 4} = (\sqrt[3]{27})^4 = 3^4 = 81$.

The answer is the same, but in this case the second calculation was much easier.
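The root-then-power and power-then-root orderings can also be checked numerically. Below is a small Python sketch (not part of the original page; the function names are illustrative only) that evaluates $x^{a/b}$ both ways for the examples above:

```python
# Two equivalent ways of evaluating x^(a/b):
# take the b-th root first, or raise to the a-th power first.

def frac_power_root_first(x, a, b):
    """Take the b-th root of x, then raise the result to the a-th power."""
    return (x ** (1.0 / b)) ** a

def frac_power_power_first(x, a, b):
    """Raise x to the a-th power, then take the b-th root."""
    return (x ** a) ** (1.0 / b)

for x, a, b in [(10, 2, 10), (8, 2, 3), (27, 4, 3)]:
    r1 = frac_power_root_first(x, a, b)
    r2 = frac_power_power_first(x, a, b)
    print(f"{x}^({a}/{b}) = {r1:.4f} (root first) = {r2:.4f} (power first)")
```

Running this prints approximately 1.5849, 4.0000 and 81.0000, matching Examples 1, 2 and 4.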
https://rdrr.io/cran/maotai/man/ecdfdist.html
# ecdfdist: Distance Measures between Multiple Empirical Cumulative... In maotai: Tools for Matrix Algebra, Optimization and Inference

## Description

We measure distance between two empirical cumulative distribution functions (ECDF). For simplicity, we only take an input of ecdf objects from the stats package.

## Usage

ecdfdist(elist, method = c("KS", "Lp", "Wasserstein"), p = 2, as.dist = FALSE)

## Arguments

elist: a length-N list of ecdf objects.
method: name of the distance/dissimilarity measure. Case insensitive.
p: exponent for the Lp or Wasserstein distance.
as.dist: a logical; TRUE to return a dist object, FALSE to return an (N × N) symmetric matrix of pairwise distances.

## Value

Either a dist object or an (N × N) symmetric matrix of pairwise distances, according to the as.dist argument.

## See Also

ecdf

## Examples

## toy example : 10 of random and uniform distributions
mylist = list()
for (i in 1:10){
  mylist[[i]] = stats::ecdf(stats::rnorm(50, sd=2))
}
for (i in 11:20){
  mylist[[i]] = stats::ecdf(stats::runif(50, min=-5))
}

## compute Kolmogorov-Smirnov distance
dm = ecdfdist(mylist, method="KS")

## visualize
mks = "KS distances of 2 Types"
opar = par(no.readonly=TRUE)
par(pty="s")
image(dm[,nrow(dm):1], axes=FALSE, main=mks)
par(opar)
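For readers working outside R, a rough Python analogue of the pairwise Kolmogorov-Smirnov computation can be sketched with scipy. This is an illustration only and is not part of the maotai package; it works directly on raw samples rather than ecdf objects, and the helper name is made up:

```python
# Sketch: pairwise two-sample KS statistics between samples,
# loosely analogous to ecdfdist(method = "KS").
import numpy as np
from scipy.stats import ks_2samp

def pairwise_ks(samples):
    """Return an (N x N) symmetric matrix of KS statistics between samples."""
    n = len(samples)
    dm = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            stat = ks_2samp(samples[i], samples[j]).statistic
            dm[i, j] = dm[j, i] = stat
    return dm

rng = np.random.default_rng(0)
samples = [rng.normal(scale=2, size=50) for _ in range(10)] + \
          [rng.uniform(low=-5, high=1, size=50) for _ in range(10)]
dm = pairwise_ks(samples)
print(dm.shape)  # (20, 20)
```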
http://www.mathjournals.org/jot/2006-056-002/2006-056-002-001.html
# Journal of Operator Theory

Volume 56, Issue 2, Fall 2006, pp. 225-247.

$C^*$-algebras associated with self-similar sets

Authors: Tsuyoshi Kajiwara (1) and Yasuo Watatani (2)

Author institution: (1) Department of Environmental and Mathematical Sciences, Okayama University, Tsushima, 700-8530, Japan (2) Department of Mathematical Sciences, Kyushu University, Hakozaki, Fukuoka, 812-8581, Japan

Summary:  Let $\gamma = (\gamma_1,\dots,\gamma_N)$, $N \geqslant 2$, be a system of proper contractions on a complete metric space. Then there exists a unique self-similar non-empty compact subset $K$. We consider the union ${\mathcal G} = \bigcup\limits_{i=1}^N \{(x,y) \in K^2 ; x = \gamma _i(y)\}$ of the cographs of $\gamma _i$. Then $X = C({\mathcal G})$ is a Hilbert bimodule over $A = C(K)$. We associate a $C^*$-algebra ${\mathcal O}_{\gamma}(K)$ with them as a Cuntz-Pimsner algebra ${\mathcal O}_X$. We show that if a system of proper contractions satisfies the open set condition in $K$, then the $C^*$-algebra ${\mathcal O}_{\gamma}(K)$ is simple, purely infinite and, in general, not isomorphic to a Cuntz algebra.
http://electronics.stackexchange.com/questions/32905/how-big-a-magnet-do-i-need-to-wipe-my-portable-hdd
How big a magnet do I need to wipe my portable HDD? According to this article, I need a really big magnet to wipe the data off my HDD. Thing is, I don't know how strong a "laboratory degausser" is, so I'll ask it in my own terms: I have a pair of magnets that would crush my fingers if I let them close on my hand. Is that likely to be strong enough to wipe the data on my damaged portable HDD before I let the repair guys fix the dinky connector between casing and drive proper? It's a sleek WD Passport that I can't open up myself without ruining the aesthetics. Don't care if they give me a new one, do care if someone gets to poke around the data in the old. - You'll ruin the drive entirely if you manage to degauss it. If you want to destroy the data but keep the drive usable, do a single pass zero-wipe with DBAN or something similar. –  insta May 29 '12 at 22:47 What have you tried? :-) –  kevlar1818 May 30 '12 at 12:25 If you break something internal, as long as it will be replaced, is not an issue, right? –  clabacchio May 30 '12 at 13:47 The answers and comments suggesting do a wipe or low-level format are relevant, but they assume the drive mechanism is still functional. If it was not, but repairable, then these answers become moot because the repair people could fix the mechanical problem and then have access to the data. –  tcrosley May 30 '12 at 19:00 If as tcrosley said above, your drive is not functional and you want to make sure that nobody would pick it up from trash bin and recover your private data, just lit a fire in a can, throw your harddrive in it. The data will be destroyed for sure. –  hkBattousai May 31 '12 at 18:53 show 1 more comment I doubt you'll be able to destroy your data with a magnet without opening the drive up. Inside of the drive are a pair of neodymium magnets (similarly strong to the ones you have, but just not as large), within centimeters of the platter. The drives are well shielded, so it is near (if not entirely) impossible to wipe them by waving a magnet around them. What article are you referring to? - Within centimeters of the platter? The drive, including casing, is most likely a few cm tall (a standard 3.5" drive is about 20 mm tall.) Millimeters sounds more like it, at most. –  Michael Kjörling May 30 '12 at 13:36 @Michael -- They are horizontally next to the platters, on the order of cm's, not above/below. Like this. –  DrFriedParts Apr 21 at 2:31 If someone will die or have their life ruined if the drive contents are discovered, you need a big enough magnet to pulverize the drive when forcefully dropped onto it. Multiple, judicious applications of a magnetized sledge-hammer should serve nicely. OTOH, if you just need to keep out the curious, a wipe and reformat should be more than adequate. - Yeah, if you really really want to remove the data, you don't need a magnet, you need a steel brush. –  Olin Lathrop May 30 '12 at 12:56 In addition to what Shamtam said about how waving a magnet around the outside of the case could be ineffective, it can do damage to other things, especially if you do this while power is applied. A strong but changing magnetic field (like that caused by waving a strong magnet) will induce current and voltage in anything conductive that happens to be at the right orientation. It doesn't take a lot of current or voltage in the wrong place to do bad things, like making a chip latch up. Then there is also purely mechanical damage. 
Some things in this device are going to be magnetic, and a magnetic field is going to put a force on them. A strong field may make such a strong force that things get bent, metal touches something conductive it isn't meant to etc. All around, this is a really bad idea. - If you want to erase a drive, I'd recommend that you run dd if=/dev/zero of=/dev/hdX. Not only would it be difficult to get powerful enough magnets, you would probably damage the drive itself by degaussing its firmware. You could also just delete all the partitions on it...I doubt the repair guys would go through that much trouble to find out what you have on that drive.
https://physics.stackexchange.com/questions/144599/guidance-needed-in-finding-scattering-amplitude
# Guidance needed in finding scattering amplitude If I have the Lagrangian $$\mathcal{L}=\bar{\psi}(i\gamma ^\mu \partial_\mu - m)\psi -g\bar{\psi}i\gamma^5\phi\psi,$$ where $g$ is a coupling constant. How to find the scattering amplitude for $$\phi \psi \to \phi \psi$$ What I only learned in class was electron-electron scattering and electron-proton scattering and can't seem to relate this case above to any of them. I ask only for guidance. Please and thank you! • "The" scattering amplitude does not exist. In general, one computes such amplitudes via Feynman diagrams. – ACuriousMind Nov 3 '14 at 16:09 • @ACuriousMind I am aware of that, I just dont know how to deal with this problem since I can't relate this case to the e-e scattering nor to the e- p+ scattering. Just how to start? – Fluctuations Nov 3 '14 at 16:11 • Aren't you missing kinetic terms for $\phi$ .? – Jerry Schirmer Nov 3 '14 at 20:33 • Well, without the kinetic term, then your interaction term becomes a constraint (if you vary with respect to $\phi$), or it has an undetermined external function (if you don't vary with respect to $\phi$) – Jerry Schirmer Nov 3 '14 at 21:46 • Your professor probably just wanted there to be an implied $\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi$ term. – Jerry Schirmer Nov 3 '14 at 22:03 Proceed as usual: 1. Derive (or find somewhere) the Feynman rules for this theory. 2. Draw the lowest-order diagrams contributing to the specific scattering process you are interested in 3. Evaluate them It should be even easier than in case of QED (I believe you studied electron-electron scatterings in QED) UPD: this is called the pseudo-scalar Yukawa theory. • Thanks for your answer. Does this theory have a name in order to check for the rules on line? – Fluctuations Nov 3 '14 at 16:15 • It is called pseudo-scalar Yukawa theory. Be aware that there is also a scalar Yukawa theory without the $\gamma^5$ matrix and it differs from yours. – Prof. Legolasov Nov 3 '14 at 16:16 • Would it differ if I added to the above lagrangian $$1/2\partial_\mu \phi \partial^\mu \phi - 1/2 m^2 \phi^2 -1/4! \lambda \phi^4$$ I mean would it affect my Feynmann rules for this. – Fluctuations Nov 3 '14 at 17:22 • Yes, a new type of propagating particle (in context of Yukawa interaction it is usually called a meson) would appear. The third term also adds the meson self-interaction vertex. – Prof. Legolasov Nov 3 '14 at 19:05
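For orientation, and not as part of the original thread, here is a hedged sketch of what step 3 of the accepted answer yields at lowest order. It assumes the scalar is given a canonical kinetic term $\frac{1}{2}\partial_\mu\phi\partial^\mu\phi$ so that it propagates, and labels the incoming (outgoing) fermion and scalar momenta $p, k$ ($p', k'$); these labels are mine, not the thread's. With the vertex factor $g\gamma^5$ read off from $-g\bar{\psi}i\gamma^5\phi\psi$, the two tree-level diagrams (s- and u-channel fermion exchange) give

$$i\mathcal{M} = \bar{u}(p')\left[(g\gamma^5)\,\frac{i(\slashed{p}+\slashed{k}+m)}{(p+k)^2-m^2}\,(g\gamma^5)+(g\gamma^5)\,\frac{i(\slashed{p}-\slashed{k}'+m)}{(p-k')^2-m^2}\,(g\gamma^5)\right]u(p),$$

with the $+i\epsilon$ prescriptions omitted. This is only meant to indicate the shape of the answer; the precise vertex factor and signs should be checked against the Feynman rules derived in step 1.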
https://proofwiki.org/wiki/Combination_Theorem_for_Continuous_Mappings/Topological_Group/Multiple_Rule
# Combination Theorem for Continuous Mappings/Topological Group/Multiple Rule

## Theorem

Let $\struct {S, \tau_{_S} }$ be a topological space. Let $\struct {G, *, \tau_{_G} }$ be a topological group. Let $\lambda \in G$.

Let $f: \struct {S, \tau_{_S} } \to \struct {G, \tau_{_G} }$ be a continuous mapping.

Let $\lambda * f: S \to G$ be the mapping defined by:

$\forall x \in S: \map {\paren {\lambda * f} } x = \lambda * \map f x$

Let $f * \lambda : S \to G$ be the mapping defined by:

$\forall x \in S: \map {\paren {f * \lambda} } x = \map f x * \lambda$

Then:

$\lambda * f: \struct {S, \tau_{_S} } \to \struct {G, \tau_{_G} }$ is a continuous mapping

$f * \lambda: \struct {S, \tau_{_S} } \to \struct {G, \tau_{_G} }$ is a continuous mapping.

## Proof

By definition, a topological group is a topological semigroup. Hence $\struct {G, *, \tau_{_G}}$ is a topological semigroup.

The corresponding Multiple Rule for continuous mappings into a topological semigroup therefore applies, and it follows that:

$\lambda * f, f * \lambda: \struct {S, \tau_{_S} } \to \struct {G, \tau_{_G} }$ are continuous mappings.

$\blacksquare$
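As an aside not present on the original page, the analytic content of this rule can be checked in a proof assistant. The following Lean 3 sketch relies on mathlib's `has_continuous_mul` typeclass and the lemmas `continuous_const` and `continuous.mul`; the names are recalled from memory and should be treated as assumptions. The fixed element is written `l` because `λ` is reserved in Lean.

```lean
import topology.algebra.monoid

variables {S G : Type*} [topological_space S] [topological_space G]
  [has_mul G] [has_continuous_mul G]

-- continuity of x ↦ l * f x and x ↦ f x * l, for a fixed l and continuous f
example (l : G) {f : S → G} (hf : continuous f) : continuous (λ x, l * f x) :=
continuous_const.mul hf

example (l : G) {f : S → G} (hf : continuous f) : continuous (λ x, f x * l) :=
hf.mul continuous_const
```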
https://uwaterloo.ca/undergraduate-entrance-awards/awards/loran-award
# Loran Award Award type: Entrance scholarships Award description: The Loran Award program is provided through the Loran Scholars Foundation, in partnership with universities, donors and volunteers across the country. Through a comprehensive selection process, the foundation's goal is to identify and support young people who have the integrity and courage to make difficult decisions, the perseverance to work towards long-term goals, the curiosity to better understand the world around them and the drive to make positive change in their communities. The Loran Award is valued at up to $100,000 over four years, including mentorship, funding for summer internships and participation in an extensive network of past and present scholars. As a participating school, the University of Waterloo contributes to the Loran Award by providing scholars with a tuition award each term of eligibility. Students may apply on their own or be nominated by their high school. The deadline to apply is normally in mid-October. Further information can be found on the Loran Scholars Foundation website. Value description: up to$100,000 over four years, including mentorship, funding for summer internships and participation in an extensive network of past and present scholars Program: Open to any program Citizenship:
https://kufapylujo.cateringwhidbey.com/contributions-to-the-data-on-theoretical-metallurgy-book-35617cp.php
3 editions of Contributions to the data on theoretical metallurgy. found in the catalog.

Contributions to the data on theoretical metallurgy. United States. Bureau of Mines.

# Contributions to the data on theoretical metallurgy.

## by United States. Bureau of Mines.

Written in English

Edition Notes

ID Numbers: Contributions: Kelley, Kenneth Keith. Open Library OL20110668M

There are many schools of theoretical archaeology (Clarke, ) today, though none of them seems to be popular in India. Lamberg-Karlovsky's book (), Archaeological Thought in America, gives a very balanced account of the various contending schools of theoretical archaeology. This volume is the best source of. This is the fourth edition of a work which first appeared in . The first edition had approximately one thousand pages in a single volume. This latest volume has almost three thousand pages in 3 volumes, which is a fair measure of the pace at which the discipline of physical metallurgy has grown in the intervening 30 all the topics previously.

India's Contribution to the Metallurgy: Metallurgy, the practice of separating metals from their ore and refining them into pure metals. In India, metallurgy developed into a science that made use of high refinement and precision techniques to extract metals and form different mixtures of metals (alloys) to prepare objects to be used for. File Size: KB.

India made a significant contribution to metallurgy; however, many historians consider those achievements as accidental, and some attribute them to the iron ores of India. If I am not wrong, phosphate and gamma iron structure are responsible for corrosion res.
Contributions to the Data on Theoretical Metallurgy: [Part] Heats and Free Energies of Formation of Inorganic Oxides One of 17 reports in the series: Contributions to the Data on Theoretical Metallurgy available on this by: @article{osti_, title = {CONTRIBUTIONS TO THE DATA ON THEORETICAL METALLURGY. XII. HEATS AND FREE ENERGIES OF FORMATION OF INORGANIC OXIDES}, author = {Coughlin, J.P.}, abstractNote = {}, doi = {}, journal = {U.S. Bur. Mines Bull.}, number =, volume = Vol:place = {Country unknown/Code not available}, year = {Fri Jan. Contributions to the Data on Theoretical Metallurgy: [Part] Entropies of the Elements and Inorganic Compounds One of 17 reports in the series: Contributions to the Data on Theoretical Metallurgy available on this by: Get this from a library. Contributions to the data on theoretical metallurgy: XVI. Thermodynamic properties of nickel and its inorganic compounds. [Alla D Mah; L B Pankratz; United States. Bureau of Mines.]. Other articles where Theoretical Structural Metallurgy is discussed: Sir Alan Cottrell: work culminated in the book Theoretical Structural Metallurgy (), which used concepts from solid-state physics and thermodynamics and became a classic in the field. theoretical treatments on heat transfer, solid mechanics and materials behaviour that are The intention here is to describe the metallurgy, surface modification, wear resistance, and chemical composition of these materials. Library of Congress Cataloging in Publication Data A catalog record for this book is available from the Library of. Contributions to the data on theoretical metallurgy. (Washington, U. Govt. Print. Off., ), by United States. Bureau of Mines (page images at HathiTrust) A manual of metallurgy, or practical treatise on the chemistry of the metals. (London., R. Griffin and company., ), by John Arthur Phillips (page images at HathiTrust). Second edition publishedreprinted in and In this book Professor Tylecote presents a unique introduction to the history of metallurgy from the earliest times to the present. The development of metallurgy skills and techniques of different civilisations, and the connection between them, are carefully by: The book is meant for the general reader but can benefit scientists as well. Pictures have an important role to c onvey a message and for this reason som e time was spent in selectingAuthor: Fathi Habashi. Kelley, K. K.: Contributions to the Data on Theoretical Metallurgy XIII. High Temperature Heat-Content, Heat-Capacity, and Entropy Data for the Elements and Inorganic Compounds, (U. Bureau of Mines, Bull. ). 被如下文章引用: TITLE: 溶体的次正规模型对二元平衡图热力学分析的运用; AUTHORS: 田德诚. This book seeks to communicate to both a global and local audience, the key attributes of pre-industrial African metallurgy such as technological variation across space and time, methods of mining and extractive metallurgy and the fabrication Brand: Springer International Publishing. Metallurgical & Materials Transactions B. Focused on process metallurgy and materials processing science, Metallurgical and Materials Transactions B contains only original, critically reviewed research on primary manufacturing processes, from extractive metallurgy to the making of a shape. A joint publication of ASM International and TMS (The Minerals, Metals and. "Contributions to the Data on Theoretical Metallurgy The Thermodynamic Properties of Metal Carbides and Nitrides, 3. Keiley, K. K.: "Contributions to the Data on Theoretical Metallurgy V, Heats of Fusion of Inorganic Substances," U. 
Bureau of Mines 3:~.?~"f*: (). ~ m rn~ 4. ;'R elzaccozies Bibliography,American Ceramic. of Metals for contributions to Physical Metallurgy () and the Platinum Medal, the premier medal of the Institute of Materials (). He was elected a Fellow of the Royal Society (), a Fellow of the Royal Academy of Engineer-ing () and appointed a Commander of the British Empire (CBE) in A former Council Member ofFile Size: 8MB. metallurgy in the book is on a general basis, and where appropriate modern developments are described. Lastly, there are chapters on the testing of metals, both by mechanical and non-destructive methods, and on metallurgical pyrometry. At the end of the book there is a large collection of past examina­. Material Science And Metallurgy Kodgire Pdf Free Download, Combat_Arms_Super_Compactado 5ebb7dda87 FileType: PDF. Material science & Metallurgy by C. Diploma text book of metallurgy and material science by phakirappa downloads. Books shelved as metallurgy: Metallurgy Fundamentals by Daniel A. Brandt, MATERIAL SCIENCE AND METALLURGY FOR ENGINEERS by Kodgire, Principles Of Extract. Metallurgy (by popular request) Metals are crystalline materials Although electrons are not shared between neighboring atoms in the lattice, the atoms of a metal are effectively covalently bonded. Copper and Aluminum form face centered cubic lattices in their common phase. Iron at low temperature forms a body centered cubic lattice.We report a theoretical equation of state (EOS) table for boron across a wide range of temperatures ($\times$10$^4$$\times$10$^8$ K) and densities ( g/cm$^3$), and experimental shock.Purchase Modern Physical Metallurgy and Materials Engineering - 6th Edition. Print Book & E-Book. ISBN
https://forum.azimuthproject.org/plugin/ViewComment/16179
Re: #9 John

1. What is the difference between an element and an object?

Re: #5 John

2. Are all morphisms in a category of the same type, or can a category have, for example, objects $\{x, y, a, b\}$ where morphisms $f: x \rightarrow y$ and $g: a \rightarrow b$ are different?

Thanks!
https://www.bayut.com/property/details-4067740.html
Bayut - 439-Vl-S-7437

# Corner Single Row! 3 to 4 Bedroom townhouses, 6 years payment plan.

AED 1,200,000 | La Rosa, Villanova, Dubailand, Dubai | 3 Beds | 4 Baths | 2,800 sqft

## Overview

• Type: Villa
• Price: AED 1,200,000
• Bedroom(s): 3
• Bath(s): 4
• Area: 2,800 sqft
• Purpose: For Sale
• Location: La Rosa, Villanova, Dubailand, Dubai
• Ref. No: Bayut - 439-Vl-S-7437

### Description

Corner Single Row! 3 to 4 Bedroom townhouses, 6 years payment plan. Villanova - LA ROSA!!!

Townhouses:
3BR - BUA 1,947 Sq. ft. starting AED 1.2M
4BR - BUA 2,333 Sq. ft. starting AED 1.5M

Highlights of the Offer:
- 50% DLD waiver
- only 10% on booking
- 3 year post handover plan

Payment Plan:
- 10% on Booking
- 30% During Construction
- 10% on Handover
- 5 year service charge waiver
- 50% Post-Handover in 3 Years

La Rosa is located in Villanova, a neighborhood that feels like home. Modern contemporary townhouses, villas and apartments along with vast open spaces create a serene atmosphere, where the community becomes an extended family. Villanova offers amenities like community centre, pool, playgrounds.

ABDUL MALIK (Abdul malik Mughal)
A Seven Group Properties Brokers
Post Box No. 430417 | Russia Cluster V03, Office No. 11 | Dubai | United Arab Emirates
Tel: Fax: | Mob: | Website: www. a7properties. com
ORN: 11647 | BRN: 24450

RERA# 11647 | Permit# 29743
Agent: Abdul Malik

This property is no longer available
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-12th-edition/chapter-9-section-9-4-properties-of-logarithms-9-4-exercises-page-613/7
## Intermediate Algebra (12th Edition)

$\log_{10}7+\log_{10}8$

We know that $\log_{b}(xy)=\log_{b}x+\log_{b}y$ (where $x$, $y$, and $b$ are positive real numbers and $b\ne1$). Therefore, $\log_{10}(7\times8)=\log_{10}7+\log_{10}8$.
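As a quick numerical sanity check (not part of the textbook solution), the product rule can be verified for these values in a few lines of Python:

```python
import math

lhs = math.log10(7 * 8)                  # log10(56)
rhs = math.log10(7) + math.log10(8)      # log10(7) + log10(8)
print(lhs, rhs, math.isclose(lhs, rhs))  # both are about 1.7482, so True
```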
https://leanprover-community.github.io/archive/stream/113488-general/topic/mem_preimage_eq.html
## Stream: general ### Topic: mem_preimage_eq #### Patrick Massot (Jul 03 2019 at 13:59): Is there any reason why mem_preimage_eq is stated as an equality rather than an iff? It seems contrary to the mathlib way #### Chris Hughes (Jul 03 2019 at 14:02): My guess is because it's stronger in the world without propext. #### Kenny Lau (Jul 03 2019 at 14:04): you can blame it on commit 8f4327a by digama0 on 15 Oct 2017: #### Kenny Lau (Jul 03 2019 at 14:04): or, in other words, it's because the code is 2 years old #### Chris Hughes (Jul 03 2019 at 14:07): I guess it means dsimp will use it. Which is handy @Mario Carneiro? #### Mario Carneiro (Jul 03 2019 at 14:14): I think it just predates the mathlib convention #### Patrick Massot (Jul 03 2019 at 14:15): Would you merge a PR changing this? #### Mario Carneiro (Jul 03 2019 at 14:15): also dsimp doesn't use iff.rfl lemmas unfortunately #### Patrick Massot (Jul 03 2019 at 14:16): Or maybe adding a mem_preimage_iff lemma if that dsimp thing is important? #### Mario Carneiro (Jul 03 2019 at 14:16): @Kenny Lau Actually that's not the origin; you can see it got moved in that commit but predates it #### Mario Carneiro (Jul 03 2019 at 14:17): I would merge such a PR #### Mario Carneiro (Jul 03 2019 at 14:17): I don't think the dsimp thing is a very big deal #### Patrick Massot (Jul 03 2019 at 14:17): Which version? Changing the lemma or adding a new one? #### Mario Carneiro (Jul 03 2019 at 14:19): changing the lemma #### Patrick Massot (Jul 03 2019 at 14:22): Should the new name be mem_preimage or mem_preimage_iff? #### Mario Carneiro (Jul 03 2019 at 14:25): I like mem_preimage if it's not already taken #### Patrick Massot (Jul 03 2019 at 14:27): It seems to exists in many namespaces but not the root one #### Mario Carneiro (Jul 03 2019 at 14:27): this one should be in the set namespace actually it is #### Patrick Massot (Jul 03 2019 at 14:28): I didn't pay attention but this lemma is in the set namespace #### Patrick Massot (Jul 03 2019 at 14:30): Ok, let's see what Travis thinks about https://github.com/leanprover-community/mathlib/pull/1174 Last updated: May 13 2021 at 05:21 UTC
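To make the eq-versus-iff discussion concrete, here is a small illustrative Lean 3 snippet. It is not taken from mathlib and the actual lemma names and statements in the library may differ; both forms hold definitionally for set preimages:

```lean
import data.set.basic

variables {α β : Type*} {f : α → β} {s : set β} {a : α}

-- the older style: an equality of propositions (stronger without propext)
example : (a ∈ f ⁻¹' s) = (f a ∈ s) := rfl

-- the mathlib-convention style: an iff, as in the proposed set.mem_preimage
example : a ∈ f ⁻¹' s ↔ f a ∈ s := iff.rfl
```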
https://socratic.org/questions/what-is-the-molar-volume-of-3-01-x-10-23-molecules-of-ethane-c2h6
# What is the molar volume of 3.01 x $10^{23}$ molecules of ethane (C2H6)?

Jun 28, 2014

One mole of any gas at STP takes up 22.4 L of volume.

one mole of gas = 22.4 L ..... (a)
one mole of gas has 6.02 x ${10}^{23}$ particles (molecules) ..... (b)

Equating (a) and (b), we get 6.02 x ${10}^{23}$ particles (molecules) = 22.4 L.

Making conversion factors: 22.4 L / 6.02 x ${10}^{23}$ particles, or 6.02 x ${10}^{23}$ particles / 22.4 L.

Since we want to change the number of molecules into L, we will use the conversion factor that, when multiplied by the given number, gives L as the final unit:

3.01 x ${10}^{23}$ particles x 22.4 L / 6.02 x ${10}^{23}$ particles = 11.2 L
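The same unit conversion can be reproduced numerically. Below is a short Python sketch (not part of the original answer; it uses the rounded constants quoted above) that recovers the 11.2 L figure:

```python
AVOGADRO = 6.02e23          # molecules per mole (rounded value used in the answer)
MOLAR_VOLUME_STP = 22.4     # litres per mole of an ideal gas at STP

molecules = 3.01e23
volume_litres = molecules * MOLAR_VOLUME_STP / AVOGADRO
print(volume_litres)        # 11.2
```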
http://3dfractals.com/bloch/node5.html
# Final remarks

The Picard theorem follows from Bloch's theorem in one variable. It is then interesting to ask whether the same is possible in the case of -holomorphic mappings in . However, here we can directly find a Picard theorem without invoking our Bloch theorem.

Theorem 10 (Picard)   Let . If there are two bicomplex numbers such that is invertible and for which the set is not in the range of , then is constant.

Proof. We have just to apply the so-called "little Picard theorem" [15] to and . The fact that is invertible will ensure that will be equal to a bicomplex number with and nonzero. Let , . Suppose takes the value at . There exist such that . Thus, Contradiction. Hence, omits . Similarly, omits , omits and omits . Since , and are distinct. Similarly .

In the same way, it is possible to find also a Casorati-Weierstrass theorem:

Theorem 11 (Casorati-Weierstrass)   Let with not identically noninvertible. Then, is dense in .

Proof. The hypotheses imply that we can write with and nonconstant. Then we can apply the Casorati-Weierstrass theorem for to and in order to prove that is dense in .

A famous example of Fatou and Bieberbach (see [10]) shows that the usual formulation of the Picard theorem in does not extend to holomorphic mappings in . In this connection, we have some interesting consequences of Theorem 11 which can be interpreted as another kind of little Picard theorem for bicomplex numbers:

Corollary 1   There is no nondegenerate -holomorphic mapping such that contains a ball.

Corollary 2   Fatou-Bieberbach examples cannot be -holomorphic mappings, i.e. they cannot satisfy the complexified Cauchy-Riemann equations.

For a beautiful formulation of Picard's theorem which holds in higher dimensions, see [9]. Also, for a version of Picard's theorem for quasiregular mappings see Rickman [12].

Dominic Rochon 2000-07-26
https://ask.sagemath.org/users/24971/rburing/?sort=recent
2019-12-06 15:51:39 -0600 commented question Solve set of equations with all unique values in sage Can you add the set of equations? Select your code and press the '101010' button to format it correctly. 2019-12-05 13:45:39 -0600 answered a question How to use a string in a symbolic expression with SageMath? I would re-evaluate whether using strings is really what you want to do; probably whatever problem you have can be solved in a better way. Nevertheless, it is possible: function('P') var('x1 x2 x3 y1 y2 y3') st = eval('y1,0,1') sum(P(x1,y1,y2,y3),*st) 2019-12-05 10:05:38 -0600 received badge ● Nice Answer (source) 2019-12-05 05:38:56 -0600 answered a question How can I define a field of numbers by hitting to the field of rationals two roots of the same polynomial ? Note number fields in SageMath are abstract extensions of $\mathbb{Q}$; they come with several embeddings into $\mathbb{C}$; see below. Option 1 (number field from algebraic numbers): sage: K, [A,B], emb = number_field_elements_from_algebraics([alpha,beta], minimal=True) sage: K Number Field in a with defining polynomial y^8 + 4*y^6 + 358*y^4 + 708*y^2 + 529 sage: A -1/80*a^6 - 3/80*a^4 - 359/80*a^2 - 357/80 sage: B 1/920*a^7 + 1/230*a^5 + 381/920*a^3 + 607/460*a sage: emb(A) == QQbar(alpha), emb(B) == QQbar(beta) (True, True) For other options, let's first find what polynomial we are talking about: sage: f = alpha.minpoly(); f x^4 + 11*x^2 + 11 Option 2 (splitting field): sage: K. = f.splitting_field() sage: f_roots = f.roots(K, multiplicities=False) sage: emb = K.embeddings(QQbar)[0] sage: map(emb, f_roots) [0.?e-16 + 1.054759596450271?*I, 0.?e-16 - 1.054759596450271?*I, 0.?e-16 - 3.144436705309246?*I, 0.?e-16 + 3.144436705309246?*I] sage: QQbar(alpha), QQbar(beta) (1.054759596450272?*I, 3.144436705309246?*I) sage: a = f_roots[0]; b = f_roots[3] sage: emb(a) == QQbar(alpha), emb(b) == QQbar(beta) (True, True) Option 3 (successive extensions): sage: K. = NumberField(x^4 + 11*x^2 + 11) sage: R. = PolynomialRing(K) sage: R(f).factor() (y - a) * (y + a) * (y^2 + a^2 + 11) sage: g = y^2 + a^2 + 11 sage: L. = K.extension(g) sage: f.change_ring(L).factor() (x - b) * (x - a) * (x + a) * (x + b) sage: emb = L.embeddings(QQbar)[4] sage: emb(a) == QQbar(alpha), emb(b) == QQbar(beta) (True, True) You can also make an absolute extension from the relative one: sage: M. = L.absolute_field(); M Number Field in c with defining polynomial x^8 + 110*x^6 + 3839*x^4 + 44770*x^2 + 43681 sage: from_M, to_M = M.structure() sage: to_M(a) 7/2508*c^7 + 145/627*c^5 + 509/114*c^3 + 421/76*c sage: to_M(b) 7/1254*c^7 + 290/627*c^5 + 509/57*c^3 + 459/38*c sage: emb = M.embeddings(QQbar)[4] sage: emb(to_M(a)) == QQbar(alpha), emb(to_M(b)) == QQbar(beta) (True, True) 2019-11-27 02:13:32 -0600 commented question conditions among the variables @user789 if that's all, then just substitute e.g. $e=a+b-d$ into your polynomial. 2019-11-23 05:45:07 -0600 commented question How to count rational points of a curve defined as an intersection of two zero sets? How can the first equation have any rational solutions? 2019-11-22 04:37:07 -0600 commented answer Sagemath and TI-83 giving different answers Probably OP wants to know why taking the cube root in SageMath yields this particular root and not the real root. 2019-11-20 07:28:10 -0600 answered a question Do newer versions change the way 'simplify' works? When you make a vague request (e.g. 'simplify'), there is a risk that the answer will change if you ask it again. 
Unfortunately I don't know what has changed or why. Maybe e.g. using git blame someone can find it out. I am guessing that you are interested in the expression as a polynomial in $x,y,Z$. A precise way to request the expression in that form is as follows (using polynomial rings): sage: var('Z,x,y,a,w1,w2,w3,G') sage: eqn = (Z*a + Z*w2 - a + w2)*(Z*a + Z*w3 - a + w3)*y == (Z*a + Z*w1 - a + w1)*G*(Z + 1)*x sage: A = PolynomialRing(QQ, names='a,w1,w2,w3,G') sage: B = PolynomialRing(A, names='Z,x,y') sage: SR(B(eqn.lhs())) == SR(B(eqn.rhs())) (a^2 + a*w2 + a*w3 + w2*w3)*Z^2*y - 2*(a^2 - w2*w3)*Z*y + (a^2 - a*w2 - a*w3 + w2*w3)*y == (G*a + G*w1)*Z^2*x + 2*G*Z*w1*x - (G*a - G*w1)*x You could also factor each coefficient: sage: sum(SR(C).factor()*SR(X) for C,X in B(eqn.lhs())) == sum(SR(C).factor()*SR(X) for C,X in B(eqn.rhs())) Z^2*(a + w2)*(a + w3)*y - 2*(a^2 - w2*w3)*Z*y + (a - w2)*(a - w3)*y == G*Z^2*(a + w1)*x + 2*G*Z*w1*x - G*(a - w1)*x 2019-11-20 05:33:03 -0600 commented answer Why SageMath cannot inverte this matrix? You're welcome. That it takes a long time doesn't mean SageMath is not able to do it. Your original code works on my machine; it took 1 hour and 12 minutes. 2019-11-19 10:31:51 -0600 commented question conditions among the variables This can be done (with some effort) in many cases, depending on the type of condition. Are the conditions linear equations, polynomial equations, trigonometric equations? An example would be great. 2019-11-18 08:49:06 -0600 commented question simple script gives "ValueError: element is not in the prime field" The error depends on the information you've omitted, such as the definition of r, l, and x. As a general rule, please give an unambiguous self-contained example that produces the error. Probably the problem is that l cannot be lifted to an integer. 2019-11-17 17:05:04 -0600 received badge ● Nice Answer (source) 2019-11-16 10:39:38 -0600 commented answer Why SageMath cannot inverte this matrix? It is working, but there seems to be some kind of bug in displaying the algebraic numbers. For example if you define S = Q*M[0]*P then set(S.diagonal()) == set(M[0].eigenvalues()) is True, and the workaround S.apply_map(lambda z: z.interval(CIF)) shows the matrix correctly. 2019-11-16 06:36:53 -0600 answered a question Why SageMath cannot inverte this matrix? Since $\bar{\mathbb{Q}}$ is a little big, try instead working in a number field $K$ that's big enough but as small as possible: M = [matrix(QQ,l) for l in L] K,_,_ = number_field_elements_from_algebraics(sum([m.eigenvalues() for m in M], []), minimal=True) print(K) p = identity_matrix(K,8) q = ~p for m in M: a = q*m*p da, pa = a.change_ring(K).jordan_form(transformation=True) p = p*pa q = ~p Here $K$ is an abstract number field of degree $48$ with several (i.e. $48$) different embeddings into $\bar{\mathbb{Q}}$. sage: set([sigma(p.det()) for sigma in K.embeddings(QQbar)]) {-97929.96732359303?, 97929.96732359303?} For a given sigma in K.embeddings(QQbar), you can embed p and q by p.apply_map(sigma) and q.apply_map(sigma); those matrices are defined over QQbar and they simultaneously diagonalize all m in M. 2019-11-15 04:17:53 -0600 commented question Sage 8.1 on Mint 19.2 doesn't start Those look like dependencies, yes. What is the issue with them? What is invalid? 2019-11-15 03:01:35 -0600 commented question Sage 8.1 on Mint 19.2 doesn't start Please be more precise about the dependency issue (for the .deb file). 
2019-11-12 09:39:41 -0600 answered a question Vector multiplication For one, because you can't transpose lists. Secondly, you should take the (single) entry of c*aa. aa = vector(var('delta_%d' % i) for i in (0..2)) c = n(matrix(1,3,(-0.5,0.2,.3)),2) U = e^((c*aa)[0]) show(U) $$e^{\left(-0.50 \delta_{0} + 0.19 \delta_{1} + 0.25 \delta_{2}\right)}$$ (Note that you specified 2 bits of precision with n.) 2019-11-09 02:21:35 -0600 commented question How to create a subgroup with MAGMA inside SAGE of a group created with MAGMA inside SAGE? Assuming that the rest of the code is correct, the issue is that you use the text "GGG" to refer to the variable GGG. Instead you should use something like GGG.name(). I don't have Magma so I am not sure of the exact syntax; try dir(GGG) to find the right method name. 2019-11-06 13:28:32 -0600 answered a question Bug report (z[0]+z[1]+z[2])^5 == z0^5 + z1^5 + z2^5 This is not a bug. sage: var('x,y,z') sage: ((x+y+z)^5).expand() x^5 + 5*x^4*y + 10*x^3*y^2 + 10*x^2*y^3 + 5*x*y^4 + y^5 + 5*x^4*z + 20*x^3*y*z + 30*x^2*y^2*z + 20*x*y^3*z + 5*y^4*z + 10*x^3*z^2 + 30*x^2*y*z^2 + 30*x*y^2*z^2 + 10*y^3*z^2 + 10*x^2*z^3 + 20*x*y*z^3 + 10*y^2*z^3 + 5*x*z^4 + 5*y*z^4 + z^5 Note the divisibility by 5 of all terms which are "missing" in your characteristic $p=5$ calculation. This is a consequence of the Freshman's dream: $(x+y)^p \equiv x^p + y^p \pmod p$; just apply it twice. It looks too good to be true, but it really is true in characteristic $p$ (the proof is in the linked article). 2019-11-06 13:16:48 -0600 commented question maximizing sum over feasible set of vectors I didn't ask the question, but yes that's right. The binary ordering was just a (natural) suggestion. Of course the ordering is irrelevant, but probably some ordering is necessary for the implementation. 2019-11-06 05:38:39 -0600 commented question maximizing sum over feasible set of vectors $|A|$ means the number of elements in $A$. To have $\underline{\alpha}$ be a vector one should choose an ordering of the (nonempty) subsets of $[5]$, e.g. by identifying them with binary strings of length 5 (not equal to $00000$). 2019-10-31 04:50:03 -0600 commented question How to solve this algebraic equation by SageMath (rather than by hand) The relation $r^2u^4 - (a^2 + r^2 + t^2)u^2 + t^2 = 0$ holds. 2019-10-30 14:31:49 -0600 received badge ● Nice Answer (source) 2019-10-30 11:04:37 -0600 answered a question Plotting polynomials defined over a number field You have the right idea: do f.change_ring(R) with the argument R set to one of K.embeddings(RR): var('w') K. = NumberField(w^2-3) R. = PolynomialRing(K) f = y - t*x implicit_plot(f.change_ring(K.embeddings(RR)[0]),(x,-3,3),(y,-3,3)) You might also like: implicit_plot(f.change_ring(K.embeddings(RR)[0]),(x,-3,3),(y,-3,3), color='blue') + implicit_plot(f.change_ring(K.embeddings(RR)[1]),(x,-3,3),(y,-3,3), color='red') 2019-10-25 02:43:38 -0600 answered a question How can I map functions into polynomial coefficients Here is a way: sage: var('a,b,c,x') sage: sinx = function('sinx',nargs=1) sage: pol = 3*a*x^(-b)*log(x)*b^2 - 6*a*b*c*sinx(x*b) + 3*a*c^2 + 5 sage: R = PolynomialRing(SR, names='a,c') sage: f = pol.polynomial(ring=R) sage: dict(zip(f.monomials(), f.coefficients())) {1: 5, a: 3*b^2*log(x)/x^b, a*c: -6*b*sinx(b*x), a*c^2: 3} Note the generators of the ring R are not the same as the symbolic variables a and c. 
To get the coefficient of c as in your example, you can do: sage: f.monomial_coefficient(R.gen(1)) 0 Or maybe more conveniently, something like: sage: A,C = R.gens() sage: f.monomial_coefficient(C) 0 2019-10-22 07:06:00 -0600 commented question About algorithm for testing whether a point is in a V-polyhedron This may depend on the backend used, so can you give an example of a Polyhedron for which you want to know the answer? 2019-10-22 04:01:22 -0600 answered a question Elliptic curves - morphism You can do this: sage: T. = FunctionField(QQ) sage: R. = T[] sage: f = u^3+v^3+t sage: h = Jacobian(f, curve=Curve(f)); h Scheme morphism: From: Affine Plane Curve over Rational function field in t over Rational Field defined by u^3 + v^3 + t To: Elliptic Curve defined by y^2 = x^3 + (-27/4*t^2) over Rational function field in t over Rational Field Defn: Defined on coordinates by sending (u, v) to ((-t^3)*u^4*v^4 + (-t^4)*u^4*v + (-t^4)*u*v^4 : 1/2*t^3*u^6*v^3 + (-1/2*t^3)*u^3*v^6 + (-1/2*t^4)*u^6 + 1/2*t^4*v^6 + 1/2*t^5*u^3 + (-1/2*t^5)*v^3 : t^3*u^3*v^3) 2019-10-16 07:03:11 -0600 commented answer Finite field F_16=F_4[y]/(y**2+xy+1) (where F_4=F_2[x]/(x**2+x+1)) To avoid constructing the list you could use next on the iterator. 2019-10-15 16:12:33 -0600 received badge ● Nice Answer (source) 2019-10-15 03:17:54 -0600 commented question how do I find kernel of a ring homomorphism? Please add a code sample including a definition of f. 2019-10-15 02:47:06 -0600 answered a question Verma modules and accessing constants of proportionality Not sure if it's the best way, but you can do the following: sage: for n in range(10): sage: c = (x2^n*y2^n*v).coefficients()[0] sage: print('c_{} = {}'.format(n, c)) c_0 = 1 c_1 = -3 c_2 = 24 c_3 = -360 c_4 = 8640 c_5 = -302400 c_6 = 14515200 c_7 = -914457600 c_8 = 73156608000 c_9 = -7242504192000 2019-10-11 09:04:06 -0600 answered a question Adapt the nauty_directg function Entering digraphs.nauty_directg?? into a SageMath session will show you the source code of the function. In this case it simply calls an external program directg which is part of nauty. You can download and edit that program's source code, compile your new version, and call that one instead. To edit the source code you should start by looking at the file directg.c. Good luck! 2019-10-05 03:49:37 -0600 commented answer uniform way to iterate over terms of uni-/multi-variate polynomials It's not very nice to change your question when it has received a correct answer, even if you suffered from the XY problem. Anyway, I updated the answer. In the future, please ask the new question separately (and link to the previous question). 2019-10-04 02:00:03 -0600 answered a question uniform way to iterate over terms of uni-/multi-variate polynomials Define R2 as a multivariate polynomial ring in 1 variable: sage: R2. = PolynomialRing(QQ, 1) sage: Q = z + 5 sage: for c,t in Q: print c,t 1 z 5 1 This ring (and its elements) will have different methods than the ordinary univariate ring (elements). Alternatively: keep your original ring, define R3 = PolynomialRing(QQ, 1, name='z') and use R3(Q). 
An answer to the new question: def to_multivariate_poly(expr, base_ring): vars = expr.variables() if len(vars) == 0: vars = ['x'] R = PolynomialRing(base_ring, len(vars), names=vars) return R(expr) Then you can do: sage: x,y,z = var('x,y,z') sage: P = to_multivariate_poly(x + 2*y + x*y + 3, QQ) sage: for c,t in P: print c,t 1 x*y 1 x 2 y 3 1 sage: Q = to_multivariate_poly(z + 5, QQ) sage: for c,t in Q: print c,t 1 z 5 1 2019-10-04 01:45:45 -0600 commented answer Request: Have the len function output a Sage Integer instead of a Python int Ah, there is a subtlety with srange (explained in the documentation) that by default the output consists of numbers of the type you put in. You can override that by passing universe=ZZ to srange; I've added this to the answer. 2019-10-03 00:02:21 -0600 commented answer Request: Have the len function output a Sage Integer instead of a Python int It is not a bug. One should be aware of the difference between int and Integer. The second part of your code uses int division, there is no magic Sage can do to change that. Just be careful to write srange when you want Integers. 2019-10-02 09:20:41 -0600 answered a question Request: Have the len function output a Sage Integer instead of a Python int In the second part you are using Python ints instead of SageMath Integers. Use srange (with universe=ZZ if you're not passing an Integer) instead of range, or convert to an Integer before the division, e.g. Integer(i[0])/len(a). 2019-09-22 04:02:01 -0600 answered a question Defining q-binomial coefficients $\binom{n}{k}_q$ symbolic in $n, k$ If you want to make a symbolic sum then all the terms should be symbolic. Your example does not work because qbin(n,k) is not defined for symbolic n. What you can do is to make qbin a symbolic function with custom typesetting: var('q') def qbin_latex(self, n, k): return '{' + str(n) + ' \choose ' + str(k) + '}_{' + str(q) + '}' qbin = function('qbin', nargs=2, print_latex_func=qbin_latex) var('k,n') show(sum(qbin(n,k),k,0,n)) $$\sum_{k=0}^{n} {n \choose k}_{q}$$ To check the identities that you are interested in, you will probably have to do some substitutions by hand, and/or pass an evalf_func parameter to function in the definition of qbin. Have a look at the documentation for symbolic functions. 2019-09-22 03:42:51 -0600 answered a question Change of programmation of implicit It seems that you want the following: var('x') y = function('y')(x) f = function('f')(x,y) f.diff(x) Output: D[1](f)(x, y(x))*diff(y(x), x) + D[0](f)(x, y(x)) 2019-09-20 02:35:00 -0600 answered a question Wrong hessian The matrix you constructed by hand doesn't look like a Hessian to me. Here is the Hessian: sage: H = L.function(x,y,l).hessian()(x,y,l) sage: H $$\left(\begin{array}{rrr} A {\left(\alpha - 1\right)} \alpha x^{\alpha - 2} y^{\beta} & A \alpha \beta x^{\alpha - 1} y^{\beta - 1} & -p_{x} \\ A \alpha \beta x^{\alpha - 1} y^{\beta - 1} & A {\left(\beta - 1\right)} \beta x^{\alpha} y^{\beta - 2} & -p_{y} \\ -p_{x} & -p_{y} & 0 \end{array}\right)$$ 2019-09-18 04:02:51 -0600 commented question How sage checks the irreducibility of a polynomial? Polynomial in one variable? Over which ring? 2019-09-16 11:21:06 -0600 answered a question Check whether point is on a projective variety Sure, you can take a point from the ambient projective space and do a membership test: sage: PP. 
<x,y,z,w,u,v> = ProductProjectiveSpaces([3,1],QQ) sage: W = PP.subscheme([y^2*z-x^3,z^2-w^2,u^3-v^3]) sage: PP.point([1,1,1,1,1,1]) in W True sage: PP.point([1,1,1,1,1,2]) in W False Internally this does a try/except block around W.point(...).
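The closing remark about the internal try/except can be illustrated with a small helper. This is only a sketch: the helper name is made up here, and the exception types caught are an assumption rather than a statement about Sage's actual implementation.

```python
# Hypothetical helper emulating the membership test described above
# (not Sage's internal code; the caught exception types are an assumption).
def lies_on_subscheme(W, coords):
    try:
        W.point(coords)   # constructing the point fails if coords do not satisfy W's equations
        return True
    except (TypeError, ValueError):
        return False

# e.g. lies_on_subscheme(W, [1, 1, 1, 1, 1, 1])  ->  expected True
#      lies_on_subscheme(W, [1, 1, 1, 1, 1, 2])  ->  expected False
```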
2019-12-13 14:40:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2822687029838562, "perplexity": 3068.5749830366526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540555616.2/warc/CC-MAIN-20191213122716-20191213150716-00410.warc.gz"}
https://brilliant.org/problems/modulus-area/
# Modulus Area Geometry Level 3 $\large \begin{cases} \color{#3D99F6}{y = |x| + 2} \\ \color{#456461}{y = |x| - 2} \\ \color{#D61F06}{y = |x + 2|} \\ \color{magenta}{y = |x - 2|} \end{cases}$ Find the area of the region bounded by the curves above.
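A sketch of one way to get the answer (added as a worked note; it assumes the intended region is the single bounded face cut out by the four graphs): for $$x \ge 0$$ the graphs $$y=|x|+2$$ and $$y=|x+2|$$ coincide, and for $$x \le 0$$ the graphs $$y=|x|+2$$ and $$y=|x-2|$$ coincide, so the only bounded region is the square $$|x|+|y|\le 2$$ with vertices $$(\pm 2,0)$$ and $$(0,\pm 2)$$, whose diagonals both have length 4. Its area is $$\frac{d_1 d_2}{2} = \frac{4\cdot 4}{2} = 8.$$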
2021-06-21 10:39:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.52586829662323, "perplexity": 3790.123273283618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00067.warc.gz"}
https://www.scienceforums.net/topic/96980-test/?tab=comments
# Test lorem ipsum lorem ipsumlorem ipsum lorem ipsumlorem ipsumlorem ipsumlorem ipsum lorem ipsum lorem ipsum lorem ipsum whats the reason for posting this? loreum ipsum is usually used as filler text when demonstrating different formats of text design. its from Cicero's text and its taken from the words "pain itself" It is a test - you can tell from the title of the thread and the fact that it is in the sandbox. The Latin is actually dolorem ipsum The Sandbox forum is a place where you can try out and play with the various forum controls to see how they work. Testing how comment merging works when there is partial bbcode involved. Lorem Ipsum is frequently used in the process of writing programs as well. • 2 weeks later... lorem ipsum lorem ipsumlorem ipsum lorem ipsumlorem ipsumlorem ipsumlorem ipsum lorem ipsum lorem ipsum Testing, testing, 1, 2, 3 lorem ipsum • 7 months later... (c) Can you C Me? • 4 months later... font test
2021-07-30 20:37:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23112991452217102, "perplexity": 9908.246415747697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153980.55/warc/CC-MAIN-20210730185206-20210730215206-00408.warc.gz"}
https://brilliant.org/problems/a-combinatorics-problem-by-thevarraaj-chandran/
# An algebra problem by Thevarraaj Chandran Algebra Level 2 Given $$\log_a (x^2y)=m$$ and $$\log_a (y/x^3)=n$$, express $$\log_a (x/y)$$ in terms of $$m$$ and $$n$$.
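A short worked derivation (a sketch added for reference): write $$u=\log_a x$$ and $$v=\log_a y$$; then the two conditions become the linear equations $$2u+v=m, \qquad v-3u=n,$$ so $$u=\frac{m-n}{5}, \qquad v=\frac{3m+2n}{5}, \qquad \log_a \frac{x}{y} = u-v = -\frac{2m+3n}{5}.$$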
2017-05-28 10:33:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5380764007568359, "perplexity": 7337.3553196582125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609613.73/warc/CC-MAIN-20170528101305-20170528121305-00038.warc.gz"}
https://www.springerprofessional.de/energy-systems-and-management/2386222
## About This Book Readers of this work will find examinations of the current and future status of energy sources and technologies, their environmental interactions and the relevant global energy policies. The work begins with an overview of Energy Technologies for a Sustainable Future, which examines the correlation between population, economy and energy consumption in the past, and reviews the conventional and renewable energy sources as well as their management to sustain the ever-growing energy demand in the future. The rest of the chapters are divided into three parts; the first part of the book, “Energy Sources, Technologies and Environment”, consists of 12 chapters, which include research on new energy technologies and evaluation of their environmental effects. The second part, “Advanced Energy Materials”, includes 7 chapters devoted to research on material science for new energy technologies. The final section, titled “Energy Management, Economics and Policy”, comprises 10 chapters about planning, controlling and monitoring energy-related processes together with the policies to satisfy the needs of an increasing population and a growing economy. The chapters are selected works from the International Conference on Energy and Management, which was organized by the Istanbul Bilgi University Department of Energy Systems Engineering and PALMET Energy to share knowledge on recent trends, scientific developments, innovations and management methods in energy, and held on 5–7th June 2014 at Istanbul Bilgi University. ## Table of Contents ### Chapter 1. An Overview of Energy Technologies for a Sustainable Future Population and economic growth are highly correlated with energy demand. The world population was multiplied by a factor of 1.59 (reaching above 7 billion) from 1980 to 2013, while the total energy consumption of the world was multiplied by 1.84 (getting beyond 155,000 TWh) in the same time interval. Furthermore, the demand for energy is expected to increase even more, with an average annual rate of 1.2 % in the near future. However, for the last 30 years, about 85–90 % of the energy demand has been supplied by petroleum, natural gas, and coal, even though they are harmful to the environment and estimated to be depleted soon. Hence, building energy policies to satisfy the needs of an increasing population and a growing economy in a sustainable, reliable, and secure fashion has become quite important. This may involve optimizing energy supplies, minimizing environmental costs, promoting the utilization of clean and renewable energy resources, and diversifying the types of energy sources. Thus, not only must conventional energy generation technologies be developed further, but environmentally friendly alternative energy sources (such as wind, solar, geothermal, hydro, and bio) must also become more widespread to sustain the energy needs of the future. However, this requires a significant amount of research on energy technologies and effective management of energy sources. Ayse Nur Esen, Zehra Duzgit, A. Özgür Toy, M. Erdem Günay ### Chapter 2. Thermal Pollution Caused by Hydropower Plants Thermal pollution is the change in the water temperatures of lakes, rivers, and oceans caused by man-made structures. These temperature changes may adversely affect aquatic ecosystems, especially by contributing to the decline of wildlife populations and habitat destruction.
Any practice that affects the equilibrium of an aquatic environment may alter the temperature of that environment and subsequently cause thermal pollution. There may be some positive effects, though, to thermal pollution, including the extension of fishing seasons and rebounding of some wildlife populations. Thermal pollution may come in the form of warm or cold water being dumped into a lake, river, or ocean. Increased sediment build-up in a body of water affects its turbidity or cloudiness and may decrease its depth, both of which may cause a rise in water temperature. Increased sun exposure may also raise water temperature. Dams may change a river habitat into a lake habitat by creating a reservoir (man-made lake) behind the dam. The reservoir water temperature is often colder than the original stream or river. The sources and causes of thermal pollution are varied, which makes it difficult to calculate the extent of the problem. Because the thermal pollution caused by Hydropower Plants (HPPs) may not directly affect human health, it is neglected in general. Therefore, sources and results of thermal pollution in HPPs are ignored in general. This paper aimed to reveal the causes and results of thermal pollution and measures to be taken in HPPs. Alaeddin Bobat ### Chapter 3. Comparing Spatial Interpolation Methods for Mapping Meteorological Data in Turkey Determining the potentials of the renewable energy sources provides realistic assumptions on useful utilization of the energy. Wind speed  and solar radiation are the main meteorological data used in order to estimate renewable energy potential. Stated data is considered as point source data since it is collected at meteorological stations. However, meteorological data can only be significant when it is represented by surfaces. Spatial interpolation methods help to convert point source data into raster surfaces by estimating the missing values for the areas where data is not collected. Besides the purpose, the total number of data points, their location, and their distribution within the study area affect the accuracy of interpolation. This study aims to determine optimum spatial interpolation method for mapping meteorological data in northern part of Turkey. In this context, inverse distance weighted (IDW), kriging, radial basis, and natural neighbor interpolation methods were chosen to interpolate wind speed and solar radiation measurements in selected study area. The cross-validation technique was used to determine most efficient interpolation method. Additionally, accuracy of each interpolation method were compared by calculating the root-mean-square errors (RMSE). The results prove that the number of control points affects the accuracy of the interpolation. The second degree IDW (IDW2) interpolation method performs the best among the others. Thus, IDW2 was used for mapping meteorological data in northern Turkey. Merve Keskin, Ahmet Ozgur Dogru, Filiz Bektas Balcik, Cigdem Goksel, Necla Ulugtekin, Seval Sozen ### Chapter 4. Energy Storage with Pumped Hydrostorage Systems Under Uncertainty Energy storage is becoming an important problem as the difference between supply and demand becomes sharper and the availability of energy resources is not possible all the time. A pumped hydrostorage system (PHSS) which is a special type of hydroelectric power plant can be used to store energy and to use the water more efficiently. 
When the energy demand and the energy price are high (peak hours), the water at upper reservoir is used to generate electricity and the water is stored in the lower reservoir. Revenue is gained from the power sale to the market. When the demand and the energy price are low (off-peak hours), the water at lower reservoir is pumped back to the upper reservoir. Cheap electricity is used to pump the water. The hourly market price and water inflow are uncertain. The main objective of a company is to find an operation schedule that will maximize its revenue. The hourly electricity prices and the water inflow to the reservoir are important parameters that determine the operation of the system. In this research, we present the working mechanism of the PHSS to store energy and to balance the load changes due to demand. Ahmet Yucekaya ### Chapter 5. Telelab with Cloud Computing for Smart Grid Education As the demand for energy increases, the need to generate and distribute energy to the customers with greater efficiency also increases. Introduction of smart grids provides platform for the utilities to collect and analyze consumption data in real time. This helps them to define the generation profile and offer competitive energy prices to the customers. Customer on the other hand can use the knowledge of his own consumption profile to define and tune his energy usage. Education about smart grid environment which involves software, hardware devices, and network technologies for data collection and analysis is important for both utilities and customers. Research and development in Internet technologies promote remote laboratory as a cost-effective solution for users located across the globe. Cloud computing platform can further reduce costs involved in data storage and software used. This paper presents the idea of developing the remote laboratory located at South Westphalia University in Soest, Germany, further by integrating cloud computing and smart grid simulation environment. This will educate people across the globe by offering them hands on experience on smart grid technology and thus will contribute to the field of power engineering education. Pankaj Kolhe, Berthold Bitzer ### Chapter 6. A Decomposition Analysis of Energy-Related CO2 Emissions: The Top 10 Emitting Countries Climate change, caused by greenhouse gas (GHG) emissions, is one of the hot topics all around the world. Carbon dioxide (CO2) emissions from fossil fuel combustion account for more than half of the total anthropogenic GHG emissions. The top 10 emitting countries accounted 65.36 % of the world carbon dioxide emissions in 2010. China was the largest emitter and generated 23.84 % of the world total. The objective of this study is to identify factors that contribute to changes in energy-related CO2 emissions in the top 10 emitting countries for the period 1971–2010. To this aim, a decomposition analysis has been employed. Decomposition analysis is a technique used to identify the contribution of different components of a specific variable. Here, four factors, namely population, per capita income, energy intensity, and carbon intensity, are differentiated. The results show that the economic activity effect and the energy intensity effect are the two biggest contributors to CO2 emissions for all countries with a few exceptions. Aylin Çiğdem Köne, Tayfun Büke ### Chapter 7. 
Turkey’s Electric Energy Needs: Sustainability Challenges and Opportunities In order to satisfy its electric energy demand for the next 20 years (440–484 TWh projected demand for year 2020), Turkey has embarked on a series of major investment programs involving energy generation and distribution. A wide variety of energy generation projects are being implemented or will be executed in the near future involving nuclear, coal-, and natural gas-fired thermoelectric plants, combined cycle plants, hydroelectric dams, geothermal plants, and wind and solar energy farms. The engineering and scientific communities along with decision makers at the technical, financial, and political level are facing both, huge challenges (e.g., reduce energy dependence, financial feasibility, environmental protection, social acceptance, and resources management) and a once in a lifetime opportunity for improvement of the Turkey’s social welfare and the environment for several generations. This paper presents a view of some of these challenges and opportunities along with a review of the energy–water nexus from a holistic life cycle perspective. Furthermore, it explores different scenarios of technology integration in order to improve the sustainability of the electric energy generation matrix by the sustainable use of available resources and minimization of the carbon and environmental footprint of energy generation. Washington J. Braida ### Chapter 8. Shale Gas: A Solution to Turkey’s Energy Hunger? The aim of this short analysis is to answer whether shale gas can be a sustainable solution to Turkey’s long-term energy needs. Turkey, with no significant hydrocarbon reserves of her own, is vulnerable to the risks and challenges associated with energy import dependency. Having a fast-growing natural gas demand has caused Turkey to undertake many gas import contracts. In 2013, 98 % of the natural gas consumption is imported. Globally increasing natural gas prices and volatile Turkish Lira/US Dollar exchange rate have a series of ramifications, including a substantial burden on national budget and balance of payments. It is crucial for Turkey to reduce the share of imports in energy and to develop domestic resources in order to avoid exposure to relevant risks. In short, Turkey needs gas supply security. However, conventional natural gas reserves of Turkey are far from meeting its needs. Shale gas, in this frame, emerges as a buoyant potential for secure future gas deliveries. Given the example of unconventional gas frenzy in the USA, Turkey is now discussed as a long-term candidate for shale gas production. This possibility triggers high hopes, as well as unsupported expectations. Shale gas production has a long list of requirements: distinct geological formations, concordant conditions in surrounding area, advanced exploration and production technology, and capital-intense investments. Even if these conditions are fulfilled, environmental challenges of this production method are yet to be addressed and tackled diligently. Turkey is still on exploration phase of shale gas experience. It will take Turkey at least another decade to meet the requirements for tangible results and to name the shale gas as an answer to its energy hunger. Ilknur Yenidede Kozçaz Selenium and iodine are found in human body and primarily used in nutrition, and excess or absence of them can lead to diseases. 
Therefore, their possible dispersion to environment through mining and reprocessing of metals, combustion of coal and fossil fuel, nuclear accidents, or similar activities needs remediation. Adsorption is one of the useful techniques to remove pollutants. In this study, a factorial design is used to determine the effect of pH, concentration of adsorbate, and contact time upon adsorption. Adsorption capacities of radio-selenium and radio-iodine were evaluated for factorial design using activated carbons. The used activated carbon samples were prepared by chemical and physical activation methods. Radioactivity measurements were carried out by using high-resolution gamma spectroscopy system. Results of the research lead to provide useful information about energy generation and management processes by preventing hazardous elements’ dispersion to the environment. A. Beril Tugrul, Nilgun Karatepe, Sevilay Haciyakupoglu, Sema Erenturk, Nesrin Altinsoy, Nilgun Baydogan, Filiz Baytas, Bulent Buyuk, Ertugrul Demir ### Chapter 10. Assessment of Sustainable Energy Development In this study, optimum solutions and action plans for sustainable energy development are discussed. Reduction of CO2 emission could be realized with increasing nuclear and renewable energy usage, and efficiencies on fuel, power, electricity, and fossil fuels. In here, “ecosystems approach” is vital importance. Worldwide cooperation is the most important with the concepts of 6 Cs (credibility, capability, continuity, creativity, consistency, and commitment). Therefore, it can be successfully developed on sustainability, sharing with public, strategy and culture, procedures and evaluation together with 6 Cs. A. Beril Tugrul, Selahattin Cimen ### Chapter 11. Geothermal Energy Sources and Geothermal Power Plant Technologies in Turkey Geothermal energy is used for electric power generation and direct utilization in Turkey. The highest enthalpy geothermal sources are located in Western Anatolia; thus, geothermal power generation projects have also been realized in Western Anatolia since 1984. The present installed gross capacity for electric power generation is 345 MWe from 11 geothermal power plants in 2014, while new 395 MWe of capacity is still under construction or projected at 19 geothermal fields and will be completed in 2016–2017. In Turkey, flash cycle power plants are situated in Kızıldere (Denizli) and Germencik (Aydın) geothermal fields because of over than 230 °C geothermal reservoir temperatures. There are two different geothermal power plants in Kızıldere geothermal field that one is 17.2 MWe single-flash system and the new one consists of 60 MWe triple-flash + 20 MWe binary cycle, as total 80 MWe capacity. In Germencik, 47.4 MWe double-flash geothermal power plant uses power generation. Except from these three geothermal power plants, binary cycle (organic Rankine cycle, ORC) plants use under 200 °C geothermal reservoir temperatures in all installed capacities in Western Anatolia. New geothermal reservoir studies are still under investigation for the eastern part of Turkey. Because of the lower reservoir temperature values at these regions, the possible power generation cycle may be required to binary system (Kalina cycle). Fusun Servin Tut Haklidir ### Chapter 12. 
Structural Health Monitoring of Multi-MW-Scale Wind Turbines by Non-contact Optical Measurement Techniques: An Application on a 2.5-MW Wind Turbine Optical measurement systems utilizing photogrammetry and/or laser interferometry are introduced as cost-efficient alternatives to the conventional wind turbine/farm health-monitoring systems that are currently in use. The proposed techniques are proven to provide an accurate measurement of the dynamic behavior of a 2.5-MW, 80-m-diameter wind turbine. Several measurements are taken on the test turbine by using four CCD cameras and one laser vibrometer, and the response of the turbine is monitored from a distance of 220 m. The results of the infield tests show that photogrammetry (also can be called as computer vision technique) enables the 3-D deformations of the rotor to be measured at 33 different points simultaneously with an average accuracy of ±25 mm while the turbine is rotating. Several important turbine modes can also be extracted from the recorded data. Similarly, laser interferometry (used for the parked turbine) provides very valuable information on the dynamic properties of the turbine structure. Twelve different turbine modes can be identified from the obtained response data. The measurements enable the detection of even very small parameter variations that can be encountered due to the changes in operation conditions. Optical measurement systems are very easily applied on an existing turbine since they do not require any cable installations for power supply and data transfer in the structure. Placement of some reflective stickers on the blades is the only preparation that is necessary and can be completed within a few hours for a large-scale commercial wind turbine. Since all the measurement systems are located on the ground, a possible problem can be detected and solved easily. Optical measurement systems, which consist of several CCD cameras and/or one laser vibrometer, can be used for monitoring several turbines, which enables the monitoring costs of the wind farm to reduce significantly. Muammer Ozbek, Daniel J. Rixen ### Chapter 13. Stability Control of Wind Turbines for Varying Operating Conditions Through Vibration Measurements Wind turbines have very specific characteristics and challenging operating conditions. Contemporary MW-scale turbines are usually designed to be operational for wind speeds between 4 and 25 m/s. In order to reach this goal, most turbines utilize active pitch control mechanisms where angle of the blade (pitch angle) is changed as a function of wind speed. Similarly, the whole rotor is rotated toward the effective wind direction by using the yaw mechanism. The ability of the turbine to adapt to the changes in operating conditions plays a crucial role in ensuring maximum energy production and the safety of the structure during extreme wind loads. This, on the other hand, makes it more difficult to investigate the system from dynamic analysis point of view. Unexpected resonance problems due to dynamic interactions among aeroelastic modes and/or excitation forces can always be encountered. Therefore, within the design wind speed interval, for each velocity increment, it has to be proven that there are no risks of resonance problems and that the structure is dynamically stable. This work aims at presenting the results of the dynamic stability analyses performed on a 2.5-MW, 80-m-diameter wind turbine. 
Within the scope of the research, the system parameters were extracted by using the in-operation vibration data recorded for various wind speeds and operating conditions. The data acquired by 8 strain gauges (2 sensors on each blade and 2 sensors on the tower) installed on the turbine were analyzed by using operational modal analysis (OMA) methods, while several turbine parameters (eigenfrequencies and damping ratios) were extracted. The obtained system parameters were then qualitatively compared with the results presented in a study from the literature, which includes both aeroelastic simulations and in-field measurements performed on a similar size and capacity wind turbine. Muammer Ozbek, Daniel J. Rixen ### Chapter 14. Evaluation of HFO-1234YF as a Replacement for R134A in Frigorific Air Conditioning Systems The aim of this study was to compare and evaluate the frigorific air conditioning system using HFO-1234yf and R-134a as a refrigerant. For this aim, an experimental frigorific air conditioning system using both refrigerants was developed and refrigerated air was introduced into a refrigerated room. The performance parameters determined were the change of the air temperature in the condenser inlet and time. The performance of frigorific air conditioning system has been evaluated by applying energy analysis. Experiments were conducted for a standard frigorific air conditioning system using the R134a as a refrigerant. Airflow has been introduced to the refrigerated room for 60 min for each performance test. From the result for both refrigerant, the temperature gradient in time was comparable. The HFO-1234yf refrigerant can use the standard frigorific air conditioning system that is currently being used by the R134a refrigerant, without any changes needing to be made. Mehmet Direk, Cuneyt Tunckal, Fikret Yuksel, Ozan Menlibar ### Chapter 15. Biodiesel Production Using Double-Promoted Catalyst CaO/KI/γ-Al2O3 in Batch Reactor with Refluxed Methanol A benign process for biodiesel production has been developed using heterogeneous γ-alumina base as catalyst. This study was conducted using double-promoted catalyst CaO/KI/γ-Al2O3 to improve the activity of catalyst and this research was the first one which employs that kind of catalyst for biodiesel production. The preparation of the catalyst was conducted by precipitation and impregnation methods. The effects of reaction temperature, reaction time, and the ratio of oil to methanol on the yield of biodiesel were studied. The reactions were carried out in a batch-type reactor system which consists of three-neck glass flask with 1000-ml capacity equipped with reflux condenser and hot plate stirrer. Results showed that CaO/KI/γ-Al2O3 catalyst effectively increased the biodiesel yield about 1.5 times than that the single-promoted catalyst. The optimum condition for the production of biodiesel is as follows: the reaction temperature is 65 °C, the reaction time is 5 h, and the ratio of oil to methanol is 1:42. Under this optimum condition, the highest biodiesel yield of 95 % was obtained. Nyoman Puspa Asri, Bambang Pujojono, Diah Agustina Puspitasari, S. Suprapto, Achmad Roesyadi ### Chapter 16. I–V Characterization of the Irradiated ZnO:Al Thin Film on P-Si Wafers By Reactor Neutrons ZnO:Al/p-Si heterojunctions were fabricated by solgel dip coating technique onto p-type Si wafer substrates. Al-doped zinc oxide (ZnO:Al) thin film on p-Si wafer was irradiated by reactor neutrons at ITU TRIGA Mark-II nuclear reactor. 
Neutron irradiation was performed with neutron/gamma ratio at 1.44 × 104 (n cm−2 s−1 mR−1). The effect of neutron irradiation on the electrical characteristics of the ZnO:Al thin film was evaluated by means of current–voltage (I–V) characteristics for the unirradiated and the irradiated states. For this purpose, the changes of I–V characteristics of the unirradiated ZnO:Al thin films were compared with the irradiated ZnO:Al by reactor neutrons. The irradiated thin ZnO:Al film cell structure is appropriate for the usage of solar cell material which is promising energy material. Emrah Gunaydın, Utku Canci Matur, Nilgun Baydogan, A. Beril Tugrul, Huseyin Cimenoglu, Serco Serkis Yesilkaya ### Chapter 17. The Characteristic Behaviors of Solgel-Derived CIGS Thin Films Exposed to the Specific Environmental Conditions This study was performed to determine the time effect on optical properties of solgel-derived Cu(In,Ga)Se2 (CIGS) thin films at the specific environmental conditions. For this purpose, solgel-derived CIGS thin films were exposed to a variety of environmental conditions at different steps of the production process from the preparation of solution to deposition of substrate. The optical properties of the CIGS thin films changed with the rise of time at the specific environmental conditions such as the increase of the aging time of solgel solution (from 18 to 35 days) and the rise of the annealing time of the thin film (from 15 to 60 min). The CIGS solution was aged with the rise of time to investigate the effect of the aging time on optical properties of the thin film. The films were deposited by aged colloidal solution which kept at −5 °C in a dark environment in order to extend the useful life of solution. The color of the colloidal solution changed slightly with the increase in the elapsed time after the preparation of the solution. The increase of the annealing time has affected the optical behaviors of the CIGS thin films with the changes of the surface morphology. Utku Canci Matur, Sengul Akyol, Nilgun Baydogan, Huseyin Cimenoglu ### Chapter 18. Effect of Curing Time on Poly(methacrylate) Living Polymer Self-healing materials increase the reliability and life span of the overall systems that they are incorporated to. This capacity of the material enhances the credibility of the in-flight health assessment in aerospace platforms. Such platforms do not require visual or acoustic inspections to recover from the damages that occur during flight. This reduces the energy requirements associated with maintenance, replacement of parts, and off-line time. Poly(methacrylate) (PMMA) living polymer mixed with nanoparticles possesses miraculous properties such as minimized gas permeability, improved heat resistance, and boosted physical performance. In this study, PMMA is synthesized by the atom transfer radical polymerization (ATRP) method and the mechanical properties of the material have been manipulated by changing the curing time. The effects of curing time on mechanical properties are examined by the stereomicroscope images and Shore D hardness tests satisfying ASTM D785 test standards. Tayfun Bel, Nilgun Baydogan, Huseyin Cimenoglu ### Chapter 19. Effects of Production Parameters on Characteristic Properties of Cu(In,Ga)Se2 Thin Film Derived by Solgel Process Cu(In,Ga)Se2 (CIGS) thin films were obtained by solgel method on soda-lime glass substrates, economically. The optimum optical properties of CIGS thin films are obtained by varying the film layers. 
Besides, the solgel-derived CIGS thin films were thermally treated at different temperatures from 135 °C up to 200 °C. These results indicate that the transparent CIGS thin films derived by the solgel process can be good candidates for applications in optoelectronic devices. Sengul Akyol, Utku Canci Matur, Nilgun Baydogan, Huseyin Cimenoglu ### Chapter 20. Production of Poly(Imide Siloxane) Block Copolymers This work addresses some challenges in the manufacturing of flexible substrates to be used in solar cells. Poly(imide siloxane) block copolymers were produced with the same bis(aminopropyl) polydimethylsiloxane (APPS). The polyimide hard blocks were composed by using 4,4′-oxydianiline (ODA) and benzophenone-3,3′,4,4′-tetracarboxylic dianhydride (BTDA), while the polysiloxane soft blocks were derived by using APPS and BTDA. The length of the polysiloxane soft block increased with the length of the polyimide hard block. Hence, it was possible to obtain the copolymer structure and the corresponding changes in the physical properties of the copolymers. These copolymers were characterized by FT-IR analysis to evaluate the structure of the flexible substrates. Turkan Dogan, Nilgun Baydogan, Nesrin Koken ### Chapter 21. Government Incentives and Supports for Renewable Energy Although energy is part of our lives, we have paid a great cost for it. Because it has great economic value, we produced energy carelessly and polluted the Earth. In the end, we understood the value of the environment and gave up our bad habit of extreme, polluting production methods. Secondly, we realized that energy resources are not unlimited. These circumstances have led us toward clean energy consumption and production. Governments must promote renewable energy consumption and production. Renewable energy consumption and production cannot increase without government help or action, so governments have to be pioneers of renewable energy. Legal provisions are also important for civil rights (both public and private). Renewable energy actions may violate fundamental rights and freedoms such as property and health rights, and also commercial rights like competition and the right of initiation. These issues must be regulated by governments under administrative law principles. The clean energy sector is not only a need but also a market. Although it is mostly a private sector area, it still has a public interest dimension. Münci Çakmak, Begüm İsbir ### Chapter 22. Comparison of the Relationship Between CO2, Energy USE, and GDP in G7 and Developing Countries: Is There Environmental Kuznets Curve for Those? The increasing attention to greenhouse gas (GHG) emissions all around the world has led researchers to investigate their causes. Energy use and economic growth can be categorized as the most important roots of this problem. The objective of this study is to investigate the causal relationship between energy use (EU), economic growth (GDP), and CO2 emission within two groups of countries: the G7 and the developing countries. Then, we examined the environmental Kuznets curve (EKC) hypothesis for these countries to see whether it is proved for them or not. To do so, seven developed and six developing countries are selected, and the annual data samples covering the period between 1993 and 2011 are gathered.
To investigate the causal relationship between the variables EU, GDP, and CO2, we applied the techniques of panel data analysis, examining unit root and cointegration tests, based on the EKC equation. The results show a causal and long-term relationship for the two groups of countries. Also, the Kuznets curve hypothesis is confirmed for the G7 countries, and it is shown that the relationship between these variables is inverted-U shaped, with a nonzero negative coefficient for the square of GDP. However, for developing countries, this hypothesis is rejected, because the coefficient of $${\text{GDP}}^{2}$$ is found to be close to zero, and thus a linear relationship is found between these variables. Mahdis Nabaee, G. Hamed Shakouri, Omid Tavakoli ### Chapter 23. Identification and Analysis of Risks Associated with Gas Supply Security of Turkey In this study, a detailed risk assessment was carried out for the purpose of identifying risks associated with the security of gas supply in Turkey. Moreover, the risks to the country are analysed according to Regulation 994/2010, which was issued by the European Parliament to safeguard the security of gas supply and was adopted after gas supply interruptions had been suffered. One of the means considered in the regulation to achieve this target is performing a full risk assessment. Recent disruptions in the Turkish gas network call for performing an extensive risk assessment for Turkey as well. In this paper, the gas supply security of Turkey is discussed and its impacts on security in the country are evaluated in the context of Regulation 994/2010 issued by the European Union (EU). Umit Kilic, A. Beril Tugrul ### Chapter 24. The Social Cost of Energy: External Cost Assessment for Turkey The social or full costs of energy sources, which include the external cost plus the private cost, are the most important criteria for energy and environmental policy making. Energy policy making is concerned with both the supply side and the demand side of energy provision. On the energy supply side, deciding on alternative investment options requires knowledge of the full cost of each energy option under scrutiny. On the demand side, social welfare maximisation should lead to the formulation of energy policies that steer consumers' behaviour in a way that will result in the minimisation of costs imposed on society as a whole. Demand-side policies can benefit significantly from the incorporation of full energy costs in the corresponding policy formulation process. The geographical dimension is also important since environmental damage from energy production crosses national borders. Hence, a consistent set of energy costs allows a better understanding of the international dimensions of policy decisions in these areas. This paper, focusing on classical pollutants, tries to assess external costs from human health damages, damages to buildings, crop losses and biodiversity impacts. To this aim, first, emissions data have been drawn from the European Monitoring and Evaluation Programme (EMEP) database. Then, these emissions data have been transformed into monetary terms using the results of the Cost Assessment for Sustainable Energy Systems (CASES) project for the years 2000 and 2010. The results have been discussed in the context of energy and sustainability. Aylin Çiğdem Köne ### Chapter 25.
### Chapter 25. Energy Infrastructure Projects of Common Interest in the SEE, Turkey, and Eastern Mediterranean and Their Investment Challenges

The European Union's energy strategy for the period up to 2020 builds on eight priority corridors for electricity, gas, and oil. Accordingly, in October 2013, 248 energy infrastructure projects were selected and assessed from a European perspective as the most critical to implement. In particular, for a project to be included in the list, it has to bear significant benefits for at least two member states, contribute to market integration and further competition, enhance security of supply, and reduce CO2 emissions. Furthermore, through their designation as "projects of common interest" (PCI), they are intended to benefit from more rapid and efficient permit-granting procedures and improved regulatory treatment (European Union, 2014a). The aims of this working paper are, firstly, to introduce and briefly discuss the priority corridors and thematic areas that must be implemented in the coming decade to assist the EU in meeting its short- and long-term energy and climate objectives; secondly, to outline the key projects of common interest specifically in the South East European, Turkish, and eastern Mediterranean regions; and finally, to present the major investment and financial challenges associated with the undertaking and implementation of these projects.

Panagiotis Kontakos, Virginia Zhelyazkova

### Chapter 26. Incorporating the Effect of Time-of-Use Tariffs in the Extended Conservation Supply Curve

The conservation supply curve (CSC), a plot of the cost of conserved energy (CCE) versus cumulative energy conserved, allows an economic comparison of multiple energy conservation measures (ECMs), leading to the identification of economically feasible ones. Its major advantages are the separation of the cost of implementing such measures from their benefit and its independence from the fuel price. Different ECMs save different amounts of energy during different periods of the day. Hence, time-of-use (TOU) tariffs, implemented to incentivise electrical load management and energy conservation, affect their cost-saving potentials. The CSC, in its present form, does not incorporate the effect of such tariffs. The primary objective of this work was to propose a methodology to extend and modify the CSC to incorporate the effects of these tariffs while still allowing the same inferences to be drawn as from the original CSC. In doing so, energy profiles of the measures (a set of saving potentials over a particular time period, e.g. 24 values for the 24 h of the day) and TOU charges (rates over and above the base energy charge, calculated over the same time period) are used to arrive at a value indicating the impact of the TOU rates on said measures, as sketched below. These values, along with the CCEs of the measures, are used to generate the modified CSC. Applicability of the proposed methodology is demonstrated with an illustrative example. Current research is focused on accounting for multiple types of fuels saved by conservation measures.
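The combination step described above can be pictured with a small sketch. The following Python fragment is only one plausible illustration of the idea, not the chapter's actual formulation (the abstract does not give one), and every number in it is a placeholder:

```python
# Hedged sketch: combine each ECM's hourly saving profile with time-of-use (TOU)
# rate adders, then report that TOU-impact value alongside the measure's CCE,
# as the abstract above describes. All figures are invented for illustration.

def tou_weighted_value(hourly_savings_kwh, tou_adder_per_kwh):
    """Value of the savings attributable to TOU adders alone:
    sum over 24 h of (energy saved in hour h) x (TOU adder in hour h)."""
    return sum(s * r for s, r in zip(hourly_savings_kwh, tou_adder_per_kwh))

# TOU adders on top of the base energy charge ($/kWh), 24 hourly values
tou_adder = [0.00] * 7 + [0.05] * 4 + [0.10] * 6 + [0.05] * 4 + [0.00] * 3

# Two hypothetical ECMs: (name, cost of conserved energy $/kWh, hourly savings kWh)
ecms = [
    ("LED retrofit",     0.03, [1.0] * 24),                           # flat savings all day
    ("Chiller setpoint", 0.05, [0.0] * 8 + [3.0] * 10 + [0.0] * 6),   # daytime savings only
]

# Extended CSC: sort measures by CCE (as in the original CSC) and report the
# TOU-impact value alongside, so the same feasibility inferences can be drawn.
for name, cce, profile in sorted(ecms, key=lambda e: e[1]):
    impact = tou_weighted_value(profile, tou_adder)
    print(f"{name:16s}  CCE = {cce:.2f} $/kWh   TOU impact = {impact:.2f} $/day")
```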
### Chapter 27. Management of Distribution System Protection with High Penetration of DGs

As a result of deregulation and the emergence of distributed generation (DG) in distribution networks, protection problems and challenges arise that require immediate solutions from design engineering. The present generation of numerical protection relays allows the implementation of adaptive settings for distribution system protection, especially in systems with a high penetration of DG. This chapter presents an overview of the use of communication infrastructure to improve some aspects of distribution system protection, especially adaptive protection. One of the key issues has been to verify that traditional network protection schemes and settings are simply not adequate when DG systems are connected to the network. The impact of a DG unit increases with the size of the generator and with the length of the line section between the DG unit and the fault. The results have provided a clear indication of the potential protection problems that need to be solved by careful protection design. Some solutions are proposed in this chapter through communication and intelligent electronic devices (IEDs), based on power system simulation studies using ETAP software. The results will be used to focus further research and development in protection systems and concepts.

Abdelsalam Elhaffar, Naser El-Naily, Khalil El-Arroudi

### Chapter 28. Assessment of Total Operating Costs for a Geothermal District Heating System

District heating systems (DHSs), especially geothermal ones, are an important class of heating, ventilating, and air-conditioning systems. In many countries and regions of the world they have been successfully installed and operated, resulting in great economic savings. In recent years, such systems have received much attention with regard to improving their energy efficiency, equipment operation, and investment cost. Improving the performance of a geothermal district heating system (GDHS) is a very effective means of decreasing energy consumption and providing energy savings. To assess the potential energy savings in a GDHS, an advanced exergoeconomic analysis is applied to a real GDHS in the city of Afyon, Turkey. The system is then evaluated based on the concepts of exergy destruction cost and investment cost. The results show that the advanced exergoeconomic analysis makes the information more accurate and useful and supplies additional information that cannot be provided by the conventional analysis. Furthermore, the Afyon GDHS can be made more cost-effective by removing the irreversibilities of the system components, technical-economic limitations, and poorly chosen manufacturing methods.

Harun Gökgedik, Veysel İncili, Halit Arat, Ali Keçebaş

### Chapter 29. How the Shadow Economy Affects Enterprises of Finance of Energy

The aim of this paper is to present how the shadow economy and corruption can affect enterprises operating in the energy finance sector. The economic damage is extensive in every national economy where increased levels of the shadow economy and corruption exist. Accordingly, this study presents possible measures that can decrease the shadow economy and corruption. Enterprises in energy finance can provide reliable, competitive, and consistent delivery of customized solutions according to the client's needs; these include project finance, recapitalizations, single assets, and portfolio credits. Many countries have started to target the shadow economy and corruption, since they impede the achievement of fiscal targets and harm the overall business environment and the country's attractiveness for foreign investment. In the article, the operation of energy service companies (ESCOs) is used as a case study.

Aristidis Bitzenis, Ioannis Makedos, Panagiotis Kontakos
### Chapter 30. Energy Profile of Siirt

Siirt Province has various natural and fossil energy sources, such as solar, hydropower, biogas, geothermal, and petroleum, and thus a notable energy potential in contrast with other provinces in Turkey. The data collected in Siirt Province indicate that Siirt has a strong potential for solar energy. The average sunshine duration and total solar radiation in Siirt are about 7.5 h/day and 4.3 kWh/m2-day, respectively. In Siirt, there are two hydropower plants with a total installed power of 263 MW. In addition, more than 10 hydroelectric power plants with a total installed capacity of 1094 MW will be established through planned dam installations. Regarding the establishment of biogas systems, Siirt has a potential annual biogas production of 20,000 m3 from around 500,000 small ruminants. Siirt Province is believed to be rich in geothermal resources, but there has not yet been enough research on this topic. Petroleum has also recently become an important energy source in Siirt Province. As a result, Siirt Province has a rich variety of energy resources and, with investment, it could become an energy basin center in the Southeast Anatolia region.

Omer Sahin, Mustafa Pala, Asım Balbay, Fevzi Hansu, Hakan Ulker
2018-10-22 05:26:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38372310996055603, "perplexity": 2656.298125985507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514708.24/warc/CC-MAIN-20181022050544-20181022072044-00403.warc.gz"}
https://www.nature.com/articles/s41598-019-42186-x?error=cookies_not_supported&code=a1a72a0a-5f93-4c5f-9e21-c9fcd9dbc760
# Gelatin methacryloyl and its hydrogels with an exceptional degree of controllability and batch-to-batch consistency ## Abstract Gelatin methacryloyl (GelMA) is a versatile material for a wide range of bioapplications. There is an intense interest in developing effective chemical strategies to prepare GelMA with a high degree of batch-to-batch consistency and controllability in terms of methacryloyl functionalization and physiochemical properties. Herein, we systematically investigated the batch-to-batch reproducibility and controllability of producing GelMA (target highly and lowly substituted versions) via a one-pot strategy. To assess the GelMA product, several parameters were evaluated, including the degree of methacryloylation, secondary structure, and enzymatic degradation, along with the mechanical properties and cell viability of GelMA hydrogels. The results showed that two types of target GelMA with five batches exhibited a high degree of controllability and reproducibility in compositional, structural, and functional properties owing to the highly controllable one-pot strategy. ## Introduction Protein-based hydrogels including collagen, fibrin, and Matrigel®, as natural extracellular matrices (ECM) analogues, have been widely investigated for various applications, like three-dimensional (3D) culture, tissue engineering, and regenerative medicine, because they are biocompatible and have excellent innate bioactive properties such as enzyme degradation and good control over cellular activities (adhesion, proliferation, migration, and differentiation)1,2,3,4. Matrigel® is one of the most popular protein-based hydrogels5,6,7. Matrigel® is a gelatinous protein mixture derived from Engelbreth-Holm-Swarm mouse sarcoma tumors and contains mainly proteins (laminin, type IV collagen, and entactin) and a small portion of proteoglycans and growth factors5. It has been found that Matrigel® is a useful hydrogel for culture of cancer stem cells and stem cells, owing to its ability of improved cell-cell and cell material interactions8,9. However, Matrigel® materials confront some challenges such as high batch-to-batch differences of their composition and poor control over mechanical properties, which can affect physiochemical and biological properties and elicit uncontrolled responses from cells4,5. In addition, Matrigel® materials are practically not appropriate for clinical applications because they are originated from tumors. Gelatin methacryloyl (GelMA) has been investigated as a potential alternative to Matrigel® for 3D culture systems and bioapplications10,11. GelMA is an engineered gelatin-based material that has been proven to be versatile for tissue engineering, drug delivery, and 3D printing applications12,13,14,15,16,17,18,19. GelMA displays some important features such as biocompatibility, enzymatic cleavage (degradation in response to matrix metalloproteinases, MMPs), cell adhesion (arginine-glycine-aspartic acid, RGD sequences), and tailorable mechanical properties14,15. GelMA can be prepared through simple synthesis of gelatin with methacrylic anhydride (MAA), and its methacryloyl functionalization (or the degree of substitution (DS); the degree of methacryloylation (DM)) can be adjusted via a feed ratio of gelatin to MAA20,21,22,23. The DS of GelMA is one of the main factors that can influence biophysiochemical properties of GelMA and its photocured hydrogels. Recently, GelMA has been commercially available through some vendors such as Sigma-Aldrich. 
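As a concrete illustration of that feed-ratio control, consider the highly substituted target prepared later in this article: 10 g of type B gelatin contains about 3.18 mmol of free amino groups and is reacted with MAA at a 1.859:1 mole ratio of MAA to amino groups. Assuming a molar mass of about 154.2 g mol−1 and a density of about 1.035 g mL−1 for MAA (nominal values not quoted in the paper) together with the 94% reagent purity stated in the Methods, the required volume works out as

$$n_{\mathrm{MAA}} = 1.859 \times 3.18\,\mathrm{mmol} \approx 5.91\,\mathrm{mmol},\quad m_{\mathrm{MAA}} \approx 5.91\times 10^{-3}\,\mathrm{mol}\times 154.2\,\mathrm{g\,mol^{-1}} \approx 0.91\,\mathrm{g},\quad V_{\mathrm{MAA}} \approx \frac{0.91\,\mathrm{g}/0.94}{1.035\,\mathrm{g\,mL^{-1}}} \approx 0.94\,\mathrm{mL},$$

which is consistent with the 0.938 mL of MAA used for the DS = 100% batches (and, analogously, 0.317 mL for the DS = 60% batches) in the Methods.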
Therefore, there is a wide interest in developing effective methods to prepare GelMA with high reproducibility and controllability in terms of composition and biophysiochemical properties. Moreover, in most of the GelMA studies, only one batch of GelMA has been utilized for their research investigations12,16,22,23; a recent study dealing with three batches of various GelMA materials exhibited relatively high standard deviations in their methacryloyl functionalization and mechanical properties, potentially owing to their less controllable GelMA synthesis system and a subsequent batch-to-batch difference24. As far as we know, there is no systematic report regarding a batch-to-batch difference of GelMA and its hydrogel properties. In this report, we investigated the batch-to-batch consistency and controllability of GelMA and its hydrogels. First, we prepared target GelMA samples (highly and lowly substituted versions: DS100_1~5 and DS60_1~5) with five batches via a one-pot synthesis strategy and then evaluated GelMA products in terms of synthesis, methacryloylation, protein structure, mechanical properties, degradation, and cell viability. Here, we demonstrate that target GelMA synthesized via the one-pot synthesis scheme showed a high degree of batch-to-batch consistency and controllability over their yields, degrees of substitution (DS), and secondary structure, as well as swelling, stiffness, degradation, and cell viability of their hydrogels. ## Results and Discussion ### Controllable preparation of highly and lowly substituted gelatin methacryloyl with five batches (DS100_1~5 and DS60_1~5) GelMA samples with five different batches were synthesized in the CB buffer system by a one-pot method as illustrated in Fig. 1. Modified synthesis parameters (10 (w/v)% gelatin, 0.25 M CB buffer, a reaction time of 1 h, and reaction temperature of 55 °C) were utilized according to the literature as seen in Table 120. In this study, two types of GelMA samples (target degrees of substitution (DS): DS = 100% and 60%) with five batches were synthesized with feeding mole ratios of MAA to amino groups of gelatin at 1.859:1 and 0.628:1, respectively. GelMA samples (DS = 100% and 60%) with different batches were labeled as DS100_1~5 and DS60_1~5. The obtained products appeared white yellowish. The yields of all GelMA products were around 90% (92%, 93%, 88%, 90%, and 88% for DS100_1~5 and 92%, 94%, 92%, 91%, and 92% for DS60_1~5, respectively), indicating that the current one-pot GelMA batch process can produce consistent yields. The main challenge of GelMA synthesis is to precisely control the DS and properties of GelMA in every batch because less controllable reaction systems can lead to less controllable outcomes of GelMA, as seen in Table S115. There are many parameters involved in the reaction of gelatin and methacrylic anhydride (MAA) such as pH, temperature, reaction time, a gelatin concentration, a buffer system, a mole ratio of gelatin and MAA, and stirring speed. The crucial thing of GelMA synthesis is to maintain the pH of the reaction solution since the byproduct (methacrylic acid, MA) can decrease the pH of the solution during the reaction, hindering the forward reaction owing to the protonation of free amino groups. To this end, sequential or dropwise addition of MAA was employed to favor the forward reaction while the pH of the solution was adjusted simultaneously21,22,25. However, this method demands and depends on additional labor, which may be less controllable. Recently, Sewald et al. 
reported gelatin type A and type B methacryloyl with various degrees of substitution (DS) using a reaction system of PBS and pH adjustment. Even though they successfully prepared three batches of GelMA with various DS, GelMA materials exhibited relative high standard deviations in terms of methacryloylation and swelling/mechanical properties24. Recent studies reported that a carbonate-bicarbonate (0.25 M CB) buffer could be superior to phosphate buffer saline (0.01 M PBS) in terms of rendering free amino groups reactive via deprotonation and buffering capacity20,22. In this respect, a one-pot reaction strategy using the CB buffer at around pH 9 (above the isoelectric point of gelatin) could be ideal, and easy to control the reaction parameters20. On the other hand, the CB buffer at even higher pH (pH 11 and 12) can degrade quickly MAA as well as the formed methacrylate groups through hydrolysis, which is not so effective for GelMA synthesis26. The buffer capacity of the CB buffer was found to be optimal at around 0.25 M. Another important thing of GelMA synthesis is to improve the miscibility of two reactants (gelatin and MAA) because gelatin is soluble in warm water whereas MAA is insoluble in water. A high concentration of gelatin (above 10%) and a stirring rate of above 500 rpm can be conducive to the homogeneous mixing and reaction of gelatin and MAA because amphiphilic gelatin can serve also as a surfactant, and the high stirring speed may enlarge the reaction interface of gelatin and MAA via stabilizing MAA dispersion in the gelatin solution, subsequently leading to production of homogeneously reacted GelMA. Reaction temperature is also important to completely dissolve gelatin; a temperature between 30–60 °C is acceptable but a temperature above 60 °C might accelerate the backbone degradation of gelatin. A temperature at around 55 °C helps to rapidly dissolve gelatin. That is why the reaction temperature (55 °C) was chosen in this synthesis system. The reaction between gelatin and MAA normally can be complete within 1 hour20. Therefore, our one-pot method employing reaction parameters (10 (w/v)% gelatin, 0.25 M CB buffer, a reaction time of 1 h, reaction temperature of 55 °C, an initial pH of 9.4, and a reaction rate of 500 rpm) is exceptionally easy to control the parameters in every batch, resulting in good quality control of GelMA production. In addition, in our system, tangential flow filtration can reduce the dialysis time from several days to several hours through effectively removing the impurities of methacrylic acid and methacrylic anhydride. ### Consistency of the degree of substitution of target GelMA batches (DS100_1~5 and DS60_1~5) In GelMA production, reproducible methacryloyl functionalization of GelMA is a crucial factor for GelMA with different batches to display consistent hydrogel properties such as swelling behavior, mechanical properties, and degradation after photopolymerization. The amount of methacryloyl groups (AM) in GelMA was quantified by 1H-NMR, TNBS, and Fe(III)-hydroxamic acid-based assays (AMNMR, AMTNBS, and AMFe(III), respectively). Basically, 1H-NMR spectroscopy using TMSP as an internal reference can offer the quantification of both methacrylamide and methacrylate groups in GelMA simultaneously, whereas two different colorimetric methods (TNBS and Fe(III)-hydroxamic assays) provide the quantification of methacrylamide and methacrylate groups in GelMA, respectively25,26,27. 
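The conversion from these assay readouts to a degree of substitution, described in the next paragraph, is a simple normalisation to the free amino-group content of unmodified gelatin (0.3184 mmol g−1, the TNBS value given in the Methods). A minimal Python sketch of that bookkeeping, using invented readouts rather than the paper's measured data and mimicking the paper's one-way ANOVA batch comparison, is:

```python
# Illustrative sketch only: converts a measured methacryloyl content (AM, mmol/g)
# into a degree of substitution (DS, %) by normalising to the free amino-group
# content of unmodified gelatin (0.3184 mmol/g, the TNBS value from the Methods).
# The AM numbers below are invented, not the paper's measured data.
from scipy import stats

AM_GELATIN_NH2 = 0.3184  # mmol of free amino groups per gram of gelatin (TNBS)

def degree_of_substitution(am_mmol_per_g: float) -> float:
    """DS (%) = AM / 0.3184 * 100, as defined in the Methods."""
    return am_mmol_per_g / AM_GELATIN_NH2 * 100.0

# Hypothetical colorimetric readouts for five batches of a highly substituted GelMA
am_tnbs  = [0.316, 0.313, 0.317, 0.313, 0.318]   # methacrylamide, mmol/g
am_feIII = [0.010, 0.010, 0.009, 0.010, 0.010]   # methacrylate,   mmol/g

ds_color = [degree_of_substitution(a + b) for a, b in zip(am_tnbs, am_feIII)]
print([round(ds, 1) for ds in ds_color])   # ~100-103%, i.e. a "DS100"-type batch set

# Batch-to-batch comparison in the spirit of the paper's one-way ANOVA
# (here each "batch" is a list of fake triplicate DS measurements).
replicates = [[ds - 0.5, ds, ds + 0.5] for ds in ds_color]
f_stat, p_value = stats.f_oneway(*replicates)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
```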
The amount of methacryloyl groups (mmole g−1) in GelMA can be converted to the degree of substitution (DS; %) through normalization to the amount of the free amino group of original gelatin. Thus, AMNMR amounts to DSNMR whereas the sum of AMTNBS and AMFe(III) leads to DScolor. 1H-NMR spectra were used for determining the amount of methacrylate and methacrylamide groups in GelMA products, as well as for identifying the presence of the byproduct (methacrylic acid) as presented in Fig. 2. In comparison with the 1H-NMR spectra of gelatin (Fig. 2a,d), new proton peaks belonging to methacryloyl groups of GelMA appeared between 6.1–5.4 ppm and at 1.9 ppm, and apparently the free lysine signal (NH2CH2CH2CH2CH2-) of the unmodified gelatin at 3.0 ppm decreased markedly in DS60_1 and DS100_1 samples. DS100_1 displayed specific chemical shifts between 5.7–5.6 and 5.5–5.4 ppm for acrylic protons (CH2=C(CH3)CONH-) of methacrylamide groups and at 1.9 ppm for methyl protons (CH2=C(CH3)CO-) of methacryloyl groups, as well as additional small peaks at 6.1 and 5.7 ppm for acrylic protons (CH2=C(CH3)COO-) of methacrylate groups, whereas DS60_1 appeared to show only some specific peaks at about 5.7, 5.5, and 1.9 ppm ascribing to methacrylamide groups (CH2=C(CH3)CONH-) in GelMA. Also, DS100_1 showed a higher peak intensity at 5.7, 5.5, and 1.9 ppm compared to DS60_1. In the 1H-NMR spectra, GelMA samples (DS100_1~5 and DS60_1~5) with five different batches showed almost no batch-to-batch difference in terms of methacryloyl functionalization, as seen in Fig. 2b,c. Additionally, all the spectra demonstrated that in all GelMA products there remained little methacrylic acid (the byproduct) whose specific peaks normally appear at 5.7, 5.3 and 1.8 ppm. Quantitative results as to the amount of methacryloyl groups (methacrylate and methacrylamide: AM) in DS100_1~5 and DS60_1~5 are summarized in Table 2. The amount of methacryloyl (AM) of DS100_1~5 was recorded by NMR, TNBS, and Fe(III) assays, whereas that of DS60_1~5 was recorded by NMR and TNBS methods since methacrylate groups in DS60_1~5 were not detected by colorimetric Fe(III)-based assay, as displayed in Fig. 3a,b. This was because, in lowly substituted GelMA (DS60_1~5), methacrylic anhydride could dominantly react with free amino groups of lysine and hydroxylysine, resulting in the formation of methacrylamide. The AMNMR value of each GelMA was similar to the sum of AMTNBS and AMFe(III) of each GelMA. Each GelMA group showed a similar degree of substitution (DS; the methacryloyl functionalization). DS100_1~5 samples exhibited DScolor values of 102.29 ± 0.38, 101.32 ± 0.31, 102.54 ± 0.94, 101.31 ± 0.13, 102.83 ± 1.10, respectively (n = 3, one-way ANOVA, p = 0.139). Lowly substituted GelMA, DS60_1~5, had DScolor values of 58.62 ± 1.27, 60.84 ± 0.90, 59.54 ± 1.18, 58.50 ± 1.20, 61.47 ± 1.09, respectively (n = 3, one-way ANOVA, p = 0.063). Overall, the results regarding the DS of GelMA (DS100_1~5 and DS60_1~5) demonstrate that the current one-pot batch method for highly and lowly substituted GelMA can produce GelMA with desired DS values and little batch-to-batch variation. ### Consistency of the secondary structure of GelMA batches Gelatin exhibits partial triple helix formation at a low temperature in aqueous solutions and forms random coil structure upon heating. Its transition from triple helix to random coil is reversible. 
In comparison with gelatin, GelMA samples (DS100_1~5 and DS60_1~5) were expected to retain a certain degree of the secondary structure of gelatin even though the methacryloyl functionalization of GelMA can potentially interfere with helix formation23,28. Figure 4 shows the CD spectra of gelatin, highly substituted GelMA (DS100_1~5), and lowly substituted GelMA (DS60_1~5) that provide the information of their secondary structure at 4 °C and 37 °C. As presented in Fig. 4a,b, DS100_1~5 and DS60_1~5 displayed similarly a distinct rise in the intensity at 199 nm at 4 °C, compared with gelatin. The intensity of highly substituted GelMA (DS100_1~5) at 199 nm at 4 °C, ascribing to a portion of random coil formation, was slightly higher than that of lowly substituted GelMA (DS60_1~5), suggesting that higher methacryloyl functionalization of GelMA could further elicit random coil formation. On the other hand, the triple-helix contents of GelMA samples (DS100_1~5 and DS60_1~5) at 222 nm at 4 °C decreased markedly, compared with gelatin. DS100_1~5 exhibited a slightly lower intensity at 222 nm than DS60_1~5, indicating that lowly substituted GelMA could retain a more amount of the triple-helix formation at 4 °C than highly substituted GelMA. Additionally, GelMA with a higher DS (DS100_1) exhibited a less temperature-sensitive phase transition (helix-random coil transition) compared with GelMA with a lower DS (DS60_1), as seen in Fig. S1. It is speculated that the methacryloyation of free amino groups or hydroxyl groups in gelatin chains could reduce interchain or intrachain hydrogen bonding in the triple helix, leading to an increase in the random coil portion and a decrease in the triple helix formation23. Glycine-Proline-hydroxyproline tripeptides have been found to participate in the triple helix formation29,30,31. Hydroxyl groups of hydroxyproline can react with methacrylic anhydride (MAA) especially in a high feed of MAA25,26. Highly substituted GelMA (DS100_1~5) possessed the methacrylate group of around 0.01 mmole g−1 most likely from the reaction of hydroxyproline and MAA, which is presumed to obstruct partially the triple-helix formation. On the other hand, all GelMA as well as gelatin showed similar patterns in the CD spectra at 37 °C and exhibited a large increase in the intensity at 199 nm relative to the samples at 4 °C, as seen in Fig. 4c,d, indicating that GelMA materials including gelatin experience a helix-coil transition on heating. Highly substituted GelMA (DS100_1~5) showed a slightly higher intensity at 199 nm than lowly substituted GelMA (DS60_1~5). The triple-helix contents of all GelMA and gelatin at 222 nm at 37 °C decreased significantly compared with those at 4 °C, indicating that GelMA samples (DS100_1~5 and DS60_1~5) as well as gelatin appeared to completely lose the triple-helix formation and behaved like random coils at 37 °C. In addition, the CD spectra patterns of each GelMA group (DS100_1~5 or DS60_1~5) were almost the same even at different temperatures, meaning that each GelMA group with five batches was consistent in the secondary structure formation. GelMA (DS100_1~5 and DS60_1~5) displayed a higher degree of consistency not only in the composition of methacryloyl, but also in the protein secondary structure. 
### Consistency of hydrogel properties of GelMA batches (DS100_1~5 and DS60_1~5) Gelatin undergoes only physical gelation at a low temperature whereas GelMA can form a physical gel at a low temperature and additionally form a chemical hydrogel via photopolymerization owing to photosensitive methacryloyl functionalization. GelMA hydrogels exhibit tailorable swelling behavior and mechanical properties, which depend on mainly their degree of substitution, their concentration and selected curing parameters (light intensity, exposure time of irradiation, and amounts of an initiator). Here, the batch-to-batch variation of hydrogel properties of GelMA (DS100_1~5 and DS60_1~5) was investigated in terms of swelling and mechanical stiffness. First, GelMA hydrogels were fabricated through a simple method: Each 20 (w/v)% GelMA solution containing 0.5 (w/v)% I2959 was placed in a mold (8 mm in diameter and 1 mm in thickness) and then cured by 365 nm UV light (3.5 mW cm−2 and 5 minutes). As shown in Fig. 5a, GelMA (DS100_1~5 and DS60_1~5) bulk hydrogels exhibited structural integrity after photo-crosslinking. When GelMA hydrogels were soaked in DI water at an elevated temperature (50 °C) to expedite the swelling process, they began to swell and reached a swelling equilibrium within 60 min. In addition, any degradation of DS100 and DS60 hydrogels was not observed at the elevated temperature during the swelling test. DS100_1~5 hydrogels exhibited a lower swelling degree (%) compared with DS60_1~5 hydrogels. Swelling degrees of DS100_1~5 hydrogels were 1156 ± 12, 1148 ± 26, 1144 ± 23, 1155 ± 12, and 1152 ± 17%, respectively (n = 3, one-way ANOVA, p = 0.489) whereas those of DS60_1~5 were 2707 ± 39, 2726 ± 42, 2759 ± 36, 2733 ± 22, and 2726 ± 13%, respectively (n = 3, one-way ANOVA, p = 0.078). DS100_1~5 hydrogels should have a higher crosslinking density and a smaller mesh size compared to DS60_1~5, owing to a higher degree of methacryloyl functionalization, subsequently leading to less swelling. Additionally, the results demonstrated that each GelMA hydrogel group (DS100_1~5 or DS60_1~5) showed a higher degree of consistency in swelling behavior. The swelling of hydrogels is an important feature for the diffusion behavior of small molecules (nutrients and waste) in cell culture and drug delivery systems. Consistent swelling behavior of GelMA (DS100_1~5 or DS60_1~5) hydrogels could be used as a predictable basic reference for various bioapplications. As to mechanical properties of GelMA (DS100_1~5 and DS60_1~5) hydrogels at 20 (w/v)%, highly substituted GelMA materials (DS100_1~5) with an average of 30.20 ± 0.57 kPa were 1.9-fold stiffer than lowly substituted GelMA (DS60_1~5) with an average of 16.04 ± 0.93 kPa. As seen in Fig. 5b, GelMA (DS100_1~5 and DS60_1~5) hydrogels exhibited little batch-to-batch variance in their mechanical properties. Storage moduli of DS100_1~5 were 29.46 ± 3.89, 30.61 ± 2.97, 29.91 ± 1.60, 30.94 ± 4.08, and 30.05 ± 1.73 kPa (n = 3, one-way ANOVA, p = 0.975) whereas those of DS60_1~5 were 16.22 ± 0.99, 16.88 ± 0.79, 16.41 ± 2.09, 14.46 ± 1.34, and 16.24 ± 1.91 (n = 3, one-way ANOVA, p = 0.398). Recently, Seward et al. reported mechanical properties of GelMA with various degrees of methacrylation (0.3~1.0 mmole g−1), and the storage modulus of each GelMA showed relatively high standard deviations potentially owing to a high batch-to-batch variance24. 
Even mechanical properties of lowly methacrylated GelMA with the methacrylation of around 0.3 mmole g−1 were not statistically different from those of highly methacrylated GelMA with the methacrylation of around 0.6 mmole g−1, assumingly because of the much influence of the physical crosslinking of lowly methacrylated GelMA. In our case, the mechanical and swelling properties of GelMA hydrogels showed a direct correlation with the degree of methacrylation. It is potentially because the hydrogels were prepared above 37 °C, which could rule out the physical gelation. The chemical gelation and crosslinking density of GelMA hydrogels formed by light could be a dominant factor of determining their mechanical properties and swelling behavior, resulting in distinct mechanical properties of each GelMA group and a low batch-to-batch difference within each GelMA group. GelMA hydrogels with a high degree of consistency and tailorability in mechanical properties could be a highly versatile tool for tissue engineering applications since tunable mechanical properties of soft hydrogels have been used to regulate cellular behavior such as proliferation, migration, and differentiation32. ### Consistency of biodegradability and cell viability of GelMA (DS100_1~5 and DS 60_1~5) hydrogels Biodegradability has gained considerable attention in drug delivery and tissue engineering applications as a desirable feature of hydrogel materials. GelMA hydrogels exhibit enzymatic degradation properties; indeed, GelMA retains enzyme-sensitive sequences (proline-X-glycine-proline-, X: a neutral amino acid) as its parent gelatin and collagen do. Here, accelerated enzymatic degradation tests of GelMA (DS100_1~5 and DS60_1~5) hydrogels were conducted to investigate consistency of their degradation behavior. As displayed in Fig. 6a–c, the enzymatic degradation of bulk GelMA hydrogels appeared apparent, and their degradation speed was highly dependent on the DS of GelMA, which affects dominantly the crosslinking density of GelMA hydrogels. The higher the DS of GelMA hydrogels, the slower their degradation. DS100_1~5 hydrogels under the accelerated degradation conditions lost half of their masses at around 3.33 h, and DS60_1~5 hydrogels did at around 0.65 h. Actual half-lives of DS100_1~5 hydrogels were 3.23, 3.31, 3.60, 3.03, and 3.50 h, respectively whereas those of DS60_1~5 were 0.67, 0.64, 0.69, 0.61, and 0.62 h, respectively. Each GelMA hydrogel group followed a similar degradation pattern, which showed that there seemed to be little batch-to-batch difference in hydrogel structure and susceptibility to enzyme degradation within each hydrogel group. We presume that the crosslinking density caused by photocrosslinking of the methacryloyl group could be the main factor of making a difference in the degradation rate of GelMA (DS100_1~5 and DS60_1~5) hydrogels. GelMA DS100_1~5 hydrogels with a higher DS can slow down the enzymatic degradation owing to a higher crosslinking density in the hydrogels, compared with GelMA DS60_1~5 hydrogels with a lower DS13. Also, GelMA, like its parents (gelatin and collagen), maintains cell binding sites (e.g. RGD). Cell binding affinity of GelMA is an important feature that can promote cell viability and affect cell behavior such as cell proliferation and differentiation. We investigated cell viability of Huh7.5 cells cultured on and inside GelMA hydrogels (DS100_1~5 and DS60_1~5). As presented in Fig. 7a, GelMA hydrogels (DS100_1~5 and DS60_1~5) exhibited cell viability of above 87%. 
Cell viability of DS100_1~5 and DS60_1~5 hydrogels was not significantly different from one another (one-way ANOVA, p = 0.812). Cell viability values of DS100_1~5 hydrogels were 87.1 ± 7.1%, 92.4 ± 1.4%, 97.1 ± 2.4%, 91.6 ± 3.8%, and 96.9 ± 0.5%, respectively whereas those of DS60_1~5 hydrogels were 91.3 ± 2.5%, 88.6 ± 3.6%, 92.3 ± 1.5%, 90.7 ± 2.9%, and 89.7 ± 7.0%, respectively. Cells on GelMA (DS100_1~5 and DS60_1~5) hydrogels appeared as cell clusters possibly because GelMA substrates were soft and compliant. Huh 7.5 cells on soft hydrogels tended to form cell clusters or spheroids13. Cell clusters on DS100_1~5 hydrogels looked slightly more scattered than those on DS60_1~5. It was speculated that relatively stiffer DS100_1~5 hydrogel substrates could allow cells to be more spread and scattered, as compared to DS60_1~5 hydrogel substrates. In addition, cells were encapsulated inside GelMA hydrogels as presented in Fig. 2b. The average (93.7 ± 5.2%) of cell encapsulation efficiency of DS100_1~5 hydrogels was slightly higher than that (87.0 ± 7.3%) of DS60_1~5. The cell encapsulation efficiency of each GelMA group showed no statistical difference (one-way ANOVA, p > 0.05). Cells encapsulated inside DS100_1~5 hydrogels displayed an average cell viability of 87.7 ± 3.7% whereas those inside DS60_1~5 had an average cell viability of 87.8 ± 5.3%. In comparison to cells grown on GelMA hydrogels, cells inside GelMA hydrogels exhibited a round morphology possibly owing to the fact that cells were surrounded and packed by dense hydrogel matrices in a three-dimensional manner. Overall both DS100_1~5 and DS60_1~5 hydrogels were found to offer good cell viability with good batch-to-batch consistency. In this report, we prepared two kinds of target GelMA (DS100_1~5 and DS60_1~5) with five batches using the one-pot synthesis method. GelMA (highly and lowly substituted versions: DS100_1~5 and DS60_1~5) batches displayed high reproducibility and controllability in their photocurable functionalization, protein secondary structures, mechanical properties, degradation behavior, and cell viability. GelMA with almost no batch-to-batch difference in structural and functional properties could be used as a highly versatile source for bioapplications that require a high degree of consistency in properties and performances. In addition, GelMA one-pot synthesis and characterization methods described in this work could be a useful guideline for quality control of GelMA production and further could be extended to the methacryloyl functionalization of other biopolymers with amino and hydroxyl functional groups for their bioapplications. ## Methods ### Materials Gelatin (type B, 250 bloom), sodium carbonate, sodium bicarbonate decahydrate, sodium hydroxide, acetohydroxamic acid, hydroxylamine, sodium dodecyl sulfate, and alanine were purchased from Aladdin (Shanghai, China). Methacrylic anhydride (MAA), iron(III) perchloride, 2-hydroxy-4′-(2-hydroxyethoxy)-2-methylpropiophenone (I2959), collagenase (IA, 125 CDU/mg), and 2,4,6-trinitrobenzene sulfonic acid (TNBS) were purchased from Sigma-Aldrich (Shanghai, China). Deuterium oxide (D2O) and 2,2,3,3-D4 (D, 98%) sodium-3-trimethylsilylpropionate (TMSP) were obtained from Cambridge Isotope Laboratories (Andover, USA). Hanks’ Balanced Salt Solution (HBSS) (10X), Dulbecco’s Modified Eagle’s Medium, fetal bovine serum, penicillin/streptomycin, and Live/Dead® Cell Viability/Cytotoxicity kit were purchased from Life Technologies (Shanghai, China). 
All the reagents were used as received. ### Preparation of gelatin methacryloyl (GelMA, target DS = 100 and 60%) with five batches Two kinds of gelatin methacryloyl (GelMA, target DS = 100 and 60%) materials with five different batches (DS100_1~5 and DS60_1~5) were prepared as illustrated in Fig. 1a,b. Details regarding reaction parameters are presented in Table 1. In brief, type B gelatin (10 g, 250 bloom, 3.18 mmole of free amino groups) was dissolved at 10 (w/v)% in carbonate-bicarbonate (CB) buffer (0.25 M, 100 mL) at 55 °C, and then the pH of the gelatin solutions was adjusted to 9.4. Two different amounts (0.938 mL for target DS = 100% and 0.317 mL for target DS = 60%) of methacrylic anhydride (MAA, 94%) were separately added to the gelatin solutions under magnetic stirring at 500 rpm. The reaction proceeded for 1 h at 55 °C, and the final pH of the reaction solutions was adjusted to 7.4 to stop the reaction. After being filtered, the solutions were dialyzed against water at 50 °C in a MasterFlex® tangential flow filtration (TFF) system equipped with Pellicon® 2 cassette (Darmstadt, Germany) containing a 10 k Da Biomax membrane, and lyophilized to obtain the final solid products. The average yield on GelMA products was around 90%. ### Degree of substitution of target GelMA (DS100_1~5 and DS60_1~5) For quantification of methacrylamide groups in GelMA, 1H-NMR (Avance I 400 MHz, Bruker, Rheinstetten, Germany) spectroscopy and 2,4,6-Trinitrobenzenesulfonic acid (TNBS) assay were employed, whereas 1H-NMR and Fe(III)-hydroxamic acid-based assay were harnessed for quantification of methacrylate groups in GelMA20,25,26,27. For 1H-NMR spectroscopy of gelatin and GelMA, 40 mg of each sample was dissolved in 800 µL of deuterium oxide with 0.1 w/v% 3-(trimethylsilyl)propionic-2,2,3,3-d4 (TMSP) acid as an internal reference, and all 1H-NMR-spectra were measured at 40 °C. Before the interpretation, phase corrections were applied to all spectra to obtain purely absorptive peaks, baselines were corrected, and the chemical shift scale was adjusted to the TMSP signal (δ (1H) = 0 ppm). The amount of methacryloyl groups (methacrylamide and methacrylate groups; AMNMR) in GelMA was calculated from 1H-NMR spectroscopy by the following formula: $$\begin{array}{rcl}{{\rm{AM}}}_{{\rm{NMR}}}({\rm{mmole}}\,{{\rm{g}}}^{-1}) & = & \frac{\int {\rm{methacryloyl}}\,({\rm{the}}\,{\rm{peaks}}\,{\rm{at}}\,\,5.6\mbox{--}5.8\,{\rm{ppm}})}{\int \mathrm{TMSP}\,\,({\rm{at}}\,0\,{\rm{ppm}})}\\ & & \times \,\frac{9{\rm{H}}}{1{\rm{H}}}\times \frac{{\rm{n}}\,{\rm{mmole}}\,({\rm{TMSP}})}{{\rm{m}}\,{\rm{g}}\,({\rm{GelMA}})};\end{array}$$ Also, TNBS assay was performed to quantify the remaining free amino groups in gelatin and GelMA. GelMA and gelatin samples were separately dissolved at 1.6 mg mL−1 in 0.1 M sodium bicarbonate buffer. Then, 0.5 mL of each sample solution was mixed with 0.5 mL of a 0.1% TNBS solution in 0.1 M sodium bicarbonate buffer and then was incubated for 2 h. Next, 0.25 mL of 1 M hydrochloric acid and 0.5 mL of 10 (w/v)% sodium dodecyl sulfate were added to stop the reaction. The absorbance of each sample was measured at 335 nm. An alanine standard curve was used to determine the amino group concentration with standard sample solutions prepared at 0, 0.8, 8, 16, 32, and 64 µg mL−1. The amount of free amino groups in 1 g of gelatin is 0.3184 mmol, based on the TNBS result. 
The amount of methacrylamide groups in GelMA, denoted as AMTNBS, was obtained by subtracting the amount of the remaining free amino groups in GelMA from the amount of free amino groups in gelatin. Fe(III)-hydroxamic acid-based assay was used for quantification of the amount of methacrylate groups in GelMA (AMFe(III)). Briefly, 100 µL of 0.5 M hydroxylamine hydrochloride was combined with 100 µL of 1 M NaOH, and then 200 µL of each GelMA solution (50 mg mL−1) was transferred to the combined solution. The mixture was incubated at room temperature for 10 min after being vortexed for 30 s. Then, 550 µL of a 0.5 M iron(III) perchloride solution in 0.5 M HCl was added to the mixture. After the mixture solution was vortexed for 30 s, UV-vis absorption spectra were recorded from 420 to 700 nm in a microplate with 200 µL of the solution. The absorbance at 500 nm was used to quantify AMFe(III) via utilizing a standard curve prepared with a series of acetohydroxamic acid solutions (5.0 × 10−3, 2.5 × 10−3, 1.25 × 10−3, 6.25 × 10−4, 5.0 × 10−4, and 2.5 × 10−4 M) in DI water. Finally, the degree of substitution (DS) of GelMA was calculated using either AMNMR or a combination of the colorimetric results (AMTNBS and AMFe(III)): $${{\rm{DS}}}_{{\rm{NMR}}}={{\rm{AM}}}_{{\rm{NMR}}}/0.3184\times 100\,( \% )\,{\rm{or}}\,{{\rm{DS}}}_{{\rm{color}}}=({{\rm{AM}}}_{{\rm{TNBS}}}+{{\rm{AM}}}_{{\rm{Fe}}({\rm{III}})})/0.3184\times 100\,( \% ).$$ ### Secondary structure of GelMA (DS100_1~5 and DS60_1~5) To find consistency of secondary structure in GelMA samples, circular dichroism (CD) experiments covering the UV spectral range from 260 to 180 nm, were carried out using Chirascan Plus (Applied Photophysics, Leatherhead, UK). Before every experiment, each sample (0.2 mg mL−1 in deionized water) was stored at 4 or 37 °C for 2 h to obtain a stable conformation (triple helix structure or random coil). The acquisitions were performed at 4 or 37 °C after 300 μL of each solution was added in a quartz cell with an optical path length of 1 mm. ### Swelling behavior of bulk GelMA DS100_1~5 and DS60_1~5 hydrogels GelMA hydrogels (DS100_1~5 and DS60_1~5) for swelling, mechanical and enzymatic degradation testings were conducted as follows. Each GelMA solution (20 (w/v)% in deionized water) containing 0.5 (w/v)% I2959 (prepared at a concentration of 20% as a stock solution in 70% ethanol) was cured under UV light (365 nm with an intensity of 3.5 mW cm−2) for 5 min in a polytetrafluoroethylene (PTFE) mold with a diameter of 8 mm and a thickness of 1 mm. Prepared GelMA hydrogels were transferred into a small beaker containing 30 mL of deionized water, incubated for 120 min at an elevated temperature (50 °C) so as to expedite the equilibrium swelling and then weighed at appointed times (5, 10, 15, 20, 30, 40, 50, 60, and 120 min). The swelling degree (Qt) of GelMA hydrogels at different time points was calculated using the following formula, $${{\rm{Q}}}_{{\rm{t}}}=\frac{{{\rm{w}}}_{{\rm{t}}}-{{\rm{w}}}_{0}}{{{\rm{w}}}_{0}}\times 100 \%$$ (wt: the weight of GelMA hydrogels at t min; wo: the dried weight of GelMA hydrogels after freeze-drying at 0 min). ### Mechanical properties of bulk GelMA DS100_1~5 and DS60_1~5 hydrogels Mechanical properties of GelMA hydrogels with a diameter of 8 mm and a thickness of 1 mm were characterized via sinusoidal shear rheometry. Frequency-sweep measurements were conducted using a rheometer (Discovery Hybrid Rheometer, Thermo, America), equipped with an 8 mm parallel plate (Peltier plate Steel). 
The storage modulus of each GelMA hydrogel sample was measured at 0.1% strain and 0.1–10 Hz within the viscoelastic range. The running temperature was maintained at 37 °C throughout the measurements. ### Enzymatic degradation of bulk GelMA DS100_1~5 and DS60_1~5 hydrogels GelMA hydrogels (20 (w/v)%) with a diameter of 8 mm and a thickness of 1 mm were tested for enzymatic degradation. After GelMA hydrogels were placed in deionized water for 3 h at 50 °C to rapidly reach the equilibrium swelling, they were transferred into a new 24-well plate with each well containing 1 mL of 0.05 (v/w)% collagenase (125 CDU mg−1, solid) in Hank’s Balanced Salt Solution including 3 mM CaCl2. Their enzyme degradation was conducted at 37 °C. Gross images of GelMA hydrogels were taken during the degradation, and mass loss of GelMA hydrogels was also measured. $${\rm{Mass}}\,{\rm{loss}}=\frac{{{\rm{w}}}_{0}-{{\rm{w}}}_{{\rm{t}}}}{{{\rm{w}}}_{0}}\times 100 \%$$ (wt: the weight of each GelMA hydrogel at time t; wo: the weight of each GelMA hydrogel at time 0 (the equilibrium swelling). ### Viability of Huh7.5 cells grown on and inside GelMA DS100_1~5 and DS60_1~5 hydrogels Human hepatocellular carcinoma cells (Huh7.5; ScienCell, Shanghai, China) were cultured in Dulbecco’s Modified Eagle’s Medium with 10% fetal bovine serum and 1% penicillin/streptomycin in a humidified atmosphere at 37 °C with 5% CO2. The medium was changed every 3 days. Bulk GelMA (DS100_1~5 and DS60_1~5) hydrogels were prepared in 48-well plates by curing 120 µL of each GelMA solution (20 (w/v)% in PBS) containing 0.5 (w/v)% I2959 in each well. After washing each hydrogel with PBS two times, 500 µL of a medium containing 1 × 105 cells was carefully added to each well coated with each GelMA hydrogel. For cell encapsulation, each GelMA solution (20 (w/v)% in PBS and 0.25 (w/v)% I2959) containing Huh 7.5 cells at a concentration of 3 × 106 cells mL−1 was cured in a mold (8 mm in diameter and 1 mm in depth) by exposure to light (365 nm with an intensity of 3.5 mW cm−2 for 5 min). Then unencapsulated cells were counted for cell encapsulation efficiency. After 1 day, cell viability on and inside GelMA hydrogels was characterized using Live/Dead® Cell Viability/Cytotoxicity kit. Briefly, 4 µM calcein-acetoxymethyl (calcein-AM) and 8 µM ethidium homodimer-1 (EthD-1) in media were added to each well containing each cell-laden hydrogel, followed by incubation for 1 h at 37 °C. The cytoplasm of live cells and the nuclei of dead cells were stained by calcein-AM (green) and EthD-1 (red), respectively and were observed through a confocal microscope (C2+, Nikon, Shanghai, China). Numbers of live and dead cells in three images of each hydrogel were counted using ImageJ. Cell viability was calculated using the following formula, $${\rm{Cell}}\,{\rm{viability}}\,( \% )=\frac{{\rm{live}}\,{\rm{cells}}}{{\rm{the}}\,{\rm{total}}\,{\rm{number}}\,{\rm{of}}\,{\rm{cells}}}\times 100.$$ ### Statistical analysis Statistical analysis was carried out using the Microsoft Excel® statistical analysis. A one-way ANOVA was used to test for differences among five groups. The standard deviation (s.d.) was calculated and presented for each treatment group. P values below 0.05 were considered statistically significant. The value of n denotes the number of performed samples or the number of independently performed attempts. ## References 1. 1. Schloss, A. C., Williams, D. M. & Regan, L. J. Protein-Based Hydrogels for Tissue Engineering. 
Adv Exp Med Biol 940, 167–177, https://doi.org/10.1007/978-3-319-39196-0_8 (2016). 2. 2. Foster, A. A. et al. Protein-engineered hydrogels enhance the survival of induced pluripotent stem cell-derived endothelial cells for treatment of peripheral arterial disease. Biomater Sci 6, 614–622, https://doi.org/10.1039/c7bm00883j (2018). 3. 3. Silva, R., Fabry, B. & Boccaccini, A. R. Fibrous protein-based hydrogels for cell encapsulation. Biomaterials 35, 6727–6738, https://doi.org/10.1016/j.biomaterials.2014.04.078 (2014). 4. 4. Li, Y. & Kumacheva, E. Hydrogel microenvironments for cancer spheroid growth and drug screening. Sci Adv 4, eaas8998, https://doi.org/10.1126/sciadv.aas8998 (2018). 5. 5. Hughes, C. S., Postovit, L. M. & Lajoie, G. A. Matrigel: a complex protein mixture required for optimal growth of cell culture. Proteomics 10, 1886–1890, https://doi.org/10.1002/pmic.200900758 (2010). 6. 6. Ko, K. R., Tsai, M. C. & Frampton, J. P. Fabrication of Thin-Layer Matrigel-Based Constructs for 3D Cell Culture. Biotechnol Prog, https://doi.org/10.1002/btpr.2733 (2018). 7. 7. Laperrousaz, B. et al. Direct transfection of clonal organoids in Matrigel microbeads: a promising approach toward organoid-based genetic screens. Nucleic Acids Res 46, e70, https://doi.org/10.1093/nar/gky030 (2018). 8. 8. Jang, J. M., Tran, S. H., Na, S. C. & Jeon, N. L. Engineering controllable architecture in matrigel for 3D cell alignment. ACS Appl Mater Interfaces 7, 2183–2188, https://doi.org/10.1021/am508292t (2015). 9. 9. Benton, G., Arnaoutova, I., George, J., Kleinman, H. K. & Koblinski, J. Matrigel: from discovery and ECM mimicry to assays and models for cancer research. Adv Drug Deliv Rev 79–80, 3–18, https://doi.org/10.1016/j.addr.2014.06.005 (2014). 10. 10. Kaemmerer, E. et al. Gelatine methacrylamide-based hydrogels: An alternative three-dimensional cancer cell culture system. Acta Biomaterialia 10, 2551–2562, https://doi.org/10.1016/j.actbio.2014.02.035 (2014). 11. 11. Lin, R. Z., Chen, Y. C., Moreno-Luna, R., Khademhosseini, A. & Melero-Martin, J. M. Transdermal regulation of vascular network bioengineering using a photopolymerizable methacrylated gelatin hydrogel. Biomaterials 34, 6785–6796, https://doi.org/10.1016/j.biomaterials.2013.05.060 (2013). 12. 12. Naseer, S. M. et al. Surface acoustic waves induced micropatterning of cells in gelatin methacryloyl (GelMA) hydrogels. Biofabrication 9, 015020, https://doi.org/10.1088/1758-5090/aa585e (2017). 13. 13. Lee, B. H. et al. Colloidal templating of highly ordered gelatin methacryloyl-based hydrogel platforms for three-dimensional tissue analogues. NPG Asia Materials 9, e412, https://doi.org/10.1038/am.2017.126 (2017). 14. 14. Klotz, B. J., Gawlitta, D., Rosenberg, A., Malda, J. & Melchels, F. P. W. Gelatin-Methacryloyl Hydrogels: Towards Biofabrication-Based Tissue Repair. Trends Biotechnol 34, 394–407, https://doi.org/10.1016/j.tibtech.2016.01.002 (2016). 15. 15. Yue, K. et al. Synthesis, properties, and biomedical applications of gelatin methacryloyl (GelMA) hydrogels. Biomaterials 73, 254–271, https://doi.org/10.1016/j.biomaterials.2015.08.045 (2015). 16. 16. Liu, B. et al. Hydrogen bonds autonomously powered gelatin methacrylate hydrogels with super-elasticity, self-heal and underwater self-adhesion for sutureless skin and stomach surgery and E-skin. Biomaterials 171, 83–96, https://doi.org/10.1016/j.biomaterials.2018.04.023 (2018). 17. 17. Zhu, W. et al. 
3D bioprinting mesenchymal stem cell-laden construct with core-shell nanospheres for cartilage tissue engineering. Nanotechnology 29, 185101, https://doi.org/10.1088/1361-6528/aaafa1 (2018). 18. 18. Pepelanova, I., Kruppa, K., Scheper, T. & Lavrentieva, A. Gelatin-Methacryloyl (GelMA) Hydrogels with Defined Degree of Functionalization as a Versatile Toolkit for 3D Cell Culture and Extrusion Bioprinting. Bioengineering (Basel) 5, https://doi.org/10.3390/bioengineering5030055 (2018). 19. 19. Zhou, M., Lee, B. H., Tan, Y. J. & Tan, L. P. Microbial transglutaminase induced controlled crosslinking of gelatin methacryloyl to tailor rheological properties for 3D printing. Biofabrication, 11, 025011, https://doi.org/10.1088/1758-5090/ab063f (2019). 20. 20. Shirahama, H., Lee, B. H., Tan, L. P. & Cho, N. J. Precise Tuning of Facile One-Pot Gelatin Methacryloyl (GelMA). Synthesis. Scientific reports 6, 31036, https://doi.org/10.1038/srep31036 (2016). 21. 21. Lee, B. H., Lum, N., Seow, L. Y., Lim, P. Q. & Tan, L. P. Synthesis and Characterization of Types A and B Gelatin Methacryloyl for Bioink Applications. Materials 9, https://doi.org/10.3390/ma9100797 (2016). 22. 22. Lee, B. H., Shirahama, H., Cho, N.-J. & Tan, L. P. Efficient and controllable synthesis of highly substituted gelatin methacrylamide for mechanically stiff hydrogels. RSC Advances 5, 106094–106097, https://doi.org/10.1039/c5ra22028a (2015). 23. 23. Van Den Bulcke, A. I. et al. Structural and rheological properties of methacrylamide modified gelatin hydrogels. Biomacromolecules 1, 31–38 (2000). 24. 24. Sewald, L. et al. Beyond the Modification Degree: Impact of Raw Material on Physicochemical Properties of Gelatin Type A and Type B Methacryloyls. Macromol Biosci 18, e1800168, https://doi.org/10.1002/mabi.201800168 (2018). 25. 25. Claassen, C. et al. Quantification of Substitution of Gelatin Methacryloyl: Best Practice and Current Pitfalls. Biomacromolecules 19, 42–52, https://doi.org/10.1021/acs.biomac.7b01221 (2018). 26. 26. Zheng, J., Zhu, M., Ferracci, G., Cho, N. J. & Lee, B. H. Hydrolytic Stability of Methacrylamide and Methacrylate in Gelatin Methacryloyl and Decoupling of Gelatin Methacrylamide from Gelatin Methacryloyl through Hydrolysis. Macromolecular Chemistry and Physics 219, 1800266 (2018). 27. 27. Yue, K. et al. Structural analysis of photocrosslinkable methacryloyl-modified protein derivatives. Biomaterials 139, 163–171, https://doi.org/10.1016/j.biomaterials.2017.04.050 (2017). 28. 28. Schuurman, W. et al. Gelatin-methacrylamide hydrogels as potential biomaterials for fabrication of tissue-engineered cartilage constructs. Macromol Biosci 13, 551–561, https://doi.org/10.1002/mabi.201200471 (2013). 29. 29. Motooka, D. et al. The triple helical structure and stability of collagen model peptide with 4(S)-hydroxyprolyl-Pro-Gly units. Biopolymers 98, 111–121, https://doi.org/10.1002/bip.21730 (2012). 30. 30. Kishimoto, T. et al. Synthesis of poly(Pro-Hyp-Gly)(n) by direct poly-condensation of (Pro-Hyp-Gly)(n), where n = 1, 5, and 10, and stability of the triple-helical structure. Biopolymers 79, 163–172, https://doi.org/10.1002/bip.20348 (2005). 31. 31. Yang, W., Chan, V. C., Kirkpatrick, A., Ramshaw, J. A. & Brodsky, B. Gly-Pro-Arg confers stability similar to Gly-Pro-Hyp in the collagen triple-helix of host-guest peptides. J Biol Chem 272, 28837–28840 (1997). 32. 32. Stowers, R. S., Allen, S. C. & Suggs, L. J. Dynamic phototuning of 3D hydrogel stiffness. 
Proc Natl Acad Sci USA 112, 1953–1958, https://doi.org/10.1073/pnas.1421897112 (2015).

## Acknowledgements

This work was supported by Wenzhou Medical University (QTJ17012), Wenzhou Institute of Biomaterial and Engineering (WIBEZD2017010-03), and Zhejiang Provincial Natural Science Foundation of China (LGF19E030001).

## Author information

### Contributions

B.H.L. and N.J.C. participated in the study design; M.Z., Y.W., G.F., J.Z. and B.H.L. performed the experiments. M.Z., G.F., N.J.C. and B.H.L. were involved in drafting and revising the manuscript. All authors read and approved the final version of the manuscript.

### Corresponding authors

Correspondence to Nam-Joon Cho or Bae Hoon Lee.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhu, M., Wang, Y., Ferracci, G. et al. Gelatin methacryloyl and its hydrogels with an exceptional degree of controllability and batch-to-batch consistency. Sci Rep 9, 6863 (2019). https://doi.org/10.1038/s41598-019-42186-x
2020-09-21 20:33:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6226148009300232, "perplexity": 13080.996664446468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202007.15/warc/CC-MAIN-20200921175057-20200921205057-00494.warc.gz"}
http://automaticvanrental.com/1bnr4ac/897644-frigidaire-professional-24%27%27-built-in-dishwasher-with-evendry-%E2%84%A2-system
What steps of cellular respiration cannot continue in the absence of oxygen? But the Sun burns even in the absence of oxygen. Pyrolysis. Ignition of oxygen in space.

2 Answers. Oxygen can be denied to a fire using a carbon dioxide fire extinguisher, a fire blanket or water. Glycolysis occurs in both aerobic respiration and anaerobic respiration. So as soon as you have no oxygen, the triangle is broken and the fire goes out. The temperature a fire reaches is due to the combustion process. If oxygen is not present, the respiration cycle does not continue past the glycolysis stage. Combustion produces fire with heat and light. Cellular respiration in the absence of oxygen is called anaerobic respiration. So the Sun produces the same. The muscle continues to contract in the absence of oxygen through glycolysis. Glycolysis occurs in the cytoplasm of the cell. Oxygen is the final acceptor at the end of the electron transport chain, when water is formed. If oxygen is present, the pyruvate breaks down further into carbon dioxide. Humidity refers to the amount of water vapor in the air. Why does the aroma of perfume last longer on paper testers than on skin? If you expose such a hot mass to fresh excess air then it is probable that the fire would reignite and the flames would spring up again. When fuel burns, it reacts with oxygen from the surrounding air, releasing heat and generating combustion products (gases, smoke, embers, etc.).

Answer: Without sufficient oxygen, a fire cannot begin, and it cannot continue. Check with local government authorities to see if they have any other restrictions in effect before lighting any fire. In humans, this is possible because under anaerobic conditions (the absence of oxygen) an additional reaction catalyzed by lactate dehydrogenase (LDH) occurs in the cytosol; glycolysis can still continue in the absence of oxygen. The reaction is shown below. Does fire have a chemical composition, and thus a chemical formula, like every other entity on the planet?

What are these flames made of and why do they have different colours? The intrinsic skills that fire practitioners compile - understanding of fuels and fire behavior, of fire weather, of the risks of firefighting, and of burn operations - are not often professionally recognized. Same process as you'd use to make charcoal: heating the material in the absence of oxygen. However, because carbon is weird, this "bread charcoal" has some unique properties; as the article mentioned, it's strong, fire resistant, and other stuff.

Let's say the fire had sufficient material to fuel it for hours, but I enclose the fire and seal out most of the oxygen: will the fire die out fast, slower, or not at all? Very fast. To make and keep a fire lit you need to have the triangle of fire complete: you need oxygen, something to burn (i.e. fuel), and something to make the spark. As soon as the oxygen is used up, the fire will die. It has nothing to do with how much fuel you have; it is how much fuel you can combust (burn). If you "enclose the fire and seal out most of the oxygen" then the combustion process cannot continue. With a decreased oxygen concentration, the combustion process slows. This would lead to a slowly cooling mass of combustible products. If just a small amount of oxygen is able to reach the hot mass, the heat can continue to be sustained for a very long time. The time it takes would be determined by the actual mass of burnable materials that is near the hot mass and how much insulation there is from the outside cooler temperatures. Suffice it to say that if the mass of material is large, the temperature can remain dangerously high for a considerable time.

Which step of cellular respiration will not take place in the absence of oxygen? A) Glycolysis B) Krebs Cycle C) Krebs Cycle and Electron Transport only D) Electron Transport. The Krebs cycle only, I believe. I understand that only glycolysis and fermentation occur in anaerobic respiration, but why can't the Krebs cycle occur too? Oxygen doesn't seem to be a factor in the Krebs cycle, unlike the electron transport chain. Glycolysis is the breakdown of glucose to pyruvic acid in the cytoplasm of a cell. During this stage, each glucose molecule splits in half, forming two molecules of pyruvate. In the absence of oxygen, the primary purpose of fermentation is to: (a) produce amino acids for protein synthesis, or (b) generate a proton gradient for ATP synthesis. Most of the ATP is produced in the last two stages of cellular respiration, which occur in the mitochondria. Primary decomposers are fungi and bacteria. According to the food chain, label the following organisms appropriately: producer - grass; primary consumer - mouse; secondary consumer - cat; tertiary consumer - wolf.

Matter exists in 3 states. In combustion, oxygen, fuel, and heat are always present. Fire is combustible only in the presence of oxygen. Air contains about 21 percent oxygen, and most fires require at least 16 percent oxygen content to burn. Oxygen supports the chemical processes that occur during fire. There cannot be a flaming fire unless a gas or vapor is burning. Smoke that is produced by ordinary household materials when they are first heated has a slow-moving, white appearance. Fire tends to flow through a structure like a liquid. Heat traveling from one end of a steel beam to the other end is an example of conduction. Sparks can be produced by: debris of a fire, mechanical friction, burning metal from impact. Combustion is also known as burning. It is always exothermic, that is, giving off heat. What is the process called when a material decomposes upon being exposed to heat in the absence of oxygen? Pyrolysis. For example, when you light a candle, its wick burns if oxygen and wax (candle) are present and a lot of heat is produced. For example, when charcoal powder is mixed with any of the substances like chlorates or nitrates of potassium or potassium permanganate (KMnO4), and heated, it will catch fire and … Fire is nothing but the outcome of a chemi… Sitting beside a campfire, staring into the burning pyre, you may have pondered over the nature of the fire… 'Why is it so appealing?'

As infections with COVID-19 continue to decline, this has triggered a review of the current WS enacted prohibitions. The Wildfire Regulation requirements do not apply within municipal boundaries or regional districts that have open fire bylaws. OR Fire Occurs in Absence of Oxygen Enriched Environment: A Case Report. Aleeta Somers-DeHaney, MD; Joan Christie, MD. Purpose: This case report and review describe a patient who sustained a burn in the operating room secondary to an alcohol-based skin preparation. Ash and sediment cover salmon egg nests, he says, suffocating the unhatched eggs because they can't get enough oxygen. During rainfall, the absence of trees - which help slow down river and overland flow - leads to an uncontrolled surge of water and increased stream flows, putting added stress on man-made infrastructure, such as bridges and wastewater treatment plants, Dittbrenner says. "We also need better ways to measure oxygen," Kump says, pointing to bubbles of trapped ancient air in salt deposits and amber as potential ways to get direct evidence of past atmospheric compositions.

Metals are coated with a layer of oxidation which prevents them from welding into each other as they come in contact. Untreated metals would weld together - that is what happens to them in a vacuum; that is what happens in cold welding. Without oxygen, all buildings, bridges and roads made of concrete would simply fall apart.
2021-05-16 12:44:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36862751841545105, "perplexity": 4828.8231842467285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991269.57/warc/CC-MAIN-20210516105746-20210516135746-00168.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/jdg.2021024
Article Contents Article Contents # Explaining the definition of wholesale access prices in the Portuguese telecommunications industry • * Corresponding author: Vitor Miguel Ribeiro Authors were supported by the following projects and grants: DigEcoBus - NORTE-01-0145-FEDER-028540, STRIDE - NORTE-01-0145-FEDER-000033, Emprego altamente qualificado nas empresas ou em COLABS e/ou Contratação de Recursos Humanos Altamente Qualificados - NORTE-06-3559-FSE-0000164, and 2020 AI for COVID-19 Data Science and Artificial Intelligence - DSAIPA-CS-0086-2020. This research has also been financed by Portuguese public funds through FCT - Fundação para a Ciência e a Tecnologia, I.P., in the framework of R & D Units SYSTEC - reference POCI-01-0145-FEDER-006933 and Cef.UP - reference UIDB/04105/2020 • The 2016–2018 triennium was a period marked by a fierce dispute between the European Commission and Autoridade Nacional de Comunicações, Portugal, on the need to regulate wholesale access prices. While the European Commission defended the imposition of Fiber-To-The-x regulation in non-competitive areas, the Portuguese sectoral regulator argued in favor of the persistence of Fiber-To-The-x deregulation. Following a Game Theory approach, the present study demonstrates that the transition from Fiber-To-The-x deregulation to Fiber-To-The-x regulation should only occur when a given territorial unit becomes a competitive area since the subgame perfect Nash equilibrium captures a regulatory framework optimally characterized by the imposition of active access price deregulation (regulation) in non-competitive (competitive) areas, that is, local administrative units characterized by a weak (strong) degree of vertical spillover, respectively. Meanwhile, ducts access regulation must be permanently imposed throughout the national territory, despite it can be relaxed in competitive areas if the regulator imposes intra-flexibility to establish a monopolistic bottleneck to ensure social welfare maximization. Previous conclusions require to introduce both facility-based and service-based competition at the entry stage as well as active and passive obligations at the regulation stage in a multi-stage game with complete information. The present analysis legitimizes the emergence of a new optimization theory in the telecommunications literature, whose modus operandi is contrary to (coincident with) the ladder of investment theory in non-competitive (competitive) areas, respectively. Differently from the view sustained by the ladder of investment theory, which defends that a short-term regulatory touch combined with long-term market deregulation is a socially optimal strategy, the new theory confirms that a regulatory intervention is socially desirable only in the long run. The conceptual refinement is meticulously explained and labeled as the theory of creative creation because, differently from the Schumpeterian gale of creative destruction, whose processes of industrial mutation are permanently market-driven by assumption, a period of regulatory holidays followed by successive regulatory interventions dependent on the degree of vertical spillover observed in the telecommunications industry can effectively promote investment realization that continuously revolutionizes the market structure from within, incessantly destroying the old technology. The theory of creative creation reflects the regulatory framework currently in force in the Portuguese Telecommunications Industry. 
Mathematics Subject Classification: Primary: 46N10, 74P99; Secondary: 58E17, 78M50. Citation: • Figure 1.  Equilibrium number of facility-based competitors. Red line represents the number of facility-based firms in NC areas. Gray line represents the number of facility-based firms that would be sustained in case of FTTx regulation in the subdomain $0\leq\beta<0.4476.$ Black (Blue) line represents the number of facility-based firms in C areas when the duopolistic bottleneck with asymmetric downstream access (monopolistic bottleneck) is sustained in equilibrium, respectively Figure 2.  Equilibrium wholesale access prices. Red (Blue) [Black] lines represent the parameter space where the triopoly (monopolistic bottleneck) [duopolistic bottleneck with asymmetric downstream access] is sustained in equilibrium, respectively. Continuous (Dashed) lines represent the regulated active access price (ducts access fee), respectively. The plot assumes $a = 1$ Figure 3.  Equilibrium investment level of the facility-based operators. Red (Blue) [Black] lines represent the parameter space where the triopoly (monopolistic bottleneck) [duopolistic bottleneck with asymmetric downstream access] is sustained in equilibrium, respectively. Continuous (Dashed) [Dotted] lines represent investment level of the incumbent (first alternative facility-based) [second alternative facility-based] operator, respectively. The right middle panel compares the profit level of the facility-based competitors I and N. Gray lines represent the investment level that would be achieved if FTTx regulation were adopted in the subdomain $0\leq\beta<0.4476$. The last subplot captures the total amount of investment in the telecommunications industry. The plot assumes $a = 1$ Figure 4.  Equilibrium prices and quantities in the retail market. Red (Blue) [Black] lines represent the parameter space where the triopoly (monopolistic bottleneck) [duopolistic bottleneck with asymmetric downstream access] holds in equilibrium, respectively. Continuous (Dashed) [Dotted] lines represent retail prices and market share of the incumbent (alternative facility-based) [alternative service-based] operator, respectively. The plot assumes $a = 1$ Figure 5.  Equilibrium profits. Red (Blue) [Black] lines represent the parameter space where the triopoly (monopolistic bottleneck) [duopolistic bottleneck with asymmetric downstream access] is sustained in equilibrium, respectively. Continuous (Dashed) [Dotted] lines represent profit of the incumbent (alternative facility-based) [alternative service-based] operator, respectively. The left bottom panel compares the profit level of the facility-based competitors I and N. Gray lines represent the profit level that would be obtained if FTTx regulation were adopted in the subdomain $0\leq\beta<0.4476$. The plot assumes $a = 1$ Figure 6.  Equilibrium consumer surplus, producer surplus and social welfare. Gray lines represent the level of social welfare that would be achieved if FTTx regulation were adopted in the subdomain $0\leq\beta<0.4476$. The deadweight loss in NC areas corresponds to the space between the red and gray line in the subdomain $0\leq\beta<0.4476$ that can be observed in the right bottom panel. The plot assumes $a = 1$ Figures(6)
2022-12-06 20:26:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.46230095624923706, "perplexity": 6682.828510714955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711114.3/warc/CC-MAIN-20221206192947-20221206222947-00645.warc.gz"}
https://www.actuaries.digital/category/puzzles/
# Puzzles (The Critical Line)

## The Critical Line – Volume 33 Solution

As we've reached the end of the year, The Critical Line will be taking a break over the festive season and will be back in the new year. See Jevon's solution to the Volume 33 puzzle and the lucky winner!

## Critical Line volume 31

This month's Critical Line puzzle is brought to you by Oliver Chambers. Can you figure out the winning strategy for this probability game?
2020-02-23 11:21:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7453494071960449, "perplexity": 1734.821047338864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00306.warc.gz"}
https://indico.in2p3.fr/event/14637/
Only institutional email addresses will be accepted when asking for an account. Account creation is moderated, please wait for validation.

Weekly seminars

A tale of two dyons, by Gerard Clement (LAPTh)

Tuesday 30 May 2017 (Europe/Paris) at Annecy-le-Vieux (Auditorium), 9 chemin de Bellevue, 74940 Annecy-le-Vieux

Description: Dyons are objects with both electric and magnetic charges. I will present a one-parameter family of stationary, asymptotically flat analytical solutions of the Einstein-Maxwell equations with only a mild singularity, which are endowed with mass, angular momentum, a dipole magnetic moment and a quadrupole electric moment. I will discuss the various clues which point to a complex underlying physical system. This field configuration is generated by two co-rotating extreme black holes, which are both electromagnetic and gravitational dyons, and are held apart by an electrically charged, magnetised rod which also acts as a Dirac-Misner string.
2017-08-20 02:16:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42814135551452637, "perplexity": 5778.534972690042}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105961.34/warc/CC-MAIN-20170820015021-20170820035021-00266.warc.gz"}
https://undergroundmathematics.org/polynomials/happy-families/suggestion
Food for thought ## Suggestion The image in the warm-up shows one of a family of functions represented by the implicit equation $(x^2+2ay-a^2)^2=y^2(a^2-x^2)$. Try changing $a$ using the slider in the applet below. What do you notice about the other functions in the family? Are there any questions that you would like to ask about the images and the equation?
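If it helps to see the family quickly, one way is to contour the defining polynomial at level zero. This is only a sketch: the values of $a$ and the plotting window below are arbitrary choices, not part of the original task.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot the zero set of F(x, y) = (x^2 + 2*a*y - a^2)^2 - y^2*(a^2 - x^2)
# for a few illustrative values of a (arbitrary choices).
x = np.linspace(-2.0, 2.0, 800)
y = np.linspace(-2.0, 2.0, 800)
X, Y = np.meshgrid(x, y)

for a in (0.5, 1.0, 1.5):
    F = (X**2 + 2*a*Y - a**2)**2 - Y**2 * (a**2 - X**2)
    plt.contour(X, Y, F, levels=[0])

plt.gca().set_aspect("equal")
plt.title("Zero sets of (x^2 + 2ay - a^2)^2 = y^2 (a^2 - x^2)")
plt.show()
```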
2019-03-19 02:29:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6930646896362305, "perplexity": 575.9909362000601}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201882.11/warc/CC-MAIN-20190319012213-20190319034213-00098.warc.gz"}
https://www2.icp.uni-stuttgart.de/~hilfer/publikationen/html/ZZ-2012-RABDS-123/RABDS-2012-123.Sx2.html
# Appendix

[123.8.1] The $H$-function of order $(m,n,p,q)$ and with parameters $A_i\in\mathbb{R}_{+}$, $a_i\in\mathbb{C}$, $B_i\in\mathbb{R}_{+}$ and $b_i\in\mathbb{C}$ is defined for $z\neq 0$ by the contour integral [5, 32]

$$H^{m,n}_{p,q}\left(z\,\middle|\,\begin{matrix}(a_1,A_1),\dots,(a_p,A_p)\\(b_1,B_1),\dots,(b_q,B_q)\end{matrix}\right)=\frac{1}{2\pi i}\int_{\mathcal{L}}\eta(s)\,z^{s}\,\mathrm{d}s \tag{14}$$

where the integrand is

$$\eta(s)=\frac{\prod_{j=1}^{m}\Gamma(b_j-B_js)\,\prod_{j=1}^{n}\Gamma(1-a_j+A_js)}{\prod_{j=m+1}^{q}\Gamma(1-b_j+B_js)\,\prod_{j=n+1}^{p}\Gamma(a_j-A_js)}. \tag{15}$$

[123.8.2] In (14) $z^{s}=\exp\{s(\log|z|+i\arg z)\}$ and $\arg z$ is not necessarily the principal value. [123.8.3] The integers $m,n,p,q$ must satisfy

$$0\leq m\leq q,\qquad 0\leq n\leq p, \tag{16}$$

and empty products are interpreted as being unity. [123.8.4] For the conditions on the other parameters and the path $\mathcal{L}$ of integration the reader is referred to the literature [5] (see [13, p.120ff] for a brief summary).
2021-12-03 15:35:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218726754188538, "perplexity": 1287.8813296850367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00225.warc.gz"}
https://scicomp.stackexchange.com/questions/30515/fem-1d-poisson-substitution-integral-issue/30518
FEM 1D poisson substitution integral issue I'm trying to solve $$\begin{cases} -u''=f \\ u(0)=0 \\ u(1)= \alpha \end{cases}$$ with FEM using reference elements and local coordinates. So we have the global matrix $$K_{ij}=\int_\Omega N_i'(x) N_j'(x)$$. Computing each local matrix for a 2noded element, I have $$K^e(\xi)=\begin{pmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{pmatrix}$$ To compute its global equivalent, I use the substitution rule $$\int_{\phi(a)=-1}^{\phi(b)=1} f(\xi)d\xi = \int_a^b f(\phi(x)) \phi'(x) dx$$ with $$\xi=\phi(x)=\frac{2}{h}(x-x_c)$$ and $$\phi'(x)=\frac{2}{h}$$. So basically I have $$K^e=K^g\frac{2}{h}$$ so $$K^g=K^e\frac{h}{2}$$. Here this result is wrong, and I don't know where I missed up. I'm supposed to have $$K^g=K^e\frac{2}{h}$$ Thanks :) On $$[x_i, x_{i+1}]$$, you can write $$$$N_i(x) = \frac{x_{i+1}-x}{h} = 1 - \xi = \phi_i(\xi) \quad \mbox{where } \; \xi = \frac{x-x_i}{h}$$$$ So the derivatives satisfy $$$$\frac{dN_i}{dx}(x) = - \frac{1}{h} = \frac{d\phi_i}{d\xi}\left(\xi (x) \right) \frac{d\xi}{dx}(x) = (-1) \left( \frac{1}{h} \right)$$$$ So the integral becomes $$$$\int_{x_i}^{x_i+h} N_i'(x) N_i'(x) dx = \int_{x_i}^{x_i+h} \left( - \frac{1}{h} \right) \left( - \frac{1}{h} \right) dx = \frac{1}{h}$$$$ If we use the change of variables, we have $$\begin{multline} \int_{x_i}^{x_i+h} N_i'(x) N_i'(x) dx = \int_{x_i}^{x_i+h} \left( \frac{d\phi_i}{d\xi}\left(\xi (x) \right) \frac{d\xi}{dx}(x) \right)^2 dx = \int_{0}^{h} \left( \frac{d\phi_i}{d\xi}\left( \xi \right) \frac{d\xi}{dx}( \xi ) \right)^2 \left( h d\xi \right) \\ = \int_{0}^{1} \left[ \left( -1 \right) \left( \frac{1}{h} \right) \right]^2 \left( h d\xi \right) = \frac{1}{h} \end{multline}$$
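A quick numerical cross-check of the same bookkeeping (a sketch only, assuming the linear shape functions $\phi_1=(1-\xi)/2$, $\phi_2=(1+\xi)/2$ on the reference element $[-1,1]$; the element length `h` below is an arbitrary illustrative value):

```python
import numpy as np

h = 0.25                        # arbitrary element length, for illustration only

# Derivatives of the linear shape functions on the reference element [-1, 1]
dN_dxi = np.array([-0.5, 0.5])

# Chain rule: dN/dx = dN/dxi * dxi/dx, with dxi/dx = 2/h
dN_dx = dN_dxi * (2.0 / h)

# Change of variables in the integral: dx = (h/2) dxi; the integrand is constant,
# so integrating over [-1, 1] just multiplies by its length 2.
K_elem = np.outer(dN_dx, dN_dx) * (h / 2.0) * 2.0

print(K_elem)                   # (1/h) * [[1, -1], [-1, 1]], i.e. the reference matrix scaled by 2/h
```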
2019-07-17 22:24:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.9998822212219238, "perplexity": 1136.6233321727068}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525414.52/warc/CC-MAIN-20190717221901-20190718003901-00245.warc.gz"}
https://www.gamedev.net/forums/topic/448677-interrupt-code-for-detecting-memory-corruption/
# Interrupt code for detecting memory corruption

## Recommended Posts

I used to have a piece of code that would let me set a kind of "watch" on a variable, and any time any part of the app would try to modify the variable's contents, it would use an interrupt to stop the debugger and let you see what was trying to modify it. It was a single page of code, written by a guy whose name I think was Mike something, but my memory is foggy. Anyone have a clue what it was called and where to get it? It was really useful, and I think it could help me track down a minor corruption bug I'm experiencing now. edit: I should probably mention that it was in C/C++

##### Share on other sites

It would help to know what platform you're on. Aside from that, you can use a debugger (MSVC, WinDbg, GDB, DBX, etc) to do this at runtime instead of programmatically, it's probably much more convenient that way.

##### Share on other sites

Under Windows, right? As outRider says, your debugger is designed for this sort of task - use it if possible. Otherwise, the only way I can think to emulate such functionality is to use hardware breakpoints. I haven't tested this very much, but here is something I've adapted from a user-mode debugger I wrote a while back:

```cpp
#include <Windows.h>

enum BP_ACCESS
{
    BP_WRITE,
    BP_READWRITE
};

int CreateHWBP(void* const address, int index, BP_ACCESS access)
{
    if (index < 0 || index > 3)
        return -1; // Minimal error-handling

    CONTEXT context;
    context.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    DWORD thread_id = GetCurrentThreadId();
    HANDLE thread_handle = OpenThread(THREAD_ALL_ACCESS, false, thread_id);
    GetThreadContext(thread_handle, &context);

    // Reset
    context.Dr7 &= ~(3 << (2 * index));
    context.Dr7 &= ~(3 << (16 + 2 * index));
    context.Dr7 &= ~(3 << (24 + 2 * index));

    // Enable
    context.Dr7 |= (1 << (2 * index));

    // Address
    switch (index)
    {
    case 0: context.Dr0 = reinterpret_cast<DWORD>(address); break;
    case 1: context.Dr1 = reinterpret_cast<DWORD>(address); break;
    case 2: context.Dr2 = reinterpret_cast<DWORD>(address); break;
    case 3: context.Dr3 = reinterpret_cast<DWORD>(address); break;
    }

    // Access
    switch (access)
    {
    case BP_WRITE:     context.Dr7 |= (1 << (16 + 2 * index)); break;
    case BP_READWRITE: context.Dr7 |= (3 << (16 + 2 * index)); break;
    default: break;
    }

    SetThreadContext(thread_handle, &context);
    CloseHandle(thread_handle);
    return 0; // Success
}

int DestroyHWBP(int index)
{
    if (index < 0 || index > 3)
        return -1;

    CONTEXT context;
    context.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    DWORD thread_id = GetCurrentThreadId();
    HANDLE thread_handle = OpenThread(THREAD_ALL_ACCESS, false, thread_id);
    GetThreadContext(thread_handle, &context);

    switch (index)
    {
    case 0: context.Dr0 = 0; break;
    case 1: context.Dr1 = 0; break;
    case 2: context.Dr2 = 0; break;
    case 3: context.Dr3 = 0; break;
    }

    SetThreadContext(thread_handle, &context);
    CloseHandle(thread_handle);
    return 0; // Success
}
```

It isn't very portable, and certainly not 64-bit-compliant. Neither is it compatible with Visual Studio's tracing. Call CreateHWBP with a pointer to the watch variable, passing zero as the index and BP_WRITE or BP_READWRITE as appropriate. If the local process violates the break condition, an EXCEPTION_SINGLE_STEP is raised. Without a debugger, this will fall through to whatever handlers you have installed, probably resulting in a crash. Under Visual Studio, the exception will be caught and handled internally.
By the looks of things, VS uses the same technique for its own breakpoints. Moreover, it ignores any single-step errors in places it isn't using itself. This has a few consequences:

1. Setting any breakpoints in the debugger will overwrite your user-breakpoint.
2. Calling the function after having set a breakpoint in the IDE will similarly result in an overwrite.
3. You may only have one such breakpoint going at a time, with index = 0. Putting index = 1, 2, 3 doesn't seem to do anything.

You can remove the breakpoint by calling DestroyHWBP with the corresponding index.

##### Share on other sites

I'm using VS2005 - what's the command(s) to set up a variable break in that IDE? I know you can set up breaks with equation checks, but how do I set it up to break any time a variable is altered?

##### Share on other sites

Debug -> New Breakpoint -> New Data Breakpoint. Enter the address of the variable to watch (e.g. &m_theVariableName) and the number of bytes to watch (e.g. 4 for an int, 2 for a short, etc), then hit "OK".

##### Share on other sites

Hardware supported breakpoints use... hardware... so you can only watch so many bytes. You can have either one or two, each 4 bytes long, on most hardware. If you go outside of this, it has to emulate the code. This is extremely slow. HW breakpoints are a great tool - everyone should know about them. :)

##### Share on other sites

Ah see, those data breakpoints are not much use for what I was after - for starters they can only be set once the program is running, and then you have to set them again every time you run it. TheAdmiral has posted some nice code there - it wasn't the exact same code I had before, but it looks like it does the same kinda thing - you add a call in your code to set a HW breakpoint on the variable space you're interested in, and hey presto.

##### Share on other sites

If you're only watching pointers, it might be better to create a 'DebugPointer' template class with overloaded operators. You can call 'DebugBreak()' from the windows API to break at any time inside the various operators. Optionally, use IsDebuggerPresent() so that you only break when a debugger is watching, and you could use #ifdef blocks to entirely remove the test in release builds.

##### Share on other sites

Quote: Original post by BrianL: Hardware supported breakpoints use... hardware... so you can only watch so many bytes. You can have either one or two, each 4 bytes long, on most hardware.

For the sake of correctness, I'm not sure how things used to be, but since the 486, Intel set the standard as four hardware breakpoints per task (hence DR0 through DR3 being address registers). And although it's most common to use dword breakpoints, the processor supports HWBPs of size 1, 2, or 4 bytes.
2017-12-17 20:01:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19723792374134064, "perplexity": 3250.4629557210874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948597485.94/warc/CC-MAIN-20171217191117-20171217213117-00762.warc.gz"}
http://www.physicsforums.com/showthread.php?t=188409
## commutation relations

I need to find the commutation relation for: $$[x_i , p_i ^n p_j^m p_k^l]$$ I could apply a test function g(x,y,z) to this and get: $$=x_i p_i ^n p_j^m p_k^l g - p_i ^n p_j^m p_k^l x_i g$$ but from here I'm not sure where to go. Any help would be appreciated.

Recognitions: Gold Member Science Advisor Staff Emeritus

You don't need a test function. All you need are the following: (i) $[x_i,p_j] = i \hbar \delta_{i,j}$ (ii) $[AB,C]=A[B,C]+[A,C]B$

Should that be $[x_i,p_j] = i \hbar \delta_{i,j}$?

## commutation relations

I guess a more reasonable question would be: how do I expand $[x_i,p_i^n]$?

Recognitions: Gold Member Science Advisor Staff Emeritus

If you use the second relationship in post #2 recursively, you will discover a general form for the commutator $[x_i,p_i^n]$. Try p^2 and p^3 first - you'll see what I mean. PS: Yes, there was a "bad" minus sign which I've now fixed.

How about: $[x_i,p_i^n]=ni \hbar p_i ^{n-1}$

Recognitions: Gold Member Science Advisor Staff Emeritus

Looks good. Now you're just a step or two away from the answer to the original question.
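Spelled out (a sketch, assuming the result already holds for $n-1$), relation (ii) above gives that recursion directly:

$$[x_i,p_i^{\,n}] = -[p_i^{\,n},x_i] = -\Big(p_i\,[p_i^{\,n-1},x_i] + [p_i,x_i]\,p_i^{\,n-1}\Big) = p_i\,[x_i,p_i^{\,n-1}] + i\hbar\,p_i^{\,n-1} = (n-1)\,i\hbar\,p_i^{\,n-1} + i\hbar\,p_i^{\,n-1} = n\,i\hbar\,p_i^{\,n-1}.$$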
2013-05-24 17:55:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23383307456970215, "perplexity": 1858.9740018775146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704933573/warc/CC-MAIN-20130516114853-00072-ip-10-60-113-184.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1750867/isomorphism-of-quotient-ring-bbb-qx-langle-x3-rangle
Isomorphism of Quotient ring $\Bbb Q[x]/\langle x^3\rangle$

$\Bbb Q[x]/\langle x^3\rangle$ is isomorphic to $R$. Find $R$.

I know that $\Bbb Q[x]/\langle x\rangle$ is isomorphic to $\Bbb Q$ by the isomorphism $T(f(x))=f(0)$, and that $\Bbb Z[x]/\langle(x-1)(x-2)\rangle$ is isomorphic to $\Bbb Z\times \Bbb Z$ by the isomorphism $T(f(x))=(f(1),f(2))$. Looking at these examples, $\Bbb Q[x]/\langle x^3\rangle$ must be isomorphic to $\Bbb Q\times \Bbb Q\times \Bbb Q$. But I am not sure. Help me please.

This is not true. $\mathbb Q[x]/\langle x^3 \rangle$ has a nonzero nilpotent element, namely the class of $x$, since $x^3=0$. $\mathbb Q \times \mathbb Q \times \mathbb Q$ has no nonzero nilpotent elements. We have $\mathbb Q[x]/\langle (x-a)(x-b)(x-c) \rangle \cong \mathbb Q \times \mathbb Q \times \mathbb Q$ if and only if $a,b,c$ are three distinct numbers. Approaching the original question, actually in my opinion $\mathbb Q[x]/\langle x^3 \rangle$ is the easiest description for this ring. Another isomorphic description would be $R=\mathbb Q[A] = \{aI+bA+cA^2 \mid a,b,c \in \mathbb Q\} \subset \operatorname{Mat}(3 \times 3, \mathbb Q)$ with $A=\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}$, hence all matrices of the form $\begin{pmatrix}a&b&c\\0&a&b\\0&0&a\end{pmatrix}$.

• Just a question: wouldn't the element $(x-a)(x-b)(x-c) \rightarrow 0$ under the second quotient? Does this have to do with multiplicity somehow. – frogeyedpeas Apr 20 '16 at 8:14
• Of course, we have $(x-a)(x-b)(x-c)=0$ in the second quotient. – MooS Apr 20 '16 at 8:17
• Wait i'm silly I realized that nilpotent isn't just any 0 element. – frogeyedpeas Apr 20 '16 at 8:19
• @MooS I came to the same conclusion about this being the simplest description of this ring. I'm really curious what the book/professor is looking for with this question. – Alex Mathers Apr 20 '16 at 8:21
• I don't think this necessarily solves the problem so i'll leave it as a comment. The ring that @Moos refers to is tuples $(a,b,c) \in \mathbb{Q}^3$ with addition defined componentwise, and multiplication swapped out with the subtler: $U * V = (u_1 , u_2 ,u_3) * (v_1, v_2, v_3 ) = (u_1 * v_1, u_1*v_2 + v_1 * u_2 + u_3*v_3, u_3*v_1+ v_3*u_1)$ – frogeyedpeas Apr 20 '16 at 8:22
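As a quick check of the matrix description (a sketch, using the identification $x \mapsto A$): multiplying two elements on each side gives the same coefficients,

$$(a+bx+cx^2)(a'+b'x+c'x^2) \equiv aa' + (ab'+a'b)\,x + (ac'+bb'+ca')\,x^2 \pmod{x^3},$$

$$\begin{pmatrix}a&b&c\\0&a&b\\0&0&a\end{pmatrix}\begin{pmatrix}a'&b'&c'\\0&a'&b'\\0&0&a'\end{pmatrix}=\begin{pmatrix}aa'&ab'+a'b&ac'+bb'+ca'\\0&aa'&ab'+a'b\\0&0&aa'\end{pmatrix},$$

so the two multiplications agree entry by entry.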
2019-07-23 17:59:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9474003314971924, "perplexity": 312.19549560373275}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529481.73/warc/CC-MAIN-20190723172209-20190723194209-00548.warc.gz"}
https://jdhao.github.io/2022/05/21/lua-learning-for-nvim/
As a long time nvim user, I am learning Lua and slowly transitioning my nvim config to Lua. In this post, I will share some tips and lessons I have learned the hard way.

## The colon operator

In Lua code, we may see the colon operator followed by some object method. For example, suppose we have a string x and want to find the length of this string. Coming from Python, we might try x.len(), but this will be an error in Lua:

```
stdin:1: bad argument #1 to 'len' (string expected, got no value)
stack traceback:
        [C]: in function 'string.len'
        stdin:1: in main chunk
        [C]: in ?
```

In Lua, we need to use x:len() (or you can use string.len(x), like len(x) in Python):

```lua
x = 'foo bar'
x.len()  --> this is an error
x:len()  --> works without errors
```

x is of type string. When we use x.len() and the field does not exist on x, Lua will look at the metatable of x and find the __index key. The metatable of x is like this:

```lua
{
  __index = {
    byte = <function 1>,
    char = <function 2>,
    dump = <function 3>,
    find = <function 4>,
    format = <function 5>,
    gmatch = <function 6>,
    gsub = <function 7>,
    len = <function 8>,
    lower = <function 9>,
    match = <function 10>,
    rep = <function 11>,
    reverse = <function 12>,
    sub = <function 13>,
    upper = <function 14>
  }
}
```

It will then execute the function corresponding to the key len. However, since the function len() requires an argument, we get the above error. If we instead use x:len(), we implicitly pass x itself as the first argument:

> A call v:name(args) is syntactic sugar for v.name(v,args), except that v is evaluated only once.

So we are actually using x.len(x). In fact, if we use x.len(x), it works perfectly fine.

Ref:

## The difference between defining a function with colon and dot

As discussed in the above section, when we use the colon operator in Lua, we are implicitly passing the object itself (called self) to the function. Suppose we want to implement a string function to concatenate two strings; we can use both the dot and colon operators, but the function signature will be a little different:

```lua
local x = "hello world"

-- dot form (illustrative name): both strings are explicit parameters
function string.concat_dot(str1, str2)
  print(string.format("str1: %s, str2: %s", str1, str2))
  return string.format("%s%s", str1, str2)
end

-- colon form (illustrative name): the first string is passed implicitly as self
function string:concat_colon(s)
  print(string.format("str1: %s, str2: %s", self, s))
  return string.format("%s%s", self, s)
end
```

In this example, if we use the dot operator, we must explicitly provide the two parameters. For the colon operator, the first string is implicitly provided as self, which you can access in the function body, so this function has only one parameter. If you run the code, their results will be exactly the same.

Ref:

## The weirdness of Lua REPL

If we have the following code in test.lua:

```lua
local x = 1
print(x)
```

it will print out 1, not nil. Expected result, nothing special.

In the Lua REPL, if we type the following commands:

```
> local x = 1
> print(x)
```

the output is surprisingly nil, not 1! Crazy, isn't it?

The reason is that in the Lua REPL, each line is treated as a block. Basically, a block is an area where a variable is visible. Local variables only work in that block. For example, the following works:

```
> local x = 1; print(x)
```

This is because when Lua executes a file, it treats the file as a block, so the local variable x does not expire on the next line. Actually we can manually create a block in a Lua file with do ... end (see doc here). It is a little weird, but that is how the Lua REPL works.

Ref:

## In Lua, both 0 and the empty string are true

Unlike Python, both 0 and the empty string are considered true in Lua, which is weird.
```lua
if '' then
  print('str is not empty')
else
  print('str is empty')
end

if 0 then
  print('zero is true')
else
  print('zero is false')
end
```

If you run the above code, you will get the following output:

```
str is not empty
zero is true
```

## Lua table index starts from 1, not 0

This feature tripped me up really hard and I ended up debugging my code for nearly half an hour. So if we have a table local a = {1, 2, 3}, the first element will be a[1], not a[0]. I have repeatedly forgotten this and made mistakes.

Ref:

## Why use parentheses around literal strings when calling their methods?

If you use a literal string and want to call a string method, you need to wrap the string with parentheses. This is how the Lua syntax works.

• Right: print(("hello"):len())
• Wrong: print("hello":len())

Ref:
2022-06-27 17:16:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37068843841552734, "perplexity": 4107.160648513594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00156.warc.gz"}
http://stats.stackexchange.com/questions/4579/is-the-lagrange-function-objective-plus-lambda-times-constraints-or-objective-mi
# Is the Lagrange function objective plus lambda times constraints or objective minus lambda times constraints? My question is: plus or minus? Is the Lagrange function: • the objective PLUS lambda times constraints, or • the objective function MINUS lambda times constraints? Example: want to maximize A=xy subject to g(x,y)=2x+y-400=0 is F(x,y,lambda): • xy + lambda (2x+y-400), or • xy - lambda (2x+y-400) I found both notations. Does that mean one can use them interchangeably (i.e. they are the same)? Thanks for help - The tag "machine learning" doesn't seem very appropriate there. Maybe "optimization" or something like that? –  chl Nov 16 '10 at 8:31 I agree, but as I am new here I couldn't create that tag. –  another_day Nov 16 '10 at 8:36 they are not the same but if you minimize them with respect to lambda in the whole real line changing lambda in -lambda does not change the solution. Not that there is a mathematic stackoverflow somewhere (This site is more for statistics) –  robin girard Nov 16 '10 at 8:54 It is exactly the same! You want the constraint to be respected, and you don't care about the sign of g(x,y) - But indeed I am not sure stats.stackexchange.com is the best place to ask your question –  RockScience Nov 16 '10 at 9:27 .. and it is not the best place to comment the question ;) –  robin girard Nov 16 '10 at 9:54 Thus it does not even play a role whether I want to maximize or minimize the function A? –  another_day Nov 16 '10 at 11:07 If you have an optimization problem of the form: $$min_{x} f(x) \\ s.t. \\ g(x) = 0 \\ h(x) \leq 0 \\$$ Then the Lagrangian is $$L(x,\lambda,\nu) = f(x) + \lambda g(x) + \nu h(x)$$ with $\lambda \in \mathbb{R}$ and $\nu \geq 0$. The original optimization problem is equivalent then to $$\min_{x} \max_{\lambda \in \mathbb{R}, \nu \geq 0} L(x,\lambda,\nu)$$. Why? Because for any point $x_0$ with $g(x_0) \neq 0$, then the inner maximization will cause the term $\lambda g(x_0)$ to "blow up." The same is true for any $x_0$ with $h(x_0) \geq 0$. So, long story short, for exact equality constraints, the langrange multiplier is a real number, so it doesn't matter if you do plus or minus. For inequality constraints of the form $h(x) \leq 0$, do plus. See Boyd's notes on duality for more background: http://www.stanford.edu/class/ee364a/lectures/duality.pdf -
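For the concrete example in the question, a short worked check (numbers are only for illustration): with either sign convention the stationary point is the same, and only the multiplier flips sign.

$$L_{\pm}=xy\pm\lambda(2x+y-400):\qquad \frac{\partial L_{\pm}}{\partial x}=y\pm2\lambda=0,\quad \frac{\partial L_{\pm}}{\partial y}=x\pm\lambda=0,\quad 2x+y=400,$$

$$\Rightarrow\ x=100,\ y=200,\ A=xy=20000,\qquad \lambda=-100 \text{ for the plus convention},\ \lambda=+100 \text{ for the minus convention}.$$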
2013-12-08 21:16:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.81737220287323, "perplexity": 1245.5231659337533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163816314/warc/CC-MAIN-20131204133016-00000-ip-10-33-133-15.ec2.internal.warc.gz"}
https://rosettacommons.org/node/11030
# Extracting PDBs from a Silent file: "Can't find residue type for ARG"

Hello everyone, I am refining protein structures into Cryo-EM maps, and my output is a silent file. Currently, I am attempting to extract the pdbs from the silent file so that I can analyze and score the structures. The protein has a missing loop, and it is failing on the C-terminus right before the missing loop. Specifically, my error for each structure is "ERROR:  can't find residue type for ARG:CtermTruncation at pos 114 in sequence G". I have tried to cluster it with cluster_decoys.sh, use score_jd2, and add ARG.params to my file to no avail. My current script is "extractsilent.sh" and it is outputting "extractsilent.log". I have attached the cluster_decoys and extractsilent scripts below as .txt files along with my error log. Do you have any suggestions or experience in this area that could help me overcome this? Thank you all for your time. AttachmentSize 70.32 KB 352 bytes 580 bytes Sat, 2020-10-17 16:20 avsrivatsa

Hello, I cannot be sure what your problem is without knowing how your silent file was generated, but you might want to try adding the flag -crystal_refine true or -missing_density_to_jump true and then trying to extract again. (also I usually use the extract_pdbs app for this, but I'm not sure it matters!) Sun, 2020-10-18 22:53 danpf

Your suggestion worked & I was successfully able to pull out the pdbs. Thank you for your time. Mon, 2020-10-19 18:16 avsrivatsa
2021-09-23 03:04:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44630783796310425, "perplexity": 3698.038122174253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00675.warc.gz"}
https://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=8596325
# 2018 IEEE 3rd International Conference on Integrated Circuits and Microsystems (ICICM) ## Filter Results Displaying Results 1 - 25 of 93 • ### 2018 The 3rd International Conference on Integrated Circuits and Microsystems Publication Year: 2018, Page(s): 1 | PDF (4403 KB) • ### 2018 IEEE 3rd International Conference on Integrated Circuits and Microsystems Publication Year: 2018, Page(s): 1 | PDF (86 KB) • ### 2018 IEEE 3rd International Conference on Integrated Circuits and Microsystems (ICICM) [Copyright notice] Publication Year: 2018, Page(s): 1 | PDF (82 KB) • ### 2018 IEEE 3rd International Conference on Integrated Circuits and Microsystems Publication Year: 2018, Page(s):1 - 8 | PDF (205 KB) • ### Preface Publication Year: 2018, Page(s): 1 | PDF (63 KB) • ### Committees Publication Year: 2018, Page(s):1 - 3 | PDF (87 KB) • ### Author Index Publication Year: 2018, Page(s):1 - 4 | PDF (246 KB) • ### A Ring Ocillator Based Reliability Monitor Publication Year: 2018, Page(s):1 - 4 | | PDF (431 KB) | HTML Due to the technology nodes further scale, variabilities continue to be a challenge for IC design. The variability is mainly classified as IR-drop, temperature change and aging. We mainly discussed the first two parts in this paper. Overheating can damage the chip permanently and excessive IR-Drop can greatly affect voltage sensitive digital blocks. In order to avoid these damages and make better ... View full abstract» • ### Response Analysis of a 2D MEMS Electromagnetically Driven Micro-Mirror Publication Year: 2018, Page(s):5 - 9 | | PDF (622 KB) | HTML As a MEMS actuator device, the 2D micro-mirror works in two resonant modes to realize biaxial deflection in light detection and ranging (LiDAR) system. By Changing the exit angle of the incident laser, the three-dimensional information of a space object can be received. This paper researches on the response problem of one kind of 2D electromagnetically driven micro-mirror by numerical analysis and... View full abstract» • ### All-Digital Delta-Sigma TDC with Differential Multipath Pre-Skewed Gated Delay Line Time Integrator Publication Year: 2018, Page(s):10 - 14 | | PDF (754 KB) | HTML This paper presents an all-digital 1st-order 1-bit delta-sigma time-to-digital converter ($\Delta \Sigma$ TDC) with a differential multipath pre-skewed bi-directional gated delay line (BDGDL) time integrator. Differential time integration is obtained by performing simultaneous left-shift and right-shift operations of the BDGDL. Pre-skewing is used to lower the per-stage delay and skew e... View full abstract» • ### Architectures and Design Techniques of Digital Time Interpolators Publication Year: 2018, Page(s):15 - 20 | | PDF (968 KB) | HTML This paper provides an in-depth review of the principle, architecture, and design techniques of digital time interpolators. An assessment of the advantages and drawbacks of analog time interpolators are presented first. It is followed with a comprehensive examination of reported digital time interpolators including gated inverter time interpolators, harmonic-rejection time interpolators, voltage-d... View full abstract» • ### Failure Analysis on the Abnormal Leakage of the Transistors at Lower Temperatures Publication Year: 2018, Page(s):21 - 24 | | PDF (408 KB) | HTML Failure location and mechanism analysis of a type of transistors failure are presented in this work. The failure phenomenon of abnormal leakage appeared in the collector terminal at low temperatures test. 
A series of methods including residual gas analysis (RGA), decapping internal inspection, isolation exclusion test and verification test are adopted to analyze the cause of the failure. Based on ... View full abstract» • ### A CMOS Laser Driver with Configurable Optical Power for Time-of-Flight 3D-Sensing Publication Year: 2018, Page(s):25 - 28 | | PDF (512 KB) | HTML A fast-pulse, configurable output optical power and high-pulse frequency integrated CMOS laser driver for 3D sensing is presented in this paper. The architecture of the proposed laser driver uses a DC-coupled structure, while risetime improvement is achieved by adding a resistor in parallel with the VCSEL. The proposed integrated laser driver is implemented in a $0.18\mu \text{m}$ CMOS ... View full abstract» • ### Research and Design of Class D Amplifier at 1 MHz Publication Year: 2018, Page(s):29 - 32 | | PDF (357 KB) | HTML This paper is mainly for the study of current mode Class-D(CMCD) power amplifier at 1MHz operating frequency. The article mainly focuses on the influence of the driving circuit of the power MOSFET on the Class-D amplifier at high frequency, the parameter calculation in the power amplifier circuit and the parameters of the output transformer. The calculations are discussed in three aspects. In addi... View full abstract» • ### A 1.8V Output RF Energy Harvester at Input Available Power of −20 dBm Publication Year: 2018, Page(s):33 - 37 | | PDF (377 KB) | HTML This paper presents a RF energy harvester (RFEH) operating at a frequency of 900 MHz in $0.18-1\mu\text{m}$ CMOS technology. It mainly consists of an RF rectifier, a low dropout voltage regulator (LDO), and a self-starting circuit. A level-l hybrid forward and backward compensation technique is employed for the RF rectifier to lower the equivalent threshold voltage of MOS transistors, w... View full abstract» • ### A 1-V 90.3-dB DR 100-kHz BW 4th-Order Single Bit Sigma-Delta Modulator in 40-nm CMOS Technology Publication Year: 2018, Page(s):38 - 41 | | PDF (554 KB) | HTML A 1-V 4th-order discrete-time single bit sigma-delta modulator over a signal bandwidth of 100 kHz is presented in this paper. The feed forward topology is selected to reduce the voltage swings of the integrators. In addition, a novel two-stage inverter-based amplifier is proposed to improve the DC gain in low voltage environment. To save power consumption, a pure dynamic comparator is employed. De... View full abstract» • ### Calibration Mechanism for Input/Output Termination Resistance in 28nm CMOS Publication Year: 2018, Page(s):42 - 46 | | PDF (480 KB) | HTML This paper presents a method of receiver and transmitter input/output resistance calibration in UMC 28nm CMOS process. Designed and verified in Cadence IC615, the actual calibration resistance of the simulation (approximately 200Ω, which is four times the impedance of the transmission line) with high robustness under all PVT conditions, and the maximum deviation is 0.85%. The proposed calibration ... View full abstract» • ### A Wide Tuning Range Low $\pmb{K}_{\mathbf{VCO}}$ and Low Phase Noise VCO Publication Year: 2018, Page(s):47 - 50 | | PDF (252 KB) | HTML A wide tuning range, low $\pmb{K}_{\mathbf{VCO}}$ and low phase noise VCO is presented in this paper by combining fully differential active inductor (DAI), switched capacitor array and MOS-based variable capacitors. The adoption of the DAI instead of spiral inductor in the LC tank reduces the phase noise of VCO due to high $\pmb{Q}$ value of DAI. 
The multiple tuning modes by ... View full abstract» • ### The Design of High Linear Bootstrapped S/H Switch Publication Year: 2018, Page(s):51 - 55 | | PDF (478 KB) | HTML Sometimes, the input-port of A/D converter always needs a S/H switch. In Sample state, the S/H switch should always be opening, now voltages between input-port and output-port in S/H switch are always the same. In Hold state, the S/H switch should always be closing, now the voltage of output-port in S/H switch will keep a fixed value, and it will not be changed with the voltage of input-port in S/... View full abstract» • ### Novel Design of SOI SiGe HBTs with High Johnson's Figure-of-Merit Publication Year: 2018, Page(s):56 - 59 | | PDF (272 KB) | HTML A novel design of SOI SiGe HBTs with a buried n+ thin layer and p− doping layer is proposed to improve the product of cutoff frequency and breakdown voltage, which is a vital Johnson's figure of merit (J-FOM) of the device. In this design, the n+ thin layer is used to increase the cutoff frequency and the p− doping layer is adopted to maintain a high bre... View full abstract» • ### Power Characteristics for the Dual Channel 4H-SiC Metal Semiconductor Field Effect Transistor Publication Year: 2018, Page(s):60 - 63 | | PDF (531 KB) | HTML The current and breakdown characteristics of a dual channel 4H-SiC metal semiconductor field effect transistor (4H-SiC MESFET) are studied by using simulation tool ISE-TCAD. The simulated results show that current density of the dual channel structure is much higher than that of the conventional structure at the same gate bias. The breakdown happened in the gate terminal close to the drain and the... View full abstract» • ### Error Compensation Methods for Laser-Displacement-Sensors Surface Modeling Publication Year: 2018, Page(s):64 - 68 | | PDF (406 KB) | HTML Laser displacement sensors (LDSs) have been widely used in surface modeling for their pinpoint accuracy. To match the high accuracy of LDSs, the influence of machinery vibration must be compensated. In this letter, we built a mathematical model of the relative position between LDSs and objects. Based on the model, the Direct Compensation Method and the Curve Fitting Compensation Method were introd... View full abstract» • ### Research on the Microwave Scattering from Bare Land Based on IEM Publication Year: 2018, Page(s):69 - 72 | | PDF (385 KB) | HTML With the development of remote sensing for the land environment, the microwave scattering of bare land attracts more attention. Comparing Gauss rough surface, the surface with exponential correlation function shows better agreement with natural environment. In this paper, the scattering characteristics of rough surface with exponential correlation function are investigated. The monostatic and bist... View full abstract» • ### Efficient Hihg-Sigma Yield Analysis with Yield Optimization Method for Near-Threshold SRAM Design Publication Year: 2018, Page(s):73 - 79 | | PDF (456 KB) | HTML The enhancement of process, the feature size of device becomes smaller, and the deductions of power supply require the design of SRAM should be consummated. For the design of 6-T SRAM, although we have many methods to optimize it in order to overcome lots of challenges. But the smaller feature size and lower power supply will produce a decline in the performance of Static Noise Margin (SNM) and in... 
View full abstract» • ### A High-Temperature Model of MOSFET Characteristics in $0.13-\mu \text{m}$ Bulk CMOS Publication Year: 2018, Page(s):80 - 85 | | PDF (326 KB) | HTML There is an increasing demand for reliable high-temperature electronics implemented in low-cost bulk CMOS technologies. In this paper, first, a detailed study of high-temperature impairments and reliability issues is presented. An analytical model for high-temperature operation of MOSFET devices in $0.13-\mu \text{m}$ bulk CMOS process is then presented. The proposed large-signal model ... View full abstract»
2019-01-24 01:49:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1933666467666626, "perplexity": 6697.758830793745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584445118.99/warc/CC-MAIN-20190124014810-20190124040810-00631.warc.gz"}
https://math.stackexchange.com/questions/2064158/using-stokes-theorem-to-evaluate-a-line-integral
# Using Stokes' theorem to evaluate a line integral

I have a problem where I have to evaluate the line integral by evaluating the surface integral in Stokes's theorem with a choice of $S$, assuming $C$ has a counterclockwise orientation. The $F= \left< 2y,-z,x \right>$ and $C=x^2+y^2=12$ on $z=0$. I get the $\operatorname{curl}(F)$ as $\left<-1,1,2 \right>$. I keep messing up at finding the normal or the boundaries. I first tried making $s: z=12-x^2-y^2$ and making the normal $\left<-dz/dx,-dz/dy,1 \right>$ so it became $n=\left<2x,2y,1\right>$ and I thought the limits would be $-\sqrt{12}<x<\sqrt{12}$ and $-\sqrt{12-x^2}<y<\sqrt{12-x^2}$ but that just ended with $0$ as its answer. I then tried parametrizing $S$ and ended with the unit tangent normal being $\left< \cos u,\sin u,0 \right>$ but had no idea what I would make the bounds for $V$; anything I could come up with would not be the correct answer. The correct answer is $-24\pi$ according to the book and I'm at a total loss how to reach it. Any help would be appreciated.

Take your surface as the disk with radius $\sqrt{12}$ and apply Stokes. Then your normal vector is $<0,0,1>$ and the z-component of your curl is actually $-2$ (we only care about the z-component because we're going to dot with the normal). The dot product of these is $-2$. Integrating $-2$ over the disk is simply multiplying $-2$ times the area of that disk, which is $12\pi$, so the result is $-24\pi$.
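As a cross-check (not part of the original thread), the same value can be obtained by evaluating the line integral directly from the parametrization $\mathbf r(t)=(\sqrt{12}\cos t,\ \sqrt{12}\sin t,\ 0)$ for $0\le t\le 2\pi$:

$$\oint_C \mathbf F\cdot d\mathbf r=\int_0^{2\pi}\left< 2\sqrt{12}\sin t,\ 0,\ \sqrt{12}\cos t\right>\cdot\left< -\sqrt{12}\sin t,\ \sqrt{12}\cos t,\ 0\right>dt=\int_0^{2\pi}-24\sin^2 t\,dt=-24\pi,$$

which agrees with the surface-integral computation above and confirms that the $z$-component of $\operatorname{curl}(F)$ is $-2$ rather than $2$.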
2019-06-27 08:38:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9370145201683044, "perplexity": 79.12854596369954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001014.85/warc/CC-MAIN-20190627075525-20190627101525-00445.warc.gz"}
http://openstudy.com/updates/521e1f8ae4b06211a67e093c
anonymous 2 years ago 6500=5000(1.042)^x 1. anonymous Find x? 2. anonymous yea 3. anonymous Use a logarithm. 4. anonymous i got it to 13=10(1.042)^x 5. anonymous Is it $$13 = 10 * 1.024^x$$ ? 6. anonymous yea 7. anonymous i just dont remember how to solve for variables as exponents 8. anonymous 9. anonymous its been a long summer maybe 10. anonymous First, you must isolate $$1.024^x$$ 11. anonymous $6500=5000(1.042)^x$ $\frac{6500}{5000}=1.042^x$ $1.3=1.042^x$ $log_{10}(1.3)=x*log_{10}(1.042)$ $\frac{log_{10}(1.3)}{log_{10}(1.042)}=x$ $x \dot{=}6.377$ 12. anonymous Then, when you have $$1.024^x = y$$, you do this to find $$x$$: $x = \log_{1.024}(y)$ 13. anonymous @KeithAfasCalcLover , you're not supposed to just give away the answer. 14. anonymous i just needed to see it
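A quick numerical check of the worked solution (note that it uses the base 1.042 from the original equation, even though a couple of the replies typo it as 1.024); this snippet is an illustration added here, not part of the chat:

```python
import math

x = math.log(1.3) / math.log(1.042)
print(x)                      # approximately 6.377, matching the answer above
print(5000 * 1.042 ** x)      # approximately 6500, so the solution checks out
```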
2016-05-29 21:05:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5556206107139587, "perplexity": 11427.096204881118}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049281978.84/warc/CC-MAIN-20160524002121-00206-ip-10-185-217-139.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/58223/is-there-anything-that-doesnt-mix-with-water-or-oil?noredirect=1
# Is there anything that doesn't mix with water or oil? It's well known that water and oil don't mix, and if you put them together, the oil will float on top of the water in a distinct layer. My understanding is that this is because water is polar due to oxygens high electronegativity and oil is not. Most liquids and many solids are capable or mixing or dissolving into water, and I don't know of any other nonpolar liquids, but I imagine they would mix with oil. In electromagnetism, there are magnetic materials that will be affected by magnetism, and nonmagnetic materials that aren't, but there is also diamagnetism which opposes influence of external magnetic fields. So is there any kind of liquid substance that is neither polar nor nonpolar, and therefore doesn't mix with either water or oil? Could you have a beaker full of 3 liquids that has 3 distinct layers? And if that was the case, are things that dissolve into one layer only able to dissolve into that later, or is there the ability to diffuse between layers? • Everything is soluble - it's just a matter of detection limits. Aug 31 '16 at 2:50 • metallic mercury mixes with neither... I don't know what the absolute record is but more than 3 layers is certainly possible. – MaxW Aug 31 '16 at 3:52 • It seems that eight liquid phases is the maximum known. en.wikipedia.org/wiki/… – MaxW Aug 31 '16 at 6:56 • Aug 31 '16 at 9:53 In short, yes, many highly fluorinated liquids are miscible with neither water nor organic solvents. If you look at my answer to this question, you can see that I made a mixture of water, hexanes, and a fluorinated solvent called HT-110. Likewise, the HT-110 will not dissolve most compounds, apart from fluorinated ones like Teflon AF. Many compounds will have appreciable solubility in more than one layer and if they are in contact, there can be transfer between them. Polarity plays a large role in whether two liquids are miscible. Basically, for two liquids to mix, it must be energetically favourable to disrupt the intermolecular forces of the individual liquids in favour of the new intermolecular forces between the components of the mixture. For your example of water and oil, water on its own has significant hydrogen bonding, but since the oil can't participate in hydrogen bonding, there would be a lot less hydrogen bonding in a homogeneous mixture of oil and water—the oil molecules essentially block water molecules from accessing each other. Because the mixture would greatly reduce the intermolecular forces holding the liquids together, it is more energetically favourable to remain separate. In contrast, a mixture of water and ethanol is energetically favourable because ethanol can hydrogen bond and is similarly polar, so the disrupted water-water intermolecular interactions are replaced by similar water-ethanol interactions. In a little more detail, a mixture is favourable when its Gibbs energy of mixing is negative: $$\Delta G_\mathrm{mix} = \Delta H_\mathrm{mix}-T\Delta S_\mathrm{mix}$$ In an ideal mixture of two components A and B, A-B interactions are the same strength as A-A and B-B, so the enthalpy of mixing, $$\Delta H$$, is $$0$$. If A and B have similar properties, A-B interactions tend to be similar and $$\Delta H$$ will be close to $$0$$. If A-B interactions are weaker (e.g. water and oil), $$\Delta H$$ will be positive. $$\Delta S$$ is almost always positive for common mixtures because each component has a larger volume to spread into than if the components were separate. 
(The exception is when A-B interactions are much stronger than A-A and B-B interactions which prevents A and B from moving randomly through the volume.) This means that mixtures can still form if the enthalpy of mixing is positive, if the entropic contribution can overcome it. This explains why some mixtures are possible at elevated temperatures but not room temperature. • Sugar, cocoa, salt, water: no mix. Heat water very hot, mix. Then add milk, heat to serving temperature, top with little marshmallows. Aug 31 '16 at 3:47
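As a small numerical illustration of $\Delta G_\mathrm{mix} = \Delta H_\mathrm{mix}-T\Delta S_\mathrm{mix}$ (not part of the original answer): assuming an ideal entropy of mixing for a 50/50 two-component liquid mixture and a made-up positive enthalpy of mixing, the sketch below shows how raising the temperature can flip the sign of the Gibbs energy of mixing.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dG_mix(dH, T, xA=0.5):
    """Gibbs energy of mixing per mole, assuming the standard ideal entropy of
    mixing at mole fraction xA and a user-supplied enthalpy of mixing dH (J/mol)."""
    xB = 1.0 - xA
    dS = -R * (xA * math.log(xA) + xB * math.log(xB))  # ideal entropy of mixing
    return dH - T * dS

# With a hypothetical dH = +2000 J/mol, mixing is unfavourable at 298 K
# (dG > 0) but becomes favourable at higher temperature (dG < 0):
for T in (298, 400):
    print(T, dG_mix(2000, T))
```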
2021-12-04 13:56:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5095826983451843, "perplexity": 1069.008287615824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362992.98/warc/CC-MAIN-20211204124328-20211204154328-00495.warc.gz"}
https://reaktoro.org/api/structReaktoro_1_1SmartKineticsOptions.html
Reaktoro  v2.1.1 A unified framework for modeling chemically reactive systems SmartKineticsOptions Struct Reference The options for smart chemical kinetics calculation. More... #include <SmartKineticsOptions.hpp> Collaboration diagram for SmartKineticsOptions: [legend] ## Public Member Functions SmartKineticsOptions ()=default Construct a default SmartKineticsOptions object. SmartKineticsOptions (SmartEquilibriumOptions const &other) Construct a SmartKineticsOptions object from a SmartEquilibriumOptions one. Public Attributes inherited from SmartEquilibriumOptions EquilibriumOptions learning The options for the chemical equilibrium calculations during learning operations. double reltol_negative_amounts = -1.0e-14 The relative tolerance for negative species amounts when predicting with first-order Taylor approximation. More... double reltol = 0.005 The relative tolerance used in the acceptance test for the predicted chemical equilibrium state. double abstol = 0.01 The absolute tolerance used in the acceptance test for the predicted chemical equilibrium state. double temperature_step = 10.0 The step length used to discretize temperature in the temperature-pressure space when storing learned calculations (in K). double pressure_step = 25.0e+5 The step length used to discretize pressure in the temperature-pressure space when storing learned calculations (in Pa). ## Detailed Description The options for smart chemical kinetics calculation. The documentation for this struct was generated from the following file:
2023-02-01 05:56:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23610275983810425, "perplexity": 9781.174197179253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00695.warc.gz"}
https://stats.stackexchange.com/questions/406596/complete-a-bayesian-network-by-specifying-the-probability-distributions
# Complete a Bayesian Network by specifying the probability distributions I have a hierarchical Bayesian Network like this: Here: $$R≡$$ log level of poisonous gas (radon) in a house $$B≡$$ type of house (With a basement or without) $$C≡$$ a county in Minnesota where the house is located (84 of those) $$U≡$$ log level of uranium in the soil in each of the counties. So I have the following structure of my Directed Acyclic Graph (DAG): $$P(C,U,B,R)=P(C)P(U|C)P(B|C)P(R|U,B)$$ This is the table I am given. (a part of it, it contains over 900 results) Now the question I have to answer is: "Complete the Bayesian network by specifying all the required probability distributions. The resulting posterior distribution must be such that it is possible to sample from at least one of its full conditional distributions." Could anyone explain to me what exactly I am being asked here? I am new to Bayesian statistics and don't really know how to proceed. Thanks.
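Not an answer to the assignment, but to illustrate what the factorization $$P(C,U,B,R)=P(C)P(U|C)P(B|C)P(R|U,B)$$ means operationally, here is a tiny ancestral-sampling sketch; every distribution family and coefficient below is invented purely for illustration and is not implied by the problem statement.

```python
import numpy as np

def sample_one(county_probs, u_mean, basement_prob, rng):
    """Draw (C, U, B, R) in the order dictated by the DAG factorization
    P(C) P(U|C) P(B|C) P(R|U,B).  All distributional choices are hypothetical."""
    c = rng.choice(len(county_probs), p=county_probs)   # C ~ P(C)
    u = rng.normal(u_mean[c], 0.3)                      # U | C
    b = rng.binomial(1, basement_prob[c])               # B | C
    r = rng.normal(1.5 + 0.7 * u - 0.3 * b, 0.5)        # R | U, B
    return c, u, b, r

rng = np.random.default_rng(0)
print(sample_one([0.2, 0.5, 0.3], [0.1, 0.4, -0.2], [0.8, 0.6, 0.9], rng))
```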
2019-06-26 16:52:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7631592154502869, "perplexity": 682.2906556794014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000367.74/warc/CC-MAIN-20190626154459-20190626180459-00098.warc.gz"}
https://www.physicsforums.com/threads/vaccuum-at-home.195699/
# Vacuum at home.

1. Nov 3, 2007 ### Anekdot I am curious how can I make a vacuum at home without having any sophisticated hardware. Not asking for 100% vacuum, just 75+% at least.

2. Nov 3, 2007 ### ZapperZ Staff Emeritus What are 100% and 75% vacuum? Zz.

3. Nov 3, 2007 ### Anekdot 100% empty of matter. I don't mean precisely that I need 75+% vacuum, I just meant it must be as empty as it possibly can be made at home.

4. Nov 3, 2007 ### ZapperZ Staff Emeritus No one has achieved 100% "empty matter". That still doesn't tell me what "75%" is. What vacuum level do you want in terms of the pressure? Zz.

5. Nov 3, 2007 ### Anekdot Okay, forget about all %, just the best vacuum I can get at home. I'm really not a physicist and not really aware of vacuum characteristics like pressure; I need the pressure to be about the same as in a space vacuum.

6. Nov 3, 2007 ### ZapperZ Staff Emeritus Then it is not possible using what you wanted. Zz.

7. Nov 3, 2007 ### Anekdot What would be the cheapest way to make it possible?

8. Nov 3, 2007 ### ZapperZ Staff Emeritus A stainless steel, clean vacuum chamber, a combination of scroll pump, turbo pump and/or ion pump/cryopump, heating tapes, and lots of clean, UHV-approved (ultra-high vacuum) clean gloves for handling inside-vacuum parts, and maybe, just maybe, you'll get to 10^-10 Torr. Those are not "cheap", and certainly don't cost less than $50,000. And I haven't included the cost of the controllers for those pumps yet and vacuum connectors/pipe, and flanges. And when I said "clean vacuum chamber", I mean clean with citrinox in an ultrasonic bath. Zz.

9. Nov 3, 2007 ### Anekdot I only got $100 for this, what vacuum can I make for this?

10. Nov 3, 2007 Using just an ordinary rotary pump (essentially the same type of pump that is used to pump e.g. water) you can get down to less than a 1/1000 of atmospheric pressure. However, even a rotary pump is more expensive than $100. If you just want to play around with low pressures you can always just try pumping with something else. Even a vacuum cleaner can be used to create a "vacuum", albeit a very bad one.

11. Nov 3, 2007 ### ZapperZ Staff Emeritus .. or one can just suck on a straw.... Zz.

12. Nov 3, 2007 ### Gokul43201 Staff Emeritus You can get about 28" Hg (better than 90% vacuum) with a cheap venturi pump. If you look around, you might find a cheap, used one for $20 or so.

13. Nov 3, 2007 ### cesiumfrog 75%? Is that like, remove three quarters of the air from a vessel? Probably doable. The cheapest way to make a fun vacuum is like this: take an empty coke can, put a tablespoon or so of water in, hold it over the stove until it is boiling (the idea here is to fill the can up with water vapour, displacing all of the air), then [using long tongs, etc, your safety is your concern] lower the can upside-down into a sink-full of cold water. As the water vapour condenses, the can will be crumpled in a bang. Another related method is just to take a long tube of some liquid, and invert it into a reservoir, like a mercury barometer (it works best with such a dense and low-volatility liquid). A cheap pump will probably make a good enough vacuum for you to make marshmallows "breathe". A lot depends on what you actually want the vacuum for.. I remember studying electric discharges in high school but being unable to produce our own cathode ray tubes. Makes me wonder how original cathode ray experiments were evacuated... but I presume it's only the modern cutting edge work that requires such expense to replicate.

14.
Nov 3, 2007 ### f95toli I suspect CRTs are evacuated using the same method as vacuum tubes, i.e. using a getter (e.g. barium).
2016-12-03 07:22:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4196665585041046, "perplexity": 3821.8999384746717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540909.75/warc/CC-MAIN-20161202170900-00118-ip-10-31-129-80.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/456477/can-i-sabotage-attempts-to-make-epub-from-my-pdf
# Can I sabotage attempts to make EPUB from my PDF? Suppose that I have a tex-based PDF manuscript with math content. I am wondering if it is possible to "sabotage" attempts to convert that PDF to EPUB. Specifically, I am wondering if all math content in the PDF can be marked up someway "under the hood" of the PDF so that if an attempt is made to convert it to EPUB, the math will come out saying "Hello There!" or some other message of my choosing. I'm fine if all my math source has to change to something like \begin{math}...\end{math}. I'm imagining something like that and inserting something into the PDF structure with each such environment. I am suffering from a very poor understanding of PDF and how EPUB conversion works. I appreciate any attempt to educate me on why what I am considering is not possible. And of course I appreciate even more any advice on how to make it actually happen :) This question is motivated by a math textbook whose PDF is freely available and openly licensed. The problem is that rogue actors on the internet automate creating EPUB from the PDF. We don't care that they try to sell the EPUB, but we do care that the math is garbage. We would like to redirect would-be readers to the real source PDF. • My answer to "How can I protect a pdf" is usually you can't since like a book anyone who can read it can photograph it or with a computer its easier to screen grab or convert the data stream, In this case you could consider stenography (in) or watermarks (on) images of your math, it would not stop the jack sparrows but it might slow them down and if poorly done would show up as copied. In all cases the time taken to protect is usually not worth it. If my ancestor who invented the first stone tablet had copyrighted it I'd be a gazillionaire by now. – user170109 Oct 24, 2018 at 1:24 • @KJO I get all that. Is it clear that I'm not trying to protect the copyright? I'm concerned about the broken math. It's a low-level math textbook, and the typical reader will not necessarily recognize that the symbols in front of them are garbage. Oct 24, 2018 at 1:28 • On this forum your asked to give a mwe :-) in this case an image or two to compare the affect and thus effect might help P.S spellchecker changed Steganography to stenography but I guess you got the drift – user170109 Oct 24, 2018 at 1:37 • Your tex is converted to frames & streams of images/text. The text can be chopped to make it harder to copy a stream but most hackers can easily script around that, hence the suggestion to concentrate on images. The nearest method to ''pop up'' an ''I'm watching you'' warning is to frequently embed hidden text and there are several questions such as ''How can I hide comments behind an image / hide a comments box'' That's possibly all you could do, but it's unlikely to be transfered if changes are made, nor would it stop a con'sverter, but hopefully I could be wrong. – user170109 Oct 24, 2018 at 2:07 • I have to wonder whether this might be done with a JavaScript implementation and forms. In essence, put the math or some other important text in a locked text field with a dark gray background (so the text is essentially hidden). Have a "Clear Math/Text" button that resets the background on the form field from dark gray to white. AFAIK, this trick would translate to the EPUB as a bunch of dark gray boxes. Oct 24, 2018 at 2:39 ## 1 Answer I prefer the devils advocate view. make it so easy to copy they can't avoid it • (a) That's my taco question from math.se. (b) I am a WeBWorK developer. 
(c) I've asked on latex.se about making the math latex automatically visible or accessible to readers. Coincidences? Oct 24, 2018 at 14:11 • Are you manually adding that latex source alongside the math, or do you only type it once, and it is somehow automated to come out in both math and source forms? Oct 24, 2018 at 14:13 • Nothings coincidental some say 1^2=\sqrt 1 & we are just = the result of chaos. For illustration its done by cut and paste. The idea is that if you wish readers to get the true source rather than OCR corruption of TeXmath the best way is to include the plain TeX alongside just copy and past text into a box ? Thus to make it easy for readers to check the content. Frequent links to source locations can be interspersed at relevant locations to avoid a simple positional mask of header or footing, with lucky 1 or 2 would get past robot scripts especially if combined into edge of any core images. – user170109 Oct 24, 2018 at 16:44
2022-07-02 23:32:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3978457748889923, "perplexity": 1285.7004687877752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104205534.63/warc/CC-MAIN-20220702222819-20220703012819-00179.warc.gz"}
https://deepai.org/publication/partial-predicate-abstraction-and-counter-example-guided-refinement
# Partial Predicate Abstraction and Counter-Example Guided Refinement

In this paper we present a counter-example guided abstraction and approximation refinement (CEGAAR) technique for partial predicate abstraction, which combines predicate abstraction and fixpoint approximations for model checking infinite-state systems. The proposed approach incrementally considers growing sets of predicates for abstraction refinement. The novelty of the approach stems from recognizing source of the imprecision: abstraction or approximation. We use Craig interpolation to deal with imprecision due to abstraction. In the case of imprecision due to approximation, we delay application of the approximation. Our experimental results on a variety of models provide insights into effectiveness of partial predicate abstraction as well as refinement techniques in this context.

## 1 Introduction

State-explosion is an inherent problem in model checking. Every model checking tool - no matter how optimized - will report or demonstrate one of the following for systems that push its limits: out of memory error, non-convergence, or inconclusive result. As the target systems of interest (hardware, software, or biological systems) grow in terms of complexity, and consequently in size, a great deal of manual effort is spent on verification engineering to produce usable results. We admit that this effort will always be needed. However, we also think that hybrid approaches should be employed to push the limits for automated verification. Abstract interpretation framework CC77 provides a theoretical basis for sound verification of finite as well as infinite-state systems. Two major elements of this framework are abstraction and approximation. Abstraction defines a mapping between a concrete domain and an abstract domain (less precise) in a conservative way so that when a property is satisfied for an abstract state the property also holds for the concrete states that map to the abstract state.
Approximation, on the other hand, works on values in the same domain and provides a lower or an upper bound. Abstraction is a way to deal with the state-explosion problem whereas approximation is a way to achieve convergence and hence potentially a conclusive result. When an infinite-state system is considered there are three basic approaches that can be employed: pure abstraction, pure approximation111Assuming the logic that describes the system is decidable., and a combination of abstraction and approximation. The most popular abstraction technique is predicate abstraction GS97 , in which the abstract domain consists of a combination of valuations of Boolean variables that represent truth values of a fixed set of predicates on the variables from the concrete system. Since it is difficult to come up with the right set of predicates that would yield a precise analysis, predicate abstraction has been combined with the counter-example guided abstraction refinement (CEGAR) framework CGJ00 . Predicate abstraction requires computing a quantifier-free version of the transformed system and, hence, potentially involves an exponential number of queries to the underlying SMT solver. A widely used approximation technique is widening. The widening operator takes two states belonging to the same domain and computes an over-approximation of the two. A key point of the widening operator is the guarantee for stabilizing an increasing chain after a finite number of steps. So one can apply the widening operator to the iterates of a non-converging fixpoint computation and achieve convergence, where the last iterate is an over-approximation of the actual fixpoint. In this paper we use an implementation of the widening operator for convex polyhedra CH78 that is used in the infinite-state model checker Action Language Verifier (ALV) YB09 . ALV uses fixpoint approximations to check whether a CTL property is satisfied by an infinite-state system BGP97 . In Yav16 we introduced partial predicate abstraction that combines predicate abstraction with widening for infinite-state systems described in terms of Presburger arithmetic. In partial predicate abstraction only the variables that are involved in the predicates are abstracted and all other variables are preserved in their concrete domains. In this paper, we present a counter-example guided abstraction and approximation refinement (CEGAAR) technique to deal with cases where the initial set of predicates are not precise enough to provide a conclusive result. The novelty of the approach stems from the fact that it can identify whether an infeasible counter-example is generated due to imprecision of the abstraction or imprecision of the approximation. Once the type of imprecision is identified, it uses the appropriate refinement. To refine the abstraction, it computes a Craig interpolant Cra57 for the divergence point. To refine the approximation, it delays the widening for least fixpoint computations and increases the number of steps for greatest fixpoint computations. We implemented the combined approach by extending the Action Language Verifier (ALV) YB09 with the CEGAAR technique. Our experimental results show that approximation and abstraction refinement can be merged in an effective way. The rest of the paper is organized as follows. We first present the basic definitions and key results of the two approaches, approximate fixpoint computations and predicate abstraction in the context of CTL model checking, in Section 2. 
Section 3 presents the partial predicate abstraction approach and demonstrates soundness of combining the two techniques. Section 4 presents the core algorithms for the CEGAAR technique. Section 5 presents the experimental results. Section 6 discusses related work and Section 7 concludes with directions for future work. ## 2 Preliminaries In this paper, we consider transition systems that are described in terms of boolean and unbounded integer variables. ###### Definition 2.1. An infinite-state transition system is described by a Kripke structure , where , , , and denote the state space, set of initial states, the transition relation, and the set of state variables, respectively. such that , , and . ###### Definition 2.2. Given a Kripke structure, and a set of states , the post-image operator, , computes the set of states that can be reached from the states in in one step: post[R](A)={b | a∈A ∧ (a,b)∈R}. Similarly, the pre-image operator, , computes the set of states that can reach the states in in one step: pre[R](A)={b | a∈A ∧ (b,a)∈R}. #### Model Checking via Fixpoint Approximations. Symbolic Computation-Tree Logic (CTL) model checking algorithms decide whether a given Kripke structure, , satisfies a given CTL correctness property, , by checking whether , where denotes the set of states that satisfy in . Most CTL operators have either least fixpoint (, ) or greatest fixpoint (, ) characterizations in terms of the pre-image operator. Symbolic CTL model checking for infinite-state systems may not converge. Consider the so-called ticket mutual exclusion model for two processes And91 given in Figure 1. Each process gets a ticket number before attempting to enter the critical section. There are two global integer variables, and , that show the next ticket value that will be available to obtain and the upper bound for tickets that are eligible to enter the critical section, respectively. Local variable represents the ticket value held by process . We added variable to model an update in the critical region. It turns out that checking for this model does not terminate. One way is to compute an over or an under approximation to the fixpoint computations as proposed in BGP97 and check , i.e., check whether all initial states in satisfy an under-approximation (denoted by superscript ) of the correctness property or check , i.e., check whether no initial state satisfies an over-approximation of the negated correctness property. If so, the model checker certifies that the property is satisfied. Otherwise, no conclusions can be made without further analysis. The key in approximating a fixpoint computation is the availability of over-approximating and under-approximating operators. So we give the basic definitions and a brief explanation here and refer the reader to CH78 ; BGP97 for technical details on the implementation of these operators for Presburger arithmetic. ###### Definition 2.3. Given a complete lattice , , is a widening operator iff • , • For all increasing chains in L, the increasing chain is not strictly increasing, i.e., stabilizes after a number of terms. ###### Definition 2.4. Given a complete lattice , , is a dual of the widening operator iff • , • For all decreasing chains in L, the decreasing chain is not strictly decreasing, i.e., stabilizes after a number of terms. The approximation of individual temporal operators in a CTL formula is decided recursively based on the type of approximation to be achieved and whether the operator is preceded by a negation. 
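(Illustrative aside, not part of the original paper.) As a concrete but much simplified illustration of the widening operator of Definition 2.3, the sketch below implements the standard widening on integer intervals rather than on the convex polyhedra used by ALV; it is only meant to show why an increasing chain stabilizes after finitely many widening steps.

```python
import math

def widen(old, new):
    """Standard interval widening: any bound that grew relative to the previous
    iterate is pushed to infinity, so every increasing chain of intervals
    stabilizes after finitely many steps."""
    lo = old[0] if new[0] >= old[0] else -math.inf
    hi = old[1] if new[1] <= old[1] else math.inf
    return (lo, hi)

# Iterates of a non-converging fixpoint computation such as (0,0), (0,1), (0,2), ...
x = (0, 0)
for step in [(0, 1), (0, 2), (0, 3)]:
    x = widen(x, step)
    print(x)          # (0, inf) after the first widening step, then stable
```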
The over-approximation can be computed using the widening operator for least fixpoint characterizations and terminating the fixpoint iteration after a finite number of steps for greatest fixpoint characterizations. The under-approximation can be computed using the dual of the widening operator for the greatest fixpoint characterizations and terminating the fixpoint iteration after a finite number of steps for the least fixpoint characterizations. Another heuristic that is used in approximate symbolic model checking is to compute an over-approximation (denoted by superscript ) of the set of reachable states (), a least fixpoint characterization, and to restrict all the fixpoint computations within this set. ###### Lemma 2.1. Given an infinite-state transition system and , and a temporal property , the conclusive results obtained using fixpoint approximations for the temporal operators and the approximate set of reachable states are sound, i.e., (see BGP97 for the proof). So for the example model in Figure 1, an over-approximation to , the negation of the correctness property, is computed using the widening operator. Based on the implementation of the widening operator in YB09 , it turns out that the initial states do not intersect with and hence the model satisfies . #### Abstract Model Checking and Predicate Abstraction. ###### Definition 2.5. Let denote a set of predicates over integer variables. Let denote a predicate in and denote the boolean variable that corresponds to . represents an ordered sequence (from index 1 to ) of predicates in . The set of variables that appear in is denoted by . Let denote the set of next state predicates obtained from by replacing variables in each predicate with their primed versions. Let denote the set of that corresponds to each . Let , where denotes the set of variables in the concrete model. #### Abstracting states. A concrete state is predicate abstracted using a mapping function via a set of predicates by introducing a predicate boolean variable that represents predicate and existentially quantifying the concrete variables that appear in the predicates: α(s♮)=∃V(φ).(s♮ ∧ |φ|⋀i=1φi⟺bi). (1) #### Concretization of abstract states. An abstract state is mapped back to all the concrete states it represents by replacing each predicate boolean variable with the corresponding predicate : γ(s♯)=s♯[¯φ/¯b] (2) Abstraction function provides a safe approximation for states: ###### Lemma 2.2. , as defined in Equations 1 and 2, defines a Galois connection, i.e., and are monotonic functions and and (see the Appendix for the proof). A concrete transition system can be conservatively approximated by an abstract transition system through a simulation relation or a surjective mapping function involving the respective state spaces: ###### Definition 2.6. (Existential Abstraction) Given transition systems and , approximates (denoted ) iff • implies , • implies , where is a surjective function from to . It is a known Loi95 fact that one can use a Galois connection to construct an approximate transition system. Basically, is used as the mapping function and is used to map properties of the approximate or abstracted system to the concrete system: ###### Definition 2.7. Given transition systems and , assume that , the ACTL formula describes properties of , and forms a Galois connection. represents a transformation on that descends on the subformulas recursively and transforms every atomic atomic formula with (see CGL94 for details). 
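(Illustrative aside, not part of the original paper.) To make the abstraction of states in Equation (1) concrete, here is a minimal sketch using the z3 SMT solver's quantifier-elimination tactic; the variable names s and a1, the single predicate s >= a1, and the chosen concrete state are all made up for the example, and this is not the tool chain used in the paper.

```python
from z3 import Int, Bool, And, Exists, Goal, Tactic

s, a1 = Int('s'), Int('a1')          # concrete integer variables (hypothetical)
b1 = Bool('b1')                      # boolean variable standing for the predicate s >= a1

concrete_state = s == a1 + 1         # a symbolic concrete state
# Equation (1): existentially quantify the concrete variables that occur in the
# predicates, while keeping the equivalences phi_i <=> b_i
abstract_state = Exists([s, a1], And(concrete_state, b1 == (s >= a1)))

g = Goal()
g.add(abstract_state)
print(Tactic('qe')(g))               # quantifier elimination leaves a constraint over b1 only
```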
For example, let be , where and represent and , respectively, when the model in Figure 1 is predicate abstracted wrt to the set of predicates and the Galois connection defined as in Equations 1 and 2. Then, . The preservation of ACTL properties when going from the approximate system to the concrete system is proved for existential abstraction in CGL94 . Here, we adapt it to an instantiation of existential abstraction using predicate abstraction as in CGT03 : ###### Lemma 2.3. Assume , denotes an ACTL formula that describes a property of , denotes the transformation of the correctness property as in Definition 2.7, and forms a Galois connection and defines predicate abstraction and concretization as given in Equations 1 and 2, respectively. Then, implies . ###### Proof. Preservation of atomic properties: If a state in satisfies an atomic abstract property , due to the correctness preserving property of a Galois connection, also satisfies Niel99 . Due to soundness of the mapping between the states in to states in and monotonic property of and , any state in that gets mapped to , that is every state in also satisfies . Preservation of ACTL Properties: Follows from Corollary 1 in CGL94 and using as the mapping function in CGL94 . ∎ ## 3 Partial Predicate Abstraction In Section 3.1, we introduce a symbolic abstraction operator for transitions and an over-approximating abstract post operator derived from it. The abstract post operator enables partial predicate abstraction of an infinite-state system. Section 3.2 elaborates on the proposed hybrid approach that combines predicate abstraction and fixpoint approximations to perform CTL model checking of infinite-state systems. It also demonstrates soundness of the hybrid approach, which follows from the soundness results of the individual approaches and the over-approximating nature of the proposed abstract post operator. ### 3.1 Computing A Partially Predicate Abstracted Transition System We compute an abstraction of a given transition system via a set of predicates such that only the variables that appear in the predicates disappear, i.e., existentially quantified, and all the other variables are preserved in their concrete domains and in the exact semantics from the original system. As an example, using the set of predicates , we can partially abstract the model in Figure 1 in a way that is removed from the model, two new boolean variables (for ) and (for ) are introduced, and , , , , , and remain the same as in the original model. #### Abstracting transitions. A concrete transition is predicate abstracted using a mapping function via a set of current state predicates and a set of next state predicates by introducing a predicate boolean variable that represents predicate in the current state and a predicate boolean variable that represents predicate in the next state and existentially quantifying the current and next state concrete variables that appear in the current state and next state predicates: ατ(r♮)=∃V(φ).∃V(φ′).(r♮ ∧ CS ∧ |φ|⋀i=1φi⟺bi∧ |φ|⋀i=1φ′i⟺b′i), (3) where represents a consistency constraint that if all the abstracted variables that appear in a predicate remains the same in the next state then the corresponding boolean variable is kept the same in the next state: CS=⋀φi∈φ((⋀v∈V(φi)v′=v)⟹b′i⟺bi). #### Concretization of abstract transitions. 
An abstract transition is mapped back to all the concrete transitions it represents by replacing each current state boolean variable with the corresponding current state predicate and each next state boolean variable with the corresponding next state predicate :

$$\gamma_{\tau}(r^{\sharp}) = r^{\sharp}[\bar{\varphi},\bar{\varphi}'/\bar{b},\bar{b}']$$

For instance, for the model in Figure 1 and predicate set , partial predicate abstraction of , , is computed as

$$pc_i = try \ \wedge\ s \geq a_i \ \wedge\ \big((b_1 \wedge \neg b_2 \wedge \neg b'_1 \wedge \neg b'_2)\ \vee\ (\neg b_1 \wedge b_2 \wedge (b'_1 \vee b'_2))\ \vee\ (\neg b_1 \wedge \neg b_2 \wedge \neg b'_1 \wedge \neg b'_2)\big) \ \wedge\ pc'_i = cs. \qquad (4)$$

It is important to note that the concrete semantics pertaining to the integer variables and and the enumerated variable are preserved in the partially abstracted system. Abstraction function represents a safe approximation for transitions: ###### Lemma 3.1. defines a Galois connection (see the Appendix for the proof). One can compute an over-approximation to the set of reachable states via an over-approximating abstract post operator that computes the abstract successor states: ###### Lemma 3.2. provides an over-approximate post operator:

$$post[r^{\natural}](\gamma(s^{\sharp})) \ \subseteq\ \gamma(post[\alpha_{\tau}(r^{\natural})](s^{\sharp}))$$

###### Proof.

$$post[\tau^{\natural}](\gamma(s^{\sharp})) \ \subseteq\ post[\gamma_{\tau}(\alpha_{\tau}(\tau^{\natural}))](\gamma(s^{\sharp})) \quad \text{(due to Lemma 3.1)} \qquad (5)$$

We need to show the following:

$$post[\gamma_{\tau}(\alpha_{\tau}(\tau^{\natural}))](\gamma(s^{\sharp})) \ \subseteq\ \gamma(post[\alpha_{\tau}(\tau^{\natural})](s^{\sharp}))$$
$$post[\gamma_{\tau}(\tau^{\sharp})](\gamma(s^{\sharp})) \ \subseteq\ \gamma(post[\tau^{\sharp}](s^{\sharp}))$$
$$(\exists V^{\natural}.\ \tau^{\sharp}[\bar{\varphi},\bar{\varphi}'/\bar{b},\bar{b}'] \ \wedge\ s^{\sharp}[\bar{\varphi}/\bar{b}])[V^{\natural}/V'^{\natural}] \ \subseteq\ (\exists V^{\sharp}.\ \tau^{\sharp} \ \wedge\ s^{\sharp})[V^{\sharp}/V'^{\sharp}][\bar{\varphi}/\bar{b}]$$
$$(\exists V^{\natural}.\ \tau^{\sharp}[\bar{\varphi},\bar{\varphi}'/\bar{b},\bar{b}'] \ \wedge\ s^{\sharp}[\bar{\varphi}/\bar{b}])[V^{\natural}/V'^{\natural}] \ \subseteq\ (\exists V^{\sharp}.\ \tau^{\sharp} \ \wedge\ s^{\sharp})[\bar{\varphi}'/\bar{b}'][V^{\natural}/V'^{\natural}]$$
$$(\exists V^{\natural}.\ (\tau^{\sharp} \ \wedge\ s^{\sharp})[\bar{\varphi},\bar{\varphi}'/\bar{b},\bar{b}'])[V^{\natural}/V'^{\natural}] \ \subseteq\ (\exists V^{\sharp}.\ \tau^{\sharp} \ \wedge\ s^{\sharp})[\bar{\varphi}'/\bar{b}'][V^{\natural}/V'^{\natural}] \qquad (6)$$

$$post[\tau^{\natural}](\gamma(s^{\sharp})) \ \subseteq\ \gamma(post[\alpha_{\tau}(\tau^{\natural})](s^{\sharp})) \quad \text{(due to Equations (5) and (6))} \qquad (7)$$

∎

### 3.2 Combining Predicate Abstraction with Fixpoint Approximations At the heart of the hybrid approach is a partially predicate abstracted transition system and we are ready to provide a formal definition: ###### Definition 3.1. Given a concrete infinite-state transition system and a set of predicates , where , the partially predicate abstracted transition system is defined as follows: • . • . • . A partially predicate abstracted transition system defined via and functions is a conservative approximation of the concrete transition system. ###### Lemma 3.3. Let the abstract transition system be defined as in Definition 3.1 with respect to the concrete transition system and the set of predicates . approximates : . ###### Proof. It is straightforward to see, i.e., by construction, that implies . To show implies , we need to show that implies , which follows from Lemma 3.2: and , and hence . ∎ Therefore, ACTL properties verified on also hold for : ###### Lemma 3.4. Let the abstract transition system be defined as in Definition 3.1 with respect to the concrete transition system and the set of predicates . Given an ACTL property , . ###### Proof. Follows from Lemmas 2.3 and 3.3. ∎ Using fixpoint approximation techniques on an infinite-state partially predicate abstracted transition system in symbolic model checking of CTL properties BGP97 preserves the verified ACTL properties due to Lemma 2.1 and Lemma 3.4. Restricting the state space of an abstract transition system with an over-approximation of the set of reachable states also preserves the verified ACTL properties: ###### Theorem 3.1. Let the abstract transition system be defined as in Definition 3.1 with respect to the concrete transition system . Let . Given an ACTL property , . ###### Proof.
Follows from Lemma 2.1 that approximate symbolic model checking is sound, i.e., implies , and from Lemma 3.4 that ACTL properties verified on the partially predicate abstracted transition system hold for the concrete transition system, i.e., implies . ∎ As an example, using the proposed hybrid approach one can show that the concrete model given in Figure 1 satisfies the correctness property by first generating a partially predicate abstracted model, , wrt the predicate set and performing approximate fixpoint computations to prove . Due to Theorem 3.1, if satisfies , it can be concluded that satisfies . The main merit of the proposed approach is to combat the state explosion problem in the verification of problem instances for which predicate abstraction does not provide the necessary precision (even in the case of being embedded in a CEGAR loop) to achieve a conclusive result. In such cases approximate fixpoint computations may turn out to be more precise. The hybrid approach may provide both the necessary precision to achieve a conclusive result and an improved performance by predicate abstracting the variables that do not require fixpoint approximations. ## 4 Counter-Example Guided Abstraction and Approximation Refinement In this section, we present a counter-example guided abstraction and approximation refinement technique for partial predicate abstraction (CEGAAR) in the context of CTL model checking. The individual techniques that are combined in partial predicate abstraction have their own specialized techniques for refinement. Counter-example guided abstraction refinement (CEGAR) CGJ00 has been shown to be an effective way of improving the precision of predicate abstraction by inferring new predicates based on the divergence between the abstract counter-example path and the concrete paths. Approximation refinement, on the other hand, involves shrinking the solution set for over-approximations and expanding the solution set for under-approximations. In partial predicate abstraction, a fundamental dilemma is whether to apply abstraction refinement or approximation refinement. Since model checking of infinite-state systems is undecidable and both CEGAR and approximation refinement techniques may not terminate and, hence, may fail to provide a conclusive result, whatever approach we follow in applying these alternative techniques may not terminate either. In this paper, we choose to guide the refinement process using counter-examples. However, a novel aspect of our approach is its ability to recognize the source of the imprecision. So, if the imprecision is due to approximation, it switches from abstraction refinement to approximation refinement. After entering the approximation refinement mode, it may switch back to abstraction refinement, end with a conclusive result, or remain in the same mode. This process of possibly interleaved refinement continues until the property is verified, a real counter-example is reached, or no new predicates can be inferred. Figure 2 shows the CEGAAR algorithm. It takes a concrete transition system, the concrete correctness property to be checked, and a set of seed predicates. It is important that any integer variable that appears in the property can be precisely abstracted using the seed predicates so that Theorem 3.1 can be applied in a sound way. This fact is specified in the precondition of the algorithm. The algorithm also uses some global settings: the widening seed and the over-approximation bound.
The former parameter decides how early in least fixpoint computations widening can be applied, e.g., 0 means starting from the first iteration, and the latter parameter decides when to stop the greatest fixpoint computation. Stopping early results in a less precise approximation than stopping at a later stage. However, the overhead gets bigger as the stopping is delayed. The algorithm keeps local variables and that receive their initial values from the global variables and , respectively (lines 5-6). The algorithm keeps a worklist, which is a list of sets of predicates to be tried for partial predicate abstraction. A challenge in CEGAR is the blow-up in the number of predicates as new predicates get inferred. To deal with this problem, we have used a breadth-first search (BFS) strategy to explore the predicate choices that are stored in the worklist until a predicate set producing a conclusive result can be found. The worklist is initialized to have one item, the seed set of predicates, before the main loop gets started (lines 7-8). The algorithm runs a main loop starting from an initial state, where abstraction is the current refinement strategy (line 9). In abstraction refinement mode, the algorithm removes a predicate set from the worklist to use as the current predicate set , resets approximation parameters to their default global values (lines 13-15), and computes an abstract version of the transition system via partial predicate abstraction with the current predicate set (line 19). Then (line 20) the algorithm computes the fixpoint for the negation of the property, which happens to be an ECTL property (ECTL is the fragment of CTL in which only existential versions of the temporal operators appear). The fixpoint iterates are stored in a map (declared in line 20), which can be queried for the parts of the formula to get the relevant fixpoint iterates, which are stored in a list. As an example, we can access the fixpoint solution for subformula with expression. Indices start at 1 and both and represent the first iterate and both and represent the solution set. If the fixpoint solution to the negation of the property does not have any states in common with the set of initial states, then the property is satisfied and the algorithm terminates (lines 22-23). Otherwise, the property is not satisfied in the abstract system. At this point, GenAbsWitness is called to find out if there is a divergence between the abstract witness path for the negation of the property and the concrete transition system. If there is a divergence due to abstraction (line 27), the refinement mode will be set to abstraction refinement (line 28) and a Craig Interpolant is computed (line 30) for the reachable concrete states that can never reach the divergence point, the deadend states, and the concrete states that can, the bad states CTV03. We use the half-space interpolants algorithm presented in AM13 to compute a compact refinement predicate. We encode the constraint for the half-space interpolant using the Omega Library omega, the polyhedra library used in ALV. If the number of variables used in the encoding reaches the limit set by the Omega Library, we revert to a mode where we collect all the predicates in the deadend states.
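The interplay between the worklist, the two refinement modes, and the approximation parameters can be summarized in the following minimal Python sketch. It only illustrates the control flow described above (and the one-predicate-at-a-time extension strategy discussed next); all function names and the parameter updates are placeholders, not the actual ALV/CEGAAR implementation.

```python
from collections import deque

def cegaar(abstract, check, diagnose, refine_predicates,
           seed_predicates, widening_seed=0, approx_bound=1):
    """Schematic CEGAAR loop: BFS over candidate predicate sets,
    interleaving abstraction refinement and approximation refinement.

    `abstract(preds)` builds a partially predicate-abstracted system,
    `check(system, ws, ob)` runs the approximate fixpoint computation for
    the negated property and returns (verified, witness_info),
    `diagnose(system, info)` classifies a divergence as 'abstraction',
    'approximation', or 'real', and `refine_predicates(info)` returns the
    new predicates obtained from interpolation.
    """
    worklist = deque([frozenset(seed_predicates)])
    while worklist:
        preds = worklist.popleft()                  # BFS over predicate sets
        ws, ob = widening_seed, approx_bound        # reset approximation parameters
        while True:
            system = abstract(preds)                # partial predicate abstraction
            verified, info = check(system, ws, ob)  # approximate fixpoint check
            if verified:
                return "property holds"
            cause = diagnose(system, info)          # divergence analysis
            if cause == "approximation":
                ws, ob = ws + 1, ob + 1             # simplified parameter update
            elif cause == "abstraction":
                for p in refine_predicates(info):   # one new predicate at a time
                    worklist.append(preds | {p})
                break
            else:
                return "real counterexample"
    return "inconclusive"
```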
When a set of refinement predicates is discovered for a given spurious counter-example path, rather than extending the current predicate set with in one shot, it considers as many extensions of as by extending with a single predicate from at a time (lines 31-33) and adds all these predicate sets to the queue to be explored using BFS. In the context of partial predicate abstraction, this strategy has been more effective in generating conclusive results compared to adding all refinement predicates at once, which has caused blow-ups in time and/or memory leading to inconclusive results for our benchmarks. If the divergence between the abstract witness path and the concrete transition system is due to approximation, the refinement mode is switched to approximation refinement and the approximation parameters are updated based on the depth of the abstract witness path up to the divergence point (lines 35-37). So in this mode, rather than updating the current predicate set, the same abstract transition system is used with updated approximation parameters. The process will continue until a conclusive result is obtained or a real witness to the negation of the property can be found, i.e., the divergence is due to neither the abstraction nor the approximation (lines 38-39). ### 4.1 Divergence Detection Divergence detection requires generation of possible witnesses to the negation of the property, which happens to be an ECTL property. Figure 3 presents algorithm GenAbsWitness that starts from the abstract and concrete initial states and uses the solution sets for each subformula stored in to generate a witness path by adding abstract states to the list as the solution sets are traversed. It should be noted that the witness, representing a counter-example to ACTL, is a tree-like structure as defined in CJL02. So algorithm GenAbsWitness traverses paths on this tree to check for the existence of a divergence. It calls algorithm GenAbsWitnessHelper, which additionally keeps track of the previous abstract and concrete states. When this algorithm is called the first time (line 4) in Figure 3, the previous abstract and the previous concrete states are passed as false in the parameter list as the current states, both in the abstract and in the concrete, represent the initial states. Since ECTL can be expressed with temporal operators EX, EU, EG, and logical operators , , and , we present divergence detection in Figures 7-11 for these operators excluding as negation is pushed inside and appears before atomic formulas only. Before explaining divergence checking for each type of operator, we first explain the divergence detection algorithm. Figure 5 presents the algorithm Divergence for checking divergence between an abstract path starting at an abstract initial state and ending at abstract state and a parallel concrete path starting at the concrete initial state and ending at concrete state . The algorithm also gets as input the previous states of and as and , respectively. If both previous states are false (lines 4-5), divergence is not possible as and represent the abstract and concrete initial states, respectively, and due to partial predicate abstraction being an Existential Abstraction (see Definition 2.6). However, for any state other than the initial state, divergence may occur if is not enabled for the same transition that is enabled for.
In that case (line 6, holding true), the algorithm computes the Deadend states, which represent those reachable concrete states that do not have any transition to the concrete states that map to the abstract state (line 8), and the Bad states, which represent those concrete states that map to the abstract previous state and can transition to states that map to (line 9). If all the variables are predicate abstracted, the set of Bad states cannot be empty CTV03. However, due to partial predicate abstraction and the existence of fixpoint approximations for concrete integer variables, Bad can be empty as approximation may add transitions that correspond to multiple steps. An important detail is to record the reason for divergence. If the set of Bad states is empty, divergence is due to approximation (line 10); note that an over-approximation is computed for the negation of the formula, which gets propagated to each temporal operator. Otherwise, it is due to abstraction, in which case the Deadend and the Bad states are recorded as the conflicting states (line 13) to be passed to the Craig Interpolation procedure as shown in Figure 2.
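The deadend/bad-state classification can be illustrated with a small explicit-state sketch. This is only a finite-set illustration of the check described above, assuming the transition relation and the concretizations of the abstract states are available as Python sets and functions; it is not the symbolic implementation used in the paper.

```python
def classify_divergence(reachable, trans, gamma_prev, gamma_cur):
    """Explicit-state sketch of the divergence check.

    `reachable`  : set of reachable concrete states
    `trans(s)`   : set of concrete successor states of s
    `gamma_prev` : concretization (as a set) of the previous abstract state
    `gamma_cur`  : concretization (as a set) of the current abstract state
    """
    # Deadend: reachable concrete states in the previous abstract state that
    # have no transition into the concretization of the current abstract state.
    deadend = {s for s in reachable & gamma_prev if not (trans(s) & gamma_cur)}
    # Bad: concrete states in the previous abstract state that can make the
    # transition into the concretization of the current abstract state.
    bad = {s for s in gamma_prev if trans(s) & gamma_cur}

    if not deadend:
        return "no divergence", deadend, bad
    if not bad:
        # Only possible with partial predicate abstraction: the fixpoint
        # approximation may have added a transition covering multiple steps.
        return "approximation", deadend, bad
    return "abstraction", deadend, bad   # deadend/bad feed Craig interpolation

# Toy usage: states are integers, the concrete transition is s -> s + 1.
print(classify_divergence({0, 1, 2}, lambda s: {s + 1}, {1, 2}, {5}))
```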
## Clean Air Program Renews Contract for Truck Replacement ##### 20 July 2005 The Southeast Los Angeles County Gateway Cities Clean Air Program has awarded TIAX a $712,500 contract for the upcoming year to oversee the replacement of older, heavy-duty trucks in the Los Angeles area with newer, lower-emission vehicles. TIAX has done this since 2002, and has coordinated the replacement of approximately 350 trucks with newer vehicles. Through the replacement of the older trucks, it is estimated that up to 4 tons of nitrogen oxides emissions and up to 1 ton of diesel soot emissions are reduced for each truck over its assumed remaining life of five years. In addition, the company will see that trucks purchased during this phase of the program are equipped with special retrofit devices that further reduce diesel exhaust emission levels, and global positioning systems that allow TIAX to track each truck's activities and gain a better understanding of where emissions are being reduced. The Gateway Cities region is the industrial core of Los Angeles County, and includes the Port of Long Beach, one of the busiest container ports in the United States. The Gateway Cities Council of Governments created the clean air program to provide financial incentives to help reduce air pollution in Southern California. The program is managed by the council in partnership with the Port of Long Beach, the California Air Resources Board and the US EPA. Heavy-duty trucks that serve ports are some of the oldest and dirtiest on the roads today, so they create serious pollution problems for the surrounding communities and their citizens. The Gateway Cities Clean Air Program has provided an effective solution to this problem that benefits both the residents of southeast Los Angeles County and the truck drivers who serve the area. We are extremely pleased to continue our work with this initiative. —Jon Leonard, project manager at TIAX The Fleet Modernization aspect of the program compensates owners of 1986 or older trucks when they buy a 1999 or newer used diesel truck that is more reliable, cleaner, and fuel efficient. An average grant is between $20,000 and $25,000, but will vary depending on how old the truck is and how many miles it has been driven in the past two years. The engines of the old trucks are then destroyed to ensure that they can no longer contribute to air pollution in the region. As an example, a typical used diesel truck costs about $35,000. Under the Clean Air Program, an owner could be reimbursed $25,000 of the purchase price, reducing the cost of the new truck to the grant recipient to only $10,000. To qualify for the Gateway Cities Clean Air Program, a truck owner must be able to demonstrate that his vehicle has been used commercially in the South Coast Air Basin for the past two years. This 6,600 square mile area includes all of Orange County and the non-desert portions of Los Angeles, Riverside, and San Bernardino counties. They also must agree to continue their work in the South Coast Air Basin for the next five years. The long-term goal of the Clean Air Program is to replace 3,000 existing heavy-duty vehicles, which represent approximately a third of the pre-1987 truck fleet in Los Angeles County. Funding comes from a collaborative of government agencies at the federal, state, and local levels. If I were a trucker I would be looking for a hybrid powertrain for my new tractor. GM is building a hybrid powertrain for buses but not for trucks. Why not???
18-wheelers use so much fuel each year and spend so much time in traffic jams it would make a hybrid pay for itself in a matter of months. Depends on the area really; around here truckers are doing 90 mph all the time. Also many trucks have to keep the engine running anyway to keep various systems onboard operational. Hybridisation involves more than just the powertrain. The compressor for the air conditioner could be driven electrically. Power steering could also be an electric system. Replacing the air brake system with an electric system would eliminate problems with leaking hose connections and corrosion from water condensing in the tanks. In the winter ice sometimes would plug up the lines and leave the brakes locked up. The one thing that no one talks about these days, especially the environmentalists, is how much more fuel the new "eco-friendly" diesel engines consume compared to "older, archaic polluting" engines. A new 2005 diesel engine, compared to a 2004 model, will use approximately 60% more fuel due to the new pollution requirements imposed by the E.P.A. Basically, to meet the new requirements the manufacturers have had to retard engine timing, which results in the guzzling of fuel. Could someone tell me how in the hell these new diesel engines, which consume 60% more fuel compared to pre-2005 units, are more environmentally and economically desirable? Azure Dynamics has produced a hybrid powertrain for trucks up to class 7. The Parallel Hybrid Powertrain is designed for use in vehicles weighing 18,000 - 33,000 lbs GVW. The vehicles in this class are typically used for shipping and receiving large quantities of goods. Here's a link to a photo of their truck: http://www.azuredynamics.com/pdf/2005%20Super%207%20brochure%20July%2005.pdf Here's a link to their website: http://www.azuredynamics.com/fleet_sales.htm First, I have to point out that the 60% number is way overblown. Second, on the contrary, there is quite a bit of discussion about the limits to improvements in fuel consumption to accommodate more stringent emissions regulations—and a great deal of effort (time and money) on the part of automakers to work out solutions. However, you still need to look at the overall LARGE improvement in fuel consumption that a diesel platform provides over a gasoline platform. Quick example: the New Beetle, Model year 2005. Gasoline model delivers 27 mpg. Diesel model delivers 41 mpg, an improvement of 52% over the gasoline. And here is the Azure Series 7 hybrid story from an earlier post. :-) The recent 60% increase in fuel consumption concerns heavy-duty truck diesel engines, which are typically run at speeds where 100% of horsepower output is realized. Automotive/light truck diesels typically run at about 25% maximum horsepower, hence they tend to practically have about 4 times the fuel economy per horsepower when compared to commercial truck engines. Modern commercial transport diesel vehicles realize MUCH less fuel economy compared to the diesel engines manufactured around 1950. For example, most modern 45 passenger, turbocharged American city buses only get around 1.6 m.p.g.; a 45 passenger G.M. bus circa 1950 equipped with a 100 H.P. 4-71 two cycle diesel and two speed hydraulic transmission will get around 10 m.p.g. A British G.M. Bedford Division circa 1965 equipped with a 110 H.P. four stroke and five speed gearbox can achieve 18 M.P.G. (this basic chassis is still manufactured today in India by Hindustan Bedford).
A 100 passenger 2005 model Ashok Leyland Titan Double Decker (essentially a forty-year-old British Leyland front-engine model) equipped with a big, 1 tonne 680 c.i.d., 135 H.P. motor can still get 9 m.p.g.; derated to 90 H.P. the same bus should easily realize 13 m.p.g. All of these older designs utilize engines not equipped with turbochargers; fit these vehicles with turbochargers and their fuel economy goes up about 30%! The fact is that modern production diesel vehicles consume more fuel per ton/mile than 40-year-old vehicles of similar use and capacity. If you doubt these figures, then read about some of the statistics posted by David S. Lawyer concerning this topic: http://www.lafn.org/~dave/trans/energy/fuel-eff-20th-3.html#auto_urban_vs_intercity Do the math and it's not hard to see how a forty-year-old bus design will go about 2.25 times farther per gallon of diesel fuel compared to modern E.P.A., "environmentally friendly" certified units. Most "experts" claim that the new hybrid buses get about 40% better fuel economy than the typical modern urban bus; however, this has never been realized. The primary problem with any vehicle system using batteries is that when new, batteries can be charged at about 80% efficiency, but once a few miles and charging cycles are realized, that efficiency drops to only 20%. In other words, the long term difference between hybrid and conventional fuel economy will only be about 1/4th of the difference seen when the vehicles are compared brand new! Plus, that hybrid bus has batteries that will cost tens of thousands of dollars to replace. In other words, a typical 45 passenger hybrid bus will at most get about 1.76 m.p.g. while the circa 1965 100 passenger Leyland Double Decker can get about 13 m.p.g. Ah, I see the point you're making. Just a couple of quick comments, and working in reverse: They are seeing about a 40% improvement relative to a baseline diesel fuel consumption of 2.3 mpg--and that was based on 100 buses in service for more than a year on different types of routes--more than enough time to winkle out any anomalies in data. Your performance would, of course, vary with the type of hybrid architecture chosen. As to your larger point--turning back the emissions clock to the conditions of 40 years ago isn't an acceptable solution for increasing fuel economy. Far too much has been learned in the last four decades about the health, environmental and associated economic impacts of an overburden of criteria pollutants to make that a reasonable approach. Fuel economy or emissions control? This requires a Deion answer: Both. Optimizing the one (fuel economy) at the expense of the other (emissions control) just won't fly. (At least, under the conditions we have now.) Let's say the newly manufactured, circa 1965 Leyland bus equipped with a turbocharger has an improved fuel economy of about 17 m.p.g. versus the hybrid bus that will probably get little better than 1.76 m.p.g. once its batteries go into inevitable efficiency decline. The 100 passenger Leyland thus gets 1700 passenger*(m.p.g.) and likewise the 45 passenger hybrid gets 79.2 passenger*(m.p.g.) Translated, the Leyland can carry 21 passengers per gallon of diesel burned versus the hybrid which carries only 1 passenger for the same amount of consumed diesel. Thus, per passenger the Leyland uses only 4.76% of the fuel that this wonderful hybrid consumes. A 95.24% reduction of fuel per passenger! No modern E.P.A.
certified diesel or diesel/hybrid bus technology will ever achieve such savings, and this old non-E.P.A. certified bus most certainly will put out far fewer emissions than anything modern technology has to offer simply because it uses over 95% less fuel per passenger. The United States will now never achieve petroleum self-sufficiency due to these energy-eating emission regulations. Approximately 50% of the oil we use is imported. If private/commuter vehicles were all converted to diesel, we could see at least a 50% reduction of consumed fuel. For commercial vehicles, a similar use of old, efficient technology would see at least, if not more than, another 50% reduction of consumed fuel. These reductions in themselves would nearly eliminate the oil imports. Doesn't half the oil burned translate into a 50% reduction of petroleum-based greenhouse gases that all of these government/university/environmentalist windbags are chiming about? Whether one likes it or not, these environmental regulations will go the way of the dinosaurs within the next ten years or so. America has a big problem known as fiat paper money whose value is going down the proverbial toilet quite rapidly. This country officially went to an economy based upon funny money in August of 1971; next month our fiat money will be 38 years old. In the entire history of this world (fiat money was first used by the Mongol Empire in China around 1250 A.D.), NO fiat currency has lasted more than 41 years until total financial collapse occurred. Right now, total dollar-denominated debt is compounding at 40% YEARLY, a rate even worse than that seen prior to the Depression of 1929 (as of this June we're talking about $1 quadrillion, or in other words $1,000,000 billion, and next June it will be 40% more!). Needless to say, petroleum fuel bought with soon-to-be-worthless dollars will eventually be very expensive and in scarce supply, and these E.P.A. regs will most certainly encourage our transportation infrastructure into some sort of collapse.
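The passenger-miles-per-gallon comparison in the long bus comment above can be checked with a few lines of arithmetic. This is only a sanity check of the commenter's own figures (100 passengers at 17 m.p.g. versus 45 passengers at 1.76 m.p.g.); the figures themselves are claims, not verified data.

```python
# Sanity check of the commenter's passenger-miles-per-gallon comparison.
# All capacity and mpg figures below are the commenter's claims, not verified data.
leyland_passengers, leyland_mpg = 100, 17.0   # claimed turbocharged 1965 Leyland
hybrid_passengers, hybrid_mpg = 45, 1.76      # claimed hybrid bus after battery decline

leyland_pmpg = leyland_passengers * leyland_mpg   # 1700 passenger-miles per gallon
hybrid_pmpg = hybrid_passengers * hybrid_mpg      # 79.2 passenger-miles per gallon

ratio = leyland_pmpg / hybrid_pmpg
print(f"Leyland moves {ratio:.1f}x the passenger-miles per gallon")         # about 21.5x
print(f"Fuel per passenger-mile: {100 / ratio:.2f}% of the hybrid's fuel")   # about 4.7%
```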
# Multivariate analysis reveals shared genetic architecture of brain morphology and human behavior ## Abstract Human variation in brain morphology and behavior are related and highly heritable. Yet, it is largely unknown to what extent specific features of brain morphology and behavior are genetically related. Here, we introduce a computationally efficient approach for multivariate genomic-relatedness-based restricted maximum likelihood (MGREML) to estimate the genetic correlation between a large number of phenotypes simultaneously. Using individual-level data (N = 20,190) from the UK Biobank, we provide estimates of the heritability of gray-matter volume in 74 regions of interest (ROIs) in the brain and we map genetic correlations between these ROIs and health-relevant behavioral outcomes, including intelligence. We find four genetically distinct clusters in the brain that are aligned with standard anatomical subdivision in neuroscience. Behavioral traits have distinct genetic correlations with brain morphology which suggests trait-specific relevance of ROIs. These empirical results illustrate how MGREML can be used to estimate internally consistent and high-dimensional genetic correlation matrices in large datasets. ## Introduction Global and regional gray matter volumes are known to be linked to differences in human behavior and mental health1. For example, reduced gray matter density has been implicated in a wide range of neurodegenerative diseases and mental illnesses2,3,4,5. In addition, differences in gray matter volume have been related to cognitive and behavioral phenotypic traits such as fluid intelligence and personality, although results have not always been replicable6,7. Variation in brain morphology can be measured noninvasively using magnetic resonance imaging (MRI). Large-scale data collection efforts, such as the UK Biobank8, that include both the MRI scans and genetic data have enabled recent studies to discover the genetic architecture of human variation in brain morphology and to explore the genetic correlations of brain morphology with behavior and health9,10,11,12,13. These studies have demonstrated that all features of brain morphology are genetically highly complex traits and that their heritable component is mostly due to the combined influence of many common genetic variants, each with a small effect. A corollary of this insight is that even the currently largest possible genome-wide association studies (GWASs) were only able to identify a small portion of the genetic variants underlying the heritable components of brain morphology: The vast majority of their heritability remains missing9,10,11,12,13,14. As a consequence, the genetic correlations of regional brain volumes with each other, as well as with human behavior and health have remained largely elusive. However, such estimates could advance our understanding of the genetic architecture of the brain, for example, regarding its structure and plasticity. Similarly, a strong genetic overlap of specific features of brain morphology with mental health would provide clues about the neural mechanisms behind the genesis of disease15,16,17.
We developed multivariate genomic-relatedness-based restricted maximum likelihood (MGREML) to provide a comprehensive map of the genetic architecture of brain morphology. MGREML overcomes several limitations of existing approaches to estimate heritability and genetic correlations from molecular genetic (individual-level) data. Contrary to existing pairwise bivariate approaches, MGREML guarantees internally consistent (i.e., at least positive semidefinite) genetic correlation matrices and it yields standard errors that correctly reflect the multivariate structure of the data. The software implementation of MGREML is computationally substantially more efficient than both the traditional bivariate genomic-relatedness-based restricted maximum likelihood (GREML)18,19 and comparable multivariate approaches20,21,22,23,24. Moreover, we show that MGREML allows for stronger statistical inference than methods that are based on GWAS summary statistics, such as bivariate linkage-disequilibrium (LD) score regression (LDSC)25,26. In short, MGREML yields precise and internally consistent estimates of genetic correlations across a large number of traits when existing approaches applied to the same data are either less precise or computationally unfeasible. We leverage the advantages of MGREML by analyzing brain morphology based on MRI-derived gray matter volumes in 74 regions of interest (ROIs). We also estimate the genetic correlations of these ROIs with global measures of brain volume and eight human behavioral traits that have well-known associations with mental and physical health. The anthropometric measures height and body-mass index are also analyzed, because of their relationships with brain size6,13. Our analyses are based on data from the UK Biobank brain imaging study27. ## Results ### Estimating genetic correlations Several methods can be used to estimate heritabilities and genetic correlations from molecular genetic data on single-nucleotide polymorphisms (SNPs). One class of these methods is based on GWAS summary statistics25,26,28. Another class of methods is based on individual-level data, such as GREML and variations of this approach22,23,24,29,30,31,32,33. Methods based on GWAS summary statistics such as LDSC25,26 and variants thereof34 can leverage the ever-increasing sample sizes of GWAS meta- or mega-analyses35. These methods are computationally efficient and benefit from the fact that GWAS summary statistics are often publicly shared36,37. However, the computationally more intensive methods based on individual-level data, such as GREML are statistically more powerful38. That is, the resulting estimates are more precise as reflected in the size of the standard errors. Due to the high costs of MRI brain scans, GWAS meta-analysis samples for brain imaging genetics are still relatively small compared to GWAS meta-analysis samples for traits that can be measured at low cost (e.g., height39 and educational attainment40). The UK Biobank brain imaging study (Methods) is currently by far the largest available sample that includes both MRI scans and genetic data, often surpassing the sample size of most previous studies in neuroscience by an order of magnitude or more9,10,13. Therefore, this dataset is particularly suitable for our individual-level data analysis. Irrespective of whether one uses GWAS summary statistics or individual-level data, the use of bivariate methods poses another challenge when computing genetic correlation across more than two traits. 
In this case, the correlation estimates from bivariate analyses of all pairwise combinations of traits are often simply stacked, to form a ‘grand’ correlation matrix25,26,41. However, this ‘pairwise bivariate’ approach can result in genetic correlation matrices that are not internally consistent (i.e., they describe interrelationships across traits that cannot exist simultaneously). In mathematical terms, the resulting matrices can be indefinite. Although the correlation between two traits can vary between −1 and +1, their correlations with a third trait are naturally bounded. For a set of three traits, the solution is positive (semi)-definite when the correlations satisfy the following condition: $${r}_{12}^{2}+{r}_{13}^{2}+{r}_{23}^{2}-2{r}_{12}{r}_{13}{r}_{23}\le 1$$, where rst denotes the correlation between traits s and t. This condition is violated, for instance, when pairwise correlations are estimated to be r12 = 0.9, r13 = 0.9, and r23 = 0.2. In fact, the genetic correlation matrix in the well-known atlas of genetic correlations is not positive semidefinite25. A second consequence of the pairwise bivariate approach is that the standard errors of the resulting genetic correlation matrix do not adequately reflect the multivariate structure of the data. ### MGREML Our multivariate extension of GREML estimation18,32 guarantees the internal consistency of the estimated genetic correlation matrix by adopting an appropriate factor model for the variance matrices (Supplementary Note 1). An important benefit of this approach is that estimates are always valid, in the sense that the likelihood is defined, even within the optimization procedure. Joint estimation also ensures that the standard errors of the estimated genetic correlations reflect the multivariate structure of the data correctly. Therefore, methods such as genomic structural equation modelling (genomic SEM)42 that use multivariate genetic correlation matrices as input may benefit from using MGREML results, by avoiding the potentially distorting pre-processing step of bending43 an indefinite genetic correlation matrix. To deal with the computational burden and to make MGREML applicable to large data sets in terms of individuals and traits, we derived efficient expressions for the likelihood function and developed a rapid optimization algorithm (Supplementary Note 1). In Supplementary Note 3, we show that MGREML is computationally faster than pairwise bivariate GREML. Moreover, comparisons with ASReml20, BOLT-REML23, GEMMA22, MTG224, and WOMBAT21 highlight the computational gains afforded by MGREML. That is, none of these software packages is able to deal with the dimensionality of our empirical application. Finally, a comparison of results obtained with MGREML with results obtained using LDSC shows that standard errors obtained with MGREML are 32.7–50.6% smaller, illustrating the substantial gains in statistical power afforded by MGREML. ### Analysis of brain morphology We used MGREML to analyze the heritability of and genetic correlations across 86 traits in 20,190 unrelated ‘white British’ individuals from the UK Biobank (Fig. 1, Methods). The subset of 76 brain morphology traits includes total brain volume (gray and white matter), total gray matter volume, and gray matter volumes in 74 regions of interest (ROIs) in the brain. Relative volumes were obtained by dividing ROI gray matter volumes by total gray matter volume. The full set of heritability estimates is available in Supplementary Data 1. 
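As a concrete illustration of the internal-consistency issue described above, the following minimal numpy sketch (illustrative only) verifies that pairwise correlations of r12 = 0.9, r13 = 0.9, and r23 = 0.2, although each is individually valid, cannot come from a single positive semidefinite correlation matrix.

```python
import numpy as np

# Pairwise correlations that are each valid in isolation but jointly impossible:
# r12^2 + r13^2 + r23^2 - 2*r12*r13*r23 = 1.336 > 1.
R = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.2],
              [0.9, 0.2, 1.0]])

eigenvalues = np.linalg.eigvalsh(R)
print(eigenvalues)                     # the smallest eigenvalue is negative
print(bool(np.all(eigenvalues >= 0)))  # False: the stacked matrix is indefinite
```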
Figure 2a, b show that SNP-based heritability ($${h}_{{{{{{\rm{SNPs}}}}}}}^{2}$$) (i.e., the proportion of phenotypic variance which can be explained by autosomal SNPs) is on average highest in the insula, and in the cerebellar and subcortical structures of the brain (average $${h}_{{{{{{\rm{SNPs}}}}}}}^{2}$$ is 33.1, 32.4, and 29.5%, respectively, with corresponding standard errors of 0.019 for all) and lowest in the parietal, frontal, and temporal lobes of the cortex (average $${h}_{{{{{{\rm{SNPs}}}}}}}^{2}$$ is 21.2, 21.4, and 25.2%, respectively, with corresponding standard errors of 0.019 for all). Grouping of the $${h}_{{{{{{\rm{SNPs}}}}}}}^{2}$$ estimates in networks of intrinsic functional connectivity44 reveals that ROIs in the heteromodal cortex (frontoparietal, dorsal attention) are less heritable than primary (visual, somatomotor), subcortical and cerebellar regions (Fig. 3a). The full set of estimated genetic correlations (rg) is available in Supplementary Data 1. Using spatial mapping, Fig. 2c visualizes the estimated genetic correlations across the relative volumes of the cortical and subcortical brain areas. The largest positive genetic correlations were found between the insular and frontal regions (average rg = 0.17) and between the cerebellar and subcortical areas (average rg = 0.15). The largest negative correlations were present between the cerebellar and insular regions (average rg = −0.18) and between the cerebellar and frontal regions (average rg = −0.15) (Fig. 2d). Figure 3b shows that the genetic correlations are particularly strong within intrinsic connectivity networks, especially the visual, somatomotor, subcortical, and cerebellum networks, possibly because of lower experience-dependent plasticity in these brain regions compared to heteromodal and associative areas45. Using Ward’s method for hierarchical clustering46, we identify four clusters within the estimated genetic correlations for the 74 ROIs in the brain (Fig. 4). The first cluster (18 ROIs) includes most of the frontal cortical areas of the brain, the second (18 ROIs) the cerebellar cortex, the third (18 ROIs) subcortical structures including the brain stem, and the last cluster (20 ROIs) contains a mixture of temporal and occipital brain areas. We also used MGREML to estimate the genetic correlations between brain morphology and eight human behavioral traits that are known to be related to health and that have previously been studied in large-scale GWASs, as well as the anthropometric measures height and body-mass index. Statistically significant correlations are highlighted in Supplementary Data 1 (Panel c). Spatial maps of the genetic correlation between brain morphology and the behavioral traits are shown in Fig. 5. For subjective well-being, we find the strongest genetic correlation with the Middle Frontal Gyrus (Fig. 5a, rg = 0.21, corresponding standard error 0.088), a region that has been linked before to emotion regulation47. The genetic correlations of the ROIs with neuroticism (Fig. 5b) and depression (Fig. 5c) are generally weak and insignificant, potentially reflecting the coarseness of these phenotypic measures in the UK Biobank data. The strongest genetic correlation with the number of alcoholic drinks consumed per week is with the Lateral Occipital Cortex, superior and inferior divisions (Fig. 5d, rg = 0.23 and rg = 0.18, respectively, corresponding standard errors 0.106 and 0.092). 
Although the phenotypic correlations between the analyzed ROIs and alcohol consumption are generally negative48, these particular brain regions are among those implicated in the affective response to drug cues based on the perception-valuation-action model49. For educational attainment and intelligence, the strongest correlations are found in the frontal lobe region (rg = −0.13, corresponding standard error 0.065, between educational attainment and the Superior Frontal Gyrus, and rg = 0.16, corresponding standard error 0.056, between intelligence and the Frontal Medial Cortex). Figure 5e, f show that the genetic correlation structures estimated for educational attainment and intelligence are largely similar, in line with earlier studies showing the strong genetic overlap between these two traits50. Genetic correlations of the ROIs with visual memory (Fig. 5g) are insignificant, and the strongest genetic correlation of reaction time is with the Middle Temporal Gyrus, temporooccipital part (Fig. 5h, rg = 0.20, corresponding standard error 0.085). Activity within the middle temporal gyrus has been linked before with reaction time51. Earlier studies suggest that the size of the brain is positively associated with traits such as intelligence6. When analyzing absolute brain volumes of the ROIs rather than relative brain volumes (i.e., relative to total gray matter volume in the brain), we indeed observe robust positive relationships between the absolute volumes of the ROIs on the one hand and height and intelligence on the other hand (Supplementary Data 3). In the set of estimated correlations across the ROIs, the main differences with the results obtained using relative brain volumes (Supplementary Data 1) are that the genetic correlations within the cerebellum clusters are slightly smaller and that the positive correlations within the subcortical structures are somewhat larger. ## Discussion We designed MGREML to estimate high-dimensional genetic correlation matrices from large-scale individual-level genetic data in a computationally efficient manner while guaranteeing the internal consistency of the estimated genetic correlation matrix. For comparison, we used pairwise bivariate GREML to obtain a genetic correlation matrix using the exact same set of individuals (N = 20,190) and traits (T = 86) as in our main analysis. While the resulting estimates are fairly similar (Supplementary Data 2), the resulting genetic correlation matrix is indefinite (13 out of the 86 eigenvalues are negative). Such an indefinite matrix poses a challenge for multivariate methods, such as Genomic SEM42, that require a genetic correlation matrix as starting point for a follow-up analysis. Using MGREML results avoids this challenge, as MGREML by design guarantees the estimation of a positive (semi)-definite genetic correlation matrix. Moreover, we conducted GWASs and bivariate LDSC26 analyses to obtain a genetic correlation matrix using the pairwise bivariate approach for the same empirical application (Supplementary Data 5). We find that the standard errors of the $${h}_{{{{{{\rm{SNPs}}}}}}}^{2}$$ estimates obtained using MGREML are on average 32.7% smaller than those obtained using LDSC. The standard errors of the genetic correlations obtained using MGREML are on average 50.6% smaller compared to those obtained using LDSC, illustrating the advantages of MGREML in terms of statistical power. 
More specifically, when applying a two-sided significance test to each estimated genetic correlation (null hypothesis: rg = 0; alternative hypothesis: $${r}_{g}\ \ne\ 0$$), MGREML yields 1519 significant correlations at the 5% level, whereas the pairwise bivariate LDSC approach yields only 954 significant correlations. Thus, the gain in statistical efficiency is larger than the efficiency gained by HDL34, a recently developed variation of bivariate LDSC that accounts for autocorrelation of summary statistics across the genome as a result of LD. Importantly, the genetic correlation matrix obtained using bivariate LDSC is again not positive semidefinite and thus the estimated genetic correlations across traits are not internally consistent. Our main results tacitly assume a homoscedastic per-SNP heritability, in line with GCTA19. This GCTA model approach may be suboptimal under some circumstances, including genetic drift and various forms of natural selection52,53. We therefore repeated the estimation of the genetic correlation matrix using the LDAK-Thin model30,31 (Supplementary Data 6) and the SumHer54 approach (Supplementary Data 7) that both assume heteroscedastic random SNP effects. Importantly, results based on the LDAK-Thin model can also be readily obtained using the MGREML software tool, because the choice of the heritability model only affects the construction of the genomic-relatedness matrix (GRM). Comparison of results shows that the heritability estimates are on average fairly similar across methods (Supplementary Data 8), and illustrates again that individual-level data methods (the GCTA model and LDAK-Thin model in MGREML) are statistically more efficient than summary statistics methods (LDSC and SumHer). In our empirical application, we find that the fit of MGREML in terms of the log-likelihood is slightly better when assuming the GCTA model than when assuming the LDAK-Thin model (Supplementary Note 3). The similarity of the estimates across different heritability models may be explained by differential selection across phenotypes, and balancing out of underestimations and overestimations of contributions to $${h}_{{{{{{\rm{SNPs}}}}}}}^{2}$$ in low- and high-LD regions31,52. Our results show marked variation in the estimated heritability across cortical gray matter volumes, with on average higher heritability estimates in subcortical and cerebellar areas than in cortical areas (Fig. 2b). Grouping of $${h}_{{{{{{\rm{SNPs}}}}}}}^{2}$$ estimates by networks of intrinsic functional connectivity suggests that heritability is particularly low in brain areas with presumed stronger experience-dependent plasticity (Fig. 3a). These results suggest that neocortical areas of the brain are under weaker genetic control perhaps reflecting greater environmentally determined plasticity45,55. Furthermore, the estimated genetic correlations suggest the presence of four genetically distinct clusters in the brain (Fig. 4). These clusters largely correspond with the conventional subdivision of the brain in different lobes based on anatomical borders56. The estimated genetic correlations also provide evidence for a shared genetic architecture of traits between which an association has been observed before in phenotypic studies such as between intelligence and educational attainment50. 
In addition, genetic correlations were identified between alcohol consumption and cerebellar volume, and between subjective well-being and the temporooccipital part of the Middle Temporal Gyrus (Supplementary Data 1). We caution that these relationships may be somewhat different in the general population due to the nonrandom selection of the population into the UK Biobank sample57 and potential gene–environment correlations58. To verify that our results are not merely a reflection of the physical proximity of brain regions, we regressed the estimated genetic correlations on the physical distance between the different brain regions. Although this correction procedure decreased the estimated genetic correlations by 17.4%, the main patterns are still observed. For the same reason, we recreated the dendogram (Fig. 3) after aggregating the results for subregions into an average for the larger region because the optimization procedure of MGREML puts equal weight on each trait and does not account for physical proximity. The results of this robustness check show that the four identified clusters do not merely reflect the number of analyzed measures for a specific brain region. Estimates of heritability increase our understanding of the relative impact of genetic and environmental variation on traits14,32, and estimates of genetic correlation lead to a better understanding of the shared biological pathways between traits59. Joint analysis of multiple traits may also improve the predictive power of genetic models60. MGREML has been designed to estimate both SNP-based heritability and genetic correlations in a computationally efficient and internally consistent manner using individual-level genetic data. The efficiency of its optimization algorithm makes it possible to use MGREML to estimate high-dimensional genetic correlation matrices in large datasets, such as the UK Biobank. ## Methods ### Sample and data Participants of this study were sourced from UK Biobank. UK Biobank is a prospective cohort study in the UK that collects physical, health, and cognitive measures, and biological samples (including genotype data) in about 500,000 individuals8. In 2016, UK Biobank started to collect brain imaging data with the aim to scan 100,000 subjects by 202227,61. UK Biobank has received ethical approval from the National Health Service North West Centre for Research Ethics Committee (11/NW/0382) and has obtained informed consent from its participants. We selected the 43,691 individuals with available genotype data from the UK Biobank brain imaging study who self-identified as ‘white British’ and with similar genetic ancestry based on a principal component analysis. After stringent quality control (Supplementary Note 4), we estimated pairwise genetic relationships using 1,384,830 autosomal common (Minor Allele Frequency ≥ 0.01) SNPs and retained 37,392 individuals whose pairwise relationship was estimated to be less than 0.025 (approximately corresponding to second- or third-degree cousins or more distant shared ancestry). From these unrelated individuals, we retained the 20,190 individuals (9747 males and 10,433 females) with complete information on all 86 traits in our analyses. The age of these individuals ranges from 40 to 72 years, and the average age is 54.79 years. A description of all the variables used in the empirical analyses is available in Supplementary Note 2. Mapping of each cortical region to a network of intrinsic functional connectivity (Fig. 
3) is based on the assignment of each brain parcel in the Harvard-Oxford atlas62 to the intrinsic functional connectivity network44 with the highest overlap. These networks were earlier identified using functional magnetic resonance imaging44. ### Statistical framework In a genome-wide association study (GWAS) of quantitative trait y, the effect of single-nucleotide polymorphism (SNP) m on y is modelled as: $$y_j = g_{jm}^{*}\alpha_{m}^{*} + \mathbf{x}_j^{\prime}\boldsymbol{\beta} + u_j,$$ (1) where yj is the phenotype of individual j and $$g_{jm}^{*}$$ is the raw genotype (i.e., a value equal to zero, one, or two, indicating the number of copies of the coded allele) for the same individual and the given SNP. In this model, $$\alpha_{m}^{*}$$ is the per-allele effect of SNP m on y, $$\mathbf{x}_j^{\prime}$$ is a 1×k vector of control variables with k×1 vector of effects β, and uj is the error term. If y has mean zero and/or an intercept is included in the set of control variables, we can assume, without loss of generality, that SNPs are standardized in accordance with their distribution under Hardy–Weinberg equilibrium. That is, we define $$g_{jm} = (g_{jm}^{*} - 2f_m)[2f_m(1-f_m)]^{-0.5}$$, where gjm denotes the standardized genotype for individual j and SNP m, and where fm denotes the empirical allele frequency of the same SNP. Now, $$g_{jm}^{*}\alpha_{m}^{*}$$ in Eq. (1) can be replaced by gjmαm, where $$\alpha_m = \alpha_m^{*}[2f_m(1-f_m)]^{0.5}$$ is the effect of standardized SNP m. In addition, we can consider the contribution of all SNPs jointly using the following model: $$y_j = \mathbf{g}_j^{\prime}\boldsymbol{\alpha} + \mathbf{x}_j^{\prime}\boldsymbol{\beta} + \varepsilon_j, \quad \mathrm{where}\ \mathbf{g}_j^{\prime}\boldsymbol{\alpha} = g_{j1}\alpha_1 + \ldots + g_{jM}\alpha_M.$$ (2) Here, $$\mathbf{g}_j^{\prime}$$ is the 1×M vector of standardized genotypes for individual j, α is the M×1 vector of effects, and εj is the error term in this model. For a sample of N individuals (Fig. 1, Panel a), Eq. (2) can be written in matrix notation as: $$\mathbf{y} = \mathbf{G}\boldsymbol{\alpha} + \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},$$ (3) where G is the N×M matrix of standardized genotypes, X is the N×k matrix of control variables, and ε is the N×1 vector of errors. In genomic-relatedness-based restricted maximum likelihood (GREML)32 as implemented in GCTA19, β is assumed to be fixed and SNP effects and errors are assumed to be random, viz., $$\boldsymbol{\alpha} \sim N(\mathbf{0}, \mathbf{I}_M\sigma_{\alpha}^{2})$$ and $$\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \mathbf{I}_N\sigma_{E}^{2})$$, where $$\sigma_{\alpha}^{2}$$ is the variance in SNP effects and $$\sigma_{E}^{2}$$ the variance in errors. Now, Gα is the total genetic contribution, which follows a $$N(\mathbf{0}, \mathbf{G}\mathbf{G}^{\prime}\sigma_{\alpha}^{2})$$ distribution.
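As a small illustration of how a GRM of this kind can be formed from standardized genotypes, here is a minimal numpy sketch using simulated data. It mirrors the standardization and the GCTA-style construction A = GG′/M described in the text, but it is only an illustration, not the implementation used by GCTA or MGREML.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 2000                       # individuals, SNPs (toy sizes)

# Simulate raw genotypes: 0, 1, or 2 copies of the coded allele per SNP.
freqs = rng.uniform(0.05, 0.5, size=M)
G_raw = rng.binomial(2, freqs, size=(N, M)).astype(float)

# Standardize each SNP as g = (g* - 2f) / sqrt(2f(1 - f)),
# using the empirical allele frequency f, as in the text.
f_hat = G_raw.mean(axis=0) / 2.0
G = (G_raw - 2.0 * f_hat) / np.sqrt(2.0 * f_hat * (1.0 - f_hat))

# GCTA-style GRM: A = GG' / M (homoscedastic per-SNP contributions).
A = G @ G.T / M
print(A.shape, round(A.diagonal().mean(), 3))   # N x N, diagonal roughly 1
```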
Under this model, the phenotypic variance matrix across individuals can be decomposed as:

$$\mathrm{Var}(\mathbf{y}) = \mathbf{A}\sigma^2_G + \mathbf{I}_N\sigma^2_E, \qquad (4)$$

where $\mathbf{A} = M^{-1}\mathbf{G}\mathbf{G}'$ is the genomic-relatedness matrix (GRM), capturing genetic similarity between individuals based on all SNPs under consideration (Fig. 1, Panel b), and $\sigma^2_G = M\sigma^2_\alpha$ is the total contribution of additive, linear effects of SNPs to phenotypic variance. The SNP-based heritability $h^2_{\mathrm{SNPs}}$ of $y$ is then defined as:

$$h^2_{\mathrm{SNPs}} = \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_E}. \qquad (5)$$

Importantly, $\boldsymbol{\alpha} \sim N(\mathbf{0}, \mathbf{I}_M\sigma^2_\alpha)$ is equivalent to assuming all SNPs explain the same proportion of phenotypic variance. As a result, this assumption about SNP effects tacitly imposes a strong relation between allele frequencies and effect sizes, where the per-allele effects of rare variants are, on average, considerably larger than the per-allele effects of more common variants. Moreover, this assumption does not differentiate between regions of low and high linkage disequilibrium (LD). Therefore, other, perhaps more realistic, assumptions about the distribution of SNP effects have been proposed and utilized30,31. These alternatives typically only affect the way in which GRM $\mathbf{A}$ in Eq. (4) is constructed. More specifically, when heteroscedastic SNP effects (i.e., $\boldsymbol{\alpha} \sim N(\mathbf{0}, \mathbf{D}\sigma^2_\alpha)$) are assumed (with $\mathbf{D}$ a diagonal matrix reflecting, e.g., the strength of the relationship between allele frequencies and effect sizes), it follows that $\mathbf{G}\boldsymbol{\alpha} = \mathbf{G}\mathbf{D}^{0.5}\boldsymbol{\alpha}^{*}$, where $\boldsymbol{\alpha}^{*} \sim N(\mathbf{0}, \mathbf{I}_M\sigma^2_\alpha)$. In this case, by defining $\mathbf{A} = d^{-1}\mathbf{G}\mathbf{D}\mathbf{G}'$, with $d$ being the sum of the diagonal elements of $\mathbf{D}$, Eqs. (4) and (5) still apply. As such, our model also lends itself well to application with a GRM calculated using alternatives to GCTA19, such as LDAK31.

Irrespective of the precise definition of $\mathbf{A}$, we can write the model in Eq. (3) as:

$$\mathbf{y} \sim N(\mathbf{X}\boldsymbol{\beta},\; \sigma^2_G\mathbf{A} + \sigma^2_E\mathbf{I}_N). \qquad (6)$$

For two quantitative traits, observed in the same set of $N$ individuals, this model can be generalized to the following bivariate model18:

$$\begin{pmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \end{pmatrix} \sim N\left( \begin{pmatrix} \mathbf{X}_1 & \mathbf{0} \\ \mathbf{0} & \mathbf{X}_2 \end{pmatrix} \begin{pmatrix} \boldsymbol{\beta}_1 \\ \boldsymbol{\beta}_2 \end{pmatrix},\; \begin{pmatrix} \sigma_{G_{11}}\mathbf{A} & \sigma_{G_{12}}\mathbf{A} \\ \sigma_{G_{12}}\mathbf{A} & \sigma_{G_{22}}\mathbf{A} \end{pmatrix} + \begin{pmatrix} \sigma_{E_{11}}\mathbf{I}_N & \sigma_{E_{12}}\mathbf{I}_N \\ \sigma_{E_{12}}\mathbf{I}_N & \sigma_{E_{22}}\mathbf{I}_N \end{pmatrix} \right), \qquad (7)$$

where $\mathbf{X}_1$ (resp. $\mathbf{X}_2$) is the $N \times k_1$ ($N \times k_2$) matrix of control variables for trait $\mathbf{y}_1$ ($\mathbf{y}_2$) with fixed effects $\boldsymbol{\beta}_1$ ($\boldsymbol{\beta}_2$), $\sigma_{G_{st}}$ is the genetic covariance and $\sigma_{E_{st}}$ the environmental covariance between traits $s$ and $t$, for $s = 1, 2$ and $t = 1, 2$.
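To illustrate Eqs. (4) and (5) concretely, the sketch below (again a hypothetical illustration, not MGREML code; the function names and toy inputs are invented) constructs the unweighted GRM $\mathbf{A} = M^{-1}\mathbf{G}\mathbf{G}'$, its weighted variant $\mathbf{A} = d^{-1}\mathbf{G}\mathbf{D}\mathbf{G}'$, and the SNP-based heritability implied by given variance components.

```python
# Hedged sketch: GRM construction (Eq. 4) and SNP-based heritability (Eq. 5).
import numpy as np

def grm(G):
    """Unweighted GRM: A = G G' / M, with M the number of SNPs (columns of G)."""
    return G @ G.T / G.shape[1]

def grm_weighted(G, w):
    """Heteroscedastic variant: A = G D G' / d, with D = diag(w) and d = sum(w)."""
    return (G * w) @ G.T / w.sum()

def h2_snps(sigma2_G, sigma2_E):
    """SNP-based heritability: sigma2_G / (sigma2_G + sigma2_E)."""
    return sigma2_G / (sigma2_G + sigma2_E)

# Stand-in for an N x M standardized genotype matrix
rng = np.random.default_rng(1)
G = rng.standard_normal((100, 50))
A = grm(G)                                          # N x N genomic-relatedness matrix
A_w = grm_weighted(G, rng.uniform(0.5, 1.5, size=50))
print(A.shape, round(h2_snps(0.3, 0.7), 2))         # (100, 100) 0.3
```

With all diagonal weights equal to one, grm_weighted reduces to grm; alternative weighting schemes (e.g., LDAK-style weights) only change how $\mathbf{A}$ is built, exactly as described above.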
The Kronecker product (denoted by $\otimes$) can be used to extend the model in Eq. (7) to a multivariate model for $T$ different traits (i.e., $\mathbf{y}_t$ for $t = 1, \ldots, T$), as follows60,63:

$$\begin{pmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \\ \vdots \\ \mathbf{y}_T \end{pmatrix} \sim N\left( \begin{pmatrix} \mathbf{X}_1 & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \ddots & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{X}_T \end{pmatrix} \begin{pmatrix} \boldsymbol{\beta}_1 \\ \vdots \\ \boldsymbol{\beta}_T \end{pmatrix},\; \mathbf{V}_G \otimes \mathbf{A} + \mathbf{V}_E \otimes \mathbf{I}_N \right), \qquad (8)$$

where

$$\mathbf{V}_G = \begin{pmatrix} \sigma_{G_{11}} & \ldots & \sigma_{G_{1T}} \\ \vdots & \ddots & \vdots \\ \sigma_{G_{1T}} & \ldots & \sigma_{G_{TT}} \end{pmatrix} \quad \text{and} \quad \mathbf{V}_E = \begin{pmatrix} \sigma_{E_{11}} & \ldots & \sigma_{E_{1T}} \\ \vdots & \ddots & \vdots \\ \sigma_{E_{1T}} & \ldots & \sigma_{E_{TT}} \end{pmatrix}. \qquad (9)$$

In this multivariate model, the SNP-based heritability of trait $t$, denoted by $h^2_{\mathrm{SNPs}}(t)$, and the genetic correlation between traits $s$ and $t$ (Fig. 1, Panel c), denoted by $r_g(s,t)$, are defined as:

$$h^2_{\mathrm{SNPs}}(t) = \frac{\sigma_{G_{tt}}}{\sigma_{G_{tt}} + \sigma_{E_{tt}}} \quad \text{and} \quad r_g(s,t) = \frac{\sigma_{G_{st}}}{\sqrt{\sigma_{G_{tt}}\sigma_{G_{ss}}}}, \qquad (10)$$

for $s = 1, \ldots, T$ and $t = 1, \ldots, T$.

### Optimization procedure

To estimate the genetic and environmental covariance matrices $\mathbf{V}_G$ and $\mathbf{V}_E$ in Eqs. (8) and (9), we use restricted maximum likelihood (REML) estimation. To maximize the likelihood function, we use a quasi-Newton method; more specifically, a Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm64. Supplementary Note 1 provides highly efficient expressions for the log-likelihood and gradient, which are needed in the optimization algorithm. These expressions make it possible to estimate the multivariate model with a time complexity that scales linearly with the number of observations and quadratically with the number of traits. The optimization procedure guarantees that the estimated matrices $\mathbf{V}_G$ and $\mathbf{V}_E$ are positive (semi-)definite by imposing an underlying factor model for both matrices. After optimization, standard errors can be calculated with a time complexity that scales linearly with the number of observations and quadratically with the number of parameters in the model (which in turn scales quadratically with the number of traits). This optimization procedure is fully incorporated in MGREML, a command-line tool written in Python 3. We recommend using the GCTA-GREML power calculator65 for ex-ante power calculations, because the accuracy of estimates from MGREML and pairwise bivariate GREML is fairly similar (Supplementary Data 8).

### Statistics and reproducibility

The empirical results in this study have been obtained using the command-line tool MGREML.
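The following sketch (a hypothetical, naive illustration rather than MGREML's efficient algorithm) shows how the Kronecker-structured covariance of Eq. (8) is assembled and how the quantities in Eq. (10) follow from given $\mathbf{V}_G$ and $\mathbf{V}_E$; a direct evaluation like this scales as $O((NT)^3)$, which is exactly what the efficient expressions in Supplementary Note 1 avoid.

```python
# Hedged sketch: Kronecker-structured multivariate covariance (Eq. 8) and
# the heritability / genetic-correlation definitions of Eq. (10).
import numpy as np

def multivariate_covariance(V_G, V_E, A):
    """V = V_G (x) A + V_E (x) I_N, for T traits measured on N individuals."""
    N = A.shape[0]
    return np.kron(V_G, A) + np.kron(V_E, np.eye(N))

def h2_and_rg(V_G, V_E):
    """Per-trait SNP heritabilities and the genetic-correlation matrix from V_G, V_E."""
    h2 = np.diag(V_G) / (np.diag(V_G) + np.diag(V_E))
    sd_g = np.sqrt(np.diag(V_G))
    r_g = V_G / np.outer(sd_g, sd_g)
    return h2, r_g

# Illustrative two-trait example with a toy 3 x 3 GRM
V_G = np.array([[0.4, 0.1], [0.1, 0.3]])
V_E = np.array([[0.6, 0.0], [0.0, 0.7]])
A = np.array([[1.00, 0.02, 0.01],
              [0.02, 1.00, 0.03],
              [0.01, 0.03, 1.00]])
V = multivariate_covariance(V_G, V_E, A)   # (T*N) x (T*N) = 6 x 6
h2, r_g = h2_and_rg(V_G, V_E)
print(h2)          # [0.4 0.3]
print(r_g[0, 1])   # 0.1 / sqrt(0.4 * 0.3) ~ 0.289
```

In an actual REML fit, $\mathbf{V}_G$ and $\mathbf{V}_E$ would be parameterized through an underlying factor model to keep them positive semi-definite and updated with a BFGS-type quasi-Newton routine, as described above.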
Supplementary Note 4 details the analysis pipeline that has been used to obtain the heritability and genetic correlation estimates.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

Individual-level genotype and phenotype data are available by application via the UK Biobank website (https://www.ukbiobank.ac.uk/). The authors declare that the results supporting the findings of this study are available within the paper and its supplementary files. Figures 2–5 are based on the MGREML results available in Supplementary Data 1.

## Code availability

MGREML is available at https://github.com/devlaming/mgreml as a ready-to-use command-line tool66. The GitHub page comes with a full tutorial on the usage of this tool. An MGREML analysis of 86 traits, observed in a sample of 20,190 unrelated individuals (i.e., the dimensionality of the dataset used in our empirical application), takes around four hours on a four-core laptop with 16 GB of RAM.

## References

1. Kanai, R. & Rees, G. The structural basis of inter-individual differences in human behaviour and cognition. Nat. Rev. Neurosci. 12, 231–242 (2011).
2. Crossley, N. A. et al. The hubs of the human connectome are generally implicated in the anatomy of brain disorders. Brain 137, 2382–2395 (2014).
3. Hwang, J. et al. Prediction of Alzheimer's disease pathophysiology based on cortical thickness patterns. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring 2, 58–67 (2016).
4. Thompson, P. M. et al. ENIGMA and global neuroscience: a decade of large-scale studies of the brain in health and disease across more than 40 countries. Transl. Psychiatry 10, 1–28 (2020).
5. Seidlitz, J. et al. Transcriptomic and cellular decoding of regional brain vulnerability to neurogenetic disorders. Nat. Commun. 11, 1–14 (2020).
6. Nave, G., Jung, W. H., Karlsson Linnér, R., Kable, J. W. & Koellinger, P. D. Are bigger brains smarter? Evidence from a large-scale preregistered study. Psychol. Sci. 30, 43–54 (2019).
7. Avinun, R., Israel, S., Knodt, A. R. & Hariri, A. R. Little evidence for associations between the big five personality traits and variability in brain gray or white matter. NeuroImage 220, 117092 (2020).
8. Sudlow, C. et al. UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 12, e1001779 (2015).
9. Elliott, L. T. et al. Genome-wide association studies of brain imaging phenotypes in UK Biobank. Nature 562, 210–216 (2018).
10. Grasby, K. L. et al. The genetic architecture of the human cerebral cortex. Science 367, eaay6690 (2020).
11. Hofer, E. et al. Genetic correlations and genome-wide associations of cortical structure in general population samples of 22,824 adults. Nat. Commun. 11, 1–16 (2020).
12. Smith, S. M. et al. Enhanced brain imaging genetics in UK Biobank. BioRxiv https://doi.org/10.1101/2020.07.27.223545 (2020).
13. Zhao, B. et al. Genome-wide association analysis of 19,629 individuals identifies variants influencing regional brain volumes and refines their genetic co-architecture with cognitive and mental health traits. Nat. Genet. 51, 1637–1644 (2019).
14. Witte, J. S., Visscher, P. M. & Wray, N. R. The contribution of genetic variants to disease depends on the ruler. Nat. Rev. Genet. 15, 765–776 (2014).
15. Posthuma, D. et al. The association between brain volume and intelligence is of genetic origin. Nat. Neurosci. 5, 83–84 (2002).
16. Liu, S., Smit, D. J., Abdellaoui, A., van Wingen, G. & Verweij, K. J. Brain structure and function show distinct relations with genetic predispositions to mental health and cognition. MedRxiv https://doi.org/10.1101/2021.03.07.21252728 (2021).
17. Van der Schot, A. C. et al. Influence of genes and environment on brain volumes in twin pairs concordant and discordant for bipolar disorder. Arch. Gen. Psychiatry 66, 142–151 (2009).
18. Lee, S. H., Yang, J., Goddard, M. E., Visscher, P. M. & Wray, N. R. Estimation of pleiotropy between complex diseases using single-nucleotide polymorphism-derived genomic relationships and restricted maximum likelihood. Bioinformatics 28, 2540–2542 (2012).
19. Yang, J., Lee, S. H., Goddard, M. E. & Visscher, P. M. GCTA: a tool for genome-wide complex trait analysis. Am. J. Hum. Genet. 88, 76–82 (2011).
20. Gilmour, A. ASREML for testing mixed effects and estimating multiple trait variance components. Proc. Assoc. Advancement Anim. Breed. Genet. 12, 386–390 (1997).
21. Meyer, K. WOMBAT—a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML). J. Zhejiang Univ. Sci. B 8, 815–821 (2007).
22. Zhou, X. & Stephens, M. Genome-wide efficient mixed-model analysis for association studies. Nat. Genet. 44, 821–824 (2012).
23. Loh, P.-R. et al. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance-components analysis. Nat. Genet. 47, 1385–1392 (2015).
24. Lee, S. H. & Van der Werf, J. H. MTG2: an efficient algorithm for multivariate linear mixed model analysis based on genomic information. Bioinformatics 32, 1420–1422 (2016).
25. Bulik-Sullivan, B. et al. An atlas of genetic correlations across human diseases and traits. Nat. Genet. 47, 1236–1241 (2015).
26. Bulik-Sullivan, B. et al. LD Score regression distinguishes confounding from polygenicity in genome-wide association studies. Nat. Genet. 47, 291–295 (2015).
27. Miller, K. L. et al. Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nat. Neurosci. 19, 1523–1536 (2016).
28. Shi, H., Kichaev, G. & Pasaniuc, B. Contrasting the genetic architecture of 30 complex traits from summary association data. Am. J. Hum. Genet. 99, 139–153 (2016).
29. Evans, L. M. et al. Comparison of methods that use whole genome data to estimate the heritability and genetic architecture of complex traits. Nat. Genet. 50, 737–745 (2018).
30. Speed, D. et al. Reevaluation of SNP heritability in complex human traits. Nat. Genet. 49, 986–992 (2017).
31. Speed, D., Hemani, G., Johnson, M. R. & Balding, D. J. Improved heritability estimation from genome-wide SNPs. Am. J. Hum. Genet. 91, 1011–1021 (2012).
32. Yang, J. et al. Common SNPs explain a large proportion of the heritability for human height. Nat. Genet. 42, 565–569 (2010).
33. Young, A. I. et al. Relatedness disequilibrium regression estimates heritability without environmental bias. Nat. Genet. 50, 1304–1310 (2018).
34. Ning, Z., Pawitan, Y. & Shen, X. High-definition likelihood inference of genetic correlations across human complex traits. Nat. Genet. 52, 859–864 (2020).
35. Mills, M. C. & Rahal, C. A scientometric review of genome-wide association studies. Commun. Biol. 2, 1–11 (2019).
36. Watanabe, K. et al. A global overview of pleiotropy and genetic architecture in complex traits. Nat. Genet. 51, 1339–1348 (2019).
37. Zheng, J. et al. LD Hub: a centralized database and web interface to perform LD score regression that maximizes the potential of summary level GWAS data for SNP heritability and genetic correlation analysis. Bioinformatics 33, 272–279 (2017).
38. Ni, G. et al. Estimation of genetic correlation via linkage disequilibrium score regression and genomic restricted maximum likelihood. Am. J. Hum. Genet. 102, 1185–1194 (2018).
39. Yengo, L. et al. Meta-analysis of genome-wide association studies for height and body mass index in ~700,000 individuals of European ancestry. Hum. Mol. Genet. 27, 3641–3649 (2018).
40. Lee, J. J. et al. Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. Nat. Genet. 50, 1112–1121 (2018).
41. Power, R. A. & Pluess, M. Heritability estimates of the Big Five personality traits based on common genetic variants. Transl. Psychiatry 5, e604 (2015).
42. Grotzinger, A. D. et al. Genomic structural equation modelling provides insights into the multivariate genetic architecture of complex traits. Nat. Hum. Behav. 3, 513–525 (2019).
43. Hayes, J. F. & Hill, W. G. Modification of estimates of parameters in the construction of genetic selection indices ('bending'). Biometrics 37, 483–493 (1981).
44. Yeo, B. T. et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J. Neurophysiol. 106, 1125–1165 (2011).
45. Mesulam, M. M. From sensation to cognition. Brain 121, 1013–1052 (1998).
46. Kaufman, L. & Rousseeuw, P. J. Finding Groups in Data: An Introduction to Cluster Analysis (John Wiley & Sons, 1990).
47. Beauregard, M., Lévesque, J. & Bourgouin, P. Neural correlates of conscious self-regulation of emotion. J. Neurosci. 21, RC165 (2001).
48. Daviet, R. et al. Multimodal brain imaging study of 36,678 participants reveals adverse effects of moderate drinking. BioRxiv https://doi.org/10.1101/2020.03.27.011791 (2021).
49. Giuliani, N. R. & Berkman, E. T. Craving is an affective state and its regulation can be understood in terms of the extended process model of emotion regulation. Psychol. Inq. 26, 48–53 (2015).
50. Allegrini, A. G. et al. Genomic prediction of cognitive traits in childhood and adolescence. Mol. Psychiatry 24, 819–827 (2019).
51. Tam, A., Luedke, A. C., Walsh, J. J., Fernandez-Ruiz, J. & Garcia, A. Effects of reaction time variability and age on brain activity during Stroop task performance. Brain Imaging Behav. 9, 609–618 (2015).
52. Zeng, J. et al. Signatures of negative selection in the genetic architecture of human complex traits. Nat. Genet. 50, 746–753 (2018).
53. Speed, D., Holmes, J. & Balding, D. J. Evaluating and improving heritability models using summary statistics. Nat. Genet. 52, 458–462 (2020).
54. Speed, D. & Balding, D. J. SumHer better estimates the SNP heritability of complex traits from summary statistics. Nat. Genet. 51, 277–284 (2019).
55. Rakic, P. Evolution of the neocortex: a perspective from developmental biology. Nat. Rev. Neurosci. 10, 724–735 (2009).
56. Standring, S. Gray's Anatomy E-book: The Anatomical Basis of Clinical Practice (Elsevier Health Sciences, 2015).
57. Munafò, M. R., Tilling, K., Taylor, A. E., Evans, D. M. & Davey Smith, G. Collider scope: when selection bias can substantially influence observed associations. Int. J. Epidemiol. 47, 226–235 (2018).
58. Zhou, X., Im, H. K. & Lee, S. H. CORE GREML for estimating covariance between random effects in linear mixed models for complex trait analyses. Nat. Commun. 11, 1–11 (2020).
59. Van Rheenen, W., Peyrot, W. J., Schork, A. J., Lee, S. H. & Wray, N. R. Genetic correlations of polygenic disease traits: from theory to practice. Nat. Rev. Genet. 20, 567–581 (2019).
60. Maier, R. et al. Joint analysis of psychiatric disorders increases accuracy of risk prediction for schizophrenia, bipolar disorder, and major depressive disorder. Am. J. Hum. Genet. 96, 283–294 (2015).
61. Alfaro-Almagro, F. et al. Image processing and quality control for the first 10,000 brain imaging datasets from UK Biobank. Neuroimage 166, 400–424 (2018).
62. Desikan, R. S. et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 31, 968–980 (2006).
63. Lynch, M. & Walsh, B. Genetics and Analysis of Quantitative Traits (Sinauer, 1998).
64. Nocedal, J. & Wright, S. J. Numerical Optimization (Springer, 2006).
65. Visscher, P. M. et al. Statistical power to detect genetic (co)variance of complex traits using SNP data in unrelated samples. PLoS Genet. 10, e1004269 (2014).
66. De Vlaming, R. & Slob, E. A. W. MGREML v1.0.0. https://doi.org/10.5281/zenodo.5499768 (2021).

## Acknowledgements

UK Biobank has obtained ethical approval from the National Research Ethics Committee (11/NW/0382). This research has been conducted using the UK Biobank Resource under application number 11425. We would like to thank the participants and researchers from the UK Biobank Imaging Study who contributed or collected data. We also thank the Pan-UKB team for providing the UK Biobank-specific LD scores (https://pan.ukbb.broadinstitute.org). This work was carried out on the Dutch national e-infrastructure with the support of the SURF Cooperative (NWO Call for Compute Time EINF-403 to E.A.W.S.). P.D.K. and R.d.V. were supported by a European Research Council Consolidator Grant (647648 EdGe to P.D.K.). P.D.K. was also supported by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin–Madison with funding from the Wisconsin Alumni Research Foundation. C.A.R. was supported by a European Research Council Starting Grant (946647 GEPSI). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

## Author information

### Contributions

R.d.V., E.A.W.S., and P.J.F.G. developed the model. R.d.V., E.A.W.S., P.D.K., and C.A.R. designed the experiments. R.d.V. and E.A.W.S. wrote code and performed the statistical analyses. R.d.V., E.A.W.S., P.R.J., A.D., P.D.K., and C.A.R. analyzed the results. E.A.W.S. and P.R.J. visualized the results. C.A.R. led the preparation of the manuscript and supplementary files. All authors contributed to the editing of the manuscript and supplementary files.

### Corresponding author

Correspondence to Cornelius A. Rietveld.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Peer review information

Communications Biology thanks Doug Speed, Kazutaka Ohi and (Sang) Hong Lee for their contribution to the peer review of this work. Primary Handling Editor: George Inglis. Peer reviewer reports are available.
### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

de Vlaming, R., Slob, E.A.W., Jansen, P.R. et al. Multivariate analysis reveals shared genetic architecture of brain morphology and human behavior. Commun. Biol. 4, 1180 (2021). https://doi.org/10.1038/s42003-021-02712-y