https://cs.stackexchange.com/questions/944/power-of-nondeterministic-type-1-min-heap-automaton-with-both-a-heap-and-a-stack
# Power of nondeterministic type-1 min-heap automaton with both a heap and a stack

I have asked a series of questions concerning the capabilities of a certain class of exotic automata which I have called min-heap automata; the original question, and links to the others, can be found here. Two of my last questions were quickly dispatched; one completely, and the other mostly (I have edited it to make it more viable). In any event, I actually had one other question I meant to ask, and I would be interested to lay this subject to rest for good and all. So here it is:

A two-stack PDA can simulate a Turing machine. A $k$-heap nondeterministic type-1 min-heap automaton cannot (it seems; see the linked question). What about a $k$-heap nondeterministic type-1 min-heap automaton augmented with a stack (similar to that of a PDA)? Can it simulate a Turing machine? If not, does an augmented $(k+1)$-heap nondeterministic type-1 min-heap automaton accept a class of languages which is a proper superset of the languages accepted by augmented automata with only $k$ heaps?

Thanks, and I promise this is the last of these questions.

• In the original question, it is shown that EPAL isn't recognizable using a single heap. Yet it is context-free, so it is certainly recognizable if you add a stack. – Yuval Filmus Apr 24 '12 at 20:45
• @YuvalFilmus Good observation: adding a stack means that the automata can accept at least $HAL \cup CFL$. Can they accept more, though? – Patrick87 Apr 24 '12 at 21:04
• It can accept the language $\{ww^R : w \in (0+1)^*\} \cup \{a^nb^nc^n : n \geq 0\}$, which is neither in HAL nor in CFL. – Yuval Filmus Apr 26 '12 at 7:51

A heap and a stack can each be used to implement a counter, and two counters suffice to recognize $RE$.
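A small illustrative sketch (not from the thread; the class name is made up): a stack restricted to a one-letter alphabet behaves exactly like a counter, which is the observation behind the closing answer. Any device that provides two independent counters can then simulate a Minsky two-counter machine, which is Turing complete.

```python
class StackCounter:
    """A counter realised by a stack that only ever holds the symbol 'X'.

    Supported operations mirror a counter machine's: increment (push),
    decrement (pop), and zero test (empty test).
    """

    def __init__(self):
        self._stack = []

    def increment(self):
        self._stack.append("X")

    def decrement(self):
        if not self._stack:
            raise ValueError("counter is already zero")
        self._stack.pop()

    def is_zero(self):
        return not self._stack


# One heap plus one stack, if each can emulate such a counter, gives the
# two counters needed for Turing completeness of two-counter machines.
c = StackCounter()
for _ in range(3):
    c.increment()
c.decrement()
print(c.is_zero())  # False: the counter holds the value 2
```

The same interface could be backed by a min-heap holding copies of a single key, which is why the question reduces to whether each storage device can faithfully emulate a counter.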
https://www.groundai.com/project/the-k-th-smallest-dirac-operator-eigenvalue-and-the-pion-decay-constant/
# The kth Smallest Dirac Operator Eigenvalue and the Pion Decay Constant

G. Akemann and A. C. Ipsen

Department of Physics, Bielefeld University, Postfach 100131, D-33501 Bielefeld, Germany
Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen Ø, Denmark

###### Abstract

We derive an analytical expression for the distribution of the $k$th smallest Dirac eigenvalue in QCD with imaginary isospin chemical potential in the Dirac operator, for arbitrary gauge field topology $\nu$. Because of its dependence on the pion decay constant $F_\pi$ through the chemical potential in the epsilon-regime of chiral perturbation theory, this can be used for lattice determinations of that low-energy constant. On the technical side we use a chiral Random-Two Matrix Theory, where we express the $k$th eigenvalue distribution through the joint probability of the $k$ ordered smallest eigenvalues. The latter can be computed exactly for finite and infinite $N$, for which we derive generalisations of Dyson's integration Theorem and Sonine's identity.

## 1 Introduction

It is by now well known how the spontaneous breaking of chiral symmetry in QCD leads to remarkably strong predictions for the spectral properties of the Dirac operator in that theory. Based first exclusively on the relation to the effective field theory for the associated Nambu-Goldstone bosons at fixed gauge field topology [1], an intriguing relation to universal Random Matrix Theory (RMT) was also pointed out [2]. It has subsequently become clear how these two alternative formulations are related, and all $n$-point spectral correlation functions have been shown to be identical in the two formulations to leading order in an expansion in the inverse space-time volume $V$ [3, 4]. This then also holds for individual distributions of Dirac operator eigenvalues [5].
If one seeks sensitivity to the pion decay constant $F_\pi$, it turns out to be useful to consider the Dirac operator of quark doublets with isospin chemical potential. Based on the chiral Lagrangian formulation [6], it has been suggested to use a spectral 2-point function of the two associated Dirac operators with imaginary isospin chemical potential. The advantage of imaginary chemical potential lies in the fact that the corresponding Dirac operator retains its anti-hermiticity. To leading order, all results can be expressed in terms of a simple finite-volume scaling variable. In this way, $F_\pi$ can be extracted from fits that vary the volume and/or the chemical potential. There is also sensitivity to $F_\pi$ in other observables that couple to chemical potential [7, 8, 9, 10, 11]. The leading-order chiral Lagrangian computations of ref. [6] have been given a reformulation in terms of a chiral Random Two-Matrix Theory in ref. [12]. In this way, all spectral correlation functions associated with the two Dirac operators $D_1$ and $D_2$, with respective chemical potentials $\mu_1$ and $\mu_2$, have been computed analytically in [12] for both the quenched and the full theory with light flavours. This also includes all spectral correlation functions where the imaginary isospin chemical potential only enters the Dirac operator whose eigenvalues are being computed, while the gauge field configurations are obtained in the usual way at vanishing chemical potential. In analogy with what is done when varying quark masses away from the value used for generating the gauge field configurations, we call this "partial quenching". In an earlier paper [13] it was shown how all probability distributions of individual Dirac operator eigenvalues can be computed by means of a series expansion in higher $n$-point spectral correlation functions.
In reference [13] an explicit analytical formula was also given for the lowest non-zero eigenvalue distribution for arbitrary combinations of $N_1$ and $N_2$ flavours of the Dirac operators $D_1$ and $D_2$, respectively, at trivial gauge field topology, in an approach quite close to that of ref. [14]. Two obvious questions remained open that we will answer in this paper: how to extend this to non-zero topology $\nu$, and how to compute the distribution of the second, third or general $k$th eigenvalue, as these are known for zero chemical potential [15]. However, the path of ref. [13] is not very suitable, in particular for deriving the distributions of higher eigenvalues in a compact analytical manner. Our setup will follow closely ref. [15] at vanishing chemical potential. We shall present here a new formalism that immediately allows for the analytical determination of the distributions of these higher eigenvalues. The extension of the first approach [13] to non-zero topology for the first eigenvalue is presented in appendix C, as an alternative formulation and analytical check of part of our new approach. The benefit of our new results should be two-fold: while expressions for higher topology allow for an independent determination from different lattice configurations, the expressions for higher eigenvalues should allow for a better determination using the same configurations as for the first eigenvalue. How much of the program proposed in [6] to determine $F_\pi$ on the lattice has been realised in the meantime? Based on a preliminary account [16] of ref. [13], the distribution of the first eigenvalue was first used in simulations in [17]. However, the question remained how large the finite-volume corrections to the leading order (LO) epsilon-expansion are, in which the chiral Lagrangian-RMT correspondence holds. In a series of papers this question has been addressed and answered: in [18] the next-to-LO corrections (NLO) and in [19] the next-to-next-to-LO (NNLO) corrections in the epsilon-expansion were computed.
As a result of these computations, at NLO all RMT expressions for arbitrary $n$-point density correlation functions (and thus for all individual eigenvalues too) remain valid. The infinite-volume expressions simply get renormalised by finite-volume corrections: one only has to replace $\Sigma$ and $F_\pi$ by $\Sigma_{\rm eff}$ and $F_{\rm eff}$ in the corresponding RMT expressions. Here the subscript "eff" for effective encodes the corrections that match those computed earlier in [20] and [21, 11], respectively. Only at NNLO do non-universal, non-RMT corrections appear. It was further noticed in [18, 19] that the size of the corrections at each order depends considerably on the lattice geometry, in particular when using asymmetric geometries. In order to keep the NNLO corrections small, in [22] the authors used a specific optimised geometry where they could apply RMT predictions and effective couplings at NLO only, and they obtained realistic values for $\Sigma$ and $F_\pi$ from partially quenched lattice data at small chemical potential. Technically speaking, $\Sigma$ was determined there from the first eigenvalue distribution at vanishing chemical potential, and $F_\pi$ from the shift of the eigenvalues compared to zero chemical potential. We refer to [22] for a more detailed discussion of these fits. Motivated by these findings we have completed the computation of the $k$th Dirac eigenvalue distribution for all $k$ in the RMT setting, in order to have a more complete mathematical toolbox at hand. Our paper is organised as follows. In the next section we briefly define the notation and remind the reader of the definition of chiral Random Two-Matrix Theory. We introduce a certain joint probability density and describe how it can be used to derive individual eigenvalue distributions. We give the explicit finite-$N$ solution here in terms of new polynomials and a new sequence of matrix model kernels. We take the scaling limit relevant for QCD in section 4, and write out explicitly and discuss the physically most important examples, such as partially quenched results.
Section 5 contains our conclusions and a suggestion for a quite non-trivial but important extension of these results. Because some of the relevant technical details have been described in ref. [13], we have relegated many of the mathematical details of this paper to appendices. In addition, in Appendix A we describe an explicit construction of the polynomials needed to compute the first eigenvalue distribution in sectors of non-trivial gauge field topology if one alternatively uses the method of ref. [13].

## 2 Chiral Random Two-Matrix Theory

Before turning to the relevant Random Two-Matrix Theory, we first briefly outline the set-up in the language of the gauge field theory. We are considering QCD at finite four-volume $V$, and we assume that chiral symmetry is spontaneously broken at infinite volume. We consider two Dirac operators with different imaginary baryon (quark) chemical potentials $i\mu_1$ and $i\mu_2$,

$$D_1\psi_1^{(n)} \equiv \big[{D\!\!\!/}(A)+i\mu_1\gamma_0\big]\psi_1^{(n)} = i\lambda_1^{(n)}\psi_1^{(n)}\,, \qquad (2.1)$$
$$D_2\psi_2^{(n)} \equiv \big[{D\!\!\!/}(A)+i\mu_2\gamma_0\big]\psi_2^{(n)} = i\lambda_2^{(n)}\psi_2^{(n)}\,. \qquad (2.2)$$

When $\mu_1=-\mu_2$ this is simply imaginary isospin chemical potential, but we can stay with the more general case. We thus consider $N_1$ light quarks coupled to quark chemical potential $\mu_1$, and $N_2$ light quarks coupled to quark chemical potential $\mu_2$. Let us first consider the conceptually simplest case where all flavours are dynamical, and later comment on the changes needed to deal with partial quenching. In the chiral Lagrangian framework the terms that depend on the chemical potentials are easily written down on the basis of the usual correspondence with external vector sources. Going to the $\epsilon$-regime of chiral perturbation theory in sectors of fixed gauge field topology $\nu$ [1], the leading term in the effective partition function including imaginary chemical potential reads [6, 23] (see also [24] for QCD-like theories)

$$Z_\nu^{(N_f)}=\int_{U(N_f)} dU\,(\det U)^\nu\, e^{\frac14 VF_\pi^2\,{\rm Tr}[U,B][U^\dagger,B]+\frac12\Sigma V\,{\rm Tr}(M^\dagger U+MU^\dagger)}\,. \qquad (2.3)$$
In (2.3) the matrix

$$B = {\rm diag}(\mu_1 {\bf 1}_{N_1},\,\mu_2 {\bf 1}_{N_2}) \qquad (2.4)$$

is made out of the chemical potentials, and the quark mass matrix is

$$M = {\rm diag}(m_1,\ldots,m_{N_f})\,. \qquad (2.5)$$

The partition function (2.3) is a simple zero-dimensional group integral. The leading contribution to the effective low-energy field theory at finite volume in the $\epsilon$-regime is thus well known. We consider now the microscopic limit in which $V\to\infty$ while the rescaled masses and chemical potentials are kept fixed. In this limit, to LO in the $\epsilon$-expansion, the effective partition function of this theory and all the spectral correlation functions of its Dirac operator eigenvalues are completely equivalent to those of the chiral Random Two-Matrix Theory with imaginary chemical potential that was introduced in ref. [12]. The equivalence for the two-point function follows from [6]; for all higher density correlations it was proven in [4]. Therefore, since we have proven [13] that the probability distribution of the $k$th smallest eigenvalue can be computed in terms of this infinite sequence of spectral correlation functions, we are free to use the chiral Random Two-Matrix Theory when performing the actual analytical computation. As already mentioned, it has been shown in [18] that also to NLO in the $\epsilon$-expansion the random matrix expressions [12] for density correlation functions remain valid, when replacing $\Sigma$ and $F_\pi$ by the renormalised constants $\Sigma_{\rm eff}$ and $F_{\rm eff}$ that encode the finite-volume corrections. Only at NNLO do non-universal corrections to the random matrix setting appear. The partition function of chiral Random Two-Matrix Theory is, up to an irrelevant normalisation factor, defined as

$$Z_\nu^{(N_f)} = \int d\Phi\, d\Psi\; e^{-N{\rm Tr}(\Phi^\dagger\Phi+\Psi^\dagger\Psi)}\prod_{f_1=1}^{N_1}\det[D_1+m_{f_1}]\prod_{f_2=1}^{N_2}\det[D_2+m_{f_2}]\,, \qquad (2.6)$$

where $D_{1,2}$ are given by

$$D_{1,2}=\begin{pmatrix} 0 & i\Phi+i\mu_{1,2}\Psi \\ i\Phi^\dagger+i\mu_{1,2}\Psi^\dagger & 0 \end{pmatrix}\,. \qquad (2.7)$$

The operator $D_{1,2}$ remains anti-Hermitian because the chemical potentials are imaginary, as shown explicitly. Both $\Phi$ and $\Psi$ are complex rectangular matrices of size $(N+\nu)\times N$, where both $N$ and $\nu$ are integers. The index $\nu$ corresponds to gauge field topology in the usual way.
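As a numerical sanity check (not from the paper; sizes and couplings are illustrative), one can build a random instance of the block operator in eq. (2.7) with real $\mu$ and confirm that it is anti-Hermitian, $D^\dagger=-D$, so its spectrum is purely imaginary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, nu, mu = 4, 1, 0.3  # illustrative matrix size, topology, chemical potential

def cplx(rows, cols):
    """A random complex Gaussian matrix."""
    return rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))

Phi, Psi = cplx(N + nu, N), cplx(N + nu, N)

# Assemble D = [[0, i(Phi + mu*Psi)], [i(Phi^+ + mu*Psi^+), 0]] as in eq. (2.7).
top_right = 1j * (Phi + mu * Psi)
bottom_left = 1j * (Phi.conj().T + mu * Psi.conj().T)
D = np.block([
    [np.zeros((N + nu, N + nu)), top_right],
    [bottom_left, np.zeros((N, N))],
])

print(np.allclose(D.conj().T, -D))                               # True: anti-Hermitian
print(np.allclose(np.linalg.eigvals(D).real, 0.0, atol=1e-8))    # True: imaginary spectrum
```

This mirrors the statement in the text that imaginary chemical potential preserves the anti-hermiticity of the Dirac operator.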
The aforementioned correspondence to chiral perturbation theory holds in the following microscopic large-$N$ limit:

$$\lim_{N\to\infty} Z_\nu^{(N_f)} = {\cal Z}_\nu^{(N_f)} \quad\mbox{with}\quad \hat m = 2Nm\,,\ \ \hat\mu = \sqrt{2N}\,\mu\,. \qquad (2.8)$$

In the framework of chiral Random Two-Matrix Theory it is particularly simple to consider the situation corresponding to what we call partial quenching. Here one simply considers eigenvalues of one of the matrices, say $D_2$, that then does not enter into the actual integration measure of (2.6) by setting $N_2=0$. In the language of the chiral Lagrangian, this needs to be done in terms of graded groups or by means of the replica method. Referring to ref. [12] for details, we immediately write down the corresponding representation in terms of the eigenvalues $x_i$ and $y_i$ of $D_1$ and $D_2$, respectively,

$$Z_\nu^{(N_f)} = \int_0^\infty \prod_{i=1}^N dx_i\, dy_i\; P_\nu^{(N_f)}(\{x\},\{y\})\,, \qquad (2.9)$$

up to an irrelevant (mass dependent) normalization factor. The integrand is the joint probability distribution function (jpdf), which is central for what follows:

$$P_\nu^{(N_f)}(\{x\},\{y\}) \equiv \prod_{i=1}^N\Bigg( (x_iy_i)^{\nu+1}\, e^{-N(c_1x_i^2+c_2y_i^2)}\prod_{f_1=1}^{N_1}(x_i^2+m_{f_1}^2)\prod_{f_2=1}^{N_2}(y_i^2+m_{f_2}^2)\Bigg)\times \Delta_N(\{x^2\})\,\Delta_N(\{y^2\})\,\det_{1\le i,j\le N}\big[I_\nu(2dNx_iy_j)\big]\,. \qquad (2.12)$$

Because the integration in eq. (2.6) was over $\Phi$ and $\Psi$ separately, the eigenvalues of the two matrices now become coupled in the exponent. The corresponding unitary group integral leads to the determinant of modified $I$-Bessel functions, and removes one of the initially two Vandermonde determinants, which is defined as $\Delta_N(\{x^2\})=\prod_{j>i}(x_j^2-x_i^2)$. The precise connection between the constants $c_1$, $c_2$ and $d$ is given by

$$c_1 = (1+\mu_2^2)/\delta^2\,,\quad c_2 = (1+\mu_1^2)/\delta^2\,,\quad d = (1+\mu_1\mu_2)/\delta^2\,,\quad \delta = \mu_2-\mu_1\,,\quad 1-\tau = d^2/(c_1c_2)\,, \qquad (2.13)$$

where the latter is defined for later convenience. We need the joint probability distribution to be normalised to unity, which is done trivially by dividing by $Z_\nu^{(N_f)}$ (cf. eq. (2.9)).

## 3 The kth Eigenvalue at Finite-$N$ for Arbitrary $\nu\ge0$

We now follow the derivation of ref. [15] rather closely. We are able to do that because we focus here on the distributions of individual $x$-eigenvalues only, which are those we may partially quench.
For that purpose it is convenient to first consider the joint probability distribution of the $k$ smallest $x$-eigenvalues, ordered such that $x_1\le x_2\le\cdots\le x_k$:

$$\Omega_\nu^{(N_f)}(x_1,\ldots,x_k) \equiv \frac{N!}{Z_\nu^{(N_f)}(N-k)!}\int_{x_k}^\infty dx_{k+1}\cdots\int_{x_k}^\infty dx_N\int_0^\infty\prod_{i=1}^N dy_i\; P_\nu^{(N_f)}(\{x\},\{y\})\,. \qquad (3.1)$$

This quantity is then used to generate the $k$th $x$-eigenvalue distribution through the following integration (compared to [15] we are already working with squared variables here; translating to that picture, the integration bounds in eqs. (3.1) and (3.2) remain the same as in [15]):

$$p_k^{(N_f,\nu)}(x_k)=\int_0^{x_k}dx_1\int_{x_1}^{x_k}dx_2\ldots\int_{x_{k-2}}^{x_k}dx_{k-1}\;\Omega_\nu^{(N_f)}(x_1,\ldots,x_k)\,. \qquad (3.2)$$

Note that for $k=1$ no integration is needed, and $p_1^{(N_f,\nu)}(x_1)=\Omega_\nu^{(N_f)}(x_1)$. The computation of mixed or conditional individual eigenvalue distributions, e.g. to find the joint distribution of the first $x$- and first $y$-eigenvalue, remains an open problem. We next proceed as in ref. [13], and integrate out all $y$-eigenvalues exactly. For this we note that in eq. (3.1) we can replace the determinant over the Bessel functions by $N!$ times its diagonal part, after having made use of the antisymmetry property of the remaining integrand. After inserting a representation of the Bessel function in terms of a factorised infinite sum over Laguerre polynomials (see eq. (B.7) in [12]), we get

$$\int_0^\infty\prod_{i=1}^N dy_i\, P_\nu^{(N_f)}(\{x\},\{y\})=N!\int_0^\infty\prod_{i=1}^N\Bigg(dy_i\prod_{f_1=1}^{N_1}(x_i^2+m_{f_1}^2)\prod_{f_2=1}^{N_2}(y_i^2+m_{f_2}^2)\Bigg)\Delta_N(\{x^2\})\,\Delta_N(\{y^2\})$$
$$\times\prod_{i=1}^N\Bigg((Nd)^\nu\tau^{\nu+1}(x_iy_i)^{2\nu+1}\,e^{-N\tau(c_1x_i^2+c_2y_i^2)}\sum_{n_i=0}^\infty\frac{n_i!\,(1-\tau)^{n_i}}{(n_i+\nu)!}L_{n_i}^\nu(N\tau c_1x_i^2)\,L_{n_i}^\nu(N\tau c_2y_i^2)\Bigg)\,,$$

where the Laguerre polynomials now appear with their corresponding weight function due to the identity used.
Next we include the set of masses $\{im_{f_2}\}$ to form a larger Vandermonde determinant of size $N+N_2$, and then replace it by a determinant of in general arbitrary Laguerre polynomials normalised to be monic:

$$\Delta_N(\{y^2\})\prod_{i=1}^N\prod_{f_2=1}^{N_2}(y_i^2+m_{f_2}^2)=\frac{1}{\Delta_{N_2}(\{(im_2)^2\})}\begin{vmatrix} \hat L_0^{\bar\nu}(N\tau c_2(im_1)^2) & \cdots & \frac{1}{(N\tau c_2)^{N+N_2-1}}\hat L_{N+N_2-1}^{\bar\nu}(N\tau c_2(im_1)^2)\\ \vdots & & \vdots\\ \hat L_0^{\bar\nu}(N\tau c_2(im_{N_2})^2) & \cdots & \frac{1}{(N\tau c_2)^{N+N_2-1}}\hat L_{N+N_2-1}^{\bar\nu}(N\tau c_2(im_{N_2})^2)\\ \hat L_0^{\bar\nu}(N\tau c_2y_1^2) & \cdots & \frac{1}{(N\tau c_2)^{N+N_2-1}}\hat L_{N+N_2-1}^{\bar\nu}(N\tau c_2y_1^2)\\ \vdots & & \vdots\\ \hat L_0^{\bar\nu}(N\tau c_2y_N^2) & \cdots & \frac{1}{(N\tau c_2)^{N+N_2-1}}\hat L_{N+N_2-1}^{\bar\nu}(N\tau c_2y_N^2) \end{vmatrix}\,. \qquad (3.4)$$

Here the index $\bar\nu$ of the Laguerre polynomials is arbitrary. The monic Laguerre polynomials relate to the ordinary Laguerre polynomials as follows:

$$\hat L_n^\nu(x)\equiv(-1)^n n!\,L_n^\nu(x)=\sum_{j=0}^n(-1)^{n+j}\frac{n!\,(n+\nu)!}{(n-j)!\,(\nu+j)!\,j!}\,x^j=x^n+O(x^{n-1})\,. \qquad (3.5)$$

In eq. (3.4) the inverse powers $(N\tau c_2)^{-j}$ can be taken out of the determinant. Inserting this back into the $y$-integral above, we can use the orthogonality of the Laguerre polynomials in the integrated variables $y_i$, killing the infinite sums from the expanded Bessel functions. The Laguerre polynomials in $x_i^2$ thus replace those in $y_i^2$ inside the determinant, times the norm from the integration. We obtain

$$\int_0^\infty\prod_{i=1}^N dy_i\;P_\nu^{(N_f)}(\{x\},\{y\})=\frac{N!\,(Nd)^{N\nu}\tau^{N(\nu+1)}\prod_{j=0}^{N+N_2-1}(1-\tau)^j(N\tau c_2)^{-j}}{\Delta_{N_2}(\{(im_2)^2\})\;2^N(N\tau c_2)^{N(\nu+1)}}\prod_{i=1}^N\Bigg(x_i^{2\nu+1}e^{-N\tau c_1x_i^2}\prod_{f_1=1}^{N_1}(x_i^2+m_{f_1}^2)\Bigg)\Delta_N(\{x^2\})$$
$$\times\begin{vmatrix} \hat L_0^{\nu}(N\tau c_2(im_1)^2) & \cdots & \frac{1}{(1-\tau)^{N+N_2-1}}\hat L_{N+N_2-1}^{\nu}(N\tau c_2(im_1)^2)\\ \vdots & & \vdots\\ \hat L_0^{\nu}(N\tau c_2(im_{N_2})^2) & \cdots & \frac{1}{(1-\tau)^{N+N_2-1}}\hat L_{N+N_2-1}^{\nu}(N\tau c_2(im_{N_2})^2)\\ \hat L_0^{\nu}(N\tau c_1x_1^2) & \cdots & \hat L_{N+N_2-1}^{\nu}(N\tau c_1x_1^2)\\ \vdots & & \vdots\\ \hat L_0^{\nu}(N\tau c_1x_N^2) & \cdots & \hat L_{N+N_2-1}^{\nu}(N\tau c_1x_N^2) \end{vmatrix}\,, \qquad (3.8)$$

after taking out common factors of the determinant. The determinant in eq. (3.8), which we call $D_{N+N_2}(\{m_2^2\};\{x^2\})$, can almost be mapped to a Vandermonde determinant, using an identity proved in appendix A of [13]:

$$D_{N+N_2}(\{m_2^2\};\{x^2\})=\begin{vmatrix} \hat L_0^\nu(\tfrac{1}{\tau}M_1^2) & \cdots & \frac{\tau^{N+N_2-1}}{(1-\tau)^{N+N_2-1}}\hat L_{N+N_2-1}^\nu(\tfrac{1}{\tau}M_1^2)\\ \vdots & & \vdots\\ \hat L_0^\nu(\tfrac{1}{\tau}M_{N_2}^2) & \cdots & \frac{\tau^{N+N_2-1}}{(1-\tau)^{N+N_2-1}}\hat L_{N+N_2-1}^\nu(\tfrac{1}{\tau}M_{N_2}^2)\\ 1 & \cdots & X_1^{2(N+N_2-1)}\\ \vdots & & \vdots\\ 1 & \cdots & X_N^{2(N+N_2-1)} \end{vmatrix}\,, \qquad (3.9)$$

where we have defined

$$M_{f_2}^2\equiv N\tau c_2(im_{f_2})^2 \quad\mbox{and}\quad X_j^2\equiv N\tau c_1x_j^2\,. \qquad (3.11)$$
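The monic normalisation in eq. (3.5) is easy to verify symbolically. The following sketch (illustrative, not the authors' code) checks with SymPy that $\hat L_n^\nu(x)=(-1)^n n!\,L_n^\nu(x)$ indeed has leading coefficient one for a few small $n$ and $\nu$:

```python
import sympy as sp

x = sp.symbols("x")

# For each small (nu, n), build the monic polynomial of eq. (3.5) and
# confirm its leading coefficient is exactly 1.
for nu in range(3):
    for n in range(1, 5):
        monic = sp.expand((-1) ** n * sp.factorial(n) * sp.assoc_laguerre(n, nu, x))
        assert sp.Poly(monic, x).LC() == 1

print("monic check passed")
```

For instance $\hat L_2^0(x)=x^2-4x+2$, i.e. twice the ordinary $L_2^0(x)=\tfrac12(x^2-4x+2)$.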
This fact can be used below to perform the remaining integrations in the generating quantity $\Omega_\nu^{(N_f)}$, after inserting eq. (3.9) into eqs. (3.8) and (3.1). This leads to

$$\Omega_\nu^{(N_f)}(x_1,\ldots,x_k) = C\prod_{k\ge j>i\ge1}(x_j^2-x_i^2)\int_{x_k}^\infty dx_{k+1}\cdots\int_{x_k}^\infty dx_N\prod_{N\ge j>i\ge k+1}(x_j^2-x_i^2)\prod_{j=k+1}^N\prod_{i=1}^k(x_j^2-x_i^2)$$
$$\times\prod_{i=1}^N\Bigg(x_i^{2\nu+1}e^{-N\tau c_1x_i^2}\prod_{f_1=1}^{N_1}(x_i^2+m_{f_1}^2)\Bigg)D_{N+N_2}(\{m_2^2\};\{x^2\})\,, \qquad (3.13)$$

where we have split the Vandermonde determinant into integrated and unintegrated variables, and defined the following constant:

$$C \equiv \frac{(N!)^2(Nd)^{N\nu}\tau^{N(\nu+1)}\prod_{j=0}^{N+N_2-1}(1-\tau)^j(N\tau c_2)^{-j}}{Z_\nu^{(N_f)}\,(N-k)!\;2^N(N\tau c_2)^{N(\nu+1)}\,\Delta_{N_2}(\{(im_2)^2\})}\,. \qquad (3.14)$$

We can now change variables $z_j=x_j^2-x_k^2$ for $j=k+1,\ldots,N$, which shifts the lower integration bounds to zero and yields $N-k$ integrations in eq. (3.13):

$$\Omega_\nu^{(N_f)}(x_1,\ldots,x_k)=C\prod_{k\ge j>i\ge1}(x_j^2-x_i^2)\prod_{i=1}^k\Bigg(x_i^{2\nu+1}e^{-N\tau c_1x_i^2}\prod_{f_1=1}^{N_1}(x_i^2+m_{f_1}^2)\Bigg)\frac{1}{2^{N-k}}\,e^{-N(N-k)\tau c_1x_k^2}$$
$$\times\int_0^\infty\prod_{j=k+1}^N\Bigg(dz_j\,z_j\,e^{-N\tau c_1z_j}(z_j+x_k^2)^\nu\prod_{i=1}^{k-1}(z_j+x_k^2-x_i^2)\prod_{f_1=1}^{N_1}(z_j+x_k^2+m_{f_1}^2)\Bigg)$$
$$\times\prod_{N\ge j>i\ge k+1}(z_j-z_i)\;D_{N+N_2}(\{m_2^2\};x_1^2,\ldots,x_k^2,z_{k+1}+x_k^2,\ldots,z_N+x_k^2)\,. \qquad (3.18)$$

We thus obtain an integral with extra mass terms of flavour-type "1", in addition to the shifted masses. The weight

$$w(z)=z^1\,e^{-N\tau c_1z} \qquad (3.19)$$

is now of Laguerre type corresponding to a fixed topological charge of one, irrespective of the actual topological charge $\nu$ of the given gauge field sector we started with. We will therefore call this spurious topology. Compared with the corresponding derivation in the case of vanishing chemical potential [15], this can be seen to differ by one unit from the spurious topology at vanishing $\mu$ in [15]. The reason for this difference is easily traced to the different integration measure for the $y$-eigenvalues, which has one power less in the Vandermonde determinant compared to the case of vanishing chemical potential. (Of course, the additional pieces due to the $y$-integrations are what ensure equivalence to the corresponding one-matrix model results in the limit of vanishing chemical potential.) It is an interesting and quite non-trivial check on our present calculation that we recover the results of reference [15] in the limit of vanishing chemical potential.
In particular, the shift in spurious topological charge in the integration measure will now arise through recurrence relations of the Laguerre polynomials. Some details of this will be given below. When replacing the Vandermonde determinant in the variables $z_j$, as well as $D_{N+N_2}$, by a determinant containing Laguerre polynomials, we thus choose polynomials $\hat L_n^1$ in order to be able to exploit the orthogonality properties with respect to the measure eq. (3.19). For the new mass terms this is an easy task. We can include them into a bigger determinant, following the identity eq. (3.4). Here we replace the variables $y_i$ by the variables $z_j$, and the set of masses by the following set of masses:

$$m'^2_{f_1} \equiv m_{f_1}^2+x_k^2 \quad\mbox{for}\ f_1=1,\ldots,N_1\,, \qquad (3.20)$$
$$m'^2_{N_1+j} \equiv x_k^2+\epsilon_j^2 \quad\mbox{for}\ j=1,\ldots,\nu\,, \qquad (3.21)$$
$$m'^2_{N_1+\nu+i} \equiv x_k^2-x_i^2 \quad\mbox{for}\ i=1,\ldots,k-1\,, \qquad (3.22)$$

and likewise we define

$$M'^2_j\equiv N\tau c_1(im'_j)^2 \quad\mbox{for}\ j=1,\ldots,N_1+\nu+k-1\,. \qquad (3.23)$$

For computational simplicity we first set the degenerate masses to be different by adding small, pairwise different constants $\epsilon_j$, and then set $\epsilon_j=0$ at the end of the computation. Also we may choose the spurious topology as the arbitrary index in eq. (3.4). The prefactors in front of the Laguerre polynomials inside the determinant can be taken out. To express the determinant of the shifted arguments in eq. (3.18) in terms of Laguerre polynomials requires a bit more algebra:

$$D_{N+N_2}(\{m_2^2\};x_1^2,\ldots,x_k^2,z_{k+1}+x_k^2,\ldots,z_N+x_k^2)= \qquad (3.24)$$

$$=\begin{vmatrix}
\hat L_0^\nu(\tfrac{1}{\tau}M_1^2) & \cdots & \sum_{l=0}^{N+N_2-1}\frac{\tau^l}{(1-\tau)^l}\hat L_l^\nu(\tfrac{1}{\tau}M_1^2)\,(-X_k^2)^{N+N_2-1-l}\binom{N+N_2-1}{l}\\
\vdots & & \vdots\\
\hat L_0^\nu(\tfrac{1}{\tau}M_{N_2}^2) & \cdots & \sum_{l=0}^{N+N_2-1}\frac{\tau^l}{(1-\tau)^l}\hat L_l^\nu(\tfrac{1}{\tau}M_{N_2}^2)\,(-X_k^2)^{N+N_2-1-l}\binom{N+N_2-1}{l}\\
1 & \cdots & (X_1^2-X_k^2)^{N+N_2-1}\\
\vdots & & \vdots\\
1 & \cdots & (X_{k-1}^2-X_k^2)^{N+N_2-1}\\
1 & \cdots & 0\\
1 & \cdots & Z_{k+1}^{N+N_2-1}\\
\vdots & & \vdots\\
1 & \cdots & Z_N^{N+N_2-1}
\end{vmatrix} \qquad (3.26)$$

$$=\begin{vmatrix}
q_0^\nu(M_1^2) & \cdots & q_{N+N_2-1}^\nu(M_1^2)\\
\vdots & & \vdots\\
q_0^\nu(M_{N_2}^2) & \cdots & q_{N+N_2-1}^\nu(M_{N_2}^2)\\
\hat L_0^1(M'^2_{N_1+\nu+1}) & \cdots & \hat L_{N+N_2-1}^1(M'^2_{N_1+\nu+1})\\
\vdots & & \vdots\\
\hat L_0^1(M'^2_{N_1+\nu+k}) & \cdots & \hat L_{N+N_2-1}^1(M'^2_{N_1+\nu+k})\\
\hat L_0^1(Z_{k+1}) & \cdots & \hat L_{N+N_2-1}^1(Z_{k+1})\\
\vdots & & \vdots\\
\hat L_0^1(Z_N) & \cdots & \hat L_{N+N_2-1}^1(Z_N)
\end{vmatrix}\,, \qquad (3.27)$$

where for convenience we have defined the polynomials $q_n^\nu$ below, as well as

$$Z_k\equiv N\tau c_1z_k\,. \qquad (3.28)$$
In the first step, eq. (3.26), we have used the invariance of the determinant to undo the shift by $x_k^2$ of the variables. This leads to a shift in the $x$-variables and to linear combinations of the Laguerre polynomials in the masses. In the second step we have added columns from left to right, to replace the monic powers in $Z_j$ and $(X_i^2-X_k^2)$ by the polynomials $\hat L^1$. Because the determinant is no longer a simple Vandermonde determinant, this leads to a further sum in the first $N_2$ rows, invoking the following new polynomials:

$$q_n^\nu(M_2^2)=(-1)^n n!\sum_{l=0}^n\frac{1}{(1-\tau)^l}\,L_l^\nu(M_2^2)\,L_{n-l}^{-\nu}(-X_k^2)\,. \qquad (3.29)$$

The form given here is derived in Appendix A using identities for Laguerre polynomials. For later purposes we note already that in the limit of zero chemical potential, $\tau\to0$, we obtain Laguerre polynomials of the shifted mass from the $q_n^\nu$:

$$\lim_{\tau\to0}q_n^\nu(M_2^2)\,\frac{1}{(-1)^n n!}=\sum_{j=0}^nL_j^\nu(-Nm_2^2)\,L_{n-j}^{-\nu}(-Nx_k^2)=L_n^1(-N(m_2^2+x_k^2))\,. \qquad (3.30)$$

In this way we recover, after the use of a few identities for Laguerre polynomials, the results of ref. [15] in the limit of vanishing chemical potential. We now proceed with the integration over the variables $z_j$ in eq. (3.18). Using the rewriting discussed above, we have:

$$\Omega_\nu^{(N_f)}(x_1,\ldots,x_k)=C\prod_{k\ge j>i\ge1}(x_j^2-x_i^2)\prod_{i=1}^k\Bigg(x_i^{2\nu+1}e^{-N\tau c_1x_i^2}\prod_{f_1=1}^{N_1}(x_i^2+m_{f_1}^2)\Bigg)2^{-(N-k)}e^{-N(N-k)\tau c_1x_k^2}\,\Delta_{N_1+\nu+k-1}(\{(im')^2\}) \qquad (3.31)$$
$$\times\int_0^\infty\prod_{j=k+1}^N\big(dz_j\,z_j\,e^{-N\tau c_1z_j}\big)\prod_{j=0}^{N+N_1+\nu-2}(N\tau c_1)^{-j} \qquad (3.32)$$
$$\times\begin{vmatrix}
\hat L_0^1(M'^2_1) & \cdots & \hat L_{N+N_1+\nu-2}^1(M'^2_1)\\
\vdots & & \vdots\\
\hat L_0^1(M'^2_{N_1+\nu+k-1}) & \cdots & \hat L_{N+N_1+\nu-2}^1(M'^2_{N_1+\nu+k-1})\\
\hat L_0^1(Z_{k+1}) & \cdots & \hat L_{N+N_1+\nu-2}^1(Z_{k+1})\\
\vdots & & \vdots
\end{vmatrix}$$
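The $\tau\to0$ limit (3.30) rests on the Laguerre addition theorem $\sum_{j=0}^n L_j^\nu(a)\,L_{n-j}^{-\nu}(b)=L_n^1(a+b)$, which can be checked symbolically. The sketch below (illustrative, not from the paper) verifies it for small $n$ and $\nu$ with SymPy:

```python
import sympy as sp

a, b = sp.symbols("a b")

# Verify sum_{j=0}^n L_j^nu(a) * L_{n-j}^{-nu}(b) == L_n^1(a + b)
# for a few small degrees n and indices nu.
for nu in range(3):
    for n in range(4):
        lhs = sum(
            sp.assoc_laguerre(j, nu, a) * sp.assoc_laguerre(n - j, -nu, b)
            for j in range(n + 1)
        )
        rhs = sp.assoc_laguerre(n, 1, a + b)
        assert sp.simplify(lhs - rhs) == 0

print("addition theorem verified")
```

This is exactly the mechanism by which $q_n^\nu$ collapses to a single Laguerre polynomial of the shifted mass at vanishing chemical potential.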
https://www.quantumstudy.com/physics/fluid-dynamics/
### Nature of fluid flow :

When a fluid flows through a tunnel or tube, the motion of the fluid may be classified with respect to the nature of the flow. If the flow velocity at a particular point remains constant with time, the flow is said to be steady; if it changes with time at that point, the flow is said to be unsteady.

### Streamline Motion :

Consider an incompressible, non-viscous fluid flowing through a tube. Let us take a line A-B-C-D-E along which the fluid particles move successively, each following the preceding particle from A to E. The velocities of the fluid particles at A, B, C, D and E are, respectively, $\vec{v_A}$, $\vec{v_B}$, $\vec{v_C}$, $\vec{v_D}$ and $\vec{v_E}$. The flow is considered steady if these velocities at A, B, C, D and E are constant with time. A fluid particle coming towards A acquires the velocity $\vec{v_A}$ on reaching A. The same fluid particle then attains the velocity $\vec{v_B}$ at the point B, though it had velocity $\vec{v_A}$ at A. In this way the fluid particle successively reaches C, D and E and acquires the respective velocities at those positions. This reveals two things: the line of motion of a stream of fluid particles is fixed, and the velocities at different points along it are fixed with respect to time. This path of motion of the fluid particles is called a streamline, and the motion is called streamline flow. The velocity of a particle in streamline flow is a function of position only.
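Since in steady flow the velocity is a function of position only, a streamline can be traced numerically by repeatedly stepping a marker particle along the local velocity. The sketch below (not from the page; the field $\vec v(x,y)=(-y,x)$, a rigid rotation whose streamlines are circles, is a made-up example) does this with simple Euler steps:

```python
import math

def velocity(x, y):
    """Steady velocity field: depends on position only, never on time."""
    return -y, x

def trace_streamline(x, y, dt=1e-4, steps=100_000):
    """Follow a fluid particle along the local velocity (Euler stepping)."""
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
    return x, y

# A particle released on the unit circle stays (approximately) on it,
# because the streamlines of this field are circles about the origin.
x, y = trace_streamline(1.0, 0.0)
print(abs(math.hypot(x, y) - 1.0) < 1e-2)  # True
```

Small time steps are needed because the explicit Euler scheme slowly drifts off curved streamlines; a smaller `dt` (or a higher-order integrator) reduces the drift.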
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-1-section-1-6-transformations-of-functions-concept-and-vocabulary-check-page-241/5
## Precalculus (6th Edition) Blitzer The complete statement is “the graph of $y=5f\left( x \right)$ is obtained by a vertical stretch of the graph of $y=f\left( x \right)$ by multiplying each of its y-coordinates by 5.”
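A quick numeric illustration (not from the textbook; the sample function is made up): stretching $y=f(x)$ vertically by a factor of 5 multiplies every y-coordinate by 5 while leaving the x-coordinates unchanged.

```python
def f(x):
    """Any sample function works; the transformation rule is the same."""
    return x ** 2 - 3 * x + 1

xs = [-2, -1, 0, 1, 2]
original = [(x, f(x)) for x in xs]
stretched = [(x, 5 * f(x)) for x in xs]   # graph of y = 5 f(x)

# Every point (x, y) on y = f(x) maps to (x, 5y) on y = 5 f(x).
for (x0, y0), (x1, y1) in zip(original, stretched):
    assert x1 == x0 and y1 == 5 * y0

print(stretched[0])  # (-2, 55): f(-2) = 11, stretched to 55
```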
https://www-nature-com-s.caas.cn/articles/s41567-021-01219-x?error=cookies_not_supported&code=01a0ec53-2004-4dc1-8d2e-18676c73d63f
# Two-fold symmetric superconductivity in few-layer NbSe2

## Abstract

The strong Ising spin–orbit coupling in certain two-dimensional transition metal dichalcogenides can profoundly affect the superconducting state in few-layer samples. For example, in NbSe2, this effect combines with the reduced dimensionality to stabilize the superconducting state against magnetic fields up to ~35 T, and could lead to topological superconductivity. Here we report a two-fold rotational symmetry of the superconducting state in few-layer NbSe2 under in-plane external magnetic fields, in contrast to the three-fold symmetry of the lattice. Both the magnetoresistance and critical field exhibit this two-fold symmetry, and it also manifests deep inside the superconducting state in NbSe2/CrBr3 superconductor-magnet tunnel junctions. In both cases, the anisotropy vanishes in the normal state, demonstrating that it is an intrinsic property of the superconducting phase. We attribute the behaviour to the mixing between two closely competing pairing instabilities, namely the conventional s-wave instability typical of bulk NbSe2 and an unconventional d- or p-wave channel that emerges in few-layer NbSe2. Our results demonstrate the unconventional character of the pairing interaction in few-layer transition metal dichalcogenides and highlight the exotic superconductivity in this family of two-dimensional materials.

## Data availability

Data for figures (including Supplementary figures) are available in the public repository Zenodo at https://doi.org/10.5281/zenodo.4545917. Source data are provided with this paper.
## Code availability

All relevant codes needed to evaluate the conclusions in the paper are available from the corresponding authors upon reasonable request.
## Acknowledgements

We thank E.-A. Kim for useful discussions. B.H. and A.H. thank D. Graf and S. Maier for their discussions and support related to work done at the National High Magnetic Field Laboratory. Special thanks also go to Z. Jiang for all of the support associated with the Physical Property Measurement System at UMN. The work at the University of Minnesota (UMN) was supported primarily by the National Science Foundation through the University of Minnesota MRSEC, under Awards DMR-2011401 and DMR-1420013 (iSuperSeed). Portions of the UMN work were conducted in the Minnesota Nano Center, which is supported by the National Science Foundation through the National Nano Coordinated Infrastructure Network (NNCI) under award no. ECCS-1542202. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative agreement no.
DMR-1644779 and the State of Florida. The research at Cornell was supported by the Office of Naval Research (ONR) under award no. N00014-18-1-2368 for the tunnelling measurements, and the National Science Foundation (NSF) under award no. DMR-1807810 for the fabrication of tunnel junctions. The work in Lausanne was supported by the Swiss National Science Foundation. K.F.M. also acknowledges support from a David and Lucille Packard Fellowship. ## Author information Authors ### Contributions B.H., A.H., V.S.P. and K.W. designed the magnetoresistance and effective critical field experiments. B.H. performed the transport measurements at UMN with support from A.H. and K.-T.T. B.H. and A.H. performed the measurements at the NHMFL with support from A.S. B.H. analysed the data with support from A.H. under the supervision of V.S.P. and K.W. A.H., K.-T.T. and X.Z. fabricated the magneto-transport heterostructures with support from B.H., under the supervision of K.W. Analytical modelling was performed by D.S., R.M.F. and F.J.B., who also contributed to the interpretation of the results. E.S., X.X., J.S. and K.F.M. designed the junction experiments. E.S. and X.X. fabricated and measured the junctions under the supervision of J.S. and K.F.M. E.S. analysed the junction data under the supervision of J.S. and K.F.M., with input from V.S.P. and R.M.F. H.B. and L.F. grew the bulk NbSe2 samples for tunnel junction studies. B.H., A.H., E.S., D.S., V.S.P. and R.M.F. co-wrote the manuscript. All authors discussed the results and provided comments on the manuscript. ### Corresponding authors Correspondence to Ke Wang or Vlad S. Pribiag. ## Ethics declarations ### Competing interests The authors declare no competing interests. Peer review information Nature Physics thanks Hadar Steinberg, Carsten Timm and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. 
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary information

### Supplementary Information

Supplementary Figs. 1–11, Discussion and Table 12.

### Supplementary Data

Fig. 1.2 data, Fig. 1.3 data, Fig. 2 data, Fig. 3 data, Fig. 4 data, Fig. 5 data, Fig. 6 data, Fig. 7 data, Fig. 8 data, Fig. 11.1 data and Fig. 11.2 data.

## Source data

### Source Data Fig. 1

Unprocessed images and source data.

## Rights and permissions

Hamill, A., Heischmidt, B., Sohn, E. et al. Two-fold symmetric superconductivity in few-layer NbSe2. Nat. Phys. 17, 949–954 (2021). https://doi.org/10.1038/s41567-021-01219-x
# Surface tension of N non-mixing fluids

I am a mathematician, not a physicist, so please be gentle with me if I write something wrong. Consider a bounded, regular container $\Omega$, which is filled with the fluids $F_1,...,F_N$ which do not mix (i.e. $\bigcup_{i=1}^N F_i=\Omega$ and $F_i\cap F_j=\emptyset, \forall i\neq j$). Between two adjacent fluids $F_i,F_j$ there is a surface tension $\sigma_{ij}$ (which is zero if $F_i$ and $F_j$ are not adjacent). The problem I want to study: given $F_i$ with volume $V_i$ and density $\rho_i$, what is the final state the fluids will reach? There are three factors I have in mind:

• the interaction of $F_i$ and $F_j$ with $i\neq j$ by their surface tension;

• the interaction between $F_i$ and the boundary $\partial \Omega$ of the container;

• the action of gravity on each $F_i$.

I have two questions:

1. Is there a relation of the form $\sigma_{ij}+\sigma_{kl}=\sigma_{ik}+\sigma_{jl}$ (scalar or vectorial) between the surface tensions?

2. Are there any references or monographs which provide a good introduction to this study? I'm interested especially in surface tensions.

- @ 1. The closest I know regarding this is the Young equation for solid/liquid/gas contact lines. See: en.wikipedia.org/wiki/… –  Bernhard Jan 30 '12 at 19:16

This is a very complex problem to solve, so you will probably want to start with some simplifying assumptions, such as $N=2$, to make it more tractable. You will be looking for a minimum energy solution, where the energy is a combination of the gravitational potentials and the energy in the surface tension. Depending on parameters there may be some metastable solutions where the energy is locally minimal but not globally.
For example, a state in which the fluids all form layers with horizontal separation boundaries, ordered by density with the densest at the bottom, would be at least metastable, because any perturbation such as a distortion of the boundary would increase both types of energy. However, if one of the layers is sufficiently small in volume there may be a preference for the liquid in that layer to form a bubble between the layers above and below. This would depend on the surface tensions between the three layers. Even with just two fluids there may be bubbles formed for either the top or bottom layer. Working out the shape of the bubble to provide the minimum energy could be non-trivial. With more liquids the number of odd arrangements you need to look at is going to grow. You have to consider that a heavy fluid may prefer to form a bubble above a lighter one if the surface tensions make that a lower energy configuration. In all, finding the optimal solution will be a mixture of working through large numbers of discrete cases and then optimising the shape of the surface areas. You should perhaps be thinking in terms of how this can be done numerically on a computer rather than an analytical solution. You will want to think about whether the problem is NP hard or not. Some simplifying assumptions may help, but I can't see any physical reason why anything like your possible relationship between surface tensions should hold. -

Thank you for your answer. The assumption from my first question would give me a good framework in which the energy functional can be written in a better form, from which it can be proved that an optimal configuration exists and a numerical approach can be formulated. It was a long shot, but I said that I should ask some physicists before giving it up. –  Beni Bogosel Jan 30 '12 at 19:27

The fact that you are considering such possible relationships suggests that you have already considered at least as much as I have mentioned above.
A relationship such as $\sigma_{ij} = \sigma_{ik} \pm \sigma_{kj}$ could imply yours and this could be true at least as an approximation. –  Philip Gibbs - inactive Jan 30 '12 at 19:46
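The bookkeeping behind the answer's "minimum energy" argument can be sketched numerically. The following toy model is my own illustration, not anything from the thread: fluids stacked as flat horizontal layers in a container of unit cross-section, ignoring wall energies and bubble-shaped configurations, so the only competition is between gravitational potential energy and the flat-interface tensions $\sigma_{ij}$.

```python
import itertools

# Toy model: flat horizontal layers in a box of unit cross-section.
# Gravitational PE (per unit area) of a layer of density rho and thickness h
# resting on accumulated height z0 is rho * g * h * (z0 + h/2).
# Each adjacent pair of layers contributes sigma_ij * (interface area) = sigma_ij * 1.
g = 9.81

def total_energy(order, heights, densities, sigma):
    e, z = 0.0, 0.0
    for i in order:                      # order lists layers bottom to top
        h = heights[i]
        e += densities[i] * g * h * (z + h / 2)
        z += h
    for a, b in zip(order, order[1:]):   # flat interfaces between neighbours
        e += sigma[frozenset((a, b))]
    return e

heights   = [0.3, 0.3, 0.3]              # layer thicknesses (m)
densities = [1000.0, 800.0, 600.0]       # kg/m^3: water-ish, oil-ish, lighter oil
sigma = {frozenset((0, 1)): 0.03, frozenset((0, 2)): 0.05,
         frozenset((1, 2)): 0.02}        # J/m^2, made-up values

best = min(itertools.permutations(range(3)),
           key=lambda o: total_energy(o, heights, densities, sigma))
print(best)  # (0, 1, 2): densest fluid at the bottom
```

With realistic numbers gravity dominates and the densest fluid sinks to the bottom; making the $\sigma_{ij}$ large relative to the density differences is how the more exotic arrangements mentioned in the answer can win instead.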
# The Universe on the Other Side (of the Black Hole) Black holes are complicated. The Schwarzschild spacetime, which is the original basis for the theory of black holes, is more complicated than the simple form of the metric would have you think. The usual form of the metric is, to remind you, $ds^2 = -\left(1-\dfrac{2M}{r}\right) dt^2 + \left(1-\dfrac{2M}{r}\right)^{-1} dr^2 + r^2(d\phi^2 + \sin^2(\phi)d\theta^2)$. At the Schwarzschild radius, $r=2M$, this metric blows up (i.e., goes to infinity). That means, originally, the metric really only makes sense outside this radius, $r>2M$. But last time, we talked about how this apparent singularity is not actually a problem. After a change of coordinates,1 the weirdness at the Schwarzschild radius goes away, and the spacetime is suddenly well behaved all the way down to $r=0$. Well, except that light can’t escape from inside this radius, due to the extreme bending of spacetime. That’s why we call the sphere at $r=2M$ the event horizon, and the whole thing a black hole. So, the original presentation of the Schwarzschild metric was problematic, and so it only made sense for $r>2M$, outside the black hole. But a coordinate change let the spacetime make sense inside the black hole as well, for $r\leq 2M$. We were able to extend the spacetime. So that leads to a question. Can we extend the spacetime even further? It doesn’t seem like there’s anywhere to extend to, like there was before, when we extended into the interior of the black hole. But it turns out, despite all appearances, we can extend the spacetime further, and discover another universe on the other side of the black hole. But to understand that we need to talk about a special coordinate change for special relativity. To remind you, Minkowski space, the spacetime of special relativity, can be interpreted as a solution of general relativity. The manifold is $\mathbb{R}^4$, with metric $ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$. 
However, if we change to spherical spacetime coordinates, the metric in these becomes $ds^2 = -dt^2 + dr^2 + r^2(d\phi^2 + \sin^2(\phi) d\theta^2)$. (Same spacetime and metric, different coordinates.) Like we did last time, we’re going to mostly ignore the sphere part of the metric (the $d\phi^2$ and $d\theta^2$ parts), and focus on the $t$ and $r$ parts. And we’re going to do a weird kind of coordinate change. In fact, it’s weird enough that we’ll just sketch a diagram for the Minkowski spacetime in these new coordinates, rather than write out a metric. It’s kind of weird, but what we did is we compressed the infinite range of time and the infinite range of radii down to a compact area. For clarity, we also drew some coordinate lines based on the old $(t, r)$ coordinates. One nice part about this diagram, sometimes called a Penrose diagram,2 is that light still travels at 45 degree angles. Notice they end up at the (upper) right diagonal line. This line is called (future) null infinity. It’s called null, since light follows null paths, and infinity, since that’s where light “ends up” after traveling infinite distance. It’s infinitely far away, though represented as a line in our diagram. On the other hand, if you are sitting still, no matter what radius you’re sitting at, you end up at the top point. This top point is called future timelike infinity. It’s “infinity” since it takes infinite time to get there, and timelike, since it’s where you end up if you travel along a timelike path (such as sitting still).3 The bottom point is similarly past timelike infinity. To summarize, the most important things to take away from this diagram are that the top and bottom points are infinitely far to the future or past, and the right sides represent infinitely far away. These Penrose diagrams are kind of weird, but they are incredibly useful in understanding black holes. So, what does the Penrose diagram for the Schwarzschild spacetime look like?
Due to the black hole, this diagram looks a bit different. The right part of the spacetime, approaching null infinity, though, works exactly the same way as in the diagram for special relativity.4 The top point is infinitely far into the future, while the right lines are infinitely far away. On the left, the diagonal line cutting the spacetime in two is the event horizon of the black hole, at $r=2M$. Inside the black hole, the top line is the singularity at $r=0$. Light still travels at 45 degree angles in this diagram. But if that light was emitted inside the black hole, that means the light must fall into the singularity $r=0$. We saw this before, but it’s particularly easy to see in this diagram. It’s also interesting to notice that the event horizon is also at 45 degrees. In other words, the event horizon (and thus the black hole) is expanding at the speed of light! It’s not that the radius is changing. But if you shot a ray of light out from the black hole right on the event horizon, it would stay on the event horizon forever, travelling “outward.” And the path of the light in the spacetime would exactly be that line of the event horizon. So, in a very real sense, the event horizon is going outward at the speed of light. It’s just that the spacetime is so curved, light isn’t moving! As one demonstration for why this diagram is useful for understanding the spacetime, let’s think about what happened when we pushed Hitler out the airlock into the black hole. If Hitler is moving directly toward the black hole, his path in the spacetime might look like this. Of course, we’ll be sitting at a safe distance, say at a fixed $r$ value. What would we see as Hitler approaches the black hole? The light reflecting off of his face would travel out to us along null geodesics, i.e., along 45 degree lines. Let’s sketch a few of these light paths. 
Now, remember that the only infinities occur at the boundaries of the diagram.5 That means that Hitler, according to his own perception, falls into the black hole in some finite amount of time. But notice that the light coming from him reaches us (sitting at a safe distance) at later and later times. (Remember that the top of the diagram, where our path ends, represents the infinite future. So the light rays look evenly spaced on the diagram, but the time between when each ray reaches us is longer and longer.) Even though some light is leaving Hitler shortly before he falls through the horizon, it doesn’t reach us until almost infinitely later! In other words, Hitler experiences falling through the event horizon in finite time, but we will never actually see him fall in. We’ll just see him get closer and closer6. Pretty weird, huh? So, this Penrose diagram, so far, is pretty cool. But I promised that we could extend the spacetime, and I haven’t shown you how yet. The idea how is simple enough. You can write down the metric in the coordinates of the Penrose diagram, though I’ll leave it to a footnote7. The trick is that, while the spacetime that we have been using represents $X>-T$ in these coordinates, the metric is perfectly nice and well-behaved for $X\leq -T$ as well! If we add that part of the spacetime to our Penrose diagram, we get something like this. The diamond on the right, remember, is our universe. The diamond on the left is an alternate universe! Anything could be going on in that other universe, but we could never find out. See, the only way our universes connect is through the black hole. Since we can only travel on paths at angles less than 45 degrees (timelike paths), the only way that our paths could cross with the path of an alien from the alternate universe is inside the black hole. Certainly, inside the black hole, we could talk about the wonders of our respective universes8, but only for a short time. Then we all get eaten by the singularity. 
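An aside for the curious, with standard textbook formulas (these are the usual Kruskal–Szekeres coordinates, the uncompactified cousins of the Penrose coordinates in footnote 7, so nothing here is specific to this post): outside the horizon, the coordinates are

$X = \sqrt{\dfrac{r}{2M}-1}\, e^{\frac{r}{4M}}\cosh\left(\dfrac{t}{4M}\right), \quad T = \sqrt{\dfrac{r}{2M}-1}\, e^{\frac{r}{4M}}\sinh\left(\dfrac{t}{4M}\right)$,

and in every region they satisfy

$T^2 - X^2 = \left(1-\dfrac{r}{2M}\right)e^{\frac{r}{2M}}$.

So the horizon $r=2M$ sits on the lines $T = \pm X$, the singularity $r=0$ is the curve $T^2 - X^2 = 1$, and the metric

$ds^2 = \dfrac{32M^3}{r}\,e^{-\frac{r}{2M}}(-dT^2+dX^2) + r^2(d\phi^2 + \sin^2(\phi)\,d\theta^2)$

is perfectly regular everywhere with $T^2 - X^2 < 1$, including the region $X \leq -T$. That regularity is what makes the extension to the second diamond possible.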
What about the triangular region at the bottom? The best way to think of that region is in analogy with the black hole region (the top triangular region.) The black hole is a region where light and matter and energy can go in, but nothing can ever come out. The bottom region is the time-reversed version of this; light and matter and energy can leave, but it can never go back. Since it’s the opposite of a black hole, we call it a white hole. Unfortunately, this alternate universe is really just a mathematical construct. Remember, the Schwarzschild spacetime is supposed to model the region outside of a star. If a star runs out of fuel, and is massive enough, it will collapse to a radius smaller than the Schwarzschild radius $r=2M$9, forming a black hole. But the matter is still there, inside the black hole, at least until it falls into the singularity. (And who knows what happens then; probably something weird and quantum.) But that matter “censors” the alternate universe part of the solution from showing up. The easiest way to see this is to draw another Penrose diagram.10 The star stuff starts out bigger than the Schwarzschild radius. But as it collapses, it eventually falls within that radius. But since the Schwarzschild solution is only valid outside the star, there isn’t any alternate dimension “beyond” the matter falling inward. There’s just the center of the matter. Is it possible to extend the spacetime even farther? Well, something we can’t do is extend past the singularity at $r=0$. As we talked about last time, the curvature goes to infinity there, meaning there is infinite gravity. We shouldn’t be able to extend past that.11 There doesn’t seem to be a way to extend the Schwarzschild spacetime any farther. But if we add a little electric/magnetic charge (and so get a different black hole spacetime), it turns out we can extend past the equivalent of $r=0$. 
And it causes all sorts of problems, because it seems like it might be there, even for a collapsing star. But that’s a story for another day.

1. Remember, a change of coordinates changes the presentation of the metric/spacetime, but doesn’t actually change the spacetime itself.
2. It’s also sometimes called the conformal compactification of the spacetime.
3. There’s also spacelike infinity, which is where things going faster than the speed of light (like tachyons) end up.
4. Though it’s not the same spacetime. Minkowski space (special relativity) has no curvature, but the Schwarzschild spacetime does. However, the Penrose diagrams look the same near null infinity.
5. Technically, only at the top and right parts of the diagram, as we’ll discuss a bit later.
6. However, he will get dimmer and dimmer, since the photons will be reaching us at a slower rate. Similarly, the light will also be getting “redshifted” (the wavelength will be increasing). Put together, that means that Hitler will become harder and harder to see, until it becomes impossible to detect him in practice.
7. In the Penrose diagram coordinates, $(X,T) = (0,0)$ is the center point, where the event horizon intersects the lower diagonal line. In one version of these coordinates, the metric is $ds^2 = \dfrac{1}{\cos^2(X+T)\cos^2(X-T)}\dfrac{32M^3}{r}e^{-\frac{r}{2M}}(-dT^2 + dX^2) + r^2(d\phi^2 + \sin^2(\phi)d\theta^2)$, where $r$ is our old $r$ coordinate.
8. Mmmmm, bacon…
9. Which is about 3 kilometers for our sun.
10. This diagram isn’t of any particular solution, though you could come up with one that works pretty close to this way.
11. That argument shows that we can’t extend the spacetime past the singularity as a spacetime of general relativity. There’s a recent paper that shows you can’t even extend past that singularity as a manifold, much less as a solution to Einstein’s equations.
# Blog

Our experts frequently write blog posts about the findings of the research we are conducting.

## A Non-traded REIT Investor Fights Back

On June 5, 2015, I wrote that American Realty Capital's latest listing of a non-traded REIT was further evidence of the harm caused by sponsors of non-traded REITs. I also pointed out that, contrary to the common pattern in non-traded REIT listings, Schorsch and ARC used their control of the non-traded REIT version of GNL to tie the hands of shareholders and management in the subsequent GNL traded REIT and to opportunistically transfer wealth to themselves. I pointed out that similar...

## Nicolas Schorsch and American Realty Capital Lay Another Egg
# How to Run PowerShell from Python

In this article we will look at how you can run PowerShell scripts from your python scripts. I recently had to run a PowerShell script from my own python script and found some interesting information online on how to solve the problem. To reinforce what I learn when I do something new, I like to write about it, so in this guide we will go through some examples similar to my own use case, and also some other examples just to highlight what is possible.

## Problem

Sometimes you might like to run PowerShell scripts from your python scripts. Why? There are a multitude of reasons, but the most likely is that you can't or don't want to use a python library to access the information you need, for example system information on Windows. You know you can solve this with PowerShell, so if you can call PowerShell scripts from your python code, this will do what you need.

## Solution

The key to all this is the python subprocess module. This module replaces os.system and os.spawn*; here is what the docs have to say about it:

> The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.

You can check out the documentation for further information, but below are some example use cases to get you started.

### Example 1: Running PowerShell from Python to Obtain Operating System Information

In this example, we use the python subprocess module and the built-in PowerShell Get-ComputerInfo cmdlet, which was introduced in PowerShell 5.1. This simple example will grab the name of your operating system, and then, as shown, we can take that result and use it in our python code. Here we just print the name of the operating system, but you can see how you could do whatever you want with the results from the command.
```python
import subprocess

process = subprocess.Popen(
    ["powershell", "Get-ComputerInfo | select -ExpandProperty OsName"],
    stdout=subprocess.PIPE,
)
result = process.communicate()[0]
print("Our current operating system is:- " + result.decode("utf-8"))
```

> Output: Our current operating system is:- Microsoft Windows 11 Home

### Example 2: Spawning Applications from Python

As you can see in this example, we can spawn any application from Python using `subprocess.Popen` that we could run from PowerShell. Here are two common examples: opening Calculator and opening Notepad from Python using PowerShell.

```python
import subprocess

process = subprocess.Popen(["powershell", "calc.exe"])
result = process.communicate()[0]
print(result)
```

```python
import subprocess

process = subprocess.Popen(["powershell", "notepad.exe"])
result = process.communicate()[0]
print(result)
```

It's pretty obvious when you read the PowerShell command what each of these examples does: the first one runs the Windows Calculator program, and the second runs Windows Notepad. If the program opens, there is no output, but if it fails to open then you get PowerShell's description of the problem.

## Conclusion

On the surface, this information might not seem that significant, but depending on what you do with it, it can be very powerful. Imagine all the things you can do from PowerShell and then being able to wrap that in Python, maybe with a web front end. One exciting example would be a web GUI with some variables that lets me deploy SQL Server installations using PowerShell. There are also many popular PowerShell modules you could call from Python, like dbatools. Using other PowerShell modules really extends the usefulness of being able to call PowerShell scripts from Python.
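Beyond `Popen`, the newer `subprocess.run` helper (Python 3.5+) makes capturing output less verbose. A hedged sketch: the `shutil.which` guard is my addition so the function degrades gracefully on machines without PowerShell, and `-NoProfile` is a common flag that skips profile scripts for a faster, cleaner start.

```python
import shutil
import subprocess

def run_powershell(command):
    """Run a PowerShell command and return its stdout as text,
    or None if no PowerShell executable is on the PATH."""
    exe = shutil.which("powershell") or shutil.which("pwsh")
    if exe is None:
        return None
    completed = subprocess.run(
        [exe, "-NoProfile", "-Command", command],
        capture_output=True,
        text=True,
    )
    return completed.stdout.strip()

print(run_powershell("Get-ComputerInfo | select -ExpandProperty OsName"))
```

On a Windows machine with PowerShell 5.1 or later, this prints the OS name just like Example 1; elsewhere it prints `None` instead of raising.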
https://jason5lee.me/2022/08/24/my-understanding-of-programming-language/
# My understanding of programming languages

In this article, I explain my understanding of programming languages: what makes a language good from my perspective, the key to learning a programming language, and my understanding of design patterns. Here I use Kotlin for the examples, but there are several programming languages, including Rust, that I like.

## The essence of programming

In my opinion, the essence of programming is to express ideas in a programming language. For example, if you want to implement an app similar to Twitter, you may imagine a text box in the program with a "publish" button. When the user clicks this button, the tweet is published by being stored; it can then be displayed to viewers. Of course, the actual implementation will be much more complicated, including how the client and server interact and how data is stored. But when implementing these, we also turn ideas (what protocol to use, what database to use, etc.) into code.

For another example, suppose you want to implement a sorting algorithm. You may think about playing poker: every time you grab a new card, you insert it into a specific position so that the card sequence stays ordered. This way, you end up with a sorted sequence of cards. The process of programming is to express this idea in a programming language so you can sort in the program.

## What makes a programming language good

In my opinion, a good programming language is expressive. I can express my idea in code by "verbatim translation" alone; the code differs from what I think only in words and syntax. And when I see the code in the future, I can quickly know what I was thinking at the time.

As an example, I want to express "user information". It consists of a username, a nickname, and an optional birthday. The username and nickname are pieces of text, and the birthday is a date. In Kotlin, I can express it using the following code.
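A sketch of that data class in Kotlin (the field names and the use of `java.time.LocalDate` are my guesses from the description):

```kotlin
import java.time.LocalDate

// User information: a username and a nickname (text), plus an
// optional birthday (the `?` marks the type as nullable).
data class UserInfo(
    val username: String,
    val nickname: String,
    val birthday: LocalDate?,
)

fun main() {
    val user = UserInfo(
        username = "Jason5Lee",
        nickname = "Jason Lee",
        birthday = null,
    )
    println(user)
}
```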
The code differs from my idea only in words. The code uses `data class` to represent data, `String` to represent text, and `?` to represent the optional part. The code is a visual representation of my thoughts: no structural difference, no missing information, and no extra code.

If I want to construct a user whose username is Jason5Lee, whose nickname is Jason Lee, and whose birthday is empty, I can write `UserInfo(username = "Jason5Lee", nickname = "Jason Lee", birthday = null)` in Kotlin. It is also a visual representation of what I'm thinking. In other programming languages, I either sacrifice this intuition (for example, with `new UserInfo("Jason5Lee", "Jason Lee", null)` I can't state which value is which field, and must rely on a fixed order), or use the Builder pattern, which requires writing extra code that doesn't exist in my mind.

### Simplicity vs. Expressiveness

Of course, no programming language reflects my ideas with complete intuition; a programming language needs to balance expressiveness and simplicity. Programming languages should not be overcomplicated to cover all possible logic. Nowadays, however, people put too much emphasis on simplicity and ignore what I think is more important: being expressive.

Firstly, some complexity exists objectively. Pretending not to see it only makes things worse, because you still need to express it, but in an unexpressive way. Ignorance is NOT strength.

Secondly, if the total amount of code you write in a language is $N$, and the time you spend is $T$, do you want $N$ to be smaller or $T$ to be smaller? The simplicity of a programming language mainly reduces the one-time cost of learning it, while being expressive decreases the time spent on every piece of code you write, which dominates $T$ as $N$ grows.

In addition to being executable by a computer, an important advantage of a programming language is its precision: a programming language has very little ambiguity. I can concisely express my idea in a programming language, and anyone who knows the language can understand it precisely. You may feel a programming language is weird at first.
But for an expressive programming language, once you get used to it (which takes a constant amount of time), you can express and read ideas more efficiently (which saves time for every piece of code you write). With a statically-typed language, I can even spot potential mistakes in my ideas: using an expressive statically-typed language, I can have an idea, write it as code, and get feedback on mistakes from the IDE/editor's live checks. Then I can just execute the code I wrote.

## How to learn a programming language

Since the essence of programming is to express ideas in a programming language, in my opinion the key to learning a programming language is to learn how to express things with it. Each language feature is a tool for expressing a certain kind of idea. For example, `if` expresses executing different code based on a condition, similar to the "if" in our minds. Inheritance in OOP expresses an "is-a" relationship: if a class is a specific kind of another class, you can use inheritance to express that.

I will write more articles about how I understand programming language features and how I use them to express myself. You can have your own understanding too.

## The Design Patterns

When an idea is hard to express directly in a language, we may create a certain code pattern to express it. This is how we get design patterns. In my opinion, a design pattern exists due to the lack of a feature, and many people blame the wrong thing for design patterns. Because design patterns were initially described as "Elements of Reusable Object-Oriented Software", it is easy to mistakenly think that we don't need them as long as we avoid object orientation. But in fact, design patterns exist because it is hard to express some ideas with only object-oriented programming features. A design pattern may be necessary even in a non-object-oriented programming language if those ideas are still hard to express directly.
Correspondingly, a design pattern may be unnecessary in an object-oriented programming language if the pattern's use case is covered by a language feature.

For example, the Visitor pattern expresses data that has multiple cases. Each case may have different content, and the processing of the data may differ depending on the case. The data has stable cases and contents, while the processing is unknown and extensible. The Visitor pattern expresses this by defining a visitor interface that includes the processing for each case; the data then provides a visit method that accepts a visitor and performs the corresponding processing based on its case.

If the Visitor pattern is unnecessary in a programming language, the language must have a feature to express this kind of data. In Kotlin and Scala, the `sealed` feature can; in Rust, the `enum` feature can. So in these three languages we can say the Visitor pattern is unnecessary. In a language that does not have this kind of feature, the Visitor pattern is necessary, regardless of whether the language is "OOP" or not.

When you are learning design patterns, you should focus on what each one tries to express.

## Special Thanks

My programming understanding is heavily influenced by Scott Wlaschin. He has an F# blog and a book about how domain modeling matches functional programming. I formed my understanding by generalizing domain modeling into general idea-expressing, together with my own view of OOP and design patterns. Even though I have some preferences that differ from F#'s because of our different views, he played an important role in forming my understanding, and I'm grateful to him.
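To make the sealed-class point concrete, here is a minimal Kotlin sketch (the domain and names are mine, not from the article): the data's cases are fixed by the `sealed` hierarchy, while each new piece of processing is just a function with an exhaustive `when`, so no visitor interface is needed.

```kotlin
// A payment has stable cases with different content; `sealed` lets
// the compiler know every case, playing the role the visitor
// interface plays in the Visitor pattern.
sealed class Payment
data class Cash(val amount: Int) : Payment()
data class Card(val amount: Int, val last4: String) : Payment()

// Processing is open-ended: each new operation is a plain function.
// The `when` must be exhaustive, so forgetting a case is a compile error.
fun describe(p: Payment): String = when (p) {
    is Cash -> "cash payment of ${p.amount}"
    is Card -> "card ending ${p.last4}, amount ${p.amount}"
}

fun main() {
    println(describe(Cash(42)))
    println(describe(Card(99, "1234")))
}
```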
https://divingintogeneticsandgenomics.rbind.io/post/crontab-for-backup/
# Backup automatically with cron

Data backup is an essential step in the data analysis life cycle, as shown in the picture below taken from DataOne. There are so many important things you may want to back up: your raw/processed data, your code, and your dot configuration files.

While I git version control the scripts (not the data) for every project and push them to GitHub or GitLab as a backup, big files cannot be hosted on GitHub or GitLab. I usually back up my projects folder (containing all my scripts, raw data, processed data, etc.) to our high performance computing cluster, in the /rsch1 folder here at MD Anderson Cancer Center. IT staff back up the contents there every week. In that sense, I have a copy on my local computer, a backup copy on the remote cluster, and one more copy that IT staff back up.

I used to run `rsync -avhP ~/projects mdaris337:/rsch1/genomic_med/mtang1/tommy_mac_backup` once a week, but then I would sometimes forget about it. I needed a tool to do it for me every once in a while. Here comes cron to help.

cron is a Unix/Solaris/Linux utility that allows tasks to be run automatically in the background at regular intervals by the cron daemon. Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at specified times. The file location varies by operating system; see Crontab file location at the end of this document.

Commands for crontab:

- `export EDITOR=nano`: specify an editor to open the crontab file. (It took me forever to quit vim :) so avoiding it now.)
- `crontab -e`: edit the crontab file, or create one if it doesn't already exist.
- `crontab -l`: display the crontab file contents, i.e. your list of cron jobs.
- `crontab -r`: remove your crontab file.
- `crontab -v`: display the last time you edited your crontab file. (This option is only available on a few systems.)
### crontab file

crontab syntax:

```
* * * * *  command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
```

### Cron not working?

It happens to me that my cron job is not running. I googled around and found a comprehensive checklist you can go through to debug:

1. Is the cron daemon running?
   - Run `ps ax | grep cron` and look for cron.
   - Debian: `service cron start` or `service cron restart`
2. Is cron working?
   - `* * * * * /bin/echo "cron works" >> /file`
   - Syntax correct? See above.
3. Is the command working standalone?
   - Check if the script has an error by doing a dry run on the CLI.
   - When testing your command, test as the user whose crontab you are editing, which might not be your login or root.
4. Can cron run your job?
   - Check `/var/log/cron.log` or `/var/log/messages` for errors.
   - Ubuntu: `grep CRON /var/log/syslog`
   - Redhat: `/var/log/cron`
5. Check permissions.
   - Set the executable flag on the command: `chmod +x /var/www/app/cron/do-stuff.php`
   - If you redirect the output of your command to a file, verify you have permission to write to that file/directory.
6. Check paths.
   - Check the she-bang / hashbang line.
   - Do not rely on environment variables like PATH, as their value will likely not be the same under cron as under an interactive session.
7. Don't suppress output while debugging.
   - Commonly used is this suppression: `30 1 * * * command > /dev/null 2>&1`
   - Re-enable the standard output and standard error output.

### My crontab file

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command

MAILTO="tangming2005@gmail.com"

# rsync every Sunday 5am.
0 5 * * 0 rsync -avhP --exclude=".aspera" --exclude=".autojump" --exclude=".bash_history" --exclude=".bash_logout" --exclude=".cache" --exclude=".continuum" --exclude=".gem" --exclude=".gnome2" --exclude=".local" --exclude=".mozilla" --exclude=".myconfigs" --exclude=".oracle_jre_usage" --exclude=".parallel" --exclude=".pki" --exclude=".rbenv" --exclude=".Rhistory" --exclude=".rstudio" --exclude=".ssh" --exclude=".subversion" railab:.[^.]* ~/shark_dotfiles >> /var/log/rsync_shark_dotfiles.log 2>&1

0 5 * * 0 rsync -avhP --exclude=".aspera" --exclude=".autojump" --exclude=".bash_history" --exclude=".bash_logout" --exclude=".cache" --exclude=".continuum" --exclude=".gem" --exclude=".gnome2" --exclude=".local" --exclude=".mozilla" --exclude=".myconfigs" --exclude=".oracle_jre_usage" --exclude=".parallel" --exclude=".pki" --exclude=".rbenv" --exclude=".Rhistory" --exclude=".rstudio" --exclude=".ssh" --exclude=".subversion" mdaris337:.[^.]* ~/nautilus_dotfiles >> /var/log/rsync_nautilus_dotfiles.log 2>&1
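A related option is to wrap the backup in a standalone script that the crontab entry calls (a sketch; the paths here are placeholders, not the real ones from the post). Wrapping rsync in a script keeps the crontab line short and gives you a timestamped log to check when debugging a silent cron job.

```shell
#!/bin/sh
# Minimal backup wrapper for a cron job: logs start and end times so
# the log shows when each run happened. SRC/DEST default to sample
# paths; pass real ones as arguments.
SRC="${1:-$HOME/projects}"
DEST="${2:-/tmp/projects-backup}"
LOG="/tmp/backup.log"

mkdir -p "$DEST"
echo "backup started: $(date)" >> "$LOG"
if command -v rsync >/dev/null 2>&1; then
    # "|| true" keeps the sketch from aborting when the sample
    # paths do not exist; drop it for a real backup.
    rsync -avh "$SRC/" "$DEST/" >> "$LOG" 2>&1 || true
fi
echo "backup finished: $(date)" >> "$LOG"
```

The crontab entry then becomes something like `0 5 * * 0 /home/me/backup.sh >> /var/log/backup.log 2>&1` (path is illustrative).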
https://brilliant.org/problems/no-calculator-please/
Algebra Level 3

Compute the following without the use of a calculator:

$\large \sqrt [ 3 ]{ 20+14\sqrt { 2 } } +\sqrt [ 3 ]{ 20-14\sqrt { 2 } }$
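For readers who want to verify a pencil-and-paper answer, a quick numerical check in Python. The key observation (mine, not stated on the problem page) is the identity $(2 \pm \sqrt{2})^3 = 20 \pm 14\sqrt{2}$, so the sum of the cube roots collapses to $(2+\sqrt{2}) + (2-\sqrt{2}) = 4$.

```python
import math

# (2 + sqrt(2))**3 = 8 + 12*sqrt(2) + 12 + 2*sqrt(2) = 20 + 14*sqrt(2),
# and likewise with minus signs, so the expression equals 4 exactly.
s = math.sqrt(2)
value = (20 + 14 * s) ** (1 / 3) + (20 - 14 * s) ** (1 / 3)
print(round(value, 6))  # → 4.0
```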
https://brilliant.org/problems/the-product-of-two-consecutive-natural-numbers-is/
# The product of two consecutive natural numbers is

Number Theory Level 1

Let $Y = n(n+1)$ for some positive integer $n$. Which of the following statements is true?
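Whatever the answer choices (they are not shown here), the classic fact about $Y = n(n+1)$ is that it is always even: one of $n$ and $n+1$ must be even. A quick empirical check:

```python
# n and n+1 always differ in parity, so their product n(n+1)
# contains a factor of 2 and is therefore even.
assert all((n * (n + 1)) % 2 == 0 for n in range(1, 10_000))
print("n(n+1) is even for every n checked")
```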
https://stats.stackexchange.com/questions/45466/techniques-to-show-data-spanning-multiple-decades
# Techniques to show data spanning multiple decades

I have a scatter plot where each point is at integer coordinates that may include 0 for both X and Y. The range of each coordinate is large, but most of the data is clustered around 0. Ordinarily, I would do something like a log-log plot to show the decades of data. But since there is 0, that is not ideal (I could add a shift, but that makes interpretation of the data more difficult). Additionally, since the data are integers, they look very banded in log-log plots, which is also rather unattractive.

An example of the data:

An example of the log-log data where each axis is shifted by 1 before taking the log:

So, is there another type of transformation that would display the data more reasonably? It's important to see all scales of the data.

• Who is the audience / what is the goal for the picture? – MattBagg Dec 10 '12 at 1:22
• @MattBagg This is actually data from physics.SE so let's assume it is people who understand data and can make sense of visualizations of data. The goal is to show interesting trends of the data across all decades. – tpg2114 Dec 10 '12 at 4:10

## 2 Answers

You could try one of the transformations to approximately constant variance for Poisson data, such as $2\sqrt{Y+\frac{3}{8}}$.

• +1 The plots make it clear that some such transformation would help and that the logarithm is too strong. This one ought to work well as a point of departure. – whuber Jan 4 '13 at 17:52

I would consider plotting some kind of aggregate function for each decade, say the sum of the values or the count of instances (or both). If you want to continue using the scatterplot approach, try introducing transparency in the points, so the viewer can easily appreciate the increased density closer to the origin....

• I'm definitely open to other ways of reducing/visualizing the data. Scatter plot was just the first "see what I'm dealing with" attempt, but I am curious how to manage data with a distribution like this on scatter plots. Transparency is a good idea. – tpg2114 Dec 9 '12 at 7:04
• docs.ggplot2.org/current/geom_point.html take a look abt half way down the page... – ADP Dec 9 '12 at 7:29
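The suggested variance-stabilizing transform (an Anscombe-style $2\sqrt{Y + 3/8}$) is easy to sketch; unlike a logarithm it is finite at zero, and it compresses large counts far more gently:

```python
import math

def anscombe(y):
    """Variance-stabilizing transform for Poisson-like counts:
    2 * sqrt(y + 3/8). Well-defined at y = 0, unlike log(y)."""
    return 2 * math.sqrt(y + 3 / 8)

counts = [0, 1, 2, 5, 10, 100, 10000]
transformed = [round(anscombe(y), 2) for y in counts]
# Zero maps to a finite value, and large counts are compressed
# much less aggressively than with a logarithm.
print(transformed)
```

Applying this to both axes before plotting keeps the zero-valued points on the chart while still spreading out the cluster near the origin.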
https://chemistry.stackexchange.com/questions/38756/melting-and-boiling-point-comparison
# Melting and boiling point comparison

Graphite, silica, and diamond are covalent compounds, and yet they have high melting and boiling points. Why?

## 1 Answer

The very term "covalent compounds" is confusing, so you may just as well stop using it. The substances you specified are covalent crystals (AKA "covalent solids"). A single crystal of any of these is in fact one huge molecule linked by covalent bonds. To melt such a crystal, you have to break the covalent bonds, and they are pretty strong. On the other hand, we have $\ce{N2}$, $\ce{O2}$, and other covalent molecules. They may be quite strong in the sense that each molecule by itself is pretty hard to tear apart, but the molecules are linked together only by the so-called van der Waals forces, and these are quite weak; so these compounds are gases, and it takes some effort to see them turn into a liquid or a solid.
https://dataskeptic.com/podcasts/2017
# Data Skeptic: 2017

## [MINI] Dropout

Deep learning can be prone to overfit a given problem. This is especially frustrating given how much time and computational resources are often required to converge. One technique for fighting overfitting is to use dropout: the method of randomly selecting some neurons in one's network to set to zero during iterations of learning. The core idea is that each particular input in a given layer is not always available and therefore not a signal that can be relied on too heavily.

## The Police Data and the Data Driven Justice Initiatives

In this episode I speak with Clarence Wardell and Kelly Jin about their service as part of the White House's Police Data Initiative and Data Driven Justice Initiative, respectively. The Police Data Initiative was organized to use open data to increase transparency and community trust, as well as to help police agencies use data for internal accountability. The PDI emerged from recommendations made by the Task Force on 21st Century Policing. The Data Driven Justice Initiative was organized to help city, county, and state governments use data-driven strategies to help low-level offenders with mental illness get directed to the right services rather than into the criminal justice system.

## Studying Competition and Gender Through Chess

Prior work has shown that people's response to competition is in part predicted by their gender. Understanding why and when this occurs is important in areas such as labor market outcomes. A well structured study is challenging due to numerous confounding factors. Peter Backus and his colleagues have identified competitive chess as an ideal arena to study the topic. Find out why and what conclusions they reached. Our discussion centers around "Gender, Competition and Performance: Evidence from Real Tournaments" by Backus, Cubel, Guid, Sanchez-Pages, and Mañas. A summary of their paper can also be found here.
## Big Data Tools and Trends

In this episode, I speak with Raghu Ramakrishnan, CTO for Data at Microsoft. We discuss services, tools, and developments in the big data sphere, as well as the underlying needs that drove these innovations.

## Data Provenance and Reproducibility with Pachyderm

Versioning isn't just for source code. Being able to track changes to data is critical for answering questions about data provenance, quality, and reproducibility. Daniel Whitenack joins me this week to talk about these concepts and share his work on Pachyderm, an open source containerized data lake. During the show, Daniel mentioned the Gopher Data Science github repo as a great resource for any data scientists interested in the Go language. Although we didn't mention it, Daniel also did an interesting analysis of the 2016 world chess championship that complements our recent episode on chess well; you can find that post here.

Supplemental music is Lee Rosevere's "Let's Start at the Beginning". Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics

## [MINI] Primer on Deep Learning

In this episode, we talk about a high-level description of deep learning. Kyle presents a simple game (pictured below), which is really more of a puzzle, to try to give Linh Da the basic concept.

Thanks to our sponsor for this week, the Data Science Association. Please check out their upcoming Dallas conference at dallasdatascience.eventbrite.com

## [MINI] Logistic Regression on Audio Data

Logistic regression is a popular classification algorithm. In this episode, we discuss how it can be used to determine if an audio clip represents one of two given speakers. It assumes an output variable (isLinhda) is a linear combination of the available features, which in this episode's discussion are spectral bands.
Keep an eye on the dataskeptic.com blog this week as we post more details about this project. Thanks to our sponsor this week, the Data Science Association. Please check out their upcoming conference in Dallas on Saturday, February 18th, 2017 at dallasdatascience.eventbrite.com

## [MINI] Automated Feature Engineering

If a CEO wants to know the state of their business, they ask their highest ranking executives. These executives, in turn, should know the state of the business through reports from their subordinates. This structure is roughly analogous to a process observed in deep learning, where each layer of the business reports up different types of observations, KPIs, and reports to be interpreted by the next layer of the business. In deep learning, this process can be thought of as automated feature engineering. DNNs built to recognize objects in images may learn structures that behave like edge detectors in the first hidden layer. Subsequent layers learn to compose more abstract features from lower level outputs. This episode explores that analogy in the context of automated feature engineering. Linh Da and Kyle discuss a particular image in this episode; the image included in the show notes is drawn from the work of Lee, Grosse, Ranganath, and Ng in their paper "Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations".

## The Data Refuge Project

DataRefuge is a public, collaborative, grassroots effort around the United States in which scientists, researchers, computer scientists, librarians and other volunteers are working to download, save, and re-upload government data. The DataRefuge Project, which is led by the UPenn Program in Environmental Humanities and the Penn Libraries group at the University of Pennsylvania, aims to foster resilience in an era of anthropogenic global climate change and raise awareness of how social and political events affect transparency.
In this Data Skeptic episode, Kyle is joined by guest Ruggiero Cavallo to discuss his latest efforts to mitigate the problems presented in this new world of online advertising. Working with his collaborators, Ruggiero reconsiders the search ad allocation and pricing problems from the ground up and redesigns a search ad selling system. He discusses a mechanism that optimizes an entire page of ads globally based on efficiency-maximizing search allocation, and a novel technical approach to computing prices.

## [MINI] The Perceptron

Today's episode overviews the perceptron algorithm. This rather simple approach is characterized by a few particular features: it updates its weights after seeing every example, rather than as a batch; it uses a step function as an activation function; and it is only appropriate for linearly separable data, for which it will converge to a solution. Being a fairly simple algorithm, it can run very efficiently. Although we don't discuss it in this episode, multi-layer perceptron networks are what make this technique most attractive.

## [MINI] Backpropagation

Backpropagation is a common algorithm for training a neural network. It works by computing the gradient of each weight with respect to the overall error, and using stochastic gradient descent to iteratively fine tune the weights of the network. In this episode, we compare this concept to finding a location on a map, marble maze games, and golf.

## Data Science at Patreon

In this week's episode of Data Skeptic, host Kyle Polich talks with guest Maura Church, Patreon's data science manager. Patreon is a fast-growing crowdfunding platform that allows artists and creators of all kinds to build their own subscription content service.
The platform allows fans to become patrons of their favorite artists, an idea similar to Renaissance times, when musicians would rely on benefactors to become their patrons so they could make more art. At Patreon, Maura's data science team strives to provide creators with insight, information, and tools, so that creators can focus on what they do best: making art. On the show, Maura talks about some of her projects with the data science team at Patreon. Topics discussed during the episode include: optical music recognition (OMR) to translate musical scores to electronic format, network analysis to understand the connection between creators and patrons, growth forecasting and modeling in a new market, and churn modeling to determine predictors of long-time support. A more detailed explanation of Patreon's A/B testing framework can be found here. Other useful links to topics mentioned during the show: OMR research, Patreon blog, Patreon HQ blog, Amanda Palmer, Fran Meneses. ...

## [MINI] Feed Forward Neural Networks

In a feed forward neural network, neurons cannot form a cycle. In this episode, we explore how such a network would be able to represent three common logical operators: OR, AND, and XOR. The XOR operation is the interesting case. Below are the truth tables that describe each of these functions.

AND Truth Table:

| Input 1 | Input 2 | Output |
|---------|---------|--------|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

OR Truth Table:

| Input 1 | Input 2 | Output |
|---------|---------|--------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

XOR Truth Table:

| Input 1 | Input 2 | Output |
|---------|---------|--------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

The AND and OR functions should seem very intuitive. Exclusive or (XOR) is true if and only if exactly one input is 1. Could a neural network learn these mathematical functions? Let's consider the perceptron described below. First we see the visual representation, then the activation function, followed by the formula for calculating the output. Can this perceptron learn the AND function? Sure, with a suitable choice of weights and bias. What about OR?
Yup, again with a suitable choice of weights. An infinite number of possible solutions exist; I just picked values that hopefully seem intuitive. This is also a good example of why the bias term is important. Without it, the AND function could not be represented. How about XOR? No. It is not possible to represent XOR with a single layer. It requires two layers. The image below shows how it could be done with two layers. In the above example, the weights computed for the middle hidden node capture the essence of why this works. This node activates when receiving two positive inputs, thus contributing a heavy penalty to be summed by the output node. If a single input is 1, this node will not activate. The universal approximation theorem tells us that any continuous function can be tightly approximated using a neural network with only a single hidden layer and a finite number of neurons. With this in mind, a feed forward neural network should be adequate for any application. However, in practice, other network architectures and the allowance of more hidden layers are empirically motivated. Other types of neural networks have less strict structural definitions. The various ways one might relax this constraint generate other classes of neural networks that often have interesting properties. We'll get into some of these in future mini-episodes. Check out our recent blog post on how we're using Periscope Data cohort charts. Thanks to Periscope Data for sponsoring this episode. More about them at periscopedata.com/skeptics ...

## [MINI] GPU CPU

There's more than one type of computer processor. The central processing unit (CPU) is typically what one means when they say "processor". GPUs were introduced to be highly optimized for doing floating point computations in parallel. These types of operations were very useful for high-end video games, but as it turns out, those same processors are extremely useful for machine learning. In this mini-episode we discuss why. ...
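The single-layer AND/OR perceptrons and the two-layer XOR network described above can be sketched in Python. The weights and biases below are illustrative choices — one of the infinitely many valid solutions — and not necessarily the values shown in the episode's figures:

```python
def step(x):
    # Step activation: fire (1) when the weighted sum is non-negative.
    return 1 if x >= 0 else 0

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the step function.
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def AND(a, b):
    # Fires only when both inputs are on: 1 + 1 - 1.5 >= 0.
    # Without the bias term, AND could not be represented.
    return perceptron((a, b), (1, 1), -1.5)

def OR(a, b):
    # Fires when at least one input is on: 1 - 0.5 >= 0.
    return perceptron((a, b), (1, 1), -0.5)

def XOR(a, b):
    # Two layers: the hidden node fires only on (1, 1) and feeds a
    # heavy penalty into the output node, as described above.
    both_on = perceptron((a, b), (1, 1), -1.5)
    return perceptron((a, b, both_on), (1, 1, -2), -0.5)
```

Enumerating all four input pairs for each function reproduces the truth tables above.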
## OpenHouse

No reliable, complete database cataloging home sales data at a transaction level is available for the average person to access. For a data scientist interested in studying this data, our hands are completely tied. Opportunities like testing sociological theories, exploring economic impacts, studying market forces, or simply researching the value of an investment when buying a home are all blocked by the lack of easy access to this dataset. OpenHouse seeks to correct that by centralizing and standardizing all publicly available home sales transactional data. In this episode, we discuss the achievements of OpenHouse to date, and what plans exist for the future. Check out the OpenHouse gallery. I also encourage everyone to check out the project Zareen mentioned, which was her Harry Potter word2vec webapp, and Joy's project doing data visualization on Jawbone data.

Guests: Thanks again to @iamzareenf, @blueplastic, and @joytafty for coming on the show. Thanks to the numerous other volunteers who have helped with the project as well!

Announcements and details: If you're interested in getting involved in OpenHouse, check out the OpenHouse contributor's quickstart page. Kyle is giving a machine learning talk in Los Angeles on May 25th, 2017 at Zehr.

Sponsor: Thanks to our sponsor for this episode, Periscope Data. The blog post demoing their maps option is on our blog, titled Periscope Data Maps. To start a free trial of their dashboarding tool, visit http://periscopedata.com/skeptics. Kyle recently did a youtube video exploring the Data Skeptic podcast download numbers using Periscope Data. Check it out at https://youtu.be/aglpJrMp0M4. Supplemental music is Lee Rosevere's Let's Start at the Beginning. ...

## [MINI] Convolutional Neural Networks

CNNs are characterized by their use of a group of neurons typically referred to as a filter or kernel. In image recognition, this kernel is repeated over the entire image.
In this way, CNNs may achieve the property of translational invariance: once trained to recognize certain things, changing the position of that thing in an image should not disrupt the CNN's ability to recognize it. In this episode, we discuss a few high-level details of this important architecture. ...

## [MINI] Generative Adversarial Networks

GANs are an unsupervised learning method involving two neural networks iteratively competing. The discriminator is a typical learning system. It attempts to develop the ability to recognize members of a certain class, such as all photos which have birds in them. The generator attempts to create false examples which the discriminator incorrectly classifies. In successive training rounds, the networks examine each other and play a minimax game of trying to harm the performance of the other. In addition to being a useful way of training networks in the absence of a large body of labeled data, there are additional benefits. The discriminator may end up learning more about edge cases than it otherwise would, given typical examples. Also, the generator's false images can be novel and interesting on their own. The concept was first introduced in the paper Generative Adversarial Networks. ...

## Multi-Agent Diverse Generative Adversarial Networks

Despite the success of GANs in imaging, one of their major drawbacks is the problem of "mode collapse," where the generator learns to produce samples with extremely low variety. To address this issue, today's guests Arnab Ghosh and Viveka Kulharia proposed two different extensions. The first involves tweaking the generator's objective function with a diversity-enforcing term that would assess similarities between the different samples generated by different generators. The second comprises modifying the discriminator objective function, pushing generations corresponding to different generators towards different identifiable modes.
## Opinion Polls for Presidential Elections

Recently, we've seen opinion polls come under some skepticism. But is that skepticism truly justified? The recent Brexit referendum and US 2016 Presidential Election are examples where some claim the polls "got it wrong". This episode explores this idea. ...

## Unsupervised Depth Perception

This episode is an interview with Tinghui Zhou. In the recent paper "Unsupervised Learning of Depth and Ego-motion from Video", Tinghui and collaborators propose a deep learning architecture which is able to learn depth and pose information from unlabeled videos. We discuss details of this project and its applications. ...

## [MINI] Activation Functions

In a neural network, the output value of a neuron is almost always transformed in some way using a function. A trivial choice would be a linear transformation, which can only scale the data. However, other transformations, like a step function, allow for non-linear properties to be introduced. Activation functions can also help to standardize your data between layers. Some functions, such as the sigmoid, have the effect of "focusing" the area of interest on data. Extreme values are placed close together, while values near its point of inflection change more quickly with respect to small changes in the input. Similarly, these functions can take any real number and map all of them to a finite range such as [0, 1], which can have many advantages for downstream calculation. In this episode, we overview the concept and discuss a few reasons why you might select one function versus another. ...

## Doctor AI

When faced with medical issues, would you want to be seen by a human or a machine? In this episode, guest Edward Choi, co-author of the study titled Doctor AI: Predicting Clinical Events via Recurrent Neural Network, shares his thoughts.
Edward presents his team's efforts in developing a temporal model that can learn from human doctors based on their collective knowledge, i.e. the large amount of Electronic Health Record (EHR) data. ...

## [MINI] Max-pooling

Max-pooling is a procedure in a neural network which has several benefits. It performs dimensionality reduction by taking a collection of neurons and reducing them to a single value for future layers to receive as input. It can also prevent overfitting, since it takes a large set of inputs and admits only one value, making it harder to memorize the input. In this episode, we discuss the intuitive interpretation of max-pooling and why it's more common than mean-pooling or (theoretically) quartile-pooling. ...

## MS Build 2017

This episode recaps the Microsoft Build Conference. Kyle recently attended and shares some thoughts on cloud, databases, cognitive services, and artificial intelligence. The episode includes interviews with Rohan Kumar and David Carmona. ...

## [MINI] Conditional Independence

In statistics, two random variables might depend on one another (for example, interest rates and new home purchases). We call this conditional dependence. An important related concept exists called conditional independence. This phrase describes situations in which two variables are independent of one another given some other variable. For example, the probability that a vendor will pay their bill on time could depend on many factors such as the company's market cap. Thus, a statistical analysis would reveal many relationships between observable details about the company and their propensity for paying on time. However, if you know that the company has filed for bankruptcy, then we might assume their chances of paying on time have dropped to near 0, and the result is now independent of all other factors in light of this new information.
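The vendor example can be made concrete with a small simulation (an illustrative sketch, not something from the episode): market cap predicts on-time payment only through its correlation with bankruptcy, so conditioning on bankruptcy removes the dependence entirely.

```python
import random

random.seed(0)

def sample_vendor():
    # Hypothetical generative story: bankruptcy drives payment behavior;
    # market cap is informative only because it correlates with
    # bankruptcy risk.
    big = random.random() < 0.5                      # large market cap?
    bankrupt = random.random() < (0.1 if big else 0.4)
    pays = (not bankrupt) and (random.random() < 0.9)
    return big, bankrupt, pays

vendors = [sample_vendor() for _ in range(100_000)]

def p_pays(group):
    group = list(group)
    return sum(pays for _, _, pays in group) / len(group)

# Unconditionally, market cap predicts payment (roughly 0.81 vs 0.54)...
overall_big = p_pays(v for v in vendors if v[0])
overall_small = p_pays(v for v in vendors if not v[0])

# ...but conditioned on bankruptcy, market cap tells us nothing new:
# a bankrupt vendor never pays, large cap or not.
bankrupt_big = p_pays(v for v in vendors if v[1] and v[0])
bankrupt_small = p_pays(v for v in vendors if v[1] and not v[0])
```

The probabilities 0.5, 0.1, 0.4, and 0.9 are arbitrary assumptions chosen to make the effect visible; any story where the "other variable" fully mediates the dependence would do.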
We discuss a few real-world analogies to this idea in the context of some chance meetings on our recent trip to New York City. ...

## CosmosDB

This episode collects interviews from my recent trip to Microsoft Build, where I had the opportunity to speak with Dharma Shukla and Syam Nair about the recently announced CosmosDB. CosmosDB is a globally consistent, distributed datastore that supports all the popular persistent storage formats (relational, key/value pair, document database, and graph) under a single streamlined API. The system provides tunable consistency, allowing the user to make choices about how consistency trade-offs are managed under the hood, if a consumer wants to go beyond the selected defaults. ...

## Estimating Sheep Pain with Facial Recognition

Animals can't tell us when they're experiencing pain, so we have to rely on other cues to help treat their discomfort. But it is often difficult to tell how much an animal is suffering. The sheep, for instance, is the most inscrutable of animals. However, scientists have figured out a way to understand sheep facial expressions using artificial intelligence. On this week's episode, Dr. Marwa Mahmoud from the University of Cambridge joins us to discuss her recent study, "Estimating Sheep Pain Level Using Facial Action Unit Detection." Marwa and her colleagues at Cambridge's Computer Laboratory developed an automated system using machine learning algorithms to detect and assess when a sheep is in pain. We discuss some details of her work, how she became interested in studying sheep facial expression to measure pain, and her future goals for this project. If you're able to be in Minneapolis, MN on August 23rd or 24th, consider attending Farcon. Get your tickets today via https://farcon2017.eventbrite.com. ...
This episode discusses the vanishing gradient: a problem that arises when training deep neural networks in which nearly all the gradients are very close to zero by the time back-propagation has reached the first hidden layer. This makes learning virtually impossible without some clever trick or improved methodology to help earlier layers begin to learn. ...

## [MINI] Bayesian Belief Networks

A Bayesian Belief Network is an acyclic directed graph composed of nodes that represent random variables and edges that imply a conditional dependence between them. It's an intuitive way of encoding your statistical knowledge about a system, and it allows belief updates to be propagated efficiently throughout the network when new information is added. ...

## pix2code

In this episode, Tony Beltramelli of UIzard Technologies joins our host, Kyle Polich, to talk about the ideas behind his latest app that can transform graphic design into functioning code, as well as his previous work on spying with wearables. ...

## Project Common Voice

Thanks to our sponsor Springboard. In this week's episode, guest Andre Natal from Mozilla joins our host, Kyle Polich, to discuss a couple of exciting new developments in open source speech recognition systems, which include Project Common Voice. In June 2017, Mozilla launched a new open source project, Common Voice, a novel complementary project to the TensorFlow-based DeepSpeech implementation. DeepSpeech is a deep learning-based voice recognition system that was designed by Baidu, which they describe in greater detail in their research paper. DeepSpeech is a speech-to-text engine, and Mozilla hopes that, in the future, they can use Common Voice data to train their DeepSpeech engine. ...

## [MINI] Recurrent Neural Networks

RNNs are a class of deep learning models designed to capture sequential behavior. An RNN trains a set of weights which depend not just on new input but also on the previous state of the neural network.
This directed cycle allows the training phase to find solutions which rely on the state at a previous time, thus giving the network a form of memory. RNNs have been used effectively in language analysis, translation, speech recognition, and many other tasks. ...

## Cardiologist Level Arrhythmia Detection with CNNs

Our guest Pranav Rajpurkar and his coauthors recently published Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks, a paper in which they demonstrate the use of Convolutional Neural Networks which outperform board-certified cardiologists in detecting a wide range of heart arrhythmias from ECG data. ...

## [MINI] Long Short Term Memory

Thanks to our sponsor brilliant.org/dataskeptics. A Long Short Term Memory (LSTM) is a neural unit, often used in Recurrent Neural Networks (RNN), which attempts to provide the network the capacity to store information for longer periods of time. An LSTM unit remembers values for either long or short time periods. The key to this ability is that it uses no activation function within its recurrent components. Thus, the stored value is not iteratively modified and the gradient does not tend to vanish when trained with backpropagation through time. ...

## [MINI] One Shot Learning

One Shot Learning is the class of machine learning procedures that focuses on learning something from a small number of examples. This is in contrast to "traditional" machine learning, which typically requires a very large training set to build a reasonable model. In this episode, Kyle presents a coded message to Linhda, who is able to recognize that many of these new symbols created are likely to be the same symbol, despite having extremely few examples of each. Why can the human brain recognize a new symbol with relative ease while most machine learning algorithms require large training data? We discuss some of the reasons why and approaches to One Shot Learning. ...
## Recommender Systems Live from FARCON 2017

Recommender systems play an important role in providing personalized content to online users. Yet, typical data mining techniques are not well suited for the unique challenges that recommender systems face. In this episode, host Kyle Polich joins Dr. Joseph Konstan from the University of Minnesota at a live recording at FARCON 2017 in Minneapolis to discuss recommender systems and how machine learning can create better user experiences. ...

## Zillow Zestimate

Zillow is a leading real estate information and home-related marketplace. We interviewed Andrew Martin, a data science Research Manager at Zillow, to learn more about how Zillow uses data science and big data to make real estate predictions. ...

## [MINI] Big Oh Analysis

How long an algorithm takes to run depends on many factors including implementation details and hardware. However, the formal analysis of algorithms focuses on how they will perform in the worst case as the input size grows. We refer to an algorithm's runtime as its "O", which is a function of its input size "n". For example, O(n) represents a linear algorithm: one that takes roughly twice as long to run if you double the input size. In this episode, we discuss a few everyday examples of algorithmic analysis including sorting, searching a shuffled deck of cards, and verifying if a grocery list was successfully completed. Thanks to our sponsor Brilliant.org, who right now is featuring a related problem as their Brilliant Problem of the Week. ...

## Data science tools and other announcements from Ignite

In this episode, Microsoft's Corporate Vice President for Cloud Artificial Intelligence, Joseph Sirosh, joins host Kyle Polich to share some of Microsoft's latest and most exciting innovations in AI development platforms.
Last month, Microsoft launched a set of three powerful new capabilities in Azure Machine Learning for advanced developers to exploit big data, GPUs, data wrangling and container-based model deployment. Extended show notes found here. Thanks to our sponsor Springboard. Check out Springboard's Data Science Career Track Bootcamp. ...

## Generative AI for Content Creation

Last year, the film development and production company End Cue produced a short film, called Sunspring, that was entirely written by an artificial intelligence using neural networks. More specifically, it was authored by a recurrent neural network (RNN) called long short-term memory (LSTM). According to End Cue's Chief Technical Officer, Deb Ray, the company has come a long way in improving the generative AI aspect of the bot. In this episode, Deb Ray joins host Kyle Polich to discuss how generative AI models are being applied in creative processes, such as screenwriting. Their discussion also explores how data science can be used for analyzing development projects, such as financing and selecting scripts, as well as for optimizing the content production process. ...

## The Complexity of Learning Neural Networks

Over the past several years, we have seen many success stories in machine learning brought about by deep learning techniques. While the practical success of deep learning has been phenomenal, the formal guarantees have been lacking. Our current theoretical understanding of the many techniques that are central to the current ongoing big-data revolution is far from being sufficient for rigorous analysis, at best. In this episode of Data Skeptic, our host Kyle Polich welcomes guest John Wilmes, a mathematics post-doctoral researcher at Georgia Tech, to discuss the efficiency of neural network learning through complexity theory. ...

## [MINI] Turing Machines

TMs are a model of computation at the heart of algorithmic analysis. A Turing Machine has two components.
An infinitely long piece of tape (memory) with re-writable squares, and a read/write head which is programmed to change its state as it processes the input. This exceptionally simple mechanical computer can compute anything that is intuitively computable, so says the Church-Turing Thesis. Attempts to make a "better" Turing Machine by adding things like additional tapes can make the programs easier to describe, but it can't make the "better" machine more capable. It won't be able to solve any problems the basic Turing Machine can't, even if it perhaps solves them faster. An important concept we didn't get to in this episode is that of a Universal Turing Machine. Without the prefix, a TM is a particular algorithm. A Universal TM is a machine that takes, as input, a description of a TM and an input to that machine, and subsequently simulates the inputted machine running on the given input. Turing Machines are a central idea in computer science. They are central to algorithmic analysis and the theory of computation. ...

## [MINI] Exponential Time Algorithms

In this episode we discuss the complexity class EXP-Time, which contains problems whose algorithms require $O(2^{p(n)})$ time to run. In other words, the worst-case runtime is exponential in some polynomial of the input size. Problems in this class are even more difficult than problems in NP, since you can't even verify a solution in polynomial time. We mostly discuss Generalized Chess as an intuitive example of a problem in EXP-Time. Another well-known problem is determining if a given algorithm will halt in k steps. That extra condition of restricting it to k steps makes this problem distinct from Turing's original definition of the halting problem, which is known to be intractable. ...

## P vs NP

In this week's episode, host Kyle Polich interviews author Lance Fortnow about whether P will ever be equal to NP and solve all of life's problems.
Fortnow begins the discussion with the example question: Are there 100 people on Facebook who are all friends with each other? Even if you were an employee of Facebook and had access to all its data, answering this question naively would require checking more possibilities than any computer, now or in the future, could possibly do. The P/NP question asks whether there exists a more clever and faster algorithm that can answer this problem and others like it. ...

## The Computational Complexity of Machine Learning

In this episode, Professor Michael Kearns from the University of Pennsylvania joins host Kyle Polich to talk about the computational complexity of machine learning, complexity in game theory, and algorithmic fairness. Michael's doctoral thesis gave an early broad overview of computational learning theory, in which he emphasizes the mathematical study of efficient learning algorithms by machines or computational systems. When we look at machine learning algorithms, they are almost like meta-algorithms in some sense. For example, given a machine learning algorithm, it will look at some data and build some model, and it's going to behave presumably very differently under different inputs. But does that mean we need new analytical tools? Or is a machine learning algorithm just the same thing as any deterministic algorithm, but just a little bit more tricky to figure out anything complexity-wise? In other words, is there some overlap between the good old-fashioned analysis of algorithms and the analysis of machine learning algorithms from a complexity viewpoint? And what is the difference between strategies for determining the complexity bounds on samples versus algorithms? A big area of machine learning (and the analysis of learning algorithms in general) Michael and Kyle discuss is the topic known as complexity regularization. Complexity regularization asks: How should one measure the goodness of fit and the complexity of a given model?
And how should one balance those two, and how can one execute that in a scalable, efficient way algorithmically? From this, Michael and Kyle discuss the broader picture of why one should care whether a learning algorithm is efficiently learnable, i.e. learnable in polynomial time. Another interesting topic of discussion is the difference between sample complexity and computational complexity. An active area of research is how one should regularize their models so that they're balancing the complexity with the goodness of fit given their large training sample size. As mentioned, a good resource for getting started with correlated equilibria is: https://www.cs.cornell.edu/courses/cs684/2004sp/feb20.pdf

Thanks to our sponsors: Mendoza College of Business - Get your Masters of Science in Business Analytics from Notre Dame. brilliant.org - A fun, affordable, online learning tool. Check out their Computer Science Algorithms course. ...

## Azure Databricks

I sat down with Ali Ghodsi, CEO and founder of Databricks, and John Chirapurath, GM for Data Platform Marketing at Microsoft, related to the recent announcement of Azure Databricks. When I heard about the announcement, my first thoughts were two-fold. First, the possibility of optimized integrations with existing Azure services. This would be a big benefit to heavy Azure users who also want to use Spark. Second, the benefits of active directory to control Databricks access for large enterprises. Hear Ali and JG's thoughts and comments on what makes Azure Databricks a novel offering. ...

## Mercedes Benz Machine Learning Research

This episode features an interview with Rigel Smiroldo recorded at NIPS 2017 in Long Beach, California. We discuss data privacy, machine learning use cases, model deployment, and end-to-end machine learning. ...

## [MINI] Parallel Algorithms

When computers became commodity hardware and storage became incredibly cheap, we entered the era of so-called "big" data.
Most definitions of big data will include something about not being able to process all the data on a single machine. Distributed computing is required for such large datasets. Getting an algorithm to run on data spread out over a variety of different machines introduced new challenges for designing large-scale systems. First, there are concerns about the best strategy for spreading that data over many machines in an orderly fashion. Resolving ambiguity or disagreements across sources is sometimes required. This episode discusses how such algorithms relate to the complexity class NC. ...

## Quantum Computing

In this week's episode, Scott Aaronson, a professor at the University of Texas at Austin, explains what a quantum computer is, various possible applications, the types of problems they are good at solving, and much more. Kyle and Scott have a lively discussion about the capabilities and limits of quantum computers and computational complexity. ...

## Artificial Intelligence, a Podcast Approach

This episode kicks off the next theme on Data Skeptic: artificial intelligence. Kyle discusses what's to come for the show in 2018, why this topic is relevant, and how we intend to cover it. ...

## Complexity and Cryptography

This week, our host Kyle Polich is joined by guest Tim Henderson from Google to talk about the computational complexity foundations of modern cryptography and the complexity issues that underlie the field. A key question that arises during the discussion is whether we should trust the security of modern cryptography. ...
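The shard-and-combine pattern from the parallel algorithms mini-episode above can be sketched with a toy parallel sum. The round-robin sharding scheme and worker count here are illustrative choices, not anything prescribed in the episode:

```python
from concurrent.futures import ThreadPoolExecutor

def shard_sum(shard):
    # Each worker independently reduces its own shard.
    return sum(shard)

def parallel_sum(data, workers=4):
    # Spread the data across workers round-robin, reduce each shard
    # in parallel, then combine the partial results.
    shards = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(shard_sum, shards))
    return sum(partials)
```

Because addition is associative and commutative, how the data is split and recombined does not change the answer — the property that makes reductions like this easy to distribute across machines.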
http://mathonline.wikidot.com/the-second-group-isomorphism-theorem
# The Second Group Isomorphism Theorem

Recall from The First Group Isomorphism Theorem page that if $G$ and $H$ are groups and $\phi : G \to H$ is a group homomorphism, then:

(1)
\begin{align} \quad G / \ker (\phi) \cong \phi (G) \end{align}

We are now ready to prove the second group isomorphism theorem.

Theorem 1 (The Second Group Isomorphism Theorem): Let $G$ be a group and let $H$ and $N$ be subgroups of $G$ such that $N \trianglelefteq G$. Then:
a) $H \cap N \trianglelefteq H$.
b) $H/(H \cap N) \cong (HN)/N$.

• Proof of a) Since $H$ and $N$ are subgroups of $G$, their intersection $H \cap N$ is also a subgroup of $G$. We want to show that $H \cap N$ is a normal subgroup of $H$.
• Now, since $N$ is a normal subgroup of $G$, we have for all $n \in N$ and all $g \in G$ that $gng^{-1} \in N$.
• Let $x \in H \cap N$. Since $x \in H$ and $H$ is a subgroup of $G$ (so $H$ is closed under products and inverses), we have for all $g \in H$ that $gxg^{-1} \in H$. Also, since $x \in H \cap N$ we have $x \in N$, and since $N$ is a normal subgroup of $G$, we have for all $g \in H \subseteq G$ that $gxg^{-1} \in N$.
• So for all $x \in H \cap N$ and for all $g \in H$ we have that:

(2)
\begin{align} \quad gxg^{-1} \in H \cap N \end{align}

• This shows that $H \cap N$ is a normal subgroup of $H$, that is, $H \cap N \trianglelefteq H$. $\blacksquare$
• Proof of b) Define a function $\phi : H \to (HN) / N$ for all $h \in H$ by:

(3)
\begin{align} \quad \phi (h) = hN \end{align}

• We want to show that $\phi$ is a group homomorphism. Let $h_1, h_2 \in H$. Then:

(4)
\begin{align} \quad \phi (h_1h_2) = (h_1h_2)N = (h_1N)(h_2N) = \phi(h_1)\phi(h_2) \end{align}

• So indeed, $\phi$ is a group homomorphism. We now determine the kernel and range of $\phi$.
We have that:

(5)
\begin{align} \quad \ker (\phi) &= \{ h \in H : \phi(h) = N \} \\ &= \{ h \in H : hN = N \} \\ &= \{ h \in H : h \in N \} \\ &= H \cap N \end{align}

• And lastly, if $hnN \in (HN)/N$ then $hnN = hN$ (since $n \in N$). So the element $h \in H$ is such that $\phi(h) = hN = hnN \in (HN)/N$. This shows that $\phi$ is a surjective function and so:

(6)
\begin{align} \quad \phi (H) = (HN)/N \end{align}

• By the first group isomorphism theorem we have that:

(7)
\begin{align} \quad H/\ker(\phi) & \cong \phi(H) \\ \quad H / (H \cap N) & \cong (HN)/N \quad \blacksquare \end{align}
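As a quick numerical sanity check of part b) (an illustrative addition, not part of the proof), take $G = \mathbb{Z}_{12}$ with $H = \langle 2 \rangle$ and $N = \langle 3 \rangle$; $N$ is automatically normal since $G$ is abelian. The orders of the two quotient groups must then agree:

```python
# G = Z_12 under addition mod 12; H = <2>, N = <3>.
H = {(2 * k) % 12 for k in range(12)}      # {0, 2, 4, 6, 8, 10}
N = {(3 * k) % 12 for k in range(12)}      # {0, 3, 6, 9}

HN = {(h + n) % 12 for h in H for n in N}  # the subgroup HN
H_cap_N = H & N                            # {0, 6}

# By Lagrange's theorem, |H/(H ∩ N)| = |H| / |H ∩ N| and
# |(HN)/N| = |HN| / |N|. The theorem says the two quotients are
# isomorphic, so in particular their orders must match.
print(len(H) // len(H_cap_N), len(HN) // len(N))  # both are 3
```

Here $HN = \mathbb{Z}_{12}$ (since $\gcd(2,3) = 1$), so both quotients have order 3; in fact both are cyclic of order 3, as the theorem guarantees up to isomorphism.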
http://mizar.uwb.edu.pl/version/current/html/ndiff_6.html
:: Differentiation in Normed Spaces :: by Noboru Endou and Yasunari Shidama :: :: Copyright (c) 2013-2019 Association of Mizar Users theorem :: NDIFF_6:1 for D, E, F being non empty set ex I being Function of (Funcs (D,(Funcs (E,F)))),(Funcs ([:D,E:],F)) st ( I is bijective & ( for f being Function of D,(Funcs (E,F)) for d, e being object st d in D & e in E holds (I . f) . (d,e) = (f . d) . e ) ) proof end; theorem Th2: :: NDIFF_6:2 for D, E, F being non empty set ex I being Function of (Funcs (D,(Funcs (E,F)))),(Funcs ([:E,D:],F)) st ( I is bijective & ( for f being Function of D,(Funcs (E,F)) for e, d being object st e in E & d in D holds (I . f) . (e,d) = (f . d) . e ) ) proof end; theorem :: NDIFF_6:3 for D, E being non-empty non empty FinSequence for F being non empty set ex L being Function of (Funcs ((product D),(Funcs ((product E),F)))),(Funcs ((product (E ^ D)),F)) st ( L is bijective & ( for f being Function of (product D),(Funcs ((product E),F)) for e, d being FinSequence st e in product E & d in product D holds (L . f) . (e ^ d) = (f . d) . e ) ) proof end; theorem Th4: :: NDIFF_6:4 for X, Y being non empty set ex I being Function of [:X,Y:],[:X,(product <*Y*>):] st ( I is bijective & ( for x, y being object st x in X & y in Y holds I . (x,y) = [x,<*y*>] ) ) proof end; theorem Th5: :: NDIFF_6:5 for X being non-empty non empty FinSequence for Y being non empty set ex K being Function of [:(product X),Y:],(product (X ^ <*Y*>)) st ( K is bijective & ( for x being FinSequence for y being object st x in product X & y in Y holds K . (x,y) = x ^ <*y*> ) ) proof end; theorem :: NDIFF_6:6 for D being non empty set for E being non-empty non empty FinSequence for F being non empty set ex L being Function of (Funcs (D,(Funcs ((product E),F)))),(Funcs ((product (E ^ <*D*>)),F)) st ( L is bijective & ( for f being Function of D,(Funcs ((product E),F)) for e being FinSequence for d being object st e in product E & d in D holds (L . f) . (e ^ <*d*>) = (f . d) .
e ) ) proof end; definition let S be set ; assume A1: S is RealNormSpace ; func modetrans S -> RealNormSpace equals :Def1: :: NDIFF_6:def 1 S; correctness coherence ; by A1; end; :: deftheorem Def1 defines modetrans NDIFF_6:def 1 : for S being set st S is RealNormSpace holds modetrans S = S; definition let S, T be RealNormSpace; func diff_SP (S,T) -> Function means :Def2: :: NDIFF_6:def 2 ( dom it = NAT & it . 0 = T & ( for i being Nat holds it . (i + 1) = R_NormSpace_of_BoundedLinearOperators (S,(modetrans (it . i))) ) ); existence ex b1 being Function st ( dom b1 = NAT & b1 . 0 = T & ( for i being Nat holds b1 . (i + 1) = R_NormSpace_of_BoundedLinearOperators (S,(modetrans (b1 . i))) ) ) proof end; uniqueness for b1, b2 being Function st dom b1 = NAT & b1 . 0 = T & ( for i being Nat holds b1 . (i + 1) = R_NormSpace_of_BoundedLinearOperators (S,(modetrans (b1 . i))) ) & dom b2 = NAT & b2 . 0 = T & ( for i being Nat holds b2 . (i + 1) = R_NormSpace_of_BoundedLinearOperators (S,(modetrans (b2 . i))) ) holds b1 = b2 proof end; end; :: deftheorem Def2 defines diff_SP NDIFF_6:def 2 : for S, T being RealNormSpace for b3 being Function holds ( b3 = diff_SP (S,T) iff ( dom b3 = NAT & b3 . 0 = T & ( for i being Nat holds b3 . (i + 1) = R_NormSpace_of_BoundedLinearOperators (S,(modetrans (b3 . i))) ) ) ); theorem Th7: :: NDIFF_6:7 for S, T being RealNormSpace holds ( (diff_SP (S,T)) . 0 = T & (diff_SP (S,T)) . 1 = R_NormSpace_of_BoundedLinearOperators (S,T) & (diff_SP (S,T)) . 2 = R_NormSpace_of_BoundedLinearOperators (S,(R_NormSpace_of_BoundedLinearOperators (S,T))) ) proof end; theorem Th8: :: NDIFF_6:8 for S, T being RealNormSpace for i being Nat holds (diff_SP (S,T)) . i is RealNormSpace proof end; theorem Th9: :: NDIFF_6:9 for S, T being RealNormSpace for i being Nat ex H being RealNormSpace st ( H = (diff_SP (S,T)) . i & (diff_SP (S,T)) .
(i + 1) = R_NormSpace_of_BoundedLinearOperators (S,H) ) proof end; definition let S, T be RealNormSpace; let i be Nat; func diff_SP (i,S,T) -> RealNormSpace equals :: NDIFF_6:def 3 (diff_SP (S,T)) . i; correctness coherence (diff_SP (S,T)) . i is RealNormSpace ; proof end; end; :: deftheorem defines diff_SP NDIFF_6:def 3 : for S, T being RealNormSpace for i being Nat holds diff_SP (i,S,T) = (diff_SP (S,T)) . i; theorem Th10: :: NDIFF_6:10 for S, T being RealNormSpace for i being Nat holds diff_SP ((i + 1),S,T) = R_NormSpace_of_BoundedLinearOperators (S,(diff_SP (i,S,T))) proof end; definition let S, T be RealNormSpace; let f be set ; assume A1: f is PartFunc of S,T ; func modetrans (f,S,T) -> PartFunc of S,T equals :Def4: :: NDIFF_6:def 4 f; correctness coherence f is PartFunc of S,T ; by A1; end; :: deftheorem Def4 defines modetrans NDIFF_6:def 4 : for S, T being RealNormSpace for f being set st f is PartFunc of S,T holds modetrans (f,S,T) = f; definition let S, T be RealNormSpace; let f be PartFunc of S,T; let Z be Subset of S; func diff (f,Z) -> Function means :Def5: :: NDIFF_6:def 5 ( dom it = NAT & it . 0 = f | Z & ( for i being Nat holds it . (i + 1) = (modetrans ((it . i),S,(diff_SP (i,S,T)))) | Z ) ); existence ex b1 being Function st ( dom b1 = NAT & b1 . 0 = f | Z & ( for i being Nat holds b1 . (i + 1) = (modetrans ((b1 . i),S,(diff_SP (i,S,T)))) | Z ) ) proof end; uniqueness for b1, b2 being Function st dom b1 = NAT & b1 . 0 = f | Z & ( for i being Nat holds b1 . (i + 1) = (modetrans ((b1 . i),S,(diff_SP (i,S,T)))) | Z ) & dom b2 = NAT & b2 . 0 = f | Z & ( for i being Nat holds b2 . (i + 1) = (modetrans ((b2 . i),S,(diff_SP (i,S,T)))) | Z ) holds b1 = b2 proof end; end; :: deftheorem Def5 defines diff NDIFF_6:def 5 : for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S for b5 being Function holds ( b5 = diff (f,Z) iff ( dom b5 = NAT & b5 . 0 = f | Z & ( for i being Nat holds b5 . (i + 1) = (modetrans ((b5 . 
i),S,(diff_SP (i,S,T)))) | Z ) ) ); theorem Th11: :: NDIFF_6:11 for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S holds ( (diff (f,Z)) . 0 = f | Z & (diff (f,Z)) . 1 = (f | Z) | Z & (diff (f,Z)) . 2 = ((f | Z) | Z) | Z ) proof end; theorem Th12: :: NDIFF_6:12 for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S for i being Nat holds (diff (f,Z)) . i is PartFunc of S,(diff_SP (i,S,T)) proof end; definition let S, T be RealNormSpace; let f be PartFunc of S,T; let Z be Subset of S; let i be Nat; func diff (f,i,Z) -> PartFunc of S,(diff_SP (i,S,T)) equals :: NDIFF_6:def 6 (diff (f,Z)) . i; correctness coherence (diff (f,Z)) . i is PartFunc of S,(diff_SP (i,S,T)) ; by Th12; end; :: deftheorem defines diff NDIFF_6:def 6 : for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S for i being Nat holds diff (f,i,Z) = (diff (f,Z)) . i; theorem Th13: :: NDIFF_6:13 for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S for i being Nat holds diff (f,(i + 1),Z) = (diff (f,i,Z)) | Z proof end; definition let S, T be RealNormSpace; let f be PartFunc of S,T; let Z be Subset of S; let n be Nat; pred f is_differentiable_on n,Z means :: NDIFF_6:def 7 ( Z c= dom f & ( for i being Nat st i <= n - 1 holds modetrans (((diff (f,Z)) . i),S,(diff_SP (i,S,T))) is_differentiable_on Z ) ); end; :: deftheorem defines is_differentiable_on NDIFF_6:def 7 : for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S for n being Nat holds ( f is_differentiable_on n,Z iff ( Z c= dom f & ( for i being Nat st i <= n - 1 holds modetrans (((diff (f,Z)) . 
i),S,(diff_SP (i,S,T))) is_differentiable_on Z ) ) ); theorem Th14: :: NDIFF_6:14 for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S for n being Nat holds ( f is_differentiable_on n,Z iff ( Z c= dom f & ( for i being Nat st i <= n - 1 holds diff (f,i,Z) is_differentiable_on Z ) ) ) proof end; theorem Th15: :: NDIFF_6:15 for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S holds ( f is_differentiable_on 1,Z iff ( Z c= dom f & f | Z is_differentiable_on Z ) ) proof end; theorem :: NDIFF_6:16 for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S holds ( f is_differentiable_on 2,Z iff ( Z c= dom f & f | Z is_differentiable_on Z & (f | Z) | Z is_differentiable_on Z ) ) proof end; theorem Th17: :: NDIFF_6:17 for S, T being RealNormSpace for f being PartFunc of S,T for Z being Subset of S for n being Nat st f is_differentiable_on n,Z holds for m being Nat st m <= n holds f is_differentiable_on m,Z proof end; theorem Th18: :: NDIFF_6:18 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z holds Z is open proof end; theorem Th19: :: NDIFF_6:19 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z holds for i being Nat st i <= n holds ( (diff_SP (S,T)) . i is RealNormSpace & (diff (f,Z)) . 
i is PartFunc of S,(diff_SP (i,S,T)) & dom (diff (f,i,Z)) = Z ) proof end; theorem Th20: :: NDIFF_6:20 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f, g being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z & g is_differentiable_on n,Z holds for i being Nat st i <= n holds diff ((f + g),i,Z) = (diff (f,i,Z)) + (diff (g,i,Z)) proof end; theorem :: NDIFF_6:21 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f, g being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z & g is_differentiable_on n,Z holds f + g is_differentiable_on n,Z proof end; theorem Th22: :: NDIFF_6:22 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f, g being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z & g is_differentiable_on n,Z holds for i being Nat st i <= n holds diff ((f - g),i,Z) = (diff (f,i,Z)) - (diff (g,i,Z)) proof end; theorem :: NDIFF_6:23 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f, g being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z & g is_differentiable_on n,Z holds f - g is_differentiable_on n,Z proof end; theorem Th24: :: NDIFF_6:24 for S, T being RealNormSpace for Z being Subset of S for n being Nat for r being Real for f being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z holds for i being Nat st i <= n holds diff ((r (#) f),i,Z) = r (#) (diff (f,i,Z)) proof end; theorem Th25: :: NDIFF_6:25 for S, T being RealNormSpace for Z being Subset of S for n being Nat for r being Real for f being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z holds r (#) f is_differentiable_on n,Z proof end; theorem :: NDIFF_6:26 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f being PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z holds for i being Nat st i <= n holds diff ((- f),i,Z) = - (diff (f,i,Z)) proof end; theorem :: NDIFF_6:27 for S, T being RealNormSpace for Z being Subset of S for n being Nat for f being 
PartFunc of S,T st 1 <= n & f is_differentiable_on n,Z holds - f is_differentiable_on n,Z proof end;
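The core construction above is a recursion: diff (f,0,Z) starts from f restricted to Z, and each step diff (f,(i+1),Z) lands in the next operator space diff_SP ((i+1),S,T), which is how the article encodes higher-order derivatives. For the special case S = T = R, every diff_SP (i,S,T) identifies with R, and the scheme can be imitated numerically. The following Python sketch (central differences; the function name is ours, not Mizar's) only illustrates the shape of the recursion, not the formal content:

```python
# Illustrative sketch: imitate diff(f, 0) = f|Z, diff(f, i+1) = derivative of
# diff(f, i) on Z, for f : R -> R, using repeated central differences.
# For S = T = R every operator space diff_SP(i, S, T) collapses to R.

def nth_derivative_on(f, n, h=1e-5):
    """Approximate the n-th derivative of f : R -> R by applying a
    central-difference operator n times (the 'i + 1' step of the recursion)."""
    g = f
    for _ in range(n):
        # Bind the current g in the closure before wrapping it again.
        g = (lambda g: lambda x: (g(x + h) - g(x - h)) / (2 * h))(g)
    return g

f = lambda x: x ** 3            # f''(x) = 6x
d2f = nth_derivative_on(f, 2)
print(abs(d2f(2.0) - 12.0) < 1e-3)   # close to the exact value 6 * 2 = 12
```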
http://msp.org/pjm/2017/286-2/p02.xhtml
#### Vol. 286, No. 2, 2017

ISSN: 0030-8730

Uniqueness of conformal Ricci flow using energy methods

### Thomas Bell

Vol. 286 (2017), No. 2, 277–290

##### Abstract
We analyze an energy functional associated to conformal Ricci flow along closed manifolds with constant negative scalar curvature. Given initial conditions we use this functional to demonstrate the uniqueness of both the metric and the pressure function along conformal Ricci flow.

##### Keywords
conformal Ricci flow, Ricci flow

##### Mathematical Subject Classification 2010
Primary: 53C25, 53C44 Secondary: 35K65
https://labs.tib.eu/arxiv/?author=Rebecca%20Smethurst
• The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since July 2014. This paper describes the second data release from this phase, and the fourteenth from SDSS overall (making this, Data Release Fourteen or DR14). This release makes public data taken by SDSS-IV in its first two years of operation (July 2014-2016). Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey (eBOSS); the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data driven machine learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS website (www.sdss.org) has been updated for this release, and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020, and will be followed by SDSS-V. • ### SDSS-IV MaNGA: Evidence of the importance of AGN feedback in low-mass galaxies(1710.07568) Feb. 12, 2018 astro-ph.GA We present new evidence for AGN feedback in a subset of 69 quenched low-mass galaxies ($M_{\star} \lesssim 5\times10^{9}$ M$_{\odot}$, $M_{\rm{r}} > -19$) selected from the first two years of the SDSS-IV MaNGA survey. The majority (85 per cent) of these quenched galaxies appear to reside in a group environment. 
We find 6 galaxies in our sample that appear to have an active AGN that is preventing on-going star-formation; this is the first time such a feedback mechanism has been observed in this mass range. Interestingly, five of these six galaxies have an ionised gas component that is kinematically offset from their stellar component, suggesting the gas is either recently accreted or outflowing. We hypothesise these six galaxies are low-mass equivalents to the "red geysers" observed in more massive galaxies. Of the other 63 galaxies in the sample, we find 8 do appear to have some low-level, residual star formation, or emission from hot, evolved stars. The remaining galaxies in our sample have no detectable ionised gas emission throughout their structures, consistent with them being quenched. This work shows the potential for understanding the detailed physical properties of dwarf galaxies through spatially resolved spectroscopy. • ### SDSS-IV MaNGA: The Different Quenching Histories of Fast and Slow Rotators(1709.09175) Sept. 26, 2017 astro-ph.GA Do the theorised different formation mechanisms of fast and slow rotators produce an observable difference in their star formation histories? To study this we identify quenching slow rotators in the MaNGA sample by selecting those which lie below the star forming sequence and identify a sample of quenching fast rotators which were matched in stellar mass. This results in a total sample of 194 kinematically classified galaxies, which is agnostic to visual morphology. We use u-r and NUV-u colours from SDSS and GALEX and an existing inference package, STARPY, to conduct a first look at the onset time and exponentially declining rate of quenching of these galaxies. An Anderson-Darling test on the distribution of the inferred quenching rates across the two kinematic populations reveals they are statistically distinguishable ($3.2\sigma$).
We find that fast rotators quench at a much wider range of rates than slow rotators, consistent with a wide variety of physical processes such as secular evolution, minor mergers, gas accretion and environmentally driven mechanisms. Quenching is more likely to occur at rapid rates ($\tau \lesssim 1~\rm{Gyr}$) for slow rotators, in agreement with theories suggesting slow rotators are formed in dynamically fast processes, such as major mergers. Interestingly, we also find that a subset of the fast rotators quench at these same rapid rates as the bulk of the slow rotator sample. We therefore discuss how the total gas mass of a merger, rather than the merger mass ratio, may decide a galaxy's ultimate kinematic fate. • ### Galaxy Zoo: Are Bars Responsible for the Feeding of Active Galactic Nuclei at 0.2 < z < 1.0?(1409.5434) Dec. 19, 2014 astro-ph.GA We present a new study investigating whether active galactic nuclei (AGN) beyond the local universe are preferentially fed via large-scale bars. Our investigation combines data from Chandra and Galaxy Zoo: Hubble (GZH) in the AEGIS, COSMOS, and GOODS-S surveys to create samples of face-on, disc galaxies at 0.2 < z < 1.0. We use a novel method to robustly compare a sample of 120 AGN host galaxies, defined to have 10^42 erg/s < L_X < 10^44 erg/s, with inactive control galaxies matched in stellar mass, rest-frame colour, size, Sersic index, and redshift. Using the GZH bar classifications of each sample, we demonstrate that AGN hosts show no statistically significant enhancement in bar fraction or average bar likelihood compared to closely-matched inactive galaxies. In detail, we find that the AGN bar fraction cannot be enhanced above the control bar fraction by more than a factor of two, at 99.7% confidence. We similarly find no significant difference in the AGN fraction among barred and non-barred galaxies. Thus we find no compelling evidence that large-scale bars directly fuel AGN at 0.2<z<1.0. 
This result, coupled with previous results at z=0, implies that moderate-luminosity AGN have not been preferentially fed by large-scale bars since z=1. Furthermore, given the low bar fractions at z>1, our findings suggest that large-scale bars have likely never directly been a dominant fueling mechanism for supermassive black hole growth.
https://maker.pro/forums/threads/flyback-diode.73952/
# Flyback Diode

#### Michael
Hi, how do you go about choosing a suitable flyback diode for a 1.8kW 36V motor? Does it have to be able to handle above the motor's stall current of 50A? Cheers, Michael

#### John Popelish
Michael said: How do you go about choosing a suitable flyback diode for a 1.8kW 36V motor? Does it have to be able to handle above the motor's stall current of 50A?
You might get by with a smaller diode, but you would have to heat sink it very well to cover the worst case situation. At stall, something approaching 90% of motor current will pass through the diode. You could parallel the two halves of this one for less than $6 from Digikey: http://www.irf.com/product-info/datasheets/data/60ctq045pbf.pdf This one costs twice as much but has more margin (both voltage and current): http://www.st.com/stonline/products/literature/ds/6728/stps80l60c.pdf

#### Michael
John Popelish said: (snip)
Thanks John, I'm actually in the UK and it seems Farnell don't sell that one, however I used those specs as a guide and would I be ok using this one: http://uk.farnell.com/jsp/endecaSearch/partDetail.jsp?SKU=1080069 I realise the lack of datasheet isn't helpful, but it seems to 'fit the bill'....?
Michael

#### John Popelish
Michael said: (snip) I used those specs as a guide and would I be ok using this one: http://uk.farnell.com/jsp/endecaSearch/partDetail.jsp?SKU=1080069
It looks very similar, except that it is packaged in a larger, fully insulated case that is convenient to heat sink. This data sheet may be pretty close: http://ixdev.ixys.com/DataSheet/24b50478-97e5-46a7-95a3-2d9967eed46c.pdf My only concern would be that the diode would have to handle the voltage applied to the batteries at full charge. But if you go with a higher-voltage diode for a larger safety factor there, you will probably have to put up with a higher forward voltage drop, also.

#### Phil Allison
"John Popelish" wrote: At stall, something approaching 90% of motor current will pass through the diode.
** Define the term "at stall" - John !!!!!! Remember - Ambiguity is ERROR !!! ......... Phil

#### Phil Allison
"John Popelish" wrote: Stall (as I am using it here) is motor standing still while being driven at full rated current.
** That is an ambiguous definition. The "stall current" of a motor is the current drawn at the voltage supplied with the rotor held - it is typically a very large number and the motor will not sustain it for more than a few seconds. The OP has falsely equated "motor's stall current" with rated, full load current (ie 1800/36 = 50). Maybe his PWM controller has a built-in current limit of 50 amps average - if so he needs to say that. Otherwise confusion reigns. ........ Phil

#### John Popelish
Phil said: ** Define the term "at stall" - John !!!!!!
Stall (as I am using it here) is motor standing still while being driven at full rated current.

#### Michael
Phil Allison said: (snip)
I was told by an electrical engineer that when the nameplate on the motor states "Voltage 36V Current 50A" this is the stall current. Michael

#### Michael
John Popelish said: (snip)
Cheers John, Michael

#### Phil Allison
"Michael" wrote: I was told by an electrical engineer that when the nameplate on the motor states "Voltage 36V Current 50A" this is the stall current.
** LOL !! NEVER believe anything some dumb as dog shit sparky tells you !! That 50 amp figure is the max running current the motor can sustain - needs to be spinning fast to cool itself too. ........ Phil

#### John Popelish
Michael said: I was told by an electrical engineer that when the nameplate on the motor states "Voltage 36V Current 50A" this is the stall current.
I think those ratings would be the normal load, full-speed voltage and current ratings. Motors driven by constant-voltage (at nameplate voltage) sources have stall currents that may be 8 to 10 times rated current, which would not be a safe stall current for very long, and may even degauss a permanent magnet motor's magnets almost instantaneously. But if you are driving the motor with a PWM drive, it may have a built-in current limit function that drops the average voltage to the motor to limit the current if it exceeds some settable value, and a good current to set that limit might be the full rated current. Do you know if your PWM drive includes a current limit?

#### Michael
John Popelish said: (snip)
Could you please explain what you mean by 'normal load'? I would have thought that could be just about anything...... No it doesn't - I'm building the drive. Michael

#### John Popelish
Michael said: Could you please explain what you mean by 'normal load'?
I am referring to the full rated load (the torque the motor can produce at nameplate current), assuming that it is getting enough cooling to keep it at a safe temperature. The torque a permanent magnet or shunt wound motor produces is proportional (roughly) to the armature current.
Michael said: I would have thought that could be just about anything......
The normal range is between no-load current (the current it takes to spin the motor with no external load) and full rated current, which drives full rated torque.
Michael said: No it doesn't - I'm building the drive
Let's hope you decide to add this feature. It protects not only the motor, but the PWM components, battery and wiring, as well. The current sense might be based on the voltage drop across a current shunt (a low-value series resistor) in series with the motor, or some magnetic field sensing mechanism (like a Hall effect sensor) that reacts to the magnetic field around a motor lead, produced by the motor current.

#### Michael
John Popelish said: (snip)
Thanks John, I'm thinking about using a shunt resistor, with a voltage divider either side fed into a 741 set up as a diff amp. Would that cause any problems? Cheers, Michael

#### John Popelish
Michael said: I'm thinking about using a shunt resistor, with a voltage divider either side fed into a 741 set up as a diff amp. Would that cause any problems?
If you use a shunt resistor, the full scale signal voltage is small, to begin with, just to keep the resistor power waste down, so adding a pair of dividers to that, to get the signal within the common mode voltage range of the 741 is a step backwards. I'm not saying it can't work, but it is a hard climb. An integrated high side sensor chip is handy for when the shunt is in the positive supply side of the circuit. They convert the small differential voltage to a current to ground, that will produce a much larger, but proportional voltage across a grounded resistor, for the current controller to use as a process measurement. Here is one example at random, that you can use to pull key words from, to look for more: http://www.maxim-ic.com/appnotes.cfm/appnote_number/746/ Isolated Hall effect current sensors are even more convenient, since that don't involve the voltage of the current carrying conductor, at all. http://www.allegromicro.com/techpub2/current_sensing/bsp_v1_52.pdf Replies 7 Views 1K Replies 31 Views 4K Replies 9 Views 4K M Replies 2 Views 7K L A Replies 0 Views 2K aravind A
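Sizing the shunt John describes comes down to a couple of lines of arithmetic. The sketch below is illustrative only: the function name and the 20 A / 50 mV / 5 V figures are my own assumptions, not values from this thread.

```python
def shunt_design(i_full_scale_a: float, v_shunt_fs_v: float, v_adc_fs_v: float):
    """Return (R_shunt in ohms, power dissipated at full current in W, diff-amp gain)."""
    r_shunt = v_shunt_fs_v / i_full_scale_a     # Ohm's law: R = V / I
    p_shunt = i_full_scale_a ** 2 * r_shunt     # worst-case dissipation: P = I^2 * R
    gain = v_adc_fs_v / v_shunt_fs_v            # amplification needed to reach full scale
    return r_shunt, p_shunt, gain

# 20 A current limit, 50 mV across the shunt at full current,
# 5 V full-scale at the controller input (all hypothetical numbers):
r, p, g = shunt_design(20.0, 0.050, 5.0)
print(f"R_shunt = {r * 1e3:.2f} mOhm, P_shunt = {p:.2f} W, gain = {g:.0f}x")
```

The watt-level dissipation at full current is why shunts are kept in the low-milliohm range, and why the resulting small signal then needs a high-gain amplifier, as discussed above.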
https://bookstore.ams.org/view?ProductCode=CHEL/367.H
# Matching Theory

László Lovász, Eötvös Loránd University, Budapest, Hungary
Michael D. Plummer, Vanderbilt University, Nashville, TN

AMS Chelsea Publishing: An Imprint of the American Mathematical Society

Available formats:

- Hardcover: ISBN 978-0-8218-4759-6, Product Code CHEL/367.H. List Price: $79.00; MAA Member Price: $71.10; AMS Member Price: $71.10
- Electronic: ISBN 978-1-4704-1575-4, Product Code CHEL/367.H.E. List Price: $79.00; MAA Member Price: $71.10; AMS Member Price: $63.20
- Bundle (print and electronic): List Price: $118.50; MAA Member Price: $106.65; AMS Member Price: $106.65. Purchasing as a bundle enables you to save on the electronic version.

## Book Details

AMS Chelsea Publishing, Volume 367; 2009; 547 pp; MSC: Primary 05; 90

This book surveys matching theory, with an emphasis on connections with other areas of mathematics and on the role matching theory has played, and continues to play, in the development of some of these areas. Besides basic results on the existence of matchings and on the matching structure of graphs, the impact of matching theory is discussed by providing crucial special cases and nontrivial examples on matroid theory, algorithms, and polyhedral combinatorics. The new Appendix outlines how the theory and applications of matching theory have continued to develop since the book was first published in 1986, by launching (among other things) the Markov Chain Monte Carlo method.

Readership: Graduate students and research mathematicians interested in graph theory, combinatorics, combinatorial optimization, or graph algorithms.

## Chapters

- Chapter 1. Matchings in bipartite graphs
- Chapter 2. Flow theory
- Chapter 3. Size and structure of maximum matchings
- Chapter 4. Bipartite graphs with perfect matchings
- Chapter 5. General graphs with perfect matchings
- Chapter 6. Some graph-theoretical problems related to matchings
- Chapter 7. Matching and linear programming
- Chapter 8. Determinants and matchings
- Chapter 9. Matching algorithms
- Chapter 10. The $f$-factor problem
- Chapter 11. Matroid matching
- Chapter 12. Vertex packing and covering
- Appendix: Developments in matching theory since this book was first published

## Requests

- Review Copy – for reviewers who would like to review an AMS book
- Permission – for use of book, eBook, or Journal content
- Accessibility – to request an alternate format of an AMS title
https://electronics.stackexchange.com/questions/152345/what-precisely-is-r-dson
# What precisely is $R_{ds(on)}$?

I keep getting conflicting answers on this and it's absolutely killing me. Some say $R_{ds(on)}$ is the contact resistance in the FET (for instance, make many FETs of varying channel lengths, turn them fully on, measure the resistance, and the residual resistance at zero channel length is $R_{ds(on)}$). Others say it's the slope of the $I_{ds}$–$V_{ds}$ curve in the linear regime when the FET has a typical gate voltage applied and is at a sensible temperature. I also honestly can't find an answer on whether you use a FET as a switch in the linear or in the saturation regime!

• R_ds is the linear relationship between V and I. R_ds(on) is a specific value of R_ds when the FET is in saturation. You're just getting confused because you think R_ds = R_ds(on) – I. Wolfe Feb 3 '15 at 20:02

To answer your second (sort-of) question first: you would normally operate a MOSFET in the ohmic (aka linear or triode) mode when it is used as a switch. The voltage drop across the MOSFET is more-or-less proportional to the drain-source current. In the saturation region, the current through the MOSFET is more-or-less independent of the drain-to-source voltage - it 'looks' like a constant current source or sink once the voltage across it is great enough.

Image from Wikipedia.

Rds(on) is the inverse of the slope of the above curve (well to the left of the red curve). As you can see, the slope depends on Vgs. It also depends on temperature, which is typically stated to be Tj = 25°C. Here is how a typical small high-performance MOSFET behaves:

The Rds(on) resistance is fairly constant up to several amperes. It also has a strong temperature dependency, so at 150°C it may be 70% higher than at room temperature. And, like most specs, the typical curves don't represent the guaranteed limits. The difference between typical and worst-case (hot) Rds(on) may be as much as 2.5:1, so it is better to be conservative when specifying parts.
It's the resistance of the channel from drain to source when the FET is switched on. There may be several different values for R_ds mentioned in the datasheet for different operating regions of the FET (cutoff, ohmic, saturation, ...). And as helloworld922 commented, resistance is the relation of voltage to current.

Datasheets always use a lot of abbreviations but then don't describe them. This is because there is a standard associated with most of the measurements. Each has a description, and either you're expected to know it, or the engineers building the datasheet are so marinated in the concepts that they don't realize they're not speaking English. Texas Instruments, and I'm sure others, have published a translation guide. It's more like a tourist travel book: it gives most of the basic translations, enough for the more complex ones to at least be guessed at with some confidence. For instance, the closest matching translation for your question:

$r_{on}$ - On-State Resistance

JEDEC – The resistance between specified terminals with input conditions applied that, according to the product specification, will establish minimum resistance (the on-state) between those terminals.

TI – The resistance measured across the channel drain and source (or input and output) of a bus-switch device.

Both the JEDEC standard definition and TI's interpretation of that definition are given.
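As a rough worked example of why the hot Rds(on) figure matters: conduction loss in the ohmic region is $I^2 \cdot R_{ds(on)}$. The MOSFET values below are made up, and the 1.7x factor simply mirrors the "70% higher at 150°C" observation in the answer above.

```python
def conduction_loss_w(i_rms_a: float, rds_on_25c_ohm: float, hot_factor: float = 1.7) -> float:
    """Worst-case conduction loss: P = I_rms^2 * Rds(on) at temperature."""
    return i_rms_a ** 2 * rds_on_25c_ohm * hot_factor

# A hypothetical 10 mOhm (at 25 degC) MOSFET carrying 15 A RMS:
print(f"{conduction_loss_w(15.0, 0.010):.3f} W hot")   # 15^2 * 0.010 * 1.7
print(f"{conduction_loss_w(15.0, 0.010, 1.0):.3f} W at 25 degC")
```

Budgeting against the hot, worst-case value rather than the typical 25°C number is the conservative choice the answer recommends.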
https://learn.microsoft.com/en-us/power-bi/developer/visuals/capabilities
# Capabilities and properties of Power BI visuals

Every visual has a capabilities.json file that is created automatically when you run the `pbiviz new <visual project name>` command to create a new visual. The capabilities.json file describes the visual to the host: it tells the host what kind of data the visual accepts, what customizable attributes to put on the properties pane, and other information needed to create the visual. Starting from API v4.6.0, all properties on the capabilities model are optional except privileges, which is mandatory.

The capabilities.json file lists the root objects in the following format:

```json
{
  "privileges": [ ... ],
  "dataRoles": [ ... ],
  "dataViewMappings": [ ... ],
  "objects": { ... },
  "supportsHighlight": true|false,
  "sorting": { ... }
  ...
}
```

When you create a new visual, the default capabilities.json file includes the root objects needed for data-binding. They can be edited as necessary for your visual. Additional root objects are optional and can be added as needed. You can find all these objects and their parameters in the capabilities.json schema.

## privileges: define the special permissions that your visual requires

Privileges are special operations your visual requires access to in order to operate. privileges takes an array of privilege objects, which defines all privilege properties. The following sections describe the privileges that are available in Power BI.

Note: From API v4.6.0, privileges must be specified in the capabilities.json file. In earlier versions, remote access is automatically granted and downloading to files isn't possible. To find out which version you're using, check the apiVersion in the pbiviz.json file.

### Define privileges

A JSON privilege definition contains these components:

- name - (string) The name of the privilege.
- essential - (Boolean) Indicates whether the visual functionality requires this privilege. A value of true means the privilege is required; false means the privilege isn't mandatory.
- parameters - (string array) (optional) Arguments. If parameters is missing, it's considered an empty array.

There are two types of privileges that must be defined:

- Access to external resources (WebAccess)
- Export of content to a file (ExportContent)

Note: Even with these privileges granted in the visual, the admin has to enable the switch in the admin settings to allow people in their organization to benefit from these settings.

### Allow web access

To allow a visual to access an external resource or web site, add that information as a privilege in the capabilities section. The privilege definition includes an optional list of URLs the visual is allowed to access, in the format http://xyz.com or https://xyz.com. Each URL can also include a wildcard to specify subdomains.

```json
{
  "name": "WebAccess",
  "essential": true,
  "parameters": [ "https://*.microsoft.com", "http://example.com" ]
}
```

The preceding WebAccess privilege means that the visual needs to access any subdomain of the microsoft.com domain via the HTTPS protocol only, and example.com without subdomains via HTTP, and that this access privilege is essential for the visual to work.

### Allow exporting content to a file

To allow the user to export data from a visual into a file, set ExportContent to true. This setting enables the visual to export data to files in the following formats: .txt, .csv, .json, .tmplt, .xml, .pdf, .xlsx. It is separate from and not affected by download restrictions applied in the organization's export and sharing tenant settings.
```json
"privileges": [
  {
    "name": "ExportContent",
    "essential": true
  }
]
```

### Example of a privileges definition

```json
"privileges": [
  {
    "name": "WebAccess",
    "essential": true,
    "parameters": [ "https://*.virtualearth.net" ]
  },
  {
    "name": "ExportContent",
    "essential": false
  }
]
```

### No privileges needed

If the visual doesn't require any special permissions, the privileges array should be empty:

```json
"privileges": []
```

## dataRoles: define the data fields that your visual expects

To define fields that can be bound to data, you use dataRoles. dataRoles is an array of DataViewRole objects, which defines all the required properties. The dataRoles objects are the fields that appear on the Properties pane; the user drags data fields into them to bind the data fields to the objects.

### DataRole properties

DataRoles are defined by the following properties:

- name: The internal name of this data field (must be unique).
- displayName: The name displayed to the user in the Properties pane.
- kind: The kind of field:
  - Grouping: Set of discrete values that are used to group measure fields.
  - Measure: Single numeric values.
  - GroupingOrMeasure: Values that can be used as either a grouping or a measure.
- description: A short text description of the field (optional).
- requiredTypes: The required type of data for this data role. Values that don't match are set to null (optional).
- preferredTypes: The preferred type of data for this data role (optional).
#### Valid data types for requiredTypes and preferredTypes

- bool: A Boolean value
- integer: An integer value
- numeric: A numeric value
- text: A text value
- geography: Geographic data

### dataRoles example

```json
"dataRoles": [
  {
    "displayName": "My Category Data",
    "name": "myCategory",
    "kind": "Grouping",
    "requiredTypes": [ { "text": true }, { "numeric": true }, { "integer": true } ],
    "preferredTypes": [ { "text": true } ]
  },
  {
    "displayName": "My Measure Data",
    "name": "myMeasure",
    "kind": "Measure",
    "requiredTypes": [ { "integer": true }, { "numeric": true } ],
    "preferredTypes": [ { "integer": true } ]
  }
]
```

The preceding data roles would create the fields that are displayed in the following image:

## dataViewMappings: how you want the data mapped

The dataViewMappings objects describe how the data roles relate to each other and allow you to specify conditional requirements for displaying data views. Most visuals provide a single mapping, but you can provide multiple dataViewMappings. Each valid mapping produces a data view.

```json
"dataViewMappings": [
  {
    "conditions": [ ... ],
    "categorical": { ... },
    "table": { ... },
    "single": { ... },
    "matrix": { ... }
  }
]
```

For more information, see Understand data view mapping in Power BI visuals.

## objects: define property pane options

Objects describe customizable properties that are associated with the visual. The objects defined in this section are the objects that appear in the Format pane. Each object can have multiple properties, and each property has a type that's associated with it.

```json
"objects": {
  "myCustomObject": {
    "properties": { ... }
  }
}
```
http://theory.cs.uchicago.edu/abstract.php?id=guruswami&year=2009
# Seminar: May 28, 2:30pm

## List Decodability of Random Linear Codes

Note non-standard day and time!

For every fixed finite field F_q, every p between 0 and 1-1/q, and every positive epsilon, we prove that with high probability a random subspace C of F_q^n of dimension (1-h_q(p)-epsilon)n has the property that every Hamming ball of radius pn contains at most O(1/epsilon) elements of C. (Here h_q(x) is the q-ary entropy function.) This answers a basic open question concerning the list-decodability of linear codes, showing that a list size of O(1/epsilon) suffices to have rate within epsilon of the "list-decoding capacity" 1-h_q(p). This matches, up to constant factors, the list size achieved by general (non-linear) random codes, and gives an exponential improvement over the best previously known list-size bound of q^{O(1/epsilon)}.

The main technical ingredient in our proof is a strong upper bound on the probability that m random vectors chosen from a Hamming ball centered at the origin have more than O(m) vectors from their linear span also belonging to the ball.

The talk will be self-contained and will not assume any coding theory background.

Joint work with Johan Hastad and Swastik Kopparty.
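For readers who want to plug in numbers, the capacity rate 1-h_q(p) from the abstract is easy to evaluate. The sketch below (function names are my own) uses the standard q-ary entropy formula h_q(x) = x log_q(q-1) - x log_q(x) - (1-x) log_q(1-x):

```python
import math

def q_ary_entropy(x: float, q: int = 2) -> float:
    """h_q(x) = x*log_q(q-1) - x*log_q(x) - (1-x)*log_q(1-x)."""
    if x == 0.0:
        return 0.0            # limit as x -> 0
    if x == 1.0:
        return math.log(q - 1, q)  # limit as x -> 1 (equals 0 for q = 2)
    lg = lambda v: math.log(v, q)
    return x * lg(q - 1) - x * lg(x) - (1 - x) * lg(1 - x)

def list_decoding_capacity_rate(p: float, q: int = 2) -> float:
    """Rate at list-decoding capacity for decoding radius p: 1 - h_q(p)."""
    return 1.0 - q_ary_entropy(p, q)

print(q_ary_entropy(0.5))                     # binary entropy peaks at 1.0
print(list_decoding_capacity_rate(0.25, 2))   # rate for radius n/4, q = 2
```

Note that for q = 2 the capacity rate vanishes as p approaches 1/2, matching the p < 1 - 1/q restriction in the abstract.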
https://mycareerwise.com/programming/category/star-patterns/hollow-star-pyramid
## Hollow Star Pyramid

Back to Programming

### Description

To print this pattern, the number of lines n is taken as input from the user. Three for loops (one outer for loop and two inner for loops) are used to print the pattern. The outer for loop counts the line number. For the i-th line, the number of spaces before the first star is n-i, so the first inner for loop, which prints the spaces, runs from 1 to n-i for each line i. Stars are printed only on the boundary of the pyramid, so a separate variable st counts the number of character positions in a line, and the second inner for loop prints the stars (and interior spaces) of each line.

### Algorithm

``````INPUT: number of lines
OUTPUT: the aforesaid pattern
PROCESS:
Step 1: [taking the input]
Step 2: [printing the pattern]
        Set st<-1
        For i=1 to n repeat
            For j=1 to n-i repeat
                Print " "
            [End of 'for' loop]
            For k=1 to st repeat
                If k=1 or k=st or i=1 or i=n
                    Print "*"
                Else
                    Print " "
            [End of 'for' loop]
            Move to the next line
            Set st=st+2
        [End of 'for' loop]
Step 3: Stop
``````

## Time Complexity:

```c
for (i = 1; i <= n; i++)                /* n iterations */
{
    /* printing the spaces */
    for (j = 1; j <= n - i; j++)        /* n-i iterations */
        printf(" ");
    /* printing the stars */
    for (k = 1; k <= st; k++)           /* st iterations, st <= 2n-1 */
    {
        if (i == 1 || i == n || k == 1 || k == st)
            printf("*");
        else
            printf(" ");
    }
    printf("\n");
    st += 2;
}
```

Each of the n lines does (n-i) + st < 3n units of work, so the complexity is O(n*(n-i+st)) = O($n^2$).
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/50/6/b/d/
# Properties

- Label: 50.6.b.d
- Level: 50
- Weight: 6
- Character orbit: 50.b
- Analytic conductor: 8.019
- Analytic rank: 0
- Dimension: 2
- CM: no
- Inner twists: 2

## Newspace parameters

- Level: $$N = 50 = 2 \cdot 5^{2}$$
- Weight: $$k = 6$$
- Character orbit: $$[\chi] =$$ 50.b (of order $$2$$, degree $$1$$, not minimal)

## Newform invariants

- Self dual: no
- Analytic conductor: $$8.01919099065$$
- Analytic rank: $$0$$
- Dimension: $$2$$
- Coefficient field: $$\Q(\sqrt{-1})$$
- Coefficient ring: $$\Z[a_1, a_2, a_3]$$
- Coefficient ring index: $$2$$
- Twist minimal: no (minimal twist has level 10)
- Sato-Tate group: $\mathrm{SU}(2)[C_{2}]$

## $q$-expansion

Coefficients of the $$q$$-expansion are expressed in terms of $$i = \sqrt{-1}$$. We also show the integral $$q$$-expansion of the trace form.

$$f(q) = q + 4 i q^{2} -26 i q^{3} -16 q^{4} + 104 q^{6} + 22 i q^{7} -64 i q^{8} -433 q^{9} + O(q^{10})$$

$$f(q) = q + 4 i q^{2} -26 i q^{3} -16 q^{4} + 104 q^{6} + 22 i q^{7} -64 i q^{8} -433 q^{9} -768 q^{11} + 416 i q^{12} -46 i q^{13} -88 q^{14} + 256 q^{16} -378 i q^{17} -1732 i q^{18} -1100 q^{19} + 572 q^{21} -3072 i q^{22} -1986 i q^{23} -1664 q^{24} + 184 q^{26} + 4940 i q^{27} -352 i q^{28} + 5610 q^{29} -3988 q^{31} + 1024 i q^{32} + 19968 i q^{33} + 1512 q^{34} + 6928 q^{36} + 142 i q^{37} -4400 i q^{38} -1196 q^{39} + 1542 q^{41} + 2288 i q^{42} -5026 i q^{43} + 12288 q^{44} + 7944 q^{46} -24738 i q^{47} -6656 i q^{48} + 16323 q^{49} -9828 q^{51} + 736 i q^{52} -14166 i q^{53} -19760 q^{54} + 1408 q^{56} + 28600 i q^{57} + 22440 i q^{58} -28380 q^{59} + 5522 q^{61} -15952 i q^{62} -9526 i q^{63} -4096 q^{64} -79872 q^{66} + 24742 i q^{67} + 6048 i q^{68} -51636 q^{69} + 42372 q^{71} + 27712 i q^{72} -52126 i q^{73} -568 q^{74} + 17600 q^{76} -16896 i q^{77} -4784 i q^{78} + 39640 q^{79} + 23221 q^{81} + 6168 i q^{82} -59826 i q^{83} -9152 q^{84} + 20104 q^{86} -145860 i q^{87} + 49152 i q^{88} -57690 q^{89} + 1012 q^{91} + 31776 i q^{92} + 103688 i q^{93} + 98952 q^{94} + 26624 q^{96} + 144382 i q^{97} + 65292 i q^{98} + 332544 q^{99} + O(q^{100})$$

$$\operatorname{Tr}(f)(q) = 2q - 32q^{4} + 208q^{6} - 866q^{9} + O(q^{10})$$

$$\operatorname{Tr}(f)(q) = 2q - 32q^{4} + 208q^{6} - 866q^{9} - 1536q^{11} - 176q^{14} + 512q^{16} - 2200q^{19} + 1144q^{21} - 3328q^{24} + 368q^{26} + 11220q^{29} - 7976q^{31} + 3024q^{34} + 13856q^{36} - 2392q^{39} + 3084q^{41} + 24576q^{44} + 15888q^{46} + 32646q^{49} - 19656q^{51} - 39520q^{54} + 2816q^{56} - 56760q^{59} + 11044q^{61} - 8192q^{64} - 159744q^{66} - 103272q^{69} + 84744q^{71} - 1136q^{74} + 35200q^{76} + 79280q^{79} + 46442q^{81} - 18304q^{84} + 40208q^{86} - 115380q^{89} + 2024q^{91} + 197904q^{94} + 53248q^{96} + 665088q^{99} + O(q^{100})$$

## Character values

We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/50\mathbb{Z}\right)^\times$$.

| $$n$$ | $$27$$ |
| --- | --- |
| $$\chi(n)$$ | $$-1$$ |

## Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label.

| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 49.1 | -1.00000i | -4.00000i | 26.0000i | -16.0000 | 0 | 104.000 | -22.0000i | 64.0000i | -433.000 | 0 |
| 49.2 | 1.00000i | 4.00000i | -26.0000i | -16.0000 | 0 | 104.000 | 22.0000i | -64.0000i | -433.000 | 0 |
## Inner twists

| Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial |
| 5.b | even | 2 | 1 | inner |

## Twists

By twisting character orbit:

| Char | Parity | Ord | Mult | Type | Twist | Dim |
| --- | --- | --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial | 50.6.b.d | 2 |
| 3.b | odd | 2 | 1 | | 450.6.c.o | 2 |
| 4.b | odd | 2 | 1 | | 400.6.c.a | 2 |
| 5.b | even | 2 | 1 | inner | 50.6.b.d | 2 |
| 5.c | odd | 4 | 1 | | 10.6.a.a | 1 |
| 5.c | odd | 4 | 1 | | 50.6.a.g | 1 |
| 15.d | odd | 2 | 1 | | 450.6.c.o | 2 |
| 15.e | even | 4 | 1 | | 90.6.a.f | 1 |
| 15.e | even | 4 | 1 | | 450.6.a.h | 1 |
| 20.d | odd | 2 | 1 | | 400.6.c.a | 2 |
| 20.e | even | 4 | 1 | | 80.6.a.h | 1 |
| 20.e | even | 4 | 1 | | 400.6.a.a | 1 |
| 35.f | even | 4 | 1 | | 490.6.a.j | 1 |
| 40.i | odd | 4 | 1 | | 320.6.a.p | 1 |
| 40.k | even | 4 | 1 | | 320.6.a.a | 1 |
| 60.l | odd | 4 | 1 | | 720.6.a.r | 1 |

By twisted newform orbit:

| Twist | Dim | Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- | --- | --- |
| 10.6.a.a | 1 | 5.c | odd | 4 | 1 | |
| 50.6.a.g | 1 | 5.c | odd | 4 | 1 | |
| 50.6.b.d | 2 | 1.a | even | 1 | 1 | trivial |
| 50.6.b.d | 2 | 5.b | even | 2 | 1 | inner |
| 80.6.a.h | 1 | 20.e | even | 4 | 1 | |
| 90.6.a.f | 1 | 15.e | even | 4 | 1 | |
| 320.6.a.a | 1 | 40.k | even | 4 | 1 | |
| 320.6.a.p | 1 | 40.i | odd | 4 | 1 | |
| 400.6.a.a | 1 | 20.e | even | 4 | 1 | |
| 400.6.c.a | 2 | 4.b | odd | 2 | 1 | |
| 400.6.c.a | 2 | 20.d | odd | 2 | 1 | |
| 450.6.a.h | 1 | 15.e | even | 4 | 1 | |
| 450.6.c.o | 2 | 3.b | odd | 2 | 1 | |
| 450.6.c.o | 2 | 15.d | odd | 2 | 1 | |
| 490.6.a.j | 1 | 35.f | even | 4 | 1 | |
| 720.6.a.r | 1 | 60.l | odd | 4 | 1 | |

## Hecke kernels

This newform subspace can be constructed as the kernel of the linear operator $$T_{3}^{2} + 676$$ acting on $$S_{6}^{\mathrm{new}}(50, [\chi])$$.
## Hecke Characteristic Polynomials

| $p$ | $F_p(T)$ |
| --- | --- |
| $2$ | $$1 + 16 T^{2}$$ |
| $3$ | $$1 + 190 T^{2} + 59049 T^{4}$$ |
| $5$ | $$1$$ |
| $7$ | $$1 - 33130 T^{2} + 282475249 T^{4}$$ |
| $11$ | $$( 1 + 768 T + 161051 T^{2} )^{2}$$ |
| $13$ | $$1 - 740470 T^{2} + 137858491849 T^{4}$$ |
| $17$ | $$1 - 2696830 T^{2} + 2015993900449 T^{4}$$ |
| $19$ | $$( 1 + 1100 T + 2476099 T^{2} )^{2}$$ |
| $23$ | $$1 - 8928490 T^{2} + 41426511213649 T^{4}$$ |
| $29$ | $$( 1 - 5610 T + 20511149 T^{2} )^{2}$$ |
| $31$ | $$( 1 + 3988 T + 28629151 T^{2} )^{2}$$ |
| $37$ | $$1 - 138667750 T^{2} + 4808584372417849 T^{4}$$ |
| $41$ | $$( 1 - 1542 T + 115856201 T^{2} )^{2}$$ |
| $43$ | $$1 - 268756210 T^{2} + 21611482313284249 T^{4}$$ |
| $47$ | $$1 + 153278630 T^{2} + 52599132235830049 T^{4}$$ |
| $53$ | $$1 - 635715430 T^{2} + 174887470365513049 T^{4}$$ |
| $59$ | $$( 1 + 28380 T + 714924299 T^{2} )^{2}$$ |
| $61$ | $$( 1 - 5522 T + 844596301 T^{2} )^{2}$$ |
| $67$ | $$1 - 2088083650 T^{2} + 1822837804551761449 T^{4}$$ |
| $71$ | $$( 1 - 42372 T + 1804229351 T^{2} )^{2}$$ |
| $73$ | $$1 - 1429023310 T^{2} + 4297625829703557649 T^{4}$$ |
| $79$ | $$( 1 - 39640 T + 3077056399 T^{2} )^{2}$$ |
| $83$ | $$1 - 4298931010 T^{2} + 15516041187205853449 T^{4}$$ |
| $89$ | $$( 1 + 57690 T + 5584059449 T^{2} )^{2}$$ |
| $97$ | $$1 + 3671481410 T^{2} + 73742412689492826049 T^{4}$$ |
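As a consistency check, the degree-4 polynomials at good primes p can be recovered from the q-expansion by multiplying the Euler factors 1 - a_p T + chi(p) p^5 T^2 over the two complex embeddings i -> +/- i. In the sketch below the helper names are mine, and the character values chi(3) = chi(7) = -1 are inferred from chi(27) = -1 in the character table above.

```python
def poly_mul(f, g):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0j] * (len(f) + len(g) - 1)
    for a, ca in enumerate(f):
        for b, cb in enumerate(g):
            out[a + b] += ca * cb
    return out

def orbit_factor(a_p, chi_p, p, weight=6):
    """Product of the Euler factors 1 - a_p T + chi(p) p^(k-1) T^2 over the
    embedding a_p and its complex conjugate; returns integer coefficients."""
    f = [1, -a_p, chi_p * p ** (weight - 1)]
    g = [1, -a_p.conjugate(), chi_p * p ** (weight - 1)]
    return [round(c.real) for c in poly_mul(f, g)]

# p = 3: a_3 = -26i, chi(3) = -1  ->  1 + 190 T^2 + 59049 T^4
print(orbit_factor(-26j, -1, 3))
# p = 7: a_7 = 22i, chi(7) = -1  ->  1 - 33130 T^2 + 282475249 T^4
print(orbit_factor(22j, -1, 7))
```

Note that p = 2 divides the level, so its Euler factor 1 + 16 T^2 is only quadratic per embedding pair and does not follow this good-prime formula.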
https://deepai.org/publication/relational-representation-learning-for-dynamic-knowledge-graphs-a-survey
# Relational Representation Learning for Dynamic (Knowledge) Graphs: A Survey

Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research.
## 1 Introduction

In the era of big data, a challenge is to leverage data as effectively as possible to extract patterns, make predictions, and more generally unlock value. In many situations, the data does not consist only of vectors of features, but also of relations that form graphs among entities. Graphs naturally arise in social networks (users with friendship relations, emails, text messages), recommender systems (users and products with transactions and rating relations), ontologies (concepts with relations), computational biology (protein-protein interactions), computational finance (web of companies with competitor, customer, and subsidiary relations, supply chain graphs, graphs of customer-merchant transactions), etc. While it is often possible to ignore relations and use traditional machine learning techniques based on vectors of features, relations provide additional valuable information that permits inference among nodes. Hence, graph-based techniques have emerged as leading approaches in the industry for application domains with relational information.

Traditionally, research has been done mostly on static graphs, where nodes and edges are fixed and do not change over time. Many applications, however, involve dynamic graphs. For instance, in social media, communication events such as emails and text messages are streaming while friendship relations evolve over time. In recommender systems, new products, new users, and new ratings appear every day. In computational finance, transactions are streaming and supply chain relations are continuously evolving. As a result, the last few years have seen a surge of works on dynamic graphs.
This survey focuses precisely on dynamic graphs. Note that there are already many good surveys on static graphs Hamilton et al. (2017b); Zhang et al. (2018a); Cai et al. (2018); Cui et al. (2018); Nickel et al. (2016a); Wang et al. (2017a). There are also several surveys on techniques for dynamic graphs Bilgin and Yener (2006); Zhang (2010); Spiliopoulou (2011); Aggarwal and Subbian (2014); Al Hasan and Zaki (2011), but they do not review recent advances in neural representation learning. We present a survey that focuses on recent representation learning techniques for dynamic graphs. More precisely, we focus on reviewing techniques that either produce time-dependent embeddings capturing the essence of the nodes and edges of evolving graphs, or use embeddings to answer various questions such as node classification, event prediction/interpolation, and link prediction. Accordingly, we use an encoder-decoder framework to categorize and analyze techniques that encode various aspects of graphs into embeddings and techniques that decode embeddings into predictions. We survey techniques that deal with discrete- and/or continuous-time events.

The survey is structured as follows. Section 2 introduces the notation and provides some background about static/dynamic graphs, inference tasks, and learning techniques. Section 3 provides an overview of representation learning techniques for static graphs. This section is not meant to be a survey, but rather to introduce important concepts that will be extended for dynamic graphs. Section 4 categorizes decoders for dynamic graphs into time-predicting and time-conditioned decoders, and surveys the decoders in each category. Section 5 describes encoding techniques that aggregate temporal observations and static features, use time as a regularizer, perform decompositions, traverse dynamic networks with random walks, and model observation sequences with various types of processes (e.g., recurrent neural networks).
Section 6 briefly describes other lines of work that do not conform to the encoder-decoder framework, such as statistical relational learning, as well as topics related to dynamic (knowledge) graphs such as spatiotemporal graphs and the construction of dynamic knowledge graphs from text. Section 7 reviews important applications of dynamic graphs with representative tasks. A list of static and temporal datasets is also provided with a brief summary of their properties. Section 8 concludes the survey with a discussion of several open problems and possible research directions.

## 2 Background and Notation

In this section, we define our notation and provide the necessary background for readers to follow the rest of the survey. A summary of the main notation and abbreviations can be found in Table 1.

We use lower-case letters to denote scalars, bold lower-case letters to denote vectors, and bold upper-case letters to denote matrices. For a vector $\mathbf{z}$, we represent the $i^{th}$ element of the vector as $\mathbf{z}[i]$. For a matrix $\mathbf{Z}$, we represent the $i^{th}$ row of $\mathbf{Z}$ as $\mathbf{Z}[i]$, and the element at the $i^{th}$ row and $j^{th}$ column as $\mathbf{Z}[i][j]$. $||\mathbf{z}||_i$ represents norm $i$ of a vector $\mathbf{z}$ and $||\mathbf{Z}||_F$ represents the Frobenius norm of a matrix $\mathbf{Z}$. For two vectors $\mathbf{z}_1 \in \mathbb{R}^{d_1}$ and $\mathbf{z}_2 \in \mathbb{R}^{d_2}$, we use $[\mathbf{z}_1;\mathbf{z}_2] \in \mathbb{R}^{d_1+d_2}$ to represent the concatenation of the two vectors. When $d_1 = d_2 = d$, we use $[\mathbf{z}_1\ \mathbf{z}_2] \in \mathbb{R}^{d \times 2}$ to represent a matrix whose two columns correspond to $\mathbf{z}_1$ and $\mathbf{z}_2$ respectively. We use $\odot$ to represent element-wise (Hadamard) multiplication. We represent by $\mathbf{I}_d$ the identity matrix of size $d \times d$. $vec(\mathbf{Z})$ vectorizes $\mathbf{Z}$ into a vector, and $diag(\mathbf{z})$ turns $\mathbf{z}$ into a diagonal matrix that has the values of $\mathbf{z}$ on its main diagonal. We denote the transpose of a matrix $\mathbf{Z}$ as $\mathbf{Z}'$.

### 2.1 Static Graphs

A (static) graph is represented as $G = (V, E)$ where $V$ is the set of vertices and $E$ is the set of edges. Vertices are also called nodes, and we use the two terms interchangeably. Edges are also called links, and we use the two terms interchangeably.

Several matrices can be associated with a graph. An adjacency matrix $\mathbf{A} \in \mathbb{R}^{|V| \times |V|}$ is a matrix where $\mathbf{A}[i][j] = 0$ if $(v_i, v_j) \notin E$; otherwise $\mathbf{A}[i][j]$ represents the weight of the edge.
For unweighted graphs, all non-zero $\mathbf{A}[i][j]$s are $1$. A degree matrix $\mathbf{D}$ is a diagonal matrix where $\mathbf{D}[i][i]$ represents the degree of $v_i$. A graph Laplacian is defined as $\mathbf{L} = \mathbf{D} - \mathbf{A}$.

A graph is undirected if the order of the nodes in the edges is not important. For an undirected graph, the adjacency matrix is symmetric, i.e. $\mathbf{A}[i][j] = \mathbf{A}[j][i]$ for all $i$ and $j$. A graph is directed if the order of the nodes in the edges is important. Directed graphs are also called digraphs. For an edge $(v_i, v_j)$ in a digraph, we call $v_i$ the source and $v_j$ the target of the edge. A graph is bipartite if the nodes can be split into two groups where there is no edge between any pair of nodes in the same group. A multigraph is a graph where multiple edges can exist between two nodes. A graph is attributed if each node is associated with a number of properties representing its characteristics. When all nodes have the same attributes, we represent all attribute values of the nodes by a matrix $\mathbf{X}$ whose $i^{th}$ row corresponds to the attribute values of $v_i$.

A knowledge graph (KG) is a multi-digraph with labeled edges Kazemi (2018), where the label represents the type of the relationship. Let $R$ be a set of relation types. Then $E \subseteq V \times R \times V$. A KG can be attributed, in which case each node is associated with a vector of attribute values. A digraph is a special case of a KG with only one relation. An undirected graph is a special case of a KG with only one symmetric relation.

###### Example 1.

Figure 1(a) represents an undirected graph with three nodes $v_1$, $v_2$, and $v_3$ and three edges $(v_1, v_2)$, $(v_1, v_3)$, and $(v_2, v_3)$. Figure 1(b) represents a graph with four nodes and four edges. The adjacency, degree, and Laplacian matrices for the graph in Figure 1(b) are as follows:

$$\mathbf{A}=\begin{bmatrix}0&1&1&0\\1&0&1&1\\1&1&0&0\\0&1&0&0\end{bmatrix}\qquad \mathbf{D}=\begin{bmatrix}2&0&0&0\\0&3&0&0\\0&0&2&0\\0&0&0&1\end{bmatrix}\qquad \mathbf{L}=\begin{bmatrix}2&-1&-1&0\\-1&3&-1&-1\\-1&-1&2&0\\0&-1&0&1\end{bmatrix}$$

where the $i^{th}$ row (and the $i^{th}$ column) corresponds to $v_i$. Since the graph is undirected, $\mathbf{A}$ is symmetric.
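The three matrices of Example 1 can be reproduced in a few lines (a minimal sketch using NumPy; node $v_i$ maps to row $i-1$):

```python
import numpy as np

# Adjacency matrix of the undirected graph in Figure 1(b);
# row/column i corresponds to node v_{i+1}.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
], dtype=float)

# Degree matrix: diagonal of the row sums of A.
D = np.diag(A.sum(axis=1))

# Graph Laplacian: L = D - A.
L = D - A
```

Note that every row of the Laplacian sums to zero, a property that holds for any undirected graph.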
Figure 1(c) represents a KG with four nodes $v_1$, $v_2$, $v_3$, and $v_4$, three relation types $r_1$, $r_2$, and $r_3$, and five labeled edges as follows:

$$(v_1, r_1, v_2)\quad (v_1, r_1, v_3)\quad (v_1, r_2, v_3)\quad (v_2, r_3, v_4)\quad (v_4, r_3, v_2)$$

The KG in Figure 1(c) is directed and is a multigraph as there are, e.g., two edges (with the same direction) between $v_1$ and $v_3$.

### 2.2 Dynamic Graphs

We represent a continuous-time dynamic graph (CTDG) as a pair $(G^{t_0}, O)$ where $G^{t_0}$ is a static graph representing an initial state of a dynamic graph at time $t_0$ and $O$ is a set of observations/events where each observation is a tuple of the form (event type, event, timestamp). An event type can be an edge addition, edge deletion, node addition, node deletion, node splitting, node merging, etc. At any point $t \geq t_0$ in time, a snapshot $G^t$ (corresponding to a static graph) can be obtained from a CTDG by updating $G^{t_0}$ sequentially according to the observations that occurred before (or at) time $t$ (sometimes, the update may require aggregation to handle multiple edges between two nodes).

A discrete-time dynamic graph (DTDG) is a sequence of snapshots from a dynamic graph sampled at regularly-spaced times; formally, a DTDG is a set $\{G^1, G^2, \dots, G^T\}$ where $G^t$ is the snapshot at time $t$. We use the term dynamic graph to refer to both DTDGs and CTDGs. Compared to a CTDG, a DTDG may lose information by looking only at some snapshots of the graph over time, but developing models for DTDGs may be generally easier. In particular, a model developed for CTDGs may be used for DTDGs, but the reverse is not necessarily true.

An undirected dynamic graph is a dynamic graph where at any time $t$, $G^t$ is an undirected graph. A directed dynamic graph is a dynamic graph where at any time $t$, $G^t$ is a digraph. A bipartite dynamic graph is a dynamic graph where at any time $t$, $G^t$ is a bipartite graph. A dynamic KG is a dynamic graph where at any time $t$, $G^t$ is a KG.

###### Example 2.

Let $(G^{t_0}, O)$ be a CTDG where $G^{t_0}$ is a graph with five nodes $v_1$, $v_2$, $v_3$, $v_4$, and $v_5$ and with no edges between any pairs of nodes, and $O$ is a set of timestamped edge-addition observations. This CTDG may be represented graphically as in Figure 1(d). The only type of observation in this dynamic graph is the addition of new edges.
The second element of each observation corresponding to an edge addition represents the source and the target nodes of the new edge. The third element of each observation represents the timestamp at which the observation was made.

###### Example 3.

Consider an undirected CTDG whose initial state $G^{t_0}$ is as in Figure 1(a). Suppose $O$ consists of the addition of a node $v_4$ followed by the addition of an edge $(v_2, v_4)$, both occurring after $t_0$. Now consider a DTDG that takes two snapshots from this CTDG, one snapshot at time $t_0$ and one snapshot after both observations. The two snapshots of this DTDG look like the graphs in Figure 1(a) and Figure 1(b) respectively.

### 2.3 Prediction Problems

In this survey, we mainly study three general problems for dynamic graphs: node classification, link prediction, and graph classification. Node classification is the problem of classifying each node into one class from a set of predefined classes. Link prediction is the problem of predicting new links between the nodes. Graph classification is the problem of classifying a whole graph into one class from a set of predefined classes. A high-level description of some other prediction problems can be found in Section 7.1.

Node classification and link prediction can be deployed under two settings: interpolation and extrapolation. Consider a dynamic graph that has incomplete information from a time interval. The interpolation problem is to make predictions at some time $t$ inside that interval. The interpolation problem is also known as the completion problem and is mainly used for completing (dynamic) KGs Jiang et al. (2016); Leblay and Chekol (2018); García-Durán et al. (2018); Dasgupta et al. (2018). The extrapolation problem is to make predictions at a time $t$ after the interval, i.e., predicting the future based on the past. Extrapolation is usually a more challenging problem than the interpolation problem.
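The relationship between a CTDG and its snapshots can be made concrete with a small sketch; the `("add-edge", (u, v), t)` tuple encoding below is our own illustrative choice, mirroring the (event type, event, timestamp) observations of Section 2.2:

```python
# A CTDG as an initial edge set plus a stream of timestamped observations.
# Only edge additions are modeled here, as in Example 2.
initial_edges = set()   # the initial state has no edges
events = [
    ("add-edge", ("v1", "v2"), 1.0),
    ("add-edge", ("v1", "v3"), 2.5),
    ("add-edge", ("v2", "v4"), 4.0),
]

def snapshot(initial_edges, events, t):
    """Return the static edge set obtained by applying all
    observations with timestamp <= t (i.e., a snapshot G^t)."""
    edges = set(initial_edges)
    for kind, (u, v), ts in events:
        if ts <= t and kind == "add-edge":
            edges.add((u, v))
    return edges
```

Sampling `snapshot` at regularly-spaced times yields a DTDG; any events falling between two sample times are invisible to the DTDG, which is exactly the information loss mentioned above.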
##### Streaming scenario:

In the streaming scenario, new observations are being streamed to the model at a fast rate, and the model needs to update itself based on these observations in real time so it can make informed predictions immediately after each observation arrives. In this scenario, a model may not have enough time to retrain completely or in part when new observations arrive. Streaming scenarios are often best handled by CTDGs and often give rise to extrapolation problems.

### 2.4 The Encoder-Decoder Framework

Following Hamilton et al. (2017b), to deal with the large notational and methodological diversity of the existing approaches and to put the various methods on an equal notational and conceptual footing, we develop an encoder-decoder framework for dynamic graphs. Before describing the encoder-decoder framework, we define a main component of this architecture known as an embedding.

###### Definition 1.

An embedding is a function that maps every node of a graph, and every relation type in case of a KG, to a hidden representation, where the hidden representation is typically a tuple of one or more scalars, vectors, and/or matrices of numbers.

The vectors and matrices in the tuple are supposed to contain the necessary information about the nodes and relations to enable making predictions about them. For each node $v$ and relation $r$, we refer to the hidden representation of $v$ and $r$ as the embedding of $v$ and the embedding of $r$ respectively. When the main goal is link prediction, some works define the embedding function as mapping each pair of nodes into a hidden representation. In these cases, we refer to the hidden representation of a pair $(u, v)$ of nodes as the embedding of the pair.

Having the above definition, we can now formally define an encoder and a decoder.

###### Definition 2.

An encoder takes as input a dynamic graph and outputs an embedding function that maps nodes, and relations in case of a KG, to hidden representations.

###### Definition 3.
A decoder takes as input an embedding function and makes predictions (such as node classification or edge prediction) based on the embedding function.

In many cases (e.g., Kipf and Welling (2017); Hamilton et al. (2017a); Yang et al. (2015); Bordes et al. (2013); Nickel et al. (2016b); Dong et al. (2014)), the embedding function maps each node, and each relation in the case of a KG, to a tuple containing a single vector. Other works consider different representations. For instance, Kazemi and Poole (2018c) map each node and each relation to two vectors, where each vector has a different usage. Nguyen et al. (2016) map each node to a single vector but map each relation to a vector and two matrices. We will describe these approaches (and many others) in the upcoming sections.

A model corresponds to an encoder-decoder pair. One of the benefits of describing models in an encoder-decoder framework is that it allows for creating new models by combining the encoder from one model with the decoder from another model, when the hidden representations produced by the encoder conform to the hidden representations consumed by the decoder.

#### 2.4.1 Training

For many choices of an encoder-decoder pair, it is possible to train the two components end-to-end. In such cases, the parameters of the encoder and the decoder are typically initialized randomly. Then, until some criterion is met, several epochs of stochastic gradient descent are performed where, in each epoch, the embedding function is produced by the encoder, predictions are made based on the embedding function by the decoder, the error in the predictions is computed with respect to a loss function, and the parameters of the model are updated based on the loss. For node classification and graph classification, the loss function can be any classification loss (e.g., cross-entropy loss).
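As a minimal illustration of this loop, the sketch below trains only the decoder (a logistic-regression node classifier) on fixed node embeddings, as if the encoder were frozen; all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 nodes with 3-dimensional embeddings (as if produced
# by an encoder) and binary class labels.
Z = rng.normal(size=(4, 3))          # node embeddings
y = np.array([0.0, 1.0, 1.0, 0.0])   # ground-truth classes
w = np.zeros(3)                      # decoder (logistic regression) weights

def cross_entropy(w):
    # Numerically stable mean cross-entropy: log(1 + e^s) - y*s with s = Zw.
    s = Z @ w
    return np.mean(np.logaddexp(0.0, s) - y * s)

# A few hundred steps of gradient descent on the cross-entropy loss.
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w)))   # decoder predictions
    w -= 0.5 * Z.T @ (p - y) / len(y)    # gradient step

final_loss = cross_entropy(w)
```

In end-to-end training, the gradient would additionally flow through `Z` into the encoder parameters instead of treating the embeddings as constants.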
For link prediction, typically one only has access to positive examples corresponding to the links already in the graph. A common approach in such cases is to generate a set of negative samples, where negative samples correspond to edges that are believed to have a low probability of being in the graph. Then, having a set of positive and a set of negative samples, the training of a link predictor turns into a classification problem and any classification loss can be used. The choice of the loss function depends on the application.

### 2.5 Expressivity

The expressivity of models for (dynamic) graphs can be thought of as the diversity of the graphs they can represent. Depending on the problem at hand (e.g., node classification, link prediction, or graph classification), expressivity can be defined differently. We first provide some intuition on the importance of expressivity using the following example.

###### Example 4.

Consider a simple encoder for a KG that maps every node to a tuple containing a single scalar representing the number of incoming edges to the node (regardless of the labels of the edges). For the KG in Figure 1(c), this encoder will output an embedding function as:

$$EMB(v_1)=(0)\qquad EMB(v_2)=(2)\qquad EMB(v_3)=(2)\qquad EMB(v_4)=(1)$$

No matter what decoder we use, since $EMB(v_2)$ and $EMB(v_3)$ are identical, the two nodes will be assigned the same class. Therefore, this model is not expressive enough to represent ground truths where $v_2$ and $v_3$ belong to different classes.

From Example 4, we can see why the expressivity of a model may be important. In this regard, one may favor models that are fully expressive, where we define full expressivity for node classification as follows (a model in the following definitions corresponds to an encoder-decoder pair):

###### Definition 4.
A model with parameters $\theta$ is fully expressive with respect to node classification if, given any graph and any ground truth of class assignments for all nodes in the graph, there exists an instantiation of $\theta$ that classifies the nodes according to the ground truth.

A similar definition can be given for full expressivity of a model with respect to link prediction and graph classification.

###### Definition 5.

A model with parameters $\theta$ is fully expressive with respect to link prediction if, given any graph and any ground truth indicating the existence or non-existence of a (labeled) edge for all node-pairs in the graph, there exists an instantiation of $\theta$ that classifies the node-pairs according to the ground truth.

###### Definition 6.

A model with parameters $\theta$ is fully expressive with respect to graph classification if, given any set of non-isomorphic graphs and any ground truth of class assignments for all graphs in the set, there exists an instantiation of $\theta$ that classifies the graphs according to the ground truth.

### 2.6 Sequence Models

In dynamic environments, data often consists of sequences of observations of varying length. There is a long history of models to handle sequential data without any fixed length. This includes auto-regressive models Akaike (1969) that predict the next observations based on a window of past observations. Alternatively, since it is not always clear how long the window of past observations should be, hidden Markov models Rabiner and Juang (1986); Welch et al. (1995), dynamic Bayesian networks Murphy and Russell (2002), and dynamic conditional random fields Sutton et al. (2007) use hidden states to capture relevant information that might be arbitrarily far in the past. Today, those models can be seen as special cases of recurrent neural networks, which allow rich and complex hidden dynamics.

Recurrent neural networks (RNNs) Elman (1990); Cho et al. (2014) have achieved impressive results on a range of sequence modeling problems such as language modeling and speech recognition.
The core principle of the RNN is that its output is a function of the current data point as well as the history of the previous inputs. A simple RNN model can be formulated as follows:

$$\mathbf{h}_t = \phi(\mathbf{W}_i \mathbf{x}_t + \mathbf{W}_h \mathbf{h}_{t-1} + \mathbf{b}_i) \tag{1}$$

where $\mathbf{x}_t$ is the input at position $t$ in the sequence, $\mathbf{h}_{t-1}$ is a hidden representation containing information about the sequence of inputs until time $t-1$, $\mathbf{W}_i$ and $\mathbf{W}_h$ are weight matrices, $\mathbf{b}_i$ represents the vector of biases, $\phi$ is an activation function, and $\mathbf{h}_t$ is an updated hidden representation containing information about the sequence of inputs until time $t$. With some abuse of notation, we use $RNN(\mathbf{h}_{t-1}, \mathbf{x}_t)$ to represent the output of an RNN operation on a previous state $\mathbf{h}_{t-1}$ and a new input $\mathbf{x}_t$.

Long short term memory (LSTM) Hochreiter and Schmidhuber (1997) is considered one of the most successful RNN architectures. The original LSTM model can be neatly defined with the following equations:

$$\mathbf{i}_t = \sigma(\mathbf{W}_{ii}\mathbf{x}_t + \mathbf{W}_{ih}\mathbf{h}_{t-1} + \mathbf{b}_i) \tag{2}$$
$$\mathbf{f}_t = \sigma(\mathbf{W}_{fi}\mathbf{x}_t + \mathbf{W}_{fh}\mathbf{h}_{t-1} + \mathbf{b}_f) \tag{3}$$
$$\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathrm{Tanh}(\mathbf{W}_{ci}\mathbf{x}_t + \mathbf{W}_{ch}\mathbf{h}_{t-1} + \mathbf{b}_c) \tag{4}$$
$$\mathbf{o}_t = \sigma(\mathbf{W}_{oi}\mathbf{x}_t + \mathbf{W}_{oh}\mathbf{h}_{t-1} + \mathbf{b}_o) \tag{5}$$
$$\mathbf{h}_t = \mathbf{o}_t \odot \mathrm{Tanh}(\mathbf{c}_t) \tag{6}$$

Here $\mathbf{i}_t$, $\mathbf{f}_t$, and $\mathbf{o}_t$ represent the input, forget, and output gates respectively, while $\mathbf{c}_t$ is the memory cell and $\mathbf{h}_t$ is the hidden state. $\sigma$ and $\mathrm{Tanh}$ represent the sigmoid and hyperbolic tangent activation functions respectively. Gated recurrent units (GRUs) Cho et al. (2014) are another successful RNN architecture.

Fully attentive models have recently demonstrated on-par or superior performance compared to RNN variants for a variety of tasks (see, e.g., Vaswani et al. (2017); Dehghani et al. (2018); Krantz and Kalita (2018); Shaw et al. (2018)). These models rely only on (self-)attention and abstain from using recurrence. Vaswani et al. (2017) characterize a self-attention mechanism as a function from query, key, and value vectors to a vector that is a weighted sum of the value vectors. Their mechanism is presented in Equation (7).
$$\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V}) = \mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}'}{\sqrt{d_k}}\right)\mathbf{V}
\quad\text{where}\quad \mathbf{Q}=\mathbf{X}\mathbf{W}^Q,\ \mathbf{K}=\mathbf{X}\mathbf{W}^K,\ \mathbf{V}=\mathbf{X}\mathbf{W}^V \tag{7}$$

where $\mathbf{Q}$, $\mathbf{K}$, and $\mathbf{V}$ are called the query, key, and value matrices, $\mathbf{K}'$ is the transpose of $\mathbf{K}$, $\mathbf{X}$ is the input sequence, $\mathbf{W}^Q$, $\mathbf{W}^K$, and $\mathbf{W}^V$ are weight matrices, and $\mathrm{softmax}$ performs a row-wise normalization of the input matrix. A mask is added to Equation (7) to make sure that at time $t$, the mechanism only allows a sequence model to attend to the points before time $t$. Vaswani et al. (2017) also define a multi-head self-attention mechanism by considering multiple self-attention blocks (as defined in Equation (7)), each having different weight matrices, and then concatenating the results.

### 2.7 Temporal Point Processes

Temporal point processes (TPPs) Cox and Lewis (1972) are stochastic, or random, processes that are used for modeling sequential asynchronous discrete events occurring in continuous time. Asynchronous in this context means that the time between consecutive events may not be the same. TPPs have been applied to applications like e-commerce Xu et al. (2014), finance Bacry et al. (2015), etc.

A typical realization of a TPP is a sequence of discrete events occurring at time points $t_1, t_2, t_3, \dots$, where the sequence has been generated by some stochastic process and $T$ represents the time horizon of the process. A TPP model uses a conditional density function $f(t \mid \mathcal{H}_{t_n})$ indicating the density of the occurrence of the next event at some time point $t$ given the history $\mathcal{H}_{t_n}$ of the process till time $t_n$ (including time $t_n$). The cumulative density function till time $t$ given the history is defined as follows:

$$F(t \mid \mathcal{H}_{t_n}) = \int_{\tau=t_n}^{t} f(\tau \mid \mathcal{H}_{t_n})\, d\tau \tag{8}$$

Equation (8) also corresponds to the probability that the next event will happen between $t_n$ and $t$. The survival function of a process Aalen et al. (2008) indicates the probability that no event will occur until $t$ given the history and is computed as $S(t \mid \mathcal{H}_{t_n}) = 1 - F(t \mid \mathcal{H}_{t_n})$.
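Equation (7) is compact enough to sketch directly in NumPy (a single head with no mask; the sequence length, dimensions, and random weights are arbitrary choices for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    # Row-wise softmax, shifted for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, WQ, WK, WV):
    """Single-head scaled dot-product self-attention as in Equation (7)."""
    Q, K, V = X @ WQ, X @ WK, X @ WV
    dk = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(dk))   # row-wise normalization
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # a sequence of 5 inputs
WQ, WK, WV = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, WQ, WK, WV)
```

Each output row is a convex combination of the value vectors: every row of the attention-weight matrix sums to one.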
Having the density function, the time of the next event can be predicted by taking an expectation over $f$:

$$\hat{t} = \mathbb{E}_{t \sim f(t \mid \mathcal{H}_{t_n})}[t] = \int_{\tau=t_n}^{T} \tau\, f(\tau \mid \mathcal{H}_{t_n})\, d\tau \tag{9}$$

The parameters of a TPP can be learned from data by maximizing the joint density of the entire process, defined as follows:

$$f(t_1, \dots, t_n) = \prod_{i=1}^{n} f(t_i \mid \mathcal{H}_{t_{i-1}}) \tag{10}$$

Another way of characterizing a TPP is through a conditional intensity function (a.k.a. hazard function) $\lambda(t \mid \mathcal{H}_{t^-})$ such that $\lambda(t \mid \mathcal{H}_{t^-})\,dt$ represents the probability of the occurrence of an event in the interval $[t, t+dt]$ given that no event has occurred until time $t$. Here $\mathcal{H}_{t^-}$ represents the history of the process until, but not including, $t$. The intensity and density functions can be derived from each other as follows:

$$\begin{aligned}
\lambda(t \mid \mathcal{H}_{t^-})\,dt &= \mathrm{Prob}(t_{n+1} \in [t, t+dt] \mid \mathcal{H}_{t^-}) \\
&= \mathrm{Prob}(t_{n+1} \in [t, t+dt] \mid \mathcal{H}_{t_n}, t_{n+1} \notin (t_n, t)) \\
&= \frac{\mathrm{Prob}(t_{n+1} \in [t, t+dt],\ t_{n+1} \notin (t_n, t) \mid \mathcal{H}_{t_n})}{\mathrm{Prob}(t_{n+1} \notin (t_n, t) \mid \mathcal{H}_{t_n})} \\
&= \frac{\mathrm{Prob}(t_{n+1} \in [t, t+dt] \mid \mathcal{H}_{t_n})}{\mathrm{Prob}(t_{n+1} \notin (t_n, t) \mid \mathcal{H}_{t_n})} \\
&= \frac{f(t \mid \mathcal{H}_{t_n})\,dt}{S(t \mid \mathcal{H}_{t_n})}
\end{aligned} \tag{11}$$

The intensity function can be designed according to the application. The function usually contains learnable parameters Du et al. (2016) that can be learned from the data.

###### Example 5.

Consider the problem of predicting when the next earthquake will occur in a region based on the times of previous earthquakes in that region. Typically, an earthquake is followed by a series of other earthquakes as aftershocks. Thus, upon observing an earthquake, a model should increase the probability of another earthquake in the near future and gradually decay this probability. Let $t_1, \dots, t_n$ be the times at which an earthquake occurred in the region. Equation (12) gives one possible conditional intensity function for modeling this process:

$$\lambda^*(t) = \mu + \alpha \sum_{t_i \leq t} \exp(-(t - t_i)) \tag{12}$$

where $\mu$ and $\alpha$ are parameters that are constrained to be positive and are generally learned from the data. The sum is over all the timestamps $t_i$ at which an earthquake occurred. In this function, $\mu$ can be considered as the base intensity of an earthquake in the region.
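The conditional intensity of Equation (12) is straightforward to evaluate numerically (a sketch; the $\mu$, $\alpha$, and timestamp values below are arbitrary, not fitted):

```python
import math

def intensity(t, history, mu=0.2, alpha=0.8):
    """Conditional intensity of Equation (12): base rate mu plus an
    exponentially decaying contribution from each past event."""
    return mu + alpha * sum(math.exp(-(t - ti)) for ti in history if ti <= t)

# Times of past earthquakes (arbitrary example values).
history = [1.0, 1.5, 5.0]
```

Evaluating `intensity` on a grid of times shows the expected behavior: the intensity sits at the base rate before any event, jumps when an event occurs, and decays back toward the base rate afterwards.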
The occurrence of an earthquake increases the intensity of another earthquake in the near future (as it makes the value of the sum increase), and this increase decays exponentially back toward the base intensity. The amount of increase is controlled by $\alpha$. Note that the conditional intensity function is always positive, as $\mu$, $\alpha$, and the exponential terms are always positive. From Equation (11), the density function for the random variable $t_{n+1}$ is $f(t \mid \mathcal{H}_{t_n}) = \lambda^*(t)\, S(t \mid \mathcal{H}_{t_n})$. We can estimate the time of occurrence of the next earthquake ($\hat{t}$) by taking an expectation over the random variable as in Equation (9).

Equation (12) is a special case of the well-known self-exciting Hawkes process Hawkes (1971); Mei and Eisner (2017). Other well-studied TPPs include Poisson processes Kingman (2005), self-correcting processes Isham and Westcott (1979), and autoregressive conditional duration processes Engle and Russell (1998). Depending on the application, one may use one of these intensity functions or even potentially design new ones. Recently, there has been growing interest in learning the intensity function entirely from the data Du et al. (2016).

## 3 Representation Learning for Static Graphs

In this section, we provide an overview of representation learning approaches for static graphs. The main aim of this section is to provide enough information for the descriptions and discussions in the next sections on dynamic graphs. Readers interested in learning more about representation learning on static graphs can refer to several existing surveys specifically written on this topic (e.g., see Hamilton et al. (2017b); Zhang et al. (2018a); Cai et al. (2018); Cui et al. (2018) for graphs and Nickel et al. (2016a); Wang et al. (2017a) for KGs).

### 3.1 Decoders

Assuming an encoder has provided the embedding function, the decoder aims at using the node and relation embeddings for node classification, edge prediction, graph classification, or other prediction purposes.
We divide the discussion on decoders for static graphs into those used for graphs and those used for KGs.

#### 3.1.1 Decoders for Static Graphs

For static graphs, the embedding function usually maps each node to a single vector, i.e., $EMB(v) = (\mathbf{z}_v)$ with $\mathbf{z}_v \in \mathbb{R}^d$ for any $v \in V$. To classify a node $v$, a decoder can be any classifier on $\mathbf{z}_v$ (e.g., logistic regression or random forest).

To predict a link between two nodes $v$ and $u$ in undirected (and bipartite) graphs, the most common decoder is based on the dot-product of the vectors for the two nodes, i.e., $\mathbf{z}_v' \mathbf{z}_u$. The dot-product gives a score that can then be fed into a sigmoid function whose output can be considered as the probability of a link existing between $v$ and $u$. Grover and Leskovec (2016) propose several other decoders for link prediction in undirected graphs. Their decoders are based on defining a function that combines the two vectors $\mathbf{z}_v$ and $\mathbf{z}_u$ into a single vector. The resulting vector is then considered as the edge features that can be fed into a classifier to predict if an edge exists between $v$ and $u$ or not. These combining functions include:

- the average of the two vectors: $\frac{\mathbf{z}_v + \mathbf{z}_u}{2}$,
- the element-wise (Hadamard) multiplication of the two vectors: $\mathbf{z}_v \odot \mathbf{z}_u$,
- the element-wise absolute value of the difference of the two vectors: $|\mathbf{z}_v - \mathbf{z}_u|$,
- the element-wise squared value of the difference of the two vectors: $(\mathbf{z}_v - \mathbf{z}_u) \odot (\mathbf{z}_v - \mathbf{z}_u)$.

Instead of computing the distance between $\mathbf{z}_v$ and $\mathbf{z}_u$ in the Euclidean space, the distance can be computed in other spaces such as the hyperbolic space Chamberlain et al. (2017). Different spaces offer different properties. Note that all four combination functions above are symmetric in their arguments, which is an important property when the graph is undirected.

For link prediction in directed graphs, it is important to treat the source and target of the edge differently. Towards this goal, one approach is to concatenate the two vectors as $[\mathbf{z}_v; \mathbf{z}_u]$ and feed the concatenation into a classifier (see, e.g., Pareja et al. (2019)). Another approach used in Ma et al.
(2018b) is to project the source and target vectors to another space as $\mathbf{P}_s \mathbf{z}_v$ and $\mathbf{P}_t \mathbf{z}_u$, where $\mathbf{P}_s$ and $\mathbf{P}_t$ are matrices with learnable parameters, and then take the dot-product in the new space. A third approach is to take the vector representation of a node and send it through a feed-forward neural network with $|V|$ outputs, where each output gives the score for whether the node has a link with one of the other nodes in the graph. This approach is used mainly in graph autoencoders (see, e.g., Wang et al. (2016); Cao et al. (2016); Tran (2018); Goyal et al. (2017); Chen et al. (2018a)) and works for both directed and undirected graphs.

The decoder for a graph classification task needs to compress node representations into a single representation, which can then be fed into a classifier to perform graph classification. Duvenaud et al. (2015) simply average all the node representations into a single vector. Gilmer et al. (2017) consider the node representations of the graph as a set and use the DeepSet aggregation Zaheer et al. (2017) to get a single representation. Li et al. (2015) add a virtual node to the graph which is connected to all the nodes and use the representation of the virtual node as the representation of the graph. Several approaches perform a deterministic hierarchical graph clustering step and combine the node representations in each cluster to learn hierarchical representations Defferrard et al. (2016); Fey et al. (2018); Simonovsky and Komodakis (2017). Instead of performing a deterministic clustering and then running a graph classification model, Ying et al. (2018b) learn the hierarchical structure jointly with the classifier in an end-to-end fashion.

#### 3.1.2 Decoders for Link Prediction in Static KGs

There are several classes of decoders for link prediction in static KGs. Here, we provide an overview of the translational, bilinear, and deep learning classes.
When we discuss the expressivity of the decoders in this subsection, we assume the decoder is combined with a flexible encoder.

##### Translational decoders

usually assume the encoder provides an embedding function such that $EMB(v)=\mathbf{z}_v$ for every $v \in V$ where $\mathbf{z}_v \in \mathbb{R}^d$, and $EMB(r)=(\mathbf{z}_r, P_r, Q_r)$ for every $r \in R$ where $\mathbf{z}_r \in \mathbb{R}^d$, $P_r \in \mathbb{R}^{d \times d}$, and $Q_r \in \mathbb{R}^{d \times d}$. That is, the embedding for a node contains a single vector whereas the embedding for a relation contains a vector and two matrices. For an edge $(v, r, u)$, these models use:

$||P_r\mathbf{z}_v+\mathbf{z}_r-Q_r\mathbf{z}_u||_i \qquad (13)$

as the dissimilarity score for the edge where $||\cdot||_i$ represents norm $i$ of a vector. $i$ is usually either 1 or 2. Translational decoders differ in the restrictions they impose on $P_r$ and $Q_r$. TransE Bordes et al. (2013) constrains $P_r=Q_r=I$, the identity matrix. So the dissimilarity function for TransE can be simplified to:

$||\mathbf{z}_v+\mathbf{z}_r-\mathbf{z}_u||_i \qquad (14)$

In TransR Lin et al. (2015), $P_r=Q_r$. In STransE Nguyen et al. (2016), no restrictions are imposed on the matrices. Kazemi and Poole (2018c) proved that TransE, TransR, STransE, and many other variants of translational approaches are not fully expressive with respect to link prediction (regardless of the encoder) and identified severe restrictions on the type of relations that can be modeled using these approaches.

##### Bilinear decoders

usually assume the encoder provides an embedding function such that $EMB(v)=\mathbf{z}_v$ for every $v \in V$ where $\mathbf{z}_v \in \mathbb{R}^d$, and $EMB(r)=P_r$ for every $r \in R$ where $P_r \in \mathbb{R}^{d \times d}$. For an edge $(v, r, u)$, these models use:

$\mathbf{z}_v^{\prime}P_r\mathbf{z}_u \qquad (15)$

as the similarity score for the edge. Bilinear decoders differ in the restrictions they impose on the $P_r$ matrices Wang et al. (2018). In RESCAL Nickel et al. (2011), no restrictions are imposed on the matrices. RESCAL is fully expressive with respect to link prediction, but the large number of parameters per relation makes RESCAL prone to overfitting. To reduce the number of parameters in RESCAL, DistMult Yang et al. (2015) constrains the $P_r$ matrices to be diagonal. This reduction in the number of parameters, however, comes at a cost: DistMult loses expressivity and is only able to model symmetric relations.
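A toy Python sketch of the two decoder families above makes DistMult's symmetry restriction concrete. The embeddings are hand-picked illustrative values, not learned parameters:

```python
# TransE dissimilarity and DistMult (diagonal bilinear) similarity,
# computed over hand-picked toy embeddings (illustrative, not learned).

def transe_dissimilarity(z_v, z_r, z_u):
    # L2 norm of (z_v + z_r - z_u); lower means the edge is more plausible
    return sum((a + b - c) ** 2 for a, b, c in zip(z_v, z_r, z_u)) ** 0.5

def distmult_score(z_v, z_r, z_u):
    # z_v' diag(z_r) z_u; higher means the edge is more plausible
    return sum(a * b * c for a, b, c in zip(z_v, z_r, z_u))

z_v, z_r, z_u = [0.1, 0.9], [0.4, -0.2], [0.3, 0.5]

# DistMult scores an edge and its reverse identically (the symmetry issue):
assert abs(distmult_score(z_v, z_r, z_u) - distmult_score(z_u, z_r, z_v)) < 1e-12

# TransE, in contrast, distinguishes direction in general:
print(transe_dissimilarity(z_v, z_r, z_u), transe_dissimilarity(z_u, z_r, z_v))
```

Swapping the two node vectors leaves the DistMult product unchanged because element-wise multiplication commutes, which is exactly why DistMult can only model symmetric relations.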
That is because the score function of DistMult does not distinguish between the source and target vectors. ComplEx Trouillon et al. (2016), CP Hitchcock (1927) and SimplE Kazemi and Poole (2018c) reduce the number of parameters in RESCAL without sacrificing expressivity. ComplEx extends DistMult by assuming the embeddings are complex (instead of real) valued, i.e. $\mathbf{z}_v \in \mathbb{C}^d$ and $\mathbf{z}_r \in \mathbb{C}^d$ for every $v \in V$ and $r \in R$. Then, it slightly changes the score function to $Re(\langle \mathbf{z}_v, \mathbf{z}_r, \overline{\mathbf{z}_u} \rangle)$ where $Re$ returns the real part of an imaginary number and $\overline{\mathbf{z}_u}$ takes an element-wise conjugate of the vector elements. By taking the conjugate of the target vector, ComplEx differentiates between source and target nodes and does not suffer from the symmetry issue of DistMult. CP defines $EMB(v)=(\overrightarrow{\mathbf{z}_v}, \overleftarrow{\mathbf{z}_v})$, i.e. the embedding of a node consists of two vectors, where $\overrightarrow{\mathbf{z}_v}$ captures $v$'s behaviour when it is the source of an edge and $\overleftarrow{\mathbf{z}_v}$ captures $v$'s behaviour when it is the target of an edge. For relations, CP defines $EMB(r)=\mathbf{z}_r$. The similarity function of CP for an edge $(v, r, u)$ is then defined as $\langle \overrightarrow{\mathbf{z}_v}, \mathbf{z}_r, \overleftarrow{\mathbf{z}_u} \rangle$. Realizing the information may not flow well between the two vectors of a node, SimplE adds another vector to the relation embeddings as $EMB(r)=(\mathbf{z}_r, \mathbf{z}_{r^{-1}})$ where $\mathbf{z}_{r^{-1}}$ models the behaviour of the inverse of the relation. Then, it changes the score function to be the average of $\langle \overrightarrow{\mathbf{z}_v}, \mathbf{z}_r, \overleftarrow{\mathbf{z}_u} \rangle$ and $\langle \overrightarrow{\mathbf{z}_u}, \mathbf{z}_{r^{-1}}, \overleftarrow{\mathbf{z}_v} \rangle$. For ComplEx, CP, and SimplE, it is possible to view the embedding for each node as a single vector in $\mathbb{R}^{2d}$ by concatenating the two vectors (in the case of ComplEx, the two vectors correspond to the real and imaginary part of the embedding vector). Then, the $P_r$ matrices can be viewed as being restricted according to Figure 2 (taken from Kazemi and Poole (2018c)). Other bilinear approaches include HolE Nickel et al. (2016) whose equivalence to ComplEx has been established Hayashi and Shimbo (2017), and Analogy Liu et al. (2017) where the $P_r$ matrices are constrained to be block-diagonal.

##### Deep learning-based decoders:

Deep learning approaches typically use feed-forward or convolutional neural networks for scoring edges in a KG. Dong et al.
(2014) and Santoro et al. (2017) consider $EMB(v)=\mathbf{z}_v$ for every node $v \in V$ such that $\mathbf{z}_v \in \mathbb{R}^d$ and $EMB(r)=\mathbf{z}_r$ for every relation $r \in R$ such that $\mathbf{z}_r \in \mathbb{R}^d$. Then for an edge $(v, r, u)$, they feed $[\mathbf{z}_v; \mathbf{z}_r; \mathbf{z}_u]$ (i.e., the concatenation of the three vector representations) into a feed-forward neural network that outputs a score for this edge. Dettmers et al. (2018) develop a score function based on convolutions. They consider $EMB(v)=Z_v$ for each node $v \in V$ such that $Z_v \in \mathbb{R}^{d_1 \times d_2}$ and $EMB(r)=Z_r$ for each relation $r \in R$ such that $Z_r \in \mathbb{R}^{d_1 \times d_2}$ (alternatively, the matrices can be viewed as vectors of size $d_1 d_2$). For an edge $(v, r, u)$, first they combine $Z_v$ and $Z_r$ into a matrix $C_{vr}$ by concatenating the two matrices on the rows, or by adding the rows of each matrix in turn. Then 2D convolutions with learnable filters are applied on $C_{vr}$ generating multiple matrices, and the matrices are vectorized into a vector $\mathbf{c}_{vr}$, where the size of the vector depends on the number of convolution filters. Then the score for the edge is computed as:

$(\mathbf{c}_{vr}^{\prime}W)\,vec(Z_u) \qquad (16)$

where $W$ is a weight matrix. Other deep learning approaches include Balazevic et al. (2018) which is another score function based on convolutions, and Socher et al. (2013) which contains feed-forward components as well as several bilinear components.

### 3.2 Encoders

In the previous section, we discussed how an embedding function can be used by a decoder to make predictions. In this section, we describe different approaches for creating encoders that provide the embedding function to be consumed by the decoder.

#### 3.2.1 High-Order Proximity Matrices

While the adjacency matrix of a graph only represents local proximities, one can also define high-order proximity matrices Ou et al. (2016) or similarity metrics da Silva Soares and Prudêncio (2012). Let $S$ be a high-order proximity matrix. A simple approach for creating an encoder is to let $\mathbf{z}_v$ correspond to the $v^{th}$ row (or the $v^{th}$ column) of matrix $S$. Encoders based on high-order proximity matrices are typically parameter-free and do not require learning (although some of them have hyper-parameters that need to be tuned).
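Before listing the specific matrices, here is a minimal, dependency-free sketch of such a parameter-free encoder: it builds the common-neighbours matrix $S = AA$ for a small invented graph and takes the rows of $S$ as node embeddings:

```python
# Parameter-free encoder from a high-order proximity matrix:
# S = A @ A (the common-neighbours matrix); row v of S is used as
# the embedding of node v. Plain lists keep the sketch dependency-free.

def matmul(X, Y):
    n, m, p = len(X), len(Y[0]), len(Y)
    return [[sum(X[i][k] * Y[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

# Adjacency matrix of a toy undirected graph with edges 0-1, 0-2, 1-2, 2-3
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]

S = matmul(A, A)  # S[u][v] = number of common neighbours of u and v
embedding = {v: S[v] for v in range(len(A))}

print(S[0][1])  # → 1 (nodes 0 and 1 share exactly one neighbour: node 2)
```

No parameters are learned here; the only modelling choice is which proximity matrix to use, which is why such encoders are called parameter-free.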
In what follows, we describe several of these matrices.

• Common neighbours matrix is defined as $S^{CN}=AA$. $S^{CN}_{uv}$ corresponds to the number of nodes that are connected to both $u$ and $v$. For a directed graph, $S^{CN}_{uv}$ counts how many nodes are simultaneously the target of an edge starting at $u$ and the source of an edge ending at $v$.

• Jaccard's coefficient is a slight modification of $S^{CN}$ where one divides the number of common neighbours of $u$ and $v$ by the total number of distinct nodes that are the targets of edges starting at $u$ or the sources of edges ending at $v$. Formally, Jaccard's coefficient is defined as $S^{JC}_{uv} = \frac{S^{CN}_{uv}}{|\{w : (u,w) \in E\} \cup \{w : (w,v) \in E\}|}$.

• Adamic-Adar is defined as $S^{AA}_{uv} = \sum_{w} \frac{A_{uw} A_{wv}}{\log d(w)}$, where $d(w)$ is the degree of node $w$. $S^{AA}_{uv}$ computes the weighted sum of common neighbours where the weight is inversely proportional to the degree of the neighbour.

• Katz index is defined as $S^{Katz} = \sum_{k=1}^{\infty} \beta^k A^k$. $S^{Katz}_{uv}$ computes a weighted sum of all the paths between two nodes $u$ and $v$. $\beta$ controls the depth of the connections: the closer $\beta$ is to $1$, the longer paths one wants to consider. One can rewrite the formula recursively as $S^{Katz} = \beta A S^{Katz} + \beta A$ and, as a corollary, obtain $S^{Katz} = (I - \beta A)^{-1} \beta A$.

• Preferential Attachment is simply a product of in- and out-degrees of nodes: $S^{PA}_{uv} = d_{in}(u) \cdot d_{out}(v)$.

#### 3.2.2 Shallow Encoders

Shallow encoders first decide on the number and the shape of the vectors and matrices for node and relation embeddings. Then, they consider each element in these vectors and matrices as a parameter to be directly learned from the data. As an example, consider the problem of link prediction in a KG. Let the encoder be a shallow encoder with $EMB(v)=\mathbf{z}_v$ for each node $v$ in the KG and $EMB(r)=P_r$ for each relation $r$ in the KG, and the decoder be the RESCAL function. The $\mathbf{z}_v$s and $P_r$s are initialized randomly and then their values are optimized such that $\mathbf{z}_v^{\prime}P_r\mathbf{z}_u$ becomes a large positive number if $(v, r, u)$ is in positive samples and becomes a large negative number if $(v, r, u)$ is in negative samples.

#### 3.2.3 Decomposition Approaches

Decomposition methods are among the earliest attempts for developing encoders for graphs.
They learn node embeddings similar to shallow encoders but in an unsupervised way: the node embeddings are learned in a way that connected nodes are close to each other in the embedded space. Once the embeddings are learned, they can be used for purposes other than reconstructing the edges (e.g., for clustering). Formally, for an undirected graph $G=(V, E)$, learning node embeddings $\mathbf{z}_v$, where $\mathbf{z}_v \in \mathbb{R}^d$, such that connected nodes are close in the embedded space can be done through solving the following optimization problem:

$\min \sum_{(v,u) \in E} ||\mathbf{z}_v - \mathbf{z}_u||_2^2 \qquad (17)$

This loss ensures that connected nodes are close to each other in the embedded space. One needs to impose some constraints to get rid of a scaling factor and to eliminate the trivial solution where all nodes are set to a single vector. For that let us consider a new matrix $Z \in \mathbb{R}^{|V| \times d}$, such that its rows give the embedding: $Z_{v,:} = \mathbf{z}_v^{\prime}$. Then one can add the constraint $Z^{\prime}DZ = I$ to the optimization problem (17), where $D$ is a diagonal matrix of degrees as defined in Subsection 2.1. As was proved in Belkin and Niyogi (2001), this constrained optimization is equivalent to solving a generalized eigenvalue decomposition:

$L\mathbf{y}=\lambda D\mathbf{y}, \qquad (18)$

where $L = D - A$ is a graph Laplacian; and the matrix $Z$ can be obtained by considering the matrix of top-$d$ generalized eigenvectors: $Z = [\mathbf{y}_1, \ldots, \mathbf{y}_d]$. Sussman et al. (2012) suggested to use a slightly different embedding based on the eigenvalue decomposition of the adjacency matrix $A = U \Lambda U^{\prime}$ (this matrix is symmetric for an undirected graph). Then one can choose the top $d$ eigenvalues and the corresponding eigenvectors and construct a new matrix

$Z=U_{1:d}\,|\Lambda_{1:d}|^{1/2}$

where $U_{1:d} \in \mathbb{R}^{|V| \times d}$ contains the top $d$ eigenvectors, and $\Lambda_{1:d} \in \mathbb{R}^{d \times d}$ is the diagonal matrix of the top $d$ eigenvalues. Rows of this matrix can be used as node embeddings: $\mathbf{z}_v^{\prime} = Z_{v,:}$. This is the so-called adjacency spectral embedding, see also Levin et al. (2018).

For directed graphs, because of their asymmetric nature, keeping track of the $k$-order neighbours where $k > 1$ becomes difficult. For this reason, working with a high-order proximity matrix is preferable.
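For intuition, the leading term of the adjacency spectral embedding can be approximated without any linear-algebra library. The sketch below (on an invented triangle graph, purely illustrative) uses power iteration to recover the top eigenpair of $A$ and forms a one-dimensional embedding from it:

```python
# A rough sketch of a one-dimensional adjacency spectral embedding:
# power iteration approximates the leading eigenvector u1 and eigenvalue
# lambda1 of A, and the embedding of node v is u1[v] * sqrt(lambda1).

import math

def power_iteration(A, steps=200):
    n = len(A)
    x = [1.0] * n
    for _ in range(steps):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(c * c for c in y))
        x = [c / norm for c in y]
    # Rayleigh quotient x'Ax gives the eigenvalue estimate (||x|| = 1)
    lam = sum(x[i] * sum(A[i][j] * x[j] for j in range(n)) for i in range(n))
    return x, lam

A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]  # triangle graph: leading eigenvalue is 2

u1, lam = power_iteration(A)
z = [c * math.sqrt(lam) for c in u1]  # 1-d spectral embedding per node
print(round(lam, 6))  # → 2.0
```

A full $d$-dimensional embedding would repeat this with deflation or, in practice, a library eigensolver; the one-dimensional case just shows the mechanics.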
Furthermore, for directed graphs, it may be preferable to learn two vector representations per node, one to be used when the node is the source and the other to be used when the node is the target of an edge. One may learn embeddings for directed graphs by solving the following:

$\min_{Z_s, Z_t} ||S - Z_s Z_t^{\prime}||_F^2, \qquad (20)$

where $||\cdot||_F$ is the Frobenius norm and $Z_s, Z_t \in \mathbb{R}^{|V| \times d}$. Given the solution, one can define the "source" features of a node $v$ as the $v^{th}$ row of $Z_s$ and the "target" features as the $v^{th}$ row of $Z_t$. A single-vector embedding of a node can be defined as a concatenation of these features. The Eckart–Young–Mirsky theorem Eckart and Young (1936) from linear algebra indicates that the solution is equivalent to finding the singular value decomposition of $S$:

$S=U_s\Sigma(U_t)^{\prime}, \qquad (21)$

where $\Sigma$ is a diagonal matrix of singular values and $U_s$ and $U_t$ are matrices of left and right singular vectors respectively (stacked as columns). Then using the top $d$ singular vectors one gets the solution of the optimization problem in (20):

$Z_s=(U_s)_{1:d}\,\Sigma_{1:d}^{1/2}, \qquad Z_t=(U_t)_{1:d}\,\Sigma_{1:d}^{1/2}$

#### 3.2.4 Random Walk Approaches

One of the popular classes of approaches for learning an embedding function for graphs is the class of random walk approaches. Similar to decomposition approaches, encoders based on random walks also learn embeddings in an unsupervised way. However, compared to decomposition approaches, these embeddings may capture longer-term dependencies. To describe the encoders in this category, first we define what a random walk is and then describe the encoders that leverage random walks to learn an embedding function.

###### Definition 7.

A random walk for a graph $G=(V, E)$ is a sequence of nodes $v_1, v_2, \ldots, v_l$ where $v_i \in V$ for all $1 \leq i \leq l$ and $(v_i, v_{i+1}) \in E$ for all $1 \leq i \leq l-1$. $l$ is called the length of the walk.

A random walk of length $l$ can be generated by starting at a node $v_1$ in the graph, then transitioning to a neighbor $v_2$ of $v_1$ ($(v_1, v_2) \in E$), then transitioning to a neighbor $v_3$ of $v_2$ and continuing this process for $l-1$ steps. The selection of the first node and the node to transition to in each step can be uniformly at random or based on some distribution/strategy.

###### Example 6.
Consider the graph in Figure 1(b). The following are three examples of random walks on this graph of length 4:

1) $v_1, v_3, v_2, v_3$
2) $v_2, v_1, v_2, v_4$
3) $v_4, v_2, v_4, v_2$

In the first walk, the initial node has been selected to be $v_1$. Then a transition has been made to $v_3$, which is a neighbor of $v_1$. Then a transition has been made to $v_2$, which is a neighbor of $v_3$, and then a transition back to $v_3$, which is a neighbor of $v_2$. The following are two examples of invalid random walks:

1) $v_1, v_4, v_2, v_3$
2) $v_1, v_3, v_4, v_2$

The first one is not a valid random walk since a transition has been made from $v_1$ to $v_4$ when there is no edge between $v_1$ and $v_4$. The second one is not valid because a transition has been made from $v_3$ to $v_4$ when there is no edge between $v_3$ and $v_4$.

Random walk encoders perform multiple random walks of length $l$ on a graph and consider each walk as a sentence, where the nodes are considered as the words of these sentences. Then they use the techniques from natural language processing for learning word embeddings (e.g., Mikolov et al. (2013); Pennington et al. (2014)) to learn a vector representation for each node in the graph. One such approach is to create a matrix $S$ from these random walks such that $S_{uv}$ corresponds to the number of times $u$ and $v$ co-occurred in random walks, and then factorize the matrix (see Section 3.2.3) to get vector representations for nodes. Random walk encoders typically differ in the way they perform the walk, the distribution they use for selecting the initial node, and the transition distribution they use. For instance, DeepWalk Perozzi et al. (2014) selects both the initial node and the node to transition to uniformly at random. Perozzi et al. (2016) extends DeepWalk by allowing random walks to skip over multiple nodes at each transition. Node2Vec Grover and Leskovec (2016) selects the node to transition to based on a combination of breadth-first search (to capture local information) and depth-first search (to capture global information).
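A DeepWalk-style walk generator matching Definition 7 can be sketched in a few lines of Python. The adjacency list below is a hypothetical graph consistent with the valid and invalid walks of Example 6 (Figure 1(b) itself is not reproduced here):

```python
# Uniform random walks (DeepWalk-style) over an adjacency list.

import random

graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2], 4: [2]}  # hypothetical graph

def random_walk(graph, length, rng):
    v = rng.choice(sorted(graph))        # initial node, uniform at random
    walk = [v]
    for _ in range(length - 1):
        v = rng.choice(graph[v])         # transition, uniform over neighbours
        walk.append(v)
    return walk

rng = random.Random(0)                   # seeded for reproducibility
walks = [random_walk(graph, 4, rng) for _ in range(1000)]

# Every consecutive pair in every walk is an edge of the graph:
assert all(w[i + 1] in graph[w[i]] for w in walks for i in range(3))
print(walks[0])
```

Feeding `walks` to a word-embedding model (with nodes playing the role of words), or counting node co-occurrences within the walks and factorizing that matrix, yields the node representations described above.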
#### 3.2.5 Autoencoder Approaches

Another class of models for learning an embedding function for static graphs is by using autoencoders. Similar to the decomposition approaches, these approaches are also unsupervised. However, instead of learning shallow embeddings that reconstruct the edges of a graph, the models in this category create a deep encoder that compresses a node's neighbourhood to a vector representation, which can then be used to reconstruct the node's neighbourhood. The model used for compression and reconstruction is referred to as an autoencoder. Similar to the decomposition approaches, once the node embeddings are learned, they may be used for purposes other than predicting a node's neighbourhood.

In its simplest form, an autoencoder Hinton and Salakhutdinov (2006) contains two components called the encoder and decoder, where each component is a feed-forward neural network. To avoid confusion with graph encoders and decoders, we refer to these two components as the first and second component. The first component takes as input a vector $\mathbf{a} \in \mathbb{R}^n$ (e.g., corresponding to numerical features of an object) and passes it through several feed-forward layers producing another vector $\mathbf{z} \in \mathbb{R}^d$ such that $d \ll n$. The second component receives $\mathbf{z}$ as input and passes it through several feed-forward layers aiming at reconstructing $\mathbf{a}$. That is, assuming the output of the second component is $\hat{\mathbf{a}}$, the two components are trained such that $||\mathbf{a} - \hat{\mathbf{a}}||$ is minimized. $\mathbf{z}$ can be considered a compression of $\mathbf{a}$.

Let $G=(V, E)$ be a graph with adjacency matrix $A$. For a node $v$, let $\mathbf{a}_v$ represent the row of the adjacency matrix corresponding to the neighbors of $v$. To use autoencoders for generating node embeddings, Wang et al. (2016) train an autoencoder (named SDNE) that takes a vector $\mathbf{a}_v$ as input, compresses it to $\mathbf{z}_v$ in its first component, and then reconstructs it in its second component. After training, the vectors $\mathbf{z}_v$ corresponding to the output of the first component of the autoencoder can be considered as embeddings for the nodes.
$\mathbf{z}_v$ and $\mathbf{z}_u$ may further be constrained to be close in Euclidean space if $v$ and $u$ are connected. For the case of attributed graphs, Tran (2018) concatenates the attribute values of node $v$ to $\mathbf{a}_v$ and feeds the concatenation into an autoencoder. Cao et al. (2016) propose an autoencoder approach (named RDNG) that is similar to SDNE, but they first compute a similarity matrix $S$ based on two nodes co-occurring on random walks (any other matrix from Section 3.2.1 may also be used) showing the pairwise similarity of each pair of nodes, and then feed the rows $\mathbf{s}_v$ of $S$ into the autoencoder.

#### 3.2.6 Graph Convolutional Network Approaches

Yet another class of models for learning node embeddings in a graph are graph convolutional networks (GCNs). As the name suggests, graph convolutions generalize convolutions to arbitrary graphs. Graph convolutions have spatial (see, e.g., Hamilton et al. (2017a, b); Schlichtkrull et al. (2018); Gilmer et al. (2017)) and spectral constructions (see, e.g., Liao et al. (2019); Kipf and Welling (2017); Defferrard et al. (2016); Levie et al. (2017)). Here, we describe the spatial (or message passing) view and refer the reader to Bronstein et al. (2017) for the spectral view.

A GCN consists of multiple layers where each layer takes node representations (a vector per node) as input and outputs transformed representations. Let $\mathbf{z}_{v,l}$ be the representation for a node $v$ after passing it through the $l^{th}$ layer. A very generic forward pass through a GCN layer transforms the representation of each node as follows: zv,l+1=transfo
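One common instantiation of such a layer (mean aggregation over a node's neighbours, followed by a learned linear map and a ReLU; the specific aggregator and transform are modelling choices, not fixed by the text) can be sketched as:

```python
# One message-passing (GCN-style) layer in plain Python: each node
# mean-aggregates its own and its neighbours' vectors, then applies a
# linear transform W (a stand-in for learned parameters) and a ReLU.

def gcn_layer(graph, Z, W):
    out = {}
    for v, nbrs in graph.items():
        # mean-aggregate neighbour representations (including self)
        pool = [Z[v]] + [Z[u] for u in nbrs]
        agg = [sum(col) / len(pool) for col in zip(*pool)]
        # linear transform + ReLU nonlinearity
        out[v] = [max(0.0, sum(w_ij * a for w_ij, a in zip(row, agg)))
                  for row in W]
    return out

graph = {0: [1], 1: [0, 2], 2: [1]}                  # path graph 0-1-2
Z0 = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}   # toy input features
W = [[1.0, -1.0], [0.5, 0.5]]                        # toy weights

Z1 = gcn_layer(graph, Z0, W)
print(Z1[0])  # → [0.0, 0.5]
```

Stacking several such layers lets information propagate across multi-hop neighbourhoods, which is what gives GCN encoders their expressive power.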
https://itprospt.com/num/1945620/7-why-is-water-called-quot-the-universal-solvent-x27-diagram
Question 7. Why is water called "the universal solvent"? Diagram the process of salt, NaCl, dissolving in water.

8. Define hydrophilic and hydrophobic.

9. Soaps and detergents are surfactants. What is a surfactant and how does adding soap to laundry help to remove dirt and grease from your clothing?

10. List five special properties of water (other than those listed in this assignment).

1. Where does the drinking water for your home come from? How is the water supplied? How is the water tested? How often has the water been tested? By whom? Where does the waste water go?
http://www.zora.uzh.ch/id/eprint/148809/
# Search for two-neutrino double electron capture of $^{124}Xe$ with XENON100

Xenon Collaboration; Baudis, Laura; Brown, Adam; Capelli, Chiara; Galloway, Michelle; Kazama, Shingo; Kish, Alexander; Piastra, Francesco; Reichard, Shayne; Wulf, Julien; et al (2017). Search for two-neutrino double electron capture of $^{124}Xe$ with XENON100. Physical review. C, C95(2):024605.

## Abstract

Two-neutrino double electron capture is a rare nuclear decay where two electrons are simultaneously captured from the atomic shell. For $^{124}Xe$ this process has not yet been observed and its detection would provide a new reference for nuclear matrix element calculations. We have conducted a search for two-neutrino double electron capture from the K shell of $^{124}Xe$ using 7636 kgd of data from the XENON100 dark matter detector. Using a Bayesian analysis we observed no significant excess above background, leading to a lower 90% credibility limit on the half-life $T_{1/2}>6.5×10^{20}$ yr. We have also evaluated the sensitivity of the XENON1T experiment, which is currently being commissioned, and found a sensitivity of $T_{1/2}>6.1×10^{22}$ yr after an exposure of 2 t yr.
## Additional indexing

Item Type: Journal Article, refereed, original work
Faculty: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Language: English
Date: 2017
Deposited On: 12 Feb 2018 16:17
Last Modified: 19 Feb 2018 11:14
Publisher: American Physical Society
ISSN: 2469-9985
OA Status: Green
Publisher DOI: https://doi.org/10.1103/PhysRevC.95.024605
http://mathhelpforum.com/calculus/34010-representation-arccos-taylor-expansio.html
# Thread: representation of arccos as a Taylor expansion

1. ## representation of arccos as a Taylor expansion

I'm having trouble expressing arccos(x) as a Maclaurin series. I realized that once you differentiate it, the function will just expand rapidly, so is there an easy way of approaching this problem?

2. ## Maybe this is what you are talking about

Originally Posted by lllll
I'm having trouble expressing arccos(x) as a Maclaurin series. I realized that once you differentiate it, the function will just expand rapidly, so is there an easy way of approaching this problem.

but did you consider using a similar method to your last one by saying $\arccos(x)=\int{\frac{-1}{\sqrt{1-x^2}}}\,dx$?

3. Originally Posted by lllll
so is there an easy way of approaching this problem.

Yes. Have you considered the Binomial series - Wikipedia, the free encyclopedia?

4. ## Just wondering

Originally Posted by Krizalid
This is what I was implying in my post... do what I said, then say $(1-x^2)^{\alpha}$ with $\alpha=\frac{-1}{2}$ in this case... so does that work, or is the binomial expansion limited to $x$ and not $x^{y}, y>1$?

5. Originally Posted by lllll
I'm having trouble expressing arccos(x) as a Maclaurin series. I realized that once you differentiate it, the function will just expand rapidly, so is there an easy way of approaching this problem.

Yes. $f(x)=\cos^{-1}(x)$, then $f'(x)=-\frac{1}{\sqrt{1-x^2}}$; expand this in a power series using the binomial series, then integrate.

$\binom{k}{n}=\frac{k(k-1)(k-2) \cdot \cdot \cdot (k-n+1)}{n!}$

$(1-x^2)^{-1/2}=1+\frac{1}{2}x^2+\frac{\frac{-1}{2}\frac{-3}{2}}{2!}x^4+ \cdot \cdot \cdot$

so, with $k=-\frac{1}{2}$,

$-f'(x)=\sum_{n=0}^{\infty} \binom{k}{n}(-1)^nx^{2n} \iff f'(x)=\sum_{n=0}^{\infty} \binom{k}{n}(-1)^{n+1}x^{2n}$

Integrating both sides we get

$\cos^{-1}(x)=\int f'(x)\,dx=\int \sum_{n=0}^{\infty} \binom{k}{n}(-1)^{n+1}x^{2n}\,dx$

Finally, fixing the constant of integration with $f(0)=\cos^{-1}(0)=\frac{\pi}{2}$,

$f(x)=\frac{\pi}{2}+\sum_{n=0}^{\infty} \binom{k}{n} \frac{(-1)^{n+1}x^{2n+1}}{2n+1}$

6.
Originally Posted by TheEmptySet Yes $(1-x^2)^{-1/2}=1-\frac{1}{2}x^2+\frac{\frac{-1}{2}\frac{-3}{2}}{2!}x^4- \cdot \cdot \cdot =$ how did you get this? 7. Originally Posted by lllll how did you get this? This is newtons extention of the binomial theorem to fractional and negative values $ \binom{k}{n}=\frac{k(k-1)(k-2) \cdot \cdot \cdot (k-n+1)}{n!}$ lets expand a few $\binom{-\frac{1}{2}}{0} =1$ by definition $\binom{-\frac{1}{2}}{1} =\frac{\frac{-1}{2}}{1!}$ $\binom{-\frac{1}{2}}{2} =\frac{\frac{-1}{2}\frac{-3}{2}}{2!}$ $\binom{-\frac{1}{2}}{3} =\frac{\frac{-1}{2}\frac{-3}{2}\frac{-5}{2}}{3!}$ and so on 8. Sorry I think this is what you need $(1+x)^k=\sum_{n=0}^{\infty}\binom{k}{n}x^n$ so $(1+(-x^2))^k=\sum_{n=0}^{\infty}\binom{k}{n}(-x^2)^n=\sum_{n=0}^{\infty}\binom{k}{n}(-1)^nx^{2n}$
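As a quick numerical sanity check on the thread's approach (my own sketch, not from the forum): the coefficients $\binom{-1/2}{n}(-1)^n$ equal $\binom{2n}{n}/4^n$, and integrating term by term with the constant of integration $\pi/2$ reproduces the library arccos inside $(-1,1)$.

```python
import math

def arccos_series(x, terms=60):
    """Maclaurin series discussed in the thread:
    arccos(x) = pi/2 - sum_{n>=0} C(2n, n)/4^n * x^(2n+1)/(2n+1),
    valid for |x| < 1; the pi/2 is the constant of integration."""
    total = 0.0
    for n in range(terms):
        coeff = math.comb(2 * n, n) / 4**n   # equals (-1)^n * binom(-1/2, n)
        total += coeff * x ** (2 * n + 1) / (2 * n + 1)
    return math.pi / 2 - total

# Compare with the library arccos at a few points inside (-1, 1).
for x in (0.0, 0.3, -0.5, 0.8):
    assert abs(arccos_series(x) - math.acos(x)) < 1e-9
```

Convergence slows as $|x| \to 1$, which is why the sample points stay away from the endpoints.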
2016-10-25 10:33:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9553366899490356, "perplexity": 1394.8145467244844}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720026.81/warc/CC-MAIN-20161020183840-00302-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.phoca.cz/forum/viewtopic.php?f=44&p=147815&sid=2eecbad6a05a43656e477abbd65c48a5
## Ideas and fixes

hello.world (Phoca Newbie, Posts: 7, Joined: 18 Aug 2017, 13:29)

### Ideas and fixes

Hello. I use the Phoca Cart extension and want to share a couple of ideas and fixes. Thanks to the author for the Phoca extensions. They are fine and flexible.

Last edited by hello.world on 21 Aug 2017, 14:07, edited 1 time in total.

### Re: Ideas and fixes

Search module:

Bug: searching with several words doesn't work, because the same variable $wheres is used both inside and outside the foreach. Also, if the variable $in contains spaces, the search ignores the search query and shows all products.

Fixed code:

Code: Select all

```php
$words  = explode(' ', $in);
$wheres = array();
foreach ($words as $word) {
    if (!$word = trim($word)) {
        continue;
    }
    $word      = $db->quote('%' . $db->escape($word, true) . '%', false);
    $_wheres   = array();
    $_wheres[] = 'a.title LIKE ' . $word;
    $_wheres[] = 'a.alias LIKE ' . $word;
    $_wheres[] = 'a.metakey LIKE ' . $word;
    $_wheres[] = 'a.metadesc LIKE ' . $word;
    $_wheres[] = 'a.description LIKE ' . $word;
    $wheres[]  = implode(' OR ', $_wheres);
}
```

### Re: Ideas and fixes

Currently I want to have the right column only on the category page and the items page, and not on the categories listing, the product page, or the checkout page. But Joomla sees all of those pages as one Phoca Cart component. I propose adding a section with text fields to the component configuration: Categories List, Category View, Items Page, Product Page, etc. In those fields one could set the position names that should be disabled for each page type.

For now I did it separately for the product page in the view file components\com_phocacart\views\item\tmpl\default.php:

Code: Select all

```php
foreach (array('position-3', 'position-6', 'position-8') as $position) {
    $modules = JModuleHelper::getModules($position);
    foreach ($modules as $module) {
        $module->position = 'empty';
    }
}
```

I also record in the registry that this is the product page:

Code: Select all

```php
$registry = \Joomla\Registry\Registry::getInstance('pc');
$registry->def('is_product_page', true);
```

So I can check it in other places and hide or show specific elements on the product page.

### Re: Ideas and fixes

Phoca Filter and Search have their own page with products (the items page). But if you open a product page and click the category "Back" button, you get the category page instead of the previous page with the chosen filters. My current fix for it: in components\com_phocacart\views\item\tmpl\default.php, after the lines

Code: Select all

```php
$linkUp     = JRoute::_(PhocacartRoute::getCategoryRoute($this->category[0]->id, $this->category[0]->alias));
$linkUpText = $this->category[0]->title;
```

I added an additional check of the previous page:

Code: Select all

```php
$current = JUri::getInstance();
$referer = JUri::getInstance();
$referer->parse(isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '');
if ($referer->getHost() === $current->getHost()
    && false !== mb_strpos($referer->getPath(), '/items')
    && ($referer->getVar('s') || $referer->getVar('search'))
) {
    $linkUp      = JRoute::_($referer);
    $linkUpText .= ' - back to search results';
}
```

### Re: Ideas and fixes

The extension allows sorting the groups of specifications on the product page, but it doesn't sort the specifications inside a group, so their order can change between product pages. It would be better to add additional options for every specification:

1) Rows Ordering (by title, by alias, by value, etc.)
2) Show in Filter (Yes or No)
3) Use for Search (Yes or No)
4) Default options shown when creating a new product with specifications.

Currently it is hard to configure the values and aliases of specifications for the Product Filter. My temporary solution, so that specifications are sorted on the product page: in components\com_phocacart\views\item\tmpl\default.php, after the lines

Code: Select all

```php
foreach ($this->t['specifications'] as $k => $v) {
    if (isset($v[0]) && $v[0] != '') {
        $tabO .= '<h4 class="ph-spec-group-title">' . $v[0] . '</h4>';
        unset($v[0]);
    }
    if (!empty($v)) {
```

I added:

Code: Select all

```php
uasort($v, function ($a, $b) use ($k) {
    if (2 === $k) { // Specification with ID 2 is sorted by value.
        return $a['value'] > $b['value'];
    } else {        // Other specifications are sorted by title.
        return $a['title'] > $b['title'];
    }
});
```

### Re: Ideas and fixes

On the product listing: if you enable the icons (add to cart, wishlist, compare) that show on hover and disable the standard "Add to cart" button, the icon stops working. I use the ajax method with popup.

Jan (Phoca Hero, Posts: 38980, Joined: 10 Nov 2007, 18:23, Location: Czech Republic)

### Re: Ideas and fixes

Hi, thank you very much for your ideas.

1) Search module: I will take a look at it.

2) Module position: maybe it is better to modify the module so it is not displayed on some views, for example:

Code: Select all

```php
$app    = JFactory::getApplication();
$option = $app->input->get('option', '', 'string');
$view   = $app->input->get('view', '', 'string');
if ($option == 'com_phocacart' && ($view == 'items' || $view == 'category')) {
    return;
}
```

3) Back button: in fact it is not a "back button" (history) but an "up button", which means going back from the product to its category, not to your previously displayed page. The problem is that you can reach the product view without any history (e.g. from a search engine), so you always have the right link to go up: from product to category (product list) to categories (category list).

4) Specifications: not sure if I understand correctly what you mean by "hard to configure values and aliases of specifications for Product Filter".

5) Product listing, ajax: which version do you use, do you use 9.1 RC? This should work without the standard button. Do you get any javascript error in the console?

Jan

If you find Phoca extensions useful, please support the project.

### Re: Ideas and fixes

Hi,

- Search module - changed in 3.0.0 RC9.2
- Modules - filtering on which page they will be displayed - I think the best way will be to set an option in the module to display in a specific view only (see previous post)
- Disable standard "Add to cart" button - should be fixed in 3.0.0 RC9.2

See:
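For readers who don't use PHP, the essence of the search-module fix discussed in this thread (a fresh per-word OR-group over the columns, collected into an outer list that is later ANDed) can be sketched language-neutrally. This Python sketch is illustrative only: the `quote` callable and column names are stand-ins, not the Phoca Cart or Joomla API.

```python
def build_search_where(query, columns,
                       quote=lambda s: "'%" + s.replace("'", "''") + "%'"):
    """Build one OR-group of LIKE clauses per search word, mirroring the
    fixed module logic: a fresh inner list per word, collected into an
    outer list.  `quote` stands in for $db->quote/$db->escape."""
    wheres = []
    for word in query.split(' '):
        word = word.strip()
        if not word:          # skip empty tokens from repeated spaces
            continue
        quoted = quote(word)
        wheres.append(' OR '.join(f'{col} LIKE {quoted}' for col in columns))
    # AND across words, so every word must match at least one column.
    return ' AND '.join(f'({w})' for w in wheres)

clause = build_search_where('red  shirt', ['a.title', 'a.alias'])
```

Because each word gets its own OR-group and the groups are ANDed, a multi-word query narrows the results instead of being ignored.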
2017-11-20 19:13:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22495180368423462, "perplexity": 12240.049941818828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806124.52/warc/CC-MAIN-20171120183849-20171120203849-00305.warc.gz"}
https://www.jiskha.com/questions/587939/a-set-of-five-marbles-is-selected-without-replacement-from-a-bag-that-contains-4-blue
# probability

A set of five marbles is selected (without replacement) from a bag that contains 4 blue marbles, 3 green marbles, 6 yellow marbles, and 7 white marbles. How many sets of five marbles contain no more than two yellow ones?

1. 1/5

2. A bag contains 3 blue marbles, 9 green marbles, and 11 yellow marbles. Twice you draw a marble and replace it. Find P(blue, then green).
27/529
27/23
15/529
12/23

## Similar Questions

1. ### math

A bag contains 5 red marbles, 6 white marbles, and 8 blue marbles. You draw 5 marbles out at random, without replacement. What is the probability that all the marbles are red? The probability that all the marbles are red is? What

2. ### Math

A bag contains 9 red marbles, 8 white marbles, and 6 blue marbles. You draw 4 marbles out at random, without replacement. What is the probability that all the marbles are red? 1 The probability that all the marbles are red is...?

3. ### Math liberal Arts

A bag contains 5 red marbles, 4 blue marbles, and 1 green marble. If a marble is selected at random, what is the probability that it is not blue?

4. ### probability

A bag contains 8 red marbles, 2 blue marbles, and 1 green marble. What is the probability that a randomly selected marble is not blue?

1. ### Math

The ratio of white marbles to blue marbles in Connie's bag of marbles is equal to 2:3. There are more than 20 marbles in the bag. What is a possible number of white marbles and blue marbles in the bag?

2. ### Probability

Jeff has 8 red marbles, 6 blue marbles, and 4 green marbles that are the same size and shape. He puts the marbles into a bag, mixes the marbles, and randomly picks one marble. What is the probability that the marble will be blue?

3. ### Math

1. Which of the following are independent events?
Flipping a coin and rolling a number cube***
Choosing two marbles without replacement
Spinning a spinner twice
Choosing a card, replacing it, and then choosing another card***
2. A

4. ### Math

A bag is filled with green and blue marbles. There are 113 marbles in the bag. If there are 21 more green marbles than blue marbles, find the number of green marbles and the number of blue marbles in the bag.

1. ### statistics

A bag contains 7 red marbles, 6 white marbles, and 10 blue marbles. You draw 4 marbles out at random, without replacement. What is the probability that all the marbles are red? What is the probability that exactly two of the

2. ### Math

A bag contains 4 green marbles, 6 red marbles, and 2 white marbles. Three marbles are drawn at random with replacement. With replacement means that after a marble is drawn, it is replaced before the next one is drawn. Find the

3. ### math

A bag contains 8 red marbles, 5 blue marbles, 8 yellow marbles, and 6 green marbles. What is the probability of choosing a red marble if a single choice is made from the bag? is it 8/27 ?

4. ### math

Suppose a bag contains 3 blue and 2 green marbles. You randomly draw 2 marbles without replacement. If B=drawing a blue marble and G=drawing a green marble, which event does NOT have the same probability as the others? Bag A. GG
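The two questions at the top of the page can be settled by direct counting (a sketch of my own, not from the site). For the first, pick $k$ yellows ($k=0,1,2$) and the remaining $5-k$ marbles from the 14 non-yellow ones; for the second, multiply the two independent draw probabilities.

```python
from math import comb
from fractions import Fraction

# Bag: 4 blue + 3 green + 6 yellow + 7 white = 20 marbles; choose 5.
# Sets with no more than two yellow: choose k yellows (k = 0, 1, 2)
# and the other 5 - k marbles from the 14 non-yellow marbles.
favorable = sum(comb(6, k) * comb(14, 5 - k) for k in range(3))
assert favorable == 13468   # 2002 + 6006 + 5460

# Second question: draw with replacement from 3 blue + 9 green + 11 yellow.
p_blue_then_green = Fraction(3, 23) * Fraction(9, 23)
assert p_blue_then_green == Fraction(27, 529)
```

So the correct choice for the second question is 27/529, and the posted "1/5" answer to the first question is not a count at all.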
2020-11-28 19:12:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8269644975662231, "perplexity": 429.01761371472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195745.90/warc/CC-MAIN-20201128184858-20201128214858-00656.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-3x-2-2x-2-3
# How do you simplify 3x^2•(2x^2)^3?

Mar 22, 2018

$24 {x}^{8}$

#### Explanation:

$3 {x}^{2} \cdot {\left(2 {x}^{2}\right)}^{3}$

$\Rightarrow 3 {x}^{2} \cdot 8 {x}^{6}$ (law of indices: ${\left({a}^{m}\right)}^{n} = {a}^{m n}$)

$\Rightarrow 24 {x}^{8}$ (law of indices: ${a}^{m} \cdot {a}^{n} = {a}^{m + n}$)

That's all. Hope this helps :)
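The two index laws can be sanity-checked numerically (a quick sketch of my own): the simplified form must agree with the original expression at any sample point.

```python
# Confirm 3x^2 * (2x^2)^3 == 24x^8 at a few sample points,
# including negative and fractional values of x.
for x in (-2.0, 0.5, 1.0, 3.0):
    lhs = 3 * x**2 * (2 * x**2) ** 3   # original expression
    rhs = 24 * x**8                    # simplified form
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

Agreement at several points is not a proof, but it catches exactly the kind of slip the second "law of indices" annotation had (writing $a^m + a^n$ where $a^m \cdot a^n$ is meant).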
2020-01-29 08:11:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9597440361976624, "perplexity": 8024.055444390995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251789055.93/warc/CC-MAIN-20200129071944-20200129101944-00208.warc.gz"}
https://math.stackexchange.com/questions/4112623/category-of-elements-and-terminal-object
# Category of Elements and Terminal Object

I am dealing with the following problem:

Consider a locally small category $$\mathscr{C}$$ and consider $$C\in\mathscr{C}$$. What are the objects and morphisms in the category of elements of $$\mathscr{C}(-,C)$$? Also, find a terminal object of $$\int \mathscr{C}(-,C)$$.

I think the objects of $$\int\mathscr{C}(-,C)$$ are the pairs $$(D,x)$$ where $$D\in\mathscr{C}$$ and $$x\in \mathscr{C}(D,C)$$, and the morphisms of $$\int\mathscr{C}(-,C)$$ are the maps $$f:(D,x)\to(E,y)$$ where $$f:D\to E$$ in $$\mathscr{C}$$ such that $$\mathscr{C}(-,C)f(y)=x$$. I can't really find the terminal object though.

• The category of elements is just the slice category $\mathscr{C}/C$. Apr 22 '21 at 17:47
• I can only think of one "natural" object... is it terminal? Apr 22 '21 at 18:16
• Just try and see! Suppose you have an object $(D, x)$. Is there at least one morphism $(D, x) \to (C, \textrm{id}_C)$? Is it forced to be something specific? Apr 22 '21 at 21:52
• @SummerAtlas Yes, I second Zhen Lin. Write down what it means for $f:D\to C$ to be a morphism $(D,x)\to(C,id_C)$ and it should be clear that there is a unique $f$ that does the job. Apr 22 '21 at 22:19
• I wouldn't write "$f(y)=x$" (which suggests a set function taking values), but rather $x=y\circ f$ (indicating composition). Apr 22 '21 at 23:11

So here's the solution I figured out from the hints in the comments. We work with the contravariant functor for this problem.

The objects in the category of elements of $$\mathscr{C}(-,C)$$ are the pairs $$(D,x)$$ where $$D\in\mathscr{C}$$ and $$x\in \mathscr{C}(D,C)$$. The morphisms in the category of elements of $$\mathscr{C}(-,C)$$ are the maps $$f:(D,x)\to(E,y)$$ where $$f:D\to E$$ in $$\mathscr{C}$$ such that $$\mathscr{C}(-,C)f(y)=x$$, which is equivalent to $$y\circ f = x$$ by definition of the Hom functor.

We claim that $$(C,\text{id}_C)$$ is a terminal object of $$\int \mathscr{C}(-,C)$$.

Consider an arbitrary $$(D,x)\in \int\mathscr{C}(-,C)$$. First, we show there exists a morphism $$f:(D,x)\to (C,\text{id}_C)$$ in $$\int\mathscr{C}(-,C)$$. Observe that $$D\in\mathscr{C}$$ and $$x\in\mathscr{C}(D,C)$$, so by definition $$x:D\to C$$ is a morphism in $$\mathscr{C}$$. Furthermore, $$\mathscr{C}(-,C)x(\text{id}_C) = \text{id}_C\circ x = x$$ by definition of the Hom functor. So we obtain a morphism $$f:(D,x)\to (C,\text{id}_C)$$ in $$\int\mathscr{C}(-,C)$$.

We now show the uniqueness of the morphism from $$(D,x)$$ to $$(C,\text{id}_C)$$. Indeed, for any two morphisms $$f,g:(D,x)\to (C,\text{id}_C)$$ in the category of elements, we know $$f:D\to C$$ and $$g:D\to C$$ are morphisms in $$\mathscr{C}$$, and $$\text{id}_C\circ f=x=\text{id}_C\circ g$$. Therefore $$f=g$$, so the morphism $$f:(D,x)\to (C,\text{id}_C)$$ is unique.

Concluding from the properties above, $$(C,\text{id}_C)$$ is a terminal object of $$\int\mathscr{C}(-,C)$$.

• Looks good. But you don't need to argue by contradiction. Just show that every morphism from $(D,x)$ to $(C,\mathrm{id}_C)$ is equal to $x : D \to C$. A direct proof is just one line. Remember for other proofs as well that almost always a proof by contradiction should not be the first choice, since it overcomplicates things. Apr 22 '21 at 23:48
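A toy illustration of the argument (my own, not from the thread): view a finite poset as a category, where $\hom(D,E)$ has exactly one morphism when $D \le E$ and none otherwise. Then the category of elements of $\hom(-,C)$ has as objects the $D \le C$, and $(C,\mathrm{id}_C)$ is terminal because there is exactly one morphism from each $(D,x)$ to it.

```python
# Toy model: the poset {0, 1, 2, 3} with <= as a category.
# hom(D, E) = {(D, E)} if D <= E, else empty.
objects = range(4)
C = 2

# Objects of the category of elements of hom(-, C): pairs (D, x)
# with x: D -> C, i.e. exactly the D with D <= C.
elements = [(D, (D, C)) for D in objects if D <= C]

for D, x in elements:
    # Morphisms (D, x) -> (C, id_C) are f: D -> C with id_C . f = x;
    # in a poset that is just the unique arrow D <= C when it exists.
    candidates = [(D, E) for E in objects if D <= E and E == C]
    assert len(candidates) == 1   # existence and uniqueness: terminality
```

Posets are degenerate (all hom-sets are at most singletons), so this only illustrates the bookkeeping, not the general uniqueness argument via $\mathrm{id}_C \circ f = x$.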
2022-01-24 10:16:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9592138528823853, "perplexity": 95.06345085135801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00330.warc.gz"}
https://ncatlab.org/nlab/show/Schwartz-Bruhat+function
# nLab Schwartz-Bruhat function

## Idea

A Schwartz-Bruhat function is a certain type of complex-valued function on a general locally compact Hausdorff abelian group, generalizing the familiar notion of Schwartz function on a space given as a finite product of copies of the real line, of the circle, and a finitely generated abelian group.

## Definitions

The notion of Schwartz-Bruhat function is constructed in stages that parallel developments in the general structure theory of locally compact (Hausdorff) abelian groups. Recall the notion of compactly generated topological group $G$: it means there is a compact neighborhood of the identity which generates $G$ as a group. The structure of a compactly generated abelian Lie group is well-known: it is a product of type $K \times \mathbb{R}^m \times \mathbb{Z}^n$ where $K$ is a compact abelian Lie group (thus of the form $T^p \times F$ where $F$ is a finite abelian group and $T = \mathbb{R}/\mathbb{Z}$ is a circle group). These are often called elementary Lie groups.

###### Definition

Let $G$ be an elementary Lie group of type $K \times \mathbb{R}^m \times \mathbb{Z}^n$ where $K$ is a compact abelian Lie group. A Schwartz-Bruhat function on $G$ is an infinitely differentiable function $f: G \to \mathbb{C}$ that is rapidly decreasing: applications of any polynomial differential operator to $f$ are uniformly bounded in the $\mathbb{R}$- and $\mathbb{Z}$-variables, in the sense that

$\underset{\alpha \in \mathbb{N}^n}{\forall}\;\; \underset{\beta, \gamma \in \mathbb{N}^m}{\forall}\;\; \underset{K_{\alpha, \beta, \gamma} \gt 0}{\exists}\; \left( \underset{(j, x) \in \mathbb{Z}^n \times \mathbb{R}^m}{sup} {\Vert j^\alpha x^\beta \partial_{\gamma} f(x, j) \Vert} \lt K_{\alpha, \beta, \gamma} \right)$

using the usual notations for multi-indices $\alpha, \beta, \gamma$.

Next, any locally compact abelian group is canonically a filtered colimit of the system of its open compactly generated subgroups and open inclusions between them.
In particular, any abelian Lie group is canonically a filtered colimit of its open elementary Lie subgroups. In fact, an abelian Lie group is of the form $A \times \mathbb{R}^m \times T^p$, where $A$ is a discrete abelian group. We may reckon $A$ as a filtered colimit of its finitely generated subgroups $A_\alpha$; taking the product with the locally compact group $\mathbb{R}^m \times T^p$, any abelian Lie group is a filtered colimit of elementary Lie subgroups $A_\alpha \times \mathbb{R}^m \times T^p$.

###### Definition

A Schwartz-Bruhat function on an abelian Lie group $G$ is a continuous function $f: G \to \mathbb{C}$ that is supported on an open elementary Lie subgroup $H$, and whose restriction $f|_H: H \to \mathbb{C}$ is Schwartz-Bruhat in the sense of Definition . (Thus $f$ is identically zero on the complement of $H$, which is a union of open cosets $g + H$.)

Let $\mathcal{S}(G)$ denote the TVS of Schwartz-Bruhat functions on an abelian Lie group $G$. We obtain a functor $\mathcal{S}(-): AbLie^{op} \to TVS$.

Finally, the character group of a compactly generated locally compact abelian group is an abelian Lie group. By applying Pontryagin duality to the statement that a locally compact abelian group is canonically a filtered colimit of compactly generated subgroups, we see that any locally compact abelian group $G$ is canonically an inverse limit of a cofiltered diagram of abelian Lie groups $G_\alpha$:

$G \cong \underset{\longleftarrow}{\lim}_\alpha G_\alpha.$

We may apply the contravariant functor $\mathcal{S}(-)$ to this cofiltered diagram to produce a filtered diagram of Schwartz-Bruhat spaces $\mathcal{S}(G_\alpha)$ of abelian Lie groups. In this notation,

###### Definition

For a locally compact abelian group $G$, the Schwartz-Bruhat space $\mathcal{S}(G)$ is the colimit of the filtered diagram of spaces $\mathcal{S}(G_\alpha)$ defined according to Definition .
In other words, a Schwartz-Bruhat function on $G$ is one that factors through one of its Lie quotients as $G \twoheadrightarrow G_\alpha \stackrel{g}{\to} \mathbb{C}$ where $g: G_\alpha \to \mathbb{C}$ is Schwartz-Bruhat in the sense given for Lie groups, Definition .

## References

The extension of Schwartz functions and tempered distributions on Euclidean spaces $\mathbb{R}^n$ to more general locally compact abelian groups was given by Bruhat:

• F. Bruhat, Distributions sur un groupe localement compact et applications à l'étude des représentations des groupes p-adiques, Bull. Soc. Math. France 89 (1961), 43-75. (pdf)

References to the fact that Schwartz-Bruhat spaces can be presented as direct limits of topological vector spaces frequently appear in the literature, e.g.,

• A. Wawrzyńczyk, On tempered distributions and Bochner-Schwartz theorem on arbitrary locally compact Abelian groups, Colloquium Mathematicae Volume 19 Issue 2 (1968), 305-318. (link)

(However, the precise categorical details seem to be hard to come by, or at least treated in somewhat cavalier fashion.)

Some useful background material on the structure of locally compact Hausdorff abelian groups used in the description above can be found here:

• Dikran Dikranjan, Luchezar Stoyanov, An elementary approach to Haar integration and Pontryagin duality in locally compact abelian groups, Topology and its Applications 158 (2011), 1942–1961. (pdf)

Last revised on February 18, 2018 at 11:19:09. See the history of this page for a list of all contributions to it.
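The rapid-decrease condition in the first definition can be illustrated numerically (my own sketch, not from the nLab page): take the Gaussian-like candidate $f(x,j) = e^{-(x^2+j^2)}$ on $\mathbb{R} \times \mathbb{Z}$ and check that one of the weighted suprema, here $\sup |j^2 x^3 \partial_x f|$, is finite and visibly decaying far from the origin.

```python
import math

# Candidate Schwartz-Bruhat function on the elementary group R x Z
# (assumption for illustration): f(x, j) = exp(-(x^2 + j^2)).
def weighted(x, j):
    f = math.exp(-(x * x + j * j))
    dfdx = -2.0 * x * f            # analytic x-derivative (gamma = 1)
    return abs(j**2 * x**3 * dfdx) # alpha = 2, beta = 3 weighting

# Scan a large grid: x in [-20, 20] with step 0.1, j in [-20, 20].
grid = [(k / 10.0, j) for k in range(-200, 201) for j in range(-20, 21)]
sup = max(weighted(x, j) for x, j in grid)
edge = max(weighted(20.0, j) for j in range(-20, 21))

assert sup < 10.0       # the defining sup is finite for this (alpha, beta, gamma)
assert edge < 1e-100    # and the weighted values collapse at the grid edge
```

A grid scan is of course not a proof of the `sup` bound over all of $\mathbb{R} \times \mathbb{Z}$; it only shows the polynomial weights losing to the exponential decay, which is the content of the definition.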
2020-06-03 05:26:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 45, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8820687532424927, "perplexity": 186.69093401955945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00380.warc.gz"}
http://clay6.com/qa/24810/the-molar-heat-capacity-for-an-ideal-gas-is-
# The molar heat capacity for an ideal gas is?

(A) equal to the product of the molecular weight and specific heat capacity for any process (B) depends only on the nature of the gas at constant volume (C) depends on the process which the gas undergoes (D) all of the above are correct

$C_m = \large\frac{\Delta Q}{n\Delta T}$, where $C_m$ is the molar heat capacity and $n$ the number of moles.

Since $n = m/M$, $\Rightarrow C_m = \large\frac{M \Delta Q}{m \Delta T}$$= M\;C$, where $C$ is the specific heat capacity (per unit mass) and $M$ the molar mass.

Also, $C_m$ depends on the degrees of freedom (at constant volume) and on the nature of the process.
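The relation $C_m = M\,C$ can be illustrated with monatomic helium (a sketch of my own; the numbers are approximate textbook values, treated here as assumptions):

```python
# Relation C_m = M * C, checked against the ideal-gas prediction
# (3/2) R for a monatomic gas at constant volume.
R = 8.314        # gas constant, J/(mol*K)
M_He = 4.0026    # molar mass of helium, g/mol
c_v = 3.116      # specific heat at constant volume, J/(g*K), approximate

C_m_from_specific = M_He * c_v   # molar mass times specific heat
C_m_ideal = 1.5 * R              # (3/2) R, monatomic ideal gas, constant V

assert abs(C_m_from_specific - C_m_ideal) < 0.1
# At constant pressure the same gas has C_m = (5/2) R instead: the molar
# heat capacity depends on the process, as option (C) states.
```

Both routes give about 12.47 J/(mol·K) at constant volume, while the constant-pressure value would be about 20.8 J/(mol·K), consistent with answer (D).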
2017-09-25 16:46:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7957062125205994, "perplexity": 288.0614866641393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818692236.58/warc/CC-MAIN-20170925164022-20170925184022-00224.warc.gz"}
https://docs.belle2.org/record/2534?ln=en
BELLE2-CONF-PH-2021-003

Measurement of the branching fraction for $B^{0} \to \pi^{0} \pi^{0}$ decays reconstructed in 2019–2020 Belle II data

Francis Pham ; Martin Sevior

06 July 2021

Abstract: We report the first reconstruction of the $B^{0} \to \pi^{0} \pi^{0}$ decay mode at Belle II using samples of 2019 and 2020 data that correspond to 62.8 fb$^{-1}$ of integrated luminosity. We find $14.0^{+6.8}_{-5.6}$ signal decays, corresponding to a significance of 3.4 standard deviations, and determine a branching ratio of $\mathcal{B}(B^{0} \to \pi^{0} \pi^{0}) = [0.98^{+0.48}_{-0.39} \pm 0.27] \times 10^{-6}$. The results agree with previous determinations and contribute important information to an early assessment of detector performance and Belle II's potential for future determinations of $\alpha/\phi_2$ using $B \to \pi \pi$ modes.

Keyword(s): charmless ; pi0

Note: 23 pages, 10 figures. Supporting material for Winter 2021 conferences

The record appears in these collections: Conference Submissions > Papers
2021-08-02 14:48:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6256833672523499, "perplexity": 4204.382570806544}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154321.31/warc/CC-MAIN-20210802141221-20210802171221-00033.warc.gz"}
http://mathhelpforum.com/differential-geometry/105582-proper-map-distributions.html
# Math Help - proper map. Distributions

1. ## proper map. Distributions

Hi, I have the following question:

Prove that, if $A$ and $B$ are closed sets in $\mathbb{R}^n$, and the restriction of the map $(x,y)\mapsto x+y$ to $A\times B$ is proper, then $A+B$ is closed.

Def: Let $A_1,\dots A_m$ be closed subsets of $\mathbb{R}^n$. We shall say the restriction of the map $\mu:\mathbb{R}^{nm}\to\mathbb{R}^n,\ \mu(x^{(1)},\dots x^{(m)})=x^{(1)}+\dots +x^{(m)}$ to $A_1\times\dots\times A_m$ is proper if, for any $\delta >0$, there is $\delta '>0$ such that $x^{(j)}\in A_j,\ j=1,\dots m$ and $|x^{(1)}+\dots+x^{(m)}|\leq \delta$ imply that $|x^{(j)}|\leq\delta'$ for $j=1,\dots m$.

Thanks for your help.

PS: edited: in the first line, changed "proper" to "closed". Thanks InvisibleMan

2. And what does "A+B proper" mean?

3. Originally Posted by InvisibleMan
And what does "A+B proper" mean?

Oops, I was wrong; it should say closed instead of proper. Thanks InvisibleMan
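No reply in the thread supplies the argument, so here is one possible sketch (my own, via sequential compactness; details should be checked against the definition above):

```latex
% Sketch: closedness of A, B plus properness of addition on A x B
% implies A + B is closed.
\begin{proof}[Proof sketch]
Let $z \in \overline{A+B}$ and pick $x_k \in A$, $y_k \in B$ with
$x_k + y_k \to z$.  The sums $x_k + y_k$ are eventually bounded by some
$\delta > 0$, so properness of the restriction of $(x,y) \mapsto x+y$
to $A \times B$ yields a $\delta' > 0$ with $|x_k| \le \delta'$ and
$|y_k| \le \delta'$ for all large $k$.  By Bolzano--Weierstrass, pass
to a subsequence along which $x_{k_j} \to x$ and $y_{k_j} \to y$.
Since $A$ and $B$ are closed, $x \in A$ and $y \in B$, and therefore
$z = \lim (x_{k_j} + y_{k_j}) = x + y \in A + B$.
\end{proof}
```

The properness hypothesis is exactly what rules out the standard counterexample where the summands escape to infinity while the sums stay bounded.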
2015-11-26 12:40:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9758135080337524, "perplexity": 1072.2129960219681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447266.73/warc/CC-MAIN-20151124205407-00267-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/140/2/i/b/
# Properties

- Label: 140.2.i.b
- Level: $140$
- Weight: $2$
- Character orbit: 140.i
- Analytic conductor: $1.118$
- Analytic rank: $0$
- Dimension: $2$
- CM: no
- Inner twists: $2$

## Newspace parameters

- Level: $$N = 140 = 2^{2} \cdot 5 \cdot 7$$
- Weight: $$k = 2$$
- Character orbit: $$[\chi] =$$ 140.i (of order $$3$$, degree $$2$$, minimal)

## Newform invariants

- Self dual: no
- Analytic conductor: $$1.11790562830$$
- Analytic rank: $$0$$
- Dimension: $$2$$
- Coefficient field: $$\Q(\sqrt{-3})$$
- Defining polynomial: $$x^{2} - x + 1$$
- Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$
- Coefficient ring index: $$1$$
- Twist minimal: yes
- Sato-Tate group: $\mathrm{SU}(2)[C_{3}]$

## $q$-expansion

Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{6}$$. We also show the integral $$q$$-expansion of the trace form.

$$f(q) = q + ( 3 - 3 \zeta_{6} ) q^{3} + \zeta_{6} q^{5} + ( -1 + 3 \zeta_{6} ) q^{7} -6 \zeta_{6} q^{9} +O(q^{10})$$

$$f(q) = q + ( 3 - 3 \zeta_{6} ) q^{3} + \zeta_{6} q^{5} + ( -1 + 3 \zeta_{6} ) q^{7} -6 \zeta_{6} q^{9} + ( 2 - 2 \zeta_{6} ) q^{11} -6 q^{13} + 3 q^{15} + ( -2 + 2 \zeta_{6} ) q^{17} + ( 6 + 3 \zeta_{6} ) q^{21} + 9 \zeta_{6} q^{23} + ( -1 + \zeta_{6} ) q^{25} -9 q^{27} + 3 q^{29} + ( -2 + 2 \zeta_{6} ) q^{31} -6 \zeta_{6} q^{33} + ( -3 + 2 \zeta_{6} ) q^{35} -8 \zeta_{6} q^{37} + ( -18 + 18 \zeta_{6} ) q^{39} + 5 q^{41} + q^{43} + ( 6 - 6 \zeta_{6} ) q^{45} -8 \zeta_{6} q^{47} + ( -8 + 3 \zeta_{6} ) q^{49} + 6 \zeta_{6} q^{51} + ( -4 + 4 \zeta_{6} ) q^{53} + 2 q^{55} + ( 8 - 8 \zeta_{6} ) q^{59} -7 \zeta_{6} q^{61} + ( 18 - 12 \zeta_{6} ) q^{63} -6 \zeta_{6} q^{65} + ( 3 - 3 \zeta_{6} ) q^{67} + 27 q^{69} + 8 q^{71} + ( -14 + 14 \zeta_{6} ) q^{73} + 3 \zeta_{6} q^{75} + ( 4 + 2 \zeta_{6} ) q^{77} -4 \zeta_{6} q^{79} + ( -9 + 9 \zeta_{6} ) q^{81} - q^{83} -2 q^{85} + ( 9 - 9 \zeta_{6} ) q^{87} -13 \zeta_{6} q^{89} + ( 6 - 18 \zeta_{6} ) q^{91} + 6 \zeta_{6} q^{93} -10 q^{97} -12 q^{99} +O(q^{100})$$
$$\operatorname{Tr}(f)(q) = 2q + 3q^{3} + q^{5} + q^{7} - 6q^{9} + O(q^{10})$$

$$\operatorname{Tr}(f)(q) = 2q + 3q^{3} + q^{5} + q^{7} - 6q^{9} + 2q^{11} - 12q^{13} + 6q^{15} - 2q^{17} + 15q^{21} + 9q^{23} - q^{25} - 18q^{27} + 6q^{29} - 2q^{31} - 6q^{33} - 4q^{35} - 8q^{37} - 18q^{39} + 10q^{41} + 2q^{43} + 6q^{45} - 8q^{47} - 13q^{49} + 6q^{51} - 4q^{53} + 4q^{55} + 8q^{59} - 7q^{61} + 24q^{63} - 6q^{65} + 3q^{67} + 54q^{69} + 16q^{71} - 14q^{73} + 3q^{75} + 10q^{77} - 4q^{79} - 9q^{81} - 2q^{83} - 4q^{85} + 9q^{87} - 13q^{89} - 6q^{91} + 6q^{93} - 20q^{97} - 24q^{99} + O(q^{100})$$

## Character values

We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/140\mathbb{Z}\right)^\times$$.

| $$n$$ | $$57$$ | $$71$$ | $$101$$ |
| --- | --- | --- | --- |
| $$\chi(n)$$ | $$1$$ | $$1$$ | $$-\zeta_{6}$$ |

## Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label.

| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 81.1 | 0.5 + 0.866025i | 0 | 1.50000 − 2.59808i | 0 | 0.500000 + 0.866025i | 0 | 0.500000 + 2.59808i | 0 | −3.00000 − 5.19615i | 0 |
| 121.1 | 0.5 − 0.866025i | 0 | 1.50000 + 2.59808i | 0 | 0.500000 − 0.866025i | 0 | 0.500000 − 2.59808i | 0 | −3.00000 + 5.19615i | 0 |
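As a sanity check (not part of the original page): under the embedding $\zeta_6 = e^{i\pi/3}$, each trace-form coefficient is $a_n$ plus its Galois conjugate. A minimal Python sketch, with a few $a_n$ transcribed from the $q$-expansion above:

```python
import cmath

# One embedding of the coefficient field Q(zeta_6): zeta_6 -> exp(i*pi/3).
z = cmath.exp(1j * cmath.pi / 3)

# A few a_n from the q-expansion, written as polynomials in zeta_6.
a = {3: 3 - 3*z, 5: z, 7: -1 + 3*z, 9: -6*z, 11: 2 - 2*z, 13: -6}

# The trace sums over the two embeddings zeta_6 -> z and zeta_6 -> conj(z).
tr = {n: (v + v.conjugate()).real for n, v in a.items()}

# Matches Tr(f)(q) = 2q + 3q^3 + q^5 + q^7 - 6q^9 + 2q^11 - 12q^13 + ...
expected = {3: 3, 5: 1, 7: 1, 9: -6, 11: 2, 13: -12}
assert all(abs(tr[n] - expected[n]) < 1e-9 for n in expected)
```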
## Inner twists

| Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial |
| 7.c | even | 3 | 1 | inner |

## Twists

By twisting character orbit:

| Char | Parity | Ord | Mult | Type | Twist | Dim |
| --- | --- | --- | --- | --- | --- | --- |
| 1.a | even | 1 | 1 | trivial | 140.2.i.b | 2 |
| 3.b | odd | 2 | 1 |  | 1260.2.s.b | 2 |
| 4.b | odd | 2 | 1 |  | 560.2.q.a | 2 |
| 5.b | even | 2 | 1 |  | 700.2.i.a | 2 |
| 5.c | odd | 4 | 2 |  | 700.2.r.c | 4 |
| 7.b | odd | 2 | 1 |  | 980.2.i.a | 2 |
| 7.c | even | 3 | 1 | inner | 140.2.i.b | 2 |
| 7.c | even | 3 | 1 |  | 980.2.a.a | 1 |
| 7.d | odd | 6 | 1 |  | 980.2.a.i | 1 |
| 7.d | odd | 6 | 1 |  | 980.2.i.a | 2 |
| 21.g | even | 6 | 1 |  | 8820.2.a.k | 1 |
| 21.h | odd | 6 | 1 |  | 1260.2.s.b | 2 |
| 21.h | odd | 6 | 1 |  | 8820.2.a.w | 1 |
| 28.f | even | 6 | 1 |  | 3920.2.a.d | 1 |
| 28.g | odd | 6 | 1 |  | 560.2.q.a | 2 |
| 28.g | odd | 6 | 1 |  | 3920.2.a.bi | 1 |
| 35.i | odd | 6 | 1 |  | 4900.2.a.a | 1 |
| 35.j | even | 6 | 1 |  | 700.2.i.a | 2 |
| 35.j | even | 6 | 1 |  | 4900.2.a.v | 1 |
| 35.k | even | 12 | 2 |  | 4900.2.e.b | 2 |
| 35.l | odd | 12 | 2 |  | 700.2.r.c | 4 |
| 35.l | odd | 12 | 2 |  | 4900.2.e.c | 2 |

By twisted newform orbit:

| Twist | Dim | Char | Parity | Ord | Mult | Type |
| --- | --- | --- | --- | --- | --- | --- |
| 140.2.i.b | 2 | 1.a | even | 1 | 1 | trivial |
| 140.2.i.b | 2 | 7.c | even | 3 | 1 | inner |
| 560.2.q.a | 2 | 4.b | odd | 2 | 1 |  |
| 560.2.q.a | 2 | 28.g | odd | 6 | 1 |  |
| 700.2.i.a | 2 | 5.b | even | 2 | 1 |  |
| 700.2.i.a | 2 | 35.j | even | 6 | 1 |  |
| 700.2.r.c | 4 | 5.c | odd | 4 | 2 |  |
| 700.2.r.c | 4 | 35.l | odd | 12 | 2 |  |
| 980.2.a.a | 1 | 7.c | even | 3 | 1 |  |
| 980.2.a.i | 1 | 7.d | odd | 6 | 1 |  |
| 980.2.i.a | 2 | 7.b | odd | 2 | 1 |  |
| 980.2.i.a | 2 | 7.d | odd | 6 | 1 |  |
| 1260.2.s.b | 2 | 3.b | odd | 2 | 1 |  |
| 1260.2.s.b | 2 | 21.h | odd | 6 | 1 |  |
| 3920.2.a.d | 1 | 28.f | even | 6 | 1 |  |
| 3920.2.a.bi | 1 | 28.g | odd | 6 | 1 |  |
| 4900.2.a.a | 1 | 35.i | odd | 6 | 1 |  |
| 4900.2.a.v | 1 | 35.j | even | 6 | 1 |  |
| 4900.2.e.b | 2 | 35.k | even | 12 | 2 |  |
| 4900.2.e.c | 2 | 35.l | odd | 12 | 2 |  |
| 8820.2.a.k | 1 | 21.g | even | 6 | 1 |  |
| 8820.2.a.w | 1 | 21.h | odd | 6 | 1 |  |

## Hecke kernels

This newform subspace can be constructed as the kernel of the linear operator $$T_{3}^{2} - 3 T_{3} + 9$$ acting on $$S_{2}^{\mathrm{new}}(140, [\chi])$$.
## Hecke characteristic polynomials

| $p$ | $F_p(T)$ |
| --- | --- |
| $2$ | $$T^{2}$$ |
| $3$ | $$9 - 3 T + T^{2}$$ |
| $5$ | $$1 - T + T^{2}$$ |
| $7$ | $$7 - T + T^{2}$$ |
| $11$ | $$4 - 2 T + T^{2}$$ |
| $13$ | $$( 6 + T )^{2}$$ |
| $17$ | $$4 + 2 T + T^{2}$$ |
| $19$ | $$T^{2}$$ |
| $23$ | $$81 - 9 T + T^{2}$$ |
| $29$ | $$( -3 + T )^{2}$$ |
| $31$ | $$4 + 2 T + T^{2}$$ |
| $37$ | $$64 + 8 T + T^{2}$$ |
| $41$ | $$( -5 + T )^{2}$$ |
| $43$ | $$( -1 + T )^{2}$$ |
| $47$ | $$64 + 8 T + T^{2}$$ |
| $53$ | $$16 + 4 T + T^{2}$$ |
| $59$ | $$64 - 8 T + T^{2}$$ |
| $61$ | $$49 + 7 T + T^{2}$$ |
| $67$ | $$9 - 3 T + T^{2}$$ |
| $71$ | $$( -8 + T )^{2}$$ |
| $73$ | $$196 + 14 T + T^{2}$$ |
| $79$ | $$16 + 4 T + T^{2}$$ |
| $83$ | $$( 1 + T )^{2}$$ |
| $89$ | $$169 + 13 T + T^{2}$$ |
| $97$ | $$( 10 + T )^{2}$$ |
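Each $F_p(T)$ here is the characteristic polynomial of $a_p$ over $\mathbb{Q}$, i.e. $T^2 - (a_p + \bar a_p)T + a_p \bar a_p$ for the degree-2 entries. A numeric spot-check (my own sketch; the $a_p$ values are read off the $q$-expansion above):

```python
import cmath

z = cmath.exp(1j * cmath.pi / 3)  # one embedding of zeta_6

# a_p from the q-expansion, paired with the table's F_p(T) = c0 + c1*T + T^2.
cases = {
    3: (3 - 3*z, (9, -3)),    # 9 - 3T + T^2
    5: (z, (1, -1)),          # 1 - T + T^2
    7: (-1 + 3*z, (7, -1)),   # 7 - T + T^2
    37: (-8*z, (64, 8)),      # 64 + 8T + T^2
}
for p, (ap, (c0, c1)) in cases.items():
    norm = (ap * ap.conjugate()).real     # constant term  a_p * conj(a_p)
    trace = (ap + ap.conjugate()).real    # linear term is -(a_p + conj(a_p))
    assert abs(norm - c0) < 1e-9 and abs(-trace - c1) < 1e-9
```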
2021-08-04 16:22:04
https://web2.0calc.com/questions/2x-1-2x-2-7-10
# (2x-1)/(2x+2)=7/10

(2x-1)/(2x+2)=7/10

Sep 30, 2020, edited by Guest Sep 30, 2020

#1
First, multiply both sides by $(2x+2)$ to get $2x-1 = \frac{7}{10} \cdot (2x+2)$ — this is cross multiplying! Now multiply this out and solve for x.
Sep 30, 2020

#2
(2x - 1) / (2x + 2) = 7 / 10
(2x - 1) * 10 = (2x + 2) * 7
20x - 10 = 14x + 14
20x - 14x = 14 + 10
6x = 24
x = 4
Oct 1, 2020
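A quick check of the worked answer with exact rational arithmetic (a sketch, not from the thread):

```python
from fractions import Fraction

# Cross-multiplying: 10*(2x - 1) = 7*(2x + 2)  ->  6x = 24  ->  x = 4.
x = Fraction(24, 6)
assert x == 4

# Substitute back to confirm the original equation holds exactly.
lhs = (2 * x - 1) / (2 * x + 2)
assert lhs == Fraction(7, 10)
```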
2020-10-26 10:35:24
http://umr-math.univ-mlv.fr/evenements/exposes/seminaire_cristolien_d_analyse_multifractale.1478781900
## On circle averages of Gaussian free fields and Liouville quantum gravity

Date: 10/11/2016 - 13:45 - 14:45
Room: I1 223
Speaker: JIN Xiong
Affiliation: University of Manchester, United Kingdom

Abstract: Given a measure on a regular planar domain, the Gaussian multiplicative chaos measure (or its Liouville quantum version) is the random measure obtained as the weak limit of the exponential of circle averages of the Gaussian free field, weighted by the original measure. We investigate some dimensional and geometric properties of these random measures. We show that if the original measure is exact-dimensional then so is the random measure. We also show that when the dimension of the random measure is large enough, its orthogonal projections onto one-dimensional subspaces are absolutely continuous w.r.t. Lebesgue measure in every direction, and it has positive Fourier dimension.
2017-08-16 23:56:58
https://math.stackexchange.com/questions/1754556/how-do-i-get-the-rational-canonical-form-from-the-minimal-and-characteristic-pol
# How do I get the Rational Canonical Form from the minimal and characteristic polynomials?

Let's say I have the minimal polynomial and characteristic polynomial of a matrix and its full invariant factor decomposition. How do I get a rational canonical form matrix from this?

For a matrix $A$ the invariant factors are determined by finding the Smith canonical form equivalent to $(xI - A)$. Let's say $$S(x) = \text{Dg}[1,\ldots,1,f_1(x),\ldots,f_k(x)]$$ is the Smith canonical form. As mentioned elsewhere, $f_1(x),f_2(x),\ldots,f_k(x)$ are all monic and each lower-index monic polynomial divides the following. These are the invariant factors. (Also note that $f_k(x)$ is the minimum polynomial.)

Now, it is true that $A$ is similar to Dg$[C(f_1(x)),\ldots,C(f_k(x))]$ where $C(f_i(x))$ denotes the companion matrix associated with $f_i(x)$, but this is not the Rational Canonical Form, to the best of my knowledge — not according to my textbook, anyway.

The rational canonical form is derived by finding the elementary divisors of $A$. Take any invariant factor $f_i(x)$; then we can write it as $p_1(x)^{e_{i1}}p_2(x)^{e_{i2}}\cdots p_t(x)^{e_{it}}$ where the $p_j(x)$ are distinct, monic and irreducible. The prime powers $p_j(x)^{e_{ij}}$ are then the elementary divisors of $A$. The rational canonical form is constructed from these elementary divisors as $$\text{Dg}[\ldots,H(p_1(x)^{e_{i1}}),H(p_2(x)^{e_{i2}}),\ldots,H(p_t(x)^{e_{it}}),\ldots]$$ where $$H(p(x)^e) = \begin{bmatrix} C(p(x)) & 0 & \cdots & 0 & 0 \\ N & C(p(x)) & \cdots & 0 & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & \cdots & N & C(p(x)) \end{bmatrix}.$$ Here $N$ is a matrix that is all zero except for a 1 in the upper right hand corner, and the companion matrix $C(p(x))$ is repeated $e$ times. This is known as a hypercompanion matrix. Note that the rational canonical form reduces to the Jordan canonical form when the field is algebraically closed. 
Let's look at an example: suppose you have invariant factors $f_1(x) = (x^2 + 4)(x^2 - 3)$ and $f_2(x) = (x^2 + 4)^2(x^2 - 3)^2$, and suppose the field we are considering is $\mathbb{Q}$. Then the elementary divisors are $(x^2 + 4),(x^2 - 3),(x^2 + 4)^2$ and $(x^2 - 3)^2$ and so the rational canonical form is $$\text{Dg}\left [ \begin{bmatrix}0&-4\\1&0 \end{bmatrix}, \begin{bmatrix}0&3\\1&0 \end{bmatrix},\begin{bmatrix}0&-4&0&0\\1&0&0&0\\0&1&0&-4\\0&0&1&0 \end{bmatrix},\begin{bmatrix}0&3&0&0\\1&0&0&0\\0&1&0&3\\0&0&1&0 \end{bmatrix}\right ].$$ My source material for this theory: CG Cullen, Matrices and linear transformations (2nd edition), the chapter named: Similarity: Part II. There are more examples and in-depth discussion there... I am not sure whether the terminology is uniform here. The invariant factors are certain monic, non-constant polynomials $f_{i}$ such that $$f_{1} \mid f_{2} \mid \dots \mid f_{k},$$ with $f_{k}$ the minimal polynomial, and $f_{1} f_{2} \dots f_{k}$ the characteristic polynomial. Take companion matrices $A_{i}$ of each $f_{i}$. Your rational canonical form will be $$\begin{bmatrix} A_{1}\\ & A_{2}\\ & & A_{3} \\ & & & \ddots\\ & & & & A_{k}\\ \end{bmatrix},$$ where blanks denote zeros. • So each individual invariant factor has its own companion matrix?? This is where I've been messing up then. I've been taking 1 large companion matrix of the characteristic polynomial and was unsure why. – JustCurious Apr 22 '16 at 18:59 • Yes. The relevant Wikipedia article gives some more details. – Andreas Caranti Apr 22 '16 at 19:05
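To make the construction concrete, here is a small sketch (my own, not from Cullen) that builds companion and hypercompanion matrices as plain Python lists, following the sign convention of the examples above (coefficients given as `[c_0, ..., c_{n-1}]` for a monic polynomial):

```python
def companion(coeffs):
    """Companion matrix C(p) for monic p(x) = x^n + c_{n-1}x^{n-1} + ... + c_0,
    with coeffs = [c_0, ..., c_{n-1}]: 1s on the subdiagonal, negated
    coefficients down the last column."""
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1
    for i in range(n):
        C[i][n - 1] = -coeffs[i]
    return C

def hypercompanion(coeffs, e):
    """H(p(x)^e): e copies of C(p) on the block diagonal, with the N block
    (a single 1 in its upper-right corner) just below the diagonal."""
    n = len(coeffs)
    H = [[0] * (e * n) for _ in range(e * n)]
    Cp = companion(coeffs)
    for b in range(e):
        for i in range(n):
            for j in range(n):
                H[b * n + i][b * n + j] = Cp[i][j]
        if b > 0:
            H[b * n][b * n - 1] = 1   # the 1 of the N block
    return H

# p(x) = x^2 + 4  ->  coeffs [4, 0]
print(companion([4, 0]))          # [[0, -4], [1, 0]]
print(hypercompanion([4, 0], 2))
# [[0, -4, 0, 0], [1, 0, 0, 0], [0, 1, 0, -4], [0, 0, 1, 0]]
```

Both outputs match the blocks in the example: `companion([-3, 0])` likewise gives the `[[0, 3], [1, 0]]` block for $x^2 - 3$.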
2019-08-18 14:59:35
https://www.zbmath.org/authors/?q=ai%3Amartinsson.per-gunnar
zbMATH — the first resource for mathematics Martinsson, Per-Gunnar Compute Distance To: Author ID: martinsson.per-gunnar Published as: Martinsson, Per-Gunnar; Martinsson, P. G. External Links: MGP Documents Indexed: 46 Publications since 2001, including 1 Book all top 5 Co-Authors 7 single-authored 10 Gillman, Adrianna 9 Rokhlin, Vladimir 6 Hao, Sijia 5 Tygert, Mark 5 Young, Patrick M. 2 Babb, Tracy 2 Babuška, Ivo 2 Barnett, Alex H. 2 Halko, Nathan 2 Heavner, Nathan 2 Quintana-Ortí, Gregorio 2 Rodin, Gregory Jl 2 Voronin, Sergeĭ Mikhaĭlovich 1 Betcke, Timo 1 Börm, Steffen 1 Bremer, James C. 1 Cheng, Hongwei 1 Cornea, Emil 1 Corona, Eduardo 1 Gimbutas, Zydrunas 1 Greengard, Leslie F. 1 Gueyffier, Denis 1 Haut, Terry S. 1 Howard, Ralph E. 1 Le Borne, Sabine 1 Liberty, Edo 1 Movchan, Alexander B. 1 Shkolnisky, Yoel 1 Tropp, Joel A. 1 van de Geijn, Robert Alexander 1 Wingate, Beth A. 1 Woolfe, Franco 1 Wu, Bowei 1 Zorin, Denis N. all top 5 Serials 7 Journal of Computational Physics 7 SIAM Journal on Scientific Computing 4 Advances in Computational Mathematics 3 BIT 2 Computers & Mathematics with Applications 2 Journal of Computational and Applied Mathematics 2 Applied and Computational Harmonic Analysis 2 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Archive for Rational Mechanics and Analysis 1 IMA Journal of Numerical Analysis 1 Quarterly Journal of Mechanics and Applied Mathematics 1 ACM Transactions on Mathematical Software 1 SIAM Journal on Matrix Analysis and Applications 1 Journal of Scientific Computing 1 Differential and Integral Equations 1 M$$^3$$AS. 
Mathematical Models & Methods in Applied Sciences 1 Proceedings of the National Academy of Sciences of the United States of America 1 SIAM Review 1 Acta Numerica 1 Oberwolfach Reports 1 CBMS-NSF Regional Conference Series in Applied Mathematics 1 Communications in Applied Mathematics and Computational Science 1 Frontiers of Mathematics in China all top 5 Fields 39 Numerical analysis (65-XX) 19 Partial differential equations (35-XX) 4 Linear and multilinear algebra; matrix theory (15-XX) 4 Integral equations (45-XX) 4 Mechanics of deformable solids (74-XX) 3 Difference and functional equations (39-XX) 3 Computer science (68-XX) 3 Fluid mechanics (76-XX) 2 Optics, electromagnetic theory (78-XX) 1 General and overarching topics; collections (00-XX) 1 Potential theory (31-XX) 1 Approximations and expansions (41-XX) 1 Probability theory and stochastic processes (60-XX) 1 Statistics (62-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Statistical mechanics, structure of matter (82-XX) Citations contained in zbMATH Open 43 Publications have been cited 1,260 times in 688 Documents Cited by Year Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. Zbl 1269.65043 Halko, N.; Martinsson, P. G.; Tropp, J. A. 2011 Randomized algorithms for the low-rank approximation of matrices. Zbl 1215.65080 Liberty, Edo; Woolfe, Franco; Martinsson, Per-Gunnar; Rokhlin, Vladimir; Tygert, Mark 2007 A fast direct solver for boundary integral equations in two dimensions. Zbl 1078.65112 Martinsson, P. G.; Rokhlin, V. 2005 On the compression of low rank matrices. Zbl 1083.65042 Cheng, H.; Gimbutas, Z.; Martinsson, P. G.; Rokhlin, V. 2005 A randomized algorithm for the decomposition of matrices. Zbl 1210.65095 Martinsson, Per-Gunnar; Rokhlin, Vladimir; Tygert, Mark 2011 Fast direct solvers for integral equations in complex three-dimensional domains. 
Zbl 1176.65141 Greengard, Leslie; Gueyffier, Denis; Martinsson, Per-Gunnar; Rokhlin, Vladimir 2009 A fast randomized algorithm for computing a hierarchically semiseparable representation of a matrix. Zbl 1237.65028 Martinsson, P. G. 2011 A direct solver with $$O(N)$$ complexity for integral equations on one-dimensional domains. Zbl 1262.65198 Gillman, Adrianna; Young, Patrick M.; Martinsson, Per-Gunnar 2012 A fast direct solver for a class of elliptic partial differential equations. Zbl 1203.65066 Martinsson, Per-Gunnar 2009 High-order accurate methods for Nyström discretization of integral equations on smooth curves in the plane. Zbl 1300.65093 Hao, S.; Barnett, A. H.; Martinsson, P. G.; Young, P. 2014 An $$O(N)$$ direct solver for integral equations on the plane. Zbl 1307.65180 Corona, Eduardo; Martinsson, Per-Gunnar; Zorin, Denis 2015 An accelerated kernel-independent fast multipole method in one dimension. Zbl 1154.65318 Martinsson, P. G.; Rokhlin, V. 2007 A fast algorithm for the inversion of general Toeplitz matrices. Zbl 1087.65025 Martinsson, P. G.; Rokhlin, V.; Tygert, M. 2005 A spectrally accurate direct solution technique for frequency-domain scattering problems with variable media. Zbl 1312.65201 Gillman, Adrianna; Barnett, Alex H.; Martinsson, Per-Gunnar 2015 Vibrations of lattice structures and phononic band gaps. Zbl 1044.74020 Martinsson, P. G.; Movchan, A. B. 2003 A fast direct solver for scattering problems involving elongated structures. Zbl 1111.65109 Martinsson, P. G.; Rokhlin, V. 2007 Asymptotic expansions of lattice Green’s functions. Zbl 1022.39022 Martinsson, Per-Gunnar; Rodin, Gregory J. 2002 A direct solver with $$O(N)$$ complexity for variable coefficient elliptic PDEs discretized via a high-order composite spectral collocation method. Zbl 1303.65099 Gillman, A.; Martinsson, P. G. 2014 A direct solver for variable coefficient elliptic PDEs discretized via a composite spectral collocation method. Zbl 1297.65169 Martinsson, P. G. 
2013 On interpolation and integration in finite-dimensional spaces of bounded functions. Zbl 1111.65010 Martinsson, Per-Gunnar; Rokhlin, Vladimir; Tygert, Mark 2006 A high-order accurate accelerated direct solver for acoustic scattering from surfaces. Zbl 1317.65243 Bremer, James; Gillman, Adrianna; Martinsson, Per-Gunnar 2015 A high-order Nyström discretization scheme for boundary integral equations defined on rotationally symmetric surfaces. Zbl 1250.65146 Young, P.; Hao, S.; Martinsson, P. G. 2012 A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices. Zbl 1352.65095 Martinsson, Per-Gunnar; Voronin, Sergey 2016 An $$O(N)$$ algorithm for constructing the solution operator to 2D elliptic boundary value problems in the absence of body loads. Zbl 1295.65107 2014 Householder QR factorization with randomization for column pivoting (HQRRP). Zbl 1365.65070 Martinsson, Per-Gunnar; Quintana Ortí, Gregorio; Heavner, Nathan; van de Geijn, Robert 2017 Efficient algorithms for CUR and interpolative matrix decompositions. Zbl 1369.65049 Voronin, Sergey; Martinsson, Per-Gunnar 2017 A simplified technique for the efficient and highly accurate discretization of boundary integral equations in 2D on domains with corners. Zbl 1349.65660 Gillman, A.; Hao, S.; Martinsson, P. G. 2014 Fast and accurate numerical methods for solving elliptic difference equations defined on lattices. Zbl 1203.65280 Gillman, A.; Martinsson, P. G. 2010 A fast solver for Poisson problems on infinite regular lattices. Zbl 1294.65104 Gillman, A.; Martinsson, P. G. 2014 Mechanics of materials with periodic truss or frame micro-structures. Zbl 1140.74532 Martinsson, Per-Gunnar; Babuška, Ivo 2007 Homogenization of materials with periodic truss or frame micro-structures. Zbl 1115.74040 Martinsson, P. G.; Babuška, Ivo 2007 An efficient and highly accurate solver for multi-body acoustic scattering problems involving rotationally symmetric scatterers. 
Zbl 1360.76253 Hao, S.; Martinsson, P. G.; Young, P. 2015 Boundary algebraic equations for lattice problems. Zbl 1186.65154 Martinsson, Per-Gunnar; Rodin, Gregory J. 2009 Fast evaluation of electro-static interactions in multi-phase dielectric media. Zbl 1079.78007 Martinsson, Per-Gunnar 2006 An algorithm for the principal component analysis of large data sets. Zbl 1232.65058 Halko, Nathan; Martinsson, Per-Gunnar; Shkolnisky, Yoel; Tygert, Mark 2011 A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator. Zbl 1433.65190 Haut, T. S.; Babb, T.; Martinsson, P. G.; Wingate, B. A. 2016 Compressing rank-structured matrices via randomized sampling. Zbl 1342.65211 Martinsson, Per-Gunnar 2016 Fast direct solvers for elliptic PDEs. Zbl 1447.65003 Martinsson, Per-Gunnar 2020 A direct solver for elliptic PDEs in three dimensions based on hierarchical merging of Poincaré-Steklov operators. Zbl 1346.65062 Hao, Sijia; Martinsson, Per-Gunnar 2016 An accelerated Poisson solver based on multidomain spectral discretization. Zbl 1405.65156 Babb, Tracy; Gillman, Adrianna; Hao, Sijia; Martinsson, Per-Gunnar 2018 Numerical homogenization via approximation of the solution operator. Zbl 1246.65227 Gillman, Adrianna; Young, Patrick; Martinsson, Per-Gunnar 2012 randUTV: a blocked randomized algorithm for computing a rank-revealing UTV factorization. Zbl 07119123 Martinsson, P. G.; Quintana-Ortí, G.; Heavner, N. 2019 Randomized methods for matrix computations. Zbl 1448.68010 Martinsson, Per-Gunnar 2018 Fast direct solvers for elliptic PDEs. Zbl 1447.65003 Martinsson, Per-Gunnar 2020 randUTV: a blocked randomized algorithm for computing a rank-revealing UTV factorization. Zbl 07119123 Martinsson, P. G.; Quintana-Ortí, G.; Heavner, N. 2019 An accelerated Poisson solver based on multidomain spectral discretization. 
Zbl 1405.65156 Babb, Tracy; Gillman, Adrianna; Hao, Sijia; Martinsson, Per-Gunnar 2018 Randomized methods for matrix computations. Zbl 1448.68010 Martinsson, Per-Gunnar 2018 Householder QR factorization with randomization for column pivoting (HQRRP). Zbl 1365.65070 Martinsson, Per-Gunnar; Quintana Ortí, Gregorio; Heavner, Nathan; van de Geijn, Robert 2017 Efficient algorithms for CUR and interpolative matrix decompositions. Zbl 1369.65049 Voronin, Sergey; Martinsson, Per-Gunnar 2017 A randomized blocked algorithm for efficiently computing rank-revealing factorizations of matrices. Zbl 1352.65095 Martinsson, Per-Gunnar; Voronin, Sergey 2016 A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator. Zbl 1433.65190 Haut, T. S.; Babb, T.; Martinsson, P. G.; Wingate, B. A. 2016 Compressing rank-structured matrices via randomized sampling. Zbl 1342.65211 Martinsson, Per-Gunnar 2016 A direct solver for elliptic PDEs in three dimensions based on hierarchical merging of Poincaré-Steklov operators. Zbl 1346.65062 Hao, Sijia; Martinsson, Per-Gunnar 2016 An $$O(N)$$ direct solver for integral equations on the plane. Zbl 1307.65180 Corona, Eduardo; Martinsson, Per-Gunnar; Zorin, Denis 2015 A spectrally accurate direct solution technique for frequency-domain scattering problems with variable media. Zbl 1312.65201 Gillman, Adrianna; Barnett, Alex H.; Martinsson, Per-Gunnar 2015 A high-order accurate accelerated direct solver for acoustic scattering from surfaces. Zbl 1317.65243 Bremer, James; Gillman, Adrianna; Martinsson, Per-Gunnar 2015 An efficient and highly accurate solver for multi-body acoustic scattering problems involving rotationally symmetric scatterers. Zbl 1360.76253 Hao, S.; Martinsson, P. G.; Young, P. 2015 High-order accurate methods for Nyström discretization of integral equations on smooth curves in the plane. Zbl 1300.65093 Hao, S.; Barnett, A. H.; Martinsson, P. G.; Young, P. 
2014 A direct solver with $$O(N)$$ complexity for variable coefficient elliptic PDEs discretized via a high-order composite spectral collocation method. Zbl 1303.65099 Gillman, A.; Martinsson, P. G. 2014 An $$O(N)$$ algorithm for constructing the solution operator to 2D elliptic boundary value problems in the absence of body loads. Zbl 1295.65107 2014 A simplified technique for the efficient and highly accurate discretization of boundary integral equations in 2D on domains with corners. Zbl 1349.65660 Gillman, A.; Hao, S.; Martinsson, P. G. 2014 A fast solver for Poisson problems on infinite regular lattices. Zbl 1294.65104 Gillman, A.; Martinsson, P. G. 2014 A direct solver for variable coefficient elliptic PDEs discretized via a composite spectral collocation method. Zbl 1297.65169 Martinsson, P. G. 2013 A direct solver with $$O(N)$$ complexity for integral equations on one-dimensional domains. Zbl 1262.65198 Gillman, Adrianna; Young, Patrick M.; Martinsson, Per-Gunnar 2012 A high-order Nyström discretization scheme for boundary integral equations defined on rotationally symmetric surfaces. Zbl 1250.65146 Young, P.; Hao, S.; Martinsson, P. G. 2012 Numerical homogenization via approximation of the solution operator. Zbl 1246.65227 Gillman, Adrianna; Young, Patrick; Martinsson, Per-Gunnar 2012 Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. Zbl 1269.65043 Halko, N.; Martinsson, P. G.; Tropp, J. A. 2011 A randomized algorithm for the decomposition of matrices. Zbl 1210.65095 Martinsson, Per-Gunnar; Rokhlin, Vladimir; Tygert, Mark 2011 A fast randomized algorithm for computing a hierarchically semiseparable representation of a matrix. Zbl 1237.65028 Martinsson, P. G. 2011 An algorithm for the principal component analysis of large data sets. 
Zbl 1232.65058 Halko, Nathan; Martinsson, Per-Gunnar; Shkolnisky, Yoel; Tygert, Mark 2011
Fast and accurate numerical methods for solving elliptic difference equations defined on lattices. Zbl 1203.65280 Gillman, A.; Martinsson, P. G. 2010
Fast direct solvers for integral equations in complex three-dimensional domains. Zbl 1176.65141 Greengard, Leslie; Gueyffier, Denis; Martinsson, Per-Gunnar; Rokhlin, Vladimir 2009
A fast direct solver for a class of elliptic partial differential equations. Zbl 1203.65066 Martinsson, Per-Gunnar 2009
Boundary algebraic equations for lattice problems. Zbl 1186.65154 Martinsson, Per-Gunnar; Rodin, Gregory J. 2009
Randomized algorithms for the low-rank approximation of matrices. Zbl 1215.65080 Liberty, Edo; Woolfe, Franco; Martinsson, Per-Gunnar; Rokhlin, Vladimir; Tygert, Mark 2007
An accelerated kernel-independent fast multipole method in one dimension. Zbl 1154.65318 Martinsson, P. G.; Rokhlin, V. 2007
A fast direct solver for scattering problems involving elongated structures. Zbl 1111.65109 Martinsson, P. G.; Rokhlin, V. 2007
Mechanics of materials with periodic truss or frame micro-structures. Zbl 1140.74532 Martinsson, Per-Gunnar; Babuška, Ivo 2007
Homogenization of materials with periodic truss or frame micro-structures. Zbl 1115.74040 Martinsson, P. G.; Babuška, Ivo 2007
On interpolation and integration in finite-dimensional spaces of bounded functions. Zbl 1111.65010 Martinsson, Per-Gunnar; Rokhlin, Vladimir; Tygert, Mark 2006
Fast evaluation of electro-static interactions in multi-phase dielectric media. Zbl 1079.78007 Martinsson, Per-Gunnar 2006
A fast direct solver for boundary integral equations in two dimensions. Zbl 1078.65112 Martinsson, P. G.; Rokhlin, V. 2005
On the compression of low rank matrices. Zbl 1083.65042 Cheng, H.; Gimbutas, Z.; Martinsson, P. G.; Rokhlin, V. 2005
A fast algorithm for the inversion of general Toeplitz matrices. Zbl 1087.65025 Martinsson, P. G.; Rokhlin, V.; Tygert, M. 2005
Vibrations of lattice structures and phononic band gaps. Zbl 1044.74020 Martinsson, P. G.; Movchan, A. B. 2003
Asymptotic expansions of lattice Green’s functions. Zbl 1022.39022 Martinsson, Per-Gunnar; Rodin, Gregory J. 2002

Cited by 1,066 authors (most frequently Martinsson, 29; Ying, 22; Rokhlin, 16), in 139 serials (led by Journal of Computational Physics, 111, and SIAM Journal on Scientific Computing, 90), across 39 fields (led by Numerical analysis, 516; Partial differential equations, 140; Linear and multilinear algebra, matrix theory, 112).
https://vrcacademy.com/tutorials/z-test-two-proportions/
## Two sample proportion test

Suppose we want to compare two distinct populations $A$ and $B$ with respect to the possession of a certain attribute among their members. Suppose we take samples of sizes $n_1$ and $n_2$ from populations $A$ and $B$ respectively. Let $X_1$ and $X_2$ be the observed numbers of successes, i.e., the numbers of units possessing the attribute, in the two samples respectively. Then

$\hat{p}_1=\frac{X_1}{n_1}$ is the observed proportion of successes in the sample from population $A$, and

$\hat{p}_2=\frac{X_2}{n_2}$ is the observed proportion of successes in the sample from population $B$.

The pooled estimate of the common proportion is $\hat{p} =\dfrac{X_1 +X_2}{n_1 + n_2}$.

## Assumptions

The assumptions for testing the difference between two proportions are as follows:

a. The samples are random samples.

b. The sample data are independent of one another.

c. Both samples are large enough for the normal approximation to the binomial to hold; a common rule of thumb is $n_i\hat{p}\geq 5$ and $n_i(1-\hat{p})\geq 5$ for $i=1,2$.

## Step by Step Procedure

We wish to test the null hypothesis $H_0 : p_1 = p_2$, i.e., that the two proportions do not differ significantly. The standard error of the difference between the two proportions is

\begin{aligned} SE(\hat{p}_1-\hat{p}_2) = \sqrt{\frac{\hat{p}(1-\hat{p})}{n_1}+\frac{\hat{p}(1-\hat{p})}{n_2}} \end{aligned}

where $\hat{p} =\dfrac{X_1 +X_2}{n_1 + n_2}$ is the pooled estimate of the proportion.
The step by step hypothesis testing procedure is as follows:

## Step 1 State the hypothesis testing problem

The hypothesis testing problem can be structured in any one of three situations:

Situation A : $H_0: p_1=p_2$ against $H_a : p_1 < p_2$ (Left-tailed)

Situation B : $H_0: p_1=p_2$ against $H_a : p_1 > p_2$ (Right-tailed)

Situation C : $H_0: p_1=p_2$ against $H_a : p_1 \neq p_2$ (Two-tailed)

## Step 2 Define the test statistic

The test statistic for testing the above hypothesis is

\begin{aligned} Z & = \frac{(\hat{p}_1-\hat{p}_2)-(p_1-p_2)}{SE(\hat{p}_1-\hat{p}_2)}\\ & = \frac{(\hat{p}_1-\hat{p}_2)-(p_1-p_2)}{\sqrt{\frac{\hat{p}(1-\hat{p})}{n_1}+\frac{\hat{p}(1-\hat{p})}{n_2}}} \end{aligned}

Under $H_0$, the test statistic $Z$ follows the standard normal distribution $N(0,1)$.

## Step 3 Determine the critical values

For the specified value of $\alpha$, determine the critical region depending upon the alternative hypothesis.

- For a left-tailed alternative hypothesis: find the critical value $-Z_\alpha$ using \begin{aligned} P(Z<-Z_\alpha) &= \alpha. \end{aligned}
- For a right-tailed alternative hypothesis: find the critical value $Z_\alpha$ using \begin{aligned} P(Z>Z_\alpha) &= \alpha. \end{aligned}
- For a two-tailed alternative hypothesis: find the critical value $Z_{\alpha/2}$ using \begin{aligned} P(|Z|> Z_{\alpha/2}) &= \alpha. \end{aligned}

## Step 4 Computation

Compute the test statistic under the null hypothesis $H_0$ (where $p_1-p_2=0$) using

\begin{aligned} Z_{obs} & = \frac{(\hat{p}_1-\hat{p}_2)-0}{\sqrt{\frac{\hat{p}(1-\hat{p})}{n_1}+\frac{\hat{p}(1-\hat{p})}{n_2}}} \end{aligned}

## Step 5 Decision (Traditional Approach)

This approach is based on the critical values.

- For a left-tailed alternative hypothesis: reject $H_0$ if $Z_{obs}\leq -Z_\alpha$.
- For a right-tailed alternative hypothesis: reject $H_0$ if $Z_{obs}\geq Z_\alpha$.
- For a two-tailed alternative hypothesis: reject $H_0$ if $|Z_{obs}|\geq Z_{\alpha/2}$.

OR

## Step 5 Decision ($p$-value Approach)

This approach is based on the $p$-value.
| Alternative Hypothesis | Type of Hypothesis | $p$-value |
|---|---|---|
| $H_a: p_1 < p_2$ | Left-tailed | $p$-value $= P(Z\leq Z_{obs})$ |
| $H_a: p_1 > p_2$ | Right-tailed | $p$-value $= P(Z\geq Z_{obs})$ |
| $H_a: p_1 \neq p_2$ | Two-tailed | $p$-value $= 2P(Z\geq \lvert Z_{obs}\rvert)$ |

If the $p$-value is less than $\alpha$, reject the null hypothesis $H_0$ at the $\alpha$ level of significance; otherwise, fail to reject $H_0$ at the $\alpha$ level of significance.
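As a concrete illustration of the procedure above, the snippet below computes $Z_{obs}$ and the $p$-value from scratch using only the standard library (the function name and interface are my own invention, not a library API; for real work a packaged routine such as `proportions_ztest` from `statsmodels` is preferable):

```python
import math

def two_proportion_z_test(x1, n1, x2, n2, alternative="two-sided"):
    """Pooled z-test for H0: p1 = p2 (illustrative helper, not a library API)."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)            # pooled estimate p-hat
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1_hat - p2_hat) / se                # Z_obs under H0: p1 - p2 = 0
    phi = lambda t: 0.5 * (1 + math.erf(t / math.sqrt(2)))  # standard normal CDF
    if alternative == "less":                 # Ha: p1 < p2
        p_value = phi(z)
    elif alternative == "greater":            # Ha: p1 > p2
        p_value = 1 - phi(z)
    else:                                     # Ha: p1 != p2
        p_value = 2 * (1 - phi(abs(z)))
    return z, p_value

# 45/100 successes versus 30/100 successes, two-tailed test
z, p = two_proportion_z_test(45, 100, 30, 100)  # z ≈ 2.19, p ≈ 0.028
```

With $\alpha = 0.05$, the example above rejects $H_0$ since the $p$-value is below $0.05$.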
https://www.wptricks.com/question/set-acf-on-order-during-checkout/
Set ACF on order during checkout

Question I'm trying to populate a field on an order with options set by the user during checkout, on our checkout form.

function populate_custom_field_value( $order_id ) {
    if ( ! empty( $_POST['custom_field_value'] ) ) {
        update_field( 'field_xxxxxxxxxxxxx', sanitize_text_field( $_POST['custom_field_value'] ), $order_id );
    }
}
// Note: the snippet above never registers the callback; hooking it into
// WooCommerce's checkout processing is the usual missing piece:
add_action( 'woocommerce_checkout_update_order_meta', 'populate_custom_field_value' );

My HTML select component is being inserted correctly, with the values defined in ACF, but it's not setting the field on the order. Can anyone see what I'm missing here?

Solved, thanks 🙂
https://articles.math.cas.cz/10.21136/CMJ.2020.0146-19
# Institute of Mathematics

## $(\delta,2)$-primary ideals of a commutative ring

#### Gülşen Ulucak, Ece Yetkin Çelikel

###### Received March, 2019.   Published online April 17, 2020.

Abstract:  Let $R$ be a commutative ring with nonzero identity, let $\mathcal{I(R)}$ be the set of all ideals of $R$ and $\delta\colon\mathcal{I(R)}\rightarrow\mathcal{I(R)}$ an expansion of ideals of $R$ defined by $I\mapsto\delta(I)$. We introduce the concept of $(\delta,2)$-primary ideals in commutative rings. A proper ideal $I$ of $R$ is called a $(\delta,2)$-primary ideal if whenever $a,b\in R$ and $ab\in I$, then $a^2\in I$ or $b^2\in\delta(I)$. Our purpose is to extend the concept of $2$-prime ideals to $(\delta,2)$-primary ideals of commutative rings. We then investigate the basic properties of $(\delta,2)$-primary ideals and discuss the relations among $(\delta,2)$-primary, $\delta$-primary and $2$-prime ideals.

Keywords:  $(\delta,2)$-primary ideal; $2$-prime ideal; $\delta$-primary ideal

Classification MSC:  13A15, 13F05, 05A15, 13G05

DOI:  10.21136/CMJ.2020.0146-19

PDF available at:  Springer   Institute of Mathematics CAS

Affiliations:   Gülşen Ulucak, Department of Mathematics, Faculty of Science, Gebze Technical University, Gebze, Kocaeli, Turkey, e-mail: gulsenulucak@gtu.edu.tr; Ece Yetkin Çelikel (corresponding author), Department of Electrical Electronics Engineering, Faculty of Engineering, Hasan Kalyoncu University, Gaziantep, Turkey, e-mail: yetkinece@gmail.com, ece.celikel@hku.edu.tr
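To make the definition concrete, here is a small worked example (my own illustration, not taken from the paper), with $R=\mathbb{Z}$ and $\delta$ the radical expansion $\delta(I)=\sqrt{I}$:

```latex
% Worked example with R = \mathbb{Z} and \delta(I) = \sqrt{I}.
The ideal $(8)$ is $(\delta,2)$-primary: suppose $ab\in(8)$.
If $b$ is even, then $b^{2}\in(2)=\sqrt{(8)}=\delta((8))$;
if $b$ is odd, then $8\mid a$, so $a^{2}\in(64)\subseteq(8)$.

By contrast, $(12)$ is not $(\delta,2)$-primary: for $a=4$, $b=3$
we have $ab=12\in(12)$, yet $a^{2}=16\notin(12)$ and
$b^{2}=9\notin(6)=\sqrt{(12)}=\delta((12))$.
```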
https://codegolf.stackexchange.com/questions/44680/showcase-of-languages/80724
# Showcase of Languages

### Notes

• This thread is open and unlocked only because the community decided to make an exception. Please do not use this question as evidence that you can ask similar questions here. Please do not create additional questions.
• This is no longer a popularity contest, nor are snippet lengths limited by the vote tally. If you know this thread from before, please make sure you familiarize yourself with the changes.

This thread is dedicated to showing off interesting, useful, obscure, and/or unique features your favorite programming languages have to offer. This is neither a challenge nor a competition, but a collaboration effort to showcase as many programming languages as possible as well as possible.

### How this works

• All answers should include the name of the programming language at the top of the post, prefixed by a #.
• Answers may contain one (and only one) factoid, i.e., a couple of sentences without code that describe the language.
• Aside from the factoid, answers should consist of snippets of code, which can (but don't have to be) programs or functions.
• The snippets do not need to be related. In fact, snippets that are too related may be redundant.
• Since this is not a contest, all programming languages are welcome, whenever they were created.
• Answers that contain more than a handful of code snippets should use a Stack Snippet to collapse everything except the factoid and one of the snippets.
• Whenever possible, there should be only one answer per programming language. This is a community wiki, so feel free to add snippets to any answer, even if you haven't created it yourself.

There is a Stack Snippet for compressing posts, which should mitigate the effect of the 30,000 character limit.
### Current answers, sorted alphabetically by language name

$.ajax({type:"GET",url:"https://api.stackexchange.com/2.2/questions/44680/answers?site=codegolf&filter=withbody",success:function(data){for(var i=0;i<data.items.length;i++){var temp=document.createElement('p');temp.innerHTML = data.items[i].body.split("\n")[0];$('#list').append('<li><a href="/a/' + data.items[i].answer_id + '">' + temp.innerText || temp.textContent + '</a>');}}})
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script><base href="http://codegolf.stackexchange.com"><ul id="list"></ul>

# Stylus

Considering CSS is in here, I might as well add this in. Stylus is a language which is compiled into CSS3. It's easy to use and adds many features into the language, like nesting, variables, functions, maths, and relaxed punctuation requirements.

## Length 1

=

Unlike other CSS pre-processor syntaxes, Stylus allows a variable to be named by any Unicode character that isn't a whitespace, number, or used symbol, without using the $ symbol to denote variables. CSS does not have any concept of a variable.

## Length 2

p
  a

Nesting allows you to stop repeating the names of the elements, and looks much prettier and easier for a person to read.

# Simplex

Version is the latest, unless otherwise specified; interpreter is currently down; docs a little unupdated as of now.

## 6-vote

1(D)10

1     ~~ sets current byte to 1
( )10 ~~ repeat inner 10 times
D     ~~ double current byte
      ~~ implicitly output byte

This outputs 2^10 in its exact form. Simplex has something I liked to call “Infinite Capacity Numerics®”. It can be a bit slow sometimes, but is/will be highly precise.

## 5-vote

"hi"g

Writes a string hi to the strip and outputs it.

## 4-vote

eRPE

Outputs e^π ≈ 23.1406926328. e is Euler's number, R goes right in the strip, P is π, and E is exponentiation. Output is implicit.

## 3-vote

i¢I

A little code that takes numeric input, terminates if input = zero, otherwise increments the byte.
Implicitly outputs the result. (It's also French! (ish): ici => here)

## 2-vote

MO

Shortest way to create a bounds error; M decrements the byte and O goes to the current byteth position in the source code…maybe I should include negative source codes…nah, who'd go for it?

## 1-vote

5

Not very interesting, but writes 5 to the current strip's first cell. (Also implicitly outputs 5.)

## 0-vote

Simplex is a golfing language which functions similar to BrainF***, made by me. It is very much a work in progress, but has about 60 standard commands. It is composed of a singular field, which holds a bottom-closed, top-open, infinite amount of strips, each of which contains a right-open, left-closed, infinite amount of cells and a pointer. The field can be visualized as thus:

5: [0 0 0 0 0 0 0 ... ]
    ^
4: [0 0 0 0 0 0 0 ... ]
    ^
3: [0 0 0 0 0 0 0 ... ]
    ^
2: [0 0 0 0 0 0 0 ... ]
    ^
1: [0 0 0 0 0 0 0 ... ]
    ^
0: [0 0 0 0 0 0 0 ... ]
    ^

Each pointer moves independent of other pointers. Typically, only one or two strips are utilized in a single program.

• ¢ is two bytes Nov 4 '15 at 14:19
• @SuperJedi224 no, it isn't. Nov 4 '15 at 15:06
• @SuperJedi224 And even if it was, this challenge is graded on characters, not bytes. Nov 4 '15 at 15:11
• in UTF-8, it is. I suppose you're right that it doesn't matter here though. Nov 4 '15 at 20:09

# Carrot

# 4-byte code

0^*2

(Prints 000)

I have chosen to use this snippet because of its weirdness. So the first bit of the code says that the stack is 0. Then the * is string multiplication. It multiplies "0" 2 times. One would expect the output to be 00, but no! 000 is output instead. So the stack is repeated 2+1=3 times, which gives 000. This trick is useful for golfing when you want to output the stack 10 times. Instead of going (stack)^*10, you will do (stack)^*9 because it saves one byte!

# 3-byte code

$^P

The $ is the alternative to # for taking input. In the future I hope to distinguish these two similar functions...
Coming back to the topic, we see the caret ^ for the first time. This is to differentiate the stack from the commands. Then comes the P. This checks if the stack is a prime or not (works for the number 1). This primality checking works on the stack both as a string and as a number. This "special power" of P can make it more useful for code-golfing.

# 2-byte code

#A

Outputs inputA. If the input is C, then the output would be CA.

This piece of code shows two interesting properties of Carrot:

1. One is that the string "A" does not need to be enclosed in quotes in the stack.
2. The other one is that the plus "+" sign for concatenation is not required in the stack. The two strings, the input and the "A", get joined together automatically.

# 1-byte code

#

This is a simple program that prints the input as it is. The # can be used in the commands or the stack section of the program. The caret ^ is not required because we do not wish to use commands.

Factoid: Carrot is a language made by me, Kritixi Lithos, based on this carrot meme of PPCG. Every Carrot program as of version ^2 must have a ^ in it. The structure of every program is as follows: stack^commands.

• whaaat no carrot required? XD Nov 3 '15 at 17:21
• @CᴏɴᴏʀO'Bʀɪᴇɴ Carets are not required from Version ^3 and forth if we want to use no commands. Nov 3 '15 at 17:23
• @CᴏɴᴏʀO'Bʀɪᴇɴ There! The carrot is back again in the third byte! Nov 3 '15 at 17:41

# JacobFck

JacobFck is a stack based esoteric language written in C#. It is capable of all basic operations as well as some more common operations. It is a slight mix of BrainFuck-like syntax and Forth. The link above links to the JacobFck GitHub.

### Length 1

>

This is one of the simplest instructions. It writes to the screen whatever is at the top of the stack.

### Length 2

^5

The ^ instruction tells the interpreter that we are going to be pushing data (either number or register value) onto the stack. In this case, the number 5.
### Length 3

"s"

Strings (encased in "") are automatically pushed to the top of the stack, and \n \t \r are all supported escape codes.

### Length 4

^2<+

This code would push 2 to the stack with the ^ instruction, push user input to the stack with the < instruction, and add the two together by popping them off the stack and pushing the result to the top.

### Length 5

:a<_a

This code is an example of an infinite loop. :a declares the label a. < is the instruction to prompt the user for input. _a goes to the label a.

# Wierd

Factoid

Unlike other languages where the symbols in a program determine which instructions are executed, in Wierd, it is the bends in the chain of arbitrary symbols that determine which instructions are executed. Chris Pressey created the angle-to-instruction mapping, and christened the entire mess "Wierd"--a cross between the words "weird" (which the language seemed to be) and "wired" (which would describe the appearance of programs written in the language).

You can try it online at http://catseye.tc/installation/Wierd_(John_Colagioia)

Length 1 Snippet

!

The actual character used doesn't matter - it can be any non-whitespace character. This program doesn't do anything because there are no bends in the chain of characters - but there is a chain, so at least it is a valid program.

Length 2 Snippet

++

Now we are going somewhere - we have a chain of characters. Still no bends, so you wouldn't expect it to do anything, but unintuitively this program does actually do something. The current location starts in the top left corner facing "diagonally down and right". The current location always moves with inertia - it will move in the direction it is already moving until it can no longer do that and then will move in a direction closest to the current direction. So the current location has to change 45 degrees so that it can continue to the right and that counts as a 45 degree bend, so we push 1 onto the stack - exactly the same outcome as the Length 5 Snippet.
Length 3 Snippet

+
+

Same as the Length 2 Snippet, this program does actually do something because the current location starts in the top left corner facing "diagonally down and right" and the current location has to change 315 degrees so that it can continue to the right, and that counts as a 315 degree bend, which will subtract the two items on the top of the stack. But there is nothing on the stack, so the bend is a no-op and the program does nothing.

Length 4 Snippet

+
 +

The current location starts in the top left corner facing "diagonally down and right", so there are no bends in this chain of characters and so this program doesn't do anything.

Length 4 Snippet

+
++

Finally a program with a bend! But it doesn't do anything useful :(

The current location starts in the top left corner facing "diagonally down and right". The current location always moves with inertia - it will move in the direction it is already moving until it can no longer do that and then will move in a direction closest to the current direction. So the current location moves "diagonally down and right" and arrives at the next bend. It is a 225 degree bend, so if the stack were to contain a zero it would push one character of standard input onto the stack, and if it were a nonzero value then a value from the stack would be written to standard output. But there is nothing on the stack, so the bend is a no-op.

Now the current location moves around the bend without doing anything, the current direction is towards the left, and we are at another bend. This one is a 270 degree bend, so if the stack were to contain a nonzero value the current direction would reverse. But there is nothing on the stack, so the bend is a no-op.

Now the current location moves around the bend without doing anything, the current direction is up, and we are at another bend. It is a 270 degree bend, so if the stack were to contain a nonzero value the current direction would reverse.
But there is nothing on the stack, so the bend is a no-op. Now the current location moves around the bend without doing anything, and we are in an infinite loop.

Length 5 Snippet

+
 ++

Finally a program that does something! The current location starts in the top left corner facing "diagonally down and right", so the current location moves "diagonally down and right" and arrives at the next bend. It is a 45 degree bend, so we push 1 onto the stack.

There are only four ways to get values onto the stack:

• Push a 1
• Subtract two values already on the stack
• Use a value on the stack to decide to read from standard input
• Use coordinates on the stack to read a value embedded in the program

So the only way to get a value onto an empty stack is to push a 1.

# GoLScript

## 5-vote

HAAZW

This is another simple program.

### Generation 0

H pushes 7, A pushes 0. Z is a fun command that pops Y, X and pushes the character at FIELD[Y][X], or (X,Y). In this case, the character at (0,0) is H, and so its char code is pushed. W outputs this as a character. The two outer-most cells die.

### Generation 1

We are left with AAZ, another call to Z, pushing h's char code to the stack (as the former H is now dead and thus lowercase). However, since the W died, it can no longer function, and thus, this phase ends. Once again, the two outermost cells die.

### Generation 2

Now, only A is left, which carries out its duty in pushing a 0, and then dies. The program is effectively stopped.

## 4-vote

Bp
P

(newline counts as a character, iirc.)

This does something rather simple:

1. The program is first BpP. This pushes 1 (B), pops a value (1) and assumes it p => P, and does that again, popping a zero from the bottom of the stack P => p.
2. Checking, only the new P survives, as it popped a value and lived. The rest of the characters died.
3. After, only the P remains; it pops an empty value (0) and dies.

## 3-vote

JXJ

First multi-step program!
Here's what happens:

### Generation 0

The code is evaluated: J pushes 9 to the stack, X pops a number and prints it, and J again pushes 9 to the stack. The cells' live-states are updated. Both Js die, having only 1 neighbour. The X lives, as it has 2 neighbours.

### Generation 1

All that remains is the X. This prints the remaining 9 off the stack. The X dies, and the program terminates.

### Final output

99

Hey, it's a start…

## 2-vote

@V

The shortest (?) still life in GoLScript. @ negates the top of the stack (effectively pushing true) and V reads the top of the stack, and dies if the value is falsey. Replacing @ with any of B-J would also produce a still life.

## 1-vote

X

Any 1-length program is valid. This one takes a zero off the stack (there is an infinite amount of zeroes atop the stack) and outputs it as a number.

## 0-vote (factoid)

GoLScript simulates Conway's Game of Life! Yay! Interpreter. You'll have to copy-paste the codes, and output is the bottom-most code block.

### Language information

• GoLScript is stack-based.
• GoLScript has no native string quoting!

• When will we run out of names beginning with Gol...? Gol><>, Golfscript, GoLScript, Golang (okay, not very golfy, but you get my point) :P – cat Apr 27 '16 at 22:43
• @cat upvote it if you like it ;) it's "Game of Life"... Apr 27 '16 at 22:44
• I think you meant Game of Code Golf, since Code Golf is life... – cat Apr 27 '16 at 22:46
• @cat GOCG doesn't have as nice of a ring to it ^_^ Apr 27 '16 at 22:47
• Add permalinks to the Interpreter Apr 30 '16 at 1:20

# Reng

## 5-vote

1¶:+!

This uses an interesting feature: default stack pop override! ¶ pops n and sets n to the default value when popping from an empty stack. (This is 0 by default.) What this code does is initially lay down a 1, sets that as a default pop, : duplicates value (getting default pop) and adds it with the default pop, leaving twice the default pop on the stack. !
skips the 1 instruction, and ¶ sets the default pop to twice what it was. ## 4-vote is~o This is a cat program. It takes input (i), s jumps if the byte != -1, and o outputs. When reading EOF for input, -1 is put on the stack, and the program terminates. ## 3-vote 9#q This stores the number 9 in the variable q. ## 2-vote {} Reng has codeblocks! This pushes an empty codeblock to the stack. Forever. ## 1-vote ~ This ends the program. ## 0-vote Reng is like ><>, but with some more functionality. Not made for golfing, as you may come to see... ;) # Molecule Factoid Molecule is the descendant of Pylongolf2, a descendant of Pylongolf which was based on CJam. 0 byte snippet: This does not do anything, as it is empty. 1 byte snippet: I I reads the input. As Molecule outputs everything in the stack once the program shuts down, this makes a decent cat program. 2 byte snippet: () Anything between ( and ) is looped until ! is called. 3 byte snippet: Inh Read the input and convert it into a number. The h converts numbers into characters. So you can get some funky characters by using this. 4 byte snippet: qn Now here's where you could break the program. Molecule has a built-in reflection system that allows you to modify the source code at runtime. q pushes the source code, n sets the source code to itself. From v5.5, this snippet forces the interpreter to read from the start, creating a loop. 5 byte snippet: 0(1+~) Have your program count until it reaches Infinity... then it crashes. (On my computer it takes about 2 minutes to crash :P) 6 byte snippet: u10000 Print 10000. What is so interesting about this? It's interesting because putting a simple 10000 will push 1 and 5 zeroes to the stack, while the u keyword pushes the entire number after it.
It is similar to the language described in EWD28 in principle but is not inspired by it. Unlike the language described in the linked paper, the immediateness of a symbol is definable in code, allowing the programmer to define almost arbitrary new syntax. Things like the symbol table and lexical environments exist as first-class objects. Every token in Poslin is surrounded by whitespace. ## Length 1 snippet ! This is actually not a working program. Executing it in a fresh session gives a stack-bottom error. This operation is immediate and executes the operation on top of the current stack. Without it, nothing would ever happen. ## Length 2 snippet &+ This also isn't a working program. & is an immediate operation to compile something into a thread. &+ also is an immediate operation, but after compiling it executes the operation stored at a place accessible via HOOK compilation-hook slot-sub-get. This way users can define their own optimization passes. ## Length 3 snippet [ ] This creates an empty stack. [] also creates an empty stack and does so much more efficiently. That's because Poslin works with something which is called the path. The path is a stack of environments, where environments are kind of like prototyped objects, that is, they are mappings (from symbols to bindings) which can have parent environments. When some object is found by the reader which is not an immediate symbol, it is put onto a stack which is saved in a binding (besides the path, the return stack and arguably streams the only mutable data structures in Poslin) which is found under the symbol STACK in the environment on top of the path. [ puts a new environment onto the path which contains a fresh binding containing an empty stack for its STACK. ] pops the topmost environment off the path and then extracts the stack saved in the binding in its STACK slot. [] just constantly returns an empty stack. [], [ and ] are all immediate.
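The path-of-environments model described above can be loosely sketched in Python. This is my own illustration, not Poslin's actual implementation; the names `Env`, `open_bracket` and `close_bracket` are hypothetical stand-ins for `[` and `]`:

```python
class Env:
    # An environment maps symbols to values and may delegate
    # lookups to a parent environment (prototype-style).
    def __init__(self, parent=None):
        self.parent = parent
        self.slots = {}

    def lookup(self, symbol):
        env = self
        while env is not None:
            if symbol in env.slots:
                return env.slots[symbol]
            env = env.parent
        raise KeyError(symbol)

# The "path" is a stack of environments; the current data stack
# lives under the symbol STACK in the topmost environment.
path = [Env()]
path[-1].slots["STACK"] = []

def open_bracket():
    # models '[': push a fresh environment with its own empty STACK
    env = Env(parent=path[-1])
    env.slots["STACK"] = []
    path.append(env)

def close_bracket():
    # models ']': pop the topmost environment, hand back its STACK
    return path.pop().slots["STACK"]
```

Anything "read" while the inner environment is on top goes onto its own STACK, so the model's `]` returns exactly the values collected since the matching `[`, leaving the outer STACK untouched.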
## Length 4 snippet '& & There is no single operation in Poslin which defines a new operation. Well, actually there are several, but they're all defined in the standard library, which is written in a subset of Poslin called "poslin0". & has the purpose of transforming a given object into a thread. For threads it's just an identity operation. Symbols are looked up in an environment saved in a binding in the OP slot of the environment on top of the path. Stacks are converted by turning everything but threads into constant functions returning that object and then concatenating together the resulting objects into one thread. At least, that's roughly how it works. This means, of course, that any operation which should be called inside a thread created this way needs to be turned into a thread via & before the calling thread is constructed. Every other object is turned into a constant thread returning that exact object. '& is special syntax: the leading ' tells the reader that this is a quotation, that is, everything after the ' is interpreted as a symbol (including any other 's) and then just put onto the current stack without checking for immediateness. So the given snippet returns the thread of our Poslin compiler. It is used in the operation ]o, which defines a new operation. With the knowledge given up to this point, you might be able to figure out how defining an operation works conceptually, even if you are missing some important operations. ## Length 5 snippet $: ": This is the string you'd normally see as "\"" in most other languages, that is, the string containing exactly the double-quote character. The character $ starts a delimited string. A delimited string has a delimiter, which is given between the $ and the next newline. The string contains all characters between that newline and the next occurrence of the delimiter followed by some kind of whitespace.
If the delimiter occurs and is not followed by whitespace, it is a part of the string and does not function as delimiter. Remember: Every token in Poslin is surrounded by whitespace. So, the given string could also be written as $" "" or even """, as Poslin recognizes the usual syntax for strings, too. It does not recognize any escape sequences. ## Length 6 snippet + call This is almost equivalent to + &. The important difference is that + & returns the thread of + while + call returns a thread which reads the binding which holds the definition of + and then calls the content of that binding. So, if you expect that the definition of + might change and you need to call the new definition instead of the old one, use + call. This is necessary for defining recursive operations. There is no way to insert a thread into itself, so the binding which is later intended to contain the thread is inserted instead via call and after the thread is constructed it is written into that binding. ## Length 7 snippet . ? ! ! ? consumes the top three elements off the stack. The third from the top needs to be a boolean value. If it is true, the second from top is put back onto the stack, if it is false the first from top is put back onto the stack. . is simply the no-op. So this is an immediate when. Imagine the following stack: [ 2 TRUE P{negation} ] This proceeds as follows with the above sequence: [ 2 TRUE P{negation} . ] [ 2 TRUE P{negation} . ? ] [ 2 P{negation} ] [ -2 ] Whereas if the stack is [ 2 FALSE P{negation} ] It proceeds thus: [ 2 FALSE P{negation} . ] [ 2 FALSE P{negation} . ? ] [ 2 . ] [ 2 ] # Desmos Factoid: The Desmos community has a wealth of programs that use many math equations to make them work. Take, for example, this 3D visualizer. ### 1 byte x One of my favorite things about Desmos is that you can write out equations like this without having to put y= before it, as Desmos automatically does it for you. # Quetzalcoatl ### 6 snippet: {c.I?}. 
Makes a block with Code Snippet 4. ### 5 snippet: {c.I} Makes a code block, containing the code c.I. On snippet 8, we will use this. ### 4 snippet: c.I? Same as before, but also does power. Result is the xth root of x. ### 3 snippet: c.I Outputs (input, 1/input), because I is 1/. ### 2 snippet: c (or any token) + -q compiler flag Proper quine. -q outputs source instead of running code. ### 1 snippet: 1 or (no code) + -n (1 byte). Like in GolfScript, numbers just push their value. This outputs 1 because Quetzalcoatl implicitly prints the stack. This works for any digit 0-9. The second snippet outputs Quetzalcoatl, because -n outputs Quetzalcoatl. ### Interesting fact: Quetzalcoatl got its name from the Aztec snake god (I wrote it). • Is this meant to be a golfing language? Apr 9 '16 at 21:50 • @Cyoce This is a golfing language. Apr 10 '16 at 15:37 # Coffeescript ## Description: Coffeescript is a language that compiles into Javascript. It's much less verbose than JS (and about the same as ES6), which makes it friendlier for golfing. ## Factoid: Coffeescript doesn't require brackets with function calls. ## Length 1 Snippet: = The = sign means a lot of things in Coffeescript - it can assign both functions and variables in one fell swoop, much like Javascript. ## Length 2 Snippet: ** This is the power sign, which is used like this: a**b. It replaces JS's Math.pow(a,b), which saves 9 bytes. ## Length 3 Snippet: (n) This is an argument in CoffeeScript, and is used like so: {function name} = (arg1, arg2...) -> (This is basically the same as JS, but I just wanted to show off functions - because they're so different). ## Length 4 Snippet: 0..5 For-loops are different in CoffeeScript. This snippet, when put into this context: for a in[0..5] is the same as JavaScript's for (a=0; a<=5; a++) and Python's for a in range(6) (the .. range is inclusive at both ends). As you can see, this snippet shows the byte-saving powers of CS. ## Length 5 Snippet: n=->1 Yay, a full function!
This snippet creates a function, n, with no arguments, which returns 1. ## Length 6 Snippet: if n<1 This is an if statement in CS. Notice the lack of a colon or curly brackets (yes, no curly brackets!). This compares n, checking whether it's less than 1. Simple enough. ## Length 7 Snippet: class A This is a class declaration in CoffeeScript. Despite pre-ES6 JavaScript having no class syntax, there are classes in CS - and they work exactly the way you expect them to. • That = declaring functions sounds bad to me. I would say, it is another case when a value is assigned to a variable, just the value is not a number or string, but an anonymous function. Aug 11 '16 at 11:15 • Math.power don't exists; the correct function is Math.pow Sep 7 '16 at 8:48 ## Pip An imperative golfing language with infix operators, with commands in printable ASCII. While these design decisions mean that Pip will probably never beat Jelly in a golf contest, they also make the language much easier to learn and use. ### Factoid To my knowledge, Pip was the first golfing language to have built-in regex support. The header of each snippet is a TryItOnline link. ### Length 1 a Pip's default method of getting input is by command-line arguments (hereinafter "cmdline args"). The first five args are assigned to the variables a through e, and the whole list is assigned to g. Since it is at the end of the program, the expression a is auto-printed. So this program outputs the first cmdline arg. ### Length 2 /q To get input from stdin, use the q special variable. Every time it is referenced, it reads a line of input. Many operators have both binary and unary versions. This is expected behavior for operators like -, and makes good sense for others like ^ split and J join. Here we have a somewhat unusual example: unary / inverts its argument. Inputting 4 will give 0.25 as output, and vice versa.
This snippet also demonstrates a feature that Pip shares with Perl and PHP: numbers are strings and strings are numbers. Both are represented by a data type called Scalar. Upshot: you don't have to convert q's line of input to a number before doing math with it. But what happens if you input something that's not a number, like Hello world? In a numeric context, it's treated as 0. Dividing by 0, like most error conditions, returns the special nil value, which produces no output when printed. If you want to see what went wrong, you can use the -w flag to enable warnings, in which case Pip will tell you: Inverting zero ### Length 3 Uses the -p flag (+1 byte). ^g As mentioned above, g is a list of all cmdline args. Many operators work itemwise on lists, and ^ (split) is one of them. This code splits the items of g into lists of characters. If you ran this 2-byte code without the flag, it wouldn't be obvious what the program did, because Pip's default way of outputting lists is to concatenate the items together. To demonstrate that the split operation worked, we need to change the output format. About half of Pip's command-line flags have to do with list formatting. The -p flag applies RP (analogous to Python's repr) to the list before outputting it, making it easy to see the structure: > python pip.py -pe "^g" 42 Hello [[4;2];["H";"e";"l";"l";"o"]] Use the TIO link above to play around with the other list flags (-s, -n, -l, -P, -S) and see how the output changes. ### Length 4 z@<h Lowercase letters h through z are global variables, preinitialized to different values. z is the lowercase alphabet; h is 100. The @< "left-of" operator returns a slice from the left end of an iterable (scalar, list, or range). It's equivalent to Python iterable[:index]--except that the Pip version works even when the index is greater than the length of the iterable. In that case it repeats the iterable until it's longer than the index, then takes the slice (like take + cycle in Haskell). 
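That cyclic slicing behaviour can be sketched in a few lines of Python (the helper name `left_of` is made up for illustration; Pip's real implementation may differ):

```python
def left_of(seq, index):
    # Repeat the sequence until it is at least `index` long
    # (like take + cycle in Haskell), then slice from the left.
    out = seq
    while len(out) < index:
        out += seq
    return out[:index]
```

For example, `left_of("abc", 7)` yields `"abcabca"`, whereas a plain Python `"abc"[:7]` would stop at `"abc"`.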
So z@<h gives the first 100 characters of the lowercase alphabet: abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuv ### Length 5 2>1>0 Comparison operators chain, as in Python. This program returns 1 (true). # hansl hansl is a programming language that has been used in the statistical software project gretl since 2001. It resembles a mixture of R, Stata, and C. It operates with datasets, time series, vectors, matrices, and functions. Just as in R, there are dozens of user-written packages for econometrics and data analysis. It is a matrix-oriented interpreted language that is Turing-complete (!). Its fortes are econometric estimation and numerical maximisation, and it can be integrated with R, Ox, Octave, Stata, Python, Julia, and gnuplot. ### Length 1 snippet ~ You know how difficult it can be to build matrices from vectors? One can attain the effect by reading the input row-wise, column-wise, or through assembling the mess into a dataset. In hansl, you do not hassle: just write ~ between the two vectors, and voila, you get them bound in tight leather together, side by side (column-wise): A~B. Do you have two matrices you want to assemble? X~Y, easy as pie. ### Length 2 snippet $h This neat accessor returns the series of estimated conditional heteroskedasticity from the last GARCH model. Doesn’t it amaze you that even non-esoteric programming languages have abbreviations for everything? Take that, MATL! ### Length 3 snippet ols This is probably the most frequently used estimation method in all of econometrics. This command estimates a linear regression model describing the relationship between the dependent variable and predictors (like ols wage 0 age education married), returns the full model and generates many useful accessors (like $uhat for residuals, $aic for the Akaike information criterion, etc.)
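For readers unfamiliar with what `ols` computes, the single-predictor-plus-intercept case can be sketched in plain Python. The function name `simple_ols` is hypothetical, not hansl syntax, and hansl's `ols` of course handles many predictors and reports far more:

```python
def simple_ols(x, y):
    # Closed-form simple linear regression: fit y ≈ a + b*x.
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x
    # The leftover errors are what hansl exposes as $uhat.
    residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    return a, b, residuals
```

On data that lie exactly on y = 1 + 2x, this recovers intercept 1 and slope 2 with zero residuals.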
Factoid: Its name comes from the famous fairy tale “Hansel and Gretel” by the Grimm brothers, with a modern twist (like Flick-r, Tumbl-r, and similar names, but gret-l omitted the penultimate vowel long before it became mainstream!). # Logicode ## Factoid: Logicode has a very limited amount of built-ins (7 built-ins, of which 6 are single-char), which makes it pretty difficult to program anything meaningful in. Also, it's built on logic gates, as the name suggests, and that's it - which makes it doubly hard to code stuff in. You can try it online here. ## Length 1 Snippet: + The + is not addition, as that can be achieved easily. It's concatenation between two "numbers" (which are binary): So, something like 11+1 is not 100, as with regular binary addition, but 111. ## Length 2 Snippet: -> -> is used in two main things: circuits (which are Logicode's version of functions) and conditionals (if statements). It is used to separate the executed code from the arguments (in the case of circuits) or the condition (in the case of conditionals). ## Length 3 Snippet: out This is a declarer, kinda like var in Javascript to declare a new variable. Logicode has a declarer for anything: out for output, var for variables, circ for circuits (which are basically functions in Logicode), cond for conditionals. ## Length 4 Snippet: !100 The ! is a logical NOT, which is operated on every digit of the binary string that follows. In this case, the snippet evaluates to 011. ## Length 5 Snippet: 1&1&1 In Logicode, you're allowed to stack multiple two-arg (dyadic) operators together, as seen here. The & is a logical AND, and the snippet evaluates to 1. ## Length 6 Snippet: @11011 Yay, ASCII support! Because the default "type" of Logicode is binary strings, the @ converts binary to ASCII codes (mod 256). In this case, the character generated is ESC (ASCII character code 27). ## Length 7 Snippet: 1000<<< This program takes the tail of 1000 three times. 
A tail is essentially every character but the first, so repeating this three times gives us: 1000 -> 000 -> 00 -> 0 So this evaluates to 0. # Racket Factoid : A programmable programming language: Racket is a full-spectrum programming language. It goes beyond Lisp and Scheme with dialects that support objects, types, laziness, and more. (https://racket-lang.org/) (I am surprised no one has added Racket till now). Functions or procedures need to be in parentheses. First term in parentheses is procedure name, rest are its arguments. If an argument is a procedure, it has to be in its own brackets. Values (non-procedures) are written without brackets. Main syntax difference between Java and Racket is f(a, b) vs (f a b), x+y+z vs (+ x y z), x == y vs (eq? x y) and x=2 vs (define x 2), or if already defined, (set! x 2). There is no need to declare types like public static void or int char string etc. ## Length 1 i Prints value of i. Also: + - / * are functions: (+ 1 2 3 4) Output: 10 (all numbers are added together) (* 1 2 3 4) Output: 24 (product of all numbers) (/ 20 5 2) Output: 2 (20 is divided by 5 as well as 2) (- 10 5 2) Output: 3 (5 and 2 are subtracted from 10) ## Length 2 pi Prints value of pi: 3.141592653589793 ## Length 3 let For local binding as in: (let ( (x 5) (y 10) ) (println (* x y)) ) ; x and y are not visible here; Also: map Subjects each item of a list to the sent function, e.g. to double every item of the list: (map (lambda(x) (* 2 x)) (list 1 2 3) ) Output: '(2 4 6) Also: for Standard for loop as well as its extensions: for* for/list for/sum for/product for/first for/last for/and for/or There are more extensions https://docs.racket-lang.org/reference/for.html ! ## Length 4 Conditionals: cond For example (from https://docs.racket-lang.org/racket-cheat/index.html): (cond [(even? x) 0] [(odd? 
x) 1] [else "impossible!"]) Also list Being derived from Lisp & Scheme, list is a very important data structure here: (list 1 "a" #\a (list 1 2 3)) ## Length 5 match Match is like case-switch: (match x [3 (displayln "x is 3")] [4 (displayln "x is 4")] [5 (displayln "x is 5")] [default (displayln "none of the above")]) Also: apply This opens a list and provides all elements to the function, for example: (apply + (list 1 2 3 4 5)) becomes: (+ 1 2 3 4 5) ; => 15 ## Length 6 define: a very basic form, used to assign a value to a variable: (define s "A string") (define n 10) Also: printf - the print function: (printf "~a ; ~a ~n" s n) Output: A string ; 10 Also: filter - a function that filters a list for items which give a true value to the specified function (sent as an argument): > (filter string? (list "a" "b" 6)) '("a" "b") > (filter positive? (list 1 -2 6 7 0)) '(1 6 7) The test function can be specified: > (filter (lambda(x) (> x 2)) ; function to test each element '(1 2 3 4 5)) ; full input list to be filtered Output: '(3 4 5) ; output list of elements greater than 2 ## Length 7 println : one of the most commonly used functions. Prints out the sent string with a newline character at the end: (println "String to be printed") (Note: examples are from various sources on the net). # MiniStringFuck MSF- is a way to compress the unique characters of any 0-255 ASCII string to just two characters; and it does pretty well! Although the code might just be long. Length 1 snippet . Outputs 0x00. Length 2 snippet +. Outputs 0x01. This demonstrates not only the . (output), but also the + (accumulator). + essentially changes the accumulator's value (acc) to (acc + 1) % 256. Length 3 snippet ,+. It does not really output the next character; it's the same as above, because non-+. characters are ignored. Length 4 snippet +.+. Yay, multiple chars!! This outputs \1\2 (escaped chars). Length 5 snippet +.+.. This prints \1\2\2. Yes, we still need many more upvotes to get to a reasonable program.
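In fact, the whole language fits in a few lines; here is a sketch of an interpreter in Python (my own, not the reference implementation), following the rules above: `+` increments the accumulator modulo 256, `.` outputs it as a character, and everything else is ignored:

```python
def mini_string_fuck(code):
    # acc starts at 0; '+' does acc = (acc + 1) % 256,
    # '.' emits chr(acc), any other character is a no-op.
    acc = 0
    out = []
    for ch in code:
        if ch == '+':
            acc = (acc + 1) % 256
        elif ch == '.':
            out.append(chr(acc))
    return ''.join(out)
```

Running it on the snippets above reproduces their outputs, e.g. `mini_string_fuck("+.+..")` returns `"\x01\x02\x02"`.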
Length 6 snippet +.+.+. Prints \1\2\3. Finally something that looks a little bit more interesting than just some random sequence of unprintables. • A language with just two commands? Isn't this basically similar to the 0s and 1s of Binary? ;p Ah well, +1, looking forward to see what it got in store for us. Sep 20 '16 at 14:42 • To downvoters: Please do not downvote just because the language isn't awesome. It still has to be showcased. Nov 13 '16 at 7:41 # Processing ## Factoid: Processing is kind of like the nicer brother/cousin of Java with its syntactic sugar, which introduces a lot of golfing opportunities, as well as GUI stuff which is built-in (because Processing is made for designers to get into programming). See the snippets for its golfing power (especially for graphics stuff). ## 18 bytes: size(400,400,P3D); This snippet not only sets the dimensions of the window to 400x400, but also enables 3D rendering (in just 18 bytes!). ## 7 bytes: println Yup, it's just println, not System.out.println. It's as simple as that. ## 3 bytes: str There's a builtin for converting to Strings from other types (e.g. int, float, long, etc.) in Processing! • You aren't restricted by votes anymore, you know. Jan 2 '17 at 4:58 # Stax Stax is a stack-based golf-oriented language. That's a pretty crowded space, but stax has a few novel properties. There's an online interpreter. ## Two stacks There are two data stacks, "main" and "input". The language name is inspired by this. Normally standard input starts on the input stack, split into lines. Most operations operate on the main stack. However, if the main stack is empty, read operations fall back to the input stack instead. ## Types Falsy values are numeric zeroes and empty arrays. All other values are truthy. 
• Integer Arbitrary size • Float Standard double precision • Rational Fractions of integers • Block Reference to unexecuted code for map/filter/etc • Array Heterogeneous lists of any values ## Compressed string literals Stax has a variant of string literals suitable for compressing English-like text with no special characters. It is compressed using Huffman codes. It uses a different set of Huffman codes for every pair of preceding characters. The character weights were derived from a large corpus of English-like text. "Hello, World!" could be written as jaH1"jS3!. The language comes with a compression utility. ## Crammed integer arrays Similarly to compressed string literals, stax also has a special feature for efficiently representing arrays of arbitrary integers. It uses almost all the printable ASCII characters. It uses the information in each character efficiently to embed how long each integer is, and their values. This feature is new in Stax 1.0.6. ## Debugger Stax has a web-based development and execution environment. It runs entirely client-side with no ajax calls. It features a step-through debugger that shows the current state, including all registers, stacks, and the current instruction. | is a programmatic break instruction. There is also a C# GUI and CLI for Stax. ## PackedStax PackedStax is an alternative representation for Stax code. It is never ambiguous with Stax, since PackedStax always has the leading bit of the first byte set. That means the same interpreter can be used for both representations with no extra information. For ease of clipboard use, PackedStax can be represented using a modified CP437 character encoding. It yields ~18% savings over ASCII. ## Rationals Stax supports fraction arithmetic. You can use u to turn an integer upside down. So 3u yields 1/3. Fractions are always in reduced terms. 3u 6* multiplies 1/3 by 6, but the result will be 2/1. ## Implicit input eval Normally the text in the input starts in the input stack.
The stax runtime can parse input into stax data structures. This is rather convenient for many PPCG posts, where input formats are flexible. This happens only when certain conditions are met: • Input contains no newlines • Parsing the input succeeded. Supported types are integers, floats, rationals, strings, and arrays. For example, this input would be parsed into corresponding values on the input stack. 1 2.3 [4/5, "foo"] ## Snippet: Length 0 An empty program in stax will reduce a fraction. This is the result of automatically evaluated input, and a built-in rational type. Rational values are always in reduced terms. And since there's no explicit output in the program, the top of the stack is implicitly printed. Run and debug it ### Snippet: Length 1 A one-character stax program can filter out the blank lines from standard input. f is generally used to filter an array with a block, but when it's the first character of a program, it uses the remainder of the program to filter the input lines. The rest of the program is blank, so the filter is an identity filter, and blank lines are falsey. f Run and debug it ### Snippet: Length 2 This program iterates over the positive integers from 1 to n and adds them. F+ Run and debug it ### Snippet: Length 3 This program gets the first n letters of the alphabet. Va is the lowercase alphabet. ( truncates to the specified input length. Va( Run and debug it ### Snippet: Length 4 In 4 bytes, you can use a block to map each character to its ASCII code in hexadecimal. There aren't strings per se in stax, but arrays of character codepoints are treated as strings in many contexts. {...m establishes a block and maps every element in an array using the contents. |H converts a number to hexadecimal. {|Hm Run and debug it There is also a length-4 proper quine in Stax, which is quite different from how most proper quines are constructed. ..SS Run and debug it .
is the leading character for a two-character literal and ..S is the string literal ".S". S builds the powerset of the array, excluding the empty set, i.e. [".",".S","S"]. The result is then implicitly flattened and output. ## Sonic Pi Code Snippet 4: dice You don't see anything on the output, but if you show it with puts, it's a random number. Don't worry if your number is always the same; Sonic Pi uses the same random seed for every run, but that's good, because otherwise your songs would sound different every time. Code Snippet 3: :e1 This equals the MIDI note 52 and the frequency 164.81377845643496 Hz. It must start with a : so Sonic Pi knows it's an integrated constant. Code Snippet 2: [] Empty list. Sorry, nothing else except comments can be written in two bytes. Code Snippet 1: # Starts a comment. Factoid: Sonic Pi is a sound coding language and a fully fledged programming language. # StackyLogic ## Factoid: StackyLogic is originally from a codegolf challenge, as seen in the link. It consists of a series of "stacks", however there are no "push" operations. ### Length 1 Snippet < This is not a full program at all. This is just the pointer symbol, which denotes where the pointer starts. ### Length 2 Snippet ?< This program takes a bit of input, and outputs it. It would be a cat program if it looped; however, Stackylogic possesses no loops. First, the program will substitute the ? for either a 1 or 0, depending on input. Then, it will execute this command. 1 will make the pointer move down one, 0 will make it move up one. Then, upon discovering the empty stack (which always implicitly exists), it will halt, and output the last executed command, either a 1 or a 0. ### Length 3 Snippet ?1< This program always outputs one; the ? is never used. This is because, as stated in the previous snippet, 1 will move the pointer down one, and the implicit empty stack will cause the last executed command to be printed. The ?
isn't used, because the pointer is never directed to it. ### Length 4 Snippet ? ?< Our first program with multiple non-empty stacks. This program outputs the result of an OR operation on two bits. How it works: It first substitutes the ? under the pointer with a bit from input, and executes it. If it is a 1, it will go forward, find the empty stack, halt and print 1. ?< If it is a zero, it will go up one, and act like the above code: it will substitute the ? for a bit, execute it, and then move onto an empty stack and output it. Note how the code after the first ? is executed is the same as snippet 2. ### Length 5 snippet 1 ??< This program, like the last program, outputs the OR of two bits. However, the ?s are on the same line. If the first bit is one, it will move forward onto an empty stack and output 1; if it is a 0, it will move back one, onto the one, and then move forward back onto the second ?, again being identical to the length 2 snippet. This illustrates how multiple programs can have the same effect. You might have noticed that no new language features have been introduced this time. This is because they have all been introduced. ### Length 6 Snippet 1 ?< 0 This program outputs the NOT of one bit (the opposite of snippet 2). You might, by now, think that StackyLogic is really boring, and can't do anything interesting. This isn't (entirely) true. It can be used for more interesting things, it just takes A LOT of characters. However, even with that many characters, it is not capable of much computation: it cannot even store the input, and most challenges are closed out to it (in fact, the linked example actually checks which year it is, if it is divisible by 4, to see if it is a special case (not a leap year)). ### Length 7 Snippet ?< ?1 ? This program "nests" an OR in a surrounding program. I say it nests it, because if the first bit is 1, it changes to be the exact same as the OR, and executes the same things.
This program is (x AND (y OR z)). I don't really know any more decent snippets; there isn't much more to show. I plan on making another answer with my derivative, Eseljik, after I actually make the interpreter, but before that decide on the final part of the spec. Anyway, there are a lot more commands in Eseljik, so I shouldn't easily run out of things to show, like I have here. • Does StackyLogic fulfill our definition of programming language? Jul 26 '16 at 15:03 • No. If this is an issue, maybe you'd prefer HQ9AddPrimality+? Seriously though, perhaps we should not judge on these arbitrary requirements, that try to regulate boring languages, but using common sense to determine what should and shouldn't be allowed. Though this lacks many abilities of normal languages, it is not abusive, like the language HQ9+, and is kind of fun to answer questions with. Jul 27 '16 at 1:11 ## ListSharp factoid: ListSharp is an interpreted language for list manipulation and web scraping, still in development, yet it can already do some neat stuff! • syntax is heavily word-based, so a good amount of votes will be needed for functional snippets Length 4 snippet: SHOW SHOW is the standard STDOUT of ListSharp and lets you display a variable regardless of its type Ex: SHOW = "Hello world" SHOW = {"1","2"} + "3" SHOW = variable_name ## Golisp ### Factoid Golisp is a very simple programming language, but despite its name it's not a Lisp dialect. ### Length 1 snippet 0 Return 0, but since this value isn't used, this is a no-op. ### Length 2 snippet "" Return an empty string, but since this value isn't used, this is a no-op. ### Length 3 snippet (0) Create a list with 1 element, 0. ### Length 4 snippet +[1] Call the function + with one argument, 1. The returned value is 1. ### Length 5 snippet chr@5 Use the shorthand notation function@argument. Return the ASCII character ENQ (5), but as always, nothing is printed :/ ### Length 6 snippet +[3 5] # Woefully Now with TIO!
## Factoid:

Woefully is a 2D language with no traditional conditionals (ooh, that rhymes). The closest thing it has to conditionals is the boolean/not_zero command, which pushes int(stack_A.pop() != 0). It's also pretty weird in other ways. It isn't really possible to even write a program in less than 7 characters, unless you count printing the "error" message as a program, or immediately halting as a program, so perhaps I'll multiply the votes by 4 to get the amount of bytes I can have, if that's allowable (context: it takes 266 bytes to write a truth machine, unless it could be golfed more).

## 1*4 bytes

```
boom
```

This program technically doesn't error, but just prints "confuse :(", the "error" message of the language, which counts as regular output as it goes to STDOUT, not STDERR. This program doesn't conform to the program composition requirements (needs to have no characters that aren't pipes, spaces and newlines, needs to have at least one space, needs to have no spaces at the start and end of lines), and so automatically prints the message "confuse :(".

BTW, it is going to be a while before anything really interesting, like even a program that just takes input, or pushes a number, so you might want to look at my truth machine to see what Woefully looks like.

## 2*4 bytes

```
| |
|| |
```

Ooh, we have the first program that doesn't just print "confuse :(" and die. This program does... nothing. It is in an infinite loop of nops.

### Why it nops forever: diagram!

```
v
|\|
||\|
X
```

The char pointer, the first of two pointers, starts at the character pointed to by v. The instruction pointer will find the first space after this char, and execute the path of spaces it is a part of. The \ shows the path. The instruction pointer goes down this path, but does nothing, since lines that are two-long are nops. Once it finishes the path (symbolised by the X), the instruction pointer returns to the char pointer, which has not moved, and executes it again, forever.
## 3*4 bytes

```
| |
| |
| ||
```

This program terminates in an error (not confuse, but an actual error), but why? Three-down executes the A to B command, popping the top of the A stack and pushing it to the B stack (there are two stacks). However, the program does not immediately fail, because the stacks start with values already on them: one zero each. It runs once normally, then fails the next iteration of the loop.

## 4*4 bytes

```
||| | ||
|| | ||
```

This program pushes zero to stack A infinitely. This is because all diagonally down-left commands push the length of the command, minus three: minus three because a command that is two long is a nop, so the shortest push command is three long, and the smallest push-able value is zero.

## 5*4 bytes

```
| || |
||| ||
|| |||
```

I didn't really have anything interesting (apart from some more erroring commands) to show this time, but I realised none of the snippets halt, though some error, so this snippet just halts, without doing anything (including the push at the right).

### diagram!

```
v          <- char pointer starts pointing here
|X||a|     <- X has the first space after that character, so its path is executed
|||a||     <- X halts the program
||a|||     <- a: never executed
```

# Majc (formerly hashmap)

Factoid: hashmap is not necessarily a golfing language, although its commands can make it so. (Also, it's spelled hashmap, not Hashmap.) It was renamed to Majc on June 11, 2016.

0 byte snippet:

This does nothing.

1 byte snippet:

```
.
```

Clear the stack... now... there's nothing in the stack... so...

2 byte snippet:

```
id
```

Convert input to a number; this is also the long version of h.

3 byte snippet:

```
h2^
```

h is short for id, so we're taking the input as a number and getting its square.

4 byte snippet:

```
{}:a
```

I'm not sure if this still works, but this creates a code block (based on CJam) and assigns it to variable a. Code blocks are like anonymous functions (unless assigned to a variable).

5 byte snippet:

```
isrsr
```

Kind of lost ideas here, but i takes an input, and sr reverses it.

• 2 byte snippet?
Jul 26 '16 at 0:32

# SX

## Length 1

Code:

```
水
```

Compiles to pass.

## Length 3

Code:

```
10才
```

Returns 11. 才 is equivalent to C's ++.

## Length 4

Code:

```
品*30
```

Returns the circumference of a circle with diameter 30, assuming math was imported.

## Length 5

Code:

```
送我(@)
```

Compiles to

```
def __init__(self):print(self)
```

and will work if put inside a class.

Factoid: sorry, pass isn't really that interesting. Sorta interesting: SX is one of the first golfing languages (I made it when I was in 7th or 8th grade). It also compiles to Python.

# Dotsplit

(whoops, I forgot the link before :p)

## (pair of) Factoid(s):

Dotsplit is a mostly syntax-free language, with odd builtins (kind of like Mathematica), but is currently in its infancy. Dotsplit is named after the tokeniser used in the interpreter: str.split(). This is a weak tokeniser, but it fulfills all the needs of the language, so far anyway.

## 1 byte:

```
D
```

D does... nothing. It isn't a tiny builtin, unless you count a nop as a builtin, and it's only a nop because it isn't a defined command. Dotsplit has relatively long command names for most things; note that the shortest happens to be three long. Anything that isn't a defined command is ignored. Also note commands are case insensitive, so if it were a command, d would be the same as D.

## 2 bytes: still not much interesting

```
 b
```

b with a leading space. This program functions identically to b, as well as D, because of nops; but if b were a command, b and D would not be the same, while b and [space]b would. Spaces are used to separate commands, but otherwise are ignored. Next upvote, and we get an actual command!

## 3 bytes

First command!

```
aDd
```

This will pop a and b, and push a+b. Popping from an empty stack yields 0; hence, this program will leave one 0 on the stack. add is case insensitive, and will function regardless of case (AdD will also work).

## 4 bytes

There are a fair amount more commands with 4 bytes, though still not many. I chose this one:

```
derp
```

This program will wait for input by the user.
If input is "derp" (case-sensitive), it prints "Derp". Otherwise, "Nope".

• That derp though... Oct 30 '16 at 8:10

# International Phonetic Esoteric Language

## Factoid

The International Phonetic Esoteric Language, or IPEL, is a stack-based esolang where the instruction set mostly consists of characters from the International Phonetic Alphabet, and some ASCII. IPEL is an attempt to apply stuff I've learned from some CS courses since starting college, as well as to create a language for myself for code golf.

## Length 1

```
1
```

IPEL has 3 types: numbers, strings, and lists. Numbers from 0-9 can be pushed by using the digit.

## Length 2

```
io
```

A simple cat program. i takes a string from STDIN, then outputs it with o.

## Length 3

```
"a"
```

Pushes the string a.

## Length 4

```
1esø
```

This code increments the loop index by 1. In IPEL, loops are not handled by the interpreter; the programmer has to manually set the loop index. Loops also only check for when $$index \lt end$$, where, when true, execution jumps back to the start of the loop.

## Length 5

```
<f>/\
```

An empty function definition.

## Length 6

```
{12.34}
```

This is how you push a float and any numbers larger than 9: surround it in curly brackets.

## Length 7

```
|a|ɔ|a|
```

Labels also exist in IPEL. When execution hits ɔ, it will jump to the label specified right after the ɔ. This is also the shortest infinite loop you can do in IPEL.

## Length 8

```
"C"ʁ2zχo
```

This is my answer to the "A Without A" challenge. It pushes C, converts it to a number, subtracts 2, converts it back to a string, and prints.

## Length 9

```
{zzzzzzz}
```

IPEL since v1.4.0 includes the ability to push base-36 numbers. zzzzzzz is equivalent to 78364164095 in base 36.

## Length 17

```
<f>/1ue2sø\<f>2u3u
```

This is the shortest program that demonstrates changing the return pointer of a function call. Without e2sø, this would print 123. With it, it would print 13 instead, completely skipping the two instructions 2u. I'm not sure if there's any practical use for this, but it's a thing you can do.
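The base-36 value quoted in the Length 9 snippet is easy to double-check in Python (this is just verifying the arithmetic, not running IPEL):

```python
# z is the highest base-36 digit (value 35), so seven z's give 36**7 - 1.
n = int("zzzzzzz", 36)
print(n)          # 78364164095
print(36**7 - 1)  # 78364164095, the same value by the positional-number identity
```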
# PyCal

Factoid: PyCal is a math-based programming language written in Python, hence the name. It's designed to compete in code-golf challenges that need math.

Length 1 snippet

```
!
```

OK, let me explain a little about PyCal. PyCal has two "variables". One variable's value can be set by the user. The other variable's value comes from the values that some commands return. For example, F returns the nth Fibonacci number. Anyway, the ! command outputs the value that is in the user-set variable. By default, it's 0, so the above snippet outputs 0.

Length 2 snippet

```
IP
```

This snippet outputs the first n prime numbers as a list. Let me explain what this does: I gets input, and converts it to an integer before storing it in the variable. (From now on, I'll call the variable that the user can change the "variable". The other one will be called the result. Read snippet 1 for more info.) P takes the variable's value and prints n prime numbers. For example, if the value is 5, it will output the first 5 primes. So, here's what it will look like if the input is 10:

```
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Length 3 snippet

```
(7)
```

Ah, it's good to be back; I took a long break. Anyway, let's get on with the explanation. Well, this is actually really simple. It changes the value of the variable to 7. Of course, this doesn't output anything. You can add a ! to the end to output the number.

Length 4 snippet

(wip)

Length 5 snippet

```
(i)S!
```

This snippet gets the square root of the input. (i) sets the value of the variable to the input and converts it into an integer. S calculates the square root.

• I didn't think it would be Python + Math, I thought it would end up being Python + Pascal... Sep 20 '16 at 12:37
• @XiKuuKy It's Python + Calculator. PyMath and Pyth (:P) are both taken – m654 Sep 25 '16 at 9:24

# Straw

Straw is a 1D stack-based language I created. It mainly operates on strings. You can try it online here.

### Length 1

```
<
```

Take one line of input and exit.
### Length 2

```
->
```

- takes an item from the secondary stack, and > prints it. Straw has 2 stacks: the first is initialized with an empty string, and the second with Hello, World!. So this code prints Hello, World!.

### Length 3

```
9#>
```

Any character which is not a command is pushed on the stack, and # converts a decimal number to unary. It's needed to make any operations on numbers, because Straw doesn't have numbers; it only operates on strings. So this example prints 000000000.

### Length 4

```
<<+>
```

Take two lines of input, concatenate (+) and print.
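The Straw commands described above are simple enough to emulate. The following Python sketch is based only on the behavior stated in this answer; it is not the real Straw interpreter, and details such as pop order for + are assumptions:

```python
# Toy emulation of the Straw commands described above.
def straw(code, inputs=()):
    inputs = list(inputs)
    out = []
    primary, secondary = [""], ["Hello, World!"]  # initial stack contents as described
    for c in code:
        if c == "<":                      # take one line of input
            primary.append(inputs.pop(0))
        elif c == ">":                    # print the top of the primary stack
            out.append(primary.pop())
        elif c == "-":                    # take an item from the secondary stack
            primary.append(secondary.pop())
        elif c == "+":                    # concatenate the top two items (order assumed)
            b, a = primary.pop(), primary.pop()
            primary.append(a + b)
        elif c == "#":                    # convert a decimal number to unary zeros
            primary.append("0" * int(primary.pop()))
        else:                             # non-commands are pushed literally
            primary.append(c)
    return "".join(out)

print(straw("->"))                  # Hello, World!
print(straw("9#>"))                 # 000000000
print(straw("<<+>", ["ab", "cd"]))  # abcd
```

This reproduces the outputs claimed for the Length 2 and Length 3 snippets, and the concatenation behavior of the Length 4 snippet.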
# 4. stars data model

For a better version of the stars vignettes see https://r-spatial.github.io/stars/articles/

This vignette explains the data model of stars objects, illustrated using artificial and real datasets.

## Stars objects

stars objects consist of

• a (possibly empty) named list of arrays, each having a named dimensions (dim) attribute
• an attribute called dimensions of class dimensions that carries dimension metadata
• a class name that includes stars

A dimensions object is a named list of dimension elements, each describing the semantics of a dimension of the data arrays (space, time, type, etc.). In addition to that, a dimensions object has an attribute called raster of class stars_raster, which is a named list with three elements:

• dimensions: length-2 character; the dimension names that constitute a spatial raster (or NA)
• affine: length-2 numeric; the two affine parameters of the geotransform (or NA)
• curvilinear: a boolean indicating whether a raster is a curvilinear raster (or NA)

The affine and curvilinear values are only relevant in the case of raster data, indicated by dimensions having non-NA values.

A dimension object describes a single dimension; it is a list with named elements

• from (numeric, length 1): the start index of the array
• to (numeric, length 1): the end index of the array
• offset (numeric, length 1): the start coordinate (or time) value of the first pixel (i.e., a pixel/cell boundary)
• delta (numeric, length 1): the increment, or cell size
• refsys (character, or crs): object describing the reference system; e.g.
the PROJ string, or string POSIXct or PCICt (for 360 and 365 days/year calendars), or an object of class crs (containing both EPSG code and proj4string)
• point (logical, length 1): boolean indicating whether cells/pixels refer to areas/periods, or to points/instances (may be NA)
• values: one of
  • NULL (missing),
  • a vector with coordinate values (numeric, POSIXct, PCICt, or sfc),
  • an object of class intervals (a list with two vectors, start and end, with interval start- and end-values), or
  • a matrix with longitudes or latitudes for all cells (in the case of curvilinear grids)

from and to will usually be 1 and the dimension size, but from may be larger than 1 in case a sub-grid was selected (or cropped). offset and delta only apply to regularly discretized dimensions, and are NA if this is not the case. If they are NA, dimension values may be held in the values field. Rectilinear and curvilinear grids need grid values in values that can be either:

• for rectilinear grids: irregularly spaced coordinate values, or coordinate intervals of irregular width (a rectilinear grid can have one dimension that is regular), or
• for curvilinear grids: a matrix with grid cell centre values for all row/col combinations (usually in longitude or latitude).

Alternatively, values can contain a set of spatial geometries encoded in an sfc vector ("list-column"), in which case we have a vector data cube.

## Grid type

### Regular grids

With a very simple file created from a $$4 \times 5$$ matrix

```r
suppressPackageStartupMessages(library(stars))
m = matrix(1:20, nrow = 5, ncol = 4)
dim(m) = c(x = 5, y = 4) # named dim
(s = st_as_stars(m))
## stars object with 2 dimensions and 1 attribute
## attribute(s):
##     Min. 1st Qu. Median Mean 3rd Qu. Max.
##  A1    1    5.75   10.5 10.5   15.25   20
## dimension(s):
##   from to offset delta point x/y
## x    1  5      0     1 FALSE [x]
## y    1  4      0     1 FALSE [y]
```

we see that

• the rows (5) are mapped to the first dimension, the x-coordinate
• the columns (4) are mapped to the second dimension, the y-coordinate
• the from and to fields of each dimension define a range that corresponds to the array dimension:

```r
dim(s[[1]])
## x y
## 5 4
```

• offset and delta specify how increasing row and column index map to x and y coordinate values respectively.

When we plot this object, using the image method for stars objects,

```r
image(s, text_values = TRUE, axes = TRUE)
```

we see that $$(0,0)$$ is the origin of the grid (grid corner), and $$1$$ the coordinate value increase from one index (row, col) to the next. It means that consecutive matrix columns represent grid lines, going from south to north. Grids defined this way are regular: grid cell size is constant everywhere.

Many actual grid datasets have y coordinates (grid rows) going from north to south (top to bottom); this is realised with a negative value for delta. We see that the grid origin $$(0,0)$$ did not change:

```r
attr(s, "dimensions")[[2]]$delta = -1
image(s, text_values = TRUE, axes = TRUE)
```

An example is the GeoTIFF carried in the package, which, as probably all data sources read through GDAL, has a negative delta for the y-coordinate:

```r
tif = system.file("tif/L7_ETMs.tif", package = "stars")
st_dimensions(read_stars(tif))["y"]
##   from  to  offset delta                     refsys point
## y    1 352 9120761 -28.5 SIRGAS 2000 / UTM zone 25S FALSE
```

### Raster attributes, rotated and sheared grids

Dimension tables of stars objects carry a raster attribute:

```r
str(attr(st_dimensions(s), "raster"))
## List of 4
##  $ affine     : num [1:2] 0 0
##  $ dimensions : chr [1:2] "x" "y"
##  $ curvilinear: logi FALSE
##  $ blocksizes : NULL
##  - attr(*, "class")= chr "stars_raster"
```

which is a list that holds

• dimensions: character, the names of raster dimensions (if any), as opposed to e.g.
spectral, temporal or other dimensions
• affine: numeric, the affine parameters
• curvilinear: a logical indicating whether the raster is curvilinear

These fields are needed at this level, because they describe properties of the array at a higher level than individual dimensions do: a pair of dimensions forms a raster, and both affine and curvilinear describe how x and y as a pair are derived from grid indexes (see below) when this cannot be done on a per-dimension basis.

With two affine parameters $$a_1$$ and $$a_2$$, $$x$$ and $$y$$ coordinates are derived from (1-based) grid indexes $$i$$ and $$j$$, grid offset values $$o_x$$ and $$o_y$$, and grid cell sizes $$d_x$$ and $$d_y$$ by

$$x = o_x + (i-1) d_x + (j-1) a_1$$

$$y = o_y + (i-1) a_2 + (j-1) d_y$$

Clearly, when $$a_1=a_2=0$$, $$x$$ and $$y$$ are entirely derived from their respective index, offset and cellsize. Note that for integer indexes, the coordinates are those of the starting edge of a grid cell; to get the grid cell centre of the top left grid cell (in case of a negative $$d_y$$), use $$i=1.5$$ and $$j=1.5$$.

We can rotate grids by setting $$a_1$$ and $$a_2$$ to a non-zero value:

```r
attr(attr(s, "dimensions"), "raster")$affine = c(0.1, 0.1)
plot(st_as_sf(s, as_points = FALSE), axes = TRUE, nbreaks = 20)
```

The rotation angle, in degrees, is

```r
atan2(0.1, 1) * 180 / pi
## [1] 5.710593
```

Sheared grids are obtained when the two rotation coefficients, $$a_1$$ and $$a_2$$, are unequal:

```r
attr(attr(s, "dimensions"), "raster")$affine = c(0.1, 0.2)
plot(st_as_sf(s, as_points = FALSE), axes = TRUE, nbreaks = 20)
```

Now the y-axis and x-axis have different rotations, in degrees, of respectively

```r
atan2(c(0.1, 0.2), 1) * 180 / pi
## [1] 5.710593 11.309932
```

### Rectilinear grids

Rectilinear grids have orthogonal axes, but do not have congruent (equally sized and shaped) cells: each axis has its own irregular subdivision.
We can define a rectilinear grid by specifying the cell boundaries, meaning for every dimension we specify one more value than the dimension size:

```r
x = c(0, 0.5, 1, 2, 4, 5)  # 6 numbers: boundaries!
y = c(0.3, 0.5, 1, 2, 2.2) # 5 numbers: boundaries!
(r = st_as_stars(list(m = m), dimensions = st_dimensions(x = x, y = y)))
## stars object with 2 dimensions and 1 attribute
## attribute(s):
##    Min. 1st Qu. Median Mean 3rd Qu. Max.
## m     1    5.75   10.5 10.5   15.25   20
## dimension(s):
##   from to point                values x/y
## x    1  5 FALSE     [0,0.5),...,[4,5) [x]
## y    1  4 FALSE [0.3,0.5),...,[2,2.2) [y]
st_bbox(r)
## xmin ymin xmax ymax
##  0.0  0.3  5.0  2.2
image(r, axes = TRUE, col = grey((1:20)/20))
```

If we leave out the last value, then stars may come up with a different cell boundary for the last cell, as this is now derived from the width of the one-but-last cell:

```r
x = c(0, 0.5, 1, 2, 4) # 5 numbers: offsets only!
y = c(0.3, 0.5, 1, 2)  # 4 numbers: offsets only!
(r = st_as_stars(list(m = m), dimensions = st_dimensions(x = x, y = y)))
## stars object with 2 dimensions and 1 attribute
## attribute(s):
##    Min. 1st Qu. Median Mean 3rd Qu. Max.
## m     1    5.75   10.5 10.5   15.25   20
## dimension(s):
##   from to point              values x/y
## x    1  5 FALSE   [0,0.5),...,[4,6) [x]
## y    1  4 FALSE [0.3,0.5),...,[2,3) [y]
st_bbox(r)
## xmin ymin xmax ymax
##  0.0  0.3  6.0  3.0
```

This is not problematic if cells have a constant width, in which case the boundaries are reduced to an offset and delta value, irrespective of whether an upper boundary is given:

```r
x = c(0, 1, 2, 3, 4)  # 5 numbers: offsets only!
y = c(0.5, 1, 1.5, 2) # 4 numbers: offsets only!
(r = st_as_stars(list(m = m), dimensions = st_dimensions(x = x, y = y)))
## stars object with 2 dimensions and 1 attribute
## attribute(s):
##    Min. 1st Qu. Median Mean 3rd Qu. Max.
## m     1    5.75   10.5 10.5   15.25   20
## dimension(s):
##   from to offset delta point x/y
## x    1  5      0     1 FALSE [x]
## y    1  4    0.5   0.5 FALSE [y]
st_bbox(r)
## xmin ymin xmax ymax
##  0.0  0.5  5.0  2.5
```

Alternatively, one can also set the cell midpoints by specifying the argument cell_midpoints to the st_dimensions call:

```r
x = st_as_stars(matrix(1:9, 3, 3),
    st_dimensions(x = c(1, 2, 3), y = c(2, 3, 10), cell_midpoints = TRUE))
```

When the dimension is regular, this results in offset being shifted back by half a delta, or else in intervals derived from the distances between cell centers. This should obviously not be done when cell boundaries are specified.

### Curvilinear grids

Curvilinear grids are grids whose grid lines are not straight. Rather than describing the curvature parametrically, the typical (HDF5 or NetCDF) files in which they are found have two raster layers with the longitudes and latitudes for every corresponding pixel of the remaining layers.

As an example, we will use a Sentinel 5P dataset available from package starsdata; this package can be installed with

```r
install.packages("starsdata", repos = "http://pebesma.staff.ifgi.de", type = "source")
```

The dataset is found here:

```r
(s5p = system.file("sentinel5p/S5P_NRTI_L2__NO2____20180717T120113_20180717T120613_03932_01_010002_20180717T125231.nc", package = "starsdata"))
## [1] "/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/starsdata/sentinel5p/S5P_NRTI_L2__NO2____20180717T120113_20180717T120613_03932_01_010002_20180717T125231.nc"
```

We can construct the curvilinear stars raster by calling read_stars on the right sub-array:

```r
subs = gdal_subdatasets(s5p)
subs[[6]]
## [1] "NETCDF:\"/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/starsdata/sentinel5p/S5P_NRTI_L2__NO2____20180717T120113_20180717T120613_03932_01_010002_20180717T125231.nc\":/PRODUCT/nitrogendioxide_tropospheric_column"
```

For this array, we can see the GDAL metadata under item GEOLOCATION:

```r
gdal_metadata(subs[[6]], "GEOLOCATION")
## $LINE_OFFSET
## [1] "0"
##
## $LINE_STEP
## [1] "1"
##
## $PIXEL_OFFSET
## [1] "0"
##
## $PIXEL_STEP
## [1] "1"
##
## $SRS
## [1] "GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.0174532925199433,AUTHORITY[\"EPSG\",\"9122\"]],AXIS[\"Latitude\",NORTH],AXIS[\"Longitude\",EAST],AUTHORITY[\"EPSG\",\"4326\"]]"
##
## $X_BAND
## [1] "1"
##
## $X_DATASET
## [1] "NETCDF:\"/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/starsdata/sentinel5p/S5P_NRTI_L2__NO2____20180717T120113_20180717T120613_03932_01_010002_20180717T125231.nc\":/PRODUCT/longitude"
##
## $Y_BAND
## [1] "1"
##
## $Y_DATASET
## [1] "NETCDF:\"/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/starsdata/sentinel5p/S5P_NRTI_L2__NO2____20180717T120113_20180717T120613_03932_01_010002_20180717T125231.nc\":/PRODUCT/latitude"
##
## attr(,"class")
## [1] "gdal_metadata"
```

which reveals where, in this dataset, the longitude and latitude arrays are kept.

```r
nit.c = read_stars(subs[[6]])
## as.character(driver), : GDAL Message 1: The dataset has several variables
## that could be identified as vector fields, but not all share the same primary
## dimension. Consequently they will be ignored.
threshold = units::set_units(9e+36, mol/m^2)
nit.c[[1]][nit.c[[1]] > threshold] = NA
nit.c
## stars object with 3 dimensions and 1 attribute
## attribute(s):
##                                                      Min.      1st Qu.
## nitrogendioxide_tropospheric_c... [mol/m^2] -3.301083e-05 1.868205e-05
##                                                    Median         Mean
## nitrogendioxide_tropospheric_c... [mol/m^2]  2.622178e-05 2.898976e-05
##                                                   3rd Qu.         Max. NA's
## nitrogendioxide_tropospheric_c... [mol/m^2]  3.629641e-05 0.0003924858  330
## dimension(s):
##      from  to         offset  refsys                                values x/y
## x       1 450             NA  WGS 84 [450x278] -5.81066 [°],...,30.9468 [°] [x]
## y       1 278             NA  WGS 84 [450x278] 28.3605 [°],...,51.4686 [°]  [y]
## time    1   1 2018-07-17 UTC POSIXct                                  NULL
## curvilinear grid
```

The curvilinear array has the actual arrays (raster layers, matrices) with longitude and latitude values read in its dimension table. We can plot this file:

```r
plot(nit.c, breaks = "equal", reset = FALSE, axes = TRUE, as_points = TRUE,
     pch = 16, logz = TRUE, key.length = 1)
## Warning in NextMethod(): NaNs produced
## Warning in plot.sf(x, pal = col, ...): NaNs produced
maps::map('world', add = TRUE, col = 'red')
plot(nit.c, breaks = "equal", reset = FALSE, axes = TRUE, as_points = FALSE,
     border = NA, logz = TRUE, key.length = 1)
## Warning in NextMethod(): NaNs produced
## Warning in plot.sf(x, pal = col, ...): NaNs produced
maps::map('world', add = TRUE, col = 'red')
```

We can downsample the data by

```r
(nit.c_ds = stars:::st_downsample(nit.c, 8))
## stars object with 3 dimensions and 1 attribute
## attribute(s):
##                                                      Min.     1st Qu.
## nitrogendioxide_tropospheric_c... [mol/m^2] -1.847503e-05 1.85778e-05
##                                                    Median       Mean
## nitrogendioxide_tropospheric_c... [mol/m^2]  2.700901e-05 2.9113e-05
##                                                   3rd Qu.         Max. NA's
## nitrogendioxide_tropospheric_c... [mol/m^2]  3.642568e-05 0.0001363282   32
## dimension(s):
##      from to         offset  refsys                              values x/y
## x       1 50             NA  WGS 84 [50x31] -5.81066 [°],...,30.1405 [°] [x]
## y       1 31             NA  WGS 84 [50x31] 28.7828 [°],...,51.4686 [°]  [y]
## time    1  1 2018-07-17 UTC POSIXct                                NULL
## curvilinear grid
plot(nit.c_ds, breaks = "equal", reset = FALSE, axes = TRUE, as_points = TRUE,
     pch = 16, logz = TRUE, key.length = 1)
## Warning in NextMethod(): NaNs produced
## Warning in plot.sf(x, pal = col, ...): NaNs produced
maps::map('world', add = TRUE, col = 'red')
```

which doesn't look nice, but plotting the cells as polygons looks better:

```r
plot(nit.c_ds, breaks = "equal", reset = FALSE, axes = TRUE, as_points = FALSE,
     border = NA, logz = TRUE, key.length = 1)
## Warning in NextMethod(): NaNs produced
## Warning in plot.sf(x, pal = col, ...): NaNs produced
maps::map('world', add = TRUE, col = 'red')
```

Another approach would be to warp the curvilinear grid to a regular grid, e.g. by

```r
w = st_warp(nit.c, crs = 4326, cellsize = 0.25)
## Warning in transform_grid_grid(st_as_stars(src), st_dimensions(dest),
## threshold): using Euclidean distance measures on geodetic coordinates
plot(w)
```
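The affine index-to-coordinate mapping given in the raster-attributes section earlier can be sketched numerically. The following Python function is an illustration of those two formulas only; it is not part of stars, and the default offset/delta/affine values are example assumptions:

```python
# x = o_x + (i-1)*d_x + (j-1)*a1
# y = o_y + (i-1)*a2 + (j-1)*d_y
# with 1-based indexes i, j as in the vignette; i = j = 1 gives the grid origin.
def grid_to_xy(i, j, offset=(0.0, 0.0), delta=(1.0, -1.0), affine=(0.0, 0.0)):
    ox, oy = offset
    dx, dy = delta
    a1, a2 = affine
    x = ox + (i - 1) * dx + (j - 1) * a1
    y = oy + (i - 1) * a2 + (j - 1) * dy
    return x, y

print(grid_to_xy(1, 1))      # (0.0, 0.0): the grid origin (grid corner)
print(grid_to_xy(1.5, 1.5))  # (0.5, -0.5): centre of the top-left cell, since dy = -1
print(grid_to_xy(2, 3, affine=(0.1, 0.1)))  # a rotated grid, as in the a1 = a2 = 0.1 example
```

With affine = (0, 0), x and y each depend on a single index, matching the regular-grid case; non-zero affine parameters couple the two indexes, which is what rotation and shearing mean here.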
# Standing on a plank supported by a cable

###### Question: A person stands on a plank supported by a cable, as shown in the figure. Assume the plank weighs 200 N. Find the tension in the cable supporting the plank and the components of the force exerted on the left end of the plank. (Enter the magnitudes of your answers.) If the cable can withstand a given maximum tension, what maximum distance (in m) can the person walk before the cable breaks? (Measure this distance from the wall.)
In this lab we will be expressing rate as a change in concentration of reactants over time. In order to calculate this, we need to know the initial concentrations of our reagents after they are combined but before the reaction occurs. This is a dilution process. Recall that the calculation for a dil... ##### PRACTICE 1 probi ntrategy Tatd evaluate Evaluate tha answers and the answers to choose the right ona Ml5 452 freula Eath o6 A Nroxon bram haa Me Iottt FEIE ndk ulat Tadlu: 0 GOaSt Mhe Me Andhdual prton nuru(ic Ilcld Given OTrn r Hitu ue Unknown Equation Solution 0 A5T Paln 4 proton PRACTICE 1 probi ntrategy Tatd evaluate Evaluate tha answers and the answers to choose the right ona Ml5 452 freula Eath o6 A Nroxon bram haa Me Iottt FEIE ndk ulat Tadlu: 0 GOaSt Mhe Me Andhdual prton nuru(ic Ilcld Given OTrn r Hitu ue Unknown Equation Solution 0 A5T Paln 4 proton... ##### Use your calculator and charts to help choose the most correct answer for problems 19-24. Write the CAPITAL LETTER on the line provided:In a recent study of 22 eighth graders the mean number of hours per week that they watched television was 19.6 with a standard deviation of 5.8 hours: Find the 959 confidence interval for the - population mean:(17.0,22.2)(17.2,22.0)(17.3,21.9)(16.8,22.4)pollster wishes to estinate the proportion of United States likely voters who favor capital punishment for the Use your calculator and charts to help choose the most correct answer for problems 19-24. Write the CAPITAL LETTER on the line provided: In a recent study of 22 eighth graders the mean number of hours per week that they watched television was 19.6 with a standard deviation of 5.8 hours: Find the 959... ##### Express the volume of the solid described as a double integral in polar coordinates. Express the volume of the solid described as a double integral in polar coordinates.... 
##### 0imMuuhALuk BounhaiDATALDLMQAA Im (Uml |7ET T80T Wk29 LSLSiL 444452 14L422 L44.434 2176877 Li %1041J4 04 0,40204 0 064746 0012444 76277o 0 737750 Sos0586 46 770] 13 7; 7377 541101777id-dl Haamu WNolumr Excel and aLch Mckcan AcFour IponData AnalrtisE doubkd Irom 50mL I0Ounl ~hdn Your UX1 &r hapnens I0 JLeUu t FrLsue? Shak pnauuie valyr in YOUI jibsKIT:IhAalnie hohed (nn 20.0 mL IO0 mL , what doxrs your dzta shox IapAEnS I0 Messure? Show tbe pressure valtes in your nswtT; 0im MuuhALuk Bounhai DATALDLMQAA Im (Uml | 7ET T80T Wk29 LSLSiL 444452 14L422 L44.434 2176877 Li %1 041J4 04 0,40204 0 064746 0012444 76277o 0 737750 Sos05 86 46 770] 13 7; 7377 5411 01777 id-dl Haamu WNolumr Excel and aLch Mckcan Ac Four Ipon Data Analrtis E doubkd Irom 50mL I0Ounl ~hdn Your UX1 &a... ##### Draty the podud of ihs folletCH_ONaUha anruf otithc Bollwing Nucuns 7 UraNaOCH=Tr HI NMR and CIJ NMR srectrum for I pruluct HnhemiInteenllonSelttuineSall (rp"InniclmuWnn et CMMSquunelumnniciPPH Draty the podud of ihs follet CH_ONa Uha anruf otithc Bollwing Nucuns 7 Ura NaOCH= Tr HI NMR and CIJ NMR srectrum for I pruluct Hn hemi Inteenllon Selttuine Sall (rp" Innicl muWnn et CMMS quunel umnnici PPH...
2022-07-07 04:10:11
https://www.transtutors.com/questions/future-value-of-single-investment-and-annuity-jane-dough-was-a-teller-in-a-large-nor-1313012.htm
Future Value of Single Investment and Annuity

Jane Dough was a teller in a large northeastern bank. She was single and approaching age 30, and she considered herself an honest and upright citizen. After considering what she might do to build a retirement plan for the future, she decided to embezzle $1,500,000. Subsequently she gave herself up to the authorities but did not return the $1,500,000. She was tried, convicted, and sentenced to 20 years in prison. After completing her 20-year term, she returned the $1,500,000 that she had stolen. She then decided to take a world cruise. On the ship someone asked her how she had accumulated enough money to afford the trip. She replied, "Do you know how much interest $1,500,000 will earn in 20 years if invested at an annual rate of 16% compounded quarterly?"

Required

1. Determine the answer to Jane Dough's question. The table factor for n = 40, i = 4% is 4.801021.

2. Evaluate Jane's retirement decision, assuming that she could have earned $21,000 each year for each of the 20 years she was in prison. Assume that $11,000 is required each year to cover living expenses and that she could have invested the remaining $10,000 at the end of each year to earn interest at 16% compounded annually.
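The two computations can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original problem: Requirement 1 uses the table factor the problem supplies (n = 40 periods at i = 4% per period), and Requirement 2 treats the $10,000 deposits as an ordinary end-of-year annuity at 16%:

```python
def fv_lump_sum(pv, rate, periods):
    """Future value of a single amount: FV = PV * (1 + i)^n."""
    return pv * (1 + rate) ** periods

def fv_ordinary_annuity(payment, rate, periods):
    """Future value of equal end-of-period payments: FV = PMT * ((1 + i)^n - 1) / i."""
    return payment * ((1 + rate) ** periods - 1) / rate

# Requirement 1: the problem's table factor is (1.04)^40 = 4.801021
factor = (1 + 0.04) ** 40
fv_question = fv_lump_sum(1_500_000, 0.04, 40)

# Requirement 2: $10,000 at the end of each year for 20 years at 16%
fv_savings = fv_ordinary_annuity(10_000, 0.16, 20)
```

Comparing the two future values is the substance of Requirement 2: the embezzled lump sum grows to roughly $7.2 million, far beyond what the honest annual savings would have produced.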
2018-09-24 03:43:25
https://www.lmfdb.org/knowledge/show/dq.av.fq.source
show · dq.av.fq.source

The computation is based on the Honda–Tate theorem, which states that isogeny classes of abelian varieties over finite fields are completely determined by the characteristic polynomial of their Frobenius automorphism acting on the first $\ell$-adic cohomology group. For a given dimension $g$ and base field of size $q$, a complete list of all Weil polynomials that do occur can be enumerated using a technique developed by Kedlaya [MR:2459990, arXiv:0608104]. In 2016, Dupuy, Kedlaya, Roe and Vincent improved upon Kedlaya's original code to generate these tables and the data they contain. One may also wish to see the article of Kedlaya and Sutherland [MR:3540942, arXiv:1511.06945], where these techniques are used to compute Weil polynomials for K3 surfaces.

Knowl status: being renamed to rcs.source.av.fq • Review status: beta • Last edited by Andrew Sutherland on 2019-10-18 12:38:34
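The enumeration is easy to make concrete in the simplest case. For $g = 1$ over a prime field $\mathbb{F}_q$, the Weil polynomials are $x^2 - ax + q$ with $|a| \le 2\sqrt{q}$, and each root has absolute value $\sqrt{q}$. The sketch below illustrates only that elementary case (the helper names are hypothetical; this is not Kedlaya's algorithm, which handles arbitrary $g$ and the extra conditions that arise for non-prime $q$):

```python
import cmath
import math

def weil_polynomials_g1(q):
    """Candidate Weil polynomials x^2 - a*x + q for elliptic curves over F_q:
    the trace a must satisfy a^2 <= 4q (the Weil/Hasse bound)."""
    bound = math.isqrt(4 * q)
    return [(1, -a, q) for a in range(-bound, bound + 1)]

def roots_have_weil_size(poly, q):
    """Check that both roots of x^2 + b*x + c have absolute value sqrt(q)."""
    _, b, c = poly
    disc = cmath.sqrt(b * b - 4 * c)
    r1, r2 = (-b + disc) / 2, (-b - disc) / 2
    return all(abs(abs(r) - math.sqrt(q)) < 1e-9 for r in (r1, r2))
```

For $q = 5$ this lists the nine polynomials with $a \in \{-4, \dots, 4\}$, and every one of them passes the $|\text{root}| = \sqrt{q}$ check.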
2019-11-18 15:54:52
https://pos.sissa.it/398/474/
Volume 398 - The European Physical Society Conference on High Energy Physics (EPS-HEP2021) - T07: Top and Electroweak Physics

Anomalous coupling studies with intact protons at the LHC

C. Royon

Full text: Not available

Abstract

We describe the beyond-standard-model physics that can be probed at the LHC using intact protons in the final state. The gains in sensitivity to quartic $\gamma \gamma \gamma \gamma$, $\gamma \gamma WW$ and $\gamma \gamma \gamma Z$ anomalous couplings, taken as examples, and to the search for axion-like particles are about two to three orders of magnitude with respect to standard methods at the LHC.

Open Access
2022-01-19 05:06:53
https://barronwasteland.wordpress.com/2017/10/08/game-show-bright-lights-big-cit/
# Game Show: Bright Lights, Big City

Another week, another Riddler. The question:

You and three of your friends are on a game show. On stage is a sealed room, and in that room are four sealed, numbered boxes. Each box contains one of your names, and each name is in one box. You and your friends take turns entering the room alone and opening up to two boxes, with the aim of finding the box containing your name. Everyone enters exactly once. Your team can confer on a strategy before stepping on stage, but there is no communication allowed during the show — no player knows the outcome of another player's trip into the room. Your team wins if it's ultimately revealed that everyone found the box containing his or her name and loses if any player failed to do so. Obviously, the odds of winning are no better than 50 percent because any single player has a 50 percent chance of finding his or her own name. If each person opens two boxes at random, the chance of winning is $(1/2)^4 = 1/16 = 6.25$ percent. Or to put it in technical terms: The chance of winning is not so great. Call this the naive strategy. Your goal: Concoct a strategy that beats the naive strategy — one that gives the team a better chance of winning than 1/16.

Extra credit: Suppose there are 100 contestants and 100 boxes. Each player may open 50 boxes. The chance of winning by using the naive strategy is 1 in $2^{100}$, or about 1 in $1.2\times 10^{30}$. How much can you improve the team's chances?

The way to beat this game is to look at the problem in a different light. My strategy for winning was instead to look at the chance that the first half of the players' names are all in the first half of the boxes: the first half of the players open the first half of the boxes, and the second half open the second half. If all of the first half's names are in the first half of the boxes, those players will find their names, and the second half are then guaranteed to find theirs in the second half.
The equation for this would look something like this:

#### $\prod_{i=0}^{n/2-1}\left ( \frac{\frac{n}{2}-i}{n-i} \right )$

where $n$ is the number of players. This gives an answer of $1/6$ for four players and about $7.91\times 10^{-15}$ for 50 players.
2019-07-20 23:56:32
http://www.sitepoint.com/forums/showthread.php?483645-Width-Height-of-an-Uploaded-Image
1. ## Width/Height of an Uploaded Image?

Hi all,

On a site I am developing, there is an area for users to upload an image of themselves. The actual upload code works, but the code I have in place is not grabbing the width/height of the image (to make sure it doesn't exceed the maximum dimensions). The form code is as follows:

Code:
<form method="POST" action="photo_admin.php" enctype="multipart/form-data">
<input type="hidden" name="photo" value="main" />
<input type="file" name="main">
<input type="submit" name="Submit" value="Upload New Image" class="flatButton" />
</form>

Further down the page, I then check whether $_POST['photo'] == 'main' and if so, execute the following code:

PHP Code:
$imgData = getimagesize($_FILES['main']['tmp_name']);
$imgWidth = $imgData[0];
$imgHeight = $imgData[1];

if (($imgWidth > 800) || ($imgHeight > 600)) {
    echo ('<span class="error">Uploaded images must not exceed 800x600 pixels in size.</span>' . BR . BR . 'Click <a href="photo_admin.php">here</a> to go back and try again.');
} else {
    // Upload image
}

Any help would be much appreciated.

-Will

2. Are the BR constants declared somewhere in your files? You could verify that the file has been uploaded with is_uploaded_file($_FILES['main']['tmp_name']) before doing any other operations.

3. Thanks hidran, I added the is_uploaded_file line and it does recognise it as an uploaded file (plus this is used later on to upload the image so it must be correct). Before the getimagesize line, I added error_reporting(E_ALL), which returned the following errors:

Warning: getimagesize(): Unable to access C:\WINDOWS\TEMP\phpBDB6.tmp in E:\USERS\path\to\file on line 187
Warning: getimagesize(C:\WINDOWS\TEMP\phpBDB6.tmp): failed to open stream: No such file or directory

Any ideas? I'm kinda stumped.

-Will

4. Afternoon Will,

I am pretty sure you can't use getimagesize until the image is on the server. So until the file has fully uploaded you can't get the image dimensions.
What you can do is either resize it or delete it if it doesn't conform.

5. Hi spike, long time no see!

Instead I've imposed a file size restriction using $_FILES['image']['size']. Given the situation it seems more logical than using height/width restrictions anyway.

Thanks for the help, spike and hidran.

-Will
2016-05-27 06:35:34
https://www.physicsforums.com/threads/ideal-gas-law.118952/
# Homework Help: Ideal Gas Law

1. Apr 27, 2006

### willydavidjr

The diagram below shows state changes of an ideal gas. The temperatures of states (1), (2), (3) are $$T_1[K], T_2[K]$$ and $$T_3[K]$$ respectively. The state change from (1) to (2) is an adiabatic change (not an isothermal change). The state change from (2) to (3) is a change at constant pressure (isobaric change). The state change from (3) to (1) is a change at constant volume (isochoric change).

Question:
1.) Write the size relation between $$T_1, T_2, T_3$$.
2.) Let the quantity of the ideal gas be 1[mol] and let R[J/mol*K] denote the gas constant. Find the work which the gas did on the outside during the state change from (2) to (3).

My idea: For number 1: To write the relation between the three temperatures, I will follow the ideal gas law PV=nRT. But n and R can be dropped because they are constants and it is the same gas. So am I correct if I say $$T_1 < T_2 < T_3$$? Or it looks like $$T_1 > T_2$$. What do you say?

For number 2: Is there work done by the gas on the outside during the state change?

Note: You can view the diagram on this website: http://www.geocities.com/willydavidjr/pvdiagram

#### Attached Files:

• ###### pressure.jpg

Last edited: Apr 27, 2006

2. Apr 27, 2006

### Tom Mattson

Staff Emeritus

I agree with you that $T_3$ is the biggest one. To correctly order $T_1$ and $T_2$ you should note that for adiabatic processes, $VT^{\alpha}=constant$.

There has to be work done, because the gas is expanding. What's the definition of work in terms of pressure and volume?

3. Apr 28, 2006

### willydavidjr

Wait a minute Tom Mattson, I think we're wrong. The question asked about the "size relation". Are they talking about the volumes or the temperatures? And on question number two, how can I find the work done ($$W=PV$$) if the only givens are the number of moles and the constant R? Thank you.

4.
Apr 28, 2006

### Tom Mattson

Staff Emeritus

Willy, I have no idea what the "size relation" is. That term certainly isn't standard. I assumed that you knew what it meant, and so I took your word for it that it was an inequality involving the temperatures. If it is something else then you should say so.

5. Apr 28, 2006

### willydavidjr

Ok, thank you. I am sorry because that was the same sentence I have here; maybe it's an inequality involving temperatures. How about "Let the quantity of the ideal gas be 1[mol] and let R[J/mol*K] denote the gas constant. Find the work which the gas did on the outside during the state change from (2) to (3)."? My problem about this is that the givens are n and R. We have no detail for the P, V, T to get the work $$W=PV$$. How can I solve it?

6. Apr 28, 2006

### willydavidjr

It seems that $$T_1$$ and $$T_2$$ are equal?
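A worked equation makes Tom's hint concrete. This is a sketch assuming 1 mol of ideal gas and the usual convention that the work done by the gas on the outside is $W = \int P\,dV$; for the isobaric change (2) $\to$ (3),

$$W = P(V_3 - V_2) = nR(T_3 - T_2) = R(T_3 - T_2)$$

so the explicit values of $P$ and $V$ are never needed: the ideal gas law trades them for the temperature difference, which is why the problem only supplies $n$ and $R$.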
2018-12-18 14:08:13
https://www.physicsforums.com/threads/calculating-clebsch-gordan-coefficients.903341/
# Homework Help: Calculating Clebsch-Gordan coefficients

1. Feb 9, 2017

### Kara386

1. The problem statement, all variables and given/known data
The question asks me to calculate the non-zero Clebsch-Gordan coefficients
$\langle j_1+j_2, j_s-2|j_1,m_1;j_2,m_2\rangle$
where $j_s=j_1+j_2$.

2. Relevant equations

3. The attempt at a solution
The $j_s-2$ part is the $m$ of a $|j,m\rangle$, and I know that $m$ has to equal $m_1+m_2$ or that coefficient would just be zero. I also know that I should apply the lowering operator $J_-$ to one of these two equations:
$\langle j_s, j_s-1|j_1,j_1-1;j_2,j_2\rangle = \sqrt{\frac{j_1}{j_s}}$
Or
$\langle j_s, j_s-1|j_1,j_1;j_2,j_2-1\rangle = \sqrt{\frac{j_2}{j_s}}$
I don't know which of these two I should be applying the lowering operator to. And then I don't know how to apply it, because there are two $j, m$ sets in the ket: when calculating the coefficient using $\sqrt{j+m}\sqrt{j-m+1}$, which $m$ and $j$ exactly is this thing supposed to change? How do you apply the operator to an inner product? And you can't apply $J_-$ to the RHS, so I'm not sure what to do about that either.

Essentially we're being taught via lots of examples, but it's hard to extract the rules from the examples without an explanation, which is why my questions probably involve incredibly basic and fundamental concepts. Basically I have no idea what's going on, but it isn't for lack of reading or trying. I've definitely looked in lots of textbooks and on the internet, I just don't understand what they say! So any help is hugely appreciated, I'm trying to use this question to work out all the stuff I don't know. :)

2. Feb 10, 2017

### blue_leaf77

You can't, operators only act on kets. To start, we have the convention
$$|j_1+j_2,j_1+j_2\rangle = |j_1,j_1;j_2,j_2\rangle$$
In order to get $|j_1+j_2,j_1+j_2-2\rangle$, you have to apply $(J_-)^2$ to the left side and hence also to the right side.
This gives you
$$(J_-)^2 |j_1+j_2,j_1+j_2\rangle = (J_-)^2 |j_1,j_1;j_2,j_2\rangle$$
Then project both sides onto $\langle j_1,m_1;j_2,m_2|$:
$$\langle j_1,m_1;j_2,m_2|(J_-)^2 |j_1+j_2,j_1+j_2\rangle = \langle j_1,m_1;j_2,m_2|(J_-)^2 |j_1,j_1;j_2,j_2\rangle$$
In the end, it looks like you should get a piecewise answer.

3. Feb 11, 2017

### Kara386

I don't know how to do the projecting thing. What I ended up with is:
$|j_s,j_s-2\rangle = \sqrt{\frac{2j_1(2j_1-1)}{j_s(4j_s-2)}} |j_1,j_1-2;j_2,j_2\rangle + \sqrt{\frac{2j_1j_2}{j_s(4j_s-2)}}|j_1,j_1-1;j_2,j_2-1\rangle + \sqrt{\frac{2j_2(2j_2-1)}{j_s(4j_s-2)}} |j_1,j_1;j_2,j_2-2\rangle + \sqrt{\frac{2j_1j_2}{j_s(4j_s-2)}}|j_1,j_1-1;j_2,j_2-1\rangle$
$=\sqrt{\frac{2j_1(2j_1-1)}{j_s(4j_s-2)}} |j_1,j_1-2;j_2,j_2\rangle + 2\sqrt{\frac{2j_1j_2}{j_s(4j_s-2)}}|j_1,j_1-1;j_2,j_2-1\rangle + \sqrt{\frac{2j_2(2j_2-1)}{j_s(4j_s-2)}} |j_1,j_1;j_2,j_2-2\rangle$
So when I do the inner product, what's the rule there? On the LHS it's the inner product of something with itself, so do I get the square of the LHS? As for the RHS, I know you can move the constants out from the middle so you get inner products there too, and in this basis all these vectors are orthogonal I think - except if they were, the RHS would just be zero.

4. Feb 11, 2017

### blue_leaf77

No, you are not projecting something on itself. You should project $|j_s,j_s-2\rangle$ onto $\langle j_1,m_1;j_2,m_2|$. For example, for the first term on the RHS you get
$$\langle j_1,m_1;j_2,m_2|j_1,j_1-2;j_2,j_2\rangle$$
What's the condition on $m_1$ and $m_2$ so that the above inner product does not vanish?

5. Feb 11, 2017

### Kara386

Oh, the inner product vanishes except for $m=m_1+m_2$. Thanks! :)

6. Feb 11, 2017

### Kara386

Ah, except which one is m? If $m = j_s -2$ then none of them vanish.

7. Feb 11, 2017

### blue_leaf77

I don't understand what you mean.

8. Feb 11, 2017

### Kara386

For an inner product to exist it has to have $m_1+m_2=m$.
I can identify $m_1$ and $m_2$ in the kets, but is $m=j_s-2$? From the ket on the RHS? Or actually in this case $m_1,m_2$ are in the bra, but I still don't know which value is m. Do you get it from the RHS or from the ket which is part of the inner product? 9. Feb 11, 2017 ### vela Staff Emeritus It seems a bit strange to claim that the inner product doesn't vanish only if $m_1+m_2=m$ when you have no idea what $m$ stands for. Anyway, you should rethink what the condition is for an inner product not to be 0. 10. Feb 11, 2017
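The lowering-operator construction used in this thread is easy to check numerically. The sketch below (illustrative helper names; plain Python, assuming integer or half-integer $j_1, j_2$) builds the stretched state $|j_s,j_s\rangle = |j_1,j_1;j_2,j_2\rangle$, applies $J_- = J_{1-} + J_{2-}$ twice, and normalizes:

```python
import math

def lower(state, j1, j2):
    """Apply J- = J1- + J2- to a state stored as {(m1, m2): amplitude},
    using J-|j,m> = sqrt((j+m)(j-m+1)) |j,m-1>."""
    out = {}
    for (m1, m2), amp in state.items():
        if m1 > -j1:  # J1- lowers m1
            key = (m1 - 1, m2)
            out[key] = out.get(key, 0.0) + amp * math.sqrt((j1 + m1) * (j1 - m1 + 1))
        if m2 > -j2:  # J2- lowers m2
            key = (m1, m2 - 1)
            out[key] = out.get(key, 0.0) + amp * math.sqrt((j2 + m2) * (j2 - m2 + 1))
    return out

def cg_top_minus2(j1, j2):
    """Coefficients <j_s, j_s-2 | j1,m1; j2,m2> for j_s = j1 + j2,
    obtained by lowering |j_s,j_s> = |j1,j1;j2,j2> twice and normalizing."""
    state = lower(lower({(j1, j2): 1.0}, j1, j2), j1, j2)
    norm = math.sqrt(sum(a * a for a in state.values()))
    return {k: a / norm for k, a in state.items()}
```

For $j_1 = j_2 = 1$ this gives $\langle 2,0|1,\pm 1;1,\mp 1\rangle = 1/\sqrt{6}$ and $\langle 2,0|1,0;1,0\rangle = \sqrt{2/3}$, in agreement with the closed-form coefficients derived in post 3.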
2018-07-17 14:25:05
https://pdglive.lbl.gov/DataBlock.action?node=M013W1&home=sumtabM
#### $f_2'(1525)$ WIDTH (MeV), PRODUCED BY PION BEAM

$\bf{ 86.9 {}^{+2.3}_{-2.1}}$ **OUR AVERAGE**. Error includes scale factor of 1.4. See the ideogram below.

We do not use the following data for averages, fits, limits, etc.:

| VALUE (MeV) | YEAR | TECN | COMMENT |
| --- | --- | --- | --- |
| $102 \pm 42$ | 2003 | SPEC | 40.0 $\pi^{-}\,\mathrm{C} \rightarrow K_S^0 K_S^0 K_L^0\, X$ |
| $108 {}^{+5}_{-2}$ [1] | 1986 | MPS | 22 $\pi^{-} p \rightarrow K_S^0 K_S^0 n$ |
| $69 {}^{+22}_{-16}$ [2] | 1981 | ASPK | 6 $\pi^{-} p \rightarrow K^{+} K^{-} n$ |
| $137 {}^{+23}_{-21}$ | 1981 | ASPK | 18.4 $\pi^{-} p \rightarrow K^{+} K^{-} n$ |
| $150 {}^{+83}_{-50}$ | 1980 | ASPK | 17 $\pi^{-} p$ polarized $\rightarrow K^{+} K^{-} n$ |
| $165 \pm 42$ [3] | 1979 | OMEG | 12-15 $\pi^{-} p \rightarrow \pi^{+} \pi^{-} n$ |
| $92 {}^{+39}_{-22}$ [4] | 1979 | STRC | 7 $\pi^{-} p \rightarrow n K_S^0 K_S^0$ |

[1] From a partial-wave analysis of data using a K-matrix formalism with 5 poles.
[2] CHABAUD 1981 is a reanalysis of PAWLICKI 1977 data.
[3] From an amplitude analysis where the $f_2'(1525)$ width and elasticity are in complete disagreement with the values obtained from the $K \overline{K}$ channel, making the solution dubious.
[4] From a fit to the $D$-wave with $f_2(1270)$–$f_2'(1525)$ interference. Mass fixed at 1516 MeV.

References:
- TIKHOMIROV 2003, PAN 66 828: "Resonances in the $K_S^0 K_S^0 K_L^0$ System Produced in Collisions of Negative Pions with a Carbon Target at a Momentum of 40 GeV"
- LONGACRE 1986, PL B177 223: "A Measurement of $\pi^{-} p \rightarrow K_S^0 K_S^0 n$ at 22 GeV/$c$ and a Systematic Study of the $2^{++}$ Meson Spectrum"
- CHABAUD 1981, APP B12 575: "A Study of the D-wave in the $K^{+} K^{-}$ System of the Reaction $\pi^{-} p \rightarrow K^{+} K^{-} n$ at 18 GeV"
- GORLICH 1980, NP B174 16: "A Model Independent Partial Wave Analysis of the Reaction $\pi^{-} p \rightarrow K^{+} K^{-} n$ at $\sim$ 18 GeV/$c$"
- CORDEN 1979, NP B157 250: "An Amplitude Analysis of $\pi \pi$ Scattering from New Data on the Reaction $\pi^{-} p \rightarrow \pi^{+} \pi^{-} n$"
- POLYCHRONAKOS 1979, PR D19 1317: "Study of the Reaction $\pi^{-} p \rightarrow n K_S^0 K_S^0$ at 6.0 and 7.0 GeV/$c$"
2022-08-20 05:25:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7993661761283875, "perplexity": 1760.9322397986155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00375.warc.gz"}
http://mathhelpforum.com/calculus/71292-lagrange-multipliers.html
1. ## Lagrange Multipliers

The base of an aquarium in the shape of a rectangular box with a given volume V is made of slate and the four sides are made of glass. If slate costs five times as much as glass per unit area, find the dimensions of the aquarium that minimize the cost of the materials.

2. Originally Posted by CandyKanro: The base of an aquarium in the shape of a rectangular box with a given volume V is made of slate and the four sides are made of glass. If slate costs five times as much as glass per unit area, find the dimensions of the aquarium that minimize the cost of the materials.

Let the dimensions of the box be $x,\ y,\ z$; then the cost is proportional to:

$f(x,y,z)=5 xy + 2xz + 2yz$

and we have the constraint $g(x,y,z)=xyz$ with $g(x,y,z)=V$. Then our Lagrangian is:

$\Lambda(x,y,z,\lambda)=f(x,y,z)-\lambda(g(x,y,z)-V)$

Now you need to find the stationary points of $\Lambda$ to find the candidates for the minimizing solution.
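Not part of the thread: carrying the stationary-point calculation through gives $x=y=(2V/5)^{1/3}$ and $z=\frac{5}{2}x$. The Python check below (my own sketch, not from the forum) eliminates $z$ via the volume constraint and confirms by brute force that these dimensions beat a coarse grid of alternative base dimensions:

```python
import itertools

def cost(x, y, V):
    """Cost in units of the glass price: slate base (5 per area) plus four glass sides."""
    z = V / (x * y)                     # height fixed by the volume constraint xyz = V
    return 5 * x * y + 2 * x * z + 2 * y * z

V = 1.0
x_opt = (2 * V / 5) ** (1 / 3)          # stationary point: x = y = (2V/5)^(1/3), z = 5x/2
best = cost(x_opt, x_opt, V)

# coarse brute-force comparison over base dimensions
grid = [0.2 + 0.05 * i for i in range(60)]
assert all(best <= cost(x, y, V) + 1e-9 for x, y in itertools.product(grid, grid))
print(best, x_opt, 5 * x_opt / 2)
```

For V = 1 this gives a base of roughly 0.74 × 0.74 and a height 2.5 times the base side, which matches the intuition that the expensive slate base should be kept small.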
2016-12-11 10:46:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7355082035064697, "perplexity": 452.1882935198144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698544672.33/warc/CC-MAIN-20161202170904-00061-ip-10-31-129-80.ec2.internal.warc.gz"}
https://onlineessay.net/2020/09/13/fraction-problem-solving-with-solution_ek/
# Fraction problem solving with solution

Analysis: lesson 2.5: before getting started, recall the following formulas. The student will be able to solve practical problems involving addition and subtraction with fractions and mixed numbers. For some positive integer , the repeating base-representation of the (base-ten) fraction is . What is ?

Strategy: below you can find the full step-by-step solution to your problem. More examples and solutions using the bar modeling method to solve fraction word problems are shown in the videos. Multiply the fractions $$\frac{5}{6}$$, $$2$$ and $$\frac{6}{5}$$. If you under- and overestimated, check whether the answer is in the correct range. These problems were designed for students in grades 7-8.
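The one fully specified computation in the passage, the product $$\frac{5}{6} \cdot 2 \cdot \frac{6}{5}$$, can be checked exactly with Python's `fractions` module (this snippet is mine, not part of the original page):

```python
from fractions import Fraction

# 5/6 * 2 * 6/5: the 6s and the 5s cancel, leaving 2
result = Fraction(5, 6) * 2 * Fraction(6, 5)
print(result)  # -> 2
```

Working in `Fraction` rather than floats keeps the arithmetic exact, so cancellation like this is visible in the result.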
2021-01-24 03:58:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19741429388523102, "perplexity": 1541.9198552508815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703544403.51/warc/CC-MAIN-20210124013637-20210124043637-00244.warc.gz"}
http://mathhelpforum.com/advanced-statistics/60154-probability-next-entry-series-discrete-trials.html
# Probability of the next entry in a series of discrete trials

1. ## Probability of the next entry in a series of discrete trials

Hi, all. Suppose you have a series of events, all the same, each of which can result in X or ~X. The probability of each is unknown. As you watch the series go forward, every entry (to start) is X. So:

$S = \{X, X, X, X, X... \}$

At any point, however, it is possible that the pattern will cease and that we will get a ~X. But let's say it hasn't happened yet. Let's also say that the number of X's we have observed so far is $n$, so that the probability of this occurring is:

$P(S_n)=[P(X)]^n$

So, how do we determine P(X)? We could determine a confidence level, I think...

$[P(X)]^n>.05$

$P(X)>.05^{\frac{1}{n}}$

So, there's a 95% chance that $P(X)\in(.05^\frac{1}{n},1]$, yes? But how do we actually find P(X)? Thanks!

2. We could use the confidence interval to find a minimum probability. Let's say that the error is given as $E$. Then $E^{\frac{1}{n}}$ is the lower bound of $P(X)$ at the $1-E$ confidence level. Let's call Q(X) the probable expectation that the next entry in the sequence is X. So, $Q(X)\geq(1-E)E^{\frac{1}{n}}$. We can take the derivative with respect to $E$ to find the value of $E$ which maximizes this lower bound:

$\frac{d}{dE}(1-E)E^{\frac{1}{n}}=0$

$\frac{d}{dE}\left(E^{\frac{1}{n}}-E^{\frac{1}{n}+1}\right)=0$

$\frac{1}{n}E^{\frac{1}{n}-1}-\left[\frac{1}{n}+1\right]E^{\frac{1}{n}}=0$

$\frac{1}{En}E^{\frac{1}{n}}-\left[\frac{1}{n}+1\right]E^{\frac{1}{n}}=0$

$\left[\frac{1}{En}-\frac{1}{n}-1\right]E^{\frac{1}{n}}=0$

$\frac{1}{En}-\frac{1}{n}-1=0$

$\frac{1}{En}=\frac{n+1}{n}$

$En=\frac{n}{n+1}$

$E=\frac{1}{n+1}$

Now, recall our $Q(X)$ relationship, $Q(X)\geq(1-E)E^{\frac{1}{n}}$, and substitute for E:

$Q(X)\geq\left(1-\frac{1}{n+1}\right)\left[\frac{1}{n+1}\right]^{\frac{1}{n}}$

$Q(X)\geq\frac{n}{(n+1)^{\frac{1}{n}}(n+1)}$

$Q(X)\geq\frac{n}{(n+1)^{\frac{n+1}{n}}}$

It seems like we should be able to do better than this, though.
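Not from the thread: the final bound is easy to evaluate numerically, which also shows it behaves sensibly, rising toward 1 as the run of X's grows. The helper below is my own sketch of the poster's formula:

```python
def q_lower_bound(n):
    """Lower bound on Q(X) after n consecutive X's, at the maximizing
    error level E = 1/(n+1) derived above; equals n / (n+1)^((n+1)/n)."""
    E = 1.0 / (n + 1)
    return (1 - E) * E ** (1.0 / n)
```

For example, after n = 10 consecutive X's the bound is about 0.715, and it increases monotonically toward 1 with n.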
2014-08-20 14:53:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8815301060676575, "perplexity": 252.02934505778637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500809686.31/warc/CC-MAIN-20140820021329-00182-ip-10-180-136-8.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/548280/nicematrix-set-name-by-pgfkey-expanding-problem
# nicematrix set name by pgfkey - expanding problem?

Once again I need help with my gaussenv macro. This time, the names of the pictures drawn by nicematrix have to be different. Therefore, I thought I'd create a numeric pgfkey which holds some kind of index of the current matrix. This is what I came up with:

```latex
\documentclass{article}
\usepackage{nicematrix}
\usepackage{tikz}
\usetikzlibrary{calc}

\pgfkeys{
  /tikz/gaussenv/.cd,
  niceMatrixName/.initial=1,
}

\begin{document}
\pgfkeysvalueof{/tikz/gaussenv/niceMatrixName}matrix
\begin{align*}
  \begin{pNiceMatrix}[name={\pgfkeysvalueof{/tikz/gaussenv/niceMatrixName} matrix}]
    1 & 1 & 1 & 1 \\
    2 & 2 & 2 & 2 \\
    3 & 3 & 3 & 3 \\
  \end{pNiceMatrix}
\end{align*}
\begin{tikzpicture}[remember picture,overlay]
  \draw (1 matrix-1-1) -- (1 matrix-2-2);
\end{tikzpicture}
\end{document}
```

But sadly, when compiling, TikZ does not know the nodes `1 matrix-1-1` or `1 matrix-2-2`; yet if I use `\begin{pNiceMatrix}[name={1 matrix}]` as the environment declaration, it works. I guess it has something to do with expansion, but I don't know how to solve this, or even whether expansion is really the cause. Can someone help me?

• Your MWE is not complete. You have not loaded nicematrix nor tikz. – F. Pantigny Jun 7 '20 at 19:36
• Corrected that one – atticus Jun 7 '20 at 19:37
• As you have suggested, the value of the key name is not expanded, and that's why you can't do that. – F. Pantigny Jun 7 '20 at 19:52
• Do you know whether there is a way to expand it properly (or work around this)? Sadly I cannot use \NiceMatrixLastEnv because of my version of nicematrix. – atticus Jun 7 '20 at 19:55
• I can give a line of code which will define \NiceMatrixLastEnv for your version of nicematrix. I don't want to put it on this site because it uses internals of the package (LaTeX3 says these internals should be used only in the package). Send me an email: fpantigny@wanadoo.fr – F.
Pantigny Jun 7 '20 at 20:00

Instead of giving a name to each environment of nicematrix, you should access the PGF/TikZ nodes created by nicematrix directly by their names. These names use the number of the environment given by \NiceMatrixLastEnv (available in versions ≥ 3.9, 2020-01-10).

```latex
\documentclass{article}
\usepackage{nicematrix}
\usepackage{tikz}

\begin{document}
\begin{align*}
  \begin{pNiceMatrix}
    1 & 1 & 1 & 1 \\
    2 & 2 & 2 & 2 \\
    3 & 3 & 3 & 3 \\
  \end{pNiceMatrix}
\end{align*}
\begin{tikzpicture}[remember picture,overlay,name prefix = nm-\NiceMatrixLastEnv-]
  \draw (1-1) -- (2-2);
\end{tikzpicture}
\end{document}
```

• Well done! A very clever solution :). – M. Al Jumaily Jun 7 '20 at 20:31
2021-05-15 08:13:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6682949662208557, "perplexity": 1340.5351533273997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00567.warc.gz"}
https://gmatclub.com/forum/what-is-the-sum-of-the-digits-of-the-number-288216.html?kudos=1
# What is the sum of the digits of the number $$(2^{2018})(5^{2019})(3^2)$$?

**Math Revolution GMAT practice question** (posted 07 Feb 2019)

What is the sum of the digits of the number $$(2^{2018})(5^{2019})(3^2)$$?

A. 4
B. 5
C. 6
D. 7
E. 9

**Reply (08 Feb 2019):** Keyword: sum of digits (not remainder). We can't just go ahead and calculate; even with a calculator it would be hopeless. So look for structure in the question: the bases 2 and 5 share the exponent 2018, so pair them as $$2 \cdot 5 = 10$$ raised to that power. That produces trailing zeros, and adding up zeros is a cakewalk.

$$(2 \cdot 5)^{2018} \cdot 5 \cdot 3^2 = 10^{2018} \cdot 45$$, which is 45 followed by 2018 zeros. So the sum of digits is 9. Hope it helps!

**Reply (07 Feb 2019):** $$(2^{2018})(5^{2019})(3^2) = (2^{2018} \cdot 5^{2018}) \cdot 5 \cdot 9 = 45\underbrace{00\ldots0}_{2018\text{ zeros}}$$. Sum of the digits is $$4+5=9$$.

**Reply (07 Feb 2019):** $$2^{2018} \cdot 5^{2019} \cdot 3^2$$ can be written as $$10^{2018} \cdot 45$$. The sum will be 9. Answer: E.

**Official solution (Math Revolution, 10 Feb 2019):**

$$(2^{2018})(5^{2019})(3^2) = (2^{2018})(5^{2018})(5^1)(3^2) = (10^{2018})(5)(9) = (45)(10^{2018}) = 450000\ldots0$$

The sum of the digits is $$4 + 5 + 0 + 0 + \ldots + 0 = 9$$. Therefore, the answer is E.

**Reply (11 Feb 2019):** $$(2^{2018})(5^{2019})(3^2)$$ can be written as $$10^{2018} \cdot 45$$; sum of digits = 9. IMO E.
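Outside the thread: with Python's arbitrary-precision integers the claim can also be verified directly rather than by algebra alone (snippet mine):

```python
# 2^2018 * 5^2019 * 3^2 should be 45 followed by 2018 zeros
n = 2**2018 * 5**2019 * 3**2
assert str(n) == "45" + "0" * 2018

digit_sum = sum(int(d) for d in str(n))
print(digit_sum)  # -> 9
```

This brute-force check agrees with the algebraic answer E.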
2019-11-22 13:39:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35964080691337585, "perplexity": 10836.181843813936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671260.30/warc/CC-MAIN-20191122115908-20191122143908-00461.warc.gz"}
https://www.nature.com/articles/s41467-021-21655-w?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ncomms%2Frss%2Fcurrent+%28Nature+Communications+-+current%29&error=cookies_not_supported&code=44edec9e-19c7-4379-926d-29c1b201c5f9
# Threats of global warming to the world’s freshwater fishes

## Abstract

Climate change poses a significant threat to global biodiversity, but freshwater fishes have been largely ignored in climate change assessments. Here, we assess threats of future flow and water temperature extremes to ~11,500 riverine fish species. In a 3.2 °C warmer world (no further emission cuts after current governments’ pledges for 2030), 36% of the species have over half of their present-day geographic range exposed to climatic extremes beyond current levels. Threats are largest in tropical and sub-arid regions and increases in maximum water temperature are more threatening than changes in flow extremes. In comparison, 9% of the species are projected to have more than half of their present-day geographic range threatened in a 2 °C warmer world, which further reduces to 4% of the species if warming is limited to 1.5 °C. Our results highlight the need to intensify (inter)national commitments to limit global warming if freshwater biodiversity is to be safeguarded.

## Introduction

Freshwater habitats are disproportionally biodiverse. While they cover only 0.8% of the Earth’s surface, they host ~15,000 fish species, corresponding to approximately half of the global known fish diversity1,2. Freshwater habitats are also disproportionally threatened by human activities and environmental change, which have resulted in substantial declines in freshwater biodiversity over the past decades1,3.
Amid human pressures on freshwater ecosystems (including water abstraction, diversion, damming, and pollution), anthropogenic climate change is expected to become increasingly important in the future4,5. Rising air temperatures and changing precipitation patterns modify water temperature and flow regimes worldwide, thus affecting two key habitat factors for freshwater species6. Being ectotherms, fish are directly influenced by water temperature, while the hydrologic regime determines the structure and dynamics of the freshwater habitat7,8. In addition, the insular nature of many freshwater habitats may hamper compensatory movements to cooler locations, especially for fully aquatic organisms like fish1. Recent continental and global studies have underscored the high vulnerability of freshwater fish species to climate change8,9,10,11. Yet, potential impacts of climate change on freshwater fishes have not yet been comprehensively assessed, in sharp contrast with the many studies assessing potential climate change impacts on species in terrestrial systems12,13,14,15. Here, we assess future climate threats to 11,425 riverine fish species by quantifying their exposure to flow and water temperature extremes under different global warming scenarios. We focus on extremes rather than hydrothermal niche characteristics in general, because extremes are more decisive for local extinctions and potential geographic range contractions16,17. Following the latest IPCC report18, we include scenarios that limit global mean temperature increases to 1.5 and 2.0 °C. For comparison purposes, we include two additional scenarios: a “current pledges” scenario set at 3.2 °C warming and a “no-policy” scenario (no mitigation) set at 4.5 °C warming (all temperatures relative to pre-industrial)12,19. 
The 3.2 °C warming scenario represents the maximum warming predicted to occur by the end of the century (with 66% probability) if all current greenhouse gas emissions reductions targets (unconditional Nationally Determined Contributions) for 2030 are met and no further cuts are performed. We calculate the present and future weekly flow and water temperature values corresponding with each warming level at a spatial resolution of 5 arcminutes (~10 km) using a global hydrological model coupled to a dynamic water temperature model20,21. We force the hydrological model with meteorological input from five Global Climate Models (GCMs) combined with four Representative Concentration Pathway (RCP) representing future greenhouse gas emissions. To assess the threat imposed by future climate extremes, we first retrieve flow and water temperature extremes corresponding with the current climate. Across the geographic range of each species, we quantify the maximum and minimum weekly water temperature and flow, as well as the number of zero flow weeks (see “Methods”). We then define the magnitude of threat for each fish species in a given global warming scenario as the proportion of the geographic range of the species where projected extremes in water flow and temperature will exceed those defined based on the current climate within its ranges. We do this for two dispersal assumptions: ‘no dispersal’ assuming that each fish species is restricted to its current geographical range, and “maximal dispersal” assuming that each fish species can move beyond its current range within a surrounding region delineated by the intersection of watersheds (hard physical boundaries) and freshwater ecoregions (i.e., regions with similar evolutionary history and other ecological factors relevant to freshwater fishes22). 
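This exposure calculation can be sketched in a few lines. The sketch below is illustrative only, not the authors' code: it uses made-up per-cell temperatures, treats grid cells as equal-area, and covers only the maximum-water-temperature criterion, whereas the study also screens minimum temperature and flow extremes.

```python
# Toy version of the species-level threat metric: the thermal limit is the
# largest present-day weekly maximum anywhere in the species' range, and the
# threat level is the fraction of range cells whose projected future maximum
# exceeds that limit.

def threat_level(present_max, future_max):
    limit = max(present_max)                       # range-wide present-day extreme
    exceeded = sum(f > limit for f in future_max)  # cells pushed beyond the limit
    return exceeded / len(future_max)

present = [24.0, 26.5, 25.1, 27.2]  # hypothetical weekly maxima per grid cell (deg C)
future  = [25.0, 27.0, 28.3, 27.1]  # same cells under a warming scenario
print(threat_level(present, future))  # -> 0.25: one of four cells exceeds 27.2
```

The same pattern, one empirical limit per species and per variable, then a cell count, would repeat for minimum temperature, high flow, low flow, and zero-flow weeks, with a cell counting as threatened if any criterion is exceeded.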
Finally, we use phylogenetic regression relating the species’ threat levels (i.e., the proportion of the range exposed to future climate extremes beyond current levels within the range) for each warming level and dispersal assumption to a suite of relevant species characteristics. We find clear differences in the magnitude of threat between the different warming scenarios. In a 3.2 °C warmer world, 36% of the species have over half of their present-day geographic range exposed to climatic extremes beyond current levels (no dispersal assumption). This number reduces to 9% of the species in a 2 °C warmer world and to 4% of the species if warming is limited to 1.5 °C. We conclude that for protecting freshwater biodiversity, commitments to limit global warming need to be strengthened. ## Results ### Global patterns of exposure to projected climate extremes The scenario without climate-change mitigation policy (+4.5 °C) and without dispersal resulted in at least half of the geographic range threatened by projected climate extremes for 63% (±7%) of the freshwater fish species. Assuming maximal dispersal for the same warming level, the proportion of species with over half of their geographic range threatened decreased to 24% (±13%). The values in brackets represent the standard deviation of the GCM–RCP combinations ensemble for that warming level and dispersal assumption (Supplementary Fig. 1). The proportion of species with more than half of their range threatened was projected to decrease to 8–36% (±3–11%), 1–9% (±1–4%), and 1–4% (±0–2%) for warming levels of 3.2 °C, 2 °C and 1.5 °C, respectively, with the larger values for the no dispersal assumption (Fig. 1). We found hotspots of future climate threat in tropical, sub-arid and Mediterranean regions (Fig. 2). 
At low warming levels, hotspots are restricted to small areas within tropical South America, North-East Mexico, the southern US, southern Europe, the southern Sahara, central Africa (large lakes), the Middle East, India–Pakistan, South-East Asia, and western Australia. At higher warming levels, hotspots are considerably larger, particularly in South America, southern Europe, India–Pakistan, and Australia. At higher latitudes, threats become prominent only at higher warming levels (3.2, 4.5 °C). Overall, threats are largest in tropical watersheds such as the Amazon, Paraná, Tocantins, Niger, Senegal, Zambezi, and Chao Phraya (Fig. 3; see Supplementary Fig. 4 for a more exhaustive overview). Watersheds in non-tropical areas characterized by relatively high threat levels are the Don and the Danube in Europe, and several watersheds in Australia (Fig. 3). Under the maximal dispersal assumption, locations of threatened areas are similar to those under the no dispersal assumption, but with lower threat levels (Figs. 2 and 3).
This reflects that climate change is projected to result in more severe low flow conditions mainly in drought-prone regions, while water temperature rises almost everywhere (Supplementary Fig. 5-V). In addition, changes in low flow conditions might be more relevant for smaller upstream streams not captured within the ~10 km grid-cell resolution of the global hydrological model employed in this study20. In contrast, areas affected by changes in high flow are confined to a few downstream segments of the main stems of large rivers (Supplementary Fig. 3 and raster layers provided as Supplementary Data 2 and 3). Our results further show only limited overlap of threats imposed by amplified flow and water temperature extremes, reflecting the dissimilar spatial distribution of both threats (Fig. 4 and Supplementary Tables 3, 4). ### Relationships between climate change threats and species traits According to our phylogenetic regression models (n = 9,779 species), the magnitude of threat imposed by future climate extremes is mainly related to species’ habitat type and current geographic range size, followed by IUCN threat status and body length (Fig. 5). Threats are much lower for species that live across the freshwater and marine realms (note that our projections concern the freshwater environment only, thus ignoring potential climate threats within the ocean; Supplementary Table 5). In line with this, relatively small threat levels are found for orders mostly comprising diadromous species, such as Mugiliformes (mullets), Osmeriformes (smelts), Syngnathiformes (e.g., pipefish), Tetraodontiformes (e.g., pufferfish) and Pleuronectiformes (flatfish) (Supplementary Fig. 6). Further, species with a smaller geographic range and body size are more likely to be threatened by climate change (negative regression coefficients; Supplementary Table 5). 
We also noticed lower threat levels for species currently belonging to a low IUCN threat category (e.g., “near threatened” or “least concern”; Supplementary Table 5). We found similarly low threat levels for species that are “data deficient” within the IUCN Red List (43% of the 9,779 species analyzed). Future climate threats were only weakly related to climate zone, commercial importance category and trophic group (Fig. 5). The results of the traits analysis were largely consistent between the two dispersal scenarios (Fig. 5). The results were less consistent across the warming levels, whereby the importance of geographic range size dropped considerably at higher warming levels under the no dispersal assumption, while habitat type became more important (Fig. 5). ## Discussion This study represents the first comprehensive assessment of the threat of potential future climate extremes to freshwater fish species, covering both flow and water temperature, the entire globe and about 90% of the known freshwater fish species. We found that in a “current pledges” scenario (3.2 °C warming), over one third of the freshwater fish species is projected to have more than half of their geographic range threatened by future climate extremes beyond current levels. Threats are considerably reduced under the assumption of maximal dispersal, yet this might represent an overly optimistic estimate given current and future barriers to dispersal23. Our results suggest that increases in maximum water temperature constitute a larger threat to freshwater fishes than changes in minimum water temperature or high and low flow conditions. This is because water temperatures vary less within species ranges and are projected to rise almost everywhere, while flow conditions are more spatially variable hence projected future flow is less likely to exceed present-day extremes within the species’ ranges. 
In line with previous studies, we found that climate change will result in reduced flows mainly in drought-prone regions21,24. In addition, depletion of low flows might be most important at low stream orders, which are not well captured by the 5 arcminutes resolution of the hydrological model PCR-GLOBWB employed in this study20. While global estimates of hydrological variables are available at higher spatial resolutions25, 5 arcminutes is the highest resolution currently achievable for future projections of both flow and water temperature21. Hence, the spatial resolution of our analysis might result in an underestimation of the impacts of climate change on species living in smaller upstream reaches. We further note that we did not explicitly consider changes in flow or water temperature seasonality, which might disfavor species whose life histories are adapted to specific flow or temperature regimes (e.g., specific seasonal flow regimes)5,26. However, flow and water temperature seasonality were clearly correlated to the minimum flow and water temperature extremes within the species’ ranges (Pearson’s r of 0.6–0.9), indicating that the effects of changes in the seasonality of flow or temperature were at least partially covered by our predictions. Exposure to climate extremes beyond present-day values does not necessarily imply local extinction. If species’ current distributions are confined by factors other than flow or water temperature (e.g., biogeographic dispersal barriers or anthropogenic pressures), species might be able to withstand larger temperature and flow extremes than inferred based on their current geographic range27,28. The same holds if species are able to adapt to new water temperature and flow conditions16 or if fishes have the possibility to hide from extremes in micro-climatic refugia, for example due to water stratification or small-scale thermal heterogeneity29, which are not included in our hydrological model21. 
On the other hand, species’ range maps are relatively coarse representations of species occurrence, hence some species might be more affected than indicated by our results (i.e., if present-day climate extremes within their geographic range already preclude local occurrence). Indeed, a tentative comparison of our species-specific maximum weekly water temperature limits with critical thermal maxima reported from laboratory tests suggests both under- and overestimations by our geographic range-based thermal limits, while showing an overall reasonable agreement (mean percent difference = 9%; Pearson’s r = 0.62; Supplementary Fig. 7). Further work is required to better understand deviations between our empirical thresholds and the lab-based maxima, which may stem not only from uncertainties in our modeling approach (e.g., in the range maps or the water temperature model) but also from heterogeneity in experimental conditions30. Additionally, we recognize that a given increase in global mean temperature may lead to locally different exposure levels depending on the GCM–RCP combination, as each is characterized by specific distributions of changes in water temperature and flow31. We noticed greater variability across GCMs than across RCPs when looking at species-specific proportions of range affected (Supplementary Fig. 1), similar to previous findings for hydro-climatic variables32. However, variability across the GCMs did not affect the species-specific thresholds, which were consistent across the models (Supplementary Fig. 8). Although our future climate threat assessment is associated with uncertainties, our comparative analysis across the different warming levels and targets clearly showed a sharp increase in potential impacts with increasing global warming.
Species already listed as “endangered” or “critically endangered” on the IUCN Red List of threatened species might be particularly affected by future warming, as these species were characterized by the highest future climate threat levels. Our findings also show that threats imposed by amplified climate extremes are expected to be particularly high in tropical watersheds, in accordance with previous studies suggesting large climate-change induced freshwater habitat degradation in the tropics33,34,35. Tropical species are indeed expected to be highly affected by climate change36, and our results confirm this for freshwater fishes. Many tropical watersheds host low-income food-deficit countries where local communities are highly dependent on fisheries as a primary food source. Indeed, up to 50% of household incomes in countries along the Mekong, Zambezi, and Brazilian Amazon depend on fishing37. Hence, increased exposure of freshwater fish species to climate extremes, potentially resulting in local extinctions17, may have important socio-economic repercussions in these regions38. Our findings indicate that limiting global warming to 2 °C will reduce the proportion of freshwater fish species with more than half of their range threatened by 74–81% (the range refers to the two dispersal scenarios) compared to current pledges of governments (3.2 °C). Restricting the global mean temperature rise to 1.5 °C will lower this proportion by an additional 11–14% (or 53–58% compared to 2 °C). While we acknowledge that the ecological realism of our model projections can be improved, these first comparative estimates highlight the need to intensify (inter)national commitments to limit global warming if potentially severe disruption of freshwater biodiversity is to be prevented.

## Methods

### Species occurrence data

We compiled species’ geographic ranges from a combination of datasets.
We employed the IUCN Red List of Threatened Species database, which provides geographic range polygons for 8,564 freshwater fish species (~56% of freshwater fish species39), compiled from literature and expert knowledge40. We complemented these ranges with data from Barbarossa et al.23, who compiled geographic range polygons for 6,213 freshwater fish species not yet represented in the IUCN dataset, and the Amazonfish dataset41, which provides range maps for 2,406 species occurring in the Amazon basin. We harmonized the species names based on Fishbase (www.fishbase.org)42 and merged the ranges (i.e., union of polygons) from the different datasets to obtain one geographic range per species. We then resampled the range polygons of each species to the 5 arcminutes (~10 km) hydrography of the global hydrological model (see below), with a given species marked as occurring in a cell if ≥ 50% of the cell area overlapped with the species’ polygon. In total, we obtained geographic ranges for 12,934 freshwater fish species, covering ~90% of the known freshwater fish species43. We excluded 1,160 exclusively lentic species because our hydrological model is less adequate for lakes than for rivers, i.e., it does not account for water temperature stratification (see section “Phylogenetic regression on species traits” for an explanation of how habitat information was extracted). Out of the 11,774 (partially or entirely) lotic fish species, we excluded 349 species (~3%) because their occurrence range was smaller than ~1,000 km2 (i.e., ten grid cells), which we considered too small relative to the spatial resolution of the hydrological model (see below). Hence, the analysis was based on 11,425 species in total (Supplementary Figs. 9, 10; a raster layer providing the number of species at each 5 arcminutes grid cell is available as Supplementary Data 6).
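As a minimal illustration of the resampling rule described above (a species is marked as occurring in a cell when ≥ 50% of the cell area overlaps its range polygon, and ranges smaller than ten grid cells are excluded), the following Python sketch operates on a toy grid of precomputed overlap fractions; the function name and numbers are our own assumptions, not the published workflow:

```python
import numpy as np

def rasterize_range(overlap_fraction, min_cells=10):
    """Turn fractional polygon-cell overlaps into a boolean occurrence grid.

    A cell counts as occupied if >= 50% of its area overlaps the species'
    range polygon; returns None if fewer than `min_cells` cells remain
    (the species would be excluded from the analysis).
    """
    occurrence = overlap_fraction >= 0.5
    if occurrence.sum() < min_cells:
        return None  # range too small relative to the 5 arcmin resolution
    return occurrence

# Toy 4 x 5 grid of overlap fractions for one species
frac = np.array([
    [0.0, 0.2, 0.6, 0.9, 1.0],
    [0.1, 0.5, 0.8, 1.0, 1.0],
    [0.0, 0.4, 0.7, 0.9, 0.6],
    [0.0, 0.0, 0.3, 0.5, 0.5],
])
grid = rasterize_range(frac)
n_cells = int(grid.sum())  # 12 occupied cells in this toy example
```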
### Hydrological data

We employed the Global Hydrological Model (GHM) PCR-GLOBWB20 with a full dynamical two-way coupling to the Dynamical Water Temperature model (DynWAT)21 at 5 arcminutes spatial resolution (~10 km at the Equator), to retrieve weekly streamflow and water temperature worldwide20,21. PCR-GLOBWB simulates the vertical water balance between two soil layers and a groundwater layer, with up to four land cover types considered per grid cell. Surface runoff, interflow, and groundwater discharge are routed along the river network using the kinematic wave approximation of the Saint–Venant equations21, including floodplain inundation. Apart from the larger lakes, PCR-GLOBWB includes over 6,000 man-made reservoirs44 as well as the effects of water use for irrigation, livestock, domestic, and industrial sectors. PCR-GLOBWB computes river discharge, river and lake water levels, surface water levels and runoff fluxes (surface runoff, interflow and groundwater discharge). These fluxes are dynamically coupled to DynWAT along with the meteorological forcing, such as air temperature and radiation from the GCMs, to compute water temperature. DynWAT thus includes temperature advection, radiation and sensible heating, but also ice formation and breakup, thermal mixing and stratification in large water bodies, and the effects of water abstraction and reservoir operations. We selected this model combination because it allows a full representation of the hydrological cycle (considering also anthropogenic stressors, e.g., water use), fully integrates water temperature, and calculates the hydrological variables on a high-resolution hydrography. The choice of one hydrological model over an ensemble was motivated by the fact that very few GHMs or Land Surface Models calculate water temperature at the spatial resolution desired for this study20,21.
The PCR-GLOBWB model setup was similar to Wanders et al.21, with the exception that flow and water temperature were aggregated at the weekly scale to capture the fish species’ tolerance levels to extreme events45.

### Species-specific thresholds for extreme flow and water temperature

To assess climate change threats to freshwater fishes, we focused on climate extremes rather than hydrothermal niche characteristics in general, because extremes are more decisive for local extinctions and potential geographic range contractions16,17. We quantified climate extremes using long-term average maximum and minimum water temperature (Tmax, Tmin), maximum and minimum flow (Qmax, Qmin), and the number of zero-flow weeks (Qzf), based on the weekly hydrograph and thermograph of the hydrological model. Water temperature is considered the most important physiological threshold for fish species, as mortality of ectothermic species occurs above and below lethal thresholds8,46. Decreases in minimum flow directly affect riffle-pool systems and connectivity between viable habitat patches, leading to a rapid loss of biodiversity47. We included the number of zero-flow weeks because increases in the frequency of dry spells directly correlate with reductions in diversity and biomass due to the loss of suitable aquatic habitat47. We considered maximum flow because increases in high flow might reduce the abundance of young-of-the-year fish by washing away eggs and displacing juveniles and larvae, preventing them from reaching nursery and shelter habitats47,48. We quantified species-specific thresholds for minimum and maximum weekly flow, maximum number of zero-flow weeks and maximum and minimum weekly water temperature based on the present-day distribution of these characteristics within the geographic range of each species, similarly to previous studies45,49,50.
To this end, we overlaid the species’ range maps with the weekly flow and water temperature metrics from the output of the hydrological model, calculated for each year and averaged over a 30-year historical period to conform to the standard for climate analyses51,52 (1976–2005, for each GCM employed in the study). We calculated for each 5 arcminutes grid cell the long-term average minimum and maximum weekly flow (Qmin, Qmax, Eqs. (1) and (2)), the long-term average frequency of zero-flow weeks (Qzf, Eq. (3)) and the long-term average minimum and maximum weekly temperature (Tmin, Tmax, Eqs. (4) and (5)), as follows:

$$Q_{\mathrm{min}} = \frac{\sum_{i = 1}^{N} \mathrm{min}(\mathrm{Q}7_i)}{N}$$ (1)

$$Q_{\mathrm{max}} = \frac{\sum_{i = 1}^{N} \mathrm{max}(\mathrm{Q}7_i)}{N}$$ (2)

$$Q_{zf} = \frac{\sum_{i = 1}^{N} \#\left\{ j \in \{1, \ldots, M\} : \mathrm{q}7_j = 0 \right\}_i}{N}$$ (3)

$$T_{\mathrm{min}} = \frac{\sum_{i = 1}^{N} \mathrm{min}(\mathrm{T}7_i)}{N}$$ (4)

$$T_{\mathrm{max}} = \frac{\sum_{i = 1}^{N} \mathrm{max}(\mathrm{T}7_i)}{N}$$ (5)

where Q7 and T7 are the vectors of weekly streamflow and water temperature values for a given year i, respectively; q7j is the streamflow value for week j of year i; N is the number of years considered (30 in this case) and M is the number of weeks in a year (~52). We then used the spatial distributions of these values within the range of each species to determine species-specific ‘thresholds’ for each of the variables, defined as the 2.5th percentile of the minimum flow and minimum temperature values and the 97.5th percentile of the maximum water temperature and zero-flow weeks values. We preferred these over the absolute minimum and maximum values to reduce the influence of uncertainties and outliers in the threshold definition.
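A simplified Python rendering of Eqs. (1)–(5) and of the percentile-based thresholds (the array shapes, function names and toy numbers are our own assumptions; the Qmax rule uses the range-wide maximum, as explained next in the text):

```python
import numpy as np

def longterm_extremes(q_weekly, t_weekly):
    """Eqs. (1)-(5) for one grid cell.

    q_weekly, t_weekly: arrays of shape (N_years, M_weeks) holding the
    weekly streamflow and water temperature of the historical period.
    """
    q_min = q_weekly.min(axis=1).mean()        # Eq. (1)
    q_max = q_weekly.max(axis=1).mean()        # Eq. (2)
    q_zf = (q_weekly == 0).sum(axis=1).mean()  # Eq. (3)
    t_min = t_weekly.min(axis=1).mean()        # Eq. (4)
    t_max = t_weekly.max(axis=1).mean()        # Eq. (5)
    return q_min, q_max, q_zf, t_min, t_max

def species_thresholds(m):
    """Species-specific thresholds from the per-cell metrics of its range:
    2.5th percentile for the minima, 97.5th for Tmax and zero-flow weeks,
    and the absolute range-wide maximum for Qmax."""
    return {
        "q_min": np.percentile(m["q_min"], 2.5),
        "t_min": np.percentile(m["t_min"], 2.5),
        "t_max": np.percentile(m["t_max"], 97.5),
        "q_zf": np.percentile(m["q_zf"], 97.5),
        "q_max": m["q_max"].max(),
    }

# Two toy years of four 'weeks' each, for a single cell
q = np.array([[0.0, 1.0, 2.0, 3.0], [1.0, 0.0, 2.0, 4.0]])
t = np.array([[5.0, 10.0, 15.0, 20.0], [6.0, 11.0, 16.0, 21.0]])
q_min, q_max, q_zf, t_min, t_max = longterm_extremes(q, t)

# Toy thresholds from three range cells
thr = species_thresholds({
    "q_min": np.array([0.0, 1.0, 2.0]),
    "t_min": np.array([4.0, 5.0, 6.0]),
    "t_max": np.array([20.0, 21.0, 22.0]),
    "q_zf": np.array([0.0, 1.0, 2.0]),
    "q_max": np.array([3.0, 9.0, 4.0]),
})
```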
Only for maximum flow did we use the maximum value across the range, because of the highly right-skewed distribution of flow values within the range of the species. An overview of the thresholds’ distribution is available in Supplementary Fig. 8.

### Climate forcing and warming targets

We considered four main future scenarios based on increases of global mean air temperature equal to 1.5, 2.0, 3.2, and 4.5 °C. The global mean temperature increase refers to a 30-year average, in accordance with guidelines for climate analyses51, and with the pre-industrial reference period set at 1850–1900 (ref. 31). To obtain estimates of weekly water temperature and flow for each warming level, we forced the hydrological model with the output from an ensemble of five Global Climate Models (GCMs), each run for four Representative Concentration Pathway (RCP) scenarios, namely RCP 2.6, 4.5, 6.0, and 8.5 (see “Supplementary Methods” for details). Hence, each RCP–GCM combination reaches each warming level at a different point in time, with some of the RCP–GCM combinations not reaching certain warming levels. Consequently, the number of scenarios available differed among warming levels (an overview is provided in Supplementary Table 1). In total we modeled 42 scenarios (one scenario = one GCM–RCP combination at a certain point in the future), including 17 scenarios for 1.5 °C, 15 for 2.0 °C, 7 for 3.2 °C and 3 for 4.5 °C.

### Projecting species-specific future climate threats

For each species and each of the 42 scenarios as described in the previous section, we quantified the proportion of the range where projected extremes exceed the present-day values within the species’ range for at least one of the variables.
Thus, for each species x we quantified the percentage of geographic range threatened (RT [%]) at each GCM–RCP scenario combination c and for a variable (or group of variables) v as

$${\mathrm{RT}}_{x,c,v} = \frac{{\mathrm{AT}}_{x,c,v}}{A_x} \cdot 100$$ (6)

where AT is the portion of area threatened [km2] and A is the current geographic range size [km2]. That is, we assessed for all grid cells within the species’ range whether a projected minimum or maximum weekly flow would fall below the minimum or above the maximum flow threshold, whether there would be a higher number of zero-flow weeks than the threshold would allow, or whether the minimum or maximum weekly water temperature would be lower than the minimum or higher than the maximum water temperature threshold. The variable-by-variable evaluation allowed us to identify which (groups of) variable(s) contributed to the threat. For simplicity, we grouped the number of zero-flow weeks and the minimum and maximum weekly flow variables to assess the threat imposed by altered flow regimes. Similarly, we grouped threats imposed by amplified minimum and maximum weekly water temperature to assess temperature-related threats. In the aggregated results, a grid cell is thus flagged as threatened if any of the underlying thresholds is exceeded.

### Accounting for dispersal

In general, organisms may adapt to climate change (or escape from future extremes) by moving to more suitable locations53. Accounting for this possibility is challenging due to the uncertainties and data gaps associated with current and future barriers in freshwater systems (e.g., dams, weirs, culverts, sluices)54. In addition, data needed to reliably estimate dispersal ability are still lacking for the majority of the species55. We therefore employed two relatively simple dispersal assumptions in our calculations.
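The per-cell evaluation behind Eq. (6) above can be sketched as follows (illustrative Python; the function and variable names are our own, and the five thresholds correspond to the metrics of Eqs. (1)–(5)):

```python
import numpy as np

def range_threatened_pct(future, thr):
    """Eq. (6): percentage of a species' range cells where at least one
    projected variable goes beyond its present-day threshold.

    `future` maps each variable to a 1-D array over the range cells;
    `thr` holds the species-specific scalar thresholds.
    """
    exceed = (
        (future["q_min"] < thr["q_min"])    # low flow drops below tolerated minimum
        | (future["q_max"] > thr["q_max"])  # high flow exceeds tolerated maximum
        | (future["q_zf"] > thr["q_zf"])    # more zero-flow weeks than tolerated
        | (future["t_min"] < thr["t_min"])  # colder than tolerated
        | (future["t_max"] > thr["t_max"])  # warmer than tolerated
    )
    return 100.0 * exceed.sum() / exceed.size

# Toy example: four range cells, three of which are threatened
thr = {"q_min": 1.0, "q_max": 10.0, "q_zf": 2.0, "t_min": 5.0, "t_max": 25.0}
future = {
    "q_min": np.array([0.5, 1.5, 2.0, 2.0]),      # cell 0 exceeds
    "q_max": np.array([5.0, 5.0, 5.0, 5.0]),
    "q_zf":  np.array([0.0, 0.0, 3.0, 0.0]),      # cell 2 exceeds
    "t_min": np.array([6.0, 6.0, 6.0, 6.0]),
    "t_max": np.array([20.0, 30.0, 20.0, 20.0]),  # cell 1 exceeds
}
rt = range_threatened_pct(future, thr)  # 75.0
```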
Under the “no dispersal” assumption, fishes are restricted to their current geographic range, whereas under the “maximal dispersal” assumption, fishes are assumed to be able to reach any cell within the sub-basin units encompassing their current geographic range. We defined the sub-basin units by intersecting the physical boundaries of main basins (defined as having an outlet to the sea/internal sink) with the boundaries defined by the freshwater ecoregions of the world, which provide intra-basin divisions based on evolutionary history and additional ecological factors relevant to freshwater fishes22 (Supplementary Fig. 11). Basins smaller than 1,000 km2 were combined with adjacent larger units. In total, we delineated 6,525 sub-basin units (area: µ = 20,376 km2, σ = 90,717 km2) from 10,884 main hydrologic basins and 449 freshwater ecoregions. To model future climate threats under the maximal dispersal assumption, we first expanded the geographic range for the current situation, allowing the species to occupy grid cells within the encompassing sub-basin boundaries if suitable according to the species-specific thresholds. Then we assessed future climate threats for the 42 different scenarios relative to the present-day range plus all cells potentially available to the species within the encompassing sub-basins (excluding cells that would become threatened in the future), as

$${\mathrm{RT}}_{x,c,v} = \frac{{\mathrm{AT}}_{x,c,v}}{A_x + ({\mathrm{AE}}_x - {\mathrm{AET}}_{x,c,v})} \cdot 100$$ (7)

where AE is the expanded part of the geographic range [km2] and AET is the area threatened within the expanded part of the geographic range [km2].

### Aggregation of results

To summarize our results, we first assessed the proportion of species having more than half of their (expanded) geographic range threatened (i.e., exposed to climate extremes beyond current levels within their range) at each warming level.
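Eq. (7) above reduces to simple arithmetic on areas; a sketch under the naming used in the text (AT, A, AE, AET):

```python
def range_threatened_pct_dispersal(at, a, ae, aet):
    """Eq. (7): under maximal dispersal the denominator grows by the
    suitable area gained within the encompassing sub-basins (AE), minus
    the part of that expansion that itself becomes threatened (AET)."""
    return 100.0 * at / (a + (ae - aet))

# Toy example: 50 km2 threatened out of a 100 km2 range that could expand
# by another 100 km2, half of which becomes threatened in the future too
rt = range_threatened_pct_dispersal(at=50.0, a=100.0, ae=100.0, aet=50.0)
```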
We did this for each GCM–RCP scenario combination and then calculated the mean and standard deviation across the GCM–RCP combinations at each warming level. We further calculated the proportion of species threatened by future climate extremes in each 5 arcminutes (~10 km at the Equator) grid cell for each warming level, as follows:

$${\mathrm{PAF}}_{i,w} = \mathop{\mathrm{median}}\limits_{c}\left( 1 - \frac{S_{i,w,c}}{S_{i,\mathrm{present}}} \right)$$ (8)

where PAF represents the potentially affected fraction of species in grid cell i for warming level w, c represents the scenario (i.e., GCM–RCP combination), Si,w,c represents the number of species for which extremes in water temperature and flow in grid cell i under scenario c and warming level w do not exceed present-day levels within their range, and Si,present represents the number of species in grid cell i. For both numerator and denominator, the species pool for cell i was determined based on the overlap with the (expanded) geographic range maps (see “Species occurrence data” and “Accounting for dispersal”). We used the median across the GCM–RCP combinations rather than the mean because the data showed skewed distributions. Finally, we averaged the grid-specific proportions of species affected over main basins with an outlet to the ocean/sea or internal sink (e.g., lake), as follows:

$$\overline{{\mathrm{PAF}}}_{x,w} = \frac{\sum_{i = 1}^{I_x} {\mathrm{PAF}}_{i,w}}{I_x}$$ (9)

where Ix represents the number of grid cells within the watershed x.

### Phylogenetic regression on species traits

We performed phylogenetic regression to relate the threat level of each species, quantified as the proportion of the geographic range exposed to future climate extremes beyond current levels within the range (see Eqs. (6) and (7)), to a number of potentially relevant species characteristics, while accounting for the non-independence of observations due to phylogenetic relatedness among species56.
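Eqs. (8) and (9) above can be sketched as follows (illustrative Python; the toy species counts are our own):

```python
import numpy as np

def paf_per_cell(s_unaffected_per_scenario, s_present):
    """Eq. (8): potentially affected fraction of species in one grid cell,
    taken as the median over the GCM-RCP scenario combinations c of
    1 - S / S_present."""
    s = np.asarray(s_unaffected_per_scenario, dtype=float)
    return float(np.median(1.0 - s / s_present))

def paf_basin_mean(paf_cells):
    """Eq. (9): average the cell-level PAF over the I_x cells of basin x."""
    return float(np.mean(paf_cells))

# Toy cell with 10 species; 8, 6 and 4 remain unaffected in three scenarios
paf = paf_per_cell([8, 6, 4], s_present=10)  # median of 0.2, 0.4, 0.6
basin = paf_basin_mean([0.2, 0.4])
```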
We established a phylogenetic regression model per warming level and dispersal scenario (i.e., eight models in total, based on four warming levels times two dispersal assumptions). As species characteristics, we included initial range size (in km2), body length (in cm), climate zone, trophic group and habitat type, as these traits may influence species’ responses to (anthropogenic) environmental change8,30,57,58. We further included IUCN Red List category to evaluate the extent to which current threat status is indicative of potential impacts of future climate change, and commercial importance to evaluate implications of potential extirpations for fisheries. We overlaid each species’ geographic range with the historic Köppen–Geiger climate categories to obtain the main climate zone per species (i.e., capital letter of the climate classification)59. Species falling into multiple climate categories were assigned the climate zone with the largest overlap. We retrieved information on threat status from IUCN40 and on taxonomy from Fishbase42. We used the IUCN and Fishbase data also to gather a list of potential habitats for each species. For the species represented within the IUCN dataset, we classified species as lotic if they were associated with habitats containing at least one of the words “river”, “stream”, “creek”, “canal”, “channel”, “delta”, “estuaries”, and as lentic if the habitat descriptions contained at least one of the words “lake”, “pool”, “bog”, “swamp”, “pond”. For the remaining species, we extracted information on habitat from Fishbase, where we used the highest level of aggregation of habitat types to classify species found in lakes as lentic and species found in rivers as lotic. We classified species occurring in both streams and lakes as lotic-lentic and labeled species found in both freshwater and marine environments as lotic-marine. Further, we retrieved data from Fishbase on maximum body length and commercial importance42.
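The keyword rules above can be sketched as a small classifier (illustrative Python; the real workflow parsed the IUCN and Fishbase habitat fields, and the function and its inputs here are our own):

```python
LOTIC_WORDS = {"river", "stream", "creek", "canal", "channel", "delta", "estuaries"}
LENTIC_WORDS = {"lake", "pool", "bog", "swamp", "pond"}

def classify_habitat(descriptions, marine=False):
    """Label a species lotic / lentic / lotic-lentic from its habitat
    descriptions; species also found in marine environments are labeled
    lotic-marine, following the text."""
    text = " ".join(descriptions).lower()
    lotic = any(w in text for w in LOTIC_WORDS)
    lentic = any(w in text for w in LENTIC_WORDS)
    if marine:
        return "lotic-marine"
    if lotic and lentic:
        return "lotic-lentic"
    if lentic:
        return "lentic"
    if lotic:
        return "lotic"
    return "unclassified"
```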
From the same database we also retrieved trophic level values and aggregated them into Carnivore (trophic level >2.79), Omnivore (2.19 < trophic level ≤ 2.79) and Herbivore (trophic level ≤ 2.19)42. We performed a synonym check for the binomial nomenclature provided by the IUCN database to maximize the overlap with the Fishbase database. Since information on phylogeny was available only for a subset of 4,930 fish species covered in our study (based on Betancur-R et al.60), we allocated the remaining species to the phylogenetic tree using an imputation procedure implemented in the R package “fishtree”61. The empirical tree covered 97% of the families and 80% of the genera included in our species set, indicating that the majority of the missing species were allocated to the correct genus. Our final dataset for the regression included 9,779 species (695 species were excluded because covariates were not available and 951 because they were not included in the “fishtree” database). To account for the uncertainty in the phylogenetic tree imputation, we repeated each of the eight phylogenetic regression models based on 100 replicates of the phylogenetic tree61. Prior to running the regressions we log-transformed threat level (response variable), geographic range size and body length as these variables were right-skewed. As Spearman’s rank correlations among the covariates were below 0.4 and variance inflation factors below 1.5 (Supplementary Fig. 12 and Supplementary Table 6), we kept the full set of covariates. We ran the phylogenetic regression using the R package “nlme”62,63 and checked the residuals of the models using QQ plots (Supplementary Fig. 13). We then extracted coefficients, |t|-statistics, p values as well as the lambda parameter at each warming level (averaged over the 100 replicates; Supplementary Table 5). Then, we quantified variable importance using a procedure based on the random forest approach64, as implemented in the R package “biomod2”65. 
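The permutation-based variable-importance score just mentioned (one minus Pearson’s r between the predictions of the original model and those obtained after randomizing one covariate at a time) can be sketched as follows; the Python function below is an illustration, not the “biomod2” package code:

```python
import numpy as np

def variable_importance(predict, X, n_iter=20, seed=0):
    """Permutation importance: shuffle one covariate at a time and score
    1 - Pearson's r between the original predictions and the predictions
    on the permuted data, averaged over `n_iter` shuffles.

    `predict` is any fitted model's prediction function; X is an
    (n_samples, n_covariates) array.
    """
    rng = np.random.default_rng(seed)
    base = predict(X)
    importance = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_iter):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # randomize covariate j only
            r = np.corrcoef(base, predict(Xp))[0, 1]
            scores.append(1.0 - r)
        importance[j] = float(np.mean(scores))
    return importance

# Toy model that depends only on the first of two covariates
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
imp = variable_importance(lambda A: A[:, 0], X)
```

A covariate that the model ignores leaves the predictions unchanged when shuffled (r = 1, importance ≈ 0), while shuffling an influential covariate decorrelates the predictions (importance close to 1).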
To that end, we randomized the values of the covariates one by one and computed the variable importance as 1 minus the Pearson’s r between the predictions of the original model and the predictions obtained from the model with randomized data. We iterated this procedure 10,000 times (100 iterations of the variable importance algorithm times 100 models based on replicates of the phylogenetic tree) and reported the average score and standard deviation across the 100 stochastic replicates (standard deviation across the iterations was negligible) for each of the eight models.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

## Data availability

Data associated with this publication including source data files for Figs. 1–5 of this manuscript are available within the paper and its supplementary information files. Species’ geographical ranges were downloaded from IUCN40, from Jézéquel et al.41 and a combination of additional sources as described in Barbarossa et al.23.

## Code availability

The R code used to model species’ threat levels and produce all the figures presented here is available at https://github.com/vbarbarossa/fishsuit66. The Python source code used to obtain weekly water temperature and flow estimates at 5 arcminutes is available at https://github.com/UU-Hydro/PCR-GLOBWB_model67 (PCR-GLOBWB) and at https://github.com/wande001/dynWat (DynWat). All the model runs were carried out on the Dutch national e-infrastructure Cartesius.

## References

1. Dudgeon, D. et al. Freshwater biodiversity: importance, threats, status and conservation challenges. Biol. Rev. 81, 163 (2006). 2. Tedesco, P. A. et al. Data Descriptor: a global database on freshwater fish species occurrence in drainage basins. Sci. Data 4, 1–6 (2017). 3. Butchart, S. H. M. et al. Global Biodiversity: Indicators of Recent Declines. Science 328, 1164–1168 (2010). 4. Urban, M. C.
Accelerating extinction risk from climate change. Science 348, 571–573 (2015). 5. Reid, A. J. et al. Emerging threats and persistent conservation challenges for freshwater biodiversity. Biol. Rev. 94, 849–873 (2019). 6. Knouft, J. H. & Ficklin, D. L. The potential impacts of climate change on biodiversity in flowing freshwater systems. Annu. Rev. Ecol. Evol. Syst. 48 (2017). https://doi.org/10.1146/annurev-ecolsys-110316-022803. 7. Poff, N. L. R. Beyond the natural flow regime? Broadening the hydro-ecological foundation to meet environmental flows challenges in a non-stationary world. Freshw. Biol. 63, 1011–1021 (2018). 8. Comte, L. & Olden, J. D. Climatic vulnerability of the world’s freshwater and marine fishes. Nat. Clim. Chang. 7, 718–722 (2017). 9. Thieme, M. L., Lehner, B., Abell, R. & Matthews, J. Exposure of Africa’s freshwater biodiversity to a changing climate. Conserv. Lett. 3, 324–331 (2010). 10. Darwall, W. R. T. & Freyhof, J. Lost fishes, who is counting? The extent of the threat to freshwater fish biodiversity. In: Conservation of freshwater fishes (eds. Closs, G. P., Krkosek, M. & Olden, J. D.) 1–36 (Cambridge University Press, 2016). 11. van Vliet, M. T. H., Ludwig, F. & Kabat, P. Global streamflow and thermal habitats of freshwater fishes under climate change. Clim. Change 121, 739–754 (2013). 12. Warren, R., Price, J., Graham, E., Forstenhaeusler, N. & VanDerWal, J. The projected effect on insects, vertebrates, and plants of limiting global warming to 1.5 °C rather than 2 °C. Science 360, 791–795 (2018). 13. Powers, R. P. & Jetz, W. Global habitat loss and extinction risk of terrestrial vertebrates under future land-use-change scenarios. Nat. Clim. Chang. (2019) https://doi.org/10.1038/s41558-019-0406-z. 14. Zurell, D., Graham, C. H., Gallien, L., Thuiller, W. & Zimmermann, N. E. Long-distance migratory birds threatened by multiple independent risks from global change. Nat. Clim. Chang. 8, 992–996 (2018). 15. Hof, C., Araújo, M. B., Jetz, W. & Rahbek, C.
Additive threats from pathogens, climate and land-use change for global amphibian diversity. Nature 480, 516–519 (2011). 16. Román-Palacios, C. & Wiens, J. J. Recent responses to climate change reveal the drivers of species extinction and survival. Proc. Natl Acad. Sci. 117, 4211–4217 (2020). 17. Wiens, J. J. Climate-related local extinctions are already widespread among plant and animal species. PLOS Biol. 14, e2001104 (2016). 18. Hoegh-Guldberg, O., Jacob, D., Taylor, M. et al. Impacts of 1.5 °C global warming on natural and human systems. In: Global Warming of 1.5 °C: An IPCC Special Report (2018). 19. United Nations Environment Programme. Emissions Gap Report 2019 (2019). 20. Sutanudjaja, E. H. et al. PCR-GLOBWB 2: a 5 arcmin global hydrological and water resources model. Geosci. Model Dev. 11, 2429–2453 (2018). 21. Wanders, N., van Vliet, M. T. H., Wada, Y., Bierkens, M. F. P. & van Beek, L. P. H. High-resolution global water temperature modelling. Water Resour. Res. 55, 2760–2778 (2019). 22. Abell, R. et al. Freshwater ecoregions of the world: a new map of biogeographic units for freshwater biodiversity conservation. Bioscience 58, 403–414 (2008). 23. Barbarossa, V. et al. Impacts of current and future large dams on the geographic range connectivity of freshwater fish worldwide. Proc. Natl Acad. Sci. USA. 117, 3648–3655 (2020). 24. Wiel, K., Wanders, N., Selten, F. M. & Bierkens, M. F. P. Added value of large ensemble simulations for assessing extreme river discharge in a 2 °C warmer world. Geophys. Res. Lett. 46, 2093–2102 (2019). 25. Barbarossa, V. et al. FLO1K, global maps of mean, maximum and minimum annual streamflow at 1 km resolution from 1960 through 2015. Sci. Data 5, 180052 (2018). 26. Bunn, S. E. & Arthington, A. H. Basic principles and ecological consequences of altered flow regimes for aquatic biodiversity. Environ. Manag. 30, 492–507 (2002). 27. Faurby, S. & Araújo, M. B.
Anthropogenic range contractions bias species climate change forecasts. Nat. Clim. Chang. 8, 252–256 (2018). 28. Sunday, J. M., Bates, A. E. & Dulvy, N. K. Thermal tolerance and the global redistribution of animals. Nat. Clim. Chang. 2, 686–690 (2012). 29. Collas, F. P. L., van Iersel, W. K., Straatsma, M. W., Buijse, A. D. & Leuven, R. S. E. W. Sub-daily temperature heterogeneity in a side channel and the influence on habitat suitability of freshwater fish. Remote Sens. 11, 2367 (2019). 30. Leiva, F. P., Calosi, P. & Verberk, W. C. E. P. Scaling of thermal tolerance with body mass and genome size in ectotherms: a comparison between water- and air-breathers. Philos. Trans. R. Soc. B Biol. Sci. 374, 20190035 (2019). 31. Seneviratne, S. I. et al. The many possible climates from the Paris Agreement’s aim of 1.5 °C warming. Nature 558, 41–49 (2018). 32. Greve, P. et al. Global assessment of water challenges under uncertainty in water scarcity projections. Nat. Sustain. 1, 486–494 (2018). 33. Pokhrel, Y. et al. A review of the integrated effects of changing climate, land use, and dams on mekong river hydrology. Water 10, 266 (2018). 34. Frederico, R. G., Olden, J. D. & Zuanon, J. Climate change sensitivity of threatened, and largely unprotected, Amazonian fishes. Aquat. Conserv. Mar. Freshw. Ecosyst. 26, 91–102 (2016). 35. Béné, C. et al. Vulnerability and adaptation of African rural populations to hydro-climate change: experience from fishing communities in the Inner Niger Delta (Mali). Clim. Change 115, 463–483 (2012). 36. Tewksbury, J., Huey, R. & Deutsch, C. Putting heat on tropical animals. Science 320, 1296–1297 (2008). 37. FAO. The State of World Fisheries and Aquaculture 2018—Meeting the sustainable development goals. (2018). 38. Allison, E. H. et al. Vulnerability of national economies to the impacts of climate change on fisheries. Fish Fish 10, 173–196 (2009). 39. Nelson, J. S. Fishes of the world. (John Wiley & Sons, Inc., 2006). 40. IUCN. 
The IUCN Red List of Threatened Species. Version 2018-2. http://www.iucnredlist.org (2018). Accessed on 2020-06. 41. Jézéquel, C. et al. A database of freshwater fish species of the Amazon Basin. Sci. Data 7, 96 (2020). 42. Froese, R. & Pauly, D. FishBase. World Wide Web electronic publication www.fishbase.org (2018). 43. Lévêque, C., Oberdorff, T., Paugy, D., Stiassny, M. L. J. & Tedesco, P. A. Global diversity of fish (Pisces) in freshwater. in Freshwater Animal Diversity Assessment (eds. Balian, E. V., Lévêque, C., Segers, H. & Martens, K.) 545–567 (Springer Netherlands, 2008). https://doi.org/10.1007/978-1-4020-8259-7_53. 44. Lehner, B. et al. High‐resolution mapping of the world’s reservoirs and dams for sustainable river‐flow management. Front. Ecol. Environ. 9, 494–502 (2011). 45. Eaton, J. G. & Scheller, R. M. Effect of climate warming on fish thermal habitat in streams of the USA. Limnol. Oceanogr. 41, 1109–1115 (1996). 46. Beitinger, T. L., Bennett, W. A. & McCauley, R. W. Temperature tolerances of North American freshwater fishes exposed to dynamic changes in temperature. Environ. Biol. Fishes 58, 237–275 (2000). 47. Poff, N. L. et al. The ecological limits of hydrologic alteration (ELOHA): A new framework for developing regional environmental flow standards. Freshw. Biol. 55, 147–170 (2010). 48. Sukhodolov, A., Bertoldi, W., Wolter, C., Surian, N. & Tubino, M. Implications of channel processes for juvenile fish habitats in Alpine rivers. Aquat. Sci. 71, 338–349 (2009). 49. Azevedo, L. B. et al. Species richness-phosphorus relationships for lakes and streams worldwide. Glob. Ecol. Biogeogr. 22, 1304–1314 (2013). 50. Leuven, R. S., Posthuma, L., Huijbregts, M. A., Struijs, J. & De Zwart, D. Field sensitivity distribution of macroinvertebrates for phosphorus in inland waters. Integr. Environ. Assess. Manag. 7, 280–286 (2010). 51. Rogelj, J., Schleussner, C. F. & Hare, W. Getting it right matters: temperature goal interpretations in geoscience research. 
Geophys. Res. Lett. 44, 10, 662–10,665 (2017). 52. Barbarossa, V. et al. Developing and testing a global-scale regression model to quantify mean annual streamflow. J. Hydrol. 544, 479–487 (2017). 53. Loarie, S. R. et al. The velocity of climate change. Nature 462, 1052–1055 (2009). 54. McManamay, R. A., Griffiths, N. A., DeRolph, C. R. & Pracheil, B. M. A synopsis of global mapping of freshwater habitats and biodiversity: implications for conservation. Pure Appl. Biogeogr. 2, 64 (2018). 55. Radinger, J. et al. The future distribution of river fish: the complex interplay of climate and land use changes, species dispersal and movement barriers. Glob. Chang. Biol. 23, 4970–4986 (2017). 56. Grafen, A. The phylogenetic regression. Philos. Trans. R. Soc. B Biol. Sci. 326, 119–157 (1989). 57. Olden, J. D., Hogan, Z. S. & Zanden, M. J. Vander. Small fish, big fish, red fish, blue fish: size-biased extinction risk of the world’s freshwater and marine fishes. Glob. Ecol. Biogeogr. 16, 694–701 (2007). 58. Pacifici, M. et al. Assessing species vulnerability to climate change. Nat. Clim. Chang. 5, 215–225 (2015). 59. Kottek, M., Grieser, J., Beck, C., Rudolf, B. & Rubel, F. World Map of the Köppen-Geiger climate classification updated. Meteorol. Z. 15, 259–263 (2006). 60. Betancur-R, R. et al. Phylogenetic classification of bony fishes. BMC Evol. Biol. 17, 162 (2017). 61. Chang, J., Rabosky, D. L., Smith, S. A. & Alfaro, M. E. An r package and online resource for macroevolutionary studies using the ray‐finned fish tree of life. Methods Ecol. Evol. 10, 1118–1124 (2019). 62. Pinheiro, J., Bates, D., DebRoy, S., Sarkar, D. & R Core Team. _nlme: Linear and Nonlinear Mixed Effects Models_. (2019). 63. R Core Team. R: a language and environment for statistical computing. (2019). 64. Breiman, L. Random forests. Mach. Learn. 45, 5–32 (2001). 65. Thuiller, W., Georges, D., Engler, R. & Breiner, F. biomod2: ensemble platform for species distribution modeling. R. 
package version 3, 3–7 (2016). 66. Barbarossa, V. vbarbarossa/fishsuit: Nature Communications release. (2020) https://doi.org/10.5281/ZENODO.4309835. 67. Sutanudjaja, E. Pcr-Globwb_Model: Pcr-Globwb Version V2.1.0_Beta_1. (2017) https://doi.org/10.5281/ZENODO.247139. ## Acknowledgements We would like to thank Felix Leiva for the help provided with the phylogenetic regression and Erin Henry for providing part of the data on critical thermal maxima. This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative (account ruc17252). This project has received funding from the Europeans Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 641459 and the GLOBIO project (www.globio.info). The contribution of M.A.J.H. was financed by the Netherlands Organisation for Scientific Research project no. 016.Vici.170.190. ## Author information Authors ### Contributions V.B. and A.M.S. conceived the idea and wrote the paper. V.B. performed the analyses. J.B. ran the PCR-GLOBWB simulations to produce the hydrological data. V.B., A.M.S., J.B., M.A.J.H., N.W., M.F.P.B., and H.K. contributed to the study design, discussions about the approach and results, and commented on the paper. ### Corresponding author Correspondence to Valerio Barbarossa. ## Ethics declarations ### Competing interests The authors declare no competing interests. Peer review information Nature Communications thanks Lise Comte and Xingli Giam for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Barbarossa, V., Bosmans, J., Wanders, N. et al. Threats of global warming to the world’s freshwater fishes. Nat Commun 12, 1701 (2021). 
https://doi.org/10.1038/s41467-021-21655-w
https://par.nsf.gov/biblio/10384483-radio-pulse-profiles-polarization-terzan-pulsars
Radio Pulse Profiles and Polarization of the Terzan 5 Pulsars

Abstract

Terzan 5 is a rich globular cluster within the Galactic bulge containing 39 known millisecond pulsars, the largest known population of any globular cluster. These faint pulsars do not, in general, have sufficient signal-to-noise ratio (S/N) to measure reliable flux density or polarization information from individual observations. We combined over 5.2 days of archival data, at 1500 and 2000 MHz, taken with the Green Bank Telescope over the past 12 years. We created high-S/N profiles for 32 of the pulsars and determined precise rotation measures (RMs) for 28. We used the RMs, pulsar positions, and dispersion measures to map the projected parallel component of the Galactic magnetic field toward the cluster. The ⟨B∥⟩ shows a rough gradient of ∼6 nG arcsec⁻¹ (∼160 nG pc⁻¹) or, fractionally, a change of ∼20% in the R.A. direction across the cluster, implying Galactic magnetic field variability at sub-parsec scales. We also measured average flux densities Sν for the pulsars, ranging from ∼10 μJy to ∼2 mJy, and an average spectral index α = −1.35, where Sν ∝ ν^α. This spectral index is flatter than that of most known pulsars, likely a selection effect due to the high frequencies used in pulsar searches to mitigate dispersion and scattering. We used flux densities from each observation more »

Publication Date:
NSF-PAR ID: 10384483
Journal Name: The Astrophysical Journal
Volume: 941
Issue: 1
Page Range or eLocation-ID: Article No. 22
ISSN: 0004-637X
Publisher DOI prefix: 10.3847
National Science Foundation
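The power-law scaling Sν ∝ ν^α quoted in the Terzan 5 abstract above fixes the flux ratio between the two observing bands. The short check below is only an illustration of that relation (it is not part of the paper's analysis), evaluating the ratio for the reported mean index α = −1.35 between 1500 and 2000 MHz.

```python
def flux_ratio(nu1_mhz, nu2_mhz, alpha):
    """Ratio S(nu2)/S(nu1) for a power-law spectrum S_nu proportional to nu**alpha."""
    return (nu2_mhz / nu1_mhz) ** alpha

# mean spectral index alpha = -1.35 reported for the Terzan 5 sample:
r = flux_ratio(1500.0, 2000.0, -1.35)
# r is about 0.68, i.e. a typical pulsar is ~32% fainter at 2000 MHz
```

With a steeper (more negative) index the ratio drops further, which is why high-frequency searches preferentially find flat-spectrum pulsars, as the abstract notes.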
https://www.intechopen.com/books/broadband-communications-networks-recent-advances-and-lessons-from-practice/planning-of-fiwi-networks-to-support-communications-infrastructure-of-sg-and-sc
Open access peer-reviewed chapter

# Planning of FiWi Networks to Support Communications Infrastructure of SG and SC

By Arturo G. Peralta

Submitted: June 5th 2017. Reviewed: October 19th 2017. Published: December 20th 2017

DOI: 10.5772/intechopen.71781

## Abstract

The growth in bandwidth demand driven by new and future applications and services of smart grids (SG), smart cities (SC), and the Internet of Things (IoT) has drawn the attention of the scientific community to the planning and optimization of communications-infrastructure resources. Such infrastructure must also satisfy requirements of scalability, coverage, security, flexibility, availability, and delay. Another important point is how to find and analyze solutions that minimize the costs of capital expenditure (CAPEX) and operational expenditure (OPEX) while measuring the uncertainty that comes from stochastic projections, in order to obtain the maximum expected benefit when giving access to the users who consume the services provided by SG, SC, and IoT. At the same time, communications architectures must be sought that generate optimal topologies meeting the demanded requirements while saving energy; a promising alternative is the use of hybrid networks combining optical fiber links with wireless links (fiber-wireless, FiWi). This chapter provides planning alternatives for the network segment linking universal data aggregation points (UDAP) with base stations (BS), the segment that joins the wide area network (WAN) with the metropolitan area network (MAN).

### Keywords

• FiWi networks
• internet of things
• planning
• scalability
• smart cities
• smart grids
• stochastic programming

## 1.
Introduction

This chapter proposes a new planning model for the scalability and deployment of the communications infrastructure that supports SG, SC, and IoT. Countries such as the United States and the members of the European Union are carrying out SG projects motivated by the drawbacks of the current energy network, such as blackouts, overloads, and voltage drops. Most of these events were due to the slow response times of the devices that control the energy network. In addition, the growing population of residential and commercial clients who demand to connect intelligent appliances or IoT devices has rendered the supply network obsolete. Against this background, it is urgent to change the infrastructure of the electrical and communications systems so that it adapts to the temporal-spatial evolution of customers and meets requirements such as scalability, coverage, security, flexibility, availability, delay, and latency [1, 2, 3]. To observe a horizon of temporal-spatial evolution, it is necessary to characterize important parameters such as the demand and the density of the users who benefit from the services offered by SG, SC, and IoT. It is difficult to make accurate forecasts of the projection and growth of intelligent electronic devices (IED), given the uncertainty introduced by the number of variables involved; however, it is possible to make future projections stochastically, and these can serve as a reference for decisions on the deployment of the communications network that supports the services provided over SG, SC, and IoT, while testing various planning scenarios. Another point to highlight is how to find and analyze solutions that minimize the costs involved in CAPEX and OPEX so as to maximize the benefits expected by telecommunications operators.
Therefore, communication architectures that generate optimal topologies should be sought, in order to meet the requirements demanded by SG, SC, and IoT while at the same time saving energy; candidate alternatives from the scientific community point to the use of hybrid FiWi networks [4, 5, 6, 7, 8, 9]. The systems implemented through SG and SC are characterized by important parameters such as user density, the types of services provided, and the spatial and geographical location of resources such as the communications infrastructure [1, 10, 11, 12, 13, 14, 15], which is the backbone of SG, SC, and IoT. On this backbone run applications and services such as automated meter reading (AMR) or, with more extended services, the advanced metering infrastructure (AMI), which for example helps to detect system failures: communications failures, failures in devices such as sensors, actuators, and/or controllers, or failures in control-system and resource scheduling [16]. As for electricity distribution, the smart grid introduces the terminology of distributed generation (DG) or distributed energy resources (DER). In this way, the grid goes from having a few generation centers to having a large number of generation centers, renewable and/or traditional, distributed throughout the electrical network and forming interconnected micro-grids [17]. The main advantage of having DER is that distribution network operators (DNOs) can quickly and efficiently reconfigure and redirect power flow in response to events such as failures, changes in demand, or even changes in energy generation costs. Furthermore, storage sources include traditional high-performance batteries such as lead-acid, sodium-sulfide, and lithium-ion, but materials and alloys are being studied that will yield batteries with greater capacity and durability at lower cost than the current ones.
Chapter 1 of Ref. [18] mentions membranes and cells under investigation, such as polymer electrolyte membranes (PEM) and hydrogen fuel cells (HFC). On the other hand, a considerable increase in the penetration of electric vehicles (EV) is expected in the coming years; the most common will be plug-in electric vehicles (PEV) and plug-in hybrid electric vehicles (PHEV). Refs. [19, 20, 21] list the requirements that an SG must satisfy to meet these challenges. All these services and applications required by the users of SG, SC, and IoT grow over time, like a tree that expands its leaves; thus, the implementation layers of the services provided by SG and SC will be created across different temporal stages. In addition, the information flow must be conducted in a secure and scalable manner across the different network segments: the personal area network (PAN) and home area network (HAN) (see Figure 1), and the neighborhood area network (NAN), WAN, and MAN (see Figure 2).

### 1.1. Scalability of FiWi networks

Figure 2 shows different users geographically located in four subregions that form a planning area. These future clients will benefit from the services provided by SG, SC, and IoT, such as smart metering (energy, gas, water), demand response, power storage, civil security, community alarms, smart public lighting, smart road signposting, etc. To offer these services, it is necessary to have an adequate communications infrastructure throughout the region, to manage the flow of information together with the flow of energy. Therefore, Figure 2 shows how a FiWi network would be deployed to cover all the subregions and to scale horizontally along a timeline. The architecture would provide wireless access through data concentrators that we have called UDAP.
These devices can carry information from the wireless heterogeneous network (WHN) [22], coming from the different wireless sensor networks (WSN), to the base stations, which act as enhanced node base stations (eNodeB). Each base station has a gateway that connects to an optical network unit (ONU) in order to send the information over a PON that provides very-high-speed fronthaul/backhaul [23, 24], using fiber-optic links in a tree topology toward the optical line termination (OLT) at the central office (CO) of the public or private service provider. In the PON, bifurcations can be placed that function as remote nodes (RN), where passive optical equipment such as splitters (SP) or arrayed waveguide gratings (AWG) can be located. The proposed model seeks to guarantee horizontal scalability at each time stage tk: in passing from a stage tk to tk+1, fiber-optic and wireless resources are assigned to the FiWi network by means of actions and policies that add hardware in an optimal way, trying to give the greatest possible coverage to the users, who evolve and grow spatially along a timeline, while at the same time returning the maximum economic benefit to the investors, represented by public or private companies. This chapter is organized as follows: Section 2 presents the state of the art and related works; Section 3, the problem formulation with the planning model and the MOA-FiWi algorithm; Section 4, the analysis of results. Finally, Section 5 presents the conclusions and future works.

## 2. State of the art and related works

Aggregation points (AP) in the NAN play a very important role in the communications network that supports the SG, so an adequate AP planning model linking them to the HAN can minimize the costs of deploying the SG; such a model is proposed in Ref. [25]. Building on this premise, algorithms based on greedy and clustering techniques are presented; these proposals analyze power line communication (PLC) and optical fiber.
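The greedy AP-planning idea of Ref. [25], combined with the stage-wise hardware addition described in Section 1.1 (adding base stations from a stage tk to tk+1 as users appear), can be caricatured in a few lines. The greedy rule, coverage radius, and candidate sites below are illustrative assumptions of ours, not the algorithm of Ref. [25] nor the MOA-FiWi algorithm of this chapter.

```python
import math

def covered(user, station, radius):
    """True if a user (x, y) lies within a base station's coverage radius."""
    return math.dist(user, station) <= radius

def plan_stage(users, built, candidates, radius):
    """Greedily add candidate base stations until all users of the current
    stage tk are covered (or no candidate helps); return the new builds."""
    new_builds = []
    uncovered = [u for u in users
                 if not any(covered(u, s, radius) for s in built)]
    while uncovered:
        # pick the candidate site that covers the most still-uncovered users
        best = max(candidates, key=lambda c: sum(covered(u, c, radius)
                                                 for u in uncovered))
        gain = sum(covered(u, best, radius) for u in uncovered)
        if gain == 0:  # no remaining candidate helps; stop
            break
        new_builds.append(best)
        built = built + [best]
        uncovered = [u for u in uncovered if not covered(u, best, radius)]
    return new_builds

# stage t1: a handful of users in two subregions, two candidate BS sites
users_t1 = [(1.0, 1.0), (1.5, 0.5), (8.0, 8.0)]
candidates = [(1.0, 0.8), (8.0, 8.2)]
builds = plan_stage(users_t1, built=[], candidates=candidates, radius=2.0)
```

At the next stage t2, the same routine would be called with the newly arrived users and `built` set to the stations already deployed, which is the essence of horizontal scalability: hardware is only added where the evolving demand requires it.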
SG proposes a new concept in which electrical energy is generated, transported, distributed, and consumed thanks to the integration of telecommunications and advanced sensors, which provide daily control and monitoring of the operation of the energy network within a WAN. Electricity is the key nucleus for the functioning of society and for the provision of the services offered by information and communication technologies (ICT). The works presented in Ref. [26] investigate the challenges and opportunities achievable through the interaction of SG with green ICT, through the efficient use of energy with wireless technologies and wired technologies such as PLC and optical fiber, present in the different domains that SG handles, such as HAN, NAN, and WAN. On the other hand, Ref. [27] presents the problem of efficiently collecting measured data from the AMI by reusing existing communications infrastructure such as the cellular network, facilitated by a primary or secondary operator, the latter through a mobile virtual network operator (MVNO) or a cognitive virtual network operator; this requires analyzing the coverage problem in rural areas and the channel-capacity problem in urban areas caused by the density of cellular telephone users. In other words, channels need to be allocated equitably to reduce the costs of leasing the frequency spectrum. Significant contributions have focused on electric energy reserves, which can be managed by sending the measured data over secondary channels leased at the lowest possible price. To reduce both energy and communications costs, a problem called cost minimization for meter data collection (CMM) is formulated. This problem seeks an optimal solution for minimizing the costs involved in selecting the communication channels, together with a scheduling scheme for the delivery of energy [28].
Within the AMI concept, the sub-stages that constitute the network topology for infrastructure planning must be determined. Thus, we have the NAN [29, 30], delimited from the client meter to the UDAP concentrator with an uplink [31]. For this, conglomerates or clusters of smart meters (SM) are created to form NANs, where cellular technology such as general packet radio service (GPRS)/long term evolution (LTE) [32], or WiFi and IEEE 802.15.4g over multiple scales [33], can be used. In this way the first stage is completed. Subsequently, the different UDAPs of the NANs form a MAN, which in turn forms the WAN [7, 34]; between the NAN and the MAN/WAN, two solutions can be proposed for the boundary zones. The first is to keep a wireless solution (cellular, WiFi, or IEEE 802.15.4g [35]), according to the coverage and capacity of each UDAP, which ultimately provides the connectivity with the nearest cellular base station. When the information demand grows substantially, however, a fiber-optic backhaul is proposed [36, 37, 38, 39, 40], in addition to the interconnection of the cellular base stations normally deployed for telephony, thus forming a hybrid FiWi network. On the other hand, resource allocation is important for the network operator's profitability; the communications network must therefore be dimensioned to satisfy the customers' coverage and demand. Considering that these evolve over time, the infrastructure must evolve accordingly. Demand growth is difficult to predict and consequently constitutes an important source of uncertainty. The strategic planning of the communications network must take this uncertainty into account, and the network's evolution must be able to adapt to market conditions; therefore, applying advanced planning methods that account for uncertainty can improve network profitability and create a competitive advantage.
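The clustering step described above, which groups smart meters (SM) into NANs around UDAP concentrators, can be sketched with a plain Lloyd's k-means over meter coordinates. This is our own minimal illustration, not an algorithm from the cited references; the synthetic meter positions, the number of concentrators, and the iteration count are all assumptions.

```python
import numpy as np

def cluster_meters(meters, k, iters=50, seed=0):
    """Group smart-meter coordinates into k NAN clusters (Lloyd's k-means).

    Returns (centroids, labels): centroids are candidate UDAP sites,
    labels[i] is the cluster index assigned to meter i.
    """
    rng = np.random.default_rng(seed)
    # initialize centroids on k randomly chosen meters
    centroids = meters[rng.choice(len(meters), size=k, replace=False)]
    for _ in range(iters):
        # assign each meter to its nearest centroid
        d = np.linalg.norm(meters[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned meters
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = meters[labels == j].mean(axis=0)
    return centroids, labels

# synthetic example: 200 meters scattered over a 10 km x 10 km subregion
rng = np.random.default_rng(42)
meters = rng.uniform(0.0, 10.0, size=(200, 2))
centroids, labels = cluster_meters(meters, k=4)
```

Each resulting centroid is a natural candidate location for a UDAP, and each cluster a candidate NAN; a real planning run would then check per-cluster capacity and the wireless technology's coverage radius.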
Wireless network planning demands complex tasks and automated procedures that must adapt to and support the large data demands flowing from current and future technologies such as LTE, 4G and, in a few years, 5G. There are very few contributions from the scientific community regarding a planning framework that suits various technologies and demonstrates practical applicability through computational experiments on realistic, wide-ranging planning scenarios in which, moreover, the network evolves from an initial year toward future years. A popular method for evaluating investment opportunities in several domains with real options is presented in Ref. [41]. The real-options approach treats investment projects as options on the outcome of future cash flows and uses the financial market for a risk-neutral monetary valuation of investment opportunities. Real options have been used as a tool in several applications, including telecommunications [42, 43]. To apply the theory of real options correctly, the project has to be embedded in an appropriate market; in the absence of such a market embedding, stochastic programming can be a useful tool for evaluating real options [44]. A discount rate must adjust for risk and be used to arrive at an outcome that is an implicit evaluation of the paths forming the scenarios of a stochastic decision tree. Since the evolution of a communications network can be divided into several stages, multistage stochastic programming (MSP) [45] is an appropriate framework for modeling the strategic planning of telecommunication networks. Ref. [46] models the planning of wireless cellular networks through multistage stochastic programming, leaving for future work a deeper analysis of FTTx network planning.
Considering the aspects reviewed in the state of the art, we can state that important work has been done on analyzing how to save energy and provide greater capacity through the use of FiWi in multiservice networks that support SG, SC, and IoT. However, it remains a priority to model mathematically, in the presence of uncertainty, how to deploy a FiWi network in a scalable way, whether as a green-field planning tool, for access network upgrades, or to generate backup networks in case of failures. It is also important to optimize the allocation of the wireless and wired resources involved to meet the requirements of scalability, coverage and capacity; this is the main contribution of, and motivation for, the work we propose. In this chapter we present a novel scalability model for FiWi networks over Delaunay triangulation spaces; to the best of our knowledge, this is the first time scalability analysis (CAPEX and OPEX) is combined with uncertainty across the different time-space stages through multistage stochastic programming. The model offers flexibility in decision making as the time stages progress, which allows both green-field planning and the upgrading of networks that already have communication infrastructure.

## 3. Problem formulation

The research problem seeks to allocate resources, over a temporal-spatial evolution, for the deployment of the communications infrastructure that will support the services provided by SG, SC, and IoT, fulfilling the requirements of scalability and coverage through the use of wired and wireless media.

### 3.1. Planning model

The model proposed in Figure 3 is divided into four phases, described below:

#### 3.1.1. Determination of parameters to be projected

• Characterize the demand and the population or density of users. To do this, it is important to have prior statistical data, which can be obtained from surveys, fieldwork or comparison with previous projects.
• Then, a large number of projection scenarios are constructed at each stage of time. For this step we can use a Wiener stochastic process (WSP), also known as geometric Brownian motion (GBM), whose model is given in (1). The process is characterized by two parameters, the expected growth rate μ and the volatility σ, which generate the uncertainty values at each time stage of a projection path. More features, properties and details of this stochastic process can be found in [47, 48].

dS_t / S_t = μ dt + σ dW_t   (1)

• The evolution steps generate a large number of scenarios, which are reduced with the techniques proposed in Ref. [49]. The result is a multistage stochastic projection tree (MSPT).

#### 3.1.2. Region of planning and location of candidate sites

With the data generated by the MSPT, the candidate sites for base stations, the fiber-optic links, the location of the central office and the potential users are located. These sites evolve along a timeline for each of the paths in the MSPT scenarios. It is important to note that the coverage radii can be combined for macro, micro and femto cells. In this way, horizontal scalability (coverage requirements) and vertical scalability (capacity increases) can be provided as users grow spatially in the planning region. Horizontal scalability refers to the growth of the FiWi network over time, in line with the evolution and growth of users. This type of scalability therefore does not observe the behavior of the process at a single instant of time (a single snapshot of the scalability process); on the contrary, it is a process that changes, evolves and adapts automatically over time through the addition of hardware (base stations, fiber-optic links, etc.) from a time t_k to t_{k+1}. On the other hand, horizontal scalability does not guarantee capacity, which can even be very limited.
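As an illustration of the GBM projection step in Section 3.1.1, a minimal path simulator might look as follows. This is a sketch only: the function and its signature are ours, not the chapter's code, though the drift μ = 0.4 and volatility σ = 0.1 match the values used later in Section 4.

```python
import math
import random

def gbm_paths(s0, mu, sigma, n_stages, dt, n_paths, seed=0):
    """Simulate geometric Brownian motion demand-projection paths.

    Uses the exact discretization of dS_t/S_t = mu dt + sigma dW_t:
    S_{k+1} = S_k * exp((mu - sigma^2/2) dt + sigma * sqrt(dt) * Z),
    with Z ~ N(0, 1).
    """
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        s = s0
        path = [s]
        for _ in range(n_stages):
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
            path.append(s)
        paths.append(path)
    return paths

# One hundred projection paths over four yearly stages, as in Section 4
paths = gbm_paths(s0=100.0, mu=0.4, sigma=0.1, n_stages=4, dt=1.0, n_paths=100)
```

The resulting bundle of paths is what the scenario-reduction step of Ref. [49] would then compress into an MSPT.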
Then, to address capacity, vertical scalability is introduced, whose objective is to increase capacity without increasing the deployment of the communications infrastructure. For vertical scalability, it should be clarified that an exhaustive analysis of the capacity, interference and topology performance of a candidate solution is outside the scope of this work. In general, however, there are technological upgrade alternatives that increase the capacity available to each user added at a future time t_{k+1}. Moreover, the proposed mixed integer linear program (MILP) optimization model is versatile enough to adapt to any wireless and fiber-optic technology; for example, with LTE-Advanced-4G chosen as the wireless resource and xPON for the wired fiber-optic network, the technological upgrade alternatives would be those presented in Table 1.

Carrier aggregation: intra-carriers, inter-carriers
Spatial multiplexing techniques: MIMO, MISO, SIMO, SISO
Relay nodes: microcells, femtocells
Fronthaul/backhaul: NG-PON1, NG-PON2, DWDM, UDWDM

### Table 1. Alternatives of vertical scalability.

#### 3.1.3. Description of the MILP optimization model

There is a planning area A made up of users of the services provided by SG and SC, conglomerated through UDAPs. This situation is represented by the binary variable Xs_j^n, whose value is one if the j-th UDAP is served and covered within a time stage t_k for a node n of the MSPT, and zero otherwise. The information of Xs_j^n is conveyed toward the base stations, forming a set C of candidate cells for coverage, as long as the energy thresholds that sustain connectivity on the wireless links are met. Then, when the i-th base station is activated at a stage t_k for a node n of the MSPT, the variable Y_i^n becomes one, and zero otherwise. On the other hand, any candidate cell in a parent node pn of the MSPT remains active when scaling horizontally from time t_k to time t_{k+1}.
It is then possible to carry out the technological upgrades indicated in Table 1. All active base stations incorporate a gateway that allows the information to migrate at high speed through an ONU over fiber-optic links selected from a graph G(V, E), configured by the grid of streets, avenues and intersections that lie within A. Consequently, if a link is activated at a time stage t_k for a node n of the MSPT, the variable Z_{p,q}^n becomes one, and zero otherwise. The active links form a PON network in tree topology, whose two-way information flows go from the ONUs to the central office, where the OLT is located. It is not the object of this work to analyze intermediate passive equipment such as optical splitters (SP) and optical routers (AWG). This is justified because the major cost of the investment lies in the construction of the PON network, that is, in the laying of the fiber-optic cable, which also requires civil works such as conduits, pipelines and fittings to guide the transmission medium. However, the positioning of bifurcations that act as RNs is allowed; after the optimization process, and depending on the configuration of the PON network, SPs or AWGs will be located there. This equipment would be part of the technological upgrades indicated in Table 1. As with active base stations, if a fiber-optic link Z_{p,q}^n is chosen in a parent node pn of the MSPT, it remains active when scaling horizontally from time t_k to time t_{k+1}; branches may be added to place RNs and to provide fronthaul/backhaul to the new cells to be activated in the future. In this way, the reuse of guided transmission media such as optical fiber is optimized. The proposed MILP model then seeks to maximize the benefit expected from the investment in the deployment of the FiWi topology, D: argmax E[R]; there are two ways to solve it.
The first is to use mathematical optimization software, which can only handle small instances of the problem: if the proposed MILP model is simplified and some restrictions are relaxed, the equivalent of a maximum coverage problem (MCP) is obtained, so the complexity of the proposed optimization problem is NP-hard. The second way is to approximate feasible solutions by means of heuristics and metaheuristics, providing computational scalability through polynomial models whose cost does not grow exponentially with the size of the system; with this, medium and large instances of the problem can be treated. The detailed formulation of the multistage stochastic optimization problem and the policy-based algorithms proposed are discussed in the following subsections.

#### 3.1.4. Appropriate horizontal scalability path

The last phase of the scalable planning model for FiWi networks takes the information optimized by the MILP model and performs an adequate analysis to support decision making. In the last time stage there are several scenarios, formed by paths running through the MSPT, where each node n contains the FiWi topology and the UDAPs to be covered. The scenarios can be classified as conservative, realistic or optimistic, depending on their degree of uncertainty. Tools such as real options analysis [44, 50] can help select which horizontal scalability paths within the MSPT are suitable. On the other hand, the model is dynamic: if necessary, future scenarios can be reformulated at any time stage, and the planning process is the same one described in Sections 3.1.1–3.1.4 and presented in Figure 3.

### 3.2. Objective function

The objective D: argmax E[R] is detailed in (2)–(12). It is important to note that all values are expressed at net present value (NPV): CAPEX is charged at the beginning of a year, while income and OPEX are charged at the end of a year. Since there is no OPEX value for year zero, the investment at both the beginning and the end of that year is high, giving negative cash flows in some cases:

argmax E[R] = R_profit^UDAP − C_capex^BS − C_capex^OF − C_opex^BS − C_opex^OF   (2)

Subject to:

R_profit^UDAP = Σ_{n∈N} P_n Σ_{j∈A} Î_j^n · Xs_j^n   (3)

Î_j^n = I_j^n / (1 + r)^{t_k}   (4)

C_capex^BS = Σ_{n∈N} P_n Σ_{i∈C} Ĉ_i^{capex,n} (Y_i^n − Y_i^{pn})   (5)

Ĉ_i^{capex,n} = C_i^{capex,n} / (1 + r)^{t_k − 1}   (6)

C_opex^BS = Σ_{n∈N} P_n Σ_{i∈C} Ĉ_i^{opex,n} Y_i^n   (7)

Ĉ_i^{opex,n} = C_i^{opex,n} / (1 + r)^{t_k}   (8)

C_capex^OF = Σ_{n∈N} P_n Σ_{(p,q)∈E} Ĉ_{p,q}^{capex,n} D_{p,q}^n (Z_{p,q}^n − Z_{p,q}^{pn})   (9)

Ĉ_{p,q}^{capex,n} = C_{p,q}^{capex,n} / (1 + r)^{t_k − 1}   (10)

C_opex^OF = Σ_{n∈N} P_n Σ_{(p,q)∈E} Ĉ_{p,q}^{opex,n} D_{p,q}^n Z_{p,q}^n   (11)

Ĉ_{p,q}^{opex,n} = C_{p,q}^{opex,n} / (1 + r)^{t_k}   (12)

### 3.3. Wireless coverage restrictions

The constraints in (13)–(16) control the horizontal scalability provided by the maximum wireless coverage. Constraint (13) ensures that each UDAP has service and coverage; constraint (14) prevents base stations from being destroyed from parent nodes to child nodes in the MSPT (note that the parent root node of the MSPT satisfies Y_i^{pr} = 0); in (15), the number of base stations built, added to those already existing from the parent node, is limited to K^n in order to control the propagated energy and the consumed electrical energy; and constraint (16) controls and ensures that coverage is successful over the planning area A through the parameters α^n, the coefficients W_j^n and the variables C^n.

Σ_{i∈C} Y_i^n ≥ Xs_j^n ;  ∀n ∈ N, ∀j ∈ A   (13)

Y_i^n ≥ Y_i^{pn} ;  ∀n ∈ N, ∀i ∈ C   (14)

Σ_{i∈C} (Y_i^n − Y_i^{pn}) ≤ K^n ;  ∀n ∈ N   (15)

Σ_{j∈A} W_j^n · Xs_j^n ≥ α^n |A| C^n ;  ∀n ∈ N   (16)

### 3.4. Fiber-optic restrictions for fronthaul/backhaul

Constraints (17)–(20) ensure a scalable deployment of the fiber-optic fronthaul/backhaul. Constraint (17) prevents fiber-optic links from being destroyed from parent nodes to child nodes in the MSPT.
In (18) and (19), the routing of all flows F, from the m active cells to the central office OLT, is ensured over fiber paths of minimum distance. On the other hand, (20) enforces that the active links correspond to each of the m flows.

Z_{p,q}^n ≥ Z_{p,q}^{pn} ;  ∀n ∈ N, ∀(p,q) ∈ E   (17)

Σ_{q:(p,q)∈E_OUT} Z_{p,q}^{n,m} − Σ_{q:(q,p)∈E_IN} Z_{q,p}^{n,m} = R_{p,i} · Y_i^n   (18)

R_{p,i} = 1 if i = OLT;  −1 if i = m;  0 if i ≠ OLT, i ≠ m   (19)

Σ_{m∈F} Z_{p,q}^{n,m} ≤ M · Z_{p,q}^n ;  ∀n ∈ N, ∀(p,q) ∈ E   (20)

### 3.5. Dimensioning of variables

In (21) we give the dimensioning of all the decision variables involved in the MILP. Finally, Table 2 summarizes all the variables, constants, coefficients and parameters used in the formulation of the MILP model.

Xs ∈ {0,1}^{N×A},  Y ∈ {0,1}^{N×C},  Z ∈ {0,1}^{N×E×F},  C ∈ {0,1}^N   (21)

Sets:
A: planning area, divided into pixels
C ∈ ℤ: set of candidate cells for coverage
F ∈ ℤ: set of m flows

Tree scenario:
N: set of MSPT nodes
P_n ∈ [0,1]: probability at node n
pn ∈ N: parent node in the MSPT

Coefficients and parameters:
M >> 0: a sufficiently large number, greater than |F|
K^n ∈ ℤ: construction limit at node n
α^n ∈ [0,1]: coverage requirement parameter
W_j^n ∈ [0,1]: weight of a pixel at node n
Î_j^n ≥ 0: revenue per pixel at node n
Ĉ_i^{capex,n} ≥ 0: NPV of CAPEX in cell i at node n
Ĉ_i^{opex,n} ≥ 0: NPV of OPEX in cell i at node n
D_{p,q}^n ≥ 0: distance of link (p,q) at node n
Ĉ_{p,q}^{capex,n} ≥ 0: NPV of CAPEX for link (p,q) at node n
Ĉ_{p,q}^{opex,n} ≥ 0: NPV of OPEX for link (p,q) at node n

Decision variables:
Y_i^n ∈ {0,1}: cell i is active at node n
Xs_j^n ∈ {0,1}: UDAP j is covered at node n
Z_{p,q}^n ∈ {0,1}: link (p,q) is active at node n
C^n ∈ {0,1}: fulfillment of coverage at node n

### Table 2. Variables, coefficients, and parameters of the MILP.

### 3.6. MOA-FiWi algorithm

To treat medium and large instances of the problem, a new multistage optimization algorithm for fiber-wireless hybrid networks (MOA-FiWi) has been proposed. The optimization core of MOA-FiWi is a set of actions and policies π that provide maximum coverage to the UDAPs carrying the information of the users who benefit from the services provided by SG, SC and IoT.
Therefore, it must be kept in mind that the number of UDAPs grows along each MSPT path; according to this growth, the designation of the resources forming the FiWi network must scale horizontally in time and space. The set ξ represents the universe of possible geographical locations over time for the UDAPs within the planning region A. A suitable policy π_k then depends on the values taken by the spatial locations of the UDAPs, ξ' ⊆ ξ, π_k(ξ'); consequently, the expected maximum benefit depends on the policies, as in (22).

D: argmax E[R] = E[π_k] = R̃(π_k)   (22)

The policies are in charge of activating and optimally locating the base stations on candidate sites (using a modified set-cover procedure), while the fiber-optic links form a PON network in tree topology (with the help of a modified Dijkstra algorithm) between the ONUs and the candidate site chosen for the OLT; this location gives the reference point from which the evolution of the FiWi network begins. This must hold for every node of the MSPT. Algorithm 1 details MOA-FiWi.

Algorithm 1. MOA-FiWi.

Step 1: Generate the MSPT(n, t)
Step 2: Generate ξ'_0 ⊆ ξ
Step 3: Generate π_0(ξ'_0, MSPT(n, t))
Step 4: Calculate E[ξ'_0] = R̃(π_0)
Step 5: Generate new ξ'_{k+1} = f_1(ξ'_k, ξ)
Step 6: Modify π_{k+1} = f_2(π_k, ξ'_k, MSPT(n, t))
Step 7: Apply the decision and stopping criterion
Step 8: Go to Step 5 if the criterion is not met
Step 9: Return D: argmax E[R]

## 4. Result analysis

To exemplify the operation of MOA-FiWi, a planning region A delimited by a graph G(V, E) on a Delaunay triangulation space has been generated, within which a large number of UDAPs are deployed, each providing access to an average of ten to twenty users benefiting from the services provided by SG and SC. The coverage of the region is distributed over the time stages t_k ∈ {0, 1, 2, 3, 4}. Figure 4(a) shows the geographic distribution of the planning area, which is a component of the subset ξ'_0. Figure 4(b) presents the MSPT for the four temporal stages.
In each node, the projected UDAP population and the probability value measuring the degree of uncertainty are indicated. At the end there are six scenarios, two considered conservative, two realistic and two optimistic, with the break point at year one. Moreover, to obtain the reduced MSPT, one hundred paths were projected with μ = 0.4 and σ = 0.1. Table 3 summarizes the reference incomes and costs in US dollars, consulted with three telecommunications operators; these data are the input for MOA-FiWi. In addition, the simulations were performed with a discount rate r = 9.57%. Figure 5 shows the evolution of the value D over the search space by means of a simulated annealing metaheuristic; the main policies are presented in Figure 6, and the maximum expected profit was achieved at the 130th policy, π_130, after 300 iterations.

Annual benefit per UDAP: $400.00
CAPEX of OLT, with capacity for 1000 users, type XG-PON: $45,000.00
CAPEX of eNodeB/ONU: $25,000.00
CAPEX per meter, including optical fiber, supports, pipes and ducts: $15.00
Annual OPEX of eNodeB/ONU: $200.00
Annual OPEX per meter of optical fiber: $1.20

### Table 3. Revenues and costs considered in MOA-FiWi.

Moreover, Figure 7 exhibits the behavior of the temperature and error curves during the optimization process with the simulated annealing metaheuristic; the behavior of MOA-FiWi is adequate, improving the feasible solutions found at each iteration. The maximum expected benefit was reached at the 130th policy on the MSPT; of the six stochastic paths, after a decision-making analysis, the paths with the best results were scenarios one and three. Figure 8 presents the topologies of these two scenarios.

## 5. Conclusions and future works

• SG, SC, and IoT applications require the power network to support a bidirectional flow of energy, so that users can interact with it and deliver power to the system; in addition, a two-way flow of information between end users and service providers is required. For this reason, the communications network that supports the services provided by SG, SC, and IoT plays a primary role, guaranteeing scalability, coverage, bandwidth, latency, reliability, security and privacy. These requirements must be fulfilled on all segments, namely HAN, NAN, MAN and WAN.

• As the services provided by SG and SC increase, the demand and coverage of IEDs grow with time; consequently, the communications infrastructure has to evolve and scale in parallel. To achieve this, the application of advanced planning methods under conditions of uncertainty helps network operators make decisions and improves their profitability and competitiveness in adapting to changing market conditions.

• The main contributions of this work are a planning model to treat the scalability of FiWi networks, based on four phases, and a new mathematical optimization (MILP) model using MSP. The problem it solves reaches NP-hard complexity, yet the model is an ideal decision-making tool for planning communications networks with uncertain demand growth, according to the different services that could be anchored on existing wireless networks such as cellular networks. The optimization model focuses on achieving a scalable planning of the fiber-optic network used as fronthaul/backhaul of the wireless network, forming a hybrid FiWi network that evolves along a space-time line.
• Being a stochastic problem gives the possibility of measuring the risk or benefit of the actions and policies taken in each projected scenario; possible solutions can therefore be approached from several points of view and not from a single one, as in a deterministic planning model. The MSPT makes it possible to find important break points at which to take actions and policies that mark new forms of horizontal scalability in the topology of the FiWi network supporting the services provided by SG, SC, and IoT.

• To deal computationally with multistage stochastic planning, an algorithm called MOA-FiWi has been proposed, in which the optimization and stopping criteria were evaluated using a simulated annealing metaheuristic. MOA-FiWi is based on optimization actions and policies, which provide horizontal scalability over a timeline and in the presence of uncertainty, the situation that occurs in real life when projects for the expansion, updating or implementation of communications infrastructure are executed.

• On the other hand, the results obtained reveal a great sensitivity of the maximum expected benefit to how wired and wireless resources are designated in time and space to give maximum coverage to the users; with the proposed model, the problem can be simulated from different points of view. As a result, a planning tool is available that helps in the analysis for decision making.

• Finally, future work is intended to treat vertical scalability, with the purpose of improving the performance and capacity of the system; to compare several technologies used in the planning of FiWi networks; and to try other metaheuristics that could explore the search space better to obtain feasible solutions.

## Acknowledgments

This work is supported by the research groups GITEL of the Universidad Politécnica Salesiana, Cuenca, Ecuador, and the research group GIDATI of the Universidad Pontificia Bolivariana, Medellín, Colombia.
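As a complement to the MOA-FiWi description in Section 3.6, the simulated-annealing search used to evaluate the optimization and stopping criteria can be sketched generically. Everything below is illustrative: the neighbor move, the toy score, and all parameter values are ours, standing in for the chapter's policy evaluation R̃(π_k), not reproducing it.

```python
import math
import random

def simulated_annealing(initial, neighbor, score, t0=1.0, cooling=0.95,
                        iterations=300, seed=0):
    """Generic simulated annealing for maximization: propose a neighbor,
    always accept improvements, accept worse moves with probability
    exp(delta / T), and cool the temperature geometrically."""
    rng = random.Random(seed)
    best = current = initial
    best_score = current_score = score(current)
    temp = t0
    for _ in range(iterations):
        candidate = neighbor(current, rng)
        cand_score = score(candidate)
        delta = cand_score - current_score
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            current, current_score = candidate, cand_score
        if current_score > best_score:
            best, best_score = current, current_score
        temp *= cooling
    return best, best_score

# Toy stand-in for E[R]: maximize -(x - 3)^2 over real x, 300 iterations
# (the same iteration budget reported in Section 4).
best, val = simulated_annealing(
    initial=0.0,
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    score=lambda x: -(x - 3.0) ** 2,
)
```

In MOA-FiWi the candidate would be a policy π_k (a set of activated cells and fiber links per MSPT node) rather than a scalar, and the score would be the NPV objective of Section 3.2.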
## Glossary

AMI: advanced metering infrastructure
AMR: automated meters reader
AP: aggregation points
AWG: arrayed waveguide grating
BS: base stations
CAPEX: capital expenditure
CMM: cost minimization for meter data collection
CO: central office
DER: distributed energy resources
DG: distributed generation
DNOs: distribution network operators
eNodeB: enhanced node base station
EV: electric vehicles
FiWi: hybrid networks of optical fiber links combined with wireless links
GBM: geometric Brownian motion
GPRS: general packet radio service
HAN: home area network
HFC: hydrogen fuel cells
IED: intelligent electronic devices
IoT: internet of things
LTE: long term evolution
MAN: metropolitan area network
MCP: maximum coverage problem
MILP: mixed integer linear program
MOA-FiWi: multistage optimization algorithm for fiber-wireless hybrid networks
MSP: multistage stochastic programming
MSPT: multistage stochastic projection tree
MVNO: mobile virtual network operator
NAN: neighborhood area network
NPV: net present value
OLT: optical line termination
ONU: optical network unit
OPEX: operational expenditure
PAN: personal area network
PEM: polymer electrolyte membrane
PEV: plug-in electric vehicles
PHEV: plug-in hybrid electric vehicles
PLC: power line communication
PON: passive optical network
RN: remote nodes
SC: smart cities
SG: smart grids
SM: smart meters
SP: splitter
TIC: technologies of information and communication
UDAP: universal data aggregation point
WAN: wide area network
WHN: wireless heterogeneous network
WSN: wireless sensors network
WSP: Wiener stochastic processes

## How to cite and reference

### Cite this chapter

Arturo G. Peralta (December 20th 2017). Planning of FiWi Networks to Support Communications Infrastructure of SG and SC, Broadband Communications Networks - Recent Advances and Lessons from Practice, Abdelfatteh Haidine and Abdelhak Aqqal, IntechOpen, DOI: 10.5772/intechopen.71781.
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1224.15066
Zbl 1224.15066

Marcoci, A.; Marcoci, L.; Persson, L.E.; Popa, N.
Schur multiplier characterization of a class of infinite matrices. (English) [J] Czech. Math. J. 60, No. 1, 183-193 (2010). ISSN 0011-4642; ISSN 1572-9141/e

Summary: Let $B_w(\ell ^p)$ denote the space of infinite matrices $A$ for which $A(x)\in \ell ^p$ for all $x=\{x_k\}_{k=1}^\infty \in \ell ^p$ with $\vert x_{k}\vert \searrow 0$. We characterize the upper triangular positive matrices from $B_w(\ell ^p)$, $1<p<\infty$, by using a special kind of Schur multipliers and Bennett's factorization technique. Also, some related results are stated and discussed.

MSC 2000:
*15B48
15A60 Appl. of functional analysis to matrix theory
47B35 Toeplitz operators, etc.
26D15 Inequalities for sums, series and integrals of real functions

Keywords: infinite matrix; Schur multiplier; discrete Sawyer duality principle; Bennett factorization; Wiener algebra; Hardy type inequality; upper triangular positive matrices
https://www.clutchprep.com/chemistry/practice-problems/104279/the-decomposition-of-hydrogen-iodide-on-finely-divided-gold-at-150-c-is-zero-ord-2
Problem: The decomposition of hydrogen iodide on finely divided gold at 150°C is zero order with respect to HI. The rate defined below is constant at 1.20 x 10-4 mol/L·s.

a. If the initial HI concentration was 0.250 mol/L, calculate the concentration of HI at 25 minutes after the start of the reaction.

Expert Solution: Recall the integrated rate law for a zero-order reaction, [HI] = [HI]₀ − kt.

Given: k = 1.20 x 10-4 mol/(L·s), [HI]₀ = 0.250 mol/L. Calculate the amount of HI at t = 25 min.
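Under the standard zero-order integrated rate law [A] = [A]₀ − kt, the arithmetic can be checked with a short script (the function name is ours):

```python
def zero_order_conc(c0, k, t_seconds):
    """Concentration under a zero-order rate law: [A] = [A]0 - k*t,
    clipped at zero since a concentration cannot be negative."""
    return max(c0 - k * t_seconds, 0.0)

# k = 1.20e-4 mol/(L*s), [HI]0 = 0.250 mol/L, t = 25 min = 1500 s
c_hi = zero_order_conc(0.250, 1.20e-4, 25 * 60)
print(c_hi)  # ~0.070 mol/L
```

Note the unit conversion: the rate constant is per second, so the 25 minutes must be expressed as 1500 s before substituting.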
https://ask.libreoffice.org/en/question/154275/writer-numbering-table-of-contents/
# [Writer] Numbering Table of contents [closed]

I have the following structure:

# Title (Heading 1)
text here
# Second Title (Heading 1)
### Subtitle (Heading 2)

Then I set up headings from Tools > Chapter numbering... > Paragraph styles; I also change the Number field to 1, 2, 3 and show sub-levels. When I generate the TOC it looks OK:

1. Title .......................3
1.1 Subtitle .................3
2. Second Title.................6
2.1 Subtitle .................6

That part is fine, BUT the problem is that my original Title also changes and gets numbering. It becomes:

# 1. Title

I want to have numbering only in the TOC. Is that possible?

## 1 Answer

Ideally, the numbering in Heading x should be styled Hidden, but that does not work. Therefore, getting the expected result is a bit more complicated.

• Define an Invisible character style. In the Font tab, set Size to 2pt (the minimum allowed by LO Writer). In Font Effects, check Hidden (useless in fact, because Writer doesn't seem to take it into account in this specific case; that's why I set Size to 2pt). Save the new definition.

• Tune Tools>Chapter Numbering, after your custom settings are tuned and saved. In the Numbering tab, click on level 1-10 and set Character style to the Invisible style you just created. Make sure the Separator Before and After fields are empty. In the Position tab, click on level 1-10 and set Number followed by to Nothing. Set Aligned at and Indent at to 0. Press OK.

• Edit the TOC structure through right-click and Edit Index/Table. For every level you inserted in your TOC, put the cursor in the text box between E# and E. In this box type a space (or any other decorated separator you like). When done with all levels, press OK.

You need to insert a space because in Tools>Chapter Numbering, Position, the separator was suppressed (Nothing) so that the Heading x paragraphs would be left-justified (save for a tiny space due to the numbering in 2pt).
If this answer helped you, please accept it by clicking the check mark ✔ to the left and, karma permitting, upvote it. If this resolves your problem, close the question; that will help other people with the same question.

## Comments

Thanks for the answer. To make it fully invisible I can match the font color with the background color. (2018-05-08 18:31:20 +0200)

## Stats

Asked: 2018-05-07 22:41:05 +0200
Seen: 573 times
Last updated: May 07 '18
https://fenicsproject.discourse.group/t/ordering-of-3d-solution-vector/10684
Ordering of 3D solution vector

I'm dealing with the 3D elastodynamics problem in the examples, where we have a 3D displacement vector field. Obviously, if we have N points in the mesh, then the solution vector u will be of length 3N. My question is about the ordering of the solution vector accessed via:

u.vector().get_local(vertex_to_dof_map(self.V))

My guess is that the vector is ordered as uh = [uh_x, uh_y, uh_z], where each component block is of length N, so that if we access the i-th item in the coordinates array, the associated solution values should be [uh_x[i], uh_y[i], uh_z[i]]. Is that reasonable?
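Whether the flat vector is blocked (as guessed above) or interleaved per vertex depends on the element and DOLFIN version, so it is worth checking, e.g. by comparing against the dof coordinates. As a pure-NumPy illustration (no FEniCS required) of how the two candidate layouts relate to per-vertex triples:

```python
import numpy as np

# Toy stand-in for a mesh with N vertices and a 3-component field.
N = 4
values = np.arange(3 * N, dtype=float)  # pretend dof vector of length 3N

# Layout 1 (interleaved): [ux0, uy0, uz0, ux1, uy1, uz1, ...]
per_vertex_interleaved = values.reshape(N, 3)

# Layout 2 (blocked, as guessed in the question): [ux0..ux3, uy0..uy3, uz0..uz3]
per_vertex_blocked = values.reshape(3, N).T

# Either way, row i is the (x, y, z) triple associated with vertex i,
# but the two layouts yield different triples from the same flat vector,
# so it matters which convention the dof map actually uses.
print(per_vertex_interleaved[1])  # [3. 4. 5.]
print(per_vertex_blocked[1])      # [1. 5. 9.]
```

The reshape/transpose pair converts between the two conventions; only the mapping, not the data, changes.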
2023-03-22 09:24:49
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.801123857498169, "perplexity": 1174.34127100799}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00539.warc.gz"}
https://strutt.arup.com/help/Propagation/AngleViewAtten.htm
### Strutt Help

Angle of View Correction

Strutt|Propagation|Angle of View Correction inserts the angle-of-view correction for an infinite line source into the active row of the worksheet. The correction is calculated as:

10 log_10(theta / 180°), theta in [0°, 360°]

where theta is the angle of view.
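The formula is a one-liner; a minimal Python sketch (the function name is mine, not part of Strutt):

```python
import math

def angle_of_view_correction(theta_deg: float) -> float:
    """Angle-of-view correction in dB: 10*log10(theta/180), theta in degrees."""
    if not 0 < theta_deg <= 360:
        raise ValueError("theta must be in (0, 360] degrees")
    return 10 * math.log10(theta_deg / 180.0)

print(angle_of_view_correction(180))  # 0.0 dB (full half-space view)
print(angle_of_view_correction(90))   # about -3.0 dB
```

A 180° view gives no correction; halving the angle of view removes roughly 3 dB, and a full 360° view adds roughly 3 dB.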
2021-03-01 19:32:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5656836628913879, "perplexity": 5306.672104559346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362899.14/warc/CC-MAIN-20210301182445-20210301212445-00320.warc.gz"}
https://styalways.co.uk/op97wie3/article.php?9054f8=mariadb-server-not-found
What was the chemical symbol for the starting isotope? It was not until the 20th century that people actually succeeded, as we will see next. However, people have long sought to be able to change the nucleus. The alchemists tried to convert cheap metals like lead into gold.

Electron attachment processes:
Resonance electron capture: AB + e− (∼0.1 eV) → AB−•
Dissociative electron capture: AB + e− (0–15 eV) → A• + B−
Ion-pair formation: AB + e− (>10 eV) → A− + B+ + e−
Negative ion/molecule reaction: AB + C− → ABC− or (AB − H)− + HC

Electron capture is one process that unstable atoms can use to become more stable. It is a type of decay in which the nucleus of an atom draws in an inner-shell electron: an electron in the 1s orbital "falls" into the nucleus, which fuses with a proton to form a neutron. The electron converts a proton to a neutron, and a neutrino is emitted. This allows the atom to get the optimal ratio of protons and neutrons. Electron capture occurs when neutrons and protons are below the band of stability, but there is not enough energy to emit a positron. The atomic number goes down by one and the mass number stays the same. Most commonly, it is a K-shell electron which is captured, and this is referred to as K-capture. The process may be detected by the consequent emission of the characteristic X-rays of the resultant element. Also known as: EC, K-capture (if a K-shell electron is captured), L-capture (if an L-shell electron is captured); (former name) K-capture. Electron capture is the last type of naturally occurring decay that we will study.

During electron capture, an electron in an atom's inner shell is drawn into the nucleus, where it combines with a proton, forming a neutron and a neutrino. The neutrino is ejected from the atom's nucleus.

Positron emission versus electron capture: the emission of a positron and the capture of an electron are twin reactions which both result in the diminution of the number of protons by 1 (from Z to Z−1) and the production of a neutrino. The positron observed in the final stage of the beta decay is a new particle requiring the 0.511 MeV of its rest-mass energy to be created. In electron capture, an electron from an inner orbital is captured by the nucleus of the atom and combined with a proton to form a neutron. Note that the overall result of electron capture is identical to positron emission.

Beta (β) emission is a process in which a nucleus emits a β particle (an electron or a positron); there are two types of β emission. The emission of an electron is β− decay. The electron has a negative charge, and the positron has positive charge. Note that e is the symbol for the fundamental charge of an electron and a proton, which has a value of e = 1.602 × 10^−19 C. Alpha decay results in emission of an alpha particle and causes the nucleus to decrease its mass number by 4 and its atomic number by 2.

There are four primary types of natural radioactivity. For natural radioactivity, there is usually only one symbol on the left side of the equation (the exception is electron capture).
(b) An α particle is one product of natural radioactivity and is the nucleus of a helium atom.
(c) A β particle is a product of natural radioactivity and is a high-speed electron.
(d) A positron is a particle with the same mass as an electron but with a positive charge.

Example nuclides:
Ba-133: half-life 10.551 years; atomic number 56; mass number 133 (77 neutrons); decay mode: electron capture.
Magnesium-24 is the result of the beta decay of an isotope of another element.
Iron-55 (55Fe) is a radioactive isotope of iron with a nucleus containing 26 protons and 29 neutrons. It decays by electron capture to manganese-55, and this process has a half-life of 2.737 years.
Half-life: 271.80(5) d. The emitted X-rays can be used as an X-ray source for various scientific analysis methods, such as X-ray diffraction. Iron-55 is also a source for Auger electrons, which are produced during the decay.

For example, silver-106 undergoes electron capture to become palladium-106: $\ce{^{106}_{47}Ag} + \ce{^0_{-1}e} \rightarrow \ce{^{106}_{46}Pd}$. Rubidium-82 decays the same way: $\ce{^{82}_{37}Rb} + \ce{^0_{-1}e} \rightarrow \ce{^{82}_{36}Kr}$. A nuclear reaction can also be forced to occur by bombarding a radioactive isotope with another particle.

Practice questions:
Write an equation for the electron capture decay of polonium-200. Identify the element and write the nuclear equation.
When the nuclide iron-55 decays by electron capture, what are the name and the symbol of the product nuclide?
What is the daughter nucleus produced when Pb-196 undergoes electron capture? (Answer: 81-Tl-196.)
A radioactive element decays to bromine-81 after electron capture; what was the chemical symbol for the starting isotope? (Answer: (81/36)Kr.)
An experiment is set up to determine the type of radioactive decay coming from a certain hazardous nuclide.
A plutonium-239 nucleus absorbs a neutron and fissions to release three neutrons, a krypton-81 nucleus and one other atomic nucleus.
When 235U is bombarded with one neutron, fission occurs and the products are three neutrons, 94Kr, and _____.
A γ particle is a(n): …
A) If 248 Bk were to undergo spontaneous fission, the products would be 247 Bk and a neutron.
B) If 248 Bk were to undergo beta decay, the products would be 248 Cf and a beta particle.
C) If 248 Bk were to undergo alpha decay, the products would be 244 Am and an alpha particle.
D) If 248 Bk were to undergo electron capture, the only product would be 248 Cm.
E) …

Symbol definitions:
E: electric field strength
ε: emf (voltage) or Hall electromotive force
emf: electromotive force
E: energy of a single photon
E: nuclear reaction energy
E: relativistic total energy
E: total energy
E0: ground state energy for hydrogen
E0: rest energy
EC: electron capture
Ecap: energy stored in a capacitor
Eff: efficiency (the useful work output divided by the energy input)

Related acronyms:
ECCI: electron capture chemical ionisation
ECD: (1) electron-capture detector; (2) electrochemical detector
ECL: electrochemical luminescence
ECNI: electron capture negative ionisation
ED: (1) electrochemical detection; (2) energy dispersive
EDAX, EDX: energy-dispersive X-ray spectrometry
EDI-CI: electric discharge-induced chemical ionisation

This is a schematic that grossly distorts the picture relative to a scale model of the atom; the electron orbit radii are tens of thousands of times the diameter of the nucleus. The symbol e− indicates that the transition from the level at 14 keV is predominately via electron emission (the conversion coefficient T is 8.56).

The atomic number decreases … The nuclear symbol for a … This process of causing radioactivity is …

Source(s): ChemTeam. See also https://wps.prenhall.com/wps/media/objects/3084/3158429/blb2101.html

Electron Fiddle lets you create and play with small Electron experiments. It greets you with a quick-start template after opening: change a few things, choose the version of Electron you want to run it with, and play around. Then, save your Fiddle either as a GitHub Gist or to a local folder.
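The bookkeeping behind the electron-capture quiz answers above (electron capture: Z down by one, A unchanged; alpha: Z down 2, A down 4; beta-minus: Z up 1) can be sketched with a small, hypothetical Python helper; the mode names and the function are illustrative, not from any particular library:

```python
# Net change in (Z, A) for common decay modes (illustrative helper).
DECAY_DELTAS = {
    "alpha": (-2, -4),
    "beta_minus": (+1, 0),
    "positron_emission": (-1, 0),
    "electron_capture": (-1, 0),  # same net (Z, A) change as positron emission
}

def daughter(z: int, a: int, mode: str) -> tuple:
    """Return the (Z, A) of the daughter nuclide after one decay."""
    dz, da = DECAY_DELTAS[mode]
    return z + dz, a + da

# Po-200 (Z=84) electron capture -> Bi-200
print(daughter(84, 200, "electron_capture"))  # (83, 200)
# Pb-196 (Z=82) electron capture -> Tl-196
print(daughter(82, 196, "electron_capture"))  # (81, 196)
# Kr-81 (Z=36) electron capture -> Br-81
print(daughter(36, 81, "electron_capture"))   # (35, 81)
```

Each printed pair matches the corresponding answer given in the notes: Tl-196 from Pb-196, and Kr-81 as the parent of Br-81.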
2021-08-04 15:48:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.513994574546814, "perplexity": 1749.2339101080815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154878.27/warc/CC-MAIN-20210804142918-20210804172918-00635.warc.gz"}
https://benwhalley.github.io/just-enough-r/scaling-predictors.html
24 Scaling predictor variables

When predictors have a natural scale, interpreting them can be relatively straightforward. However, when predictors are on an arbitrary scale, or when multiple predictors are on different scales, interpreting the model (or comparing between models) can be hard. In these cases, scaling or standardising predictors in the model can make it easier to interpret the coefficients that are estimated.

Standardising

'Standardising' predictors, by subtracting the mean and dividing by the standard deviation, is a common way to make interpreting regression models easier, and particularly to make comparisons between predictors, e.g. regarding their relative importance in predicting the outcome. @gelmanscaling_2008 covers in detail the advantages and disadvantages of standardising regression coefficients. Based on the observation that we often wish to compare continuous with binary predictors, they recommend standardisation by subtracting the mean and dividing by two standard deviations (rather than the usual one SD). The arm package implements this procedure, and makes it easy to automatically scale the predictors in a linear model. First, we run the linear model:

m1 <- lm(mpg ~ wt + am, data=mtcars)
m1

Call:
lm(formula = mpg ~ wt + am, data = mtcars)

Coefficients:
(Intercept)           wt           am
   37.32155     -5.35281     -0.02362

And then use arm::standardize to standardize the coefficients:

arm::standardize(m1)

Call:
lm(formula = mpg ~ z.wt + c.am, data = mtcars)

Coefficients:
(Intercept)         z.wt         c.am
   20.09062    -10.47500     -0.02362

This automatically scales the data for m1 and re-fits the model.
An alternative is to use MuMIn::stdizeFit, although this applies scaling rules slightly differently to arm, in this case standardising by a single SD:

MuMIn::stdizeFit(m1, mtcars)

Registered S3 method overwritten by 'MuMIn':
  method         from
  predict.merMod lme4

Call:
lm(formula = mpg ~ wt + am, data = mtcars)

Coefficients:
(Intercept)           wt           am
   37.32155     -5.35281     -0.02362

Check the help file for MuMIn::stdize for a detailed discussion of the differences with arm::standardize.

Dichotomising continuous predictors (or outcomes)

Dichotomising a continuous scale is almost always a bad idea. Although it is sometimes done to aid interpretation or presentation, there are better alternatives (for example, estimating means from a model using Stata's margins command and plotting them, something we will do in the next session). As the Cochrane collaboration puts it:

The down side of converting other forms of data to a dichotomous form is that information about the size of the effect may be lost. For example a participant's blood pressure may have lowered when measured on a continuous scale (mmHg), but if it has not lowered below the cut point they will still be in the 'high blood pressure group' and you will not see this improvement. In addition the process of dichotomising continuous data requires the setting of an appropriate clinical point about which to 'split' the data, and this may not be easy to determine.

See http://www.cochrane-net.org/openlearning/html/mod11-2.htm and also @peacock_dichotomising_2012. Also note that trichotomising (splitting into 3) is likely to be a better/more efficient approach; see @gelman2009splitting.
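Outside R, the two-SD rescaling recommended by @gelmanscaling_2008 is easy to reproduce by hand; this Python function is an illustration of the same transformation (not part of arm or MuMIn):

```python
import numpy as np

def standardize_2sd(x):
    """Center a predictor and divide by two sample standard deviations,
    following the Gelman (2008) recommendation for continuous predictors."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (2 * x.std(ddof=1))

# Made-up predictor values for illustration.
wt = np.array([2.62, 2.875, 2.32, 3.215, 3.44, 3.46])
z = standardize_2sd(wt)
```

The rescaled variable has mean 0 and SD 0.5, which puts its coefficient on a footing comparable to that of a centred binary predictor.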
2022-05-25 10:41:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.682807981967926, "perplexity": 1999.9280270444742}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00288.warc.gz"}
http://www.analyzemath.com/rational/rational_transforms.html
# Rational Functions and Their Transforms

This is an HTML5 applet to explore rational functions of the form $f(x)=\dfrac{a x+ b}{c x+d}$ and their transforms. Select a transformation that you want to explore, enter values for the parameters a, b, c and d, click on 'enter' and explore.
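A quick numeric companion to the applet: the helper below (the function name is mine, not from the applet) builds f for given parameters so individual values can be checked by hand.

```python
def make_rational(a, b, c, d):
    """Return f(x) = (a*x + b) / (c*x + d)."""
    def f(x):
        denom = c * x + d
        if denom == 0:
            raise ZeroDivisionError("x is at the vertical asymptote x = -d/c")
        return (a * x + b) / denom
    return f

f = make_rational(2, 1, 1, -3)   # f(x) = (2x + 1) / (x - 3)
print(f(4))                      # (8 + 1) / (4 - 3) = 9.0
```

With c = 0 and d = 1 the function degenerates to the line a*x + b, which is a useful sanity check before exploring transforms.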
2017-02-26 21:10:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22070248425006866, "perplexity": 2251.201405541353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172077.66/warc/CC-MAIN-20170219104612-00422-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/227324-there-100-nos-x1-x2-such-sum-2-adjacent-nos-k-const-x10-1thenx1.html
# Thread: there are 100 nos. x1, x2, ... such that the sum of 2 adjacent nos. is k (const.); x10 = 1, then x1 = ?

1. ## there are 100 nos. x1, x2, ... such that the sum of 2 adjacent nos. is k (const.); x10 = 1, then x1 = ?

More precisely: let x1, x2, ..., x100 be positive nos. such that the sum of any two adjacent nos. is a constant k (xi + x(i+1) = k for all i). If x10 = 1, what is x1? I solved it and my answer is one, but the back of the book gives k − 1.

2. ## Re: there are 100 nos. x1, x2, ... such that sum of 2 adjacent nos is k (const.); x10 = 1 then

Originally Posted by AaPa
More precisely: let x1, x2, ..., x100 be positive nos. such that the sum of any two adjacent nos. is a constant k (xi + x(i+1) = k for all i). If x10 = 1, what is x1? I solved it and my answer is one, but the back of the book gives k − 1.

$x_{2n}=1$
$x_{2n+1}=k-1$
1 is odd so $x_1=k-1$

3. ## Re: there are 100 nos. x1, x2, ... such that sum of 2 adjacent nos is k (const.); x10 = 1 then

Nice method, thanks. I had done it with another method but wrongly. I corrected it and got the right answer.

4. ## Re: there are 100 nos. x1, x2, ... such that sum of 2 adjacent nos is k (const.); x10 = 1 then

Hi, when I was teaching I would discuss the sequence described in the attachment in many different classes. The main purpose was to show there is nothing mysterious about compound interest, examples 2 and 3.
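The alternating pattern given in post 2 is easy to check numerically; a small Python sketch (not from the thread) builds the sequence and verifies both the adjacency constraint and the answer:

```python
def build_sequence(k, n=100, x10=1):
    """x1..xn with x_i + x_(i+1) = k for all i and x10 fixed.
    Since consecutive terms alternate, even-indexed terms all equal x10
    and odd-indexed terms all equal k - x10."""
    return [x10 if i % 2 == 0 else k - x10 for i in range(1, n + 1)]

xs = build_sequence(k=5)
print(xs[0])   # x1 = k - 1 = 4
print(xs[9])   # x10 = 1
```

Every adjacent pair sums to k by construction, and since 1 is odd, x1 falls in the odd-indexed class and equals k − 1, matching the book's answer.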
2016-12-09 07:12:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5835997462272644, "perplexity": 3176.703733688876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542686.84/warc/CC-MAIN-20161202170902-00393-ip-10-31-129-80.ec2.internal.warc.gz"}
https://variancejournal.org/article/32517
Landry, Drake David, and Steven Martin. 2022. “Policy-Level Unreported Frequency Model for Pure IBNR Estimation.” Variance 15 (1). • Figure 1. Piecewise Uniform-Gamma Distributions • Figure 2. Diagram of Distribution Fitting and Simulation Process • Policy-Level Unreported Frequency Model Excel Companion File ## Abstract The paper develops a policy-level unreported claim frequency distribution for use in individual claim reserving models. Recently, there has been increased interest in using individual claim detail to estimate reserves and to understand variability around reserve estimates. The method we describe can aid in the estimation/simulation of pure incurred but not reported (IBNR) from individual claim and policy data. In addition to a point estimate, the method can provide a full distribution of claim emergence, which can be useful for diagnostic tests (i.e., actual versus expected analyses) and to understand reserve variability. Accepted: July 05, 2021 EDT # Appendices ## Appendix A: Reported Frequency Distribution Derivations ### Poisson Case Let $$n \sim Poisson(\lambda)$$ and $$x|n \sim Binomial(n, q).$$ Recall that the probability mass functions for $$n$$ and $$x|n$$ are \begin{align} f(n)&=\frac{e^{-\lambda}\lambda^{n}}{n!}, \text{ and} \\ f(x|n)&={n \choose x}q^x(1-q)^{n-x}. \end{align} Notice that $$f(x),$$ the unconditional probability mass function for $$x,$$ is \begin{align} f(x)&=\sum_{n=x}^{\infty}f(n)f(x|n) \\ &=\sum_{n=x}^{\infty}\frac{e^{-\lambda}\lambda^{n}}{n!}{n \choose x}q^{x}(1-q)^{n-x} \\ &=\sum_{n=x}^{\infty}\frac{e^{-\lambda}\lambda^{n}}{n!}\frac{n!}{(n-x)!x!}q^{x}(1-q)^{n-x} \\ &=\frac{q^{x}e^{-\lambda}}{x!} \sum_{n=x}^{\infty}\frac{\lambda^{n}(1-q)^{n-x}}{(n-x)!}. 
\end{align} Let $$a=n-x.$$ Then \begin{align} f(x)&=\frac{q^{x}e^{-\lambda}}{x!}\sum_{a=0}^{\infty}\frac{\lambda^{a+x}(1-q)^{a}}{a!} \\ &=\frac{(q\lambda)^{x}e^{-\lambda}}{x!}\sum_{a=0}^{\infty}\frac{(\lambda(1-q))^{a}}{a!} \\ &=e^{\lambda(1-q)}\frac{(q\lambda)^{x}e^{-\lambda}}{x!}\sum_{a=0}^{\infty}\frac{(\lambda(1-q))^{a}}{a!}e^{-\lambda(1-q)} \\ &=\frac{(q\lambda)^{x}e^{-q\lambda}}{x!}\sum_{a=0}^{\infty}\frac{(\lambda(1-q))^{a}}{a!}e^{-\lambda(1-q)}. \end{align} Notice that $$f(a)=\frac{(\lambda(1-q))^{a}}{a!}e^{-\lambda(1-q)}$$ is the probability mass function for $$a$$ where $$a \sim Poisson(\lambda(1-q))$$ and since $$\sum_{a=0}^{\infty}f(a)=1,$$ $f(x)=\frac{e^{-q\lambda}(q\lambda)^{x}}{x!}.$ Notice that this is the probability mass function for a Poisson distribution. Thus, $x \sim Poisson(q\lambda).$ ### Negative Binomial Case Let $$n \sim Negative Binomial(k, p)$$ and $$x|n \sim Binomial(n, q).$$ Recall that the probability mass functions for $$n$$ and $$x|n$$ are \begin{align} f(n)&=\frac{\Gamma(n+k)}{\Gamma(n+1)\Gamma(k)}p^k(1-p)^n, \text{ and} \\ f(x|n)&={n \choose x}q^x(1-q)^{n-x}. \end{align} Notice that $$f(x),$$ the unconditional probability mass function for $$x,$$ is \begin{align} f(x)&=\sum_{n=x}^{\infty}f(n)*f(x|n) \\ &=\sum_{n=x}^{\infty}\frac{\Gamma(n+k)}{\Gamma(n+1)\Gamma(k)}p^k(1-p)^n{n \choose x}q^x(1-q)^{n-x} \\ &=\sum_{n=x}^{\infty}\frac{\Gamma(n+k)}{\Gamma(n+1)\Gamma(k)}p^k(1-p)^n \\ &\quad \times \frac{\Gamma(n+1)}{\Gamma(x+1)\Gamma(n-x+1)}q^x(1-q)^{n-x} \\ &=\frac{p^kq^x}{\Gamma(k)\Gamma(x+1)} \\ &\quad \times \sum_{n=x}^{\infty}\frac{\Gamma(n+k)}{\Gamma(n-x+1)}(1-p)^n(1-q)^{n-x}. 
\end{align} Let $$a=n-x.$$ Then \begin{align} f(x)&=\frac{p^kq^x}{\Gamma(k)\Gamma(x+1)} \\ &\quad \times \sum_{a=0}^{\infty}\frac{\Gamma(a+x+k)}{\Gamma(a+1)}(1-p)^{a+x}(1-q)^a \\ &=\frac{p^kq^x(1-p)^x}{\Gamma(k)\Gamma(x+1)} \\ &\quad \times \sum_{a=0}^{\infty}\frac{\Gamma(a+x+k)}{\Gamma(a+1)}((1-p)(1-q))^a \\ &=\frac{p^kq^x(1-p)^x\Gamma(x+k)}{\Gamma(k)\Gamma(x+1)} \\ &\quad \times \sum_{a=0}^{\infty}\frac{\Gamma(a+x+k)}{\Gamma(a+1)\Gamma(x+k)}((1-p)(1-q))^a \\ &=\frac{\Gamma(x+k)}{\Gamma(x+1)\Gamma(k)}p^kq^x(1-p)^x \\ &\quad \times \sum_{a=0}^{\infty}\frac{\Gamma(a+x+k)}{\Gamma(a+1)\Gamma(x+k)}((1-p)(1-q))^a \\ &=\frac{\Gamma(x+k)}{\Gamma(x+1)\Gamma(k)}\frac{p^kq^x(1-p)^x}{(1-(1-p)(1-q))^{x+k}} \\ &\quad \times \sum_{a=0}^{\infty}\frac{\Gamma(a+x+k)}{\Gamma(a+1)\Gamma(x+k)} \\ &\quad \times ((1-p)(1-q))^a(1-(1-p)(1-q))^{x+k}. \end{align} Notice that $$f(a)$$ $$=$$ $$\frac{\Gamma(a+x+k)}{\Gamma(a+1)\Gamma(x+k)}((1-p)(1-q))^a$$ $$\cdot (1-(1-p)(1-q))^{x+k}$$ is the probability mass function for $$a$$ where $$a$$ $$\sim$$ $$Negative Binomial(x+k,$$ $$p+q-pq)$$ and since $$\sum_{a=0}^{\infty}f(a)=1,$$ \begin{align} f(x)&=\frac{\Gamma(x+k)}{\Gamma(x+1)\Gamma(k)}\frac{p^kq^x(1-p)^x}{(1-(1-p)(1-q))^{x+k}} \\ &=\frac{\Gamma(x+k)}{\Gamma(x+1)\Gamma(k)}(\frac{p}{p+q-pq})^k(\frac{q(1-p)}{p+q-pq})^x. \end{align} Notice that this is the probability mass function for a negative binomial distribution. Thus, $$x$$ $$\sim$$ $$Negative Binomial(k, \frac{p}{p+q-qp}).$$ ## Appendix B: Unreported Frequency Distribution Derivations ### Poisson Case Let $$n \sim Poisson(\lambda)$$ and $$x|n \sim Binomial(n, q).$$ Recall from Appendix A that this implies $$x \sim Poisson(q\lambda)$$ and the probability mass functions for $$n,$$ $$x|n,$$ and $$x$$ are \begin{align} f(n)&=\frac{e^{-\lambda}\lambda^{n}}{n!}, \\ f(x|n)&={n \choose x}q^x(1-q)^{n-x}, \text{ and} \\ f(x)&=\frac{e^{-q\lambda}(q\lambda)^{x}}{x!}.
\end{align} Recall that Bayes’ theorem states that $$P(A|B)=\frac{P(A \cap B)}{P(B)}.$$ So, \begin{align} f(n|x)&=\frac{P(n=n \cap x=x)}{f(x)} \\ &=\frac{f(n)f(x|n)}{f(x)} \\ &=\frac{\frac{e^{-\lambda}\lambda^{n}}{n!}{n \choose x}q^{x}(1-q)^{n-x}}{\frac{e^{-q\lambda}(q\lambda)^{x}}{x!}} \\ &=\frac{\frac{e^{-\lambda}\lambda^{n}}{n!}\frac{n!}{(n-x)!x!}q^{x}(1-q)^{n-x}}{\frac{e^{-q\lambda}(q\lambda)^{x}}{x!}} \\ &=e^{-\lambda+q\lambda}\lambda^{n-x}\frac{1}{(n-x)!}(1-q)^{n-x} \\ &=\frac{e^{-(1-q)\lambda}((1-q)\lambda)^{n-x}}{(n-x)!}. \end{align} Let $$a=n-x.$$ The PMF for $$a|x$$ is \begin{align} f(a|x)&=P(n-x=a|x) \\ &=P(n=a+x|x) \\ &=\frac{e^{-(1-q)\lambda}((1-q)\lambda)^{a}}{a!}. \end{align} It follows that $$a|x \sim Poisson((1-q)\lambda).$$ ### Negative Binomial Case Let $$n \sim Negative Binomial(k, p)$$ and $$x|n \sim Binomial(n, q).$$ Recall from Appendix A that this implies $$x \sim Negative Binomial(k, \frac{p}{p+q-qp})$$ and the probability mass functions for $$n,$$ $$x|n,$$ and $$x$$ are \begin{align} f(n)&=\frac{\Gamma(n+k)}{\Gamma(n+1)\Gamma(k)}p^k(1-p)^n, \\ f(x|n)&={n \choose x}q^x(1-q)^{n-x}, \text{ and} \\ f(x)&=\frac{\Gamma(x+k)}{\Gamma(x+1)\Gamma(k)}(\frac{p}{p+q-pq})^k(\frac{q(1-p)}{p+q-pq})^x. 
\end{align} Recall that Bayes’ theorem states that $$P(A|B)=\frac{P(A \cap B)}{P(B)}.$$ So, \begin{align} f(n|x)&=\frac{P(n=n \cap x=x)}{f(x)} \\ &=\frac{f(n)f(x|n)}{f(x)} \\ &=\frac{\frac{\Gamma(n+k)}{\Gamma(n+1)\Gamma(k)}p^k(1-p)^n{n \choose x}q^x(1-q)^{n-x}}{\frac{\Gamma(x+k)}{\Gamma(x+1)\Gamma(k)}(\frac{p}{p+q-pq})^k(\frac{q-qp}{p+q-pq})^x} \\ &=\frac{\Gamma(n+k)\Gamma(n+1)\Gamma(x+1)\Gamma(k)}{\Gamma(n+1)\Gamma(k)\Gamma(x+1)\Gamma(n-x+1)\Gamma(x+k)} \\ &\quad \times \frac{p^k(1-p)^nq^x(1-q)^{n-x}}{\frac{p^kq^x(1-p)^x}{(p+q-pq)^{k+x}}} \\ &=\frac{\Gamma(n+k)}{\Gamma(n-x+1)\Gamma(x+k)}(p+q-pq)^{k+x} \\ &\quad \times (1-p)^{n-x}(1-q)^{n-x} \\ &=\frac{\Gamma(n+k)}{\Gamma(n-x+1)\Gamma(x+k)}(p+q-pq)^{k+x} \\ &\quad \times ((1-p)(1-q))^{n-x} \\ &=\frac{\Gamma((n-x)+(k+x))}{\Gamma(n-x+1)\Gamma(k+x)}(p+q-pq)^{k+x} \\ &\quad \times (1-p-q+pq)^{n-x}. \end{align} Let $$a=n-x.$$ The PMF for $$a|x$$ is \begin{align} f(a|x)&=P(n-x=a|x) \\ &=P(n=a+x|x) \\ &=\frac{\Gamma(a+(k+x))}{\Gamma(a+1)\Gamma(k+x)}(p+q-pq)^{k+x} \\ &\quad \times (1-p-q+pq)^{a}. 
\end{align} It follows that $$a|x \sim Negative Binomial(k+x, p+q-pq).$$

## Appendix C: Notation List

• $$A_{i} =$$ policy identifier
• $$B_{i} / O_{i} =$$ ultimate number of claims associated with policy $$A_{i}$$ if it was adjusted so that the frequency exposure equals 1 (used in Section 3)
• $$BP =$$ base-level premium (used in Assumption 2 of Section 3.1)
• $$C_{i} =$$ claim identifier
• $$D_{i} =$$ attachment point on policy $$A_{i}$$
• $$E_{i} =$$ frequency exposure on policy $$A_{i}$$
• $$G_{i} =$$ evaluation date $$-$$ report date for claim $$C_{i}$$
• $$g =$$ policy year frequency trend
• $$j_{i} =$$ probability that an unreported claim on policy $$A_{i}$$ will be reported in the next $$L$$ days
• $$k, p =$$ parameters for a negative binomial distribution
• $$L =$$ length in days of the simulated emergence period in Section 6.2
• $$M =$$ number of policies in the data set
• $$N_{i} =$$ ultimate number of claims on policy $$A_{i}$$
• $$P_{i} =$$ earned premium on policy $$A_{i}$$
• $$py_{i} =$$ policy year for policy $$A_{i}$$
• $$q_{i} =$$ probability that a claim on policy $$A_{i}$$ has been reported by the evaluation date
• $$s =$$ range of the uniform portion of the piecewise report lag distribution discussed in Section 4.2
• $$T =$$ evaluation date
• $$U_{i} =$$ number of claims that will be reported in the next $$L$$ days for policy $$A_{i}$$
• $$V_{i} =$$ length of the earned policy period for policy $$A_{i} =$$ min(evaluation date, policy expiration date) $$-$$ policy effective date
• $$W_{i} =$$ the occurrence date for claim $$C_{i} -$$ policy effective date
• $$w =$$ probability that an observation comes from the uniform portion of the piecewise report lag distribution discussed in Section 4.2
• $$X_{i} =$$ number of claims that have been reported on policy $$A_{i}$$ by the evaluation date
• $$Y_{i} =$$ report lag for claim $$C_{i} =$$ report date $$-$$ occurrence date
• $$yr =$$ most recent policy year (used in Assumption 4 of Section 3.1)
• $$Z_{i} =$$ evaluation date $$-$$ policy effective date for policy $$A_{i}$$
• $$\lambda =$$ mean of a Poisson distribution
• $$A(w) =$$ actual percentage of policies that had a reported frequency of $$w$$
• $$b_{x_{i}}(x) =$$ PMF for the reported frequency distribution for policy $$A_{i}$$ implied by the fitted frequency parameters
• $$\overline{b(w)} =$$ average probability of observing a reported frequency $$w$$ implied by the fitted frequency parameters
• $$f(x) =$$ PMF for a Poisson distribution
• $$g(x) =$$ PMF for a negative binomial distribution
• $$h(x) / H(x) =$$ the PDF/CDF for either an exponential, gamma, or Weibull distribution used in the piecewise report lag distribution discussed in Section 4.2
• $$LL(x) =$$ log-likelihood function for parameter(s) $$x$$
• $$r(x) / R(x) =$$ PDF/CDF for the report lag distribution
• $$S(x) =$$ survival function for the severity distribution

1. The materiality of this assumption depends on the selected report lag distribution. Before implementing this simplification, one should evaluate the impact.
2. For more background on why the right truncation adjustment is necessary, please see Section 2.1 of Korn (2016).
3. A full discussion of MCMC techniques is beyond our scope here. If you want to learn more about these methods a good place to start is Appendix B of Meyers (2015).
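The two compound-distribution identities derived in Appendix A (binomial thinning of a Poisson gives a Poisson, and binomial thinning of a negative binomial gives a negative binomial with success parameter $$p/(p+q-pq)$$) can be spot-checked by summing the mixture directly. The sketch below is illustrative only; the parameter values are arbitrary and the helper names are my own.

```python
from math import comb, exp, lgamma, log

def pois_pmf(n, lam):
    # Poisson(lam) PMF, computed on the log scale to avoid overflow
    return exp(n * log(lam) - lam - lgamma(n + 1))

def nb_pmf(n, k, p):
    # Negative binomial PMF in the appendix's convention:
    # Gamma(n+k)/(Gamma(n+1)Gamma(k)) * p^k * (1-p)^n
    return exp(lgamma(n + k) - lgamma(n + 1) - lgamma(k)
               + k * log(p) + n * log(1 - p))

def thinned_pmf(prior_pmf, x, q, n_max=400):
    # Marginal P(x = x) when n ~ prior and x|n ~ Binomial(n, q):
    # sum over n of P(n) * C(n, x) * q^x * (1-q)^(n-x), truncated at n_max
    return sum(prior_pmf(n) * comb(n, x) * q**x * (1 - q)**(n - x)
               for n in range(x, n_max + 1))

lam, q = 3.0, 0.4
for x in range(8):
    # Claimed: x ~ Poisson(q * lam)
    assert abs(thinned_pmf(lambda n: pois_pmf(n, lam), x, q)
               - pois_pmf(x, q * lam)) < 1e-9

k, p = 2.5, 0.3
p_star = p / (p + q - p * q)  # claimed success parameter after thinning
for x in range(8):
    # Claimed: x ~ Negative Binomial(k, p/(p+q-pq))
    assert abs(thinned_pmf(lambda n: nb_pmf(n, k, p), x, q)
               - nb_pmf(x, k, p_star)) < 1e-9
```

Both loops pass, which matches the closed-form marginals obtained in the appendix.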
https://www.physicsforums.com/threads/integration-problem.267237/
# Integration problem 1. Oct 26, 2008 ### EV33 Integral of cos√x We are supposed to use substitution and integration by parts but I really don't know where to even start. No matter what I substitute for U I will be left without a du. 2. Oct 26, 2008 ### gabbagabbahey what are your limits of integration?...what does your integral become when you use the substitution u=sqrt(x)? What is du? 3. Oct 26, 2008 ### EV33 There are no limits it is indefinite. If you let U= √x then du= 1/2√x and that is my problem because I am left with the integral of cos(u) and no du. 4. Oct 26, 2008 ### gabbagabbahey doesn't $du=\frac{1}{2\sqrt{x}}dx$ and doesn't that mean that $dx=2 \sqrt{x} du= 2udu$? 5. Oct 26, 2008 ### EV33 Yes it does. So does that mean when I integrate I get u^2 sinu and from there I just need to back substitute? 6. Oct 26, 2008 ### gabbagabbahey Well, that means that $\int cos(\sqrt{x})dx= \int 2ucos(u)du$ now you'll need to integrate by-parts....try using f(u)=2u and g'(u)=cos(u)du 7. Oct 26, 2008 ### EV33 oh ok. Thank you for the help.
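Carrying the suggested by-parts step through gives $\int \cos(\sqrt{x})\,dx = 2\sqrt{x}\sin(\sqrt{x}) + 2\cos(\sqrt{x}) + C$. A quick finite-difference check (a sketch, not part of the original thread) confirms that this antiderivative's derivative really is $\cos(\sqrt{x})$:

```python
from math import sqrt, sin, cos

def F(x):
    # Candidate antiderivative from u = sqrt(x), then parts with f = 2u, g' = cos(u):
    # ∫cos(√x) dx = 2u·sin(u) + 2cos(u) + C, with u = √x
    return 2 * sqrt(x) * sin(sqrt(x)) + 2 * cos(sqrt(x))

def dF(x, h=1e-6):
    # Central-difference estimate of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

# F'(x) should equal cos(sqrt(x)) at every test point
for x in [0.5, 1.0, 4.0, 9.0, 25.0]:
    assert abs(dF(x) - cos(sqrt(x))) < 1e-6
```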
https://sotfom.wordpress.com/2017/10/
# Programme and Abstracts for SOTFOM 4: Reverse Mathematics

SOTFOM 4: Reverse Mathematics is now just a few days away! Anyone wishing to attend should register by sending an e-mail with name and affiliation to sotfom@gmail.com. There will be a conference dinner on Monday, October 9, 2017. When registering, please indicate if you plan to attend the dinner. Below you can find a conference programme. There are also short abstracts to whet your appetite! We look forward to seeing you in Munich!

Programme:

Day 1 – Monday, October 9, 2017, LMU Main Building, room A 022
10:00-10:15 Welcome
10:15-11:45 Stephen G. Simpson, 'Foundations of mathematics: an optimistic message'
11:45-12:15 Coffee break
12:15-13:15 Sam Sanders, 'Two is enough for chaos in reverse mathematics'
13:15-15:00 Lunch break
15:00-16:00 Michał Tomasz Godziszewski, 'What do we need to prove that satisfaction is not absolute? Generalization of the Nonabsoluteness of Satisfaction Theorem'
16:00-16:30 Coffee break
16:30-18:00 Walter Dean, 'Basis theorems and mathematical knowledge de re and de dicto'
19:00 Conference dinner

Day 2 – Tuesday, October 10, 2017, Main Building, room E 006
10:15-11:45 Benedict Eastaugh, 'On the significance of reverse mathematics'
11:45-12:15 Coffee break
12:15-13:15 Marta Fiori Carones, 'Interval graphs and reverse mathematics'
13:15-15:00 Lunch break
15:00-16:00 Eric P. Astor, 'Divisions in the reverse math zoo, and the weakness of typicality'
16:00-16:30 Coffee break
16:30-18:00 Marianna Antonutti Marfori, 'De re and de dicto knowledge in mathematics: some case studies from reverse mathematics'

Day 3 – Wednesday, October 11, 2017, LMU Main Building, room A 022
10:15-11:45 Takako Nemoto, 'Finite sets and infinite sets in constructive reverse mathematics'
11:45-12:15 Coffee break
12:15-13:15 Vasco Brattka, 'Weihrauch complexity: Choice as a unifying principle'
13:15-15:00 Lunch break
15:00-16:00 Alberto Marcone, 'Around ATR_0 in the Weihrauch lattice'
16:00-16:30 Coffee break
16:30-18:00 Marcia Groszek, 'Reverse recursion theory'

Abstracts:

Stephen G. Simpson, 'Foundations of mathematics: an optimistic message'

Historically, mathematics has often been regarded as a role model for all of science — a paragon of abstraction, logical precision, and objectivity. The 19th and early 20th centuries saw tremendous progress. The great mathematician David Hilbert proposed a sweeping program whereby the entire panorama of higher mathematical abstractions would be justified objectively and logically, in terms of finite processes. But then in 1931 the great logician Kurt Gödel published his famous incompleteness theorems, leading to an era of confusion and skepticism. In this talk I show how modern foundational research has opened a new path toward objectivity and optimism in mathematics.

Sam Sanders, 'Two is enough for chaos in reverse mathematics'

Reverse Mathematics studies mathematics via countable approximations, also called codes. Indeed, subsystems of second-order arithmetic are used, although the original Hilbert-Bernays system H includes higher-order objects. It is then a natural question if anything is lost by the restriction to the countable imposed by second-order arithmetic. We show that this restriction fundamentally distorts mathematics. 
To this end, we exhibit ten theorems from ordinary mathematics which involve type two objects and cannot be proved in any (higher-type version) of Pi_k^1-comprehension, for any k. Said theorems are however provable in full (higher-order) second-order arithmetic and intuitionism, as well as often in (constructive) recursive mathematics.

Michał Tomasz Godziszewski, 'What do we need to prove that satisfaction is not absolute? Generalization of the Nonabsoluteness of Satisfaction Theorem'

We prove the following strengthening of the Hamkins-Yang theorem on the nonabsoluteness of satisfaction: For any theory $T$ interpreting PA and defining the arithmetical truth and for any countable $\omega$-nonstandard model $M \models T$ there is an isomorphic model $M'$ such that $\mathbb{N}^M = \mathbb{N}^{M'}$ (the natural numbers of the models agree), but $Th(\mathbb{N})^M \neq Th(\mathbb{N})^{M'}$, i.e. the true arithmetics of the models are incompatible. We will provide a philosophical discussion of the theorem in the context of the debate on the so-called multiverse perspective in foundations of mathematics. There is yet a question of the reverse-mathematical strength of the presented results. If one assumes that $\mathcal{N}^M, Tr(\mathcal{N})^{M}$ is recursively saturated, then even $RCA_0$ is enough. Without this assumption, however, we need to use the result saying that, in the countable case, recursive saturation implies chronic resplendency, and the reverse-mathematical status of this implication is (according to our knowledge) an open question. Further, if we phrase the assumptions of the result in a way that instead of postulating recursive saturation we require that an arithmetical truth predicate in the model satisfies the theory $CT^-$ axiomatizing the concept of full satisfaction class, then to get some of our corollaries we need to appeal to Lachlan's theorem saying that admitting a satisfaction class implies recursive saturation. 
As above, the reverse-mathematical status of this implication is an open problem. We therefore leave some questions concerning the strength not only of the theory $T$ from above, but also the reverse-mathematical status of the results on the nonabsoluteness of satisfaction themselves.

Walter Dean, 'Basis theorems and mathematical knowledge de re and de dicto'

TBA

Marta Fiori Carones, 'Interval graphs and reverse mathematics'

This paper deals with the characterization of interval graphs from the point of view of reverse mathematics; in particular we try to understand which axioms are needed in the context of subsystems of second order arithmetic to prove various characterizations of these graphs. Furthermore, we attempt to analyze the interplay between interval orders and interval graphs. Alberto Marcone (2007) already settled the former question for interval orders and this paper follows his path. An interval graph is a graph (V,E) whose vertices can be mapped to intervals of a linear order L such that if two vertices are related, then the intervals associated to them overlap. Different notions of intervals give rise to different notions of interval graphs (and orders), which collapse to one in WKL0. In this respect, interval graphs show the same behavior as interval orders. On the other hand, interval graphs and interval orders have different strength with respect to their characterizations in terms of structural properties. While RCA0 suffices to prove the combinatorial characterization of interval orders, WKL0 is required for interval graphs. Moreover, given an interval graph it is possible to define an associated interval order and vice versa. Even in this respect, the different definitions of interval graphs and orders mentioned before play a role in analysing the strength of these theorems.

Eric P. Astor, 'Divisions in the reverse math zoo, and the weakness of typicality'

In modern reverse math, we have begun to discover some exceptional subsystems that do not fall into the linearly-ordered Big Five, particularly concentrated between ACA0 and RCA0. These exceptional systems form a structure that is sometimes called the reverse-mathematics Zoo. Between ACA0 and RCA0, the Zoo can be seen to divide into three branches: roughly, combinatorics, randomness, and genericity. (Prominent examples on each branch include RT22, WWKL, and Pi^0_1G respectively.) We raise questions about the significance of this division in the Zoo, and obtain the first theorem describing this large-scale structure. Noting that both randomness and genericity are notions of typicality, we find a strict limitation on the strength of principles stating that sufficiently typical sets exist; using this, for nearly all known principles P in the Zoo, we determine whether P follows from any randomness- or genericity-existence principle.

Marianna Antonutti Marfori, 'De re and de dicto knowledge in mathematics: some case studies from reverse mathematics'

TBA

Takako Nemoto, 'Finite sets and infinite sets in constructive reverse mathematics'

We consider, for a set A of natural numbers, the following notions of finiteness:

FIN1: There are k and m_0,...,m_{k-1} such that A={m_0,...,m_{k-1}};
FIN2: There is an upper bound for A;
FIN3: There is m such that for all B\subseteq A(|B|<m);
FIN4: It is not the case that, for all x, there is y such that y\in A;
FIN5: It is not the case that, for all m, there is B\subseteq A such that |B|=m;

and infiniteness:

INF1: There are no k and m_0,...,m_{k-1} such that A={m_0,...,m_{k-1}};
INF2: There is no upper bound for A;
INF3: There is no m such that for all B\subseteq A(|B|<m);
INF4: For all y, there is x>y such that x\in A;
INF5: For all m, there is B\subseteq A such that (|B|=m).

We systematically compare them in the method of constructive reverse mathematics. 
We show that the equivalence among them can be characterized by various combinations of induction axioms and non-constructive principles, including the axiom called bounded comprehension.

Vasco Brattka, 'Weihrauch complexity: Choice as a unifying principle'

TBA

Alberto Marcone, 'Around ATR_0 in the Weihrauch lattice'

The classification of mathematical problems in the Weihrauch lattice is a line of research that blossomed in the last few years. So far this approach has mainly dealt with statements which are provable in ACA0 and below. On the other hand the study of multi-valued functions arising from statements lying at higher levels (such as ATR0) of the reverse mathematics spectrum is still in its infancy. We pursue this study by looking at multi-valued functions arising from statements such as the perfect tree theorem, comparability of well-orders, \Delta^0_1 and \Sigma^0_1-determinacy, \Sigma^1_1-separation, weak \Sigma^1_1-comprehension, \Delta^1_1 and \Sigma^1_1-comprehension. As usual, choice functions provide significant milestones to locate these multi-valued functions in the Weihrauch lattice.

Marcia Groszek, 'Reverse recursion theory'

TBA
https://study.com/academy/answer/a-parametrize-the-given-ellipse-x-4-2-plus-y-2-2-1-what-will-the-parametrization-be-if-the-center-of-the-ellipse-is-translated-to-the-point-5-1-b-find-parametric-equations-for-the-seg.html
# a) Parametrize the given ellipse (x/4)^{2} + (y/2)^{2} = 1 What will the parametrization be if...

## Question:

a) Parametrize the given ellipse $$\left( \frac{x}{4} \right)^{2} + \left( \frac{y}{2} \right)^{2} = 1$$ What will the parametrization be if the center of the ellipse is translated to the point $(5, 1)$?

b) Find parametric equations for the segment joining the points $(2, 3)$ and $(4, 6)$, $0 \leq t \leq 1$.

c) Use the formula for the slope of the tangent line to find $\frac{dy}{dx}$ for the curve $c(t) = (7t^{3}, 4t^{2} - 1)$ at $t = 4$.

## Parametric Equations:

In this series of semi-related problems (that last one is a complete dark horse here), we will be dealing with parameterizations. In the first two, we will parameterize some curves. In the first case, we will apply a polar parameterization, and in the second we will use a trick that is super straightforward. We'll deal with that dark horse when we get to it.

Part A

We can use a polar parameterization where we simply have different radii along each axis. The axis along $x$ has radius 4 and the axis along $y$ has radius 2, so a parameterization is

$$\begin{align*} x &= 4 \cos t \\ y &= 2 \sin t \\ t &\in [0, 2\pi] \end{align*}$$

If we move the center to (5,1), we just need to make sure we are always adding 5 to $x$ and 1 to $y$, so the parameterization is simply

$$\begin{align*} x &= 4 \cos t + 5 \\ y &= 2 \sin t + 1 \\ t &\in [0, 2\pi] \end{align*}$$

Part B

Parameterizing a line segment on $t \in [0,1]$ is one of the easiest parameterizations ever. All we have to do is think about what we need to do to the current point to get it to the next (one way to think about the parameter in this case is as an off/on switch). We need to add 2 to $x$ to get it from 2 to 4, and we need to add 3 to $y$ to get it from 3 to 6, so a parameterization on $t \in [0,1]$ is

$$\begin{align*} x &= 2+2t \\ y &= 3+3t \end{align*}$$

Part C

Now this one. First, we do not need to memorize a formula to do this! The fact that we are being asked to use a formula here is unfortunate, as it seems to imply a certain level of difficulty required to obtain the formula, and robs us of any understanding of what is actually taking place. All we need to do here is apply the chain rule, which is really pretty obvious since we have both $x$ and $y$ in terms of $t$. How else are we going to change those variables? All we do is rewrite the derivative $\frac{dy}{dx}$ with it:

$$\frac{dy}{dx} = \frac{dy/dt}{dx/dt}$$

This is the "formula" we want to use. We get

$$\begin{align*} \frac{dy}{dx} &= \frac{dy/dt}{dx/dt} \\ &= \frac{\frac{d}{dt} \left( 4t^2-1 \right)}{\frac{d}{dt} \left( 7t^3 \right)} \\ &= \frac{8t}{21t^2} \\ &= \frac{8}{21t} \end{align*}$$

So when $t = 4$ the slope of the curve is

$$\begin{align*} y'(4) &= \frac{8}{21(4)} \\ &= \frac{2}{21} \\ &\approx 0.095 \end{align*}$$
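All three answers can be sanity-checked numerically. The sketch below (sample points and helper names are my own) verifies that the translated parameterization satisfies the ellipse equation, that the segment hits the right endpoints, and that the slope at $t = 4$ is $2/21$:

```python
from math import cos, sin, pi, isclose

# Part A: translated ellipse ((x-5)/4)^2 + ((y-1)/2)^2 = 1
for i in range(8):
    t = 2 * pi * i / 8
    x, y = 4 * cos(t) + 5, 2 * sin(t) + 1
    assert isclose(((x - 5) / 4) ** 2 + ((y - 1) / 2) ** 2, 1.0, abs_tol=1e-12)

# Part B: segment from (2, 3) to (4, 6) on t in [0, 1]
def seg(t):
    return (2 + 2 * t, 3 + 3 * t)

assert seg(0) == (2, 3) and seg(1) == (4, 6)

# Part C: slope of c(t) = (7t^3, 4t^2 - 1) is (dy/dt)/(dx/dt) = 8t/(21t^2)
t = 4
slope = (8 * t) / (21 * t ** 2)
assert isclose(slope, 2 / 21)
```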
https://www.physicsforums.com/threads/determination-of-amino-acid.709753/
# Determination of Amino Acid 1. Sep 11, 2013 ### Youngster 1. The problem statement, all variables and given/known data 7.39 mmol of base was used when titrating 432 mg of a monoamine monocarboxylic amino acid from pH 0.8 to 12.0. What is the name of the amino acid? 2. Relevant equations None I suppose. Henderson-Hasselbalch perhaps, but I don't see how that helps me 3. The attempt at a solution Well I decided to start with the simplest calculation first - Determining a molecular mass by dividing the given mass by the given amount of mmols. This provided me with 58.46 g/mol, which doesn't match any known amino acid weights. So now I'm thinking that the pH range provided during the titration is supposed to be a hint, but I can't quite figure out its relevance. I'm really just looking for a tip in the right direction here 2. Sep 12, 2013 ### epenguin Does any amino acid have only one titratable group? 3. Sep 14, 2013 ### chemisttree That's a very unusual amino acid to have a native pH of 0.8! Are you sure this problem isn't a titration of a hydrochloride salt of an amino acid? If that's the case your analysis needs to account for the FW of the hydrochloride salt of the amino acid rather than its free base. 4. Sep 15, 2013 ### Youngster I probably should have specified that. The amino acid was dissolved in 0.2 N HCl. So the amine group is probably protonated, and the overall charge is +1 No. It should have at least two titratable groups 5. Sep 15, 2013 ### Staff: Mentor Then you should take into account the amount of base required to titrate HCl. Something doesn't add up. Have you listed all the information you are given? 6. Sep 15, 2013 ### epenguin Your calculation appears implicitly based on 1 7. Sep 15, 2013 ### Youngster Alright well, I've tried something else. Starting with the equation for the reaction: H2X + 2NaOH → Na2X + 2H2O Where X is the amino acid. The equation states that 2 moles of base titrates 1 mole of amino acid. 
Assuming the change from pH 0.8 to pH 12.0 is the full titration (meaning both the carboxyl group and the amino group are deprotonated), 7.39 mmol was used in the ionization of the entire amino acid. So 3.70 mmol of amino acid was titrated by the 7.39 mmol of base. A molecular weight can be obtained by dividing the mass of the amino acid by the moles titrated: $\frac{432\ \text{mg}}{3.70\ \text{mmol}} = 117\ \text{mg/mmol}$ This molecular weight corresponds with valine, which also fits the description of a monoamine monocarboxylic amino acid. Does this appear to be reasonable? 8. Sep 15, 2013 ### Youngster Sorry, it seems the 0.2 N HCl information only applies to my actual lab data. The question above is what I assume to be a practice calculation to be done with my lab data. 9. Sep 15, 2013 ### epenguin Yes it does. And I have just noticed that the question itself gives you the suggestion of the point you previously missed since it specifies "monoamine monocarboxylic amino acid". I am not very comfortable with your chemical symbology and think you would be showing more understanding if you wrote the ionic forms, that you are going from RCHNH3+COOH to RCHNH2COO- 10. Sep 15, 2013 ### epenguin we can't work out what you don't tell us. Is this lab data, or is it a made up problem as I have assumed? 11. Sep 15, 2013 ### Youngster Ah, yes. I apologize for that. This is a made up problem that goes along with a lab assignment. To elaborate, the 0.2 N HCl information goes along with my titration curve below: As far as I know, the actual amino acid unknown solution was prepared by dissolving 4.5g of the unknown amino acid in 300mL of 0.2 N HCl. The titration, however, only used 20 mL of the solution. Currently, I'm at a loss, because technically one equivalent of OH should be used to titrate one of the ionizable groups of the amino acid. At 2 equivalents, however, there doesn't appear to be any rise in pH. 
I'm beginning to wonder if the amino acid above is actually triprotic. 12. Sep 16, 2013 ### Staff: Mentor If so, you titrated 20 mL of 0.2 N HCl and 20/300*4.5 g of the amino acid. But 20/300*4.5 is 0.3 g, not 0.432 g. Plus, you have not used 7.39 mmol of NaOH to neutralize the aminoacid, but 7.39-20*0.2=3.39 mmol. Perhaps - if the question is made up - information about initial pH is simply wrong and put there without a second thought. Generally speaking valine looks reasonable, it just doesn't fit rest of the information. 13. Sep 16, 2013 ### epenguin Yes I think I know what you mean. The opposite of what you say - there is a very steep rise in pH there. That would normally be a good question, however both what you are told and your experimental data say diprotic. 14. Sep 16, 2013 ### chemisttree I don't agree. There are two inflection points. The first one is the HCl titration at ~1 "Equivalents NaOH", whatever that means in this case. Is it 40 grams? I don't think so. The second inflection point is the ammonium proton. The difference between these two peaks should be the amount of base corresponding to the monoamine monocarboxylic acid, right? What is presented isn't data. It is partially-reduced data without an explanation. Raw data would be presented as pH vs volume. Equivalents of base means nothing without an understanding of the underlying assumptions. What is meant by "Equivalents NaOH"? 15. Sep 16, 2013 ### epenguin Shome confushion, sorry. 16. Sep 17, 2013 ### chemisttree This is a monoprotic amino acid. The amount of base used is that amount between the inflection points NOT the amount required to go from either pH 0.8 to 12 or from pH 7 to 12. pH 12 is an arbitrary endpoint for this experiment designed only to fully describe the inflection point at pH 11. You still need to determine mmol of NaOH used between the inflection points.
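The back-of-the-envelope arithmetic from post #7 can be written out as a short script. Note this reproduces only that post's simplifying assumption (two equivalents of NaOH per mole of amino acid, ignoring the HCl and inflection-point issues raised in later posts); the candidate list and helper names are my own.

```python
# Post #7's simplification: the 7.39 mmol of NaOH titrates both the
# -COOH and the -NH3+ group, i.e. 2 equivalents of base per mole.
mass_mg = 432.0
base_mmol = 7.39

amino_acid_mmol = base_mmol / 2      # 2 mol NaOH per mol amino acid
mw = mass_mg / amino_acid_mmol       # mg/mmol is numerically g/mol

# Molecular weights (g/mol) of some monoamine monocarboxylic candidates
candidates = {"glycine": 75.07, "alanine": 89.09, "valine": 117.15,
              "leucine": 131.17, "isoleucine": 131.17}
best = min(candidates, key=lambda aa: abs(candidates[aa] - mw))
print(round(mw, 1), best)  # prints: 116.9 valine
```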
http://mathoverflow.net/revisions/75762/list
Conway's Thrackle Conjecture: $E \le V$ in any "thrackle," a particular type of drawing of a graph of $V$ vertices and $E$ edges in the plane. Dangerously addictive! And advances made every few years; it is by no means an isolated conjecture.
http://www.ck12.org/geometry/Segments-from-Chords/lesson/Segments-from-Chords-Intermediate/r6/
Segments from Chords

Products of the segments of each of two intersecting chords are equal.

What if Ishmael wanted to know the diameter of a CD from his car? He found a broken piece of one in his car and took some measurements. He places a ruler across two points on the rim, and the length of the chord is 8.5 cm. The distance from the midpoint of this chord to the nearest point on the rim is 1.75 cm. Find the diameter of the CD. After completing this Concept, you'll be able to use your knowledge of chords to solve this problem.

Guidance

When two chords intersect inside a circle, the two triangles they create are similar, making the sides of each triangle in proportion with each other. If we remove $\overline{AD}$ and $\overline{BC}$ the ratios between $\overline{AE}, \overline{EC}, \overline{DE}$, and $\overline{EB}$ will still be the same.

Intersecting Chords Theorem: If two chords intersect inside a circle so that one is divided into segments of length $a$ and $b$ and the other into segments of length $c$ and $d$ then $ab = cd$. In other words, the product of the segments of one chord is equal to the product of segments of the second chord.

Example A

Find $x$ in the diagram below. Use the ratio from the Intersecting Chords Theorem. The product of the segments of one chord is equal to the product of the segments of the other. $12 \cdot 8=10 \cdot x$, so $96=10x$ and $x=9.6$.

Example B

Find $x$ in the diagram below. Use the ratio from the Intersecting Chords Theorem. The product of the segments of one chord is equal to the product of the segments of the other. $x \cdot 15=5 \cdot 9$, so $15x=45$ and $x=3$.

Example C

Solve for $x$. a) b) Again, we can use the Intersecting Chords Theorem. Set up an equation and solve for $x$. 
a) $8 \cdot 24 = (3x+1) \cdot 12$, so $192 = 36x + 12$, $180 = 36x$, and $x = 5$.

b) $32 \cdot 21 = (x-9)(x-13)$, so $672 = x^2 - 22x + 117$, which gives $0 = x^2 - 22x - 555 = (x-37)(x+15)$, so $x = 37$ or $x = -15$. However, $x \neq -15$ because length cannot be negative, so $x = 37$.

Concept Problem Revisited

Think of this as two chords intersecting each other. If we were to extend the 1.75 cm segment, it would be a diameter. So, if we find $x$ in the diagram below and add it to 1.75 cm, we would find the diameter.

$4.25 \cdot 4.25 = 1.75 \cdot x$, so $18.0625 = 1.75x$ and $x \approx 10.3$ cm, making the diameter $10.3 + 1.75 \approx 12$ cm, which is the actual diameter of a CD.

Vocabulary

A circle is the set of all points that are the same distance away from a specific point, called the center. A radius is the distance from the center to the circle. A chord is a line segment whose endpoints are on a circle. A diameter is a chord that passes through the center of the circle. The length of a diameter is two times the length of a radius. A central angle is the angle formed by two radii and whose vertex is at the center of the circle. An inscribed angle is an angle with its vertex on the circle and whose sides are chords. The intercepted arc is the arc that is inside the inscribed angle and whose endpoints are on the angle.

Guided Practice

Find $x$ in each diagram below. Simplify any radicals. For all problems, use the Intersecting Chords Theorem.

1. $15 \cdot 4 = 5 \cdot x$, so $60 = 5x$ and $x = 12$.

2. $18 \cdot x = 9 \cdot 3$, so $18x = 27$ and $x = 1.5$.

3. $12 \cdot x = 9 \cdot 16$, so $12x = 144$ and $x = 12$.

Practice

Answer true or false.

1. If two chords bisect one another then they are diameters.
2. Tangent lines can create chords inside circles.
3. If two chords intersect and you know the length of one chord, you will be able to find the length of the second chord.

Solve for the missing segment. Find $x$ in each diagram below. Simplify any radicals.
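The calculation in "Concept Problem Revisited" generalizes: given any chord and the distance from its midpoint to the rim (the sagitta), the Intersecting Chords Theorem recovers the diameter. A quick numeric sketch; the function name is ours, and the numbers follow the worked solution (half-chord 4.25 cm, sagitta 1.75 cm):

```python
# Estimate a circle's diameter from a chord and its sagitta (the distance
# from the chord's midpoint to the rim).  Extending the sagitta gives a
# diameter, so the Intersecting Chords Theorem says (c/2)*(c/2) = s*x,
# where x is the remaining piece of the diameter and d = s + x.

def diameter_from_chord(chord, sagitta):
    half = chord / 2
    x = half * half / sagitta      # intersecting chords: (c/2)^2 = s * x
    return sagitta + x

# CD example from the worked solution: half-chord 4.25 cm, sagitta 1.75 cm.
print(round(diameter_from_chord(8.5, 1.75), 1))   # -> 12.1, i.e. about 12 cm
```

The same two measurements (chord and sagitta) are all Suzie needs for the broken-plate practice problem as well.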
Find the value of $x$.

1. Suzie found a piece of a broken plate. She places a ruler across two points on the rim, and the length of the chord is 6 inches. The distance from the midpoint of this chord to the nearest point on the rim is 1 inch. Find the diameter of the plate.

2. Prove the Intersecting Chords Theorem. Given: intersecting chords $\overline{AC}$ and $\overline{BE}$. Prove: $ab = cd$.

Vocabulary

central angle: An angle formed by two radii and whose vertex is at the center of the circle.
chord: A line segment whose endpoints are on a circle.
diameter: A chord that passes through the center of the circle. The length of a diameter is two times the length of a radius.
inscribed angle: An angle with its vertex on the circle and whose sides are chords.
intercepted arc: The arc that is inside an inscribed angle and whose endpoints are on the angle.
Intersecting Chords Theorem: If two chords intersect inside a circle so that one is divided into segments of length $a$ and $b$ and the other into segments of length $c$ and $d$, then $ab = cd$.
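Practice problem 2 asks for a proof of the Intersecting Chords Theorem. Here is a sketch using the similar-triangles observation from the Guidance section; the intersection label $P$ and the assignment of $a, b, c, d$ to particular segments are our choices:

```latex
\textbf{Given:} chords $\overline{AC}$ and $\overline{BE}$ meeting at $P$,
with $AP = a$, $PC = b$, $EP = c$, $PB = d$.

\textbf{Proof sketch.} Draw $\overline{AB}$ and $\overline{EC}$.
Then $\angle APB \cong \angle EPC$ (vertical angles), and
$\angle BAC \cong \angle BEC$ because both are inscribed angles
intercepting arc $BC$.  Hence $\triangle APB \sim \triangle EPC$ by AA.
Corresponding sides of similar triangles are proportional, so
\[
\frac{AP}{EP} = \frac{PB}{PC}
\quad\Longrightarrow\quad
AP \cdot PC = EP \cdot PB
\quad\Longrightarrow\quad
ab = cd. \qquad \blacksquare
\]
```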
https://wilkelab.org/ungeviz/reference/geom_hpline.html
The geoms geom_hpline() and geom_vpline() can be used as a drop-in replacement for geom_point() but draw horizontal or vertical lines (point-lines, or plines) instead of points. These lines can often be useful to indicate specific parameter estimates in a plot. The geoms take position aesthetics x and y like geom_point(), and they use width or height to set the length of the line segment. All other aesthetics (colour, size, linetype, etc.) are inherited from geom_segment().

geom_hpline(mapping = NULL, data = NULL, stat = "identity",
  position = "identity", ..., na.rm = FALSE, show.legend = NA,
  inherit.aes = TRUE)

geom_vpline(mapping = NULL, data = NULL, stat = "identity",
  position = "identity", ..., na.rm = FALSE, show.legend = NA,
  inherit.aes = TRUE)

## Arguments

mapping: Set of aesthetic mappings created by aes() or aes_(). If specified and inherit.aes = TRUE (the default), it is combined with the default mapping at the top level of the plot. You must supply mapping if there is no plot mapping.

data: The data to be displayed in this layer. There are three options: If NULL, the default, the data is inherited from the plot data as specified in the call to ggplot(). A data.frame, or other object, will override the plot data. All objects will be fortified to produce a data frame. See fortify() for which variables will be created. A function will be called with a single argument, the plot data. The return value must be a data.frame, and will be used as the layer data.

stat: The statistical transformation to use on the data for this layer, as a string.

position: Position adjustment, either as a string, or the result of a call to a position adjustment function.

...: Other arguments passed on to layer(). These are often aesthetics, used to set an aesthetic to a fixed value, like colour = "red" or size = 3. They may also be parameters to the paired geom/stat.

na.rm: If FALSE, the default, missing values are removed with a warning. If TRUE, missing values are silently removed.

show.legend: logical. Should this layer be included in the legends? NA, the default, includes if any aesthetics are mapped. FALSE never includes, and TRUE always includes. It can also be a named logical vector to finely select the aesthetics to display.

inherit.aes: If FALSE, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions that define both data and aesthetics and shouldn't inherit behaviour from the default plot specification, e.g. borders().

## Examples

library(ggplot2)

ggplot(iris, aes(Species, Sepal.Length)) +
  geom_hpline(stat = "summary")
#> No summary function supplied, defaulting to mean_se()

ggplot(iris, aes(Species, Sepal.Length)) +
  geom_point(position = "jitter", size = 0.5) +
  stat_summary(aes(colour = Species), geom = "hpline", width = 0.6, size = 1.5)
#> No summary function supplied, defaulting to mean_se()

ggplot(iris, aes(Sepal.Length, Species, color = Species)) +
  geom_point(color = "grey50", alpha = 0.3, size = 2) +
  geom_vpline(data = sampler(5, 1, group = Species), height = 0.4) +
  scale_color_brewer(type = "qual", palette = 2, guide = "none") +
  facet_wrap(~.draw) +
  theme_bw()
http://openstudy.com/updates/51310316e4b04ada9951cd07
## anonymous 3 years ago Let R be the region in the first quadrant bounded by the graph y=3-√x the horizontal line y=1, and the y-axis as shown in the figure to the right. 1. anonymous http://goo.gl/AhgJX 1. Write but do not evaluate, an integral expression that gives the volume of the solid generated when R is rotated about the horizontal line y=-1. 2. Region R is the base of a solid. For each y, where 1≤y≤3, the cross section of the solid taken perpendicular to the y-axis is a rectangle whose height is half the length of its base. Write, but do not evaluate, an integral expression that gives the volume of the solid. 2. anonymous ? Anybody out there? xD 3. phi |dw:1362167243909:dw| 4. phi you need an expression for the radius of the discs, as a function of x 5. anonymous ?? 6. anonymous Isn't the volume $V=\pi \int\limits_{0}^{4}((-1-(3-√x))^2 -(-1-1)^2) dx$ 7. phi |dw:1362167608670:dw| 8. phi
http://en.wikipedia.org/wiki/Periodogram
# Periodogram

The periodogram is an estimate of the spectral density of a signal. The term was coined by Arthur Schuster in 1898.[1][2]

Note that the term periodogram may also be used to describe the quantity $r^2$,[3] which is its common meaning in astronomy (as in "the modulus-squared of the discrete Fourier transform of the time series (with the appropriate normalisation)"[4]). See Scargle (1982) for a detailed discussion in this context.[5]

A spectral plot refers to a smoothed version of the periodogram.[6][7] Smoothing is performed to reduce the effect of measurement noise.

In practice, the periodogram is often computed from a finite-length digital sequence using the fast Fourier transform (FFT). The raw periodogram is not a good spectral estimate because of spectral bias and the fact that the variance at a given frequency does not decrease as the number of samples used in the computation increases. The spectral bias problem arises from the sharp truncation of the sequence, and can be reduced by first multiplying the finite sequence by a window function, which truncates the sequence gradually rather than abruptly. The variance problem can be reduced by smoothing the periodogram. Various techniques to reduce spectral bias and variance are the subject of spectral estimation.

One technique that addresses the variance problem is the method of averaged periodograms,[8] also known as Bartlett's method. The idea is to divide the set of $N$ samples into $L$ sets of $M$ samples each, compute the discrete Fourier transform (DFT) of each set, square its magnitude to get the power spectral density, and average the results. This decreases the standard deviation by a factor of $\frac{1}{\sqrt{L}}.$

## References

1. ^ Schuster, A., "On the investigation of hidden periodicities with application to a supposed 26 day period of meteorological phenomena," Terrestrial Magnetism, 3, 13-41, 1898.
2. ^
3. ^ Box, George and Jenkins, Gwilym (1970), Time Series Analysis: Forecasting and Control, San Francisco: Holden-Day.
4. ^ Simon Vaughan and Philip Uttley (2006), "Detecting X-ray QPOs in active galaxies", Advances in Space Research, Volume 38, Issue 7, pp. 1405-1408.
5. ^ Scargle, J. D. (1982), Astrophysical Journal, Part 1, vol. 263, Dec. 15, 1982, pp. 835-853.
6. ^ Spectral Plot, from the NIST Engineering Statistics Handbook.
7. ^ Short explanation of the relation between the spectral plot and the periodogram.
8. ^ Engelberg, S. (2008), Digital Signal Processing: An Experimental Approach, Springer, Chap. 7, p. 56.
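The averaged-periodogram (Bartlett) procedure described above can be sketched in a few lines. Normalisation conventions vary; the 1/M factor below is one common choice, and numpy is assumed:

```python
# Bartlett's method: split N samples into L segments of M, take the DFT of
# each segment, and average the squared magnitudes.  Averaging L independent
# periodograms shrinks the estimator's standard deviation by 1/sqrt(L).
import numpy as np

def bartlett_psd(x, M):
    x = np.asarray(x, dtype=float)
    L = len(x) // M                       # number of full segments
    segs = x[:L * M].reshape(L, M)        # drop any leftover samples
    P = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / M   # one periodogram per row
    return P.mean(axis=0)                 # average over the L periodograms

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
psd = bartlett_psd(noise, M=256)
print(psd.shape)                          # (129,), i.e. M//2 + 1 frequency bins
```

Choosing M trades frequency resolution (better with large M) against variance (better with many segments L), which is exactly the compromise the article describes.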
https://mathematica.stackexchange.com/questions/20909/how-can-pixelconstrained-be-emulated-with-densityhistogram?noredirect=1
# How can PixelConstrained be emulated with DensityHistogram? This DensityHistogram doesn't display very well. Some bins are shown much larger than others and, oddly, empty bins appear to be drawn smaller than occupied bins. DensityHistogram[RandomVariate[BinormalDistribution[.5], 50000], {100, 100}] In ArrayPlot, similar problems are solved with PixelConstrained. How can PixelConstrained be emulated with DensityHistogram? • I don't understand: what parts of this graphic display "empty bins"? And how are we to tell that bin sizes differ? (Unfortunately your example is not reproducible because you haven't specified a seed--and it takes a bit of time to compute, anyway. Could you offer a simple example of the problem?) I wonder whether you're not just noting aliasing at this resolution. – whuber Mar 8 '13 at 19:51 • @whuber No, I could reproduce it — the pixel sizes indeed are different (try exporting it as a pdf — it's more obvious there). The empty bins that he's talking about are the "white" parts. – rm -rf Mar 8 '13 at 20:17 • @rm-rf I cannot reproduce this behavior even using the command as given (MMA 8.0). When I make the histogram large enough to be able to inspect the "empty bins," I find them to be of the correct sizes. Even when I make it very small, to the degree I can discern any details, I still do not see uneven sizes. (Exporting to PDF adds an unnecessary complication--if the result appears wrong, it could be a problem with the export or with the PDF rendering itself, so I didn't bother to look at that.) – whuber Mar 8 '13 at 20:41 • @whuber Here's a screenshot from my session. Please zoom in well into the region I've marked. You'll see that the squares are all of different sizes. – rm -rf Mar 8 '13 at 21:03 • @rm-rf I really don't see it, nor is it visible in the underlying code. 
Try Cases[DensityHistogram[ RandomVariate[BinormalDistribution[.5], 50000], {100, 100}], RectangleBox[a___] :> EuclideanDistance@a, Infinity] // Union to see what the kinds of rectangle sizes are present in the plot. They are all virtually the same. I must assume what you see is some kind of aliasing with the pixel raster of your screen. – Sjoerd C. de Vries Mar 8 '13 at 21:35 I don't quite think you can emulate this with DensityHistogram, but since all it does is computing the 2D histogram and plotting it, we could do those steps ourselves and use ArrayPlot with its nice PixelConstrained option. Here's a proof of concept: SeedRandom[1]; data = RandomVariate[BinormalDistribution[.5], 50000]; {bins, counts} = HistogramList[data, {100, 100}]; ArrayPlot[counts, DataReversed -> True, DataRange -> (Through[{Min, Max}@#] & /@ bins), FrameTicks -> {True, True, False, False}, ImageSize -> 500, ColorFunction -> "LakeColors", ColorRules -> {0 -> White}, PixelConstrained -> True, Frame -> True, PlotRange -> {{-4, 4}, {-4, 4}} ] Note that you still need to get the data reversal right and the ticks going the right way to mimic DensityHistogram exactly, but that's a minor detail and I'll leave that to you.
http://www.plustwophysics.com/category/iit-jee/
## CBSE PUBLISHED AIEEE ADMISSION NOTICE – 2012

CBSE will be conducting the Eleventh All India Engineering/Architecture Entrance Examination (AIEEE): the Offline Examination on 29th April, 2012 (Sunday), and the Online Examination (for B.E./B.Tech. only) from 7th May to 26th May, 2012, for admission to degree level courses in Engineering and Architecture in National Institutes of Technology (NITs), Indian Institutes of Information Technology (IIITs), centrally funded institutions, deemed universities, and institutions in States/UTs (other than those covered by the Joint Entrance Examination or a State Level Entrance Examination).

1. Subject Combination and Mode of Examination

Paper 1. Subjects: Physics, Chemistry & Mathematics. Type of questions: objective type, with equal weightage to Physics, Chemistry & Mathematics. Mode of examination: both offline (paper/pen) and online (CBT mode), in selected cities.

Paper 2. Subjects: Mathematics (Part I, objective type questions), Aptitude Test (Part II, objective type questions) & Drawing Test (Part III, questions to test drawing aptitude). Mode of examination: offline only.

2. Schedule of Examination

AIEEE will be conducted as per the schedule given below:

A. Pen & paper (offline examination)
- 29.04.2012, 0930-1230 hours (3 hours): Paper 1 (Physics, Chemistry & Mathematics)
- 29.04.2012, 1400-1700 hours (3 hours): Paper 2 (Mathematics – Part I, Aptitude Test – Part II & Drawing Test – Part III)

B. Online examination for B.E./B.Tech. only (computer based testing)
- 07.05.2012 to 26.05.2012, 1st shift (9.00 am to 12.00 noon) or 2nd shift (2.00 pm to 5.00 pm), 3 hours: Paper 1 (Physics, Chemistry & Mathematics)

## Motion in a vertical circle and conservation of energy

A stone tied to a string of length l is whirled around a vertical circle with the other end of the string at the centre. At a certain instant of time the stone is at the lowest position and has a speed u. What is the magnitude of change in its velocity as it reaches a position where the string is horizontal?
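The worked solution below rests on conservation of energy. A short numeric sketch of that bookkeeping; the sample values for u and l are assumptions, not from the post:

```python
# Energy conservation between the lowest point and the horizontal position
# gives v = sqrt(u^2 - 2*g*l).  The velocity is horizontal at the bottom but
# vertical when the string is horizontal, so the two velocity vectors are
# perpendicular and |delta v| = sqrt(u^2 + v^2).
import math

def speed_at_horizontal(u, l, g=9.8):
    return math.sqrt(u * u - 2 * g * l)

def delta_v_magnitude(u, l, g=9.8):
    v = speed_at_horizontal(u, l, g)
    return math.hypot(u, v)        # perpendicular vectors: sqrt(u^2 + v^2)

u, l = 8.0, 1.0                    # assumed sample values (SI units)
print(speed_at_horizontal(u, l))   # ≈ 6.66 m/s
print(delta_v_magnitude(u, l))     # ≈ 10.41 m/s
```

Note that u must satisfy u² ≥ 2gl for the stone to reach the horizontal position at all, otherwise the square root is undefined.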
Let's assume that the potential energy at the lowest position is zero. When the string is horizontal, the stone has risen by a vertical height l, the length of the string, which is also the radius of the vertical circle. If v is the magnitude of the velocity at the horizontal position, then according to the law of conservation of energy,

KE + PE at the lowest position = KE + PE at the horizontal position

$\frac{1}{2}mu^{2}=\frac{1}{2}mv^{2}+mgl$

From the equation above,

$v=\sqrt{u^{2}-2gl}$

Since the velocity is horizontal at the lowest point but vertical (tangential) when the string is horizontal, the two velocity vectors are perpendicular, so the magnitude of the change in velocity is $\left|\Delta \vec{v}\right|=\sqrt{u^{2}+v^{2}}=\sqrt{2u^{2}-2gl}$.

The following links will help you for deeper understanding and you can browse through some solved problems from the topic too.

## Which books should be referred for IIT JEE?

We have received hundreds of questions on the above subject. As the questions continue to pour in, we found it necessary to create a post on it. IIT JEE is an entrance exam of international repute and IITs are considered at par with MIT by many. So, if you are an IIT aspirant, it is very [...]

## How to prepare for IIT JEE?

Surya Kant Dwivedi asks: "I am always confused in Physics: how can I do my best in Physics? It is my weakest portion in 'PCM'. Please guide me."

Excelling in Physics in IIT JEE requires systematic, planned preparation. The first step is to make sure that you can solve all the exercises from the NCERT text book (including the solved examples and additional exercises) confidently and completely. This ensures that you have sufficient exposure and you are up to the standard expected of you. The next step is to get a good collection of questions asked previously in IIT JEE and solve them. If you are not able to solve a problem, you are welcome to post it here.

Recommended books: Fundamentals of Physics (Halliday & Resnick); Concepts of Physics (H. C. Verma); Problems in General Physics (Irodov).

## A problem for IIT JEE aspirants

Consider the shown arrangement in which a thin rope 'A' with a linear mass density mA is connected to a thick rope 'B' with linear mass density mB.
The thick rope passes over a pulley and is connected to a heavy block of mass M. The separation of the fixed support S from the pulley is L. A stationary wave is set up in this composite string such that the joint remains a node. Given that mB = 4 kg/m, mA = 0.4 kg/m, M = 1 kg and L = 4l = 1 m, find under these conditions:

1. The least possible frequency of vibration.
2. The total energy of vibration if the amplitude for both strings is A = 1 mm and the string vibrates at the frequency obtained in (1).

Categories: IIT JEE

## Eligibility Criteria for IIT JEE

ELIGIBILITY FOR JEE-2010

Candidates must make sure that they satisfy all the eligibility conditions given below for appearing in JEE-2010:

Date of Birth

The date of birth of candidates belonging to GE, OBC and DS categories should be on or after October 1, 1985, whereas the date of birth of those belonging to SC, ST and PD categories should be on or after October 1, 1980. The date of birth as recorded in the high school/first Board/Pre-University certificate will be accepted. If the certificate does not mention the date of birth, a candidate must submit, along with the application, an authenticated document indicating the date of birth.

Year of passing Qualifying Examination (QE)

A candidate must have passed the QE for the first time after October 1, 2008, or in the year 2009, or must be appearing in 2010. Those who are going to appear in the QE later than October 1, 2010 are not eligible to apply for JEE-2010. The qualifying examinations (QE) are listed below:

i) The final examination of the 10+2 system, conducted by any recognized central/state Board, such as the Central Board of Secondary Education, New Delhi; the Council for Indian School Certificate Examination, New Delhi; etc.
ii) Intermediate or two-year Pre-University examination conducted by a recognized Board/University.
iii) Final examination of the two-year course of the Joint Services Wing of the National Defence Academy.
iv) General Certificate Education (GCE) examination (London/Cambridge/Sri Lanka) at the Advanced (A) level. v) High School Certificate Examination of the Cambridge University or International Baccalaureate Diploma of the International Baccalaureate Office, Geneva. vi) Any Public School/Board/University examination in India or in any foreign country recognized as equivalent to the 10+2 system by the Association of Indian Universities (AIU). vii) H.S.C. vocational examination. viii) Senior Secondary School Examination conducted by the National Institute of Open Schooling with a minimum of five subjects. ix) 3 or 4 year Diploma recognized by AICTE or a state Board of technical education. In case the relevant qualifying examination is not a public examination, the candidate must have passed at least one public (Board or Pre-University) examination at an earlier level. Minimum Percentage of Marks in QE Candidates belonging to GE, OBC and DS categories must secure at least 60% marks in aggregate in their QE. Whereas, those belonging to SC, ST and PD categories must secure at least 55% marks in aggregate in the QE. The percentage of marks awarded by the Board will be treated as final. If the Board does not award the percentage of marks, it will be calculated based on the marks obtained in all subjects listed in the mark sheet. If any Board awards only letter grades without providing an equivalent percentage of marks on the grade sheet, the candidate should obtain a certificate from the Board specifying the equivalent marks, and submit it at the time of counselling/ admission. In case such a certificate is not provided then the final decision rests with the Joint Implementation Committee of JEE-2010. 4. Important Points to note (i) One can attempt JEE only twice, in consecutive years. That means one should have attempted JEE for the first time in 2009 or will be appearing in 2010. 
(ii) Those who have accepted admission after qualifying in JEE in earlier years by paying full fees at any of the IITs, IT-BHU Varanasi or ISM Dhanbad are NOT ELIGIBLE to write JEE at all, irrespective of whether or not they joined any of the programmes.

(iii) The year of passing the Qualifying Examination is the year in which the candidate has passed, for the first time, any of the examinations listed above, irrespective of the minimum percentage marks secured.

(iv) The offer of admission is subject to verification of original certificates/documents at the time of admission. If any candidate is found ineligible at a later date, even after admission to an Institute, his/her admission will be cancelled automatically.

(v) If a candidate is expecting the results of the QE in 2010, his/her admission will only be provisional until he/she submits the relevant documents. The admission stands cancelled if the documents are not submitted in original to the concerned institute before September 30, 2010.

(vi) If a candidate has passed any of the examinations listed in Sub-section III.2 before October 1, 2008, he/she is not eligible to appear in JEE-2010.

(vii) If a Board invariably declares the results of the QE late (only after September 30 every year), the candidate is advised to attempt JEE in 2011 or later.

(viii) The decision of the Joint Admission Board of JEE-2010 regarding the eligibility of any applicant shall be final.
## Physics Syllabus for IIT JEE

Physics

General: Units and dimensions, dimensional analysis; least count, significant figures; methods of measurement and error analysis for physical quantities pertaining to the following experiments: experiments based on using Vernier calipers and screw gauge (micrometer), determination of g using simple pendulum, Young's modulus by Searle's method, specific heat of a liquid using calorimeter, focal length of a concave mirror and a convex lens using u-v method, speed of sound using resonance column, verification of Ohm's law using voltmeter and ammeter, and specific resistance of the material of a wire using meter bridge and post office box.

Mechanics: Kinematics in one and two dimensions (Cartesian coordinates only), projectiles; uniform circular motion; relative velocity. Newton's laws of motion; inertial and uniformly accelerated frames of reference; static and dynamic friction; kinetic and potential energy; work and power; conservation of linear momentum and mechanical energy. Systems of particles; centre of mass and its motion; impulse; elastic and inelastic collisions. Law of gravitation; gravitational potential and field; acceleration due to gravity; motion of planets and satellites in circular orbits; escape velocity. Rigid body, moment of inertia, parallel and perpendicular axes theorems, moment of inertia of uniform bodies with simple geometrical shapes; angular momentum; torque; conservation of angular momentum; dynamics of rigid bodies with fixed axis of rotation; rolling without slipping of rings, cylinders and spheres; equilibrium of rigid bodies; collision of point masses with rigid bodies. Linear and angular simple harmonic motions. Hooke's law, Young's modulus.
Pressure in a fluid; Pascal's law; buoyancy; surface energy and surface tension, capillary rise; viscosity (Poiseuille's equation excluded), Stokes' law; terminal velocity, streamline flow, equation of continuity, Bernoulli's theorem and its applications. Wave motion (plane waves only), longitudinal and transverse waves, superposition of waves; progressive and stationary waves; vibration of strings and air columns; resonance; beats; speed of sound in gases; Doppler effect (in sound).

Thermal physics: Thermal expansion of solids, liquids and gases; calorimetry, latent heat; heat conduction in one dimension; elementary concepts of convection and radiation; Newton's law of cooling; ideal gas laws; specific heats (Cv and Cp for monoatomic and diatomic gases); isothermal and adiabatic processes, bulk modulus of gases; equivalence of heat and work; first law of thermodynamics and its applications (only for ideal gases); blackbody radiation: absorptive and emissive powers; Kirchhoff's law; Wien's displacement law, Stefan's law.

Electricity and magnetism: Coulomb's law; electric field and potential; electrical potential energy of a system of point charges and of electrical dipoles in a uniform electrostatic field; electric field lines; flux of electric field; Gauss's law and its application in simple cases, such as to find the field due to an infinitely long straight wire, a uniformly charged infinite plane sheet and a uniformly charged thin spherical shell. Capacitance; parallel plate capacitor with and without dielectrics; capacitors in series and parallel; energy stored in a capacitor. Electric current; Ohm's law; series and parallel arrangements of resistances and cells; Kirchhoff's laws and simple applications; heating effect of current.
Biot–Savart's law and Ampere's law; magnetic field near a current-carrying straight wire, along the axis of a circular coil and inside a long straight solenoid; force on a moving charge and on a current-carrying wire in a uniform magnetic field. Magnetic moment of a current loop; effect of a uniform magnetic field on a current loop; moving coil galvanometer, voltmeter, ammeter and their conversions. Electromagnetic induction: Faraday's law, Lenz's law; self and mutual inductance; RC, LR and LC circuits with d.c. and a.c. sources.

Optics: Rectilinear propagation of light; reflection and refraction at plane and spherical surfaces; total internal reflection; deviation and dispersion of light by a prism; thin lenses; combinations of mirrors and thin lenses; magnification. Wave nature of light: Huygens' principle, interference limited to Young's double-slit experiment.

Modern physics: Atomic nucleus; alpha, beta and gamma radiations; law of radioactive decay; decay constant; half-life and mean life; binding energy and its calculation; fission and fusion processes; energy calculation in these processes. Photoelectric effect; Bohr's theory of hydrogen-like atoms; characteristic and continuous X-rays, Moseley's law; de Broglie wavelength of matter waves.
https://support.bioconductor.org/p/9145800/
Building model matrix to correct for batch effect with biological and technical replicates: Problem with linear correlations

mfaleevs (United Kingdom) asks:

Hello all, I was hoping someone here could help me out. I recently ran mass spec on my samples. Each sample was run three times through the machine. However, there was a large gap in time between the first run and the subsequent second and third runs (both run at the same time), so I would like to correct for a batch effect. My data set looks something like this:

```
            | Biological Rep 1 | Biological Rep 2 |
Condition   | C1  C2  C3  C4   | C1  C2  C3  C4   |
Tech repeat | 122 122 122 122  | 122 122 122 122  |
```

The actual dataset looks a bit like this sample (only showing one biological repeat):

```
Protein  | C1r1 | C1r2 | C1r3 | C2r1 | C2r2 | C2r3 | C3r1 | C3r2 | C3r3 | C4r1 | C4r2 | C4r3 |
---------|------|------|------|------|------|------|------|------|------|------|------|------|
Protein1 |  19  |  34  |  45  |  10  |  23  |  22  |  16  |  92  |  28  |  11  |  29  |  23  |
Protein2 |  12  |  24  |  23  |  11  |  24  |  23  |  15  |  21  |  65  |  19  |  21  |  26  |
```

In the tech repeats, 1 is the technical repeat run first, and 2 marks the second and third repeats, which were run at the same time.
The model matrix that I have tried goes like this:

```r
tr1 <- as.factor(rep(c(1, 2, 2), 8))  # batch: first technical repeat vs technical repeats 2/3
ms1 <- as.factor(c(rep(1, 6), rep(2, 6), rep(3, 6), rep(4, 6)))  # 4 samples, 6 runs each
ex1 <- as.factor(c(rep(1, 3), rep(2, 3), rep(3, 3), rep(4, 3),
                   rep(1, 3), rep(2, 3), rep(3, 3), rep(4, 3)))  # 2 biological repeats per sample, each run thrice
design1 <- model.matrix(~ ex1 + ms1 + tr1)
block <- c(1:6, 1:6, 1:6, 1:6)
dupcor <- duplicateCorrelation(df, design = design1, block = block)
fit <- lmFit(df, design1, block = block, correlation = dupcor$consensus)
```

The design matrix looks like this:

```
   (Intercept) ex12 ex13 ex14 ms12 ms13 ms14 tr12
1            1    0    0    0    0    0    0    0
2            1    0    0    0    0    0    0    1
3            1    0    0    0    0    0    0    1
4            1    1    0    0    0    0    0    0
5            1    1    0    0    0    0    0    1
6            1    1    0    0    0    0    0    1
7            1    0    1    0    1    0    0    0
8            1    0    1    0    1    0    0    1
9            1    0    1    0    1    0    0    1
10           1    0    0    1    1    0    0    0
11           1    0    0    1    1    0    0    1
12           1    0    0    1    1    0    0    1
13           1    0    0    0    0    1    0    0
14           1    0    0    0    0    1    0    1
15           1    0    0    0    0    1    0    1
16           1    1    0    0    0    1    0    0
17           1    1    0    0    0    1    0    1
18           1    1    0    0    0    1    0    1
19           1    0    1    0    0    0    1    0
20           1    0    1    0    0    0    1    1
21           1    0    1    0    0    0    1    1
22           1    0    0    1    0    0    1    0
23           1    0    0    1    0    0    1    1
24           1    0    0    1    0    0    1    1
```

However, when I run the code it tells me that I have a linear combination in some of my variables:

```
Note: design matrix not of full rank (1 coef not estimable).
```

If I remove any variable from the matrix, then I run the risk of not accounting for everything. How do I navigate such a problem? Any input would be greatly appreciated! Thank you

Tags: MassSpectrometryData BatchEffect r limma

Gordon Smyth (WEHI, Melbourne, Australia) replies:

This is basically a simple experiment with four conditions and two biological replicates. Your design matrix is unnecessarily complicated and I don't actually follow the logical meaning of some of the variables you have created. I can see that tr1 is the batch effect and I am guessing that ex1 represents the condition, but the meaning of ms1 and block is not evident to me.
Although the MASS SPEC was run three times on each sample, technical replicates of this sort cannot be treated as biological replicates. A simple and good approach would be to average the three technical replicates for each condition and biological replicate. Then you would have a simple experiment with 8 sample values representing two biological replicates of 4 conditions. There would be no batch effect and no blocks. The design matrix formula would be just ~Condition. In this approach you would use:

```r
BiolRep <- gl(8, 3, 24)
dfa <- avearrays(df, ID = BiolRep)
Condition <- gl(4, 1, 8)
design <- model.matrix(~Condition)
fit <- lmFit(dfa, design)
```

The alternative would be to keep all the technical replicates separate and use duplicate correlation. In this approach you would use:

```r
Condition <- gl(4, 3, 24)  # same as your ex1
Batch <- factor(tr1)
BiolRep <- gl(8, 3, 24)
design <- model.matrix(~Condition + Batch)
dupcor <- duplicateCorrelation(df, design, block = BiolRep)
```

I recommend the first approach with averaging as more robust, but the second approach with duplicateCorrelation will be more powerful and will find more DE proteins.
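The "not of full rank" note can be verified outside R. The following Python/numpy sketch (illustrative, not part of the original thread) rebuilds the treatment-coded design from the post and shows that the ms1 columns are a linear combination of the condition columns, while the simpler ~Condition+Batch design is full rank:

```python
import numpy as np

# Rebuild the 24-run layout from the post: 4 conditions x 2 biological
# replicates x 3 technical runs (run 1 = batch 1, runs 2-3 = batch 2).
ex = np.repeat([1, 2, 3, 4, 1, 2, 3, 4], 3)  # condition (ex1)
ms = np.repeat([1, 2, 3, 4], 6)              # the problematic ms1 factor
tr = np.tile([1, 2, 2], 8)                   # batch (tr1)

def dummies(f):
    """Treatment-coded indicators: one column per non-reference level."""
    return np.column_stack([(f == lvl).astype(float) for lvl in np.unique(f)[1:]])

intercept = np.ones((ex.size, 1))
original = np.hstack([intercept, dummies(ex), dummies(ms), dummies(tr)])  # ~ex1+ms1+tr1
reduced = np.hstack([intercept, dummies(ex), dummies(tr)])                # ~Condition+Batch

# Here ms12 + ms14 equals ex13 + ex14 row for row, so one coefficient
# is not estimable -- exactly the limma message.
print(original.shape[1], np.linalg.matrix_rank(original))  # 8 columns, rank 7
print(reduced.shape[1], np.linalg.matrix_rank(reduced))    # 5 columns, rank 5 (full rank)
```

Dropping ms1, as the answer's ~Condition+Batch design does, is what restores full rank.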
http://itsbrady.blogspot.com/2008/01/
## Thursday, January 31, 2008

### In the Know #28

February 1, 2008

Today marks the beginning of Black History Month.

Dictionary.com's Word of the Day
mien \MEEN\, noun: 1. Manner or bearing, especially as expressive of mood, attitude, or personality; demeanor. 2. Aspect; appearance.

In the news:
- California governor Arnold Schwarzenegger announces his support for Arizona Senator John McCain.
- A Chinese official says harsh winter weather is threatening food production and adding to inflationary pressures.

Today in History, according to Wikipedia:
1790 - The Supreme Court of the United States first convenes in New York City.
1862 - Julia Ward Howe's "The Battle Hymn of the Republic" (cover pictured) is first published.
1960 - Four African-American students sit at the counter of Woolworth's in Greensboro, NC, beginning the Greensboro Sit-ins.
2003 - Space Shuttle Columbia disintegrates upon reentry into Earth's atmosphere.

Today's Famous Births:
1894 - John Ford, American director and producer, The Grapes of Wrath and The Searchers
1901 - Clark Gable, American actor, Gone with the Wind and It Happened One Night
1902 - Langston Hughes (pictured), American writer
1909 - George Beverly Shea, Canadian singer
1931 - Boris Yeltsin, 1st President of the Russian Federation
1937 - Don Everly, American musician, "All I Have to Do is Dream"
1948 - Rick James, American musician
1975 - Antwan "Big Boi" Patton, American musician, "Ms. Jackson"

Trivia
Today's Category - Pluto...the former planet, not the Disney character
~ Earth's circumference is 5.3 times bigger than Pluto's with 151.1 times more volume, but Earth is 476.2 times more massive.
~ A year on Pluto lasts 248.1 Earth years.
~ A day on Pluto lasts only 153.3 Earth hours.
~ Minimum temperature is 33K (-400F) and maximum temperature is 55K (-361F).
~ The atmosphere of Pluto is composed mostly of nitrogen, methane and carbon monoxide.
~ The mean apparent magnitude of Pluto is 15.1, making it visible only with magnification.

I always wondered... ...how the current writer's strike works...

The Writers Guild of America (WGA) is on strike against the Alliance of Motion Picture and Television Producers (AMPTP). There are three issues being contested by the writers:

1 - Since 1988, writers have received 0.3% of video sales up to the first million dollars and 0.36% of subsequent sales. Because DVDs now cost less to produce (and thus bring in less income per unit) and because the reported profit on DVDs is much higher than box office figures, the writers have asked for double the residual amount (0.6%). This request was dropped early in the strike, but will probably be contested again in the not-too-distant future.

2 - New media residuals are being negotiated. New media includes internet downloads, streaming media, video on demand, satellite television and other delivery channels. Currently, there is no arrangement covering these new media, but the WGA is asking for 2.5% of gross sales. The AMPTP has offered the same deal as for DVDs, 0.3%, and no payment for streaming materials. Both of these offers have been rejected by the WGA.

3 - The WGA has no jurisdiction in reality TV and animation. It is asking for credit in these areas because reality TV needs creative scenarios and animation has slowly evolved from storyboard-only work to screenplays followed by storyboards. The WGA has been denied this by the AMPTP. The WGA has since dropped this request, but will keep the idea tabled for the future.

[All references from Wikipedia.org unless otherwise noted]

### In the Know #27

January 31, 2008

Dictionary.com's Word of the Day
Prone to anger; easily provoked to anger; hot-tempered.

In the news:
- John McCain wins the Florida primary.
- Rudy Giuliani and John Edwards drop out of the presidential nomination race.
Today in History, according to Wikipedia:
1606 - Guy Fawkes (pictured) is executed for plotting against Parliament and James I.
1876 - The United States orders all Native Americans to move into reservations.
1961 - Ham the Chimp travels into outer space.

Today's Famous Births:
1797 - Franz Schubert (pictured), Austrian composer
1872 - Zane Grey, American Western writer
1919 - Jackie Robinson, American baseball player, the first African-American player in Major League Baseball, 6-time All Star
1931 - Ernie Banks, American baseball player, 10-time All Star
1941 - Dick Gephardt, American politician
1947 - Nolan Ryan, American baseball player, 8-time All Star
1970 - Minnie Driver, British actress, Good Will Hunting
1981 - Justin Timberlake, American singer, "What Goes Around...Comes Around" and "My Love"

Trivia
Today's Category - per request, Neptune...the planet, not the trident-wielding god
~ Neptune (pictured as seen by Voyager 2) is the eighth planet from the sun, located beyond the asteroid belt and is one of four Jovian planets, the gas giants.
~ Neptune's circumference is 3.8 times bigger than Earth's with 57.7 times more volume, but it is only 17.1 times as massive as Earth.
~ A year on Neptune lasts 164.8 Earth years.
~ A day on Neptune lasts only 16.1 Earth hours.
~ Mean temperature is between 55K (-361F) and 72K (-330F), depending on altitude.
~ The atmosphere of Neptune is composed mostly of hydrogen and helium.
~ The apparent magnitude of Neptune varies from 8.0 to 7.8 at its brightest, making it visible only with magnification.

I always wondered...

In order to understand how noise-canceling headphones work, one must first understand how sound waves work. A sound wave is composed of crests and troughs which, respectively, represent compressions (positive pressure) and rarefactions (negative pressure) of air. When graphed, a sound wave is seen as a sine wave. Destructive interference is what makes noise-canceling headphones work.
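The cancellation can be sketched numerically (the sample rate and tone frequency below are arbitrary illustration values, not from the post): adding a sampled tone to its 180-degree phase-inverted copy leaves exactly silence.

```python
import math

# A short stretch of a 440 Hz tone sampled at 44.1 kHz (illustrative values).
rate, freq = 44100, 440.0
wave = [math.sin(2 * math.pi * freq * n / rate) for n in range(200)]

# The "anti-noise" signal: the same wave 180 degrees out of phase,
# i.e. every compression replaced by an equal rarefaction.
anti = [-s for s in wave]

# Superposition: the two waves cancel sample by sample.
residual = [a + b for a, b in zip(wave, anti)]
print(max(abs(r) for r in residual))  # 0.0 -- total destructive interference
```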
Destructive interference occurs when crests meet troughs, or vice versa, in opposing sound waves. Alternatively, if crests meet crests and troughs meet troughs, constructive interference - or amplification - occurs (constructive and destructive interference pictured). So all that needs to happen for sound cancellation to occur is to reproduce a sound wave traveling in the opposite direction and 180 degrees out of phase with the original sound wave. To visualize this, imagine two identical trains on the same track, traveling at the same speed, but towards each other. When the trains collide, neither will continue traveling along the track; instead they will "cancel each other out"...though quite violently in this example.

Active noise-canceling headphones utilize a microphone, a circuit board (which analyzes sound waves and creates the opposite wave), and a speaker. The speaker is aimed away from the ear and transmits the out-of-phase sound waves, which practically destroys all sound waves coming toward the headphones.

["Noise-canceling headphones" reference: How Stuff Works]
[All references from Wikipedia.org unless otherwise noted]

## Wednesday, January 30, 2008

### Sorry

Sorry about the hiatus, but I was out of town today. January 30th's In the Know will be up with February 1st.

## Monday, January 28, 2008

### In the Know #26

January 30, 2008

Dictionary.com's Word of the Day
Richly melodious; pleasant sounding; musical.

In the news:
- George W. Bush delivers his last State of the Union address.
- Indonesia reports its 100th death from bird flu.

Today in History, according to Wikipedia:
1835 - Richard Lawrence attempts to assassinate President Andrew Jackson.
1933 - Adolf Hitler is sworn in as Chancellor of Germany.
1948 - Mohandas Gandhi (pictured) is assassinated by Nathuram Godse.
1968 - The Tet Offensive is launched by the National Front for the Liberation of South Vietnam, a.k.a. Viet Cong.
1972 - Bloody Sunday in Ireland: British paratroopers attack civil rights marchers in Northern Ireland.
1976 - George H. W. Bush becomes the Director of the CIA.

Today's Famous Births:
1882 - Franklin D. Roosevelt, 32nd President of the United States
1912 - Francis Schaeffer, American Evangelical theologian and pastor
1930 - Gene Hackman, American actor
1941 - Dick Cheney (pictured), 46th Vice President of the United States
1951 - Phil Collins, English musician, "In the Air Tonight"
1957 - Payne Stewart, American golfer, PGA Champion and two-time U.S. Open Champion
1962 - King Abdullah II, reigning King of Jordan
1974 - Christian Bale, Welsh actor, Equilibrium and Batman Begins

Trivia
Today's Category - per request, Uranus...the planet, not your anus, hahaha...
~ Uranus is the seventh planet from the sun, located beyond the asteroid belt and is one of four Jovian planets, the gas giants.
~ Uranus' circumference is 4.0 times bigger than Earth's with 63.1 times more volume, but it is only 14.5 times as massive as Earth.
~ A year on Uranus lasts 84 Earth years.
~ A day on Uranus lasts only 14-17 Earth hours.
~ Mean temperature is between 49K (-371F) and 76K (-323F), depending on altitude.
~ The atmosphere of Uranus is composed mostly of hydrogen and helium.
~ The apparent magnitude of Uranus varies from 5.9 to 5.2 at its brightest, making it difficult to see with the naked eye.

I always wondered... ...how WiFi works...

Wireless networks use radio waves to transmit data. A computer's wireless adapter translates data into a radio signal and transmits it using an antenna. A wireless router (pictured) receives the signal and decodes it. It sends the information to the Internet using a physical, wired Ethernet connection. The reverse also happens, with the router sending information to the computer's antenna. The transmitters utilize the 2.4 GHz and 5 GHz frequency bands. These frequencies are higher than cell phone and television radio signals, allowing higher data transfer rates.
The technical protocol for wireless routers is the 802.11 networking standard. There are several different versions of 802.11 technology utilizing the different frequencies and boasting varying transfer rates. A wireless router consists of (1) a port to connect to your cable or DSL modem, (2) a router, (3) an Ethernet hub, (4) a firewall and (5) a wireless access point. The router has a service set identifier (SSID) name, which is the manufacturer's by default but can be changed by the user. Three types of security are available in a wireless network: (1) Wired Equivalent Privacy (WEP) and (2) WiFi Protected Access (WPA), which both use passwords, and (3) Media Access Control (MAC) filtering, which checks a physical address unique to each computer against a list of safe computers.

["How WiFi Works" reference: How Stuff Works]
[All references from Wikipedia.org unless otherwise noted]

### In the Know #25

January 29, 2008

Dictionary.com's Word of the Day
1. Of or pertaining to woods or forest regions. 2. Living or located in a wood or forest. 3. Abounding in forests or trees; wooded.
noun: 1. A fabled deity or spirit of the woods. 2. One that lives in or frequents the woods or forest; a rustic.

In the news:
- At least 250 schoolchildren 12 to 18 years old and several teachers were taken hostage and finally released by at least seven militants inside a high school in Domail, Pakistan.
- The United States Secret Service has evacuated the West Wing of the White House in Washington, D.C. after finding a suspicious package.

Today in History, according to Wikipedia:
1845 - "The Raven" by Edgar Allan Poe (pictured) is published in the New York Evening Mirror.
1861 - Kansas is admitted as the 34th U.S. state.
1886 - Karl Benz is granted a patent for the first successful gasoline-driven automobile.
2002 - In his State of the Union address, President George W. Bush describes "regimes that sponsor terror" as an Axis of Evil.
Today's Famous Births:
1737 - Thomas Paine, American patriot, Common Sense and Rights of Man and The Age of Reason
1843 - William McKinley, 25th President of the United States
1860 - Anton Chekhov (pictured), Russian writer, Uncle Vanya and The Cherry Orchard
1874 - John D. Rockefeller, Jr., American entrepreneur and philanthropist
1880 - W.C. Fields, American actor
1945 - Tom Selleck, American actor, "Magnum P.I."
1954 - Oprah Winfrey, American actress and talk show host, "The Oprah Winfrey Show" and The Color Purple
1960 - Greg Louganis, American diver, 4 Olympic gold medals
1970 - Heather Graham, American actress

Trivia
Today's Category - Saturn...the planet, not the car company or the game console
~ Saturn is the sixth planet from the sun, located beyond the asteroid belt and is one of four Jovian planets, the gas giants.
~ Saturn has a complex ring system, composed mostly of ice and dust.
~ Saturn's circumference is 9.45 times bigger than Earth's and is 763.6 times more voluminous, but it is only 95.2 times as massive.
~ A year on Saturn lasts 29.46 Earth years.
~ A day on Saturn lasts 10-11 Earth hours.
~ Mean temperature is between 84K (-308F) and 134K (-218F), depending on altitude.
~ The atmosphere of Saturn is composed mostly of hydrogen.
~ The apparent magnitude of Saturn is between 1.2 and -0.2, making it easily visible to the naked eye.

I always wondered... ...how braille works...

In 1821, the Frenchman Louis Braille developed the braille system of writing for use by blind people. The system uses cells composed of six dot positions, with the arrangement of raised dots in those positions determining letters and characters. The cells are arranged with two columns of three dots each. The arrangement of dots creates 64 permutations (2^6). Dot heights are approximately 0.02 inches. Dot spacing is approximately 0.1 inches. Cell spacing is approximately 0.15 inches horizontally and 0.2 inches vertically.
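The 64-permutation count is easy to check by brute force. This little sketch (my illustration, not from the original post) enumerates every possible cell and renders one pattern in the two-column layout described above:

```python
from itertools import product

# Every braille cell is 6 dot positions, each raised (1) or flat (0).
cells = list(product((0, 1), repeat=6))
print(len(cells))  # 64 patterns, i.e. 2**6

def render(cell):
    """Draw a cell in the conventional layout: dots 1-3 down the left
    column, dots 4-6 down the right column."""
    rows = [(cell[r], cell[r + 3]) for r in range(3)]
    return "\n".join("".join("o" if dot else "." for dot in row) for row in rows)

print(render((1, 0, 0, 0, 0, 0)))  # dot 1 raised only: the letter "a"
```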
A standard braille page is 11 by 11.5 inches and consists of a maximum of 40-43 cells per line and 25 lines.

["Braille" Photo: Christophe Moustier - 2005]
[All references from Wikipedia.org unless otherwise noted]

## Sunday, January 27, 2008

### Who knew...

Organ Hero
"Carry on My Wayward Son" like you've never heard it before.

## Saturday, January 26, 2008

### Lebrons

So.....very...funny.
Lebrons 2 on 2
"I'm on you like white on rice, like flies on shut yo mouth!"
The other three Lebrons commercials
"That's a quadruple double right there boy."
Just in case you didn't figure it out, Lebron James plays all the characters.

### In the Know #24

January 28, 2008

Dictionary.com's Word of the Day
neophyte \NEE-uh-fyt\, noun: 1. A new convert or proselyte. 2. A novice; a beginner in anything.

In the news:
- At least 45 people killed in a new surge of violence (pictured) that began after Kenya's disputed December 27 presidential election.
- Gunmen killed 11 people in the village of Lusignan on the coast of Guyana.
- Barack Obama wins the South Carolina Democratic primary.
- An unresponsive United States reconnaissance satellite is in an uncontrolled, decaying orbit and will re-enter Earth's atmosphere in late February or early March.

Today in History, according to Wikipedia:
1521 - The Diet of Worms begins.
1915 - U.S. Congress creates the United States Coast Guard (seal pictured).
1986 - At the launch of NASA mission STS-51-L, Space Shuttle Challenger explodes 73 seconds after lift-off.

Today's Famous Births:
1890 - Robert Stroud, American convict, "The Birdman of Alcatraz"
1912 - Jackson Pollock, American painter, No. 5
1936 - Alan Alda, American actor and writer and director, "M.A.S.H."
1954 - Rick Warren, American pastor and author,
1955 - Nicolas Sarkozy (pictured), President of France
1959 - Frank Darabont, American filmmaker, The Shawshank Redemption and The Green Mile
1974 - Magglio Ordoñez, Venezuelan baseball player, 6-time All Star
1981 - Elijah Wood, American actor, Lord of the Rings trilogy

Trivia
Today's Category - per request, Jupiter...the planet, not the orchestral piece or the rocket
~ Jupiter (pictured, as seen from Voyager I - click it to watch the approach) is the fifth planet from the sun, located beyond the asteroid belt and is one of four Jovian planets, the gas giants.
~ Jupiter's circumference is 11.2 times bigger than Earth's with 1,317 times more volume, but it is only 318 times as massive as Earth.
~ A year on Jupiter lasts 11.86 Earth years.
~ A day on Jupiter lasts only 9.9 Earth hours.
~ Mean temperature is between 112K (-258F) and 165K (-163F), depending on altitude.
~ The atmosphere of Jupiter is composed mostly of hydrogen and helium.
~ The apparent magnitude of Jupiter varies from -1.6 to -2.9 at its brightest, making it easily distinguishable from the stars.

I always wondered... ...how youth ministry works...

If you have any idea...let me know.

[All references from Wikipedia.org unless otherwise noted]

### In the Know #23

January 27, 2008

Today is International Holocaust Remembrance Day.

Dictionary.com's Word of the Day
1. That cannot be removed, erased, or washed away. 2. Making marks that cannot easily be removed or erased. 3. Incapable of being forgotten; memorable.

In the news:
- A 16-year-old was arrested over allegations of intentions to hijack a commercial passenger airliner.
- Iraqi Prime Minister Nouri al-Maliki announces a "decisive" offensive against al-Qaeda in Iraq.
- Australia to withdraw troops from Iraq this year.

Today in History, according to Wikipedia:
98 - Trajan becomes Roman Emperor.
1678 - The first fire engine company in the United States went into service.
1825 - U.S.
Congress approves Indian Territory (in what is present-day Oklahoma), paving the way for the "Trail of Tears."
1880 - Thomas Edison is granted a patent for his electric incandescent lamp.
1926 - John Logie Baird makes the first television broadcast.
1945 - The Red Army arrives at the Auschwitz-Birkenau (entrance pictured) concentration camp in Poland.

Today's Famous Births:
1756 - Wolfgang Amadeus Mozart, Austrian composer, Le nozze di Figaro
1832 - Lewis Carroll, English author, Alice's Adventures in Wonderland and "Jabberwocky"
1921 - Donna Reed, American actress, It's a Wonderful Life and From Here to Eternity
1940 - James Cromwell, American actor, The Green Mile and Babe
1955 - John G. Roberts, Jr. (pictured), 17th Chief Justice of the United States
1957 - Frank Miller, American comic book artist and writer and film director, 300 and Sin City
1971 - Jonathan Smith, American rapper, a.k.a. Lil' Jon

Trivia
Today's Category - per request, Mars...the planet, not the candy bar
~ Mars is the fourth planet from the sun.
~ Earth's circumference is 1.88 times bigger than Mars', but Earth is 9.35 times as massive.
~ It is referred to as the "Red Planet" due to its appearance from Earth.
~ A year on Mars lasts 686.5 Earth days.
~ A day on Mars lasts 24.55 Earth hours.
~ Surface temperatures range from about 133K (-220F) to 293K (68F).
~ The atmosphere of Mars is composed mostly of carbon dioxide.
~ The apparent magnitude of Mars at its brightest is about -2.9, making it visibly brighter than any star.

I always wondered... ...how The Shawshank Redemption works...
[1] Actor - Morgan Freeman; actor - Driving Miss Daisy and Glory and Se7en and Million Dollar Baby and so many more
[2] Actor - Tim Robbins; actor - Jacob's Ladder and Arlington Road
[3] Writer, short story - Stephen King; writer, short story - Stand By Me; writer, novel - The Green Mile and The Shining and a few horror novels too
[4] Director and screenplay - Frank Darabont; screenplay - The Green Mile; executive producer - Collateral

Greatest movie...ever.

[All references from Wikipedia.org unless otherwise noted]

## Friday, January 25, 2008

### Commercial

Monster.com commercial...there's a perfect job for everyone...even a guy with ginormous legs.

### Banff Mountain Film Festival World Tour!

Watch this video and let me know if you want to go on March 20 to Greenville for the Banff Mountain Film Festival World Tour. Last year's event was beyond phenomenal. $15 (I think) for three hours of awesome films. I can't wait!

## Thursday, January 24, 2008

### In the Know #22

January 26, 2008

Dictionary.com's Word of the Day
caterwaul \KAT-uhr-wawl\, intransitive verb: 1. To make a harsh cry. 2. To have a noisy argument. noun: 1. A shrill, discordant sound.

In the news:
- President Bush and Congress agree on an economic stimulus package.
- A car bomb blast in a Christian suburb of the Lebanese capital Beirut has killed at least six people, including a top security official.

Today in History, according to Wikipedia:
1564 - The Council of Trent issues its conclusions in the Tridentinum, which established the distinction between Roman Catholicism and Protestantism.
1837 - Michigan is admitted as the 26th U.S. state.
1905 - The Cullinan Diamond, the largest gem-quality diamond ever discovered at 3106.75 carats, is found near Pretoria, South Africa.
1988 - Andrew Lloyd Webber's Phantom of the Opera (pictured) debuts at Broadway's Majestic Theatre.
1998 - U.S. President Bill Clinton announces on American television that he had no "sexual relations" with intern Monica Lewinsky.
2005 - Condoleezza Rice is sworn in as U.S. Secretary of State, becoming the first African-American woman to hold the post.

Today's Famous Births:
1880 - Douglas MacArthur (pictured), American general and Medal of Honor recipient
1904 - Seán MacBride, Irish statesman, founding member of Amnesty International and Nobel Peace Prize winner
1925 - Paul Newman, American actor and race car driver, Cool Hand Luke
1935 - Bob Uecker, American baseball player and broadcaster
1946 - Gene Siskel, American film critic
1955 - Eddie Van Halen, Dutch musician, Van Halen
1958 - Ellen DeGeneres, American actress and comedian, Finding Nemo and "The Ellen Show"
1961 - Wayne Gretzky (pictured), Canadian hockey player, "The Great One", 18 All Star game appearances
1970 - Kirk Franklin, American singer
1977 - Vince Carter, American basketball player, 8-time All Star

Trivia
Today's Category - per request, Venus...the planet, not the statue or the tennis player
~ Venus is the second planet from the sun.
~ Earth's circumference is 1.05 times bigger than Venus' and Earth is 1.23 times as massive.
~ Its similarities to Earth give Venus the nickname "Our sister planet".
~ A year on Venus lasts 224.7 Earth days.
~ A day on Venus lasts 116.75 Earth days (the sidereal day - the time it takes to complete one full rotation - on Venus is 243 days, which is longer than its year!).
~ The mean temperature on Venus' surface is 735K (863.3F).
~ The atmosphere of Venus is composed of carbon dioxide and nitrogen.
~ The apparent magnitude of Venus can be as bright as -4.6, earning it the titles "The Evening Star" and "The Morning Star".

I always wondered... ...how golf balls work...

The earliest golf balls were smooth and made of wood. Early in the 17th century, the 'featherie' ball was introduced and was constructed of tightly compacted goose down inside a cowhide casing. Later, tree sap was heated and formed into a ball, which flew truer because of imperfections in the ball.
This led golf ball makers toward the distinctive dimples we know today. The 20th century contributed core technology, introducing a solid inside the ball allowing designers to fine-tune the length, spin and feel of golf ball characteristics. Today, golf balls have titanium or other metal cores and may be composed of up to four layers. The diameter of a golf ball cannot be smaller than 1.68 inches. 250 feet per second is the maximum velocity for the ball. Maximum weight of the ball cannot exceed 1.62 ounces. Most golf balls have between 300-450 dimples. The millisecond-long impact of club and ball determines velocity, launch angle and spin rate. In its flight, the ball will experience drag and lift. The dimples on the ball work in two ways which affect these aerodynamic forces: (1) as the ball moves through the air, the dimples reduce drag by reducing turbulence (pictured), and (2) backspin induced by the angle of the club creates lift which is magnified by the dimples.

["How golf balls work..." turbulence picture reference]
[All references from Wikipedia.org unless otherwise noted]

### Laundromat

This is another one of my favorite commercials: Orange Underground

### In the Know #21

January 25, 2008

Dictionary.com's Word of the Day
disheveled \dih-SHEV-uhld\, adjective; also dishevelled: In loose disorder; disarranged; unkempt; as, "disheveled hair."

In the news:
- Romano Prodi (pictured), Italy's President of the Council of Ministers, loses a vote of confidence and resigns.
- Democratic presidential candidate Dennis Kucinich quits his bid for the presidency.
- An Iraqi police chief is killed in a suicide bombing.

Today in History, according to Wikipedia:
41 - Claudius is accepted as Roman Emperor after Caligula is killed.
1919 - The League of Nations is created.
1924 - The first Winter Olympics opens in Chamonix, France.
1949 - At the Hollywood Athletic Club, the first Emmy Awards (pictured) are presented.
1949 - David Ben-Gurion becomes the first Prime Minister of Israel.
1961 - President John F. Kennedy delivers the first live televised presidential news conference.
2004 - Mars rover Opportunity lands on the surface of Mars.

Today's Famous Births:
1627 - Robert Boyle, Irish chemist
1741 - Benedict Arnold, American general notorious for treason
1759 - Robert Burns (pictured), Scottish poet, Auld Lang Syne
1882 - Virginia Woolf, English writer
1938 - Etta James, American singer, "At Last"
1941 - Buddy Baker, American race car driver and commentator
1942 - Eusébio, Portuguese soccer player
1951 - Steve Prefontaine, American runner
1962 - Chris Chelios, American hockey player, 11 All-Star games
1981 - Alicia Keys, American singer
1984 - Robinho, Brazilian soccer player

Trivia
Today's Category - per request, Mercury...the planet, not the element
~ Mercury is the closest planet to the sun.
~ Earth's circumference is only 2.61 times the size of Mercury's, but Earth is 18 times as massive.
~ A year on Mercury lasts just under 88 days.
~ A day on Mercury lasts over 4200 hours (over 175 Earth days).
~ Surface temperatures range from 80K (-315F) to 700K (800F).
~ Two NASA missions have studied Mercury: Mariner 10 (1974-75, upper picture) and MESSENGER (MErcury Surface, Space ENvironment, GEochemistry, and Ranging - 2008, lower picture) [Note quality difference in photos].
~ The atmosphere of Mercury contains hydrogen, helium, oxygen, sodium, calcium and potassium.
~ At its brightest, Mercury is an apparent magnitude of -2.0, which is brighter than Sirius (the brightest star in apparent magnitude).

I always wondered... ...how that yellow first down line on a televised football game works...

It would seem that placing that little yellow line on our magical living room screens might be so very simple, but this could not be farther from the truth. A colossal amount of technology is put into play to make it possible.
Sportvision created a system called 1st and Ten to digitally paint the line. The system must:
- know the orientation of the field
- know where each yard line is
- sense the camera's movement
- recalculate perspectives, based on the camera's movement, at 30 frames per second (it actually works at 60 fps)
- cause the line to follow the curve of the field
- work with multiple cameras
- sense when players, refs or the ball cross the line, so as to not draw the line over them
- be aware of superimposed graphics

There are several mechanisms and means used to address these issues. First, each camera is equipped with a special mount that records the camera's pan, tilt, zoom and focus. Stored on a main PC is a 3-D model of the field and the location of each camera. This allows the camera mount encoders to work with the 3-D model to appropriately place the line. Also included in the software of the system is a color palette that distinguishes the field from the players, refs and the ball. When a pixel of a color not representing the field or first down lines passes over the line, the line disappears only where that pixel is. It takes four people to run the 1st and Ten system: a spotter and an operator input the first down line manually, then two other operators make any corrections to the line during the game (such as adding colors to the palette, like mud or snow). The system utilizes 8 computers in total.

["How it works" reference: HowStuffWorks.com]

[All references from Wikipedia.org unless otherwise noted]

## Wednesday, January 23, 2008

### In the Know #20

January 24, 2008

Dictionary.com's Word of the Day

nolens volens \NO-lenz-VO-lenz\: Whether unwilling or willing.

In the news:
- A border wall between Egypt and the Gaza Strip was partially destroyed, allowing Palestinians to enter Egypt.
- Iraq removes three stars that referenced Saddam Hussein's Baath Party from the Iraqi flag (new flag pictured).

Today in History, according to Wikipedia:
41 - Gaius Caesar, a.k.a. Caligula, is assassinated by his disgruntled Praetorian Guard.
1848 - James W. Marshall finds gold at Sutter's Mill, sparking the California gold rush.
1918 - Russia finally adopts the Gregorian calendar, stating that the day after January 31st would be February 14th to correct the date.
2003 - The United States Department of Homeland Security (seal pictured) officially begins operating.

Today's Famous Births:
76 - Hadrian, Roman emperor
1705 - Carlo Broschi, a.k.a. Farinelli, Italian castrato
1862 - Edith Wharton, American writer, Ethan Frome and The Age of Innocence
1917 - Ernest Borgnine (pictured), American actor
1918 - Oral Roberts, American evangelist
1939 - Ray Stevens, American musician
1941 - Neil Diamond, American musician
1941 - Aaron Neville, American singer
1949 - John Belushi, American actor
1968 - Mary Lou Retton, American gymnast

Trivia

Today's Category - Apartheid
~ The National Party enforced racial segregation in South Africa.
~ Apartheid lasted from 1948 to 1994.
~ In 1973, an International Convention of the United Nations General Assembly ruled that the system of apartheid amounted to a crime against humanity.
~ Apartheid strove to divide the country into sections that would aid in total physical segregation of races.
~ Anti-apartheid activist Nelson Mandela spent 27 years in prison for his fight against apartheid.
~ Mandela became an instrument of reconciliation after the fall of apartheid and won the Nobel Peace Prize in 1993.

I always wondered...

...how Communism works...

It doesn't...

[All references from Wikipedia.org unless otherwise noted]

### In the Know #19

January 23, 2008

Dictionary.com's Word of the Day

effusive \ih-FYOO-siv\, adjective: Excessively demonstrative; giving or involving extravagant or excessive emotional expression; gushing.

In the news:
- Actor Heath Ledger found dead in New York City.
- Fred Thompson quits the U.S. Presidential race.
Today in History, according to Wikipedia:
1556 - The deadliest earthquake in history hits Shaanxi Province in China.
1943 - Beginning of the Jewish-led Warsaw Ghetto Uprising.
1960 - The bathyscaphe USS Trieste (pictured) descends to 35,798 feet in the Challenger Deep of the Mariana Trench in the Pacific Ocean, the record depth achieved by any vehicle.
1973 - President Richard Nixon announces that a peace accord has been reached in Vietnam.

Today's Famous Births:
1737 - John Hancock, American statesman, first signer of the Declaration of Independence
1855 - John Moses Browning, American inventor
1898 - Sergei Eisenstein, Russian film director, Bronenosets Potyomkin (Battleship Potemkin)
1950 - Richard Dean Anderson, American actor, MacGyver

Due to time constraints, "Trivia" and "I always wondered..." will not be included in today's post. Sorry if this ruins your day. If this does ruin your day, consider your sad life and get mental help...my blog is not that interesting. Thanks for reading. - JB

[All references from Wikipedia.org unless otherwise noted]

## Monday, January 21, 2008

### In the Know #18

January 22, 2008

Dictionary.com's Word of the Day

permeate \PUR-mee-ayt\, transitive verb: 1. To spread or diffuse through. 2. To pass through the pores or openings of. intransitive verb: 1. To spread through or penetrate something.

In the news:
- The New England Patriots (logo pictured, right) defeat the San Diego Chargers 21-12 to advance to the Super Bowl.
- The New York Giants defeat the Green Bay Packers 23-20 to advance to the Super Bowl.
- World stocks plunge on fears of a United States recession.

Today in History, according to Wikipedia:
1521 - Emperor Charles V opens the Diet of Worms.
1905 - Bloody Sunday in St. Petersburg, Russia.
1973 - The United States Supreme Court comes to a decision in Roe v. Wade.
1973 - George Foreman beats the undefeated heavyweight Joe Frazier, "Down goes Frazier! Down goes Frazier! Down goes Frazier!"
1984 - The legendary "1984" commercial (pictured) airs during Super Bowl XVIII, advertising the Apple Macintosh computer.
1997 - Madeleine Albright becomes the first female Secretary of State.

Today's Famous Births:
1561 - Sir Francis Bacon, English philosopher
1788 - George Gordon, Lord Byron (pictured), English poet
1869 - Grigori Rasputin, Russian monk, the "Mad Monk"
1875 - D.W. Griffith, American director, Birth of a Nation and Intolerance
1965 - Jeffrey Townes, American rapper and actor, DJ Jazzy Jeff
1965 - Diane Lane, American actress, The Outsiders

Trivia

Today's Category - The Kentucky Derby
~ The Kentucky Derby (pictured) takes place at Churchill Downs in Louisville, Kentucky.
~ The race is one and a quarter miles (2 km).
~ In the United States, the race is known as "The Most Exciting Two Minutes in Sports".
~ The Kentucky Derby is the first leg of the Triple Crown of Thoroughbred Racing, which also includes the Preakness Stakes and the Belmont Stakes.
~ May 17, 1875 marked the inauguration of the race.
~ 2007 was the 133rd running of the race.
~ A blanket of 554 roses is draped over the winner, giving the derby the nickname "The Run for the Roses".
~ The purse for winning is $2 million.
~ The Mint Julep is the official drink of the race.
~ Two horses, Secretariat (in 1973) and Monarchos (in 2001), have finished the race in under two minutes.

I always wondered...

...how the human voice works...

The human voice is created by vibrations through two membranes called vocal folds, or vocal cords. Male vocal cords are between 17mm and 25mm in length and female vocal cords are between 12.5mm and 17.5mm in length, making the male voice lower than the female voice. There are four basic divisions of vocal range, determined by the way the vocal cords vibrate. Vocal fry register, in which air is permitted to bubble through the vocal cords, is the lowest range.
Modal register is the normal speech and singing voice and can cover as much as two octaves for well-trained singers. Falsetto voice occurs when the edges of the vocal cords are used and extends the vocal range of any singer. Not all singers can achieve phonation in the whistle register, which is not physiologically understood. Physiologists know that vibrations for the whistle range occur in the anterior region of the vocal cords. In terms of frequency, human voices are roughly in the range of 80 Hz to 1100 Hz (that is, E2 to C6) for normal male and female voices together. The world record for the lowest note produced by a human voice is B-2 (two octaves below the lowest B on a piano, 8 Hz) by America's Tim Storms. The record for the highest vocal note is G10 (25,087 Hz) by Brazil's Georgia Brown. Below is a list of the vocal classifications of the basic choral ranges:

Soprano: C4 (261.626 Hz) - C6 (1046.50 Hz)
Mezzo-Soprano: A3 (220.000 Hz) - A5 (880.000 Hz)
Contralto: E3 (164.814 Hz) - E5 (659.255 Hz)
Tenor: C3 (130.813 Hz) - C5 (523.251 Hz)
Baritone: G2 (97.9989 Hz) - G4 (391.995 Hz)
Bass: E2 (82.4069 Hz) - E4 (329.628 Hz)

A soprano who can sing higher than C♯6 is known as a sopranino, and a bass who can sing G1 or lower is known as a sub-bass singer. Males who possess high ranges or can project falsetto are referred to as countertenors and possess ranges equivalent to those of the female ranges: alto, mezzo-soprano and soprano.

[All references from Wikipedia.org unless otherwise noted]

### PureFocus

Don't let your youth group miss it!

### "I guess it is awesome..."

Watch this Skittles commercial, click "Touch" when the page finishes loading. The end is so funny.

## Sunday, January 20, 2008

### In the Know #17

January 21, 2008

Dictionary.com's Word of the Day

Immature; lacking adult perception, experience, or judgment.

In the news:
- Adolfo Nicolas (pictured) is chosen as the new Superior General of the Society of Jesus, whose members are known as Jesuits.
- George Bush proposes economic growth packages worth up to $150 billion.

Today in History, according to Wikipedia:
1861 - Jefferson Davis (pictured) resigns from the United States Senate.
1915 - Kiwanis International is founded in Detroit, Michigan.
1954 - The USS Nautilus, the first nuclear-powered submarine, is launched.
1976 - Commercial service of Concorde, a supersonic passenger airliner, begins.

Today's Famous Births:
1813 - John C. Frémont, American army officer, explorer and presidential candidate
1824 - Thomas "Stonewall" Jackson, American Confederate general
1918 - Richard D. "Dick" Winters (pictured), American war hero, portrayed in the HBO miniseries Band of Brothers
1922 - Telly Savalas, American actor, "Kojak" and Birdman of Alcatraz
1924 - Benny Hill, English actor, comedian and singer, "The Benny Hill Show"
1932 - John Chaney, American NCAA basketball coach, 1988 National Coach of the Year
1938 - Robert Weston Smith, American disc jockey and actor, better known as "Wolfman Jack"
1940 - Jack Nicklaus, American golfer
1941 - Plácido Domingo, Spanish tenor
1953 - Paul Allen, American entrepreneur, co-founder of Microsoft
1956 - Geena Davis, American actress, A League of Their Own
1963 - Hakeem Olajuwon, Nigerian-born basketball player, 12-time All-Star
1965 - Jason Mizell, American disc jockey, better known as "Jam Master Jay", founding member of Run-D.M.C. (pictured)

Trivia

Today's Category - John Grisham
~ John Grisham (pictured) graduated from the University of Mississippi School of Law in 1981.
~ He practiced criminal and civil law in Southaven, Mississippi for nearly a decade.
~ In 1983, he was elected as a Democrat to the Mississippi House of Representatives, where he served until 1990.
~ His first novel was "A Time to Kill", which had an original print run of only 5,000 but would be reprinted to become a bestseller.
~ 17 Grisham novels have topped the New York Times bestseller list.
~ 7 of his novels have become full-length box office feature films.
~ His novels sold over 60 million copies in the 90's.

I always wondered...

...how surround sound works...

Home theater surround sound has two main components. First is the A/V receiver, which receives, decodes and sends both audio and video signals. Second are the speakers. The speakers are divided into channels: left, center, right, surround and subwoofer. The notation of the surround sound is as follows: [number of surround speakers].[number of subwoofers] (i.e. 5.1 = five surround speakers and one subwoofer). Surround sound comes to us in two forms: analog and digital. Analog surround is contained in two channels. The left and right channels each contain two out-of-phase signals. In the left channel, the left front speaker gets one signal and the left rear speaker gets the other, out-of-phase signal. The same occurs in the right channel. A center channel is constructed out of the two front speaker signals. Unlike analog, digital surround sound is recorded as ones and zeros. As such, it is the only format that can be recorded onto DVDs. Each channel is a separate track of the recording medium. The center channel acts as the anchor of the system, producing the majority of dialog and effects. The left and right speakers carry most of the music and remaining effects. Rear surround speakers create sound effects, which help to create the illusion of being in the action. The Low Frequency Effects (LFE) channel sends bass to the subwoofer. The most common surround sound formats are Dolby Pro Logic (logo pictured) and Digital Theater Systems (DTS, logo pictured).

["How surround sound works" reference: HowStuffWorks.com]

[All references from Wikipedia.org unless otherwise noted]
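The [speakers].[subwoofers] notation described above can be expanded mechanically into the channel roles listed in the text. A minimal Python sketch (a hypothetical helper for illustration, not part of any real A/V API):

```python
def channel_layout(notation):
    """Expand a surround notation like '5.1' into named channels,
    following the channel roles described in the text above."""
    mains, subs = (int(x) for x in notation.split("."))
    # Conventional fill order for the main speakers
    names = ["center", "front left", "front right",
             "surround left", "surround right",
             "surround back left", "surround back right"][:mains]
    # The ".1" (or ".2") part is the LFE channel fed to subwoofer(s)
    names += ["LFE (subwoofer)"] * subs
    return names

print(channel_layout("5.1"))  # 6 named channels, ending with the LFE/subwoofer
```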
# Estimating the True Mean from a Sample Mean – "Significant Differences"

Let us take a medical example of a parameter that has an approximately normal distribution in the adult human population, namely systolic blood pressure. In order to reduce the risk of heart disease and stroke, the chief medical officer of a multinational company wants the mean systolic blood pressure (bp) of its members to be no higher than 180 mmHg. The company nurses are worried they might not hit this target, and want to get a "preview" before the time when everyone's blood pressure is to be recorded. Then they might have time to act if necessary by prescribing antihypertensives or de-stress programmes to the employees before the official measurement. They do not have the resources to measure everyone's blood pressure to get this preview, so they take a sample. The question is, when they get the mean value from the sample, how confident are they that this sample mean will reflect the true population mean? If the mean is a little higher than required, might that reflect random variation in sampling the mean while the overall population mean is really at a safe level? Or indeed, if it is a little lower, might the overall mean really be too high and the sample an underestimate?

## Determining a Significant Difference

Statistics is all about estimation to a certain level of confidence. It is best to think about the above uncertainty in a backwards manner. Instead of asking (hypothesising), "Is the actual true population mean higher than the required population mean?", we hypothesise that it is not higher, secretly wondering if we are going to end up rejecting this hypothesis – i.e. concluding that it is higher. Hypothesising something to see if we are going to reject it is called making the null hypothesis. Arbitrarily, it is often decided that if a hypothesis is as unlikely as one in twenty (5% – or probability p = 0.05, where 0 is impossible and 1 is certain), then we can reject it.
So we consider the mean and standard error curve for the whole population; the mean according to the null hypothesis is the same as that for the desired mean bp, namely 180 mmHg. Then we look at where the measured mean for the sample of the population lies on this curve. If it lies higher than the point on the curve that corresponds to a probability value of 0.05, it is likely that the actual population mean is in fact significantly higher than the desired mean. We reject the null hypothesis at the p = 0.05 level of significance. Usually we quote the actual p value to show exactly how confident we are in rejecting the null hypothesis.

The way we do this calculation is as follows. The normal distribution curve is a complex formula, so what was done in the good old days before computers was to standardise the curve to be independent of curve width, the latter of course being quantified by its standard error. Thus each x-axis value point is described in terms of how many standard error values it is away from the mean. Once this is done, no matter what the data set, the curve always has the same shape. This means that if we take the difference between the observed sample mean (μ) and the desired population mean (σ), which is here the same as the null hypothesis mean (μ0), and convert this difference into standard error units by dividing by the standard error, it will always correspond to a certain probability value. (Note that in this primer σ denotes a population mean, not a standard deviation.)

## The Z-Score

This standardised x-axis value on the normal distribution curve is called the Z-score. In a plot of normal distribution of sample means about the true population mean, which is what we want in this example, to find the Z-score of a sample mean we take the difference between the sample mean and the true population mean and then divide it by the standard error of the mean. The Z-score is also a useful number to describe in a standardised manner how far a single value is away from the mean value.
For example, in expressing bone density to describe severity of osteoporosis, one could give an actual value in terms of X-ray attenuation. It is more useful to a clinician or patient, however, to consider how the value deviates from typical density. The Z-score describes how many standard deviations (SD) a subject's bone density is away from average. Here we consider a plot of normal distribution of values about a mean value rather than a plot of sample means about a population mean; to find the Z-score of a value we take the difference between the value and the mean and then divide by the SD.

So in a normal distribution plot of values about a mean value, μ, we calculate the Z-score of a value by:

Z = (value – μ)/SD

In a normal distribution plot of sample means about the true population mean, σ, we calculate the Z-score of a particular sample mean value, μ, by:

Z = (μ – σ)/SE

We described in the previous section how the standard error of a mean of n values relates to the standard deviation of the values:

SE = SD/√n

Therefore the Z-score of a sample mean of n values is:

Z = (μ – σ)√n/SD

## Determining if a Sample Mean is Greater than a Desired Mean

Returning to our example of blood pressure, our null hypothesis was that the true population mean was no greater than a certain desired mean value. We can denote this null hypothesis upper limit mean, equal to the population mean, as μ0. So:

Z = (μ – μ0)√n/SD

We can estimate the SD of the population by calculating the SD of the sample, and hope that they are approximately the same. We now have enough information to calculate the Z-score, and hence the p-value associated with a certain sample mean value. If a sample of 20 people has a mean blood pressure (bp) of 185 mmHg and an SD of 22.5 mmHg, this is one SE (since SE = 22.5/√20 ≈ 5 mmHg) greater than a desired bp of 180 mmHg. The one-tailed p value, the area of the section of the plot greater than the 1 SE line, is 0.16.
(Note that this encompasses the 0.14 probability of the segment between 1 and 2 SE plus the 0.02 probability of the segment above 2 SE.) If the sample's mean bp was 190 mmHg, this would be 2 SE greater than the desired mean, and the p-value is only 0.02. The bp would now be considered significantly greater than the desired bp.

Thus, if the desired bp is 180 mmHg, and a sample of 20 employees has a mean bp of 185 mmHg with an SD of 22.5 mmHg, the sample mean is one standard error, or one SD/√n, greater than the null hypothesis mean, so the Z-score is 1. When we look up this Z-score on a table that someone laboriously calculated from the complex formula (not shown here!) that describes the normal distribution curve, we get the corresponding p-value. On the table (or in the computer software), a Z-score of 1 corresponds to a probability of 0.16. This means that there is only a 16% chance of drawing a sample with a mean this high or higher from a population whose mean was at the desired level according to the null hypothesis. This chance is quite low, but not low enough to satisfy the commonly used critical value of 0.05, or 5%. So we cannot conclude from the sample mean that the true population mean is higher than the desired mean.

On the other hand, if the same sample had a mean bp of 190 mmHg, the Z-score would be 2, i.e. the sample mean would be 2 SE higher than the null hypothesis mean. On the table, the probability of this is only 0.02. This would satisfy our criterion for rejecting the null hypothesis at the 0.05 level. Here we would say that the sample indicates that the whole population of employees is actually significantly hypertensive compared to the required mean level, with a p value of 0.02. The chief medical officer will be unhappy when he gets the results of the official measurement of the whole company!
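The blood-pressure example can be checked numerically. A minimal Python sketch (an added illustration, not part of the original primer) computes the Z-score and its one-tailed p-value using the standard-normal tail formula p = ½·erfc(z/√2); note that the primer rounds SE to 5 mmHg, so the exact Z-scores come out slightly below 1 and 2:

```python
from math import erfc, sqrt

def one_tailed_p(sample_mean, null_mean, sd, n):
    """Z-score of a sample mean and its one-tailed (upper-tail) p-value."""
    se = sd / sqrt(n)                # standard error of the mean, SD/sqrt(n)
    z = (sample_mean - null_mean) / se
    p = 0.5 * erfc(z / sqrt(2))      # P(Z >= z) for a standard normal
    return z, p

# Example from the text: n = 20, SD = 22.5 mmHg, null-hypothesis mean 180 mmHg
z1, p1 = one_tailed_p(185, 180, 22.5, 20)   # z ≈ 1, p ≈ 0.16: not significant
z2, p2 = one_tailed_p(190, 180, 22.5, 20)   # z ≈ 2, p ≈ 0.02: significant at 0.05
print(round(z1, 2), round(p1, 2))  # 0.99 0.16
print(round(z2, 2), round(p2, 2))  # 1.99 0.02
```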
# How can I flush text left in the cases environment?

I have the following code:

\documentclass{article}
\usepackage{amsmath, amssymb, amsthm, graphicx, textcomp}
\begin{document}
\[
d^2(A,B)=\begin{cases}
(a_1-b_1)^2(1+m^2) &&\text{if $AB$ has slope $m$} \\
(a_2-b_2)^2 &&\text{if $AB$ is a vertical line}
\end{cases}
\]
\end{document}

but I get this output: Is there a way to get the text on the right after the formulas, so that there are only two lines? Thanks.

You are using two alignment characters on each row; only one of those can be used on each line of a cases environment; if you want to increase the separation between the math expressions and the annotations, you can use \quad, or \qquad, or \hspace with an appropriate value:

\documentclass{article}
\usepackage{amsmath, amssymb, amsthm, graphicx, textcomp}
\begin{document}
\[
d^2(A,B)=
\begin{cases}
(a_1-b_1)^2(1+m^2), &\text{if $AB$ has slope $m$} \\
(a_2-b_2)^2, &\text{if $AB$ is a vertical line}
\end{cases}
\]
\[
d^2(A,B)=
\begin{cases}
(a_1-b_1)^2(1+m^2), &\qquad\text{if $AB$ has slope $m$} \\
(a_2-b_2)^2, &\qquad\text{if $AB$ is a vertical line}
\end{cases}
\]
\end{document}
# Lens with a mirrored coating

1. Feb 8, 2015

### 1bigman

1. The problem statement, all variables and given/known data

A biconvex lens of refractive index $n$ and radius of curvature $r$ and focal length $f$ floats horizontally on liquid mercury such that its lower surface is effectively a spherical mirror. A point object on the optical axis a distance $u$ away is then found to coincide with its image. What are $r$ and $n$?

2. Relevant equations

The previous half of the question made us derive $\frac{n_2}{v} + \frac{n_1}{u} = \frac{n_2-n_1}{r}$ for a spherical interface of radius $r$ separating two different media. I don't know if this would be useful here? Also have $\frac{2}{u} = \frac{1}{f}$. I believe we can't use the lens maker's equation in this circumstance?

3. The attempt at a solution

Deriving the relations above.

2. Feb 8, 2015

### Staff: Mentor

What was the setup where you derived that? That does not take the mirror into account, so you have to check if you can apply this formula here. Why not? Did you draw a sketch?

3. Feb 8, 2015

### 1bigman

The relation was obtained by considering a spherical interface separating two different media. The image is formed at a distance $v$ in the sphere. My reasoning for not using the standard lens maker's equation $\frac{1}{f} = \left(\frac{n_1}{n_2} -1\right)\left(\frac{1}{r_1}-\frac{1}{r_2}\right)$ is that it assumes the light goes on to exit the lens at the other end, whereas here it is reflected. I did do a sketch, but it doesn't give away much...

4. Feb 8, 2015

### Staff: Mentor

That equation gives a relation between the three quantities of the lens; it is independent of the light path in this specific setup. Which light paths did you draw? Can you show the sketch here? I think you can learn a lot from the sketch.

5. Feb 8, 2015

### 1bigman

My sketch had the lens in the mercury, with rays of light being refracted as normal, then reflected and following their paths back to the source.
Could it be as simple as using the derived expression in the question and the lens maker's equation to solve for r and n?

6. Feb 8, 2015

### Staff: Mentor

The lens-maker equation won't be sufficient to find both r and n, but you'll get one condition. The information about the image gives the other one.

7. Feb 8, 2015

### 1bigman

True, but we can always give our answer in terms of $f$ as that is mentioned in the question?

8. Feb 8, 2015

### Staff: Mentor

Sorry, updated my post to better match the homework statement. See above.

9. Feb 8, 2015

### 1bigman

Ah, yes. So I'm guessing the equation calculated in the first part of the question is irrelevant in this circumstance? Otherwise I can't seem to find how the image gives any relation to $r$.

10. Feb 8, 2015

### Staff: Mentor

I still don't know what exactly it means. What is the setup where it was calculated? What are u and v? Every ray going from the point towards the lens/mirror combination gets reflected back in exactly the same way. That gives you a condition on the different angles and their relations.

11. Feb 8, 2015

### 1bigman

My apologies. The question for the first part is as follows: A spherical interface of radius R separates two media of refractive indices $n_1$ and $n_2$. Show that, in the paraxial approximation, a point object on the optical axis a distance $u$ from the interface in the first medium produces a point image on the optical axis in the second medium a distance $v$ from the interface, given by $\frac{n_2}{v} + \frac{n_1}{u} = \frac{n_2-n_1}{r}$
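As a quick numerical sanity check of the single-interface relation quoted above (an added illustration, not part of the thread, with the example values chosen arbitrarily), one can solve it for the image distance v:

```python
def image_distance(n1, n2, u, r):
    """Solve n2/v + n1/u = (n2 - n1)/r for the image distance v
    (paraxial refraction at a single spherical interface)."""
    return n2 / ((n2 - n1) / r - n1 / u)

# e.g. light passing from air (n1 = 1) into glass (n2 = 1.5),
# interface radius r = 0.1 m, object distance u = 0.5 m:
# (n2-n1)/r = 5, n1/u = 2, so n2/v = 3 and v = 1.5/3 = 0.5 m
v = image_distance(1.0, 1.5, 0.5, 0.1)
print(v)  # 0.5
```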
# Reduce an expression with Sqrt [closed]

r Cos[γ[t]] + 1/2 l Sqrt[1 - (r^2 Sin[γ[t]]^2)/l^2]

How can I ask Mathematica to write my expression without l^2 in the denominator, such as:

r Cos[γ[t]] + 1/2*Sqrt[l^2 - r^2 Sin[γ[t]]^2]

Sorry if my question is very simple, but I have been unable to work out an answer.

## closed as off-topic by m_goldberg, user9660, Bob Hanlon, Öskå, MarcoB Dec 29 '15 at 0:10

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question arises due to a simple mistake such as a trivial syntax error, incorrect capitalization, spelling mistake, or other typographical error and is unlikely to help any future visitors, or else it is easily found in the documentation." – m_goldberg, Community, Bob Hanlon, Öskå, MarcoB

If this question can be reworded to fit the rules in the help center, please edit the question.

r Cos[γ[t]] + 1/2 l Sqrt[1 - (r^2 Sin[γ[t]]^2)/l^2]
Simplify[%, Assumptions -> {l^2 > 0, l > 0}]

Or:

r Cos[γ[t]] + 1/2 l Sqrt[1 - (r^2 Sin[γ[t]]^2)/l^2]
Simplify[%, Assumptions -> {Re[l] > 0, Im[l] == 0}]

But I am sure this is discussed on the Simplify help page.

EDIT

As @BobHanlon pointed out, l > 0 implies l^2 > 0, so one should write:

Simplify[%, Assumptions -> {l > 0}]

• @Bendesarts You are welcome, but you might also accept it ;) – mattiav27 Dec 28 '15 at 13:41
## Electronic Communications in Probability ### On Recurrent and Transient Sets of Inhomogeneous Symmetric Random Walks #### Abstract We consider a continuous time random walk on the $d$-dimensional lattice $\mathbb{Z}^d$: the jump rates are time dependent, but symmetric and strongly elliptic with ellipticity constants independent of time. We investigate the implications  of heat kernel estimates on recurrence-transience  properties of the walk and we give conditions for recurrence as well as for transience: we give applications of these conditions  and discuss them in relation with the (optimal) Wiener test available in the time independent context. Our approach relies on estimates on the time spent by the walk in a set and on a 0-1 law. We show also that, still via heat kernel estimates, one can avoid using a 0-1 law, achieving this way quantitative estimates on more general hitting probabilities. #### Article information Source Electron. Commun. Probab., Volume 6 (2001), paper no. 4, 39-53. Dates Accepted: 18 January 2001 First available in Project Euclid: 19 April 2016 https://projecteuclid.org/euclid.ecp/1461097549 Digital Object Identifier doi:10.1214/ECP.v6-1033 Mathematical Reviews number (MathSciNet) MR1831800 Zentralblatt MATH identifier 0976.60073 Rights #### Citation Giacomin, Giambattista; Posta, Gustavo. On Recurrent and Transient Sets of Inhomogeneous Symmetric Random Walks. Electron. Commun. Probab. 6 (2001), paper no. 4, 39--53. doi:10.1214/ECP.v6-1033. https://projecteuclid.org/euclid.ecp/1461097549 #### References • Bass, R. F. (1998), Diffusions and elliptic operators. Probability and its Applications. Springer-Verlag. • Bucy, R. S. (1965), Recurrent sets. Ann. Math. Statist. 36, 535-545. • Carlen, E. A., Kusuoka, S. and Stroock, D. W. (1987), Upper bounds for symmetric Markov transition functions. Ann. Inst. H. Poincaré Probab. 23, no. 2, suppl., 245-287. • Deuschel, J.-D. and Giacomin, G. (2000), Entropic Repulsion for Massless Fields. Stoch. 
Proc. Appl. 89, 333-354. • Deuschel, J.-D., Giacomin, G. and Ioffe, D. (2000), Large deviations and concentration properties for $\nabla\varphi$ interface models. Probab. Theory Related Fields 117, 49-111. • Dynkin, E. B. and Yushkevich, A. A. (1969), Markov Processes: Theorems and Problems. Plenum Press. • Ethier, S. N. and Kurtz, T. G. (1986), Markov processes. Characterization and convergence. Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Inc. • Fabes, E. B. and Stroock, D. W. (1986), A new proof of Moser's parabolic Harnack inequality using the old ideas of Nash. Arch. Rational Mech. Anal. 96, 337-338. • Giacomin, G., Olla, S. and Spohn, H. (1999), Equilibrium fluctuations for $\nabla\varphi$ interface model. Preprint Université de Cergy-Pontoise, accepted for publication on Ann. Probab. • Itô, K. and McKean, H. P. (1960), Potentials and the random walk. Illinois J. Math. 4, 119-132. • Stroock, D. W. and Zheng, W. (1997), Markov chain approximations to symmetric diffusions. Ann. Inst. H. Poincaré Probab. 33, no. 5, 619-649. • Fukai, Y. and Uchiyama, K. (1996), Wiener's test for space-time random walks and its applications. Trans. Amer. Math. Soc. 348, no. 10, 4131-4152.
2019-12-14 05:34:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4445064067840576, "perplexity": 3215.12356663813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540584491.89/warc/CC-MAIN-20191214042241-20191214070241-00373.warc.gz"}
https://qanda.ai/en/solutions/mcfYSzsgrr-1-40-right)-8-1824-and-whose-product-is-as-large-as-possible-140-Find-two-positi
Problem (Calculus): … and whose product is as large as possible. 14. Find two positive numbers $x$ and $y$ such that $x+y=60$ and $xy^{3}$ is maximum. 15. Find two positive numbers $x$ and $y$ such th…
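Problem 14 can be checked symbolically. The sketch below (added for illustration; not part of the original page) substitutes x = 60 − y into the product and maximizes over y:

```python
from sympy import symbols, diff, solve

y = symbols('y', positive=True)
f = (60 - y) * y**3   # x*y**3 with the constraint x = 60 - y substituted in

# f'(y) = 180*y**2 - 4*y**3 = 4*y**2*(45 - y), so the interior critical point is y = 45
critical = [s for s in solve(diff(f, y), y) if s > 0]
y_star = critical[0]
x_star = 60 - y_star
best = f.subs(y, y_star)
print(x_star, y_star, best)  # 15 45 1366875
```

The second derivative is negative at y = 45, confirming this is the maximum rather than a minimum.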
2021-04-14 02:08:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6946877241134644, "perplexity": 177.63592107902494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076454.41/warc/CC-MAIN-20210414004149-20210414034149-00273.warc.gz"}
http://kristinbradley.com/t403x/834354-under-double-account-system-depreciation-is
Most companies will not use the double-declining balance method of depreciation on their financial statements. The components are: (1) Revenue Account (2) Net Revenue Account (3) Capital Account and (4) General Balance Sheet. As a hypothetical example, suppose a business purchased a $30,000 delivery truck, which was expected to last for 10 years. To do this, divide 100 per cent by the number of years of useful life of the asset. Journalizing transactions using the double-entry bookkeeping system will eliminate fraud. 3) Double declining method. Thus, this system of accounting is based on the Dual Aspect Concept of accounting. 0.2 * 4000 flyers = Rs. 800. Once the per unit depreciation is found out, it can be applied to future output runs. Conceptually, depreciation is the reduction in value of an asset over time, due to elements such as wear and tear.

Depreciation = 2 × SLDP × BV, where SLDP is the straight-line depreciation percent and BV is the book value at the beginning of the period.

Under the generally accepted accounting principles (GAAP) for public companies, expenses are recorded in the same period as the revenue that is earned as a result of those expenses.
Thus, when a company purchases an expensive asset that will be used for many years, it does not deduct the entire purchase price as a business expense in the year of purchase but instead deducts the price over several years. Double declining balance depreciation is one of these methods. The Modified Accelerated Cost Recovery System (MACRS) is the current tax depreciation system in the United States. Using the double declining balance method, however, it would deduct 20% of $30,000 ($6,000) in year one, 20% of $24,000 ($4,800) in year two, and so on. For instance, a widget-making machine is said to "depreciate" when it produces fewer widgets one year compared to the year before it, or a car is said to "depreciate" in value after a fender bender or the discovery of a faulty transmission. For accounting in particular, depreciation concerns allocating the cost of an asset over a period of time, usually its useful life.
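The worked numbers above ($30,000 cost, 10-year life, 20% double-declining rate, $3,000 salvage) can be reproduced with a short schedule. This sketch is an illustration of the method described, with the salvage value acting only as a floor on book value:

```python
def ddb_schedule(cost, salvage, life):
    """Double-declining-balance charges, one per year of useful life."""
    rate = 2 / life          # double the 100%/life straight-line rate
    book = cost
    charges = []
    for _ in range(life):
        # apply the constant rate to the declining book value, but never
        # depreciate the asset below its salvage value
        charge = round(min(book * rate, book - salvage), 2)
        charges.append(charge)
        book -= charge
    return charges

schedule = ddb_schedule(30000, 3000, 10)
print(schedule[:2])                  # [6000.0, 4800.0], matching the text
straight_line = (30000 - 3000) / 10  # 2700.0 per year under straight line
```

Note that a pure double-declining schedule never quite reaches the salvage value; in practice the filer switches to straight line once that yields the larger charge, which is why tax rules describe the method as "double-declining balance switching to straight line."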
After five years, the expense of … But, under Double Account System, it is a charge against appropriation of profit and, hence, debited to Net Revenue Account. The Double Account System is a method of presenting the annual final accounts/annual financial statements of public utility undertakings, like Railways, Electricity, Gas, Water Supply, Tramways etc. Disclaimer 9. Copyright 10. Under both models depreciation for the period is charged in profit or loss account. It is also known as Receipts and Expenditure on Capital Account. I see through them! Content Guidelines 2. If the effect of depreciation is recorded directly in asset account then asset’s account will be credited with equal amount. Entity may make transfers from revaluation surplus to retained earnings equal … 5,000 and provision of 5% to be made for doubtful debts. The one exception is a capital lease, where the company records it as an asset when acquired but pays for the asset over time, under the terms of the associated lease agreement. Double Entry System of Accounting means every business transaction involves at least two accounts. It was first enacted and authorized under the Internal Revenue Code in 1954, and it was a major change from existing policy. In other words, every business transaction has an equal and opposite effect in minimum two different accounts. False. What Is the Double Declining Balance (DDB) Depreciation Method? 7. IRS tables specify percentages to apply to the basis of an asset for each year in which it is in service. Double-entry bookkeeping, in accounting, is a system of book keeping where every entry to an account requires a corresponding and opposite entry to a different account. Journal entry to record it will be: Prohibited Content 3. For other property required to be depreciated using ADS, see Required use of ADS under Which Depreciation System (GDS or ADS) Applies in chapter 4. 
The applicable convention determines the portion of the tax year for which depreciation is allowable during a year property is either placed in service or disposed of. Similarly, compared to the standard declining balance method, the double declining method depreciates assets twice as quickly. Flashlight Electric Company Ltd: (i) Fixed assets: ADVERTISEMENTS: Expenditure up to 1.1.2006: (a) Land and Buildings Rs 10,00,000 ; (b) Machinery Rs 15, 00,000. MACRS Depreciation is the tax depreciation system that is currently employed in the United States. Under Debtors System, credit sales, bad debts, return inward, discounts allowed and depreciation are not taken in the Branch Account. So it is possible to keep complete account. Under the double-declining-balance depreciation method, salvage value is deducted from the asset's cost like the other depreciation methods. The assets are recorded in the right hand side (asset side) and liabilities are recorded in the left hand side (liabilities side), i.e., it is prepared in its usual form. Depreciation is part of the process for accounting for an asset during its entire life. Form of Net Revenue Account: Component # 3. However, there is one additional step that entity may take while calculating depreciation of asset with revaluation surplus. To calculate depreciation using the double-declining method, its possible to double the amount of depreciation expense under the straight-line method. Under the double-declining balance method, the book value of the trailer after three years would be$51,200 and the gain on a sale at $80,000 would be … The DDB method records larger depreciation expenses during the earlier years of an asset’s useful life, and smaller ones in later years. 800 which is accounted. Prepare a Revenue Account, Net Revenue Account and the General Balance Sheet under the Double Account System from the following Trial Balance as on 31.12.1993, of the Rural Electric Supply Co. Ltd. A Call of Re. 
The double declining balance depreciation method is an accelerated depreciation method that counts as an expense more rapidly (when compared to straight-line depreciation that uses the same amount of depreciation each year over an asset's useful life). Three columns are also used—the first column shows the receipts on each item at the end of last year, the second column shows the receipts for the current year and the third column represents the total capital receipts to date. In using the declining balance method, a company reports larger depreciation expenses during the earlier years of an asset’s useful life. In short, under Single Account System, such interest is charged against profit, hence, debited to Profit and Loss Account. True. Content Filtrations 6. Internal Revenue Service. The purpose of this account is to show the sources of total capital and the application of the same in different fixed assets. These undertakings are usually incorporated under Special Acts and, as a result, the form of accounts is prescribed by, special statute. more Modified Accelerated Cost Recovery System … This is one of the two common methods a company uses to account for the expenses of a fixed asset. There are two types of accelerated depreciation methods, and both involve a multiple of the SLD balance method. The double declining balance (DDB) method is an accelerated depreciation calculation used in business accounting. After 10 years, it would be worth$3,000, its salvage value. This includes listed property used 50% or less in a qualified business use. All items of expenditure appear on the debit side whereas all items of income appear on the credit side of Revenue Account. On the other hand, the right hand side (or credit side) reveals the receipts on capital account including amount received from public for shares and debentures including the amount of fixed loans, if any. 
The double declining balance depreciation (DDB) method, also known as the reducing balance method, is one of two common methods a business uses to account for the expense of a long-lived asset. The MACRS, which stands for Modified Accelerated Cost Recovery System, was originally known as the ACRS (Accelerated Cost Recovery System) before it was rebranded to its current form after the enactment of the Tax Reform Act in 1986. Premium received on shares and debentures or any calls paid in advance are also to be added and Calls-in-arrear is to be deducted. Depreciation charge under the double declining depreciation method is calculated by applying the higher depreciation rate to the asset book value at the start of the period. The double declining balance depreciation method is an accelerated depreciation method that multiplies an asset's value by a depreciation rate. It contains expenditure of a capital in the left hand side (or debit side) including additions to fixed assets. This is actually the second part of the Balance Sheet which contains current assets and current liabilities together with the balance or totals of each side (as the case may be) of capital account. You can learn more about the standards we follow in producing accurate, unbiased content in our. The Internal Revenue Service (IRS) publishes detailed tables of lives by classes of assets. Useful life 3. The depreciation rates in DDD balance methods could either be … Generally accepted accounting principles require accrual-basis accounting. 10%, Meters and Electrical Instruments 15%, Advertising has been prepaid by Rs. This method takes most of the depreciation charges upfront, in the early years, lowering profits on the income statement sooner rather than later. Depreciation first becomes deductible when an asset is placed in service. 
: Preliminary expenses spent on the formation of an undertaking., Premium received on issue of shares and Debentures are treated as capital expenditure and, hence, will appear in the debit side and credit side, respectively, of capital account. It corresponds with the ordinary Profit and Loss Appropriation Account of a trading concern prepared under Single Account System. Each year, depreciation expense is debited for $6,000 and the fixed asset accumulation account is credited for$6,000. 1 per share was payable on 30.6.1993 and arrears are subject to interest at 10% p.a. The balance of the book value is eventually reduced to the asset's salvage value after the last depreciation period. It is an important component in the calculation of a depreciation schedule. The balance of Receipts and Expenditure on Capital Account is carried down and shown in the respective side of the General Balance Sheet or, the total of two sides of this account are shown on both sides of the General Balance Sheet. So the total Depreciation expense is Rs. The double declining balance (DDB) method is an accelerated depreciation calculation used in business accounting. Interest on loan and Interest on debentures are shown on the debit side of the Net Revenue Account, i.e., treated as an appropriation of profit since debentures and loans (long-term) are considered as a part of capital. When the depreciation rate for the declining balance method is set as a multiple doubling the straight-line rate, the declining balance method is effectively the double declining balance method. 800 . Depreciation rates used in the declining balance method could be 150%, 200% (double), or 250% of the straight-line rate. It is not a liability, since the balances stored in the account do not represent an obligation to pay a third party. Accessed Aug. 17, 2020. Under the double-entry system, another account has to be credited with the same amount. 
Investopedia uses cookies to provide you with a great user experience. Final Accounts under the Double Account System The final accounts under the Double Account System normally consist of: [a] Revenue Account [b] Net Revenue Account [c] Capital Account (Receipts and Expenditure on Capital Account) and [d] General Balance Sheet Revenue Account The final accounts under the Double Account System normally consist of: [a] Revenue Accessed Aug. 17, 2020. The reason is that it causes the company's net income in the early years of an asset's life to be lower than it would be under the straight-line method. […] Over the depreciation process, the double depreciation rate remains constant and is applied to the reducing book value each depreciation period. Provision for Depreciation Account which is an expense account used to record depreciation each period; An Accumulated Depreciation Account used to show aggregated depreciation; The Income Statement (Profit and Loss Account) The entries are two fold: First create the Provision for Depreciation … TOS 7. Under Debtors System fixed assets is shown on the credit side only after deducting the amount of depreciation, if any. In this method the depreciation account is debited because depreciation is an expense and the Asset account is credited because the value of the assets gets reduced to the extent of depreciation. 1 Accounting for depreciation in asset account. The declining balance method is one of the two accelerated depreciation methods, and it uses a depreciation rate that is some multiple of the straight-line method rate. Following are the main advantages of double entry system: Under this method both the aspects of each and every transaction are recorded. The double declining balance method of depreciation, also known as the 200% declining balance method of depreciation, is a form of accelerated depreciation. Step 2: Total Depreciation expense = Rs. 
Depreciation is an accounting method of allocating the cost of a tangible asset over its useful life and is used to account for declines in value over time. Because the double declining balance method results in larger depreciation expenses near the beginning of an asset’s life—and smaller depreciation expenses later on—it makes sense to use this method with assets that lose value quickly. The Capitalization Limit. By using Investopedia, you accept our, Investopedia requires writers to use primary sources to support their work. We also reference original research from other reputable publishers where appropriate. The unit of production method is a way of calculating depreciation when the life of an asset is best measured by how much the asset has produced. Certain other items, e.g., any kind of reserve, say, General Reserve, Capital Reserve etc., or any kind of funds, e.g., Depreciation Fund, etc., also will appear in the Balance Sheet. Plagiarism Prevention 4. Accelerated depreciation is any depreciation method used for accounting or income tax purposes that allow for higher deductions in the earlier years. Financial Accounting Standards Board. Depreciation charge is an expense therefore Profit and loss account is debited to record the expense. The amount of depreciation is then finally transferred to the profit and loss account as an item of Expenditure. Specifically, the DDB method depreciates assets twice as fast as the traditional declining balance method. The balance in the accumulated depreciation account is the sum of the depreciation expense recorded in past periods. Before publishing your articles on this site, please read the following pages: 1. Double entry system is acknowledged as the best method of accounting in the modern world. However, the final depreciation charge may have to be limited to a lesser amount to keep the salvage value as estimated. As its name implies, the DDD balance method is one that involves a double depreciation rate. 
It corresponds with the ordinary Profit and Loss Account (prepared under Single Account System) where the expenses are grouped under appropriate heading. Then, multiply this rate by 2. If you use the Alternative Depreciation System, the ADS recovery periods will generally be longer than the regular GDS recovery periods under the MACRS depreciation system. Privacy Policy 8. Under this system, the capitalized cost (basis) of tangible property is recovered over a specified life by annual deductions for depreciation. As a result, companies opt for the DDB method for assets that are likely to lose most of their value early on, or which will become obsolete more quickly. The double declining balance method is a type of declining balance method with a double depreciation rate. As you can see in the Excel analysis, a few key assumptions have to be made, and from there, an analyst can build the full schedule.Key assumptions include: 1. "FASB Special Report: The Framework of Accounting Concepts and Standards," Page 28–29. Double Entry Accounting System is an accounting approach under which each and every accounting transaction requires a corresponding and opposite entry in the accounting records and the number of transactions entered as the debits should be equal to that of the credits. In practice it is not the asset account but a contra asset account, the accumulated depreciation account, which is credited with the annual depreciation. This means that compared to the straight-line method, the depreciation expense will be faster in the early years of … Under the straight-line depreciation method, the company would deduct $2,700 per year for 10 years – that is,$30,000 minus \$3,000, divided by 10. Image Guidelines 5. N.B. 
Three columns are generally used for the purpose—the first column shows the expenditure on each item at the end of last year, the second one shows the additions which are made for the current year, and the third column represents the total capital expenditure to date. Beginning book value 2. Depreciation to be provided for on opening balance on Buildings 2½%, Machinery 7½%, Main 5%, Transformer etc. If anything, accumulated depreciation represents the amount of economic value that has been consumed in the past. Scrap value is the worth of a physical asset's individual components when the asset is deemed no longer usable. Capital Account: Using the steps outlined above, let’s walk through an example of how to build a table that calculates the full depreciation schedule over the life of the asset.Look at the screenshot below and then read the explanation of how it works. Problem 1: The following balances are extracted from the books of M/s. But, in case of Electricity Supply Companies, only the last one is taken into consideration. Double declining depreciation method is an accelerated depreciation method where the depreciation expense decreases with the age of the asset. Report a Violation, Double Account System: Features, Advantages and Disadvantages, Preparation and Presentation of Final Accounts, Prescribed Forms of Accounts of Electricity Supply Company: 11 Schedules. "Publication 946, How to Depreciate Property," Page 3. For depreciation, started opening accounts are the weakness for most students. Double Declining Balance (DDB) Depreciation Formula, SLDP = Straight-line depreciation percent, BV = Book value at the beginning of the period, Example of Double Declining Balance (DDB) Depreciation, Understanding the Declining Balance Method, generally accepted accounting principles (GAAP), FASB Special Report: The Framework of Accounting Concepts and Standards, Publication 946, How to Depreciate Property. U.S. 
tax depreciation is computed under the double-declining balance method switching to straight line, or the straight-line method, at the option of the taxpayer.
2021-04-11 03:56:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2128877192735672, "perplexity": 3021.3830607259847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060927.2/warc/CC-MAIN-20210411030031-20210411060031-00479.warc.gz"}
https://www.jobilize.com/physics-ap/course/13-3-the-ideal-gas-law-temperature-kinetic-theory-and-the-by-openstax?qcr=www.quizover.com
# 13.3 The ideal gas law

• State the ideal gas law in terms of molecules and in terms of moles.
• Use the ideal gas law to calculate pressure change, temperature change, volume change, or the number of molecules or moles in a given volume.
• Use Avogadro’s number to convert between number of molecules and number of moles.

In this section, we continue to explore the thermal behavior of gases. In particular, we examine the characteristics of the atoms and molecules that compose gases. (Most gases, for example nitrogen, ${\text{N}}_{2}$, and oxygen, ${\text{O}}_{2}$, are composed of two or more atoms. We will primarily use the term “molecule” in discussing a gas because the term can also be applied to monatomic gases, such as helium.)

Gases are easily compressed. We can see evidence of this in [link], where you will note that gases have the largest coefficients of volume expansion. The large coefficients mean that gases expand and contract very rapidly with temperature changes. In addition, you will note that most gases expand at the same rate, or have the same $\beta$. This raises the question as to why gases should all act in nearly the same way, when liquids and solids have widely varying expansion rates. The answer lies in the large separation of atoms and molecules in gases, compared to their sizes, as illustrated in [link]. Because atoms and molecules have large separations, the forces between them can be ignored except during collisions. The motion of atoms and molecules (at temperatures well above the boiling temperature) is fast, so the gas occupies all of the accessible volume and the expansion of gases is rapid. In contrast, in liquids and solids, atoms and molecules are closer together and are quite sensitive to the forces between them.

To get some idea of how the pressure, temperature, and volume of a gas are related to one another, consider what happens when you pump air into an initially deflated tire.
The tire’s volume first increases in direct proportion to the amount of air injected, without much increase in the tire pressure. Once the tire has expanded to nearly its full size, the walls limit volume expansion. If we continue to pump air into it, the pressure increases. The pressure will further increase when the car is driven and the tires move. Most manufacturers specify optimal tire pressure for cold tires. (See [link].)

Q: How do you convert 0.0045 kg/cm³ to the SI unit?

Q: How many states of matter do we really have? Are there any newly discovered states of matter?
Thapelo: I only know 5: solids, liquids, gases, plasma, and Bose-Einstein condensate.
Falana: Alright, thank you.
James: Which one is the Bose-Einstein?
Olatunde: Can you explain plasma and the other one you mentioned?
Mohit: You can say the sun or stars are just in the plasma state.
Issa: But there are more than seven.

Q: What is the meaning of continuum?

Q: What state of matter is fire?
Xenda: Fire is not in any state of matter; fire is rather a form of energy produced from an oxidising reaction.
Walter: Isn’t fire the plasma state of matter?

Q: How can you define time?
Tanaya: Time can be defined as a continuous, dynamic, irreversible, unpredictable quantity.

Q: What is the relativity of physics?

flint: How do you convert 0.0045 kg/cm³ to the SI unit?

Q: What is the formula for motion?
flint: v = u + at and v² = u² − 2as.
flint: Also s = ut + ½at².
King: Those are the equations of linear motion.
Thapelo: s = vt.
King: v = u + at, s = ut + ½at², and v² = u² + 2as.

Q: Explain the Doppler effect.
Bob: Not yet learnt.
Bob: Explain motion with its types.

Q: Acceleration is the change in velocity over time. Given this information, is acceleration a vector or a scalar quantity? Explain.
Bob: Scalar quantity, because acceleration has only magnitude.
bhat: Acceleration is a vector quantity; it acts in a specified direction and is derived from displacement.
Paul: It’s a scalar quantity.
Josh: Velocity is speed and direction. Since velocity is part of acceleration, that makes acceleration a vector quantity. An example of this is centripetal acceleration: when you’re moving in a circular pattern at a constant speed, you are still accelerating, because your direction is constantly changing.
fitzgerald: Acceleration is a vector quantity. As explained by Josh Thompson, bodies in uniform circular motion accelerate only because the direction of their constant speed is constantly changing. Retardation and acceleration are also differentiated by their direction with respect to the prevailing force.

Manyo: What is the difference between impulse and momentum?
fitzgerald: Momentum is the product of the mass of a body and its velocity (SI unit kg·m/s); it is literally the impact of a collision from a moving body. Impulse is the change in momentum.
fitzgerald: Or I = m(v − u).

Q: How do you calculate kinetic and potential energy?
Malia: K.E. = mv², P.E. = mgh.
Josh: K.E. is actually ½mv².

John: What impulse is given to an alpha particle of mass 6.7×10^-27 kg if it is ejected from a stationary nucleus at a speed of 3.2×10^6 m/s? What average force is needed if it is ejected in approximately 10^-8 s?
Melody: Impulse = change in momentum = m × v = 6.7×10^-27 × 3.2×10^6 = 2.14×10^-20 kg·m/s; average force = impulse ÷ time = 2.14×10^-20 ÷ 10^-8 = 2.14×10^-12 N. That’s how I solved it; if wrong, please correct me.

Q: What is a sound wave?
Ogor: A sound wave is a mechanical longitudinal wave that transfers energy from one point to another.
bhat: It’s a longitudinal wave associated with compressions and rarefactions.

Q: What is power?
Kayode: It’s also a capability to do something or act in a particular way.
Mike: Newton’s laws of motion.
Slim: Power is also known as the rate of ability to do work.
bhat: Power means the capability to do work, P = W/t; its unit is the watt (J/s). It also represents how much work is done every second.

Q: What does fluorine do?
Gia: Strengthen and whiten teeth.

bhat: A simple pendulum makes 50 oscillations in 1 minute. What is its period of oscillation, and what is the length of the pendulum?

Q: What is the difference between temperature and heat transfer?
Doc: Temperature is the measurement of the hotness or coldness of a body; heat transfer is the movement of heat from one body to another.
Titilayo: You got it right.
PROMISE: Correct.
bhat: Heat is energy possessed by a substance due to the random kinetic energy of its molecules, while temperature is the driving force that gives the direction of heat flow.
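Several of these questions lead back to the ideal gas law itself, $PV = NkT$, which this section builds toward. A quick numerical sketch of the tire example follows; the tire pressure, volume, and temperature used here are illustrative assumptions, not values from the text:

```python
# Sketch: the ideal gas law PV = N k T applied to the tire example.
# The tire pressure, volume, and temperatures are illustrative assumptions.

k_B = 1.38e-23  # Boltzmann constant, J/K

def molecules(pressure_pa, volume_m3, temperature_k):
    """Number of gas molecules from PV = NkT, i.e. N = PV / (kT)."""
    return pressure_pa * volume_m3 / (k_B * temperature_k)

def pressure_after_heating(p0, t0_k, t1_k):
    """At fixed volume and fixed N, P/T is constant (driving warms the tire)."""
    return p0 * t1_k / t0_k

# A tire at 2.2e5 Pa absolute pressure, 10 L (0.010 m^3), 290 K:
N = molecules(2.2e5, 0.010, 290.0)
print(f"{N:.3e} molecules")  # on the order of 5e23

# Warm the tire from 290 K to 320 K at constant volume:
print(f"{pressure_after_heating(2.2e5, 290.0, 320.0):.3e} Pa")
```

This mirrors the tire discussion above: once the walls stop the volume from growing, adding molecules or raising the temperature can only raise the pressure.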
https://brilliant.org/problems/sizing-up-factors/
# Sizing up factors

Is it possible to write $3^{2016}+4^{2017}$ as the product of two integers, both of which are over $2018^{183}$?
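One standard approach, sketched here as a hint rather than an official solution, uses the Sophie Germain identity $a^{4}+4b^{4}=(a^{2}+2b^{2}-2ab)(a^{2}+2b^{2}+2ab)$:

```latex
% Write the sum in Sophie Germain form with a = 3^{504}, b = 4^{504}:
3^{2016} + 4^{2017} = (3^{504})^{4} + 4\,(4^{504})^{4}
                    = \bigl(a^{2}+2b^{2}-2ab\bigr)\bigl(a^{2}+2b^{2}+2ab\bigr).
% The smaller factor is still huge:
a^{2}+2b^{2}-2ab = (a-b)^{2} + b^{2} \ \ge\ b^{2} = 4^{1008} = 2^{2016},
% while the bound to beat satisfies
2018^{183} < 2048^{183} = 2^{11\cdot 183} = 2^{2013} < 2^{2016}.
```

So both factors exceed $2018^{183}$, and the answer is yes.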
https://justbeing.life/blog/why-emotionally-intelligent-people-are-more-successful/
# Why emotionally intelligent people are more successful

Research shows that people with strong emotional intelligence are more successful than those with high IQs or relevant experience. Plus, EQ is trainable. 7 minute read

Thought I should share this post I found at Fast Company, written by Harvey Deutschendorf, that shows why emotionally intelligent people are more successful. Research shows that people with strong emotional intelligence are more likely to succeed than those with high IQs or relevant experience. We’ve learned that emotional intelligence (EQ) is a crucial skill for both leaders and employees. But several studies point to just how important EQ can be to success, even trumping IQ and experience.

Research by the respected Center for Creative Leadership (CCL) in the U.S. found that the primary causes of executive derailment involve deficiencies in emotional competence. Each year, CCL serves more than 20,000 individuals and 2,000 organizations, including more than 80 of the Fortune 100 companies. It says the three main reasons for failure are difficulty in handling change, inability to work well in a team, and poor interpersonal relations. International search firm Egon Zehnder International analyzed 515 senior executives and discovered that those who were strongest in emotional intelligence were more likely to succeed than those strongest in either IQ or relevant previous experience. Research on the relationship between emotional intelligence (EQ) and IQ has shown only a weak correlation between the two. The Carnegie Institute of Technology carried out research showing that 85% of our financial success was due to skills in “human engineering”: personality and the ability to communicate, negotiate, and lead. Only 15% was due to technical ability. In other words, people skills, which are highly related to emotional intelligence, were the crucial skills.
Nobel Prize-winning Israeli-American psychologist Daniel Kahneman found that people would rather do business with a person they like and trust than with someone they don’t, even if that person is offering a better product at a lower price. To test his findings, think of the last time you purchased a major item, a home, automobile, or large appliance, where you had dealings with a salesperson. Was the person someone you liked and trusted? In my talks, I have found that whenever I ask that question, inevitably the entire audience answers that, yes, the person they bought a large item from was someone they liked and trusted. This theory about why salespeople with the right people skills do better than those who lack them is borne out by a study carried out by the Hay/McBer Research and Innovation Group in a large national insurance company in 1997. They found that sales agents weak in emotional areas such as self-confidence, initiative, and empathy sold policies with an average premium of $54,000, while those strong in 5 of 8 emotional competencies sold policies worth $114,000 on average. Much of the research that has been done on emotional intelligence has been at the executive leadership level. The higher up the organization, the more crucial emotional intelligence abilities are, as the impacts are greater and felt throughout the entire organization. There have been some studies, however, that show impacts at all levels. For example, a study by McClelland in 1999 showed that after supervisors in a manufacturing plant received training in emotional competencies such as how to listen better, lost-time accidents decreased by 50% and grievances went down from 15 per year to three. The plant itself exceeded productivity goals by $250,000. The same principles apply in all areas of life, whether at work or in relationships.
Everyone wants to work with people who are easy to get along with, supportive, likeable, and can be trusted. We want to be beside people that do not get upset easily and can keep their composure when things do not work out according to plan. ## How Do You Hire Emotionally Intelligent People? Self-awareness. The first thing that is essential for any degree of emotional intelligence is self-awareness. People with a high degree of self-awareness have a solid understanding of their own emotions, their strengths, weaknesses, and what drives them. Neither overly critical nor unrealistically hopeful, these people are honest with themselves and others. These people recognize how their feelings impact them, other people around them, and their performance at work. They have a good understanding of their values and goals and where they are going in life. They are confident as well as aware of their limitations and less likely to set themselves up for failure. We can recognize self-aware people by their willingness to talk about themselves in a frank, non-defensive manner. A good interview question is to ask about a time that the interviewee got carried away by their emotions and did something they later regretted. The self-aware person will be open and frank with their answers. Self-deprecating humor is a good indicator of someone who has good self-awareness. Red flags are people who stall or try to avoid the question, seem irritated, or frustrated by the question. Ability To Self-Regulate Emotions. We all have emotions which drive us and there is nothing we can do to avoid them. People who are good at self-regulation, however, are able to manage their emotions so that they do not control their words and actions. While they feel bad moods and impulses as much as anyone else, they do not act upon them. People who act upon their negative feelings create havoc, disruptions, and lasting bad feelings all around them. 
We feel before we think, and people who constantly react from an emotional state never wait long enough to allow their thoughts to override their emotions. People who self-regulate have the ability to wait until their emotions pass, allowing them to respond from a place of reason rather than simply reacting to feelings. The signs of someone who is good at self-regulation are reflection, thoughtfulness, and comfort with ambiguity, change, and not having all the answers. In an interview, look for people who take a little time to reflect and think before they answer. Empathy. Empathy is another important quality to look for when hiring. Someone who has empathy will have an awareness of the feelings of others and consider those feelings in their words and actions. This does not mean that they will tiptoe around or be unwilling to make tough decisions for fear of hurting someone’s feelings. It simply means that they are aware of, and take into consideration, the impact on others. They are willing to share their own worries and concerns and openly acknowledge others’ emotions. A good way to look for empathy in an interview is to ask a candidate about a situation where a co-worker was angry with them and how they dealt with it. Look for a willingness to understand the source of the co-worker’s anger, even though they may not agree with the reasons for it. Social skills. Social skill is another area of emotional intelligence that is highly important in the workplace. Good social skills require a high level of the aforementioned skills as well as the ability to relate and find common ground with a wide range of people. It goes beyond mere friendliness and the ability to get along with others. People with social skills are excellent team players: they can move an agenda along and keep focus while remaining aware of the emotional climate of the group and responding to it.
These people are excellent at making connections, networking, and bringing people together to work on projects. They are able to bring their emotional intelligence skills into play in a larger arena. To look for social skills in an interview, ask questions related to projects and difficulties encountered around varying agendas, temperaments, and getting people to buy in. I was fortunate enough to have started Tai Chi, a moving meditation, at a very early age. Practising Tai Chi for over 25 years has allowed me to build a solid foundation to support the most important aspect of EQ development, which is attention training. If you are interested in supporting yourself or helping the teams you manage, the links below can help you learn more about EQ training.
https://www.physicsoverflow.org/6902/under-conditions-renormalization-group-equations-reversible
# Under what conditions are the renormalization group equations "reversible"?

As I understand it, the renormalization group is only a semi-group, because the coarse-graining part of a renormalization step, consisting of

1. summing/integrating over the small scales (coarse graining),
2. calculating the new effective Hamiltonian or Lagrangian, and
3. rescaling of coupling constants, fields, etc.,

is generally irreversible. So when doing a renormalization flow analysis one usually starts from an initial action valid at an initial renormalization time $t_0$ (or scale $l_0$),

$$t = \ln\left(\frac{l}{l_0}\right) = -\ln\left(\frac{\Lambda}{\Lambda_0}\right),$$

and integrates the renormalization group equations

$$\dot{S} = -\Lambda\frac{\partial S}{\partial \Lambda} \doteq \frac{\partial S}{\partial t}$$

forward in renormalization time towards the IR regime. Under what conditions (if any) are the renormalization group transformations invertible, such that the renormalization group equations are reversible in renormalization time and can be integrated "backwards", towards negative renormalization times and smaller scales (the UV regime)? As an example where it obviously can be done, the calculation of coupling constant unification comes to mind.

Answer:

Running the RGEs in reverse should be valid so long as you don't integrate over a scale where degrees of freedom enter/leave the theory. If you integrated out the electrons in QED, you'd have irrevocably lost that information in your low-energy description of interacting photons. You'd see some non-renormalizable theory with interacting corrections to pure EM, but RG-evolving to the UV wouldn't tell you what that would be, just like RG-evolving QED to the UV keeps you unaware of the strong or the weak sector physics.
On the other hand, so long as you've not crossed any characteristic scale in your theory, the theory at the scales you've integrated out should be the same as the theory at the scale you're currently at, so you should be able to go back to where you came from. To summarize, as long as you don't integrate out some characteristic scale, you can keep going back and forth.

This post imported from StackExchange Physics at 2014-03-09 16:20 (UCT), posted by SE-user Siva; answered May 9, 2013.

Comment: Hm, I am not sure I understand that completely. Does this mean that, in order to be able to go backward and forward, the number of couplings or relevant operators in the theory should not change?

This post imported from StackExchange Physics at 2014-03-09 16:20 (UCT), posted by SE-user Dilaton
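Away from thresholds, the running of a coupling is an ordinary differential equation, and ODEs can be integrated in either direction. The sketch below uses an illustrative one-loop beta function $\beta(g) = b\,g^{3}$, not tied to any particular theory, to integrate a coupling towards the IR and then back, recovering the starting value to numerical precision:

```python
# Sketch: one-loop running of a single coupling g(t) with beta(g) = b * g**3.
# As long as no degrees of freedom enter or leave the theory (no thresholds
# crossed), the RG flow is an ordinary ODE, integrable in either direction.

def beta(g, b=0.1):
    return b * g**3  # illustrative one-loop beta function

def rk4_step(g, dt, b=0.1):
    """One classical Runge-Kutta step; dt may be negative for backward flow."""
    k1 = beta(g, b)
    k2 = beta(g + 0.5 * dt * k1, b)
    k3 = beta(g + 0.5 * dt * k2, b)
    k4 = beta(g + dt * k3, b)
    return g + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def run(g0, t_span, steps=1000, b=0.1):
    dt = t_span / steps
    g = g0
    for _ in range(steps):
        g = rk4_step(g, dt, b)
    return g

g0 = 0.5
g_ir = run(g0, t_span=5.0)       # integrate towards the IR
g_back = run(g_ir, t_span=-5.0)  # integrate back towards the UV
print(g_ir, g_back)              # g_back matches g0 to high accuracy
```

If a threshold were crossed, the beta function itself (and the field content behind it) would change mid-flow, and naively reversing the integration would no longer reproduce the UV theory, exactly as described in the answer above.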